Oh, woe is me. I was having a perfectly good day, doing perfectly good things, when I was asked a perfectly simple question, and I found out that I'm clueless. Because it turns out that I don't know my impact factor or my H factor or my scientific factoid quotient (okay, I made up that one), and I don't know how to find them. Somehow, while I wasn't looking, my life's work, studying biomedical phenomena with my wonderful colleagues and reporting on our progress in the professional literature, ceased to be. At least from the perspective of those who want to evaluate said life's work. I need to reduce my contributions to a number, and I don't know how. I feel like the hapless fly at the end of The Fly - not the big one but the little one lost in the spider's web. Actually, what was his impact factor?
Some of my colleagues have kindly offered to help with calculating my HIP factor (I think that's what it was). List my best publications and use an online (and apparently costly, but I don't know) service to tell me how often each one was cited. Count the maximum number of publications with the maximum number of citations for that maximum (or perhaps mode), and generate a curve relating these to the average citations per publication for the three months around the time that each publication appeared. Somehow shoe size (European scale) and average calorie intake figured in. Apparently, I'm very HIP - or maybe not.
Once, many years ago, I was involved in a site visit. This is a scientific outing of a rather unusual type: a number of scientists from different institutions descended on a small group of individuals at another institution, who had had the audacity to ask for money to support their work, and it was our job as the site visitors to make a recommendation one way or another. As I recall, this particular site visit was in a cold place at a cold time of year, and the snow was piled well above head height. But the chilliness in the room was even worse, as we struggled to find things we actually liked in the proposed program of research. What I remember with the clarity of a bright winter day, though, was one of our fellow reviewers distributing his calculations of the reviewees' 'numerical values', based on a simple summation of the numbers of times their publications had been cited. To our credit, I think, we not only tore up these calculations but also threatened to send our colleague out into the blizzard if he brought up such dehumanizing algebra again.
But the last laugh was on us, I fear. Now, in our more enlightened times, we routinely employ algorithms to convert a body of scientific achievement to a number, and then compare that number against other numbers to reach conclusions that affect others' lives and happiness. How did we get here? How did we allow a set of calculations based on mechanized data accrual to supersede (or at least hold place with) personal assessment of a colleague's contributions to our joint endeavour? When did we start valuing numbers of citations more than the content of the work?
The rise of the impact factor, as it applies to people (as in, “See that guy? Bad haircut, huge impact factor”), came about as part of a needed transition. There was a time in the distant past when all faculty appointments and promotions were done at a personal level, relying entirely on recommendations from colleagues near and far (and this practice continues), but, ultimately, there was no way to gauge the candidate beyond these opinions - except to read their work. We could look at the publications, read them, and evaluate whether this was someone we wished to hire or promote - or ultimately tenure. This reading was a lot of work, and as the numbers of papers being produced increased, so did the work of reading them. And if we were going to put them into a useful context, we had to read other papers as well, while of course continuing to do our own research and write our own papers. And we did it, as best we could, or faked it if we couldn't, because this was about hiring and promotion, and tenure. And therein lies the problem.
Oh, tenure. The word itself has a texture, an almost chocolate quality (or substitute anything you personally find delicious). My friends in the world of business find the entire concept of tenure utterly alien. “Why”, they say, “do you think you deserve a job for life? And why would you want to work at a place that gives out these sorts of jobs?” And they are right, you know. Because tenure doesn't only apply to you (oh, rapture), but also to others (oh, ick). You may earn and deserve (so to speak) your tenure, but you're going to have to put up with the invertebrate in the next office for all of eternity - or at least until retirement. A special hell for academic scientists, just one floor up from the one reserved for journal editors.
Tenure once had a real purpose, and probably still does, in some fields. The basic idea was that, as an intellectual, you are constrained by your academic community, prevented from doing things that are, well, out there. But once you've established that you are a solid, level-headed, serious academic, then maybe it's a good thing to take chances and do things that are out there.
Which is all fine, if you don't have to get grants. Once we bring peer-reviewed funding into the picture, tenure is a bit outmoded, at least for biomedical research. It's been a long time since a renegade scientist, working without peer support (or outside funding), generated breakthroughs that justified their tenure. I can think of a few, but that was long ago, and Dr Frankenstein doesn't count. Nevertheless, when we're young and hungry we strive for tenure, we yearn for it, and we'd do, well, not anything, but a lot to get it.
And as I say, therein lies the problem. When things like promotion and tenure are strictly subjective, this invites potential abuse. Junior faculty are forced into positions where they work mostly to the credit of their seniors, who then reap the (sometimes substantial) rewards, in return for which the juniors are tantalized with the promise of lifelong employment. In some organizations, including the academic systems of some nations, this promise starts remarkably early, and the paying of the rent continues until attrition of the seniors sets in. Without any way to 'score' contributions objectively, the seniors will take from the juniors, and the great wheel will turn.
Enter citations, impact factors and the calculus of achievement. Consider two candidates for the same tenure position (oh yes, we expect you to compete for them): one who has not published particularly well but has paid adoring allegiance to the chairman of the department, and the other who has made real contributions, publications that have had impact, despite paying no dues to the higher-ups. When objective criteria are applied, the latter candidate might actually have a chance and, in applying such criteria (to a point), the entire system might move up a notch in terms of productivity and accountability. And this goes deeper. My dear friend James (not his real name, but it could be) was fortunate to receive a rather large grant when he was a rather new faculty member at a rather big university, whereupon a rather slimy senior faculty member (sorry, that isn't an adequate description: he was extremely slimy) convinced the administration that such a rather large grant for a rather small fry should be supervised by said senior slime-lord. And so it would have been, but for objective scoring: James pointed out to the administration that his total citations (despite being junior) were more than tenfold those of the senior icky-person, and the administration relented. Well done James, and well done objective scoring.
But it just feels wrong, this tallying up of factors and citations. Where is the science? When does the actual work come to the fore? Worse, this system has driven many of our colleagues to churn out papers without regard to actual content, with the single-minded goal of generating a score rather than a body of significant work. We know these colleagues, these gunslingers, who put notches on their gun barrels for each publication and can't remember what they had been trying to accomplish, if anything, beyond the whole set of notches. I didn't get into this business to do that, and I hope you didn't either, hombre.
What's the alternative? How can we have the best of both worlds: the impartial evaluation of our work together with the personal assessment of what we've actually accomplished? Because, let's face it, we want to be objectively, rigorously evaluated for what we've accomplished, and we want our colleagues to be too.
Those who make the decisions can't read every paper we've each written, and we can't ask others to do that either - it's just too much work in too many disparate areas (the areas that collectively make up a vibrant department). And we might miss something important or fall prey to the same sort of number crunching that we're trying to avoid. How do we level the playing field so that someone who has published a few great papers is evaluated as having made a contribution that equals or outshines someone who has produced a plethora of fodder? And how can we do this without relying on a computer to tell us?
Hey, this is Mole you're talking to. Of course I have a suggestion, and it's very simple. Put out your Greatest Hits compilation, and be judged on this. Pop musicians do it all the time. Depending on your experience and the sort of thing you're being evaluated for - be it job, grant or award - you should be asked to compile some number (three, five, not more than ten) of your most notable works, with an emphasis on the most recent. Everyone applying for the same thing will be asked to do this as well. You can briefly annotate them, explain why these are your best contributions. And you can say how you think they've impacted on the field, not in numbers of citations (which, as we know, can be deceptive) but in terms of how others have followed up on the work. Most of all, we can read these few papers and get a feeling for what you've done, what contributions you have really made, what sort of scientist you are.
Evaluate me based on my best few publications in the last few years, or throughout my career, and let me pick which ones I'm most proud of. Let me say why they matter. Don't reduce me to a number. Because I'm not. Even if it's a really huge number. And let's face it, we all want people to be playing our greatest hits for years to come. It's what I want to be remembered for.
- © The Company of Biologists Limited 2006