by PhilipJ on 14 June 2006
PLoS Medicine has an editorial (freely accessible, as always) on the impact factor, and how it is a poor judge of a journal’s impact. For anyone unfamiliar with the impact factor, Thomson Scientific calculates it as:
Journal X’s 2005 impact factor =
Citations in 2005 (in journals indexed by Thomson Scientific [formerly known as Thomson ISI]) to all articles published by Journal X in 2003–2004
divided by
Number of articles deemed to be “citable” by Thomson Scientific that were published in Journal X in 2003–2004
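As a rough sketch of the arithmetic (the counts below are invented for illustration, not PLoS Medicine’s actual numbers), the calculation is just a ratio:

```python
def impact_factor(citations_to_prior_two_years, citable_articles_prior_two_years):
    """Two-year impact factor: citations this year to the previous two
    years' articles, divided by the number of 'citable' articles
    published in those two years."""
    return citations_to_prior_two_years / citable_articles_prior_two_years

# Hypothetical counts for a journal's 2005 impact factor:
print(impact_factor(2200, 200))  # 11.0
```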
As you can imagine, there are some issues, not the least of which is: what exactly does Thomson deem a citable article? From the editorial,
During discussions with Thomson Scientific over which article types in PLoS Medicine the company deems as “citable,” it became clear that the process of determining a journal’s impact factor is unscientific and arbitrary. After one in-person meeting, a telephone conversation, and a flurry of e-mail exchanges, we came to realize that Thomson Scientific has no explicit process for deciding which articles other than original research articles it deems as citable. We conclude that science is currently rated by a process that is itself unscientific, subjective, and secretive.
When Thomson counted only original research articles as citable, the impact factor for PLoS Medicine weighed in at 11; if all articles in the journal were deemed citable, it dropped as low as three. An impact factor of 11 puts it among the more highly cited journals around, while an impact factor of three, though still respectable, is considerably more pedestrian.
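The swing from 11 to 3 is entirely a denominator effect. With invented numbers (the editorial doesn’t publish the raw counts), the same citation total divided by two different notions of “citable” gives very different results:

```python
# Invented counts, purely for illustration of the denominator effect:
citations = 2200
research_articles_only = 200   # narrow denominator: original research only
all_articles = 730             # broad denominator: editorials, news, etc. included

print(citations / research_articles_only)   # 11.0
print(round(citations / all_articles, 2))   # 3.01
```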
Some of us may be wondering, “who cares?” The editorial is quick to point out, however, that a journal’s impact factor comes into play all the time, in everything from determining who gets tenure to deciding where scientists should attempt to publish their own work. Everyone would like their articles to be read (and ultimately cited), and the impact factor is one way to try to find the right journal.
Or is it? Over at the new PLoS blog, Chris Surridge has his own take on the impact factor game, and mentions that the impact factor of a journal is really not a good judge of the potential impact a paper published in the journal is going to have. He says,
Problem is citations aren’t normally distributed across those papers, making the power of that average to predict the likely citations of an individual paper very low. As a rule of thumb, 80% of a journal’s impact factor is determined by 20% of the papers published.
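The point about skew is easy to see with made-up data: when a few papers collect most of the citations, the mean (which is what the impact factor reports) sits far above what a typical paper in the journal receives. The citation counts below are invented, but the shape is characteristic of citation data:

```python
import statistics

# Hypothetical citation counts for 20 papers in one journal: a few
# highly cited papers dominate, many papers are never cited at all.
citations = [120, 85, 60, 15, 10, 8, 5, 4, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0]

mean = statistics.mean(citations)      # what the impact factor reflects
median = statistics.median(citations)  # what a typical paper receives

# Share of all citations captured by the top 20% (4 of 20) of papers:
top_share = sum(sorted(citations, reverse=True)[:4]) / sum(citations)

print(mean)                 # 15.85
print(median)               # 2
print(round(top_share, 2))  # 0.88
```

Here the top fifth of papers accounts for nearly 90% of the citations, and the mean is roughly eight times the median, which is exactly why the journal-level average says so little about any individual paper.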
So, what is the solution? So far there doesn’t seem to be any obvious alternative (though they do mention the Y factor, which combines the impact factor with Google’s PageRank algorithm), and I suspect inertia is going to keep impact factors in the spotlight for a while yet. As a perfect example, even when a publishing house such as PLoS recognises how little quantitative value the impact factor carries, it is still quoted as though it were meaningful on the PLoS Biology page.