I have a slide that I use in my talks that sums up one particular problem – that the impact factor (IF) of any given journal tells you absolutely nothing about any given article in that journal. For example, the current IF of Organometallics is just over 4, whereas Nature’s is more than 10 times that at just over 41. But does that mean that every Nature paper is 10 times ‘better’ than every Organometallics paper? (Answer: of course not! – and how on Earth would you measure ‘better’ anyway?) It also doesn’t guarantee that a particular Nature paper will have received more citations than any given Organometallics paper (after all, a wide distribution of citations makes up an IF). Considering the perverse incentives in science, however, I wonder how many people would rather have on their CV an Organometallics paper that has received 50 citations in a year than a Nature paper that has garnered only 10 in the same period?
Anyway, I digress. The slide I have looks at things from a different point of view. Wouldn’t it be interesting if you could take exactly the same paper and publish it at roughly the same time in a bunch of different journals? Take your fancy-metal-catalyzed-cross-coupling-based synthesis of tenurepleaseamycin and submit it to (and have it published in) Angewandte, JACS, Nature Chem, Science, JOC, Tet Lett and Doklady Chemistry and then sit back and see how the citations roll in. Of course, it’s the same paper – it’s not a better paper in one journal than another, so it will get cited roughly equally in all journals, right? Well, all you can really do is speculate, because if you did try to do exactly that you’d end up really annoying some chemistry-journal editors and you might not get the paper published anywhere (well, I can think of a few places that would probably still take it, but discretion is the better part of valour and all that).
Well, never fear! The experiment has been done – although it wasn’t really an experiment, it wasn’t done for the purpose of comparing citations across journals, and it has happened more than once. It turns out that in medical publishing, editorials/white papers occasionally get published in more than one journal. So, say hello to ‘Clinical Trial Registration — Looking Back and Moving Ahead’. A few years back, I looked at the citations this paper had received in each of the different journals in which it appeared, along with the IFs of those journals – the slide from my talk with all of the data on it is shown below.
There’s a pretty good correlation between the number of citations that this identical paper received in each journal and the IFs of those journals. Of course, perhaps more people read the New England Journal of Medicine than the Medical Journal of Australia, and a wider audience likely means a wider potential-citation pool. Whatever the reasons (and it’s not all that difficult to come up with others), the slide shows how silly it is to assume that the IF of a journal has any bearing on how good any particular paper in that journal is. As I have said before, the only way to figure out whether a paper is any good is to actually read the damn thing – the name (or IF) of the journal in which a paper is published should never act as a proxy for how awesome (or not) a paper is.
So, as well as pointing out one specific flaw in the IF, this slide also lets me make a joke about how the correlation would be even better if it weren’t for some (imaginary, I hasten to add) Croatian citation ring… I apologize to any Croatian doctors who happen to be reading this… but the joke usually gets a laugh.