how many Cortex publications in the hand is a Nature publication in the bush worth?

A provocative and very short Opinion piece by Julien Mayor (Are scientists nearsighted gamblers? The misleading nature of impact factors) was recently posted on the Frontiers in Psychology website (open access! yay!). Mayor’s argument is summed up nicely in this figure:

The left panel plots the mean versus median number of citations per article in a given year (each year is a separate point) for three journals: Nature (solid circles), Psych Review (squares), and Psych Science (triangles). The right panel plots the number of citations each paper receives in each of the first 15 years following its publication. What you can clearly see is that (a) the mean and median are very strongly related for the psychology journals, but completely unrelated for Nature, implying that a very small number of articles account for the vast majority of Nature citations (Mayor cites data indicating that up to 40% of Nature papers are never cited); and (b) Nature papers tend to get cited heavily for a year or two, and then disappear, whereas Psych Science papers, and particularly Psych Review papers, tend to have much longer shelf lives. Based on these trends, Mayor concludes that:

From this perspective, the IF, commonly accepted as golden standard for performance metrics seems to reward high-risk strategies (after all your Nature article has only slightly over 50% chance of being ever cited!), and short-lived outbursts. Are scientists then nearsighted gamblers?
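
Before getting to my reservations, it’s worth spelling out what a mean/median gap like the one in the left panel implies: when the mean sits far above the median, a small number of extreme values are doing most of the work. Here’s a minimal simulated sketch of that pattern (the numbers below are made up for illustration, not taken from Mayor’s data):

```python
# Toy simulation (invented numbers, not Mayor's data): compare a heavy-tailed
# "winner-take-most" citation distribution with a more symmetric one.
import random
import statistics

random.seed(1)

# Journal A: most papers get few or no citations, a handful get hundreds
# (a heavy-tailed, Pareto-like distribution).
journal_a = [int(random.paretovariate(1.2)) - 1 for _ in range(1000)]

# Journal B: citation counts clustered around a moderate value.
journal_b = [max(0, int(random.gauss(30, 10))) for _ in range(1000)]

for name, cites in (("heavy-tailed journal", journal_a),
                    ("symmetric journal", journal_b)):
    print(f"{name}: mean = {statistics.mean(cites):.1f}, "
          f"median = {statistics.median(cites)}, "
          f"never cited = {sum(c == 0 for c in cites)} of {len(cites)}")
```

In the heavy-tailed case the mean lands well above the median even though most papers are barely cited at all; in the symmetric case the two track each other closely. That, roughly, is the contrast Mayor’s left panel shows between Nature and the two psychology journals.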

I’d very much like to believe Mayor’s conclusion, in that I think the massive emphasis scientists collectively place on publishing work in broad-interest, short-format journals like Nature and Science is often quite detrimental to the scientific enterprise as a whole. But I don’t actually believe it, because I think that, for any individual paper, researchers generally do have good incentives to try to publish in the glamor mags rather than in more specialized journals. Mayor’s figure, while informative, doesn’t take a number of factors into account:

  • The types of papers that get published in Psych Review and Nature are very different. Review papers, in general, tend to get cited more often, and for a longer time. A better comparison would be between Psych Review papers and only the review papers in Nature (there aren’t many of them, unfortunately). My guess is that this difference alone probably explains much of the gap in citation rates later in an article’s life. It would also explain why the temporal profile of Psych Science articles (which are also overwhelmingly short empirical reports) is similar to that of Nature. Major theoretical syntheses stay relevant for decades; individual empirical papers, no matter how exciting, tend to stop being cited as frequently once (a) the finding fails to replicate, or (b) a literature builds up around the original report, and researchers stop citing individual studies and start citing review articles (e.g., in Psych Review).
  • Scientists don’t just care about citation counts; they also care about reputation. The reality is that much of the appeal of having a Nature or Science publication isn’t necessarily that you expect the work to be cited much more heavily, but that you get to tell everyone else how great you must be because you have a publication in Nature. Now, on some level, we know that it’s silly to hold glamor mags in such high esteem, and Mayor’s data are consistent with that idea. In an ideal world, we’d read all papers ultra-carefully before making judgments about their quality, rather than using simple but flawed heuristics like what journal those papers happen to be published in. But this isn’t an ideal world, and the reality is that people do use such heuristics. So it’s to each scientist’s individual benefit (but to the field’s detriment) to exploit that fact.
  • Different fields have very different citation rates, and articles in different fields have very different shelf lives. For instance, I’ve heard that in many areas of physics, the field moves so fast that articles are basically out of date within a year or two (I have no way to verify whether this is true). That’s certainly not true of most areas of psychology. In cognitive neuroscience, for instance, the current state of the field in many areas is still reasonably well captured by highly cited publications that are 5–10 years old. Most behavioral areas of psychology seem to advance even more slowly. So one might well expect articles in psychology journals to peak later than the average Nature article, simply because Nature contains a high proportion of articles from the fast-moving natural sciences.
  • Articles are probably selected for publication in Nature, Psych Science, and Psych Review for different reasons. In particular, there’s no denying that Nature selects articles in large part based on the perceived novelty and unexpectedness of the result. That’s not to say methodological rigor doesn’t play a role; it’s just that, other things being equal, unexpected findings are less likely to replicate. Since Nature and Science overwhelmingly publish articles with new and surprising findings, it shouldn’t be surprising if the articles in these journals have a lower replication rate several years on (and hence stop being cited). That’s presumably going to be less true of articles in specialist journals, where novelty and appeal to a broad audience are usually less important criteria.

Addressing these points would probably go a long way towards closing, and perhaps even reversing, the gap implied by Mayor’s figure. I suspect that if you could do a controlled experiment and publish the exact same article in Nature and Psych Science, it would tend to get cited more heavily in Nature over the long run. So in that sense, if citations were all anyone cared about, I think it would be perfectly reasonable for scientists to try to publish in the most prestigious journals–even though, again, I think the pressure to publish in such journals actually hurts the field as a whole.

Of course, in reality, we don’t just care about citation counts anyway; lots of other things matter. For one thing, we also need to factor in the opportunity cost associated with writing a paper up in a very specific format for submission to Nature or Science, knowing that we’ll probably have to rewrite much or all of it before it gets published. All that effort could probably have been spent on other projects, so one way to put the question is: how many lower-tier publications in the hand is a top-tier publication in the bush worth?
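
To make that question a bit more concrete, here’s the sort of back-of-the-envelope expected-value comparison I have in mind. Every parameter in it (the acceptance probability, how many specialist papers a top-tier paper is “worth”, the time costs) is invented purely for illustration:

```python
# Hypothetical back-of-the-envelope comparison (every number is invented):
# expected payoff of gambling on a top-tier submission, measured in
# "specialist-paper equivalents", versus just writing the specialist paper.

def expected_value_of_gamble(p_accept, value_top, wasted_months, months_per_paper):
    """Expected payoff of submitting to a top-tier journal first.

    If accepted, you get a paper worth `value_top` specialist papers.
    If rejected, you still publish the work in a specialist journal (worth 1),
    minus the specialist papers you could have written during the months
    spent reformatting, waiting, and rewriting.
    """
    opportunity_cost = wasted_months / months_per_paper
    return p_accept * value_top + (1 - p_accept) * (1 - opportunity_cost)

baseline = 1.0  # skip the gamble and write the specialist paper directly

# Two scenarios that differ only in how much a top-tier paper is "worth":
for value_top in (5.0, 20.0):
    ev = expected_value_of_gamble(p_accept=0.08, value_top=value_top,
                                  wasted_months=6, months_per_paper=4)
    print(f"top-tier paper worth {value_top:.0f} specialist papers: "
          f"gamble EV = {ev:.2f} vs. straight to specialist = {baseline:.2f}")
```

With one set of made-up numbers the gamble comes out behind the safe route; make a top-tier paper “worth” enough and it flips. That’s really the point: the answer hinges entirely on assumptions we rarely bother to make explicit.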

Ultimately, it’s an empirical matter; I imagine that if you were willing to make some strong assumptions, and collect the right kind of data, you could come up with a meaningful estimate of the actual value of a Nature publication, as a function of important variables like the number of other publications the authors have, the amount of work invested in rewriting the paper after rejection, the authors’ career stage, etc. But I don’t know of any published work to that effect; it seems like it would probably be more trouble than it’s worth (or, to get meta: how many Nature manuscripts can you write in the time it takes you to write a manuscript about how many Nature manuscripts you should write?). And, to be honest, I suspect that any estimate you obtained that way would have little or no impact on the actual decisions scientists make about where to submit their manuscripts anyway, because, in practice, such decisions are driven as much by guesswork and wishful thinking as by any well-reasoned analysis. And on that last point, I speak from extensive personal experience…