how many Cortex publications in the hand is a Nature publication in the bush worth?

A provocative and very short Opinion piece by Julien Mayor (Are scientists nearsighted gamblers? The misleading nature of impact factors) was recently posted on the Frontiers in Psychology website (open access! yay!). Mayor’s argument is summed up nicely in this figure:

The left panel plots the mean versus median number of citations per article in a given year (each year is a separate point) for 3 journals: Nature (solid circles), Psych Review (squares), and Psych Science (triangles). The right panel plots the number of citations each paper receives in each of the first 15 years following its publication. What you can clearly see is that (a) the mean and median are very strongly related for the psychology journals, but completely unrelated for Nature, implying that a very small number of articles account for the vast majority of Nature citations (Mayor cites data indicating that up to 40% of Nature papers are never cited); and (b) Nature papers tend to get cited heavily for a year or two, and then disappear, whereas Psych Science, and particularly Psych Review, tend to have much longer shelf lives. Based on these trends, Mayor concludes that:

From this perspective, the IF, commonly accepted as golden standard for performance metrics seems to reward high-risk strategies (after all your Nature article has only slightly over 50% chance of being ever cited!), and short-lived outbursts. Are scientists then nearsighted gamblers?
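Before getting to my objections, it’s worth spelling out what that mean/median divergence actually implies. Here’s a toy simulation (my own illustration, not Mayor’s data; every parameter is made up) contrasting a hypothetical heavy-tailed, “Nature-like” citation distribution with a more homogeneous “specialist-like” one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers = 1000

# Hypothetical heavy-tailed journal: a few papers soak up most of the citations.
nature_like = rng.lognormal(mean=1.0, sigma=2.0, size=n_papers).astype(int)

# Hypothetical homogeneous journal: most papers get a similar number of citations.
specialist_like = rng.poisson(lam=15, size=n_papers)

for name, cites in [("heavy-tailed", nature_like), ("homogeneous", specialist_like)]:
    print(f"{name}: mean = {cites.mean():.1f}, median = {np.median(cites):.1f}, "
          f"never cited = {100 * (cites == 0).mean():.0f}%")
```

With placeholder numbers like these, the heavy-tailed journal ends up with a mean several times its median and a sizable fraction of never-cited papers, while the homogeneous journal’s mean and median sit right on top of each other. That’s the same qualitative signature as Mayor’s left panel.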

I’d very much like to believe this, in that I think the massive emphasis scientists collectively place on publishing work in broad-interest, short-format journals like Nature and Science is often quite detrimental to the scientific enterprise as a whole. But I don’t actually believe it, because I think that, for any individual paper, researchers generally do have good incentives to try to publish in the glamor mags rather than in more specialized journals. Mayor’s figure, while informative, doesn’t take a number of factors into account:

  • The types of papers that get published in Psych Review and Nature are very different. Review papers, in general, tend to get cited more often, and for a longer time. A better comparison would be between Psych Review papers and only the review papers in Nature (there aren’t many of them, unfortunately). My guess is that that difference alone probably explains much of the difference in citation rates later on in an article’s life. That would also explain why the temporal profile of Psych Science articles (which are also overwhelmingly short empirical reports) is similar to that of Nature. Major theoretical syntheses stay relevant for decades; individual empirical papers, no matter how exciting, tend to stop being cited as frequently once (a) the finding fails to replicate, or (b) a literature builds up around the original report, and researchers stop citing individual studies and start citing review articles (e.g., in Psych Review).
  • Scientists don’t just care about citation counts; they also care about reputation. The reality is that much of the appeal of having a Nature or Science publication isn’t necessarily that you expect the work to be cited much more heavily, but that you get to tell everyone else how great you must be because you have a publication in Nature. Now, on some level, we know that it’s silly to hold glamor mags in such high esteem, and Mayor’s data are consistent with that idea. In an ideal world, we’d read all papers ultra-carefully before making judgments about their quality, rather than using simple but flawed heuristics like what journal those papers happen to be published in. But this isn’t an ideal world, and the reality is that people do use such heuristics. So it’s to each scientist’s individual advantage (but to the field’s detriment) to exploit that knowledge.
  • Different fields have very different citation rates. And articles in different fields have very different shelf lives. For instance, I’ve heard that in many areas of physics, the field moves so fast that articles are basically out of date within a year or two (I have no way to verify whether this is true). That’s certainly not true of most areas of psychology. In cognitive neuroscience, for instance, the current state of the field in many areas is still reasonably well captured by highly cited publications that are 5–10 years old. Most behavioral areas of psychology seem to advance even more slowly. So one might well expect articles in psychology journals to peak later in time than the average Nature article, because Nature contains a high proportion of articles in the natural sciences.
  • Articles are probably selected for publication in Nature, Psych Science, and Psych Review for different reasons. In particular, there’s no denying the fact that Nature selects articles in large part based on the perceived novelty and unexpectedness of the result. That’s not to say that methodological rigor doesn’t play a role, just that, other things being equal, unexpected findings are less likely to be replicated. Since Nature and Science overwhelmingly publish articles with new and surprising findings, it shouldn’t be surprising if the articles in these journals have a lower rate of replication several years on (and hence, stop being cited). That’s presumably going to be less true of articles in specialist journals, where novelty and appeal to a broad audience are usually less important criteria.

Addressing these points would probably go a long way towards closing, and perhaps even reversing, the gap implied by Mayor’s figure. I suspect that if you could do a controlled experiment and publish the exact same article in Nature and Psych Science, it would tend to get cited more heavily in Nature over the long run. So in that sense, if citations were all anyone cared about, I think it would be perfectly reasonable for scientists to try to publish in the most prestigious journals–even though, again, I think the pressure to publish in such journals actually hurts the field as a whole.

Of course, in reality, we don’t just care about citation counts anyway; lots of other things matter. For one thing, we also need to factor in the opportunity cost associated with writing a paper up in a very specific format for submission to Nature or Science, knowing that we’ll probably have to rewrite much or all of it before it gets published. All that effort could probably have been spent on other projects, so one way to put the question is: how many lower-tier publications in the hand is a top-tier publication in the bush worth?

Ultimately, it’s an empirical matter; I imagine if you were willing to make some strong assumptions, and collect the right kind of data, you could come up with a meaningful estimate of the actual value of a Nature publication, as a function of important variables like the number of other publications the authors had, the amount of work invested in rewriting the paper after rejection, the authors’ career stage, etc. But I don’t know of any published work to that effect; it seems like it would probably be more trouble than it was worth (or, to get meta: how many Nature manuscripts can you write in the time it takes you to write a manuscript about how many Nature manuscripts you should write?). And, to be honest, I suspect that any estimate you obtained that way would have little or no impact on the actual decisions scientists make about where to submit their manuscripts anyway, because, in practice, such decisions are driven as much by guesswork and wishful thinking as by any well-reasoned analysis. And on that last point, I speak from extensive personal experience…
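To give a flavor of what such an estimate might look like, here’s a deliberately crude expected-value sketch. Every number in it is a hypothetical placeholder rather than an empirical estimate, and the answer flips sign depending entirely on what you plug in:

```python
# A crude sketch of the "publications in the hand" calculation.
# Every quantity below is a made-up placeholder, not an empirical estimate.

p_accept_nature = 0.08           # hypothetical probability the submission survives Nature review
extra_months_gambling = 6        # hypothetical extra months spent formatting, rewriting, resubmitting
months_per_specialist_paper = 4  # hypothetical months to produce one specialist-journal paper
value_nature_paper = 10.0        # subjective career value of one Nature paper
value_specialist_paper = 1.0     # subjective career value of one specialist-journal paper

# Expected payoff of gambling on Nature first (the paper ends up published somewhere either way).
ev_gamble = (p_accept_nature * value_nature_paper
             + (1 - p_accept_nature) * value_specialist_paper)

# Opportunity cost: specialist papers you could have written in the extra time.
opportunity_cost = (extra_months_gambling / months_per_specialist_paper) * value_specialist_paper

# Net benefit of the gamble relative to just submitting directly to a specialist journal.
net = ev_gamble - opportunity_cost - value_specialist_paper

print(f"Expected value of the gamble: {ev_gamble:.2f}")
print(f"Opportunity cost of the attempt: {opportunity_cost:.2f}")
print(f"Net benefit relative to publishing directly: {net:+.2f}")
```

Whether that net comes out positive or negative depends entirely on the placeholder values, which is precisely the point: acceptance probability, rewriting effort, career stage, and how much a glamor-journal line on the CV is really worth would all have to be measured before the calculation means anything.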

4 thoughts on “how many Cortex publications in the hand is a Nature publication in the bush worth?”

  1. It seems to me that, just as one wants an investment portfolio that is balanced in terms of risk/reward, science benefits from having a range of journals, from boring but fairly reliable like Psych Review to flashy but fairly unreliable like Nature.

    I also wonder whether the sheer number of papers in Nature (which publishes more papers in one week than Psych Review does in a month) biases the comparison. Think of it as r vs. K selection for articles.

  2. I agree that it’s good to have a balance of risk/reward; I’m just not sure that the range of journals we have actually optimizes that balance. I imagine in a world with a very different publishing model (e.g., everyone deposits to a preprint server first, à la arxiv.org), you would still have that balance arising naturally, but there’d be less pressure to IF-chase.

    I’m not sure how much the number of papers matters, in that Nature gets to choose from all fields, and Psych Review only gets one… so in theory, Nature should be able to select only the cream of the crop (especially since they don’t have to publish any articles in a field in a given week if they don’t want to).

  3. Since Nature and Science overwhelmingly publish articles with new and surprising findings, it shouldn’t be surprising if the articles in these journals have a lower rate of replication several years on (and hence, stop being cited).

    And Psych Science doesn’t? 65% of the 2500 manuscripts Psych Science receives each year are declined without review, mainly on the concern that the results are “incremental,” which in many ways is another way of saying “not surprising.” I’m not actually sure whether it’s empirically harder to get a psych paper into Nature or Psych Science, or if anyone collects such data.

    But I think your other points are on target. I’d add one more: do we *really* think that where a paper is published has a large effect on how likely it is to be cited? Or are more-cited papers likely to be published in certain journals? It’s not that hard to predict which papers are going to be highly cited in advance, and highly selective journals choose their papers based largely on those same criteria that lead to future citations.

  4. In asking “Are scientists then nearsighted gamblers?”, Mayor assumes that scientists are the ones who set the standards for how science should be evaluated. At least from France, this appears far from true. Evaluative authorities come up with a scoring system that rests to a greater or lesser extent on the IF. This may be partly justified by the simplicity of the index, as well as by inertia in the domain, as in any other.

    Actually, there may also be a gap between the actual practice of such agencies and what scientists believe that practice to be [see: http://www.nature.com/news/2010/100616/full/465860a.html]
