No, it’s not The Incentives—it’s you

There’s a narrative I find kind of troubling, but that unfortunately seems to be growing more common in science. The core idea is that the mere existence of perverse incentives is a valid and sufficient reason to knowingly behave in an antisocial way, just as long as one first acknowledges the existence of those perverse incentives. The way this dynamic usually unfolds is that someone points out some fairly serious problem with the way many scientists behave—say, our collective propensity to p-hack as if it’s going out of style, or the fact that we insist on submitting our manuscripts to publishers that are actively trying to undermine our interests—and then someone else will say, “I know, right—but what are you going to do, those are the incentives.”

As best I can tell, the words “it’s the incentives” are magic. Once they’re uttered by someone, natural law demands that everyone else involved in the conversation immediately stop whatever else they were doing, solemnly nod, and mumble something to the effect that, yes, the incentives are very bad, very bad indeed, and it’s a real tragedy that so many smart, hard-working people are being crushed under the merciless, gigantic boot of The System. Then there’s usually a brief pause, and after that, everyone goes back to discussing whatever they were talking about a moment earlier.

Perhaps I’m getting senile in my early middle age, but my anecdotal perception is that it used to be that, when somebody pointed out to a researcher that they might be doing something questionable, that researcher would typically either (a) argue that they weren’t doing anything questionable (often incorrectly, because there used to be much less appreciation for some of the statistical issues involved), or (b) look uncomfortable for a little while, allow an awkward silence to bloom, and then change the subject. In the last few years, I’ve noticed that uncomfortable discussions about questionable practices disproportionately seem to end with a chuckle or shrug, followed by a comment to the effect that we are all extremely sophisticated human beings who recognize the complexity of the world we live in, and sure it would be great if we lived in a world where one didn’t have to occasionally engage in shenanigans, but that would be extremely naive, and after all, we are not naive, are we?

There is, of course, an element of truth to this kind of response. I’m not denying that perverse incentives exist; they obviously do. There’s no question that many aspects of modern scientific culture systematically incentivize antisocial behavior, and I don’t think we can or should pretend otherwise. What I do object to quite strongly is the narrative that scientists are somehow helpless in the face of all these awful incentives—that we can’t possibly be expected to take any course of action that has any potential, however small, to impede our own career development.

“I would publish in open access journals,” your friendly neighborhood scientist will say. “But those have a lower impact factor, and I’m up for tenure in three years.”

Or: “If I corrected for multiple comparisons in this situation, my effect would go away, and then the reviewers would reject the paper.”

Or: “I can’t ask my graduate students to collect an adequately-powered replication sample; they need to publish papers as quickly as they can so that they can get a job.”

There are innumerable examples of this kind, and they’ve become so routine that it appears many scientists have stopped thinking about what the words they’re saying actually mean, and instead simply glaze over and nod sagely whenever the dreaded Incentives are invoked.

A random bystander who happened to eavesdrop on a conversation between a group of scientists kvetching about The Incentives could be forgiven for thinking that maybe, just maybe, a bunch of very industrious people who generally pride themselves on their creativity, persistence, and intelligence could find some way to work around, or through, the problem. And I think they would be right. The fact that we collectively don’t see it as a colossal moral failing that we haven’t figured out a way to get our work done without having to routinely cut corners in the rush for fame and fortune is deeply troubling.

It’s also aggravating on an intellectual level, because the argument that we’re all being egregiously and continuously screwed over by The Incentives is just not that good. I think there are a lot of reasons why researchers should be very hesitant to invoke The Incentives as a justification for why any of us behave the way we do. I’ll give nine of them here, but I imagine there are probably others.

1. You can excuse anything by appealing to The Incentives

No, seriously—anything. Once you start crying that The System is Broken in order to excuse your actions (or inactions), you can absolve yourself of responsibility for all kinds of behaviors that, on paper, should raise red flags. Consider just a handful of behaviors that few scientists would condone:

  • Fabricating data or results
  • Regularly threatening to fire trainees in order to scare them into working harder
  • Deliberately sabotaging competitors’ papers or grants by reviewing them negatively

I think it’s safe to say most of us consider such practices to be thoroughly immoral, yet there are obviously people who engage in each of them. And when those people are caught or confronted, one of the most common justifications they fall back on is… you guessed it: The Incentives! When Diederik Stapel confessed to fabricating the data used in over 50 publications, he didn’t explain his actions by saying “oh, you know, I’m probably a bit of a psychopath”; instead, he placed much of the blame squarely on The Incentives:

I did not withstand the pressure to score, to publish, the pressure to get better in time. I wanted too much, too fast. In a system where there are few checks and balances, where people work alone, I took the wrong turn. I want to emphasize that the mistakes that I made were not born out of selfish ends.

Stapel wasn’t acting selfishly, you see… he was just subject to intense pressures. Or, you know, Incentives.

Or consider these quotes from a New York Times article describing Stapel’s unraveling:

In his early years of research — when he supposedly collected real experimental data — Stapel wrote papers laying out complicated and messy relationships between multiple variables. He soon realized that journal editors preferred simplicity. “They are actually telling you: ‘Leave out this stuff. Make it simpler,’” Stapel told me. Before long, he was striving to write elegant articles.

The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me.

Reading through such accounts, it’s hard to avoid the conclusion that Stapel’s self-narrative is strikingly similar to the one that gets tossed out all the time on social media, or in conference bar conversations: here I am, a good scientist trying to do an honest job, and yet all around me is a system that incentivizes deception and corner-cutting. What do you expect me to do?

Curiously, I’ve never heard any of my peers—including many of the same people who are quick to invoke The Incentives to excuse their own imperfections—seriously endorse The Incentives as an acceptable justification for Stapel’s behavior. In Stapel’s case, the inference we overwhelmingly jump to is that there must be something deeply wrong with Stapel, seeing as the rest of us also face the same perverse incentives on a daily basis, yet we somehow manage to get by without fabricating data. But this conclusion should make us a bit uneasy, I think, because if it’s correct (and I think it is), it implies that we aren’t really such slaves to The Incentives after all. When our morals get in the way, we appear to be perfectly capable of resisting temptation. And I mean, it’s not even like it’s particularly difficult; I doubt many researchers actively have to fight the impulse to manipulate their data, despite the enormous incentives to do so. I submit that the reason many of us feel okay doing things like reporting exploratory results as confirmatory results, or failing to mention that we ran six other studies we didn’t report, is not really that The Incentives are forcing us to do things we don’t like, but that it’s easier to attribute our unsavory behaviors to unstoppable external forces than to take responsibility for them and accept the consequences.

Needless to say, I think this kind of attitude is fundamentally hypocritical. If we’re not comfortable with pariahs like Stapel blaming The Incentives for causing them to fabricate data, we shouldn’t use The Incentives as an excuse for doing things that are on the same spectrum, albeit less severe. If you think that what the words “I did not withstand the pressure to score” really mean when they fall out of Stapel’s mouth is something like “I’m basically a weak person who finds the thought of not being important so intolerable I’m willing to cheat to get ahead”, then you shouldn’t give yourself a free pass just because when you use that excuse, you’re talking about much smaller infractions. Consider the possibility that maybe, just like Stapel, you’re actually appealing to The Incentives as a crutch to avoid having to make your life very slightly more difficult.

2. It would break the world if everyone did it

When people start routinely accepting that The System is Broken and The Incentives Are Fucking Us Over, bad things tend to happen. It’s very hard to have a stable, smoothly functioning society once everyone believes (rightly or wrongly) that gaming the system is the only way to get by. Imagine if every time you went to your doctor—and I’m aware that this analogy won’t work well for people living outside the United States—she sent you to get a dozen expensive and completely unnecessary medical tests, and then, when prompted for an explanation, simply shrugged and said “I know I’m not an angel—but hey, them’s The Incentives.” You would be livid—even though it’s entirely true (at least in the United States; other developed countries seem to have figured this particular problem out) that many doctors have financial incentives to order unnecessary tests.

To be clear, I’m not saying perverse incentives never induce bad behavior in medicine or other fields. Of course they do. My point is that practitioners in other fields at least appear to have enough sense not to loudly trumpet The Incentives as a reasonable justification for their antisocial behavior—or to pat themselves on the back for being the kind of people who are clever enough to see the fiendish Incentives for exactly what they are. My sense is that when doctors, lawyers, journalists, etc. fall prey to The Incentives, they generally consider that to be a source of shame. I won’t go so far as to suggest that we scientists take pride in behaving badly—we obviously don’t—but we do seem to have collectively developed a rather powerful form of learned helplessness that doesn’t seem to be matched by other communities. Which is a fortunate thing, because if every other community also developed the same attitude, we would be in a world of trouble.

3. You are not special

Individual success in science is, to a first approximation, a zero-sum game—at least in the short term. Yet many scientists who appeal to The Incentives seem to genuinely believe that opting out of doing the right thing is a victimless crime. I mean, sure, it might make the system a bit less efficient overall… but that’s just life, right? It’s not like anybody’s actually suffering.

Well yeah, people actually do suffer. There are many scientists who are willing to do the right things—to preregister their analysis plans, to work hard to falsify rather than confirm their hypotheses, to diligently draw attention to potential confounds that complicate their preferred story, and so on. When you assert your right to opt out of these things because apparently your publications, your promotions, and your students are so much more important than everyone else’s, you’re cheating those people.

No, really, you are. If you don’t like to think of yourself as someone who cheats other people, don’t reflexively collapse on a crutch made out of stainless steel Incentives any time someone questions your process. You are not special. Your publications, job, and tenure are not more important than other people’s. The fact that there are other people in your position engaging in the same behaviors doesn’t mean you and your co-authors are all very sophisticated, and that the people who refuse to cut corners are naive simpletons. What it actually demonstrates is that, somewhere along the way, you developed the reflexive ability to rationalize away behavior that you would disapprove of in others and that, viewed dispassionately, is clearly damaging to science.

4. You (probably) have no data

It’s telling that appeals to The Incentives are rarely supported by any actual data. It’s simply taken for granted that engaging in the practice in question would be detrimental to one’s career. The next time you’re tempted to blame The System for making you do bad things, you might want to ask yourself this: Do you actually know that, say, publishing in PLOS ONE rather than [insert closed society journal of your choice] would hurt your career? If so, how do you know that? Do you have any good evidence for it, or have you simply accepted it as a stylized fact?

Coming by the kind of data you’d need to answer this question is actually not that easy: it’s not enough to reflexively point to, say, the fact that some journals have higher impact factors than others. To identify the utility-maximizing course of action, you’d need to integrate over both benefits and costs, and the costs are not always so obvious. For example, the opportunity cost of submitting your paper to a “good” journal will be offset to some extent by the likelihood of faster publication (no need to spend two years racking up rejections at high-impact venues), by the positive signal you send to at least some of your peers that you support open scientific practices, and so on.
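
To make this concrete, here’s a toy back-of-the-envelope sketch of what such an integration might look like. Every number in it is invented purely for illustration—the utility function, the weights, and the probabilities are all assumptions, not estimates of anything:

    # A toy expected-utility comparison of two publishing strategies.
    # All values are hypothetical; the point is the structure of the
    # calculation, not the specific numbers.

    def expected_utility(prestige, p_accept_per_round, months_per_round,
                         openness_value, cost_per_month=0.1):
        """Prestige payoff, minus time lost in review, plus whatever
        value you attach to signaling open practices."""
        # Expected number of submission rounds (mean of a geometric distribution).
        expected_rounds = 1 / p_accept_per_round
        review_cost = expected_rounds * months_per_round * cost_per_month
        return prestige - review_cost + openness_value

    # High-impact closed journal: big prestige payoff, slow and risky review.
    glossy = expected_utility(prestige=10, p_accept_per_round=0.1,
                              months_per_round=6, openness_value=0)

    # Open-access venue: smaller payoff, faster acceptance, openness credit.
    open_access = expected_utility(prestige=6, p_accept_per_round=0.5,
                                   months_per_round=3, openness_value=2)

    print(f"closed: {glossy:.1f}, open: {open_access:.1f}")  # closed: 4.0, open: 7.4

Under these made-up numbers the “lower-impact” choice comes out ahead; with different numbers it wouldn’t. The point is only that the comparison has several terms, and reflexively citing the impact factor answers none of them.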

I’m not saying that a careful consideration of the pros and cons of doing the right thing would usually lead people to change their minds. It often won’t. What I’m saying is that people who blame The Incentives for forcing them to submit their papers to certain journals, to tell post-hoc stories about their work, or to use suboptimal analytical methods don’t generally support their decisions with data, or even with well-reasoned argument. The defense is usually completely reflexive—which should raise our suspicion that it’s also just a self-serving excuse.

5. It (probably) won’t matter anyway

This one might hurt a bit, but I think it’s important to consider—particularly for early-career researchers. Let’s suppose you’re right that doing the right thing in some particular case would hurt your career. Maybe it really is true that if you comprehensively report all the studies you ran, and not just the ones that “worked”, your colleagues will receive your work less favorably. In such cases it may seem natural to think that there has to be a tight relationship between the current decision and the global outcome—i.e., that if you don’t drop the failed studies, you won’t get a tenure-track position three years down the road. After all, you’re focusing on that causal relationship right now, and it seems so clear in your head!

Unfortunately (or perhaps fortunately?), reality doesn’t operate that way. Outcomes in academia are multiply determined and enormously complex. You can tell yourself that getting more papers out faster will get you a job if it makes you feel better, but that doesn’t make it true. If you’re a graduate student on the job market these days, I have sad news for you: you’re probably not getting a tenure-track job no matter what you do. It doesn’t matter how many p-hacked papers you publish, or how thinly you slice your dissertation into different “studies”; there are not nearly enough jobs to go around for everyone who wants one.

Suppose you’re right, and your sustained pattern of corner-cutting is in fact helping you get ahead. How far ahead do you think it’s helping you get? Is it taking you from a 3% chance of getting a tenure-track position at an R1 university to an 80% chance? Almost certainly not. Maybe it’s increasing that probability from 7% to 11%; that would still be a non-trivial relative increase, but it doesn’t change the fact that, for the average grad student, there is no full-time faculty position waiting at the end of the road. Despite what the environment around you may make you think, the choice most graduate students and postdocs face is not actually between (a) maintaining your integrity and “failing” out of science or (b) cutting a few corners and achieving great fame and fortune as a tenured professor. The Incentives are just not that powerful. The vastly more common choice you face as a trainee is between (a) maintaining your integrity and having a pretty low chance of landing a permanent research position, or (b) cutting a bunch of corners that threaten the validity of your work and having a slightly higher (but still low in absolute terms) chance of landing a permanent research position. And even that’s hardly guaranteed, because you never know when there’s someone on a hiring committee who’s going to be turned off by the obvious p-hacking in your work.
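
If it helps, the arithmetic behind that relative-versus-absolute distinction is trivial to check (using the same hypothetical 7% and 11% figures from above):

    # Hypothetical job probabilities from the paragraph above.
    p_integrity, p_corner_cutting = 0.07, 0.11

    relative_gain = (p_corner_cutting - p_integrity) / p_integrity  # ~0.57
    absolute_gain = p_corner_cutting - p_integrity                  # 0.04

    print(f"relative gain: {relative_gain:.0%}")  # relative gain: 57%
    print(f"absolute gain: {absolute_gain:.0%}")  # absolute gain: 4%
    print(f"chance of no job even after cutting corners: "
          f"{1 - p_corner_cutting:.0%}")          # 89%

A 57% relative improvement sounds dramatic; four percentage points—against an 89% chance of ending up without the job regardless—does not.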

The point is, the world is complicated, and as a general rule, very few things—including the number of publications you produce—are as important as they seem to be when you’re focusing on them in the moment. If you’re an early-career researcher and you regularly find yourself struggling between doing what’s right and doing what isn’t right but (you think) benefits your career, you may want to take a step back and dispassionately ask yourself whether this integrity-versus-expediency conflict is actually a productive way to frame things. Instead, consider the alternative framing I suggested above: you are most likely going to leave academia eventually, no matter what you do, so why not at least try to see the process through with some intellectual integrity? And I mean, if you’re really so convinced that The System is Broken, why would you want to stay in it anyway? Do you think standards are going to change dramatically in the next few years? Are you laboring under the impression that you, of all people, are going to somehow save science?

This brings us directly to the next point…

6. You’re (probably) not going to “change things from the inside”

Over the years, I’ve talked to quite a few early-career researchers who have told me that while they can’t really stop engaging in questionable research practices right now without hurting their career, they’re definitely going to do better once they’re in a more established position. These are almost invariably nice, well-intentioned people, and I don’t doubt that they genuinely believe what they say. Unfortunately, what they say is slippery, and has a habit of adapting to changing circumstances. As a grad student or postdoc, it’s easy to think that once you get a faculty position, you’ll be able to start doing research the “right” way. But once you get a faculty position, it then turns out you need to get papers and grants in order to get tenure (I mean, who knew?), so you decide to let the dreaded Incentives win for just a few more years. And then, once you secure tenure, well, now the problem is that your graduate students also need jobs, just like you once did, so you can’t exactly stop publishing at the same rate, can you? Plus, what would all your colleagues think if you effectively said, “oh, you should all treat the last 15 years of my work with skepticism—that was just for tenure”?

I’m not saying there aren’t exceptions. I’m sure there are. But I can think of at least a half-dozen people off-hand who’ve regaled me with some flavor of “once I’m in a better position” story, and none of them, to my knowledge, have followed through on their stated intentions in a meaningful way. And I don’t find this surprising: in most walks of life, course correction generally becomes harder, not easier, the longer you’ve been traveling on the wrong bearing. So if part of your unhealthy respect for The Incentives is rooted in an expectation that those Incentives will surely weaken their grip on you just as soon as you reach the next stage of your career, you may want to rethink your strategy. The Incentives are not going to dissipate as you move up the career ladder; if anything, you’re probably going to have an increasingly difficult time shrugging them off.

7. You’re not thinking long-term

One of the most frustrating aspects of appeals to The Incentives is that they almost invariably seem to focus exclusively on the short-to-medium term. But the long term also matters. And there, I would argue that The Incentives very much favor a radically different—and more honest—approach to scientific research. To see this, we need only consider the ongoing “replication crisis” in many fields of science. One thing that I think has been largely overlooked in discussions about the current incentive structure of science is what impact the replication crisis will have on the legacies of a huge number of presently famous scientists.

I’ll tell you what impact it will have: many of those legacies will be completely zeroed out. And this isn’t just hypothetical scaremongering. It’s happening right now to many former stars of psychology (and, I imagine, other fields I’m less familiar with). There are many researchers we can point to right now who used to be really famous (like, major-chunks-of-the-textbook famous), are currently famous-with-an-asterisk, and will, in all likelihood, be completely unknown again within a couple of decades. The unlucky ones are probably even fated to become infamous—their entire scientific legacies eventually reduced to footnotes in cautionary histories illustrating how easily entire areas of scientific research can lose their footing when practitioners allow themselves to be swept away by concerns about The Incentives.

You probably don’t want this kind of thing to happen to you. I’m guessing you would like to retire with at least some level of confidence that your work, while maybe not Earth-shattering in its implications, isn’t going to be tossed on the scrap heap of history one day by a new generation of researchers amazed at how cavalier you and your colleagues once were about silly little things like “inferential statistics” and “accurate reporting”. So if your justification for cutting corners is that you can’t otherwise survive or thrive in the present environment, you should consider the prospect—and I mean, really take some time to think about it—that any success you earn within the next 10 years by playing along with The Incentives could ultimately make your work a professional joke within the 20 years after that.

8. It achieves nothing and probably makes things worse

Hey, are you a scientist? Yes? Great, here’s a quick question for you: do you think there’s any working scientist on Planet Earth who doesn’t already know that The Incentives are fucked up? No? I didn’t think so. Which means you really don’t need to keep bemoaning The Incentives; I promise you that you’re not helping to draw much-needed attention to an important new problem nobody’s recognized before. You’re not expressing any deep insight by pointing out that hiring committees prefer applicants with lots of publications in high-impact journals to applicants with a few publications in journals no one’s ever heard of. If your complaints are achieving anything at all, they’re probably actually making things worse by constantly (and incorrectly) reminding everyone around you about just how powerful The Incentives are.

Here’s a suggestion: maybe try not talking about The Incentives for a while. You could even try, I don’t know, working against The Incentives for a change. Or, if you can’t do that, just don’t say anything at all. Probably nobody will miss anything, and the early-career researchers among us might even be grateful for a respite from their senior colleagues’ constant reminders that The System—the very same system those senior colleagues are responsible for creating!—is so fucked up.

9. It’s your job

This last one seems so obvious it should go without saying, but it does need saying, so I’ll say it: a good reason why you should avoid hanging bad behavior on The Incentives is that you’re a scientist, and trying to get closer to the truth, and not just to tenure, is in your fucking job description. Taxpayers don’t fund you because they care about your career; they fund you to learn shit, cure shit, and build shit. If you can’t do your job without having to regularly excuse sloppiness on the grounds that you have no incentive to be less sloppy, at least have the decency not to say that out loud in a crowded room or Twitter feed full of people who indirectly pay your salary. Complaining that you would surely do the right thing if only these terrible Incentives didn’t exist doesn’t make you the noble martyr you think it does; to almost anybody outside your field who has a modicum of integrity, it just makes you sound like you’re looking for an easy out. It’s not sophisticated or worldly or politically astute, it’s just dishonest and lazy. If you find yourself unable to do your job without regularly engaging in practices that clearly devalue the very science you claim to care about, and this doesn’t bother you deeply, then maybe the problem is not actually The Incentives—or at least, not The Incentives alone. Maybe the problem is You.

40 thoughts on “No, it’s not The Incentives—it’s you”

  1. I’m pretty sure that psychology has shown us — perhaps even without p-hacking! — that people who actually have only a 7% chance of getting whatever their desired goal is will behave as if that chance is a lot higher, even if you show them a lot of 8×10 colour glossy photographs with circles and arrows and a paragraph on the back of each one explaining that their chance is only 7%. (If it isn’t psychology, it’s Garrison Keillor. We could call it the Lake Wobegon University effect.)

    1. I think that’s probably right, but I suspect part of it is probably due to the kind of natural tunnel vision I allude to in #5 in my post: we tend to think the world is much more deterministic than it actually is. So the hope is that by encouraging people to take a step back and think about what the actual effect of publishing 1 or 2 more papers is, at least some people might realize that the way they’d been internally framing the choice is maybe not the best way to think about it.

  2. Excellent article! I’m not excusing anyone here, but I think something similar happens when it comes to speeding in your car. It’s illegal, but almost everyone does it, not REALLY over the limit, but just a bit, so it’s OK, right? We all think big crimes, like burglary and making up data, are bad, but little ones, like speeding and mild p-hacking, or submitting to Elsevier journals, or … are, well, almost everyone does it, right? So, it’s kind of OK if we don’t think about it too hard.

    1. That’s a good analogy, and I think the point I’m making actually works well for speeding too: when you get pulled over by a police officer for going 10 over the limit, nobody is going to take you seriously if your objection to the ticket is “but I’m incentivized to go 10 over, because I can get home a little faster, and hardly anyone ever gets pulled over at that speed!” The way we all think about speeding tickets is that, sure, there may be reasons we choose to break the law, but it’s still our informed decision to do so. We don’t try to shirk the responsibility for speeding by pretending that we’re helpless in the face of the huge incentive to get where we’re going just a little bit faster than the law actually allows. I think if we looked at research practice the same way, that would be a considerable improvement.

      1. If only scientists got speeding fines instead of just ‘speeding, drunk, mobile phoning, crashed and killed hundreds of people’ fines…

  3. I fully agree that the Incentives are no excuse, but I think Stapel’s explanation for his fraud is more nuanced than ‘it was the Incentives’.

    1. Agreed–I don’t think I suggested that that was the *only* explanation he gave. But it clearly was a big part of his explanation.

  4. The major thing that I totally agree with you about, and which effectively kills any excuse, is where you write that one has to be asking oneself the question, “if, doing my job as well as I can, I can’t compete in this field with people who do their job poorly, do I want to be in this field?”

    Though I do think that your piece implicitly assumes that researchers are somehow, or should be, morally above anyone else. Somehow, scientists should first be thinking about science, then about where the money for the next day comes from. Unfortunately, these are not the 60s, nor the 90s; the job-poor, under-paid, ultra-competitive, no-social-safety-net era is here, and scientists, intelligent as they are, know this. So once you have a cash flow going, or if you’ve invested close to 10 years into PhD and postdoc, who are you to suggest the Scientist should selflessly go on the dole or flip burgers, because they love science so much? This is no excuse, but there is an economic reality in 2018 that this piece is entirely blind to.

    I don’t think the cutting corners in and of itself should be the focus here though; it seems like you’re putting the burden of it all on junior researchers (the ones in the current system mostly hampered by the fact that good science takes time), as if they should tank themselves in the name of Pure Science (although you swap the career level between different pieces of your text). I think the only way junior scientists can do good science is if the criteria by which they get hired look at (and have ways of looking at, e.g. open data/prereg etc.) good science, and not at quantity. So yes, the burden is on the scientists doing the paper reviews, the grant reviews, etc. And they are us, especially senior researchers. Who’s the first one to stand up and say that a 1-year postdoc can hardly cover data collection, let alone preregistration, replication, double check, decent analysis, and publication? And even a 2-year one. Who’s the first to say that no, science doesn’t thrive in 3-year projects?

    But I agree that these shouldn’t be an excuse to misbehave. But whereas I see your point, it reads like you see this responding to incentives as the main culprit, which it is only up to a certain point. Let’s face it, if the tenure system would not exist, and scientists knew that they were training the people that would kick them out of their chair by the time they were 50, would you be saying, “I sacrifice my post-50 career on the altar of education, because it’s my fucking job?” If so, kudos.

    1. Hi Bert,

      Thanks for the comment. There’s a lot here, so let me address it piece by piece:

      Though I do think that your piece implicitly assumes that researchers are somehow, or should be, morally above anyone else. Somehow, scientists should first be thinking about science, then about where the money for the next day comes from.

      I think I’m arguing the opposite, actually. As I note in the post, there are plenty of other domains of life where perverse incentives exist, and in these domains, we don’t usually find it acceptable when people appeal to perverse incentives to justify their antisocial behavior. I’m not suggesting that no scientist should ever give in to perverse incentives, but simply that when we do so, we should acknowledge what we’re doing and take some (not all) of the responsibility for it.

      So once you have a cash flow going, or if you’ve invested close to 10 years into PhD and postdoc, who are you to suggest the Scientist should selflessly go on the dole or flip burgers, because they love science so much? This is no excuse, but there is an economic reality in 2018 that this piece is entirely blind to.

      Again, I think if anything the opposite is true. There is very little economic argument for almost anyone to stay in academia. Almost anyone with a PhD and reasonable technical skills can make considerably more in industry, and most people without technical skills can too. For people who don’t manage to land faculty positions at major research universities, the pay at second- and third-tier schools is often abysmal. Outside of academia, the unemployment rate for PhDs is effectively zero, and virtually none of the jobs people get involve flipping burgers. So no, I am certainly not saying that scientists should be selfless. Quite the opposite. Part of my argument (#5) explicitly rests on the recognition that most PhDs are going to end up outside of academia anyway (not starving, but making good livings!), so there is little cost to insisting on doing research with integrity in cases where the increment in likelihood of success is small, and “failure” can look pretty attractive anyway.

      I don’t think the cutting corners in and of itself should be the focus here though; it seems like you’re putting the burden of it all on junior researchers (the ones in the current system mostly hampered by the fact that good science takes time)

      I don’t think I am. Again, I don’t think of this as placing a burden on anyone so much as asking people to take (some) responsibility for their decisions. Whether you’re a graduate student or a full professor, if the story you’ve been telling yourself is that you have no choice but to engage in shenanigans, I’m saying that that narrative is not accurate. We all have a choice. It may be a bad choice, and we may decide that engaging in shenanigans is still the optimal course of action given all of the various pressures we face, but that doesn’t mean we get to shirk all responsibility and blame “The System”. As I noted in response to Alan Parker above, there can be perfectly good reasons for people to speed on the highway, but when people get a ticket, we still expect them to say, “yes, it was my responsibility–I was speeding, and nobody was forcing me to.”

      Who’s the first one to stand up and say that a 1-year postdoc can hardly cover data collection, let alone preregistration, replication, double check, decent analysis, and publication?

      Lots of people are saying that, at all career stages. I’m not sure I follow you here.

      Let’s face it, if the tenure system would not exist, and scientists knew that they were training the people that would kick them out of their chair by the time they were 50, would you be saying, “I sacrifice my post-50 career on the altar of education, because it’s my fucking job?” If so, kudos.

      I’m not a fan of the tenure system, nor am I on the tenure track myself. My own position is grant funded, and if I lose my funding, I lose my job. So your concern is not hypothetical for me; it’s quite real. And I’m fine with that. If three or twenty years from now, someone else can do my job better than me (and I don’t doubt that many people can), and my funding dries up, so be it. The prospect of having to do something other than science does not terrify me; in some ways, it appeals. There are all kinds of excellent opportunities in the world for industrious, intellectually curious people with PhDs. I don’t think any of us are entitled to a lifelong position indulging our intellectual interests at taxpayer expense.

      1. Thanks for the extensive reply, Tal. I follow you almost all the way I think, and I may be guilty of painting the world outside of academia in a worse light than I should. (What I meant with the “Who’s the first one to stand up..” bit is that saying it is one thing, seeing it change is more difficult)

        Nonetheless, I don’t know how it is across the pond, but the paradoxical joblessness among highly skilled individuals, especially those having spent just a bit too long in academia, is real (which is why I usually tell beginning researchers to think deeply after the PhD and before they head into a postdoc, which can become a trap). I know the plural of anecdote is not data, but what I find is that (a) it gets progressively more difficult to get a job outside academia the older you get, irrespective of whether this is purely about age or the perception that you’re one of those ivory-tower people with bad time management (also, I have been looking, but tell me which one pays better; and I have bills to pay), and (b) one builds some sort of life, and while being a young mobile academic (preferably single and childless) is a fabulous place to be and home’s where you hang your hat, there is such a thing as constraints. On virtual paper you make it sound like we live in an age where you walk out the door and find a better-paid job next door. PhD or not, that is not the world as I know it.

        But yes, I follow that, in academia or elsewhere, the excuse of “everyone’s doing it” is lame, and has been perpetuating many a skewed situation.

  5. Thank you for your candid and insightful post, Tal! I 110% agree with your arguments. Here’s a “crazy” (and not new) idea… Maybe working in the private sector (e.g., as a data scientist) to secure personal income would help with the “I need a job to survive” argument, and you could then treat research as a hobby you are passionate about (“prof. dr.” titles are becoming so overrated, in my opinion). I think that this way, Science would suffer way less…

  6. [insert general pleasantries, appreciation for the overall tone of your posting, and a note to point out that some of us have actually been bullheadedly applying these principles in our work for decades…]

    That being said, I think you conflate two distinctly different varieties of “antisocial” behavior of which you disapprove in what you’ve written, and I would argue that they are far from equivalent:

    There is a world of difference between failing to be completely truthful in reporting results (p-hacking, chasing “confirmation” rather than attempting falsification, etc.), and publishing one’s results in a venue of which you disapprove.

    Your thesis rests on the implicit assumption that publishing in an open-access venue is somehow “less antisocial”, and I believe that assumption lacks any data to back it up. By the available metrics – the impact factor – publishing in the highest-impact venue produces the most social good (before you reflexively kick me, realize that the counterargument is that publishing in a venue where your paper will never be read somehow maximizes benefit).

    There may be some kind of “feel good” psychology attached to “sticking it to the man” by publishing in open-access venues, but if there is any actual social benefit, it comes from the improved impact and readership penetration that may come from publishing there. Interestingly, the P&T incentive is well aligned with that social benefit – it incentivizes publishing work where it will have the largest impact on future scientific endeavors.

    To demonstrate the point, I don’t think you will find /any/ researchers publishing in mixed-model journals who voluntarily opt out of the open-access option for any reason other than financial. That would be anti-social. Impact is increased, even in high-impact journals, for open-access articles. And I don’t think you’ll find any P&T incentives aligned with that behavior either.

    Now, there may be an interesting discussion to be had about whether there would be a social benefit if /everyone/ eschewed publishing in closed journals. The “then everything would be available to everyone, everyone wins, yay!” analysis is, at best, naive. It may turn out that this position is actually true, but I don’t believe there is adequate evidence to place the “maximum social benefit” pin in any specific location on the continuum from “everything is a blog post” to “everything is historic-closed-access”. There is a distinct benefit from the careful gatekeeping that goes hand-in-hand with the historic closed-access model. I doubt that moves the pin to the “everything is closed access” end of the continuum, but it also makes it unlikely that it’s fully at the “everything is a blog post” end of the continuum either.

    1. Thanks for the comment. I don’t really want to turn this into a discussion about what an optimal publishing model would look like (I’ve written about that here, if you’re interested), or what the benefits of OA are, as they’re not really central to this piece. The point I’m making here is conditional on someone feeling as though The Incentives are pushing them to do things they otherwise wouldn’t want to do. It goes without saying that people will have different ideas about what would or wouldn’t be good for science in some ideal world, so there’s no question that different decisions will precipitate that internal dilemma for different people. If you don’t like some of the examples I gave, and want to mentally substitute others, I’m totally cool with that.

  7. Thank you so much for writing this!

    I have been annoyed with the whole “incentives are to blame for everything” narrative for some time now.

    I hope it’s okay for me to add the following two things to your post:

    1) I reason that talking about “the incentives” is an indirect, and unclear, way to talk about possibly problematic things, which could stop you from really thinking about matters. This in turn could result in coming up with a whole new set of “incentives” that may not even be good for (improving) science!?

    Just like a doctor needs a good diagnosis before recommending treatment, so do scientists need a good diagnosis before coming up with all kinds of “solutions”. For instance, what if “pressure to publish” is simply a symptom of researchers needing to get grants so that their universities get “a nice piece of the pie” via overhead costs, etc.? The solutions should then not involve focusing on “writing fewer papers with higher quality” and/or hogging up lots of grant money via massive “collaborative projects”, but rather on spreading the available grant money more equally.

    Also see “Primum non nocere” (https://en.wikipedia.org/wiki/Primum_non_nocere), and the paper “Excellence by nonsense: the competition for publications in modern science” (https://link.springer.com/chapter/10.1007/978-3-319-00026-8_3)

    2) I tried to (with some sarcasm/humor) make several points concerning “the incentives” in a little pre-print I wrote. I hope it’s okay for me to share it here. It’s called “Making the most of tenure in two acts: An additional way to help change incentives in Psychological Science?”. You can find it here: https://psyarxiv.com/5vsqw/

  8. Thanks for the interesting post. I’m an ECR who is trying to resist poor practices, although sometimes my instructions are to engage in p-hacking practices, and if I don’t do it or give too much push-back, I get in trouble with my boss.

    I’d like to take issue with point 4, that you probably have no data. I think we have all noticed that funders may say one thing in their public statements – release your data, publish open access – but that when we read the fellowship and grant guidelines for peer review, 0% of the assessment is allocated to publishing open data or open-access papers. It’s not even rolled into a more general term like ‘methodological best practice’. Even if I did get lucky and get the one or two senior scientists reviewing my fellowship application who cared about these things, they wouldn’t be able to give me credit for it. They’d be constrained by the assessment criteria to count my papers, multiply by journal impact factor, and drop my application in the appropriate basket for literally 50–85% of my application’s merit.

    I still publish open data and deposit my accepted manuscripts with my library, but I know I am not going to get rewarded for it. It takes time to understand embargo periods and license agreements or curate and prepare your data in accordance with FAIR data sharing principles. In such a competitive job market, someone who doesn’t waste that time is going to have a slight advantage in the fellowship assessment process.

    I guess then this raises the question – why haven’t funders added adherence to best practices to their assessment criteria? Why aren’t assessors asked to give people a boost for sharing their datasets or preregistering their studies? And I think it’s because there isn’t a critical mass of people who care enough about these issues to do anything.

  9. This is a passionate essay arguing for personal responsibility & accountability. No one would argue against that. Setting aside the hand-wringing apologetics of the ones who have been exposed, at least the biologists in the audience would do well to keep Darwin in mind – the ‘Perverse Incentives’ model argues not that they necessarily turn good (moral, high integrity) scientists bad, but that they exert a selection pressure for people who are more willing to compromise scientific and societal values in the struggle for ‘career fitness’.

  10. You know, I get the whole appeal to personal accountability. And it’s fine to use this to convince an individual to do good. It’s great to convince yourself to do good, and there’s true value in that. But it’s totally meaningless to expect all of society to change their ways and “do better” simply because it’s right and it’s just and you think they should. You’re not going to get a large swath of people to behave better without relying on incentives and structural change. It’s just an old man’s rant at that point.

    Simply expecting everybody in the world to spontaneously do better and stop being ethically apathetic is basically the same as shaking your fist at the sky for raining on you. At scale, humankind is statistical and roughly deterministic, with a bit of chaos thrown in for good measure. Rarely does a philosophy based on pure words and ideas get enough traction to take off; in almost all cases there’s a real change in people’s environment that makes change probable on a meaningful scale, through incentives and disincentives.

    It is wrong to use incentives to hand-wave one’s own behavior; it’s not a valid excuse, that much is true. But the grand majority of people aren’t going to stop doing that on their own, and it’s not productive to hope they will.

  11. I believe the original nomenclature for this phenomenon is “don’t hate the player, hate the game.”

  12. I disagree with a few points.

    I have always found cheats to be the most unreliable of sources regarding their own actions and motivations. They seem to have a hardwired instinct to present themselves in the best possible light. So I’d suggest that such “confessions” must be handled with extreme care, although I understand that in the present case of Stapel, the point is to some extent what he claims as motivation, true or not.

    I disagree that the benefits of cheating or ceding to the Incentives are marginal. There are many cases where the “big lies” have attracted extreme prominence and rewards. Some are eventually exposed and can be cited as examples: Schön, Anversa, Poldermans… (it’s a long list); some are not. By any measure, they reached the top of the tree. Put another way, if the benefits are marginal, success must be highly nonlinear.

    I also disagree that corner-cutting is not a long-term strategy. If they can fake it until they make it (become established and networked), it becomes extraordinarily difficult to dislodge even shameless cheats. The asterisk then refers to a full life with tenure, money, power and a pension to follow, at worst.

    I do have two practical suggestions for progress:
    1) mandate full data sharing at publication for all (with tightly-drawn confidentiality exceptions)
    2) we, as a community, must penalise low-quality work when performing any sort of evaluation; in particular, this should apply even to isolated instances of bad papers, because it is impractical to counter the “the rest of the work must be OK” argument by reading a voluminous corpus.

    Those suggestions appear in my last two blog posts at: https://referee3.org

  13. This article seems to actively despise academics, and perhaps even people in general. Don’t get me wrong: the main points and most of the analysis are correct. The issue arises, as was brought up by a few commenters, from the fanatical focus on moral grandstanding and personal responsibility as the solution. This sentiment is very typical of the American individualist mindset, but should be challenged.

    Individual responsibility is useful, but it puts all the weight of the world on one’s shoulders, and shames the “lax morals” of those who cannot take the pressure. Scientists are increasingly diverse, and many of us are not fortunate enough in our circumstances to be able to forfeit all to retain purity. Most can barely retain their integrity; that is why you get cheats. The line is often crossed not from a desire for fame but from simple survival.

    The power of humankind comes from our ability to act together, for the outsize benefit of all compared to what anyone can achieve individually. If we work together, we can change the incentives, little by little, instead of either using them as an excuse or abstaining from them through temperance alone. It is not as hopeless as the article makes it out to be in point 6. There have been initiatives such as DORA, etc.

    If you don’t see what is wrong with some of the conclusions, mentally swap out the context from being about academic life to being about – say, prisons or crime and poverty. This becomes a very distasteful argument, one I am sure we have all seen in certain publications…

  14. You might not check this anymore, but I figured that I would let you know that I really appreciate this post and have been using it in my graduate methods course to try to challenge my students (along with Feynman’s cargo cult speech). I try to convince them that this is a character issue. With admittedly mixed success. Anyway, thanks.

  15. If I told somebody they were behaving evilly, and they explained that it was just the incentives causing them to behave that way and they were utterly helpless to fight their incentives

    I think I would probably try to punish them, and make it clear to them that I was going to keep trying to punish them until they stopped

    incentives are unstoppable after all
