There is no “tone” problem in psychology

Much ink has been spilled in the last week or so over the so-called “tone” problem in psychology, and what to do about it. I speak here, of course, of the now infamous (and as-yet unpublished) APS Observer column by APS Past President Susan Fiske, in which she argues rather strenuously that psychology is in danger of falling prey to “mob rule” due to the proliferation of online criticism generated by “self-appointed destructo-critics” who “ignore ethical rules of conduct.”

Plenty of people have already weighed in on the topic (my favorite summary is Andrew Gelman’s take), and to be honest, I don’t really have (m)any new thoughts to offer. But since that’s never stopped me before, I will now proceed to throw those thoughts at you anyway, just for good measure.

Since I’m verbose but not inconsiderate, I’ll summarize my main points way up here, so you don’t have to read 6,500 more words just to decide that you disagree with me. Basically, I argue the following points:

  1. There is nothing wrong with the general tone of our discourse in psychology at the moment.
  2. Even if there was something wrong with the tone of our discourse, it would be deeply counterproductive to waste our time talking about it in vague general terms.
  3. Fear of having one’s scientific findings torn apart by others is not unusual or pathological; it’s actually a completely normal–and healthy–feeling for a scientist.
  4. Appeals to fairness are not worth taking seriously unless the argument is pitched at the level of the entire scientific community, rather than just the sub-community one happens to belong to.
  5. When other scientists do things we don’t like, it’s pointless and counterproductive to question their motives.

There, that’s about as much of being brief and to the point as I can handle. From here on out, it’s all adjective soup, mixed metaphor, and an occasional literary allusion*.

1. There is no tone problem

Much of the recent discussion over how psychologists should be talking to one another simply takes it for granted that there’s some deep problem with the tone of our scientific discourse. Personally, I don’t think there is (and on the off-chance we’re doing this by vote count, neither do Andrew Gelman, Chris Chambers, Sam Schwarzkopf, or NeuroAnaTody). At the very least, I haven’t seen any good evidence for it. As far as I can tell, all of the complaints about tone thus far have been based exclusively on either (a) a handful of rather over-the-top individual examples of bad behavior, or (b) vague but unsupported allegations that certain abusive practices are actually quite common. Neither of these constitutes a satisfactory argument, in my view. The former isn’t useful because anecdotes are just that. I imagine many people can easily bring to mind several instances of what seem like unwarranted attacks on social media. For example, perhaps you don’t like the way James Coyne sometimes calls out people he disagrees with:

Or maybe you don’t appreciate Dan Gilbert describing a large group of researchers with little in common except their efforts to replicate one or more studies as “shameless little bullies”:

I don’t doubt that statements like these can and do offend some people, and I think people who are offended should certainly feel free to publicly raise their concerns (ideally by directly responding to the authors of such remarks). Still, such cases are the exception, not the norm, and academic psychologists should appreciate better than most people the dangers of over-generalizing from individual cases. Nobody should labor under any misapprehension that it’s possible to have a field made up of thousands of researchers all going about their daily business without some small subset of people publicly being assholes to one another. Achieving zero instances of bad behavior cannot be a sane goal for our field (or any other field). When Dan Gilbert called replicators “second-stringers” and “shameless little bullies,” it did not follow that all social psychologists above the age of 45 are reactionary jackasses. For that matter, it didn’t even follow that Gilbert is a jerk. The correct attributions in such cases–until such time as our list of notable examples grows many times larger than it presently is–are that (a) reasonable people sometimes say unreasonable things they later regret, or (b) some people are just not reasonable, and are best ignored. There is no reason to invent a general tone problem where none exists.

The other main argument for the existence of a “tone” problem—and one that’s prominently on display in Fiske’s op-ed—is the gossipy everyone-knows-this-stuff-is-happening kind of argument. You could be excused for reading Fiske’s op-ed and coming away thinking that verbal abuse is a rampant problem in psychology. Consider just one paragraph (but the rest of it reads much the same):

Only what’s crashing are people. These unmoderated attacks create collateral damage to targets’ careers and well being, with no accountability for the bullies. Our colleagues at all career stages are leaving the field because of the sheer adversarial viciousness. I have heard from graduate students opting out of academia, assistant professors afraid to come up for tenure, mid-career people wondering how to protect their labs, and senior faculty retiring early, all because of methodological terrorism. I am not naming names because ad hominem smear tactics are already damaging our field. Instead, I am describing a dangerous minority trend that has an outsized impact and a chilling effect on scientific discourse.

I will be the first to admit that it sounds very ominous, all this talk of people crashing, unmoderated attacks with no accountability, and people leaving the field. But before you panic, you might want to consider an alternative paragraph that, at least from where I’m sitting, Fiske could just as easily have written:

Only what’s crashing are people. The proliferation of flashy, statistically incompetent findings creates collateral damage to targets’ careers and well being, with no accountability for the people who produce such dreck. Our colleagues at all career stages are leaving the field due to the sheer atrocity of its standards. I have heard from graduate students opting out of academia, assistant professors suffering from depression, mid-career people wondering how to sustain their research, and senior faculty retiring early, all because of their dismay at common methodological practices. I am not naming names because ad hominem smear tactics are already damaging our field. Instead, I am describing a dangerous trend that has an outsized impact and a chilling effect on scientific progress.

Or if you don’t like that one, maybe this one is more your speed:

Only what’s crashing are our students. These unmoderated attacks on students by their faculty advisors create collateral damage to our students, with no accountability for the bullies. Our students at all stages of graduate school are leaving the field because of the sheer adversarial viciousness. I have heard from graduate students who work 90-hour weeks, are afraid to have children at this stage of their careers, or have fled grad school, all out of fear of being terrorized by their advisors. I am not naming names because ad hominem smear tactics are already damaging our field. Instead, I am describing a dangerous trend that has an outsized impact and a chilling effect on scientific progress.

If you don’t like that one either, feel free to crib the general structure and play fill in the blank with your favorite issue. It could be low salaries, unreasonable publication expectations, or excessively high teaching loads; whatever you like. The formula is simple: first, you find a few people with (perfectly legitimate) concerns about some aspect of their professional environment; then you just have to (1) recount those stories in horrified tones, (2) leave out any mention of exactly how many people you’re talking about, (3) provide no concrete details that would allow anyone to see any other side to the story, and (4) not-so-subtly imply that all hell will break loose if this problem isn’t addressed some time real soon.

Note that what makes Fiske’s description unproductive and incendiary here is not that we have any reason to doubt the existence of the (anonymous) cases she alludes to. I have no doubt that Fiske does in fact hear regularly from students who have decided to leave academia because they feel unfairly targeted. But the thing is, it’s also an indisputable fact that many (in absolute terms) students leave academia because they have trouble getting along with their advisors, because they’re fed up with the low methodological standards in the field, or because they don’t like the long, unstructured hours that science requires.

The problem is not that Fiske is being untruthful; it’s that she’s short-circuiting the typical process of data- and reason-based argument by throwing lots of colorful anecdotes and emotional appeals at us. No indication is provided in her piece—or in my creative adaptations—as to whether the scenarios described are at all typical. How often, we should be asking ourselves, does it actually happen that people opt out of academia, or avoid seeking tenure, because of legitimate concerns about being unfairly criticized by their colleagues? How often do people leave the field because our standards are so terrible? Just how many psychology faculty are really such terrible advisors that their students regularly quit? If the answer to all of these questions is “extremely rarely”–or if there is reason to believe that in many cases, the story is not nearly as simple as the way Fiske is making it sound–then we don’t have a systematic problem that deserves our collective attention; at worst, we have isolated cases of people behaving badly. Unfortunately, the latter is a malady that universally afflicts every large group or organization, and as far as I know, there is no known cure.

From where I’m sitting, there is no evidence of an epidemic of interpersonal cruelty in psychology. There has undeniably been a rapid increase in open, critical commentary online; but as Chris Chambers, Andrew Gelman, and others have noted, this is much better understood as a welcome democratization of scientific discourse that levels the playing field and devalues the role of (hard-earned) status than some kind of verbal war to the pain between rival psychological ideologies.

2. Three reasons why complaining about tone is a waste of time

Suppose you disagree with my argument above (which is totally cool—please let me know why in the comments below!) and insist that there clearly is a problem with the tone of our discourse. What then? Well, in that case, I would still respectfully suggest that if your plan for dealing with this problem is to complain about it in general terms, the way Fiske does—meaning, without ever pointing to specific examples or explaining exactly what you mean by “critiques of such personal ferocity” or “ad hominem smear tactics”—then you’re probably just wasting your time. Actually, it’s worse than that: not only are you wasting your own time, but you’re probably also going to end up pouring more fuel on the very fire you claim to be trying to put out (and indeed, this is exactly what Fiske’s op-ed seems to have accomplished).

I think there are at least three good reasons to believe that spending one’s time arguing over tone in abstract terms is a generally bad idea. Since I appear to have nothing but time, and you appear to still be reading this, I’ll discuss each of them in great gory detail.

The engine-on-fire view of science

First, unlike in many other domains of life, in science, the validity or truth value of a particular viewpoint is independent of the tone with which that viewpoint is being expressed. We can perhaps distinguish between two ways of thinking about what it means to do science. One approach is what we might call the negotiation model of science. On this model, when two people disagree over some substantive scientific issue, what they’re doing is trying to find a compromise position that’s palatable to both parties. If you say your finding is robust, and I say it’s totally p-hacked, then our goal is to iterate until we end up in a position that we both find acceptable. This doesn’t necessarily mean that the position we end up with must be an intermediate position (e.g., “okay, you only p-hacked a tiny bit”); it’s possible that I’ll end up entirely withdrawing my criticism, or that you’ll admit to grave error and retract your study. The point is just that the goal is, at least implicitly, to arrive at some consensual agreement between parties regarding our original disagreement.

If one views science through this kind of negotiation lens, concerns about tone make perfect sense. After all, in almost any other context when you find yourself negotiating with someone, it’s a generally bad idea to start calling them names or insulting their mother. If you’re hawking your goods at a market, it’s probably safe to assume that every prospective buyer has other options–they can buy whatever it is they need from some other place, and they don’t have to negotiate specifically with you if they don’t like the way you talk to them. So you watch what you say. And if everyone manages to get along without hurling insults, it’s possible you might even successfully close a deal, and go home one rug lighter and a few Euros richer.

Unfortunately, the negotiation model isn’t a good way to think about science, because in science, the validity of one’s views does not change in any way depending on whether one is dispositionally friendly, or perpetually acts like a raging asshole. A better way to think about science is in terms of what we might call, with great nuance and sophistication, the “engine-on-fire” model. This model can be understood as follows. Suppose you get hungry while driving a long distance, and pull into a convenience store to buy some snacks. Just as you’re opening the door to the store, some guy yells out behind you, “hey, asshole, your engine’s on fire!” He then continues to stand around and berate you while you call for emergency services and frantically run around looking for a fire extinguisher–all without ever lifting a finger to help you.

Two points about this story should be obvious. First, the guy who alerted you to your burning engine is very likely a raging asshole. And second, the fact that he’s a raging asshole doesn’t absolve you in any way from taking steps to put out your flaming engine. It may absolve you from saying thank you to him after the fact, but his unpleasant demeanor unfortunately doesn’t mean you can just choose to look the other way out of spite, and calmly head inside to buy your teriyaki beef jerky as the flames outside engulf your vehicle.

For better or worse, scientific disagreements are more like the engine-on-fire scenario than the negotiation scenario. Superficially, it may seem that two people with a scientific disagreement are in a process of negotiation. But a crucial difference is that if one person inexplicably decides to start yelling at the other–even as they continue to toss out methodological or theoretical criticisms (“only a buffoon of a scientist could fail to model stimulus as a random factor in this design!”)–their criticisms don’t become any less true in virtue of their tone. This doesn’t mean that tone is irrelevant and should be ignored, of course; if a critic calls you names while criticizing your work, it’s perfectly reasonable for you to object to the tone they’re using, and ask that they avoid personal attacks. Unfortunately, you can’t compel them to be nice to you, and the fact remains that if your critic decides to keep yelling at you, you still have a professional obligation to address the substance of their arguments, no matter how repellent you find their tone. If you don’t respond at all–either by explaining why the concern is invalid, or by adjusting your methodological procedures in some way–then there are now two scientific assholes in the world.

Distinguishing a bad case of the jerks from a bad case of the feels isn’t always easy

Much of the discussion over tone thus far has taken, as its starting point, people’s hurt feelings. Feelings deserve to be taken seriously; scientists are human beings, and the fact that the merit of a scientific argument is independent of the tone used to convey it doesn’t mean we should run roughshod over people’s emotions. The important point to note, though, is that the opposite point also holds: the fact that someone might be upset by someone else’s conduct doesn’t automatically mean that the other party is under any obligation–or even expectation–to change their behavior. Sometimes people are upset for understandable reasons that nevertheless do not imply that anyone else did anything wrong.

Daniel Lakens recently pointed this problem out in a nice blog post. The fundamental point is that it’s often impossible for scientists to cleanly separate substantive intellectual issues from personal reputation and ego, because it’s simply a fact that one’s intellectual output is, to varying extents, a reflection of one’s abilities as a scientist. Meaning, if I consistently put out work that’s heavily criticized by other researchers, there is a point at which that criticism does in fact begin to impugn my general ability as a scientist–even if the criticism is completely legitimate, impersonal, and never strays from substantive discussion of the intellectual issues.

Examples of this aren’t hard to find in psychology. To take just one widely-cited example: among the best-replicated findings in behavioral genetics (and indeed, all of psychology) is the finding that most traits show high heritability (typically on the order of 50%) and little influence of shared environment (typically close to 0%). In other words, an enormous amount of evidence suggests that parents have minimal influence on how their children will eventually turn out, independently of the genes they pass on. Given such knowledge, the scientifically honest thing to do, it would seem, is to assume that most child-parent behavioral correlations are largely driven by heritable factors rather than by parenting. Nevertheless, a large fraction of the developmental literature consists of researchers conducting purely correlational studies and drawing strong conclusions about the causal influence of parenting on children’s behavior on the basis of observed child-parent correlations.
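
To see how this confound works mechanically, here’s a toy simulation (a sketch in Python, with made-up numbers) of a trait that is 50% heritable, has no shared-environment component, and receives no causal influence from parenting at all; a hypothetical “parenting warmth” measure that merely tracks the parent’s own phenotype still ends up solidly correlated with the child’s outcome:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
h2 = 0.5  # assumed heritability; shared environment assumed to be zero

# Parents' additive genetic values and phenotypes (standardized units)
a_mom = rng.normal(0, np.sqrt(h2), n)
a_dad = rng.normal(0, np.sqrt(h2), n)
p_mom = a_mom + rng.normal(0, np.sqrt(1 - h2), n)

# Child inherits the mid-parent genetic value plus segregation noise;
# the child's environment is independent of the parents (no causal parenting path)
a_child = 0.5 * (a_mom + a_dad) + rng.normal(0, np.sqrt(h2 / 2), n)
p_child = a_child + rng.normal(0, np.sqrt(1 - h2), n)

# A hypothetical "parenting warmth" measure that simply reflects the mother's
# own phenotype, with zero causal effect on the child
parenting = 0.8 * p_mom + rng.normal(0, 0.6, n)

# Correlation comes out around .20 even though parenting does nothing here
print(np.corrcoef(parenting, p_child)[0, 1])
```

A purely correlational design has no way of telling this scenario apart from a genuine parenting effect, which is precisely the problem.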

If you think I’m exaggerating, consider the latest issue of Psychological Science, where we find a report of a purely longitudinal study (no randomized experiment, and no behavioral genetic component) that claims to find evidence of “a positive link between more nurturing family environments in childhood and greater security of attachment to spouses more than 60 years later.” The findings, we’re told in the abstract, “…underscore the far-reaching influence of childhood environment on well-being in adulthood.” The fact that 50 years of behavioral genetics studies have conclusively demonstrated that all, or nearly all, of this purported parenting influence is actually accounted for by genetic factors does not seem to deter the authors. The terms “heritable” or “genetic” do not show up anywhere in the article, and no consideration at all is given to the possibility that the putative effect of warm parental environment is at least partly (and quite possibly wholly) spurious. And there are literally thousands of other papers just like this one in the developmental literature–many of them continually published in some of our most prestigious journals.

Now, an important question arises: how is a behavioral geneticist supposed to professionally interact with a developmental scientist who appears to willfully ignore the demonstrably small influence of parenting, even after it is repeatedly pointed out to him? Is the geneticist supposed to simply smile and nod at the developmentalist and say, “that’s nice, you’re probably right about how important attachment styles are, because after all, you’re a nice person to talk to, and I want to keep inviting you to my dinner parties”? Or should she instead point out—repeatedly, if need be—the critical flaw in purely correlational designs that precludes any serious causal conclusions about parenting? And if she does the latter—always in a perfectly civil tone, mind you—how can that sentiment possibly be expressed in a way that both (a) is taken seriously enough by the target of criticism to effect a meaningful change in behavior, and (b) doesn’t seriously injure the target’s feelings?

This example highlights two important points, I think. First, when we’re being criticized, it can be very difficult to determine whether our critics are being unreasonable jerks, or are instead quite calmly saying things that we just don’t want to hear. As such, it’s a good idea to give our critics the benefit of the doubt, and assume they have fundamentally good intentions, even if our gut response is to retaliate as if they’re trying to cast our firstborn child into a giant lake of fire.

Second, unfortunate as it may be, being a nice person and being a good scientist are often in fundamental tension with one another–and virtually all scientists are frequently forced to choose which of the two they want to prioritize. I’m not saying you can’t be both a nice person and a good scientist on average. Of course you can. I’m just saying that there are a huge number of individual situations in which you can’t be both at the same time. If you ever find yourself at a talk given by one of the authors of the Psychological Science paper I mention above, you will have a choice between (a) saying nothing to the speaker during the question period (a “nice” action that hurts nobody’s feelings, but impedes scientific progress), and (b) pointing out that the chief conclusion expressed during the talk simply does not follow from any of the evidence presented (a “mean” action that will probably hurt the speaker’s feelings, but also serves to bring a critical scientific flaw to the attention of other scientists in the audience).

Now, one could potentially mount a reasonable argument in favor of being either a nice person, or a good scientist. I’m not going to argue that the appropriate thing to do is always to put science ahead of people’s feelings. Sometimes there can be good reasons to privilege the latter. But I don’t think we should pretend that the tension between good science and good personal relationships doesn’t exist. My own view, for what it’s worth, is that people who want to do science for a living should accept that they are going to be regularly and frequently criticized, and that hurt feelings and wounded egos are part and parcel of being cognitively limited agents with deep emotions who spend their time trying to understand something incredibly difficult. This doesn’t mean that it’s okay to yell at people or call them idiots in public–it isn’t, and we should work hard collectively to prevent such behavior. But it does mean that at some point in one’s scientific career–and probably at many, many points–one may have the distinctly unpleasant experience of another scientist saying “I think the kind of work you do is fundamentally not capable of answering the questions you’re asking,” or, “there’s a critical flaw in your entire research program.” In such cases, it’s understandable if one’s feelings are hurt. But hurt feelings don’t in any way excuse one from engaging seriously with the content of the criticism. Listening to people tell us we’re wrong is part of the mantle we assume when we decide to become scientists; if we only want to talk to other people when they agree with us, there are plenty of other good ways we can spend our lives.

Who’s actually listening?

The last reason that complaining about the general tone of discourse seems inadvisable is that it’s not clear who’s actually listening. I mean, obviously plenty of people are watching the current controversy unfold in the “hold on, let me get some popcorn” sense. But the real question is, who do we think is going to read Fiske’s commentary, or any other commentary like it, and think, “you know what–I see now that I’ve been a total jerk until now, and I’m going to stop”? I suspect that if we were to catalogue all the cases that Fiske thinks of as instances of “ad hominem smear tactics” or “public shaming and blaming”, and then ask the perpetrators for their side of the story, we would probably get a very different take on things. I imagine that in the vast majority of cases, what people like Fiske see as behavior that’s completely beyond the pale would be seen by the alleged perpetrators as harsh but perfectly reasonable criticism–and apologies or promises to behave better in future would probably not flow very freely.

Note that I’m emphatically not suggesting that the actions in question are always defensible. I’m not passing any judgment on anyone’s behavior at all. I have no trouble believing that in some of the cases Fiske alludes to, there are probably legitimate and serious causes for concern. But the problem is, I see no reason to think that in cases where someone really is being an asshole, they’re likely to stop being an asshole just because Fiske wrote an op-ed complaining about tone in general terms. For example, I personally don’t think Andrew Gelman’s criticism of Cuddy, Norton, or Fiske has been at all inappropriate; but supposing you do think it’s inappropriate, do you really think Gelman is going to stop vigorously criticizing research he disagrees with just because Fiske wrote a column calling for civility?

We therefore find ourselves in a rather unfortunate situation: Fiske’s appeal is likely to elicit both heartfelt nods of approval from anyone who feels they’ve ever been personally attacked by a “methodological terrorist”, and shrieks of indignation and moral outrage from anyone who feels Fiske is mistaking their legitimate criticism for personal abuse. What it’s not likely to elicit much of is serious self-reflection or change in behavior—if for no other reason than that it doesn’t describe any behavior in sufficient detail that anyone could actually think, “oh, yes, I see how that could be perceived as a personal attack.” In trying to avoid “damaging our field” by naming names, Fiske has, ironically, ended up writing a deeply divisive piece that appears to have only fanned the flames. I don’t think this is an accident; it seems to me like the inevitable fate of any general call for civility of this kind that fails to actually define or give examples of the behavior that is supposed to be so offensive.

The moral of the story is, if you’re going to complain about “critiques of such personal ferocity and relentless frequency that they resemble a denial-of-service attack” (and you absolutely should, if you think you have a legitimate case!), then you need to point to concrete behaviors that people can consider, evaluate, and learn from, and not just throw out vague allusions to “public shaming and blaming”, “ignoring ethical rules of conduct”, and “attacking the person and not the work”.

3. Fear of criticism is important—and healthy

Accusations of actual bullying are not the only concern raised by Fiske and other traditionalists. One of the other recurring themes that have come up in various commentaries on the tone of our current discourse is a fear of future criticism–and in particular, of being unfairly “targeted” for attack. In her column, Fiske writes that targets “often seem to be chosen for scientifically irrelevant reasons: their contrary opinions, professional prominence, or career-stage vulnerability.” On its face, this concern seems reasonable: surely it would be a bit unseemly for researchers to go running around gunning for one another purely to satisfy their petty personal vendettas. Science is supposed to be about the pursuit of truth, not vengeance!

Unfortunately, there is, so far as I can see, no possible way to enforce an injunction against pettiness or malicious intent. Nor should we want to try, because that would require a rather active form of thought policing. After all, who gets to decide what was in my head when I set out to replicate someone else’s study? Do we really want editors or reviewers passing judgment on whether an author’s motives for conducting a study were pure–and using that as a basis to discount the actual findings reported by the study? Does that really seem to Fiske like a good way to improve the tone of scientific discourse?

For better or worse, researchers do not–and cannot–have any right not to fear being “targeted” by other scientists–no matter what the motives in question may be. To the contrary, I would argue that a healthy fear of others’ (possibly motivated) negative evaluations is a largely beneficial influence on the quality of our science. Personally, I feel a not-insubstantial amount of fear almost any time I contemplate the way something I’ve written will be received by others (including these very words–as I’m writing them!). I frequently ask myself what I myself would say if I were reading a particular sentence or paragraph in someone else’s paper. And if the answer is “I would criticize it, for the following reasons…”, then I change or remove the offending statement(s) until I have no further criticisms. I have no doubt that it would do great things for my productivity if I allowed myself to publish papers as if they were only going to be read by friendly, well-intentioned colleagues. But then the quality of my papers would also decrease considerably. So instead, I try to write papers as if I expect them to be read by a death panel with a 90% kill quota. It admittedly makes writing less fun, but I also think it makes the end product much better. (The same principle also applies when seeking critical feedback on one’s work from others: if you only ever ask friendly, pleasant collaborators for their opinion on your papers, you shouldn’t be surprised if anonymous reviewers who have no reason to pull their punches later take a somewhat dimmer view.)

4. Fairness is in the eye of the beholder

Another common target of appeal in arguments about tone is fairness. We find fairness appeals implicitly in Fiske’s op-ed (presumably it’s a bad thing if some people switch careers because of fear of being bullied), and explicitly in a number of other commentaries. The most common appeal is to the negative career consequences of being (allegedly) unfairly criticized or bullied. The criticism doesn’t just bear on one’s scientific findings (goes the argument); it also makes it less likely that one will secure a tenure-track position, promotion, raise, or speaking invitations. Simone Schnall went so far as to suggest that the public criticism surrounding a well-publicized failure to replicate one of her studies made her feel like “a criminal suspect who has no right to a defense and there is no way to win.”

Now, I’m not going to try to pretend that Fiske, Schnall, and others are wrong about the general conclusion they draw. I see no reason to deny Schnall’s premise that her career has suffered as a result of the replication failure (though I would also argue that the bulk of that damage is likely attributable to the way she chose to respond to that replication failure, rather than to the actual finding itself). But the critical point here is, the fact that Schnall and others have suffered as a result of others’ replication failures and methodological criticisms is not in and of itself any kind of argument against those replication efforts and criticisms. No researcher has a right to lead a successful career untroubled and unencumbered by any serious questioning of their findings. Nor do early-career researchers like Alec Beall, whose paper suggesting that fertile women are more likely to wear red shirts was severely criticized by Andrew Gelman and others. It is lamentably true that incisive public criticism may injure the reputation and job prospects of those whose work has been criticized. And it’s also true that this can be quite unfair, in the sense that there is generally no particular reason why these particular people should be criticized and suffer for it, while other people with very similar bodies of work go unscathed, and secure plum jobs or promotions.

But here’s the thing: what doesn’t seem fair at the level of one individual is often perfectly fair–or at least, unavoidable–at the level of an entire community. As soon as one zooms out from any one individual, and instead surveys the field of psychology as a whole, it becomes clear that the job and reputation markets are, to a first approximation, a zero-sum game. As Gelman and many other people have noted, for every person who doesn’t get a job because their paper was criticized by a “replicator”, there could be three other candidates who didn’t get jobs because their much more methodologically rigorous work took too long to publish and/or couldn’t stack up in flashiness to the PR-grabbing work that did win the job lottery. At an individual level, neither of these outcomes is “fair”. But then, very little in the world of professional success–in any field–is fair; almost every major professional outcome, good or bad, is influenced by an enormous amount of luck, and I would argue that it is delusional to pretend otherwise.

At root, I think the question we should ask ourselves, when something good or bad happens, is not: is it fair that I got treated [better|worse] than the way that other person over there was treated? Instead, it should be: does the distribution of individual outcomes we’re seeing align well with what maximizes the benefit to our community as a whole? Personally, I find it very difficult to see trenchant public criticism of work that one perceives as sub-par as a bad thing–even as I recognize that it may seem deeply unfair to the people whose work is the target of that criticism. The reason for this is that an obvious consequence of an increasing norm towards open, public criticism of people’s work is that the quality of our work will, collectively, improve. There should be no doubt that this shift will entail a redistribution of resources: the winners and losers under the new norm will be different from the winners and losers under the old norm. But that observation provides no basis for clinging to the old norm. Researchers who don’t like where things are currently headed cannot simply throw out complaints about being “unfairly targeted” by critics; instead, they need to articulate principled arguments for why a norm of open, public scientific criticism would be bad for science as a whole–and not just bad for them personally.

5. Everyone but me is biased!

The same logic that applies to complaints about being unfairly targeted also applies, I think, to complaints about critics’ nefarious motives or unconscious biases. To her credit, Fiske largely avoids imputing negative intent to her perceived adversaries–even as she calls them all kinds of fun names. Other commentators, however, have been less restrained–for example, suggesting that “there’s a lot of stuff going on where there’s now people making their careers out of trying to take down other people’s careers”, or that replicators “seem bent on disproving other researchers’ results by failing to replicate”. I find these kinds of statements uncompelling and, frankly, unseemly. The reason they’re unseemly is not that they’re wrong. Actually, they’re probably right. I don’t doubt that, despite what many reformers say, some of them are, at least some of the time, indeed motivated by personal grudges, a desire to bring down colleagues of whom they’re envious, and so on and so forth.

But the thing is, those motives are completely irrelevant to the evaluation of the studies and critiques that these people produce. The very obvious reason why the presence of bias on the part of a critic cannot be grounds to discount a study is that critics are not the only people with biases. Indeed, applying such a standard uniformly would mean that nobody’s finding should ever be taken seriously. Let’s consider just a few of the incentives that could lead a researcher conducting novel research, and dreaming of publishing their findings in the hallowed pages of, say, Psychological Science, to cut a few corners and end up producing some less-than-reliable findings:

  • Increased productivity: It’s less work to collect small convenience samples than large, representative ones.
  • More compelling results: Statistically significant results generated in small samples are typically more impressive-looking than ones obtained from very large samples, due to sampling error and selection bias (see the simulation just after this list).
  • Simple stories: The more one probes a particular finding, the greater the likelihood that one will identify some problem that calls the validity of the results into question, or adds nuance and complexity to an otherwise simple story. And “mixed” findings are harder to publish.
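
As a rough illustration of that second point, here’s a small simulation (the true effect size and sample sizes are assumed for illustration; nothing here comes from any real study) showing that studies which clear the p < .05 bar with small samples systematically overestimate the effect, while large-sample studies that clear it do not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d = 0.2      # assumed (small) true effect, in standardized units
n_sims = 20_000

for n in (20, 200):               # per-group sample size
    sig_estimates = []
    for _ in range(n_sims):
        treat = rng.normal(true_d, 1, n)
        control = rng.normal(0, 1, n)
        t, p = stats.ttest_ind(treat, control)
        if p < .05 and t > 0:     # a "publishable" result in the expected direction
            sig_estimates.append(treat.mean() - control.mean())
    # n = 20 yields an average estimate around 0.7; n = 200 stays near 0.25
    print(n, round(np.mean(sig_estimates), 2))
```

The small-n “winners” overstate the effect several-fold, which is exactly what makes them look so striking.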

All of these benefits, of course, feed directly into better prospects for fame, fortune, jobs, and promotions. So the idea that a finding published in one of our journals should be considered bias-free because it happened to come first, while a subsequent criticism or replication of that finding should be discounted because of personal motives or other biases is, frankly, delusional. Biases are everywhere; everyone has them. While this doesn’t mean that we should ignore them, it does mean that we should either (a) call all biases out equally–which is generally impossible, or at the very least extremely impractical–or (b) accept that doing so is not productive, and that the best way to eliminate bias over the long term is to pit everyone’s biases against each other and let logical argument and empirical data decide who’s right. Put differently, if you’re going to complain that Jane Doe is clearly motivated to destroy your cherished finding in order to make a name for herself, you should probably preface such an accusation with the admission that you obviously had plenty of motivation to cut corners when you produced the finding in the first place, since you knew it would help you make a name for yourself. Asymmetric appeals that require one to believe that bias exists in only one group of people simply don’t deserve to be taken seriously.

Personally, I would suggest that we adopt a standard policy of simply not talking about other people’s motivations or biases. If you can find evidence of someone’s bias in the methods they used or the analyses they conducted, then great–you can go ahead and point out the perceived flaws. That’s just being a good scientist. But if you can’t, then what was in your (perceived) adversary’s head when she produced her findings is quite irrelevant to scientific discourse–unless you think it would be okay for your critics to discount your work on the grounds that you clearly had all kinds of incentives to cheat.

Conclusions

Uh, no. No conclusions this time–this post is already long enough as is. And anyway, I already posted all of my conclusions way back at the beginning. So you can scroll all the way up there if you want to read them again. Instead, I’m going to try to improve your mood a tiny bit (if not the tone of the debate) by leaving you with this happy little painting automagically generated by Bot Ross:


* I lied! There were no literary allusions.

what the arsenic effect means for scientific publishing

I don’t know very much about DNA (and by ‘not very much’ I sadly mean ‘next to nothing’), so when someone tells me that life as we know it generally doesn’t use arsenic to make DNA, and that it’s a big deal to find a bacterium that does, I’m willing to believe them. So too, apparently, are at least two or three reviewers for Science, which published a paper last week by a NASA group purporting to demonstrate exactly that.

Turns out the paper might have a few holes. In the last few days, the blogosphere has reached fever delirium pitch as critiques of the article have emerged from every corner; it seems like pretty much everyone with some knowledge of the science in question is unhappy about the paper. Since I’m not in any position to critique the article myself, I’ll take Carl Zimmer’s word for it in Slate yesterday:

Was this merely a case of a few isolated cranks? To find out, I reached out to a dozen experts on Monday. Almost unanimously, they think the NASA scientists have failed to make their case.  “It would be really cool if such a bug existed,” said San Diego State University’s Forest Rohwer, a microbiologist who looks for new species of bacteria and viruses in coral reefs. But, he added, “none of the arguments are very convincing on their own.” That was about as positive as the critics could get. “This paper should not have been published,” said Shelley Copley of the University of Colorado.

Zimmer then follows his Slate piece up with a blog post today in which he provides 13 experts’ unadulterated comments. While there are one or two (somewhat) positive reviews, the consensus clearly seems to be that the Science paper is (very) bad science.

Of course, scientists (yes, even Science reviewers) do occasionally make mistakes, so if we’re being charitable about it, we might chalk it up to human error (though some of the critiques suggest that these are elementary problems that could have been very easily addressed, so it’s possible there’s some disingenuousness involved). But what many bloggers (1, 2, 3, etc.) have found particularly inexcusable is the way NASA and the research team have handled the criticism. Zimmer again, in Slate:

I asked two of the authors of the study if they wanted to respond to the criticism of their paper. Both politely declined by email.

“We cannot indiscriminately wade into a media forum for debate at this time,” declared senior author Ronald Oremland of the U.S. Geological Survey. “If we are wrong, then other scientists should be motivated to reproduce our findings. If we are right (and I am strongly convinced that we are) our competitors will agree and help to advance our understanding of this phenomenon. I am eager for them to do so.”

“Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated,” wrote Felisa Wolfe-Simon of the NASA Astrobiology Institute. “The items you are presenting do not represent the proper way to engage in a scientific discourse and we will not respond in this manner.”

A NASA spokesperson basically reiterated this point of view, indicating that NASA scientists weren’t going to respond to criticism of their work unless that criticism appeared in, you know, a respectable, peer-reviewed outlet. (Fortunately, at least one of the critics already has a draft letter to Science up on her blog.)

I don’t think it’s surprising that people who spend much of their free time blogging about science, and think it’s important to discuss scientific issues in a public venue, generally aren’t going to like being told that science blogging isn’t a legitimate form of scientific discourse. Especially considering that the critics here aren’t laypeople without scientific training; they’re well-respected scientists with areas of expertise that are directly relevant to the paper. In this case, dismissing trenchant criticism because it’s on the web rather than in a peer-reviewed journal seems kind of like telling someone who’s screaming at you that your house is on fire that you’re not going to listen to them until they adopt a more polite tone. It just seems counterproductive.

That said, I personally don’t think we should take the NASA team’s statements at face value. I very much doubt that what the NASA researchers are saying really reflect any deep philosophical view about the role of blogs in scientific discourse; it’s much more likely that they’re simply trying to buy some time while they figure out how to respond. On the face of it, they have a choice between two lousy options: either ignore the criticism entirely, which would be antithetical to the scientific process and would look very bad, or address it head-on–which, judging by the vociferousness and near-unanimity of the commentators, is probably going to be a losing battle. Shifting the terms of the debate by insisting on responding only in a peer-reviewed venue doesn’t really change anything, but it does buy the authors two or three weeks. And two or three weeks is worth like, forty attentional cycles in the blogosphere.

Mind you, I’m not saying we should sympathize with the NASA researchers just because they’re in a tough position. I think one of the main reasons the story’s attracted so much attention is precisely because people see it as a case of justice being served. The NASA team called a major press conference ahead of the paper’s publication, published its results in one of the world’s most prestigious science journals, and yet apparently failed to run relatively basic experimental controls in support of its conclusions. If the critics are to be believed, the NASA researchers are either disingenuous or incompetent; either way, we shouldn’t feel sorry for them.

What I do think this episode shows is that the rules of scientific publishing have fundamentally changed in the last few years–and largely for the better. I haven’t been doing science for very long, but even in the halcyon days of 2003, when I started graduate school, science blogging was practically nonexistent, and the main way you’d find out what other people thought about an influential new paper was by talking to people you knew at conferences (which could take several months) or waiting for critiques or replication failures to emerge in other peer-reviewed journals (which could take years). That kind of delay between publication and evaluation is disastrous for science, because in the time it takes for a consensus to emerge that a paper is no good, several research teams might have already started trying to replicate and extend the reported findings, and several dozen other researchers might have uncritically cited the paper peripherally in their own work. This delay is probably why, as John Ioannidis’ work so elegantly demonstrates, major studies published in high-impact journals tend to exert a disproportionate influence on the literature long after they’ve been resoundingly discredited.

The Arsenic Effect, if we can call it that, provides a nice illustration of the impact of new media on scientific communication. It’s a safe bet that there are now very few people who do anything even vaguely related to the NASA team’s research who haven’t been made aware that the reported findings are controversial. Which means that the process of attempting to replicate (or falsify) the findings will proceed much more quickly than it might have ten or twenty years ago, and there probably won’t be very many people who cite the Science paper as compelling evidence of terrestrial arsenic-based life. Perhaps more importantly, as researchers get used to the idea that their high-profile work is going to be instantly evaluated by thousands of pairs of highly trained eyes, any of which might be attached to a highly prolific pair of typing hands, there will be an increasingly strong disincentive to be careless. That isn’t to say that bad science will disappear, of course; just that, in cases where the badness reflects a pressure to tell a good story at all costs, we’ll probably see less of it.

scientists aren’t dumb; statistics is hard

There’s a feature article in the new issue of Science News on the failure of science “to face the shortcomings of statistics”. The author, Tom Siegfried, argues that many scientific results shouldn’t be believed because they depend on faulty statistical practices:

Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.

I have mixed feelings about the article. It’s hard to disagree with the basic idea that many scientific results are the product of statistical malpractice and/or misfortune. And Siegfried generally provides lucid explanations of some common statistical pitfalls when he sticks to the descriptive side of things. For instance, he gives nice accounts of Bayesian inference, of the multiple comparisons problem, and of the distinction between statistical significance and clinical/practical significance. And he nicely articulates what’s wrong with one of the most common (mis)interpretations of p values:

Correctly phrased, experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.“

So as a laundry list of common statistical pitfalls, it works quite nicely.
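
To make that last point concrete, here’s a back-of-the-envelope calculation (with entirely made-up numbers) of why a significant result is a long way from being “95 percent certain” to be real:

```python
# All three numbers below are assumptions chosen purely for illustration.
base_rate = 0.10   # fraction of tested hypotheses that are actually true
power = 0.50       # probability of detecting a true effect at p < .05
alpha = 0.05       # false-positive rate when the null is true

true_positives = base_rate * power
false_positives = (1 - base_rate) * alpha

# Probability that a significant result reflects a real effect: ~0.53, not 0.95
print(true_positives / (true_positives + false_positives))
```

Nothing exotic is going on here (it’s just Bayes’ rule), but it shows that the 5 percent error rate is a statement about what happens when there is no effect, not the probability that any given significant finding is true.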

What I don’t really like about the article is that it seems to lay the blame squarely on the use of statistics to do science, rather than the way statistical analysis tends to be performed. That’s to say, a lay person reading the article could well come away with the impression that the very problem with science is that it relies on statistics. As opposed to the much more reasonable conclusion that science is hard, and statistics is hard, and ensuring that your work sits at the intersection of good science and good statistical practice is even harder. Siegfried all but implies that scientists are silly to base their conclusions on statistical inference. For instance:

It’s science’s dirtiest secret: The “scientific method“ of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions.

Or:

Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.

The problem is that there isn’t really any viable alternative to the “love affair with statistics”. Presumably Siegfried doesn’t think (most) scientists ought to be doing qualitative research; so the choice isn’t between statistics and no statistics, it’s between good and bad statistics.

In that sense, the tone of a lot of the article is pretty condescending: it comes off more like Siegfried saying “boy, scientists sure are dumb” and less like the more accurate observation that doing statistics is really hard, and it’s not surprising that even very smart people mess up frequently.

What makes it worse is that Siegfried slips up on a couple of basic points himself, and says some demonstrably false things in a couple of places. For instance, he explains failures to replicate genetic findings this way:

Nowhere are the problems with statistics more blatant than in studies of genetic influences on disease. In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance.

“Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor“ for the syndrome, the researchers reported in the Journal of the American Medical Association.

How could so many studies be wrong? Because their conclusions relied on “statistical significance,“ a concept at the heart of the mathematical analysis of modern scientific experiments.

This is wrong for at least two reasons. One is that, to believe the JAMA study Siegfried is referring to, and disbelieve the results of all 85 previously reported findings, you have to accept the null hypothesis, which is one of the very same errors Siegfried is supposed to be warning us against. In fact, you have to accept the null hypothesis 85 times. In the JAMA paper, the authors are careful to note that it’s possible the actual effects were simply overstated in the original studies, and that at least some of the original findings might still hold under more restrictive conditions. The conclusion that there really is no effect whatsoever is almost never warranted, because you rarely have enough power to rule out even very small effects. But Siegfried offers no such qualifiers; instead, he happily accepts 85 null hypotheses in support of his own argument.
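
For a rough sense of how little a non-significant result actually rules out, here’s a quick simulation of the power to detect a small but real association in a study of roughly this size (the group sizes are taken from the JAMA design described above; the carrier frequencies are assumptions I’ve made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_cases, n_controls = 811, 650          # group sizes mentioned in the study
freq_cases, freq_controls = 0.22, 0.20  # assumed carrier frequencies (a small real effect)
n_sims = 5_000

hits = 0
for _ in range(n_sims):
    cases = rng.binomial(n_cases, freq_cases)        # carriers among cases
    controls = rng.binomial(n_controls, freq_controls)  # carriers among controls
    table = [[cases, n_cases - cases], [controls, n_controls - controls]]
    _, p, _, _ = stats.chi2_contingency(table)
    hits += p < .05

# Power comes out somewhere around 0.10-0.15: a single study of this size
# would usually miss a genuinely real effect of this magnitude
print(hits / n_sims)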

The other issue is that it isn’t really the reliance on statistical significance that causes replication failures; it’s usually the use of excessively liberal statistical criteria. The problem has very little to do with the hypothesis testing framework per se. To see this, consider that if researchers always used a criterion of p < .0000001 instead of the conventional p < .05, there would almost never be any replication failures (because there would almost never be any statistically significant findings, period). So the problem is not so much with the classical hypothesis testing framework as with the choices many researchers make about how to set thresholds within that framework. (That’s not to say that there aren’t any problems associated with frequentist statistics, just that this isn’t really a fair one.)

Anyway, Siegfried’s explanations of the pitfalls of statistical significance then lead him to make what has to be hands-down the silliest statement in the article:

But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.

If you take this statement at face value, you should conclude there’s no point in doing statistical analysis, period. No matter what statistical procedure you use, you’re never going to know for cross-your-heart-hope-to-die sure that your conclusions are warranted. After all, you’re always going to have the same two possibilities: either the effect is real, or it’s not (or, if you prefer to frame the problem in terms of magnitude, either the effect is about as big as you think it is, or it’s very different in size). The same exact conclusion goes through if you take a threshold of p < .001 instead of one of p < .05: the effect can still be a spurious and improbable fluke. And it also goes through if you have twelve replications instead of just one positive finding: you could still be wrong (and people have been wrong). So saying that “two possible conclusions remain” isn’t offering any deep insight; it’s utterly vacuous.

The reason scientists use a conventional threshold of p < .05 when evaluating results isn’t because we think it gives us some magical certainty into whether a finding is “real” or not; it’s because it feels like a reasonable level of confidence to shoot for when making claims about whether the null hypothesis of no effect is likely to hold or not. Now there certainly are many problems associated with the hypothesis testing framework–some of them very serious–but if you really believe that “there’s no logical basis for using a P value from a single study to draw any conclusion,” your beef isn’t actually with p values, it’s with the very underpinnings of the scientific enterprise.

Anyway, the bottom line is that Siegfried’s article is not so much bad as irresponsible. As an accessible description of some serious problems with common statistical practices, it’s actually quite good. But I guess the sense I got in reading the article was that at some point Siegfried became more interested in writing a contrarian piece about how scientists are falling down on the job than about how doing statistics well is just really hard for almost all of us (I certainly fail at it all the time!). And ironically, in the process of trying to explain just why “science fails to face the shortcomings of statistics”, Siegfried commits some of the very same errors he’s taking scientists to task for.

[UPDATE: Steve Novella says much the same thing here.]

[UPDATE 2: Andrew Gelman has a nice roundup of other comments on Siegfried’s article throughout the blogosphere.]