I hate open science

Now that I’ve got your attention: what I hate—and maybe dislike is a better term than hate—isn’t the open science community, or open science initiatives, or open science practices, or open scientists… it’s the term. I fundamentally dislike the term open science. For the last few years, I’ve deliberately tried to avoid using it. I don’t call myself an open scientist, I don’t advocate publicly for open science (per se), and when people use the term around me, I often make a point of asking them to clarify what they mean.

This isn’t just a personal idiosyncrasy of mine, in a chalk-on-chalkboard sense; I think at this point in time there are good reasons to think that continued use of the term is counterproductive, and that we should try to avoid it in most contexts. Let me explain.

It’s ambiguous

At SIPS 2019 last week (SIPS is the Society for the Improvement of Psychological Science), I had a brief chat with a British post-undergrad student who was interested in applying to graduate programs in the United States. He asked me what kind of open science community there was at my home institution (the University of Texas at Austin). When I started to reply, I realized that I actually had no idea what question the student was asking me, because I didn’t know his background well enough to provide the appropriate context. What exactly did he mean by “open science”? The term is now used so widely, and in so many different ways, that the student could plausibly have been asking me about any of the following things, either alone or in combination:

  • Reproducibility. Do people [at UT-Austin] value the ability to reproduce, computationally and/or experimentally, the scientific methods used to produce a given result? More concretely, do they conduct their analyses programmatically, rather than using GUIs? Do they practice formal version control? Are there opportunities to learn these kinds of computational skills?
  • Accessibility. Do people believe in making their scientific data, materials, results, papers, etc. publicly, freely, and easily available? Do they work hard to ensure that other scientists, funders, and the taxpaying public can easily get access to what scientists produce?
  • Incentive alignment. Are there people actively working to align individual incentives and communal incentives, so that what benefits an individual scientist also benefits the community at large? Do they pursue local policies meant to promote some of the other practices one might call part of “open science”?
  • Openness of opinion. Do people feel comfortable openly critiquing one another? Is there a culture of discussing (possibly trenchant) problems openly, without defensiveness? Do people take discussion on social media and post-publication review forums seriously?
  • Diversity. Do people value and encourage the participation in science of people from a wide variety of ethnicities, genders, skills, personalities, socioeconomic strata, etc.? Do they make efforts to welcome others into science, invest effort and resources to help them succeed, and accommodate their needs?
  • Metascience and informatics. Are people thinking about the nature of science itself, and reflecting on what it takes to promote a healthy and productive scientific enterprise? Are they developing systematic tools or procedures for better understanding the scientific process, or the work in specific scientific domains?

This is not meant to be a comprehensive list; I have no doubt there are other items one could add (e.g., transparency, collaborativeness, etc.). The point is that open science is, at this point, a very big tent. It contains people who harbor a lot of different values and engage in many different activities. While some of these values and activities may tend to co-occur within people who call themselves open scientists, many don’t. There is, for instance, no particular reason why someone interested in popularizing reproducible science methods should also be very interested in promoting diversity in science. I’m not saying there aren’t people who want to do both (of course there are); empirically, there might even be a modest positive correlation—I don’t know. But they clearly don’t have to go together, and plenty of people are far more invested in one than in the other.

Further, as in any other enterprise, if you monomaniacally push a single value hard enough, then at a certain point, tensions will arise even between values that would ordinarily co-exist peacefully if each were given only partial priority. For example, if you think that doing reproducible science well requires a non-negotiable commitment to doing all your analyses programmatically, and maintaining all your code under public version control, then you’re implicitly condoning a certain reduction in diversity within science, because you insist on having only people with a certain set of skills take part in science, and people from some backgrounds are more likely than others (at least at present) to have those skills. Conversely, if diversity in science is the thing you value most, then you need to accept that you’re effectively downgrading the importance of many of the other values listed above in the research process, because any skill or ability you might use to select or promote people in science is necessarily going to reduce (in expectation) the role of other dimensions in the selection process.

This would be a fairly banal and inconsequential observation if we lived in a world where everyone who claimed membership in the open science community shared more or less the same values. But we clearly don’t. In highlighting the ambiguity of the term open science, I’m not just saying hey, just so you know, there are a lot of different activities people call open science; I’m saying that, at this point in time, there are a few fairly distinct sub-communities of people that all identify closely with the term open science and use it prominently to describe themselves or their work, but that actually have fairly different value systems and priorities.

Basically, we’re now at the point where, when someone says they’re an open scientist, it’s hard to know what they actually mean.

It wasn’t always this way; I think ten or even five years ago, if you described yourself as an open scientist, people would have identified you primarily with the movement to open up access to scientific resources and promote greater transparency in the research process. This is still roughly the first thing you find on the Wikipedia entry for Open Science:

Open science is the movement to make scientific research (including publications, data, physical samples, and software) and its dissemination accessible to all levels of an inquiring society, amateur or professional. Open science is transparent and accessible knowledge that is shared and developed through collaborative networks. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open notebook science, and generally making it easier to publish and communicate scientific knowledge.

That was a fine definition once upon a time, and it still works well for one part of the open science community. But as a general, context-free definition, I don’t think it flies any more. Open science is now much broader than the above suggests.

It’s bad politics

You might say, okay, but so what if open science is an ambiguous term; why can’t that be resolved by just having people ask for clarification? Well, obviously, to some degree it can. My response to the SIPS student was basically a long and winding one that involved a lot of conditioning on different definitions. That’s inefficient, but hopefully the student still got the information he wanted out of it, and I can live with a bit of inefficiency.

The bigger problem, though, is that at this point in time, open science isn’t just a descriptive label for a set of activities scientists often engage in; for many people, it’s become an identity. And, whatever you think the value of open science is as an extensional label for a fairly heterogeneous set of activities, I think it makes for terrible identity politics.

There are two reasons for this. First, turning open science from a descriptive label into a full-blown identity risks turning off a lot of scientists who are either already engaged in what one might otherwise call “best practices”, or who are very receptive to learning such practices, but are more interested in getting their science done than in discussing the abstract merits of those practices or promoting their use to others. If you walk into a room and say, in the next three hours, I’m going to teach you version control, and there’s a good chance this could really help your research, probably quite a few people will be interested. If, on the other hand, you walk into the room and say, let me tell you how open science is going to revolutionize your research, and then proceed to either mention things that a sophisticated audience already knows, or blitz a naive audience with 20 different practices that you describe as all being part of open science, the reception is probably going to be frostier.

If your goal is to get people to implement good practices in their research—and I think that’s an excellent goal!—then it’s not so clear that much is gained by talking about open science as a movement, philosophy, culture, or even community (though I do think there are some advantages to the latter). It may be more effective to figure out who your audience is, what some of the low-hanging fruit are, and focus on those. Implying that there’s an all-or-none commitment—i.e., one is either an open scientist or not, and to be one, you have to buy into a whole bunch of practices and commitments—is often counterproductive.

The second problem with treating open science as a movement or identity is that the diversity of definitions and values I mentioned above almost inevitably leads to serious rifts within the broad open science community—i.e., between groups of people who would have little or no beef with one another if not for the mere fact that they all happen to identify as open scientists. If you spend any amount of time on social media following people whose biography includes the phrases “open science” or “open scientist”, you’ll probably know what I’m talking about. At a rough estimate, I’d guess that these days maybe 10 – 20% of tweets I see in my feed containing the words “open science” are part of some ongoing argument between people about what open science is, or who is and isn’t an open scientist, or what’s wrong with open science or open scientists—and not with substantive practices or applications at all.

I think it’s fair to say that most (though not all) of these arguments are, at root, about deep-seated differences in the kinds of values I mentioned earlier. People care about different things. Some people care deeply about making sure that studies can be accurately reproduced, and only secondarily or tertiarily about the diversity of the people producing those studies. Other people have the opposite priorities. Both groups of people (and there are of course many others) tend to think their particular value system properly captures what open science is (or should be) all about, and that the movement or community is being perverted or destroyed by some other group of people who, while perhaps well-intentioned (and sometimes even this modicum of charity is hard to find), just don’t have their heads screwed on quite straight.

This is not a new or special thing. Any time a large group of people with diverse values and interests find themselves all forced to sit under a single tent for a long period of time, divisions—and consequently, animosity—will eventually arise. If you’re forced to share limited resources or audience attention with a group of people who claim they fill the same role in society that you do, but who you disagree with on some important issues, odds are you’re going to experience conflict at some point.

Now, in some domains, these kinds of conflicts are truly unavoidable: the factors that introduce intra-group competition for resources, prestige, or attention are structural, and resolving them without ruining things for everyone is very difficult. In politics, for example, one’s nominal affiliation with a political party is legitimately kind of a big deal. In the United States, if a splinter group of disgruntled Republican politicians were to leave their party and start a “New Republican” party, they might achieve greater ideological purity and improve their internal social relations, but the new party’s members would also lose nearly all of their influence and power pretty much overnight. The same is, of course, true for disgruntled Democrats. The Nash equilibrium is, presently, for everyone to stay stuck in the same dysfunctional two-party system.

Open science, by contrast, doesn’t really have this problem. Or at least, it doesn’t have to have this problem. There’s an easy way out of the acrimony: people can just decide to deprecate vague, unhelpful terms like “open science” in favor of more informative and less controversial ones. I don’t think anything terrible is going to happen if someone who previously described themselves as an “open scientist” starts avoiding that term and instead opts to self-describe using more specific language. As I noted above, I speak from personal experience here (if you’re the kind of person who’s more swayed by personal anecdotes than by my ironclad, impregnable arguments). Five years ago, my talks and papers were liberally sprinkled with the term “open science”. For the last two or three years, I’ve largely avoided the term—and when I do use it, it’s often to make the same point I’m making here.

For the most part, I think I’ve succeeded in eliminating open science from my discourse in favor of more specific terms like reproducibility, transparency, diversity, etc. Which term I use depends on the context. I haven’t, so far, found myself missing the term “open”, and I don’t think I’ve lost brownie points in any club for not using it more often. I do, on the other hand, feel very confident that (a) I’ve managed to waste fewer people’s time by having to follow up vague initial statements about “open” things with more detailed clarifications, and (b) I get sucked into way fewer pointless Twitter arguments about what open science is really about (though admittedly the number is still not quite zero).

The prescription

So here’s my simple prescription for people who either identify as open scientists, or use the term on a regular basis: Every time you want to use the term open science—in your biography, talk abstracts, papers, tweets, conversation, or whatever else—pause and ask yourself if there’s another term you could substitute that would decrease ambiguity and avoid triggering never-ending terminological arguments. I’m not saying that the answer will always be yes. If you’re confident that the people you’re talking to have the same definition of open science as you, or you really do believe that nobody should ever call themselves an open scientist unless they use git, then godspeed—open science away. But I suspect that for most uses, there won’t be any such problem. In most instances, “open science” can be seamlessly replaced with something like “reproducibility”, “transparency”, “data sharing”, “being welcoming”, and so on. It’s a low-effort move, and the main effect of making the switch is that other people will have a clearer understanding of what you mean, and may be less inclined to argue with you about it.

Postscript

Some folks on Twitter were concerned that this post makes it sound as if I’m passing off prior work and ideas as my own (particularly as it relates to the role of diversity in open science). So let me explicitly state here that I don’t think any of the ideas expressed in this post are original to me in any way. I’ve heard most (if not all) expressed many times by many people in many contexts, and this post just represents my effort to distill them into a clear summary of my views.

what exactly is it that 53% of neuroscience articles fail to do?

[UPDATE: Jake Westfall points out in the comments that the paper discussed here appears to have made a pretty fundamental mistake that I then carried over to my post. I’ve updated the post accordingly.]

[UPDATE 2: the lead author has now responded and answered my initial question and some follow-up concerns.]

A new paper in Nature Neuroscience by Emmeke Aarts and colleagues argues that neuroscientists should start using hierarchical (or multilevel) models in their work in order to account for the nested structure of their data. From the abstract:

In neuroscience, experimental designs in which multiple observations are collected from a single research object (for example, multiple neurons from one animal) are common: 53% of 314 reviewed papers from five renowned journals included this type of data. These so-called ‘nested designs’ yield data that cannot be considered to be independent, and so violate the independency assumption of conventional statistical methods such as the t test. Ignoring this dependency results in a probability of incorrectly concluding that an effect is statistically significant that is far higher (up to 80%) than the nominal α level (usually set at 5%). We discuss the factors affecting the type I error rate and the statistical power in nested data, methods that accommodate dependency between observations and ways to determine the optimal study design when data are nested. Notably, optimization of experimental designs nearly always concerns collection of more truly independent observations, rather than more observations from one research object.

I don’t have any objection to the advocacy for hierarchical models; that much seems perfectly reasonable. If you have nested data, where each subject (or petri dish or animal or whatever) provides multiple samples, it’s sensible to try to account for as many systematic sources of variance as you can. That point may have been made many times before, but it never hurts to make it again.

What I do find surprising though–and frankly, have a hard time believing–is the idea that 53% of neuroscience articles are at serious risk of Type I error inflation because they fail to account for nesting. This seems to me to be what the abstract implies, yet it’s a much stronger claim that doesn’t actually follow just from the observation that virtually no studies that have reported nested data have used hierarchical models for analysis. What it also requires is for all of those studies that use “conventional” (i.e., non-hierarchical) analyses to have actively ignored the nesting structure and treated repeated measurements as if they in fact came from entirely different subjects or clusters.

To make this concrete, suppose we have a dataset made up of 400 observations, consisting of 20 subjects who each provided 10 trials in 2 different experimental conditions (i.e., 20 x 2 x 10 = 400). And suppose the thing we ultimately want to know is whether or not there’s a statistical difference in outcome between the two conditions. There are at least three ways we could set up our comparison:

  1. Ignore the grouping variable (i.e., subject) entirely, effectively giving us 200 observations in each condition. We then conduct the test as if we have 200 independent observations in each condition.
  2. Average the 10 trials in each condition within each subject first, then conduct the test on the subject means. In this case, we effectively have 20 observations in each condition (1 per subject).
  3. Explicitly include the effects of both subject and trial in our model. In this case we have 400 observations, but we’re explicitly accounting for the correlation between trials within a given subject, so that the statistical comparison of conditions effectively has somewhere between 20 and 400 “observations” (or degrees of freedom).

Now, none of these approaches is strictly “wrong”, in that there could be specific situations in which any one of them would be called for. But as a general rule, the first approach is almost never appropriate. The reason is that we typically want to draw conclusions that generalize across the cases in the higher level of the hierarchy, and don’t have any intrinsic interest in the individual trials themselves. In the above example, we’re asking whether people, on average, behave differently in the two conditions. If we treat our data as if we had 200 subjects in each condition, effectively concatenating trials across all subjects, we’re ignoring the fact that the responses acquired from each subject will tend to be correlated (i.e., Jane Doe’s behavior on Trial 2 will tend to be more similar to her own behavior on Trial 1 than to another subject’s behavior on Trial 1). So we’re pretending that we know something about 200 different individuals sampled at random from the population, when in fact we only know something about 20 different individuals. The upshot, if we use approach (1), is that we’re going to end up answering a question quite different from the one we think we’re answering. [Update: Jake Westfall points out in the comments below that we won’t necessarily inflate Type I error rate. Rather, the net effect of failing to model the nesting structure properly will depend on the relative amount of within-cluster vs. between-cluster variance. The answer we get will, however, usually deviate considerably from the answer we would get using approaches (2) or (3).]

By contrast, approaches (2) and (3) will, in most cases, produce pretty similar results. It’s true that the hierarchical approach is generally a more sensible thing to do, and will tend to provide a better estimate of the true population difference between the two conditions. However, it’s probably better to describe approach (2) as suboptimal, and not as wrong. So long as the subjects in our toy example above are in fact sampled at random, it’s pretty reasonable to assume that we have exactly 20 independent observations, and analyze our data accordingly. Our resulting estimates might not be quite as good as they could have been, but we’re unlikely to miss the mark by much.
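To make the contrast concrete, here’s a minimal simulation sketch of my own (not taken from the Aarts et al paper) that runs all three approaches on a toy dataset like the one above: 20 subjects, 2 within-subject conditions, 10 trials per subject per condition. It assumes Python with numpy, pandas, scipy, and statsmodels available; the effect size and variance values are arbitrary choices for illustration.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_trials = 20, 10                      # 20 x 2 x 10 = 400 observations
subject_effect = rng.normal(0, 1.0, n_subjects)    # subjects differ in overall level
true_condition_effect = 0.3                        # arbitrary true difference

rows = []
for s in range(n_subjects):
    for cond in (0, 1):
        y = subject_effect[s] + true_condition_effect * cond + rng.normal(0, 1.0, n_trials)
        rows += [{"subject": s, "condition": cond, "y": v} for v in y]
df = pd.DataFrame(rows)

# (1) Ignore subject entirely: treat the 200 trials per condition as independent.
_, p1 = stats.ttest_ind(df.loc[df.condition == 1, "y"], df.loc[df.condition == 0, "y"])

# (2) Average trials within each subject and condition, then test the 20 subject means.
means = df.groupby(["subject", "condition"])["y"].mean().unstack()
_, p2 = stats.ttest_rel(means[1], means[0])

# (3) Mixed-effects model: fixed effect of condition, random intercept per subject.
m3 = smf.mixedlm("y ~ condition", data=df, groups=df["subject"]).fit()

print(f"(1) trial-level t-test:   p = {p1:.3f}")
print(f"(2) subject-means t-test: p = {p2:.3f}")
print(f"(3) mixed model:          p = {m3.pvalues['condition']:.3f}")
```

In runs like this, approaches (2) and (3) tend to give very similar p-values, while approach (1) is answering a different question: its 400 “independent” data points aren’t actually independent, so its p-value can deviate substantially from the other two.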

To return to the Aarts et al paper, the key question is what exactly the authors mean when they say in their abstract that:

In neuroscience, experimental designs in which multiple observations are collected from a single research object (for example, multiple neurons from one animal) are common: 53% of 314 reviewed papers from five renowned journals included this type of data. These so-called ‘nested designs’ yield data that cannot be considered to be independent, and so violate the independency assumption of conventional statistical methods such as the t test. Ignoring this dependency results in a probability of incorrectly concluding that an effect is statistically significant that is far higher (up to 80%) than the nominal α level (usually set at 5%).

The key phrases here are the two quantitative claims: that 53% of the reviewed papers included nested data, and that ignoring the dependency can inflate the type I error rate to as much as 80%. It seems to me that the implication the reader is supposed to draw from this is that roughly 53% of the neuroscience literature is at high risk of reporting spurious results. But in reality this depends entirely on whether the authors mean that 53% of studies are modeling trial-level data but ignoring the nesting structure (as in approach 1 above), or that 53% of studies in the literature aren’t using hierarchical models, even though they may be doing nothing terribly wrong otherwise (e.g., because they’re using approach (2) above).

Unfortunately, the rest of the manuscript doesn’t really clarify the matter. Here’s the section in which the authors report how they obtained that 53% number:

To assess the prevalence of nested data and the ensuing problem of inflated type I error rate in neuroscience, we scrutinized all molecular, cellular and developmental neuroscience research articles published in five renowned journals (Science, Nature, Cell, Nature Neuroscience and every month’s first issue of Neuron) in 2012 and the first six months of 2013. Unfortunately, precise evaluation of the prevalence of nesting in the literature is hampered by incomplete reporting: not all studies report whether multiple measurements were taken from each research object and, if so, how many. Still, at least 53% of the 314 examined articles clearly concerned nested data, of which 44% specifically reported the number of observations per cluster with a minimum of five observations per cluster (that is, for robust multilevel analysis a minimum of five observations per cluster is required [11, 12]). The median number of observations per cluster, as reported in literature, was 13 (Fig. 1a), yet conventional analysis methods were used in all of these reports.

This is, as far as I can see, still ambiguous. The only additional information provided here is that 44% of those studies specifically reported the number of observations per cluster. Unfortunately, this still doesn’t tell us whether the statistical tests in those papers treated the nested observations as separate data points (i.e., counted them toward the effective degrees of freedom), or instead averaged over nested observations within each group or subject prior to analysis.

Lest this seem like a rather pedantic statistical point, I hasten to emphasize that a lot hangs on it. The potential implications for the neuroscience literature are very different under each of these two scenarios. If it is in fact true that 53% of studies are inappropriately using a “fixed-effects” model (approach 1)–which seems to me to be what the Aarts et al abstract implies–the upshot is that a good deal of neuroscience research is in very bad statistical shape, and the authors will have done the community a great service by drawing attention to the problem. On the other hand, if the vast majority of the studies in that 53% are actually doing their analyses in a perfectly reasonable–if perhaps suboptimal–way, then the Aarts et al article seems rather alarmist. It would, of course, still be true that hierarchical models should be used more widely, but the cost of failing to switch would be much lower than seems to be implied.

I’ve emailed the corresponding author to ask for a clarification. I’ll update this post if I get a reply. In the meantime, I’m interested in others’ thoughts as to the likelihood that around half of the neuroscience literature involves inappropriate reporting of fixed-effects analyses. I guess personally I would be very surprised if this were the case, though it wouldn’t be unprecedented–e.g., I gather that in the early days of neuroimaging, the SPM analysis package used a fixed-effects model by default, resulting in quite a few publications reporting grossly inflated t/z/F statistics. But that was many years ago, and in the literatures I read regularly (in psychology and cognitive neuroscience), this problem rarely arises any more. A priori, I would have expected the same to be true in cellular and molecular neuroscience.


UPDATE 04/01 (no, not an April Fool’s joke)

The lead author, Emmeke Aarts, responded to my email. Here’s her reply in full:

Thank you for your interest in our paper. As the first author of the paper, I will answer the question you send to Sophie van der Sluis. Indeed we report that 53% of the papers include nested data using conventional statistics, meaning that they did not use multilevel analysis but an analysis method that assumes independent observations like a students t-test or ANOVA.

As you also note, the data can be analyzed at two levels, at the level of the individual observations, or at the subject/animal level. Unfortunately, with the information the papers provided us, we could not extract this information for all papers. However, as described in the section ‘The prevalence of nesting in neuroscience studies’, 44% of these 53% of papers including nested data, used conventional statistics on the individual observations, with at least a mean of 5 observations per subject/animal. Another 7% of these 53% of papers including nested data used conventional statistics at the subject/animal level. So this leaves 49% unknown. Of this 49%, there is a small percentage of papers which analyzed their data at the level of individual observations, but had a mean less than 5 observations per subject/animal (I would say 10 to 20% out of the top of my head), the remaining percentage is truly unknown. Note that with a high level of dependency, using conventional statistics on nested data with 2 observations per subject/animal is already undesirable. Also note that not only analyzing nested data at the individual level is undesirable, analyzing nested data at the subject/animal level is unattractive as well, as it reduces the statistical power to detect the experimental effect of interest (see fig. 1b in the paper), in a field in which a decent level of power is already hard to achieve (e.g., Button 2013).

I think this definitively answers my original question: according to Aarts, of the 53% of studies that used nested data, at least 44% performed conventional (i.e., non-hierarchical) statistical analyses on the individual observations. (I would dispute the suggestion that this was already stated in the paper; the key phrase is “on the individual observations”, and the wording in the manuscript was much more ambiguous.) Aarts suggests that ~50% of the studies couldn’t be readily classified, so in reality that proportion could be much higher. But we can say that at least 23% of the literature surveyed (i.e., 44% of the 53%, or 0.44 × 0.53 ≈ 0.23) committed what would, in most domains, constitute a fairly serious statistical error.

I then sent Aarts another email following up on Jake Westfall’s comment (i.e., how nested vs. crossed designs were handled). She replied:

As Jake Westfall points out, it indeed depends on the design if ignoring intercept variance (so variance in the mean observation per subject/animal) leads to an inflated type I error. There are two types of designs we need to distinguish here, design type I, where the experimental variable (for example control or experimental group) does not vary within the subjects/animals but only over the subjects/animals, and design Type II, where the experimental variable does vary within the subject/animal. Only in design type I, the type I error is increased by intercept variance. As pointed out in the discussion section of the paper, the paper only focuses on design Type I (“Here we focused on the most common design, that is, data that span two levels (for example, cells in mice) and an experimental variable that does not vary within clusters (for example, in comparing cell characteristic X between mutants and wild types, all cells from one mouse have the same genotype)”), to keep this already complicated matter accessible to a broad readership. Moreover, design type I is what is most frequently seen in biological neuroscience, taking multiple observations from one animal and subsequently comparing genotypes automatically results in a type I research design.

When dealing with a research design II, it is actually the variation in effect within subject/animals that increases the type I error rate (the so-called slope variance), but I will not elaborate too much on this since it is outside the scope of this paper and a completely different story.
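To illustrate the design type I case, here’s a quick simulation sketch of my own (again, not from the paper): the experimental variable (genotype) varies only between animals, there is no true effect, and animals differ in their baseline levels. A conventional t-test on the individual cells then produces false positives well above the nominal 5%, whereas a t-test on the animal means does not. The specific numbers (10 animals, 20 cells per animal, equal between- and within-animal variance) are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_animals, n_cells, n_sims = 10, 20, 2000
false_pos_cells = false_pos_means = 0

for _ in range(n_sims):
    # Null effect of genotype; animals still differ from one another (intercept variance).
    animal_means = rng.normal(0, 1.0, n_animals)                       # between-animal SD = 1
    cells = animal_means[:, None] + rng.normal(0, 1.0, (n_animals, n_cells))
    group_a, group_b = cells[: n_animals // 2], cells[n_animals // 2:]

    # Cell-level t-test: pretends the 100 cells per group are independent observations.
    _, p_cells = stats.ttest_ind(group_a.ravel(), group_b.ravel())
    # Animal-level t-test: one mean per animal, 5 animals per group.
    _, p_means = stats.ttest_ind(group_a.mean(axis=1), group_b.mean(axis=1))

    false_pos_cells += p_cells < 0.05
    false_pos_means += p_means < 0.05

print(f"False positive rate, cell-level t-test:   {false_pos_cells / n_sims:.2f}")
print(f"False positive rate, animal-means t-test: {false_pos_means / n_sims:.2f}")
```

With these (arbitrary) settings, the cell-level test’s false positive rate comes out far above 5%, in the general direction of the “up to 80%” figure in the abstract, while the animal-means test stays close to the nominal level. The design type II case, where slope variance is what matters, would require a different simulation that I won’t attempt here.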

Again, Aarts’s explanation strikes me as straightforward and sound. So after both of these emails, here’s my (hopefully?) final take on the paper:

  • Work in molecular, cellular, and developmental neuroscience–or at least, the parts of those fields well-represented in five prominent journals–does indeed appear to suffer from some systemic statistical problems. While the proportion of studies at high risk of Type I error is smaller than the 53% suggested by Aarts et al’s abstract, the more accurate estimate (at least 23% of the literature) is still shockingly high. This doesn’t mean that a quarter or more of the literature can’t be trusted–as some of the commenters point out below, most conclusions aren’t based on just a single p value from a single analysis–but it does raise some very serious concerns. The Aarts et al paper is an important piece of work that will help improve statistical practice going forward.
  • The comments on this post, and on Twitter, have been interesting to read. There appear to be two broad camps of people who were sympathetic to my original concern about the paper. One camp consists of people who were similarly concerned about technical aspects of the paper, and in most cases were tripped up by the same confusion surrounding what the authors meant when they said 53% of studies used “conventional statistical analyses”. That point has now been addressed. The other camp consists of people who appear to work in the areas of neuroscience Aarts et al focused on, and were reacting not so much to the specific statistical concern raised by Aarts et al as to the broader suggestion that something might be deeply wrong with the neuroscience literature because of this. I confess that my initial knee-jerk reaction to the Aarts et al paper was driven in large part by the intuition that surely it wasn’t possible for so large a fraction of the literature to be routinely modeling subjects/clusters/groups as fixed effects. But since it appears that that is in fact the case, I’m not sure what to say with respect to the broader question of whether it is or isn’t appropriate to ignore nesting in animal studies. I will say that in the domains I personally work in, it seems very clear that collapsing across all subjects for analysis purposes is nearly always (if not always) a bad idea. Beyond that, I don’t really have any further opinion other than what I said in this response to a comment below.
  • While the claims made in the paper appear to be fundamentally sound, the presentation leaves something to be desired. It’s unclear to me why the authors relegated some of the most important technical points to the Discussion, or didn’t explicitly state them at all. The abstract also seems to me to be overly sensational–though, in hindsight, not nearly as much as I initially suspected. And it also seems questionable to tar all of neuroscience with a single brush when the analyses reported only applied to a few specific domains (and we know for a fact that in, say, neuroimaging, this problem is almost nonexistent). I guess to be charitable, one could pick the same bone with a very large proportion of published work, and this kind of thing is hardly unique to this study. Then again, the fact that a practice is widespread surely isn’t sufficient to justify that practice–or else there would be little point in Aarts et al criticizing a practice that so many people clearly engage in routinely.
  • Given my last post, I can’t help pointing out that this is a nice example of how mandatory data sharing (or failing that, a culture of strong expectations of preemptive sharing) could have made evaluation of scientific claims far easier. If the authors had attached the data file coding the 314 studies they reviewed as a supplement, I (and others) would have been able to clarify the ambiguity I originally raised much more quickly. I did send a follow up email to Aarts to ask if she and her colleagues would consider putting the data online, but haven’t heard back yet.