more on the absence of brain training effects

A little while ago I blogged about the recent Owen et al Nature study on the (null) effects of cognitive training. My take on the study, which found essentially no effect of cognitive training on generalized cognitive performance, was largely positive. In response, Martin Walker, founder of Mind Sparke, maker of Brain Fitness Pro software, left this comment:

I’ve done regular aerobic training for pretty much my entire life, but I’ve never had the kind of mental boost from exercise that I have had from dual n-back training. I’ve also found that n-back training helps my mood.

There was a foundational problem with the BBC study in that it didn’t provide anywhere near the intensity of training that would be required to show transfer. The null hypothesis was a forgone conclusion. It seems hard to believe that the scientists didn’t know this before they began and were setting out to debunk the populist brain game hype.

I think there are a couple of points worth making. One is the standard rejoinder that one anecdotal report doesn’t count for very much. That’s not meant as a jibe at Walker in particular, but simply as a general observation about the fallibility of human judgment. Many people are perfectly convinced that homeopathic solutions have dramatically improved their quality of life, but that doesn’t mean we should take homeopathy seriously. Of course, I’m not suggesting that cognitive training programs are as ineffectual as homeopathy–in my post, I suggested they may well have some effect–but simply that personal testimonials are no substitute for controlled studies.

With respect to the (also anecdotal) claim that aerobic exercise hasn’t worked for Walker, it’s worth noting that the effects of aerobic exercise on cognitive performance take time to develop. No one expects a single brisk 20-minute jog to dramatically improve cognitive performance. If you’ve been exercising regularly your whole life, the question isn’t whether exercise will improve your cognitive function–it’s whether not doing any exercise for a month or two would lead to poorer performance. That is, if Walker stopped exercising, would his cognitive performance suffer? It would be a decidedly unhealthy hypothesis to test, of course, but that would really be the more reasonable prediction. I don’t think anyone thinks that a person in excellent physical condition would benefit further from physical exercise; the point is precisely that most people aren’t in excellent physical shape. In any event, as I noted in my post, the benefits of aerobic exercise are clearly largest for older adults who were previously relatively sedentary. There’s much less evidence for large effects of aerobic exercise on cognitive performance in young or middle-aged adults.

The more substantive question Walker raises has to do with whether the tasks Owen et al used were too easy to support meaningful improvement. I think this is a reasonable question, but I don’t think the answer is as straightforward as Walker suggests. For one thing, participants in the Owen et al study did show substantial gains in performance on the training tasks (just not the untrained tasks), so it’s not like they were at ceiling. That is, the training tasks clearly weren’t easy. Second, participants varied widely in the number of training sessions they performed, and yet, as the authors note, the correlation between amount of training and cognitive improvement was negligible. So if you extrapolate from the observed pattern, it doesn’t look particularly favorable. Third, Owen et al used 12 different training tasks that spanned a broad range of cognitive abilities. While one can quibble with any individual task, it’s hard to reconcile the overall pattern of null results with the notion that cognitive training produces robust effects. Surely at least some of these measures should have led to a noticeable overall effect if they successfully produced transfer. But they didn’t.

To reiterate what I said in my earlier post, I’m not saying that cognitive training has absolutely no effect. No study is perfect, and it’s conceivable that more robust effects might be observed given a different design. But the Owen et al study is, to put it bluntly, the largest study of cognitive training conducted to date by about two orders of magnitude, and that counts for a lot in an area of research dominated by relatively small studies that have generally produced mixed findings. So, in the absence of contradictory evidence from another large training study, I don’t see any reason to second-guess Owen et al’s conclusions.

Lastly, I don’t think Walker is in any position to cast aspersions on people’s motivations (“It seems hard to believe that the scientists didn’t know this before they began and were setting out to debunk the populist brain game hype”). While I don’t think that his financial stake in brain training programs necessarily impugns his evaluation of the Owen et al study, it can’t exactly promote impartiality either. And for what it’s worth, I dug around the Mind Sparke website and couldn’t find any “scientific proof” that the software works (which is what the website claims)–just some vague allusions to customer testimonials and some citations of other researchers’ published work (none of which, as far as I can tell, used Brain Fitness Pro for training).

3 thoughts on “more on the absence of brain training effects”

  1. I couldn’t agree more with this post. Several experiments have shown that brain-training exercises do affect neural development, but it should be noted that they are not an end-all solution.

  2. Hello again.

    The fact that Owen’s study spread training across 12 tasks spanning a broad range of cognitive abilities would have diluted the possible impact of any one task, even though some participants trained for far more than the average of 30 minutes per week.

    In their study showing significant transfer to fluid intelligence from working memory training, Jaeggi and Buschkuehl chose a single task, progressive dual n-back working memory training, for very specific reasons: working memory’s close relationship to Gf, and the theory that the two place overlapping demands on processing power. They also set an intensive, monitored training schedule of about 25 minutes per session, 5 sessions per week, because they knew that prior studies with less demanding regimens had failed to show transfer. The trainees began to show significant benefits only after about 8 days of training on this one task, with benefits increasing steadily past that point.

    It makes absolute sense to treat my remarks with skepticism. But I appreciate you letting them stand.

    Martin Walker
    (The dual n-back training protocol used in Mind Sparke’s Brain Fitness Pro faithfully matches the dual n-back training protocol used in the Jaeggi / Buschkuehl study published in 2008. The researchers published the parameters of the training protocol in their paper and I was in contact with them as we designed our software.)

  3. Hi Martin,

    Thanks for the comment. It’s certainly possible that the effect would be diluted if, say, only 1 out of 12 tasks worked, but we’re talking about essentially zero effect here (not just a weak one), and the dilution account requires an implausible worst-case scenario in which the vast majority of these clearly quite difficult cognitive tasks do nothing at all to improve cognitive performance.

    I didn’t realize you implemented the procedure as described in the Jaeggi study; that’s good to know. That said, that’s still only one small study, and you’re comparing it to a study that’s two orders of magnitude larger. I personally would feel very uncomfortable claiming that this constitutes “scientific proof” that your training regimen works! It’s also worth pointing out that the training effects observed in the Jaeggi study could simply reflect increases in participant engagement and comfort level. The control condition in their study consisted of participants who didn’t come into the lab at all during the interim period between pre-test and post-test. If you’ve ever run human subjects in an experiment, you know full well that many if not most participants come in slightly nervous and less than 100% engaged. So it’s not at all clear to me that it’s ideal to compare an experimental group that has the benefit of forming personal relationships with the experimenters and coming into the same lab situation ten or twenty times against a control group that just shows up twice. Which is to say, I suspect at least some of the benefit in the Jaeggi study reflects increased motivation and comfort, rather than a genuine transfer effect. (Oh, and also, note that the Jaeggi effect sizes aren’t huge either: they’re on the order of half a standard deviation.)
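For readers who haven’t encountered the task being debated above, here is a minimal sketch of how a dual n-back session might be generated and how the “progressive” difficulty adjustment typically works. This is purely illustrative: the stimulus set, block length, and accuracy thresholds are invented for the example, not taken from the Jaeggi/Buschkuehl protocol or from Brain Fitness Pro.

```python
import random

def dual_nback_block(n=2, trials=20, seed=0):
    """Generate one block of dual n-back stimuli and mark the target trials.

    Each trial pairs a visual position (one of 8 grid cells) with an
    auditory letter; the trainee must flag a match whenever the current
    position or letter equals the one from n trials back.
    """
    rng = random.Random(seed)
    positions = [rng.randrange(8) for _ in range(trials)]
    letters = [rng.choice("CHKLQRST") for _ in range(trials)]
    targets = []
    for i in range(n, trials):
        pos_match = positions[i] == positions[i - n]
        let_match = letters[i] == letters[i - n]
        targets.append((i, pos_match, let_match))
    return targets

def adjust_level(n, accuracy, up=0.9, down=0.7):
    """Progressive difficulty: raise n after an accurate block, lower it
    after a poor one, otherwise stay put (thresholds are hypothetical)."""
    if accuracy >= up:
        return n + 1
    if accuracy < down:
        return max(1, n - 1)
    return n
```

The point of the progressive rule is that the task tracks the trainee’s current capacity, so performance can never sit at ceiling for long, which is one reason this regimen is considered more demanding than fixed-difficulty training games.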
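The “half a standard deviation” figure mentioned above is a standardized mean difference (Cohen’s d). As a quick sketch of what that statistic is, here is the pooled-SD computation; the example scores at the bottom are invented solely to illustrate the scale.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical post-test scores on an IQ-like scale (mean 100, SD 15):
# a d of about 0.5 corresponds to roughly a 7.5-point group difference.
trained = [108, 112, 104, 110, 106]
control = [101, 105, 97, 103, 99]
```

A d of 0.5 is conventionally called a “medium” effect, which is why the parenthetical above characterizes the Jaeggi gains as real but not huge.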
