the APS likes me!

Somehow I wound up profiled in this month’s issue of the APS Observer as a “Rising Star”. I’d like to believe this means I’m a really big deal now, but I suspect what it actually means is that someone on the nominating committee at APS has extraordinarily bad judgment. I say this in no small part because I know some of the other people who were named Rising Stars quite well (congrats to Karl Szpunar, Jason Chan, and Alan Castel, among many other people!), so I’m pretty sure I can distinguish people who actually deserve this from, say, me.

Of course, I’m not going to look a gift horse in the mouth. And I’m certainly thrilled to be picked for this. I know these things are kind of a crapshoot, but it still feels really nice. So while the part of my brain that understands measurement error is saying “meh, luck of the draw,” that other part of my brain that likes to be told it’s awesome is in the middle of a three-day coke bender right now*. The only regret both parts of the brain have is that there isn’t any money attached to the award–or even a token prize like, say, a free statistician for a year. But I don’t think I’m going to push my luck by complaining to APS about it.

One thing I like a lot about the format of the Rising Star awards is they give you a full page to talk about yourself and your research. If there’s one thing I like to talk about, it’s myself. Usually, you can’t talk about yourself for very long before people start giving you dirty looks. But in this case, it’s sanctioned, so I guess it’s okay. In any case, the kind folks at the Observer sent me a series of seven questions to answer. And being an upstanding gentleman who likes to be given fancy awards, I promptly obliged. I figured they would just run what I sent them with minor edits… but I WAS VERY WRONG. They promptly disassembled nearly all of my brilliant observations and advice and replaced them with some very tame ramblings. So if you actually bother to read my responses, and happen to fall asleep halfway through, you’ll know who to blame. But just to set the record straight, I figured I would run through each of the boilerplate questions I was asked, and show you the answer that was printed in the Observer as compared to what I actually wrote**:

What does your research focus on?

What they printed: Most of my current research focuses on what you might call psychoinformatics: the application of information technology to psychology, with the aim of advancing our ability to study the human mind and brain. I’m interested in developing new ways to acquire, synthesize, and share data in psychology and cognitive neuroscience. Some of the projects I’ve worked on include developing new ways to measure personality more efficiently, adapting computer science metrics of string similarity to visual word recognition, modeling fMRI data on extremely short timescales, and conducting large-scale automated synthesis of published neuroimaging findings. The common theme that binds these disparate projects together is the desire to develop new ways of conceptualizing and addressing psychological problems; I believe very strongly in the transformative power of good methods.

What I actually said: I don’t know! There’s so much interesting stuff to think about! I can’t choose!

What drew you to this line of research? Why is it exciting to you?

What they printed: Technology enriches and improves our lives in every domain, and science is no exception. In the biomedical sciences in particular, many revolutionary discoveries would have been impossible without substantial advances in information technology. Entire subfields of research in molecular biology and genetics are now synonymous with bioinformatics, and neuroscience is currently also experiencing something of a neuroinformatics revolution. The same trend is only just beginning to emerge in psychology, but we’re already able to do amazing things that would have been unthinkable 10 or 20 years ago. For instance, we can now collect data from thousands of people all over the world online, sample people’s inner thoughts and feelings in real time via their phones, harness enormous datasets released by governments and corporations to study everything from how people navigate their spatial world to how they interact with their friends, and use high-performance computing platforms to solve previously intractable problems through large-scale simulation. Over the next few years, I think we’re going to see transformative changes in the way we study the human mind and brain, and I find that a tremendously exciting thing to be involved in.

What I actually said: I like psychology a lot, and I like technology a lot. Why not combine them!

Who were/are your mentors or psychological influences?

What they printed: I’ve been fortunate to have outstanding teachers and mentors at every stage of my training. I actually started my academic career quite disinterested in science and owe my career trajectory in no small part to two stellar philosophy professors (Rob Stainton and Chris Viger) who convinced me as an undergraduate that engaging with empirical data was a surprisingly good way to discover how the world really works. I can’t possibly do justice to all the valuable lessons my graduate and postdoctoral mentors have taught me, so let me just pick a few out of a hat. Among many other things, Todd Braver taught me how to talk through problems collaboratively and keep recursively questioning the answers to problems until a clear understanding materializes. Randy Larsen taught me that patience really is a virtue, despite my frequent misgivings. Tor Wager has taught me to think more programmatically about my research and to challenge myself to learn new skills. All of these people are living proof that you can be an ambitious, hard-working, and productive scientist and still be extraordinarily kind and generous with your time. I don’t think I embody those qualities myself right now, but at least I know what to shoot for.

What I actually said: Richard Feynman, Richard Hamming, and my mother. Not necessarily in that order.

To what do you attribute your success in the science?

What they printed: Mostly to blind luck. So far I’ve managed to stumble from one great research and mentoring situation to another. I’ve been fortunate to have exceptional advisors who’ve provided me with the perfect balance of freedom and guidance and amazing colleagues and friends who’ve been happy to help me out with ideas and resources whenever I’m completely out of my depth — which is most of the time.

To the extent that I can take personal credit for anything, I think I’ve been good about pursuing ideas I’m passionate about and believe in, even when they seem unlikely to pay off at first. I’m also a big proponent of exploratory research; I think pure exploration is tremendously undervalued in psychology. Many of my projects have developed serendipitously, as a result of asking, “What happens if we try doing it this way?”

What I actually said: Mostly to blind luck.

What’s your future research agenda?

What they printed: I’d like to develop technology-based research platforms that improve psychologists’ ability to answer existing questions while simultaneously opening up entirely new avenues of research. That includes things like developing ways to collect large amounts of data more efficiently, tracking research participants over time, automatically synthesizing the results of published studies, building online data repositories and collaboration tools, and more. I know that all sounds incredibly vague, and if you have some ideas about how to go about any of it, I’d love to collaborate! And by collaborate, I mean that I’ll brew the coffee and you’ll do the work.

What I actually said: Trading coffee for publications?

Any advice for even younger psychological scientists? What would you tell someone just now entering graduate school or getting their PhD?

What they printed: The responsible thing would probably be to say “Don’t go to graduate school.” But if it’s too late for that, I’d recommend finding brilliant mentors and colleagues and serving them coffee exactly the way they like it. Failing that, find projects you’re passionate about, work with people you enjoy being around, develop good technical skills, and don’t be afraid to try out crazy ideas. Leave your office door open, and talk to everyone you can about the research they’re doing, even if it doesn’t seem immediately relevant. Good ideas can come from anywhere and often do.

What I actually said: “Don’t go to graduate school.”

What publication are you most proud of or feel has been most important to your career?

What they printed: Yarkoni, T., Poldrack, R. A., Nichols, T. E., Van Essen, D. C., & Wager, T. D. (2011). Large-scale automated synthesis of human functional neuroimaging data. Manuscript submitted for publication.

In this paper, we introduce a highly automated platform for synthesizing data from thousands of published functional neuroimaging studies. We used a combination of text mining, meta-analysis, and machine learning to automatically generate maps of brain activity for hundreds of different psychological concepts, and we showed that these results could be used to “decode” cognitive states from brain activity in individual human subjects in a relatively open-ended way. I’m very proud of this work, and I’m quite glad that my co-authors agreed to make me first author in return for getting their coffee just right. Unfortunately, the paper isn’t published yet, so you’ll just have to take my word for it that it’s really neat stuff. And if you’re thinking, “Isn’t it awfully convenient that his best paper is unpublished?”… why, yes. Yes it is.

What I actually said: …actually, that’s almost exactly what I said. Except they inserted that bit about trading coffee for co-authorship. Really all I had to do was ask my co-authors nicely.

Anyway, like I said, it’s really nice to be honored in this way, even if I don’t really deserve it (and that’s not false modesty–I’m generally the first to tell other people when I think I’ve done something awesome). But I’m a firm believer in regression to the mean, so I suspect the run of good luck won’t last. In a few years, when I’ve done almost no new original work, failed to land a tenure-track job, and dropped out of academia to ride horses around the racetrack***, you can tell people that you knew me back when I was a Rising Star. Right before you tell them you don’t know what the hell happened.

———————————-

* But not really.

** Totally lying. Pretty much every word is as I wrote it. And the Observer staff were great.

*** Hopefully none of these things will happen. Except the jockey thing; that would be awesome.

how many Cortex publications in the hand is a Nature publication in the bush worth?

A provocative and very short Opinion piece by Julien Mayor (Are scientists nearsighted gamblers? The misleading nature of impact factors) was recently posted on the Frontiers in Psychology website (open access! yay!). Mayor’s argument is summed up nicely in this figure:

The left panel plots the mean versus median number of citations per article in a given year (each year is a separate point) for 3 journals: Nature (solid circles), Psych Review (squares), and Psych Science (triangles). The right panel plots the number of citations each paper receives in each of the first 15 years following its publication. What you can clearly see is that (a) the mean and median are very strongly related for the psychology journals, but completely unrelated for Nature, implying that a very small number of articles account for the vast majority of Nature citations (Mayor cites data indicating that up to 40% of Nature papers are never cited); and (b) Nature papers tend to get cited heavily for a year or two, and then disappear, whereas Psych Science, and particularly Psych Review, tend to have much longer shelf lives. Based on these trends, Mayor concludes that:

From this perspective, the IF, commonly accepted as golden standard for performance metrics seems to reward high-risk strategies (after all your Nature article has only slightly over 50% chance of being ever cited!), and short-lived outbursts. Are scientists then nearsighted gamblers?

I’d very much like to believe this, in that I think the massive emphasis scientists collectively place on publishing work in broad-interest, short-format journals like Nature and Science is often quite detrimental to the scientific enterprise as a whole. But I don’t actually believe it, because I think that, for any individual paper, researchers generally do have good incentives to try to publish in the glamor mags rather than in more specialized journals. Mayor’s figure, while informative, doesn’t take a number of factors into account:

  • The types of papers that get published in Psych Review and Nature are very different. Review papers, in general, tend to get cited more often, and for a longer time. A better comparison would be between Psych Review papers and only review papers in Nature (there aren’t many of them, unfortunately). My guess is that that difference alone probably explains much of the difference in citation rates later on in an article’s life. That would also explain why the temporal profile of Psych Science articles (which are also overwhelmingly short empirical reports) is similar to that of Nature. Major theoretical syntheses stay relevant for decades; individual empirical papers, no matter how exciting, tend to stop being cited as frequently once (a) the finding fails to replicate, or (b) a literature builds up around the original report, and researchers stop citing individual studies and start citing review articles (e.g., in Psych Review).
  • Scientists don’t just care about citation counts, they also care about reputation. The reality is that much of the appeal of having a Nature or Science publication isn’t necessarily that you expect the work to be cited much more heavily, but that you get to tell everyone else how great you must be because you have a publication in Nature. Now, on some level, we know that it’s silly to hold glamor mags in such high esteem, and Mayor’s data are consistent with that idea. In an ideal world, we’d read all papers ultra-carefully before making judgments about their quality, rather than using simple but flawed heuristics like what journal those papers happen to be published in. But this isn’t an ideal world, and the reality is that people do use such heuristics. So it’s to each scientist’s individual advantage (but to the field’s detriment) to take advantage of that knowledge.
  • Different fields have very different citation rates. And articles in different fields have very different shelf lives. For instance, I’ve heard that in many areas of physics, the field moves so fast that articles are basically out of date within a year or two (I have no way to verify if this is true or not). That’s certainly not true of most areas of psychology. For instance, in cognitive neuroscience, the current state of the field in many areas is still reasonably well captured by highly-cited publications that are 5 – 10 years old. Most behavioral areas of psychology seem to advance even more slowly. So one might well expect articles in psychology journals to peak later in time than the average Nature article, because Nature contains a high proportion of articles in the natural sciences.
  • Articles are probably selected for publication in Nature, Psych Science, and Psych Review for different reasons. In particular, there’s no denying the fact that Nature selects articles in large part based on the perceived novelty and unexpectedness of the result. That’s not to say that methodological rigor doesn’t play a role, just that, other things being equal, unexpected findings are less likely to be replicated. Since Nature and Science overwhelmingly publish articles with new and surprising findings, it shouldn’t be surprising if the articles in these journals have a lower rate of replication several years on (and hence, stop being cited). That’s presumably going to be less true of articles in specialist journals, where novelty factor and appeal to a broad audience are usually less important criteria.

Addressing these points would probably go a long way towards closing, and perhaps even reversing, the gap implied  by Mayor’s figure. I suspect that if you could do a controlled experiment and publish the exact same article in Nature and Psych Science, it would tend to get cited more heavily in Nature over the long run. So in that sense, if citations were all anyone cared about, I think it would be perfectly reasonable for scientists to try to publish in the most prestigious journals–even though, again, I think the pressure to publish in such journals actually hurts the field as a whole.

Of course, in reality, we don’t just care about citation counts anyway; lots of other things matter. For one thing, we also need to factor in the opportunity cost associated with writing a paper up in a very specific format for submission to Nature or Science, knowing that we’ll probably have to rewrite much or all of it before it gets published. All that effort could probably have been spent on other projects, so one way to put the question is: how many lower-tier publications in the hand is a top-tier publication in the bush worth?

Ultimately, it’s an empirical matter; I imagine if you were willing to make some strong assumptions, and collect the right kind of data, you could come up with a meaningful estimate of the actual value of a Nature publication, as a function of important variables like the number of other publications the authors had, the amount of work invested in rewriting the paper after rejection, the authors’ career stage, etc. But I don’t know of any published work to that effect; it seems like it would probably be more trouble than it was worth (or, to get meta: how many Nature manuscripts can you write in the time it takes you to write a manuscript about how many Nature manuscripts you should write?). And, to be honest, I suspect that any estimate you obtained that way would have little or no impact on the actual decisions scientists make about where to submit their manuscripts anyway, because, in practice, such decisions are driven as much by guesswork and wishful thinking as by any well-reasoned analysis. And on that last point, I speak from extensive personal experience…

the naming of things

Let’s suppose you were charged with the important task of naming all the various subdisciplines of neuroscience that have anything to do with the field of research we now know as psychology. You might come up with some or all of the following terms, in no particular order:

  • Neuropsychology
  • Biological psychology
  • Neurology
  • Cognitive neuroscience
  • Cognitive science
  • Systems neuroscience
  • Behavioral neuroscience
  • Psychiatry

That’s just a partial list; you’re resourceful, so there are probably others (biopsychology? psychobiology? psychoneuroimmunology?). But it’s a good start. Now suppose you decided to make a game out of it, and threw a dinner party where each guest received a copy of your list (discipline names only–no descriptions!) and had to guess what they thought people in that field study. If your nomenclature made any sense at all, and tried to respect the meanings of the individual words used to generate the compound words or phrases in your list, your guests might hazard something like the following guesses:

  • Neuropsychology: “That’s the intersection of neuroscience and psychology. Meaning, the study of the neural mechanisms underlying cognitive function.”
  • Biological psychology: “Similar to neuropsychology, but probably broader. Like, it includes the role of genes and hormones and kidneys in cognitive function.”
  • Neurology: “The pure study of the brain, without worrying about all of that associated psychological stuff.”
  • Cognitive neuroscience: “Well if it doesn’t mean the same thing as neuropsychology and biological psychology, then it probably refers to the branch of neuroscience that deals with how we think and reason. Kind of like cognitive psychology, only with brains!”
  • Cognitive science: “Like cognitive neuroscience, but not just for brains. It’s the study of human cognition in general.”
  • Systems neuroscience: “Mmm… I don’t really know. The study of how the brain functions as a whole system?”
  • Behavioral neuroscience: “Easy: it’s the study of the relationship between brain and behavior. For example, how we voluntarily generate actions.”
  • Psychiatry: “That’s the branch of medicine that concerns itself with handing out multicolored pills that do funny things to your thoughts and feelings. Of course.”

If this list seems sort of sensible to you, you probably live in a wonderful world where compound words mean what you intuitively think they mean, the subject matter of scientific disciplines can be transparently discerned, and terms that sound extremely similar have extremely similar referents rather than referring to completely different fields of study (and everyone eats ice cream for dinner every night). Unfortunately, that world is not the world we happen to actually inhabit. In our world, most of the disciplines at the intersection of psychology and neuroscience have funny names that reflect accidents of history, and tell you very little about what the people in that field actually study.

Here’s the list your guests might hand back in this world, if you ever made the terrible, terrible mistake of inviting a bunch of working scientists to dinner:

  • Neuropsychology: The study of how brain damage affects cognition and behavior. Most often focusing on the effects of brain lesions in humans, and typically relying primarily on behavioral evaluations (i.e., no large magnetic devices that take photographs of the space inside people’s skulls). People who call themselves neuropsychologists are overwhelmingly trained as clinical psychologists, and many of them work in big white buildings with a red cross on the front. Note that this isn’t the definition of neuropsychology that Wikipedia gives you; Wikipedia seems to think that neuropsychology is “the basic scientific discipline that studies the structure and function of the brain related to specific psychological processes and overt behaviors.” Nice try, Wikipedia, but that’s much too general. You didn’t even use the words ‘brain damage’, ‘lesion’, or ‘patient’ in the first sentence.
  • Biological psychology: To be perfectly honest, I’m going to have to step out of dinner-guest character for a moment and admit I don’t really have a clue what biological psychologists study. I can’t remember the last time I heard someone refer to themselves as a biological psychologist. To an approximation, I think biological psychology differs from, say, cognitive neuroscience in placing greater emphasis on everything outside of higher cognitive processes (sensory systems, autonomic processes, the four F’s, etc.). But that’s just idle speculation based largely on skimming through the chapter names of my old “Biological Psychology” textbook. What I can definitively confidently comfortably tentatively recklessly assert is that you really don’t want to trust the Wikipedia definition here, because when you type ‘biological psychology‘ into that little box that says ‘search’ on Wikipedia, it redirects you to the behavioral neuroscience entry. And that can’t be right, because, as we’ll see in a moment, behavioral neuroscience refers to something very different…
  • Neurology: Hey, look! A wikipedia entry that doesn’t lie to our face! It says neurology is “a medical specialty dealing with disorders of the nervous system. Specifically, it deals with the diagnosis and treatment of all categories of disease involving the central, peripheral, and autonomic nervous systems, including their coverings, blood vessels, and all effector tissue, such as muscle.” That’s a definition I can get behind, and I think 9 out of 10 dinner guests would probably agree (the tenth is probably drunk). But then, I’m not (that kind of) doctor, so who knows.
  • Cognitive neuroscience: In principle, cognitive neuroscience actually means more or less what it sounds like it means. It’s the study of the neural mechanisms underlying cognitive function. In practice, it all goes to hell in a handbasket when you consider that you can prefix ‘cognitive neuroscience’ with pretty much any adjective you like and end up with a valid subdiscipline. Developmental cognitive neuroscience? Check. Computational cognitive neuroscience? Check. Industrial/organizational cognitive neuroscience? Amazingly, no; until just now, that phrase did not exist on the internet. But by the time you read this, Google will probably have a record of this post, which is really all it takes to legitimate I/OCN as a valid field of inquiry. It’s just that easy to create a new scientific discipline, so be very afraid–things are only going to get messier.
  • Cognitive science: A field that, by most accounts, lives up to its name. Well, kind of. Cognitive science sounds like a blanket term for pretty much everything that has to do with cognition, and it sort of is. You have psychology and linguistics and neuroscience and philosophy and artificial intelligence all represented. I’ve never been to the annual CogSci conference, but I hear it’s a veritable orgy of interdisciplinary activity. Still, I think there’s a definite bias towards some fields at the expense of others. Neuroscientists (of any stripe), for instance, rarely call themselves cognitive scientists. Conversely, philosophers of mind or language love to call themselves cognitive scientists, and the jerk cynic in me says it’s because it means they get to call themselves scientists. Also, in terms of content and coverage, there seems to be a definite emphasis among self-professed cognitive scientists on computational and mathematical modeling, and not so much emphasis on developing neuroscience-based models (though neural network models are popular). Still, if you’re scoring terms based on clarity of usage, cognitive science should score at least an 8.5 / 10.
  • Systems neuroscience: The study of neural circuits and the dynamics of information flow in the central nervous system (note: I stole part of that definition from MIT’s BCS website, because MIT people are SMART). Systems neuroscience doesn’t overlap much with psychology; you can’t defensibly argue that the temporal dynamics of neuronal assemblies in sensory cortex have anything to do with human cognition, right? I just threw this in to make things even more confusing.
  • Behavioral neuroscience: This one’s really great, because it has almost nothing to do with what you think it does. Well, okay, it does have something to do with behavior. But it’s almost exclusively animal behavior. People who refer to themselves as behavioral neuroscientists are generally in the business of poking rats in the brain with very small, sharp, glass objects; they typically don’t care much for human beings (professionally, that is). I guess that kind of makes sense when you consider that you can have rats swim and jump and eat and run while electrodes are implanted in their heads, whereas most of the time when we study human brains, they’re sitting motionless in (a) a giant magnet, (b) a chair, or (c) a jar full of formaldehyde. So maybe you could make an argument that since humans don’t get to BEHAVE very much in our studies, people who study humans can’t call themselves behavioral neuroscientists. But that would be a very bad argument to make, and many of the people who work in the so-called “behavioral sciences” and do nothing but study human behavior would probably be waiting to thump you in the hall the next time they saw you.
  • Psychiatry: The branch of medicine that concerns itself with handing out multicolored pills that do funny things to your thoughts and feelings. Of course.

Anyway, the basic point of all this long-winded nonsense is just that, for all that stuff we tell undergraduates about how science is such a wonderful way to achieve clarity about the way the world works, scientists–or at least, neuroscientists and psychologists–tend to carve up their disciplines in pretty insensible ways. That doesn’t mean we’re dumb, of course; to the people who work in a field, the clarity (or lack thereof) of the terminology makes little difference, because you only need to acquire it once (usually in your first nine years of grad school), and after that you always know what people are talking about. Come to think of it, I’m pretty sure the whole point of learning big words is that once you’ve successfully learned them, you can stop thinking deeply about what they actually mean.

It is kind of annoying, though, to have to explain to undergraduates that, DUH, the class they really want to take given their interests is OBVIOUSLY cognitive neuroscience and NOT neuropsychology or biological psychology. I mean, can’t they read? Or to pedantically point out to someone you just met at a party that saying “the neurological mechanisms of such-and-such” makes them sound hopelessly unsophisticated, and what they should really be saying is “the neural mechanisms,” or “the neurobiological mechanisms”, or (for bonus points) “the neurophysiological substrates”. Or, you know, to try (unsuccessfully) to convince your mother on the phone that even though it’s true that you study the relationship between brains and behavior, the field you work in has very little to do with behavioral neuroscience, and so you really aren’t an expert on that new study reported in that article she just read in the paper the other day about that interesting thing that’s relevant to all that stuff we all do all the time.

The point is, the world would be a slightly better place if cognitive science, neuropsychology, and behavioral neuroscience all meant what they seem like they should mean. But only very slightly better.

Anyway, aside from my burning need to complain about trivial things, I bring these ugly terminological matters up partly out of idle curiosity. And what I’m idly curious about is this: does this kind of confusion feature prominently in other disciplines too, or is psychology-slash-neuroscience just, you know, “special”? My intuition is that it’s the latter; subdiscipline names in other areas just seem so sensible to me whenever I hear them. For instance, I’m fairly confident that organic chemists study the chemistry of Orgas, and I assume condensed matter physicists spend their days modeling the dynamics of teapots. Right? Yes? No? Perhaps my  millions thousands hundreds dozens three regular readers can enlighten me in the comments…

what the Dunning-Kruger effect is and isn’t

If you regularly read cognitive science or psychology blogs (or even just the lowly New York Times!), you’ve probably heard of something called the Dunning-Kruger effect. The Dunning-Kruger effect refers to the seemingly pervasive tendency of poor performers to overestimate their abilities relative to other people–and, to a lesser extent, for high performers to underestimate their abilities. The explanation for this, according to Kruger and Dunning, who first reported the effect in an extremely influential 1999 article in the Journal of Personality and Social Psychology, is that incompetent people lack the skills they’d need in order to be able to distinguish good performers from bad performers:

…people who lack the knowledge or wisdom to perform well are often unaware of this fact. We attribute this lack of awareness to a deficit in metacognitive skill. That is, the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s.

For reasons I’m not really clear on, the Dunning-Kruger effect seems to be experiencing something of a renaissance over the past few months; it’s everywhere in the blogosphere and media. For instance, here are just a few alleged Dunning-Krugerisms from the past few weeks:

So what does this mean in business? Well, it’s all over the place. Even the title of Dunning and Kruger’s paper, the part about inflated self-assessments, reminds me of a truism that was pointed out by a supervisor early in my career: The best employees will invariably be the hardest on themselves in self-evaluations, while the lowest performers can be counted on to think they are doing excellent work…

Heidi Montag and Spencer Pratt are great examples of the Dunning-Kruger effect. A whole industry of assholes are making a living off of encouraging two attractive yet untalented people that they are actually genius auteurs. The bubble around them is so thick, they may never escape it. At this point, all of America (at least those who know who they are) is in on the joke–yet the two people in the center of this tragedy are completely unaware…

Not so fast there — the Dunning-Kruger effect comes into play here. People in the United States do not have a high level of understanding of evolution, and this survey did not measure actual competence. I’ve found that the people most likely to declare that they have a thorough knowledge of evolution are the creationists…but that a brief conversation is always sufficient to discover that all they’ve really got is a confused welter of misinformation…

As you can see, the findings reported by Kruger and Dunning are often interpreted to suggest that the less competent people are, the more competent they think they are. People who perform worst at a task tend to think they’re god’s gift to said task, and the people who can actually do said task often display excessive modesty. I suspect we find this sort of explanation compelling because it appeals to our implicit just-world theories: we’d like to believe that people who obnoxiously proclaim their excellence at X, Y, and Z must really not be so very good at X, Y, and Z at all, and must be (over)compensating for some actual deficiency; it’s much less pleasant to imagine that people who go around shoving their (alleged) superiority in our faces might really be better than us at what they do.

Unfortunately, Kruger and Dunning never actually provided any support for this type of just-world view; their studies categorically didn’t show that incompetent people are more confident or arrogant than competent people. What they did show is this:

This is one of the key figures from Kruger and Dunning’s 1999 paper (and the basic effect has been replicated many times since). The critical point to note is that there’s a clear positive correlation between actual performance (gray line) and perceived performance (black line): the people in the top quartile for actual performance think they perform better than the people in the second quartile, who in turn think they perform better than the people in the third quartile, and so on. So the bias is definitively not that incompetent people think they’re better than competent people. Rather, it’s that incompetent people think they’re much better than they actually are. But they typically still don’t think they’re quite as good as people who, you know, actually are good. (It’s important to note that Dunning and Kruger never claimed to show that the unskilled think they’re better than the skilled; that’s just the way the finding is often interpreted by others.)

That said, it’s clear that there is a very large discrepancy between the way incompetent people actually perform and the way they perceive their own performance level, whereas the discrepancy is much smaller for highly competent individuals. So the big question is why. Kruger and Dunning’s explanation, as I mentioned above, is that incompetent people lack the skills they’d need in order to know they’re incompetent. For example, if you’re not very good at learning languages, it might be hard for you to tell that you’re not very good, because the very skills that you’d need in order to distinguish someone who’s good from someone who’s not are the ones you lack. If you can’t hear the distinction between two different phonemes, how could you ever know who has native-like pronunciation ability and who doesn’t? If you don’t understand very many words in another language, how can you evaluate the size of your own vocabulary in relation to other people’s?

This appeal to people’s meta-cognitive abilities (i.e., their knowledge about their knowledge) has some intuitive plausibility, and Kruger, Dunning and their colleagues have provided quite a bit of evidence for it over the past decade. That said, it’s by no means the only explanation around; over the past few years, a fairly sizeable literature criticizing or extending Kruger and Dunning’s work has developed. I’ll mention just three plausible (and mutually compatible) alternative accounts people have proposed (but there are others!).

1. Regression toward the mean. Probably the most common criticism of the Dunning-Kruger effect is that it simply reflects regression to the mean–that is, it’s a statistical artifact. Regression to the mean refers to the fact that any time you select a group of individuals based on some criterion, and then measure the standing of those individuals on some other dimension, performance levels will tend to shift (or regress) toward the mean level. It’s a notoriously underappreciated problem, and probably explains many, many phenomena that people have tried to interpret substantively. For instance, in placebo-controlled clinical trials of SSRIs, depressed people tend to get better in both the drug and placebo conditions. Some of this is undoubtedly due to the placebo effect, but much of it is probably also due to what’s often referred to as “natural history”. Depression, like most things, tends to be cyclical: people get better or worse over time, often for no apparent rhyme or reason. But since people tend to seek help (and sign up for drug trials) primarily when they’re doing particularly badly, it follows that most people would get better to some extent even without any treatment. That’s regression to the mean (the Wikipedia entry has other nice examples–for example, the famous Sports Illustrated Cover Jinx).

In the context of the Dunning-Kruger effect, the argument is that incompetent people simply regress toward the mean when you ask them to evaluate their own performance. Since perceived performance is influenced not only by actual performance, but also by many other factors (e.g., one’s personality, meta-cognitive ability, measurement error, etc.), it follows that, on average, people with extreme levels of actual performance won’t be quite as extreme in terms of their perception of their performance. So, much of the Dunning-Kruger effect arguably doesn’t need to be explained at all, and in fact, it would be quite surprising if you didn’t see a pattern of results that looks at least somewhat like the figure above.
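
To see how far pure regression to the mean can go on its own, here’s a minimal simulation sketch–my own illustration, not code from any of the papers discussed, with arbitrary made-up parameter values. Perceived performance is just actual performance plus independent noise, with no metacognitive deficit built in anywhere, yet grouping people by actual-performance quartile reproduces the familiar pattern.

```python
# Minimal sketch: a Dunning-Kruger-like pattern from nothing but noisy
# self-estimates. No metacognitive deficit is built in anywhere.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

actual = rng.normal(50, 10, n)             # true skill (arbitrary units)
perceived = actual + rng.normal(0, 10, n)  # noisy, unbiased self-estimate

# Convert both to percentile ranks, mirroring the "perceived vs. actual
# percentile" format of the original figures.
actual_pct = actual.argsort().argsort() / n * 100
perceived_pct = perceived.argsort().argsort() / n * 100

# Group by quartile of ACTUAL performance, as Kruger and Dunning did.
quartile = np.digitize(actual_pct, [25, 50, 75])
for q in range(4):
    m = quartile == q
    print(f"Quartile {q + 1}: actual percentile {actual_pct[m].mean():5.1f}, "
          f"perceived percentile {perceived_pct[m].mean():5.1f}")
```

In this toy version, the bottom quartile’s mean perceived percentile lands well above its actual mean of 12.5, and the top quartile’s lands below 87.5, purely because the self-estimates are imperfectly correlated with actual skill.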

2. Regression to the mean plus better-than-average. Having said that, it’s clear that regression to the mean can’t explain everything about the Dunning-Kruger effect. One problem is that it doesn’t explain why the effect is greater at the low end than at the high end. That is, incompetent people tend to overestimate their performance to a much greater extent than competent people underestimate their performance. This asymmetry can’t be explained solely by regression to the mean. It can, however, be explained by a combination of RTM and a “better-than-average” (or self-enhancement) heuristic which says that, in general, most people have a tendency to view themselves excessively positively. This two-pronged explanation was proposed by Krueger and Mueller in a 2002 study (note that Krueger and Kruger are different people!), who argued that poor performers suffer from a double whammy: not only do their perceptions of their own performance regress toward the mean, but those perceptions are also further inflated by the self-enhancement bias. In contrast, for high performers, these two effects largely balance each other out: regression to the mean causes high performers to underestimate their performance, but to some extent that underestimation is offset by the self-enhancement bias. As a result, it looks as though high performers make more accurate judgments than low performers, when in reality the high performers are just lucky to be where they are in the distribution.
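
A correspondingly minimal sketch of the two-pronged account–again just an illustration with made-up numbers, not Krueger and Mueller’s actual analysis: keep the noisy estimates from above, but add a flat “better-than-average” bump to everyone’s self-estimate. One modeling choice I’ve made here is to score each self-estimate against the distribution of actual scores, so the flat bump isn’t washed out by re-ranking.

```python
# Sketch of regression to the mean plus a uniform self-enhancement bias.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

actual = rng.normal(50, 10, n)
perceived = actual + rng.normal(0, 10, n) + 5   # +5 = flat self-enhancement

actual_pct = actual.argsort().argsort() / n * 100
# Score each self-estimate against the distribution of actual scores, so the
# flat +5 bias shows up instead of cancelling out when everyone is re-ranked.
perceived_pct = np.searchsorted(np.sort(actual), perceived) / n * 100

quartile = np.digitize(actual_pct, [25, 50, 75])
for q in range(4):
    m = quartile == q
    gap = perceived_pct[m].mean() - actual_pct[m].mean()
    print(f"Quartile {q + 1}: overestimation = {gap:+5.1f} percentile points")
```

With these particular numbers, the bottom quartile overestimates its standing by a wide margin while the two effects roughly cancel out for the top quartile–the asymmetry the two-pronged account is meant to capture.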

3. The instrumental role of task difficulty. Consistent with the notion that the Dunning-Kruger effect is at least partly a statistical artifact, some studies have shown that the asymmetry reported by Kruger and Dunning (i.e., the smaller discrepancy for high performers than for low performers) actually goes away, and even reverses, when the ability tests given to participants are very difficult. For instance, Burson and colleagues (2006), writing in JPSP, showed that when University of Chicago undergraduates were asked moderately difficult trivia questions about their university, the subjects who performed best were just as poorly calibrated as the people who performed worst, in the sense that their estimates of how well they did relative to other people were wildly inaccurate. Here’s what that looks like:

Notice that this finding wasn’t anomalous with respect to the Kruger and Dunning findings; when participants were given easier trivia (the diamond-studded line), Burson et al observed the standard pattern, with poor performers seemingly showing worse calibration. Simply knocking about 10% off the accuracy rate on the trivia questions was enough to induce a large shift in the relative mismatch between perceptions of ability and actual ability. Burson et al then went on to replicate this pattern in two additional studies involving a number of different judgments and tasks, so this result isn’t specific to trivia questions. In fact, in the later studies, Burson et al showed that when the task was really difficult, poor performers were actually considerably better calibrated than high performers.

Looking at the figure above, it’s not hard to see why this would be. Since the slope of the line tends to be pretty constant in these types of experiments, lowering the line of perceived performance (i.e., shifting its intercept down on the y-axis, as happens when the task is hard) necessarily produces a larger difference between actual and perceived performance at the high end. Conversely, if you raise the line, you maximize the difference between actual and perceived performance at the lower end.

To get an intuitive sense of what’s happening here, just think of it this way: if you’re performing a very difficult task, you’re probably going to find the experience subjectively demanding even if you’re at the high end relative to other people. Since people’s judgments about their own relative standing depend to a substantial extent on their subjective perception of their own performance (i.e., you use your sense of how easy a task was as a proxy for how good you must be at it), high performers are going to end up systematically underestimating how well they did. When a task is difficult, most people assume they must have done relatively poorly compared to other people. Conversely, when a task is relatively easy (and the tasks Dunning and Kruger studied were on the easier side), most people assume they must be pretty good compared to others. As a result, it’s going to look like the people who perform well are well-calibrated when the task is easy and poorly-calibrated when the task is difficult; less competent people are going to show exactly the opposite pattern. And note that this doesn’t require us to assume any relationship between actual performance and perceived performance. You would expect to get the Dunning-Kruger effect for easy tasks even if there was exactly zero correlation between how good people actually are at something and how good they think they are.
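
For concreteness, here’s one more toy sketch of that last claim–my own, under the same kind of made-up assumptions as before, not Burson et al’s analysis. Perceived standing depends only on how easy the task feels overall, with exactly zero correlation to actual skill, yet the easy version makes low performers look uniquely miscalibrated and the hard version reverses the pattern.

```python
# Zero correlation between skill and self-estimate; only overall task
# difficulty shifts everyone's guess up (easy) or down (hard).
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

actual_pct = rng.uniform(0, 100, n)          # true percentile rank
quartile = np.digitize(actual_pct, [25, 50, 75])

for label, typical_guess in [("easy task", 65), ("hard task", 35)]:
    perceived_pct = np.clip(rng.normal(typical_guess, 15, n), 0, 100)
    print(label)
    for q in range(4):
        m = quartile == q
        print(f"  Quartile {q + 1}: actual {actual_pct[m].mean():5.1f}, "
              f"perceived {perceived_pct[m].mean():5.1f}")
```

On the “easy” task everyone guesses around the 65th percentile, so the bottom quartile is off by far more than the top quartile; on the “hard” task the same flat guessing flips the asymmetry, even though skill and self-estimate are completely unrelated by construction.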

Here’s how Burson et al summarized their findings:

Our studies replicate, eliminate, or reverse the association between task performance and judgment accuracy reported by Kruger and Dunning (1999) as a function of task difficulty. On easy tasks, where there is a positive bias, the best performers are also the most accurate in estimating their standing, but on difficult tasks, where there is a negative bias, the worst performers are the most accurate. This pattern is consistent with a combination of noisy estimates and overall bias, with no need to invoke differences in metacognitive abilities. In this  regard, our findings support Krueger and Mueller’s (2002) reinterpretation of Kruger and Dunning’s (1999) findings. An association between task-related skills and metacognitive insight may indeed exist, and later we offer some suggestions for ways to test for it. However, our analyses indicate that the primary drivers of errors in judging relative standing are general inaccuracy and overall biases tied to task difficulty. Thus, it is important to know more about those sources of error in order to better understand and ameliorate them.

What should we conclude from these (and other) studies? I think the jury’s still out to some extent, but at minimum, I think it’s clear that much of the Dunning-Kruger effect reflects either statistical artifact (regression to the mean), or much more general cognitive biases (the tendency to self-enhance and/or to use one’s subjective experience as a guide to one’s standing in relation to others). This doesn’t mean that the meta-cognitive explanation preferred by Dunning, Kruger and colleagues can’t hold in some situations; it very well may be that in some cases, and to some extent, people’s lack of skill is really what prevents them from accurately determining their standing in relation to others. But I think our default position should be to prefer the alternative explanations I’ve discussed above, because they’re (a) simpler, (b) more general (they explain lots of other phenomena), and (c) necessary (frankly, it’d be amazing if regression to the mean didn’t explain at least part of the effect!).

We should also try to be aware of another very powerful cognitive bias whenever we use the Dunning-Kruger effect to explain the people or situations around us–namely, confirmation bias. If you believe that incompetent people don’t know enough to know they’re incompetent, it’s not hard to find anecdotal evidence for that; after all, we all know people who are both arrogant and not very good at what they do. But if you stop to look for it, it’s probably also not hard to find disconfirming evidence. After all, there are clearly plenty of people who are good at what they do, but not nearly as good as they think they are (i.e., they’re above average, and still totally miscalibrated in the positive direction). Just like there are plenty of people who are lousy at what they do and recognize their limitations (e.g., I don’t need to be a great runner in order to be able to tell that I’m not a great runner–I’m perfectly well aware that I have terrible endurance, precisely because I can’t finish runs that most other runners find trivial!). But the plural of anecdote is not data, and the data appear to be equivocal. Next time you’re inclined to chalk your obnoxious co-worker’s delusions of grandeur down to the Dunning-Kruger effect, consider the possibility that your co-worker’s simply a jerk–no meta-cognitive incompetence necessary.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134. PMID: 10626367
Krueger, J., & Mueller, R. A. (2002). Unskilled, unaware, or both? The better-than-average heuristic and statistical regression predict errors in estimates of own performance. Journal of Personality and Social Psychology, 82(2), 180-188. PMID: 11831408
Burson, K. A., Larrick, R. P., & Klayman, J. (2006). Skilled or unskilled, but still unaware of it: How perceptions of difficulty drive miscalibration in relative comparisons. Journal of Personality and Social Psychology, 90(1), 60-77. PMID: 16448310

de Waal and Ferrari on cognition in humans and animals

Humans do many things that most animals can’t. That much no one would dispute. The more interesting and controversial question is just how many things we can do that most animals can’t, and just how many animal species can or can’t do the things we do. That question is at the center of a nice opinion piece in Trends in Cognitive Sciences by Frans de Waal and Pier Francesco Ferrari.

De Waal and Ferrari argue for what they term a bottom-up approach to human and animal cognition. The fundamental idea–which isn’t new, and in fact owes much to decades of de Waal’s own work with primates–is that most of our cognitive abilities, including many that are often characterized as uniquely human, are in fact largely continuous with abilities found in other species. De Waal and Ferrari highlight a number of putatively “special” functions like imitation and empathy that turn out to have relatively frequent primate (and in some cases non-primate) analogs. They push for a bottom-up scientific approach that seeks to characterize the basic mechanisms that complex functionality might have arisen out of, rather than (what they see as) “the overwhelming tendency outside of biology to give human cognition special treatment.”

Although I agree pretty strongly with the thesis of the paper, its scope is also, in some ways, quite limited: De Waal and Ferrari clearly believe that many complex functions depend on homologous mechanisms in both humans and non-human primates, but they don’t actually say very much about what these mechanisms might be, save for some brief allusions to relatively broad neural circuits (e.g., the oft-criticized mirror neuron system, which Ferrari played a central role in identifying and characterizing). To some extent that’s understandable given the brevity of TICS articles, but given how much de Waal has written about primate cognition, it would have been nice to see a more detailed example of the types of cognitive representations de Waal thinks underlie, say, the homologous abilities of humans and capuchin monkeys to empathize with conspecifics.

Also, despite its categorization as an “Opinion” piece (these are supposed to stir up debate), I don’t think many people (at least, the kind of people who read TICS articles) are going to take issue with the basic continuity hypothesis advanced by de Waal and Ferrari. I suspect many more people would agree than disagree with the notion that most complex cognitive abilities displayed by humans share a closely intertwined evolutionary history with seemingly less sophisticated capacities displayed by primates and other mammalian species. So in that sense, de Waal and Ferrari might be accused of constructing something of a straw man. But it’s important to recognize that de Waal’s own work is a very large part of the reason why the continuity hypothesis is so widely accepted these days. So in that sense, even if you already agree with its premise, the TICS paper is worth reading simply as an elegant summary of a long-standing and important line of research.

more on the absence of brain training effects

A little while ago I blogged about the recent Owen et al Nature study on the (null) effects of cognitive training. My take on the study, which found essentially no effect of cognitive training on generalized cognitive performance, was largely positive. In response, Martin Walker, founder of Mind Sparke, maker of Brain Fitness Pro software, left this comment:

I’ve done regular aerobic training for pretty much my entire life, but I’ve never had the kind of mental boost from exercise that I have had from dual n-back training. I’ve also found that n-back training helps my mood.

There was a foundational problem with the BBC study in that it didn’t provide anywhere near the intensity of training that would be required to show transfer. The null hypothesis was a forgone conclusion. It seems hard to believe that the scientists didn’t know this before they began and were setting out to debunk the populist brain game hype.

I think there are a couple of points worth making. One is the standard rejoinder that one anecdotal report doesn’t count for very much. That’s not meant as a jibe at Walker in particular, but simply as a general observation about the fallibility of human judgment. Many people are perfectly convinced that homeopathic solutions have dramatically improved their quality of life, but that doesn’t mean we should take homeopathy seriously. Of course, I’m not suggesting that cognitive training programs are as ineffectual as homeopathy–in my post, I suggested they may well have some effect–but simply that personal testimonials are no substitute for controlled studies.

With respect to the (also anecdotal) claim that aerobic exercise hasn’t worked for Walker, it’s worth noting that the effects of aerobic exercise on cognitive performance take time to develop. No one expects a single brisk 20-minute jog to dramatically improve cognitive performance. If you’ve been exercising regularly your whole life, the question isn’t whether exercise will improve your cognitive function–it’s whether not doing any exercise for a month or two would lead to poorer performance. That is, if Walker stopped exercising, would his cognitive performance suffer? It would be a decidedly unhealthy hypothesis to test, of course, but that would really be the more reasonable prediction. I don’t think anyone thinks that a person in excellent physical condition would benefit further from physical exercise; the point is precisely that most people aren’t in excellent physical shape. In any event, as I noted in my post, the benefits of aerobic exercise are clearly largest for older adults who were previously relatively sedentary. There’s much less evidence for large effects of aerobic exercise on cognitive performance in young or middle-aged adults.

The more substantive question Walker raises has to do with whether the tasks Owen et al used were too easy to support meaningful improvement. I think this is a reasonable question, but I don’t think the answer is as straightforward as Walker suggests. For one thing, participants in the Owen et al study did show substantial gains in performance on the training tasks (just not the untrained tasks), so it’s not like they were at ceiling. That is, the training tasks clearly weren’t easy. Second, participants varied widely in the number of training sessions they performed, and yet, as the authors note, the correlation between amount of training and cognitive improvement was negligible. So if you extrapolate from the observed pattern, it doesn’t look particularly favorable. Third, Owen et al used 12 different training tasks that spanned a broad range of cognitive abilities. While one can quibble with any individual task, it’s hard to reconcile the overall pattern of null results with the notion that cognitive training produces robust effects. Surely at least some of these measures should have led to a noticeable overall effect if they successfully produced transfer. But they didn’t.

To reiterate what I said in my earlier post, I’m not saying that cognitive training has absolutely no effect. No study is perfect, and it’s conceivable that more robust effects might be observed given a different design. But the Owen et al study is, to put it bluntly, the largest study of cognitive training conducted to date by about two orders of magnitude, and that counts for a lot in an area of research dominated by relatively small studies that have generally produced mixed findings. So, in the absence of contradictory evidence from another large training study, I don’t see any reason to second-guess Owen et al’s conclusions.

Lastly, I don’t think Walker is in any position to cast aspersions on people’s motivations (“It seems hard to believe that the scientists didn’t know this before they began and were setting out to debunk the populist brain game hype”). While I don’t think that his financial stake in brain training programs necessarily impugns his evaluation of the Owen et al study, it can’t exactly promote impartiality either. And for what it’s worth, I dug around the Mind Sparke website and couldn’t find any “scientific proof” that the software works (which is what the website claims)–just some vague allusions to customer testimonials and some citations of other researchers’ published work (none of which, as far as I can tell, used Brain Fitness Pro for training).

undergraduates are WEIRD

This month’s issue of Nature Neuroscience contains an editorial lambasting psychologists’ excessive reliance on undergraduate samples, which, it turns out, are pretty unrepresentative of humanity at large. The impetus for the editorial is a mammoth in-press review by Joseph Henrich and colleagues of cross-cultural studies that, the authors suggest, collectively indicate that “samples drawn from Western, Educated, Industrialized, Rich and Democratic (WEIRD) societies … are among the least representative populations one could find for generalizing about humans.” I’ve only skimmed the article, but if you want a quick sense of the argument (beyond the clever acronym), you could do a lot worse than these (rather graphic) opening paragraphs:

In the tropical forests of New Guinea the Etoro believe that for a boy to achieve manhood he must ingest the semen of his elders. This is accomplished through ritualized rites of passage that require young male initiates to fellate a senior member (Herdt, 1984; Kelley, 1980). In contrast, the nearby Kaluli maintain that  male initiation is only properly done by ritually delivering the semen through the initiate’s anus, not his mouth. The Etoro revile these Kaluli practices, finding them disgusting. To become a man in these societies, and eventually take a wife, every boy undergoes these initiations. Such boy-inseminating practices, which  are enmeshed in rich systems of meaning and imbued with local cultural values, were not uncommon among the traditional societies of Melanesia and Aboriginal Australia (Herdt, 1993), as well as in Ancient Greece and Tokugawa Japan.

Such in-depth studies of seemingly “exotic” societies, historically the province of anthropology, are crucial for understanding human behavioral and psychological variation. However, this paper is not about these peoples. It’s about a truly unusual group: people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. In particular, it’s about the Western, and more specifically American, undergraduates who form the bulk of the database in the experimental branches of psychology, cognitive science, and economics, as well as allied fields (hereafter collectively labeled the “behavioral sciences”). Given that scientific knowledge about human psychology is largely based on findings from this subpopulation, we ask just how representative are these typical subjects in light of the available comparative database. How justified are researchers in assuming a species-level generality for their findings? Here, we review the evidence regarding how WEIRD people compare to other populations.

Anyway, it looks like a good paper. Based on a cursory read, the conclusions the authors draw seem pretty reasonable, if a bit strong. I think most researchers do already recognize that our dependence on undergraduates is unhealthy in many respects; it’s just that it’s difficult to break the habit, because the alternative is to spend a lot more time and money chasing down participants (and there are limits to that too; it just isn’t feasible for most researchers to conduct research with Etoro populations in New Guinea). Then again, just because it’s hard to do science the right way doesn’t really make it OK to do it the wrong way. So, to the extent that we care about our results generalizing across the entire human species (which, in many cases, we don’t), we should probably be investing more energy in weaning ourselves off undergraduates and trying to recruit more diverse samples.

Kahneman on happiness

The latest TED talk is an instant favorite of mine. Daniel Kahneman talks about the striking differences in the way we experience versus remember events.

It’s an entertaining and profoundly insightful 20-minute talk, and worth watching even if you think you’ve heard these ideas before.

The fundamental problem Kahneman discusses is that we all experience our lives on a moment-by-moment basis, and yet we make decisions based on our memories of the past. Unfortunately, it turns out that the experiencing self and the remembering self don’t necessarily agree about what things make us happy, and so we often end up in situations where we voluntarily make choices that actually substantially reduce our experienced utility. I won’t give away the examples Kahneman talks about, other than to say that they beautifully illustrate the relevance of psychology (or at least some branches of psychology) to the real-world decisions we all make–both the trivial, day-to-day variety, and the rarer, life-or-death kind.
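Just to illustrate the shape of that disagreement without spoiling Kahneman’s examples, here’s a purely hypothetical toy sketch in Python. It leans on the well-documented finding that remembered evaluations tend to track the peak and the end of an experience while largely ignoring its duration; the ratings themselves are invented:

```python
# Toy illustration (not one of Kahneman's actual examples): two episodes rated
# moment by moment on a 0-10 discomfort scale. The "experiencing self" cares
# about the total discomfort actually lived through; the "remembering self" is
# approximated here by the average of the worst moment and the final moment
# (a peak-end summary), which largely ignores duration.

short_painful = [4, 6, 8, 9]              # short episode that ends at its worst
long_tapering = [4, 6, 8, 9, 5, 3, 2, 1]  # same start, plus a milder tail

def experienced(ratings):
    return sum(ratings)                      # total moment-by-moment discomfort

def remembered(ratings):
    return (max(ratings) + ratings[-1]) / 2  # peak-end approximation

for name, episode in [("short", short_painful), ("long", long_tapering)]:
    print(f"{name}: experienced={experienced(episode)}, remembered={remembered(episode):.1f}")

# The longer episode contains strictly more total discomfort (38 vs. 27), yet
# its remembered value is lower (5.0 vs. 9.0), so the remembering self would
# choose to repeat it: exactly the kind of disagreement described above.
```

The numbers are arbitrary; the point is just that a summary built from a couple of salient moments can rank experiences differently than the moment-by-moment record does.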

As an aside, Kahneman gave a talk at Brain Camp (or, officially, the annual Summer Institute in Cognitive Neuroscience, which may now be defunct–or perhaps only on hiatus?) the year I attended. There were a lot of great talks that year, but Kahneman’s really stood out for me, despite the fact that he hardly talked about research at all. It was more of a meditation on the scientific method–how to go about building and testing new theories. You don’t often hear a Nobel Prize winner tell an audience that the work that won the Nobel Prize was completely wrong, but that’s essentially what Kahneman claimed. Of course, his point wasn’t that Prospect Theory was useless, but rather, that many of the holes and limitations of the theory that people have gleefully pointed out over the last three decades were already well-recognized at the time the original findings were published. Kahneman and Tversky’s goal wasn’t to produce a perfect description or explanation of the mechanisms underlying human decision-making, but rather, an approximation that made certain important facts about human decision-making clear (e.g., the fact that people simply don’t follow the theory of Expected Utility), and opened the door to entirely new avenues of research. Kahneman seemed to think that ultimately what we really want isn’t a protracted series of incremental updates to Prospect Theory, but a more radical paradigm shift, and that in that sense, clinging to Prospect Theory might now actually be impeding progress.

You might think that’s a pretty pessimistic message–“hey, you can win a Nobel Prize for being completely wrong!”–but it really wasn’t; I actually found it quite uplifting (if Daniel Kahneman feels comfortable being mostly wrong about his ideas, why should the rest of us get attached to ours?). At least, that’s the way I remember it now. But that talk was nearly three years ago, you see, so my actual experience at the time may have been quite different. Turns out you can’t really trust my remembering self; it’ll tell you anything it thinks I want to hear.

in praise of (lab) rotation

I did my PhD in psychology, but in a department that had close ties and collaborations with neuroscience. One of the interesting things about psychology and neuroscience programs is that they seem to have quite different graduate training models, even in cases where the area of research substantively overlaps (e.g., in cognitive neuroscience). In psychology, there seem to be two general models (at least, at American and Canadian universities; I’m not really familiar with other systems). One is that graduate students are accepted into a specific lab and have ties to a specific advisor (or advisors); the other, more common at large state schools, is that graduate students are accepted into the program (or an area within the program) as a whole, and are then given the (relative) freedom to find an advisor they want to work with. There are pros and cons to either model: the former ensures that every student has a place in someone’s lab from the very beginning of training, so that no one falls through the cracks; but the downside is that beginning students often aren’t sure exactly what they want to work on, and there are occasional (and sometimes acrimonious) mentor-mentee divorces. The latter gives students more freedom to explore their research interests, but can make it more difficult for students to secure funding, and has more of a sink-or-swim flavor (i.e., there’s less institutional support for students).

Both of these models differ quite a bit from what I take to be the most common neuroscience model, which is that students spend all or part of their first year doing a series of rotations through various labs–usually for about 2 months at a time. The idea is to expose students to a variety of different lines of research so that they get a better sense of what people in different areas are doing, and can make a more informed judgment about what research they’d like to pursue. And there are obviously other benefits too: faculty get to evaluate students on a trial basis before making a long-term commitment, and conversely, students get to see the internal workings of the lab and have more contact with the lab head before signing on.

I’ve always thought the rotation model makes a lot of sense, and wonder why more psychology programs don’t try to implement one. I can’t complain about my own training, in that I had a really great experience on both personal and professional levels in the labs I worked in; but I recognize that this was almost entirely due to dumb luck. I didn’t really do my homework very well before entering graduate school, and I could easily have landed in a department or lab I didn’t mesh well with, and spent the next few years miserable and unproductive. I’ll freely admit that I was unusually clueless going into grad school (that’s a post for another time), but I think no matter how much research you do, there’s just no way to know for sure how well you’ll do in a particular lab until you’ve spent some time in it. And most first-year graduate students have kind of fickle interests anyway; it’s hard to know when you’re 22 or 23 exactly what problem you want to spend the rest of your life (or at least the next 4 – 7 years) working on. Having people do rotations in multiple labs seems like an ideal way to maximize the odds of students (and faculty) ending up in happy, productive working relationships.

A question, then, for people who’ve had experience on the administrative side of psychology (or neuroscience) departments: what keeps us from applying a rotation model in psychology too? Are there major disadvantages I’m missing? Is the problem one of financial support? Do we think that psychology students come into graduate programs with more focused interests? Or is it just a matter of convention? Inquiring minds (or at least one of them) want to know…

what’s adaptive about depression?

Jonah Lehrer has an interesting article in the NYT magazine about a recent Psych Review article by Paul Andrews and J. Anderson Thomson. The basic claim Andrews and Thomson make in their paper is that depression is “an adaptation that evolved as a response to complex problems and whose function is to minimize disruption of rumination and sustain analysis of complex problems”. Lehrer’s article is, as always, engaging, and he goes out of his way to obtain some critical perspectives from other researchers not affiliated with Andrews & Thomson’s work. It’s definitely worth a read.

In reading Lehrer’s article and the original paper, two things struck me. One is that I think Lehrer slightly exaggerates the novelty of Andrews and Thomson’s contribution. The novel suggestion of their paper isn’t that depression can be adaptive under the right circumstances (I think most people already believe that, and as Lehrer notes, the idea traces back a long way); it’s that the specific adaptive purpose of depression is to facilitate solving of complex problems. I think Andrews and Thomson’s paper received a somewhat critical reception (which Lehrer discusses) not so much because people found the suggestion that depression might be adaptive objectionable, but because there are arguably more plausible things depression could have been selected for. Lehrer mentions a few:

Other scientists, including Randolph Nesse at the University of Michigan, say that complex psychiatric disorders like depression rarely have simple evolutionary explanations. In fact, the analytic-rumination hypothesis is merely the latest attempt to explain the prevalence of depression. There is, for example, the “plea for help” theory, which suggests that depression is a way of eliciting assistance from loved ones. There’s also the “signal of defeat” hypothesis, which argues that feelings of despair after a loss in social status help prevent unnecessary attacks; we’re too busy sulking to fight back. And then there’s “depressive realism”: several studies have found that people with depression have a more accurate view of reality and are better at predicting future outcomes. While each of these speculations has scientific support, none are sufficient to explain an illness that afflicts so many people. The moral, Nesse says, is that sadness, like happiness, has many functions.

Personally, I find these other suggestions more plausible than the Andrews and Thomson story (if still not terribly compelling). There are a variety of reasons for this (see Jerry Coyne’s twin posts for some of them, along with the many excellent comments), but one pretty big one is that they’re all at least somewhat more consistent with a continuity hypothesis under which many of the selection pressures that influenced the structure of the human mind have been at work in our lineage for millions of years. That is to say, if you believe in a “signal of defeat” account, you don’t have to come up with complex explanations for why human depression is adaptive (the problem for the Andrews and Thomson account being that other mammals don’t seem to share our affinity for ruminating over complex analytical problems); you can just attribute depression to much more general selection pressures found in other animals as well.

One hypothesis I particularly like in this respect, related to the signal-of-defeat account, is that depression is essentially just a human manifestation of a general tendency toward low self-confidence and low aggression. The value of low self-confidence is pretty obvious: you don’t challenge the alpha male, so you don’t get into fights; you only chase prey you think you can safely catch; and so on. Now suppose humans inherited this basic architecture from our ancestral apes. In human societies there’s still a clear potential benefit to being subservient and non-confrontational; it’s a low-risk, low-reward strategy. If you don’t bother anyone, you’re probably not going to impress the opposite sex very much, but at least you won’t get clubbed over the head by a competitor very often. So there’s a sensible argument to be made for frequency-dependent selection for depression-related traits. The reason it’s likely to be frequency-dependent is that if you ever had a population made up entirely of self-doubting, non-aggressive individuals, being more aggressive would become highly advantageous, so aggressive types would spread until the two strategies reached a stable equilibrium (a toy simulation of this kind of dynamic is sketched below).
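To make the equilibrium logic concrete, here’s a minimal simulation sketch, assuming a hawk-dove-style payoff structure. Every number in it is invented purely for illustration; nothing comes from Andrews and Thomson or from any real data:

```python
# A minimal sketch of frequency-dependent selection with made-up payoffs.
# Two heritable strategies: "aggressive" and "meek" (non-confrontational).
# Each strategy's payoff depends on how common aggressive types are, so the
# population settles at a mixed, stable equilibrium rather than fixing one type.

def expected_payoffs(p_aggressive):
    """Expected payoff of each strategy given the fraction of aggressive types."""
    p = p_aggressive
    w_aggressive = p * (-1.0) + (1 - p) * 2.0  # fights are costly; bullying meek types pays
    w_meek = p * 0.0 + (1 - p) * 1.0           # concede to aggressors; share with other meek types
    return w_aggressive, w_meek

p = 0.01    # start with almost no aggressive individuals
step = 0.1  # update rate per generation
for generation in range(500):
    w_a, w_m = expected_payoffs(p)
    # Replicator-style update: whichever strategy currently earns more spreads.
    p += step * p * (1 - p) * (w_a - w_m)

print(f"stable fraction of aggressive types: {p:.2f}")  # ~0.50 with these payoffs
```

With these invented payoffs the population ends up roughly half aggressive and half meek; the exact proportion is beside the point. What matters is that once each strategy’s payoff depends on how common it is, the low-risk, low-reward type can persist indefinitely instead of being selected away.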

So where does rumination–the main focus of the Andrews and Thomson paper–come into the picture? Well, I don’t know for sure, but here’s a pretty plausible just-so story: once you evolve the capacity to reason intelligently about yourself, you now have a higher cognitive system that’s naturally going to want to understand why it feels the way it does so often. If you’re someone who feels pretty upset about things much of the time, you’re going to think about those things a lot. So… you ruminate. And that’s really all you need! Saying that depression is adaptive doesn’t require you to think of every aspect of depression (e.g., rumination) as a complex and human-specific adaptation; it seems more parsimonious to see depressive rumination as a non-adaptive by-product of a more general and (potentially) adaptive disposition to experience negative affect.  On this type of account, ruminating isn’t actually helping a depressed person solve any problems at all. In fact, you could even argue that rumination shouldn’t make you feel better, or it would defeat the very purpose of having a depressive nature in the first place. In other words, it’s entirely consistent with the basic argument that depression is adaptive under some circumstances that the very purpose of rumination might be to keep depressed people in a depressed state. I don’t have any direct evidence for this, of course; it’s a just-so story. But it’s one that is, in my opinion (a) more plausible and (b) more consistent with indirect evidence (e.g., that rumination generally doesn’t seem to make people feel better!) than the Andrews and Thomson view.

The other thing that struck me about the Andrews and Thomson paper, and to a lesser extent, Lehrer’s article, is that the focus is (intentionally) squarely on whether and why depression is adaptive from an evolutionary standpoint. But it’s not clear that the average person suffering from depression really cares, or should care, about whether their depression exists for some distant evolutionary reason. What’s much more germane to someone suffering from depression is whether their depression is actually increasing their quality of life, and in that respect, it’s pretty difficult to make a positive case. The argument that rumination is adaptive because it helps you solve complex analytical problems is only compelling if you think that those problems are really worth mulling over deeply in the first place. For most of the things that depressed people tend to ruminate over (most of which aren’t life-changing decisions, but trivial things like whether your co-workers hate you because of the unfashionable shirt you wore to work yesterday), that just doesn’t seem to be the case. So the argument becomes circular: rumination helps you solve problems that a happier person probably wouldn’t have been bothered by in the first place. Now, that isn’t to say that there aren’t some very specific environments in which depression might still be adaptive today; it’s just that there don’t seem to be very many of them. If you look at the data, it’s quite clear that, on average, depression has very negative effects. People lose friends, jobs, and the joy of life because of their depression; it’s hard to see what monumental problem-solving insight could possibly compensate for that in most cases. By way of analogy, saying that depression is adaptive because it promotes rumination seems kind of like saying that cigarettes serve an adaptive purpose because they make nicotine withdrawal go away. Well, maybe. But wouldn’t you rather not have the withdrawal symptoms to begin with?

To be clear, I’m not suggesting that we should view depression solely in pathological terms, or that we should entirely write off the possibility that there are some potentially adaptive aspects to depression (or personality traits that go along with it). Rather, the point is that, if you’re suffering from depression, it’s not clear what good it’ll do you to learn that some of your ancestors may have benefited from their depressive natures. (By the same token, you wouldn’t expect a person suffering from sickle-cell anemia to gain much comfort from learning that they carry two copies of a mutation that, in a heterozygous carrier, would confer a strong resistance to malaria.) Conversely, there’s a very real danger here, in the sense that, if Andrews and Thomson are wrong about rumination being adaptive, they might be telling people it’s OK to ruminate when in fact excessive rumination could be encouraging further depression. My sense is that the latter is actually the received wisdom right now (i.e., much of cognitive-behavioral therapy is focused on getting depressed individuals to recognize their ruminative cycles and break out of them). So the concern is that too much publicity might be a bad thing in this case, and, far from heralding the arrival of a new perspective on the conceptualization and treatment of depression, may actually be hurting some people. Ultimately, of course, it’s an empirical matter, and certainly not one I have any conclusive answers to. But what I can quite confidently assert in the meantime is that the Lehrer article is an enjoyable read, so long as you read it with a healthy dose of skepticism.

Andrews, P., & Thomson, J. (2009). The bright side of being blue: Depression as an adaptation for analyzing complex problems. Psychological Review, 116(3), 620-654. DOI: 10.1037/a0016242