
whether or not you should pursue a career in science still depends mostly on that thing that is you

I took the plunge a couple of days ago and answered my first question on Quora. Since Brad Voytek won’t shut up about how great Quora is, I figured I should give it a whirl. So far, Brad is not wrong.

The question in question is: “How much do you agree with Johnathan Katz’s advice on (not) choosing science as a career? Or how realistic is it today (the article was written in 1999)?” The Katz piece referred to is here. The gist of it should be familiar to many academics; the argument boils down to the observation that relatively few people who start graduate programs in science actually end up with permanent research positions, and even then, the need to obtain funding often crowds out the time one has to do actual science. Katz’s advice is basically: don’t pursue a career in science. It’s not an optimistic piece.

My answer is, I think, somewhat more optimistic. Here’s the full text:

The real question is what you think it means to be a scientist. Science differs from many other professions in that the typical process of training as a scientist–i.e., getting a Ph.D. in a scientific field from a major research university–doesn’t guarantee you a position among the ranks of the people who are training you. In fact, it doesn’t come close to guaranteeing it; the proportion of PhD graduates in science who go on to obtain tenure-track positions at research-intensive universities is very small–around 10% in most recent estimates. So there is a very real sense in which modern academic science is a bit of a pyramid scheme: there are a relatively small number of people at the top, and a lot of people on the rungs below laboring to get up to the top–most of whom will, by definition, fail to get there.

If you equate a career in science solely with a tenure-track position at a major research university, and are considering the prospect of a Ph.D. in science solely as an investment intended to secure that kind of position, then Katz’s conclusion is difficult to escape. He is, in most respects, correct: in most biomedical, social, and natural science fields, science is now an extremely competitive enterprise. Not everyone makes it through the PhD; of those who do, not everyone makes it into–and then through–one or more postdocs; and of those who do that, relatively few secure tenure-track positions. Then, of those few “lucky” ones, some will fail to get tenure, and many others will find themselves spending much or most of their time writing grants and managing people instead of actually doing science. So from that perspective, Katz is probably right: if what you mean when you say you want to become a scientist is that you want to run your own lab at a major research university, then your odds of achieving that at the outset are probably not very good (though, to be clear, they’re still undoubtedly better than your odds of becoming a successful artist, musician, or professional athlete). Unless you have really, really good reasons to think that you’re particularly brilliant, hard-working, and creative (note: undergraduate grades, casual feedback from family and friends, and your own internal gut sense do not qualify as really, really good reasons), you probably should not pursue a career in science.

But that’s only true given a rather narrow conception where your pursuit of a scientific career is motivated entirely by the end goal rather than by the process, and where failure is anything other than ending up with a permanent tenure-track position. By contrast, if what you’re really after is an environment in which you can pursue interesting questions in a rigorous way, surrounded by brilliant minds who share your interests, and with more freedom than you might find at a typical 9 to 5 job, the dream of being a scientist is certainly still alive, and is worth pursuing. The trivial demonstration of this is that if you’re one of the many people who actually enjoy the graduate school environment (yes, they do exist!), it may not even matter to you that much whether or not you have a good shot at getting a tenure-track position when you graduate.

To see this, imagine that you’ve just graduated with an undergraduate degree in science, and someone offers you a choice between two positions for the next six years. One position is (relatively) financially secure, but involves rather boring work of questionable utility to society, an inflexible schedule, and colleagues who are mostly only there for a paycheck. The other position has terrible pay, but offers fascinating and potentially important work, a flexible lifestyle, and colleagues who are there because they share your interests and want to do scientific research.

Admittedly, real-world choices are rarely this stark. Many non-academic jobs offer many of the same perceived benefits of academia (e.g., many tech jobs offer excellent working conditions, flexible schedules, and important work). Conversely, many academic environments don’t quite live up to the ideal of a place where you can go to pursue your intellectual passion unfettered by the annoyances of “real” jobs–there’s often just as much in the way of political intrigue, personality dysfunction, and menial dues-paying duties. But to a first approximation, this is basically the choice you have when considering whether to go to graduate school in science or pursue some other career: you’re trading financial security and a fixed 40-hour work week against intellectual engagement and a flexible lifestyle. And the point to note is that, even if we completely ignore what happens after the six years of grad school are up, there is clearly a non-negligible segment of the population who would quite happily opt for the second choice–even recognizing full well that at the end of six years they may have to leave and move on to something else, with little to show for their effort. (Of course, in reality we don’t need to ignore what happens after six years, because many PhDs who don’t get tenure-track positions find rewarding careers in other fields–many of them scientific in nature. And, even though it may not be a great economic investment, having a Ph.D. in science is a great thing to be able to put on one’s resume when applying for a very broad range of non-academic positions.)

The bottom line is that whether or not you should pursue a career in science has as much or more to do with your goals and personality as it does with the current environment within or outside of (academic) science. In an ideal world (which is certainly what the 1970s as described by Katz sound like, though I wasn’t around then), it wouldn’t matter: if you had any inkling that you wanted to do science for a living, you would simply go to grad school in science, and everything would probably work itself out. But given real-world constraints, it’s absolutely essential that you think very carefully about what kind of environment makes you happy and what your expectations and goals for the future are. You have to ask yourself: Am I the kind of person who values intellectual freedom more than financial security? Do I really love the process of actually doing science–not some idealized movie version of it, but the actual messy process–enough to warrant investing a huge amount of my time and energy over the next few years? Can I deal with perpetual uncertainty about my future? And ultimately, would I be okay doing something that I really enjoy for six years if at the end of that time I have to walk away and do something very different?

If the answer to all of these questions is yes–and for many people it is!–then pursuing a career in science is still a very good thing to do (and hey, you can always quit early if you don’t like it–then you’ve lost very little time!). If the answer to any of them is no, then Katz may be right. A prospective career in science may or may not be for you, but at the very least, you should carefully consider alternative prospects. There’s absolutely no shame in going either route; the important thing is just to make an honest decision that takes the facts as they are, not as you wish they were.

A couple of other thoughts I’ll add belatedly:

  • Calling academia a pyramid scheme is admittedly a bit hyperbolic. It’s true that the personnel structure in academia broadly has the shape of a pyramid, but that’s true of most organizations in most other domains too. Pyramid schemes are typically built on promises and lies that (almost by definition) can’t be realized, and I don’t think many people who enter a Ph.D. program in science can claim with a straight face that they were guaranteed a permanent research position at the end of the road (or that it’s impossible to get such a position). As I suggested in this post, it’s much more likely that everyone involved is simply guilty of minor (self-)deception: faculty don’t go out of their way to tell prospective students what the odds are of actually getting a tenure-track position, and prospective grad students don’t work very hard to find out the painful truth, or to tell faculty what their real intentions are after they graduate. And it may actually be better for everyone that way.
  • Just in case it’s not clear from the above, I’m not in any way condoning the historically low levels of science funding, or the fact that very few science PhDs go on to careers in academic research. I would love for NIH and NSF budgets (or whatever your local agency is) to grow substantially–and for everyone to get exactly the kind of job they want, academic or not. But that’s not the world we live in, so we may as well be pragmatic about it and try to identify the conditions under which it does or doesn’t make sense to pursue a career in science right now.
  • I briefly mention this above, but it’s probably worth stressing that there are many jobs outside of academia that still allow one to do scientific research, albeit typically with less freedom (but often for better hours and pay). In particular, the market for data scientists is booming right now, and many of the hires are coming directly from academia. One lesson to take away from this is: if you’re in a science Ph.D. program right now, you should really spend as much time as you can building up your quantitative and technical skills, because they could very well be the difference between a job that involves scientific research and one that doesn’t in the event you leave academia. And those skills will still serve you well in your research career even if you end up staying in academia.

 

the ‘decline effect’ doesn’t work that way

Over the last four or five years, there’s been a growing awareness in the scientific community that science is an imperfect process. Not that everyone used to think science was a crystal ball with a direct line to the universe or anything, but there does seem to be a growing recognition that scientists are human beings with human flaws, and are susceptible to common biases that can make it more difficult to fully trust any single finding reported in the literature. For instance, scientists like interesting results more than boring results; we’d rather keep our jobs than lose them; and we have a tendency to see what we want to see, even when it’s only sort-of-kind-of there, and sometimes not there at all. All of these things contrive to produce systematic biases in the kinds of findings that get reported.

The single biggest contributor to the zeitgeist nudge is undoubtedly John Ioannidis (recently profiled in an excellent Atlantic article), whose work I can’t say enough good things about (though I’ve tried). But lots of other people have had a hand in popularizing the same or similar ideas–many of which actually go back several decades. I’ve written a bit about these issues myself in a number of papers (1, 2, 3) and blog posts (1, 2, 3, 4, 5), so I’m partial to such concerns. Still, important as the role of the various selection and publication biases is in charting the course of science, virtually all of the discussions of these issues have had a relatively limited audience. Even Ioannidis’ work, influential as it’s been, has probably been read by no more than a few thousand scientists.

Last week, the debate hit the mainstream when the New Yorker (circulation: ~ 1 million) published an article by Jonah Lehrer suggesting–or at least strongly raising the possibility–that something might be wrong with the scientific method. The full article is behind a paywall, but I can helpfully tell you that some people seem to have un-paywalled it against the New Yorker’s wishes, so if you search for it online, you will find it.

The crux of Lehrer’s argument is that many, and perhaps most, scientific findings fall prey to something called the “decline effect”: initial positive reports of relatively large effects are subsequently followed by gradually decreasing effect sizes, in some cases culminating in a complete absence of an effect in the largest, most recent studies. Lehrer gives a number of colorful anecdotes illustrating this process, and ends on a decidedly skeptical (and frankly, terribly misleading) note:

The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

While Lehrer’s article received pretty positive reviews from many non-scientist bloggers (many of whom, dismayingly, seemed to think the take-home message was that since scientists always change their minds, we shouldn’t trust anything they say), science bloggers were generally not very happy with it. Within days, angry mobs of Scientopians and Nature Networkers started murdering unicorns; by the end of the week, the New Yorker offices were reduced to rubble, and the scientists and statisticians who’d given Lehrer quotes were all rumored to be in hiding.

Okay, none of that happened. I’m just trying to keep things interesting. Anyway, because I’ve been characteristically slow on the uptake, by the time I got around to writing this post you’re now reading, about eighty hundred and sixty thousand bloggers had already weighed in on Lehrer’s article. That’s good, because it means I can just direct you to other people’s blogs instead of having to do any thinking myself. So here you go: good posts by Games With Words (whose post tipped me off to the article), Jerry Coyne, Steven Novella, Charlie Petit, and Andrew Gelman, among many others.

Since I’ve blogged about these issues before, and agree with most of what’s been said elsewhere, I’ll only make one point about the article. Which is that about half of the examples Lehrer talks about don’t actually seem to me to qualify as instances of the decline effect–at least as Lehrer defines it. The best example of this comes when Lehrer discusses Jonathan Schooler’s attempt to demonstrate the existence of the decline effect by running a series of ESP experiments:

In 2004, Schooler embarked on an ironic imitation of Rhine’s research: he tried to replicate this failure to replicate. In homage to Rhine’s interests, he decided to test for a parapsychological phenomenon known as precognition. The experiment itself was straightforward: he flashed a set of images to a subject and asked him or her to identify each one. Most of the time, the response was negative—the images were displayed too quickly to register. Then Schooler randomly selected half of the images to be shown again. What he wanted to know was whether the images that got a second showing were more likely to have been identified the first time around. Could subsequent exposure have somehow influenced the initial results? Could the effect become the cause?

The craziness of the hypothesis was the point: Schooler knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect. “At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”–a standard statistical measure–“kept on getting smaller and smaller.” The scientists eventually tested more than two thousand undergraduates. “In the end, our results looked just like Rhine’s,” Schooler said. “We found this strong paranormal effect, but it disappeared on us.”

This is a pretty bad way to describe what’s going on, because it makes it sound like it’s a general principle of data collection that effects systematically get smaller. It isn’t. The variance around the point estimate of effect size certainly gets smaller as samples get larger, but the likelihood of an effect increasing is just as high as the likelihood of it decreasing. The absolutely critical point Lehrer left out is that you would only get the decline effect to show up if you intervened in the data collection or reporting process based on the results you were getting. Instead, most of Lehrer’s article presents the decline effect as if it’s some sort of mystery, rather than the well-understood process that it is. It’s as though Lehrer believes that scientific data has the magical property of telling you less about the world the more of it you have. Which isn’t true, of course; the problem isn’t that science is malfunctioning, it’s that scientists are still (kind of!) human, and are susceptible to typical human biases. The unfortunate net effect is that Lehrer’s article, while tremendously entertaining, achieves exactly the opposite of what good science journalism should do: it sows confusion about the scientific process and makes it easier for people to dismiss the results of good scientific work, instead of helping people develop a critical appreciation for the amazing power science has to tell us about the world.
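
Since this is the kind of claim that’s easy to check by simulation, here’s a minimal sketch in Python (my own toy example using numpy and scipy–none of it comes from Lehrer’s article or the studies he describes). It simulates a modest true effect, “publishes” small initial studies only when they cross p < .05, and reports much larger follow-up studies no matter what they find:

```python
# A toy sketch (my own, not from Lehrer's article): a true effect of d = 0.3,
# small initial studies that are only "published" if p < .05, and large
# follow-up studies that are reported regardless of how they turn out.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_D = 0.3                  # true standardized effect size
N_SMALL, N_LARGE = 20, 200    # per-group sample sizes
N_SIMS = 5000

def one_study(n):
    """Simulate one two-group study; return (observed Cohen's d, p value)."""
    treatment = rng.normal(TRUE_D, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / pooled_sd
    p = stats.ttest_ind(treatment, control).pvalue
    return d, p

all_initial, published_initial, replications = [], [], []
for _ in range(N_SIMS):
    d1, p1 = one_study(N_SMALL)   # small, underpowered initial study
    d2, _ = one_study(N_LARGE)    # later, larger study of the same effect
    all_initial.append(d1)
    if p1 < .05:                  # only "exciting" initial results see print
        published_initial.append(d1)
        replications.append(d2)

print(f"true effect:                        {TRUE_D:.2f}")
print(f"mean initial effect (all studies):  {np.mean(all_initial):.2f}")
print(f"mean initial effect (p < .05 only): {np.mean(published_initial):.2f}")
print(f"mean effect in large follow-ups:    {np.mean(replications):.2f}")
```

With these made-up numbers, the unfiltered initial studies average out right around the true effect, the significance-filtered ones come out substantially inflated, and the large follow-ups “decline” back toward the truth. Drop the p < .05 filter and the decline effect disappears entirely–which is exactly the point: the decline lives in the selection process, not in the data.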

the male brain hurts, or how not to write about science

My wife asked me to blog about this article on CNN because, she said, “it’s really terrible, and it shouldn’t be on CNN”. I usually do what my wife tells me to do, so I’m blogging about it. It’s by Louann Brizendine, M.D., author of the controversial book The Female Brain, and now, its manly counterpart, The Male Brain. From what I can gather, the CNN article, which is titled Love, Sex, and the Male Brain, is a precis of Brizendine’s new book (though I have no intention of reading the book to make sure). The article is pretty short, so I’ll go through the first half of it paragraph-by-paragraph. But I’ll warn you right now that it isn’t pretty, and will likely anger anyone with even a modicum of training in psychology or neuroscience.

Although women the world over have been doing it for centuries, we can’t really blame a guy for being a guy. And this is especially true now that we know that the male and female brains have some profound differences.

Our brains are mostly alike. We are the same species, after all. But the differences can sometimes make it seem like we are worlds apart.

So far, nothing terribly wrong here, just standard pop psychology platitudes. But it goes quickly downhill.

The “defend your turf” area — dorsal premammillary nucleus — is larger in the male brain and contains special circuits to detect territorial challenges by other males. And his amygdala, the alarm system for threats, fear and danger is also larger in men. These brain differences make men more alert than women to potential turf threats.

As Vaughan notes over at Mind Hacks, the dorsal premammillary nucleus (PMD) hasn’t been identified in humans, so it’s unclear exactly what chunk of tissue Brizendine’s referring to–let alone where the evidence that there are gender differences in humans might come from. The claim that the PMD is a “defend your turf” area might be plausible, if, oh, I don’t know, you happen to think that the way rats behave under narrowly circumscribed laboratory conditions when confronted by an aggressor is a good guide to normal interactions between human males. (Then again, given that PMD lesions impair rats’ ability to run away when exposed to a cat, Brizendine could just as easily have concluded that the dorsal premammillary nucleus is the “fleeing” part of the brain.)

The amygdala claim is marginally less ridiculous: it’s not entirely clear that the amygdala is “the alarm system for threats, fear and danger”, but at least that’s a claim you can make with a straight face, since it’s one fairly common view among neuroscientists. What’s not really defensible is the claim that larger amygdalae “make men more alert than women to potential turf threats”, because (a) there’s limited evidence that the male amygdala really is larger than the female amygdala, (b) if such a difference exists, it’s very small, and (c) it’s not clear in any case how you go from a small between-group difference to the notion that somehow the amygdala is the reason why men maintain little interpersonal fiefdoms and women don’t.

Meanwhile, the “I feel what you feel” part of the brain — mirror-neuron system — is larger and more active in the female brain. So women can naturally get in sync with others’ emotions by reading facial expressions, interpreting tone of voice and other nonverbal emotional cues.

This falls under the rubric of “not even wrong”. The mirror neuron system isn’t a single “part of the brain”; current evidence suggests that neurons that show mirroring properties are widely distributed throughout multiple frontoparietal regions. So I don’t really know what brain region Brizendine is referring to (the fact that she never cites any empirical studies in support of her claims is something of an inconvenience in that respect). And even if I did know, it’s a safe bet it wouldn’t be the “I feel what you feel” brain region, because, as far as I know, no such thing exists. The central claim regarding mirror neurons isn’t that they support empathy per se, but that they support a much more basic type of representation–namely, abstract conceptual (as opposed to sensory/motor) representation of actions. And even that much weaker notion is controversial; for example, Greg Hickok has a couple of recent posts (and a widely circulated paper) arguing against it. No one, as far as I know, has provided any kind of serious evidence linking the mirror neuron system to females’ (modestly) superior nonverbal decoding ability.

Perhaps the biggest difference between the male and female brain is that men have a sexual pursuit area that is 2.5 times larger than the one in the female brain. Not only that, but beginning in their teens, they produce 200 to 250 percent more testosterone than they did during pre-adolescence.

Maybe the silliest paragraph in the whole article. Not only do I not know what region Brizendine is talking about here, I have absolutely no clue what the “sexual pursuit area” might be. It could be just me, I suppose, but I just searched Google Scholar for “sexual pursuit area” and got… zero hits. Is it a visual region? A part of the hypothalamus? The notoriously grabby motor cortex hand area? No one knows, and Brizendine isn’t telling.  Off-hand, I don’t know of any region of the human brain that shows the degree of sexual dimorphism Brizendine claims here.

If testosterone were beer, a 9-year-old boy would be getting the equivalent of a cup a day. But a 15-year-old would be getting the equivalent of nearly two gallons a day. This fuels their sexual engines and makes it impossible for them to stop thinking about female body parts and sex.

If each fiber of chest hair were a tree, a 12-year-old boy would have a Bonsai sitting on the kitchen counter, and a 30-year-old man would own Roosevelt National Forest. What you’re supposed to learn from this analogy, I honestly couldn’t tell you. It’s hard for me to think clearly about trees and hair, you see, seeing as how I find it impossible to stop thinking about female body parts while I’m trying to write this.

All that testosterone drives the “Man Trance”– that glazed-eye look a man gets when he sees breasts. As a woman who was among the ranks of the early feminists, I wish I could say that men can stop themselves from entering this trance. But the truth is, they can’t. Their visual brain circuits are always on the lookout for fertile mates. Whether or not they intend to pursue a visual enticement, they have to check out the goods.

To a man, this is the most natural response in the world, so he’s dismayed by how betrayed his wife or girlfriend feels when she sees him eyeing another woman. Men look at attractive women the way we look at pretty butterflies. They catch the male brain’s attention for a second, but then they flit out of his mind. Five minutes later, while we’re still fuming, he’s deciding whether he wants ribs or chicken for dinner. He asks us, “What’s wrong?” We say, “Nothing.” He shrugs and turns on the TV. We smolder and fear that he’ll leave us for another woman.

This actually isn’t so bad if you ignore the condescending “men are animals with no self-control” implication and pretend Brizendine had just made the  indisputably true but utterly banal observation that men, on average, like to ogle women more than women, on average, like to ogle men.

Not surprisingly, the different objectives that men and women have in mating games put us on opposing teams — at least at first. The female brain is driven to seek security and reliability in a potential mate before she has sex. But a male brain is fueled to mate and mate again. Until, that is, he mates for life.

So men are driven to sleep around, again and again… until they stop sleeping around. It’s tautological and profound at the same time!

Despite stereotypes to the contrary, the male brain can fall in love just as hard and fast as the female brain, and maybe more so. When he meets and sets his sights on capturing “the one,” mating with her becomes his prime directive. And when he succeeds, his brain makes an indelible imprint of her. Lust and love collide and he’s hooked.

Failure to operationalize complex construct of “love” in a measurable way… check. Total lack of evidence in support of claim that men and women are equally love-crazy… check. Oblique reference to Star Trek universe… check. What’s not to like?

A man in hot pursuit of a mate doesn’t even remotely resemble a devoted, doting daddy. But that’s what his future holds. When his mate becomes pregnant, she’ll emit pheromones that will waft into his nostrils, stimulating his brain to make more of a hormone called prolactin. Her pheromones will also cause his testosterone production to drop by 30 percent.

You know, on the off-chance that something like this is actually true, I think it’s actually kind of neat. But I just can’t bring myself to do a literature search, because I’m pretty sure I’ll discover that the jury is still out on whether humans even emit and detect pheromones (ok, I know this isn’t a completely baseless claim), or that there’s little to no evidence of a causal relationship between women releasing pheromones and testosterone levels dropping in men. I don’t like to be disappointed, you see; it turns out it’s much easier to just decide what you want to believe ahead of time and then contort available evidence to fit that view.

Anyway, we’re only half-way through the article; Brizendine goes on in similar fashion for several hundred more words. Highlights include the origin of the male poker face, the conflation of correlation and causation in sociable elderly men, and the effects of oxytocin on your grandfather. You should go read the rest of it if you practice masochism; I’m too depressed to write about it any more.

Setting aside the blatant exercise in irresponsible scientific communication (Brizendine has an MD, and appears to be at least nominally affiliated with UCSF’s psychiatry department, so ignorance shouldn’t really be a valid excuse here), I guess what I’d really like to know is what goes through Brizendine’s mind when she writes this sort of dreck. Does she really believe the ludicrous claims she makes? Is she fully aware she’s grossly distorting the empirical evidence if not outright confabulating, and is simply in it for the money? Or does she rationalize it as a case of the ends justifying the means, thinking the message she’s presenting is basically right, so it’s ok if a few of the details go missing in the process?

I understand that presenting scientific evidence in an accurate and entertaining manner is a difficult business, and many people who work hard at it still get it wrong pretty often (I make mistakes in my posts here all the time!). But many scientists still manage to find time in their busy schedules to write popular science books that present the science in an accessible way without having to make up ridiculous stories just to keep the reader entertained (Steven Pinker, Antonio Damasio, and Dan Gilbert are just a few of the first ones that spring to mind). And then there are amazing science writers like Carl Zimmer and David Dobbs who don’t necessarily have any professional training in the areas they write about, but still put in the time and energy to make sure they get the details right, and consistently write stories that blow me away (the highest compliment I can pay to a science story is that it makes me think “I wish I studied that”, and Zimmer’s articles routinely do that). That type of intellectual honesty is essential, because there’s really no point in going to the trouble of doing most scientific research if people get to disregard any findings they disagree with on ideological or aesthetic grounds, or can make up any evidence they like to fit their claims.

The sad thing is that Brizendine’s new book will probably sell more copies in its first year out than Carl Zimmer’s entire back catalogue. And it’s not going to sell all those copies because it’s a careful meditation on the subtle differences between genders that scientists have uncovered; it’s going to fly off the shelves because it basically regurgitates popular stereotypes about gender differences with a seemingly authoritative scientific backing. Instead of evaluating and challenging many of those notions with actual empirical data, people who read Brizendine’s work will now get to say “science proves it!”, making it that much more difficult for responsible scientists and journalists to tell the public what’s really true about gender differences.

You might say (or at least, Brizendine might say) that this is all well and good, but hopelessly naive and idealistic, and that telling an accurate story is always going to be less important than telling the public what it wants to hear about science, because the latter is the only way to ensure continued funding for and interest in scientific research. This isn’t that uncommon a sentiment; I’ve even heard a number of scientists who I otherwise have a great deal of respect for say something like this. But I think Brizendine’s work underscores the typical outcome of that type of reasoning: once you allow yourself to relax the standards for what counts as evidence, it becomes quite easy to rationalize almost any rhetorical abuse of science, and ultimately you abuse the public’s trust while muddying the waters for working scientists.

As with so many other things, I think Richard Feynman summed up this sentiment best:

I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you are maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.

For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of this work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing–and if they don’t want to support you under those circumstances, then that’s their decision.

No one doubts that men and women differ from one another, and the study of gender differences is an active and important area of psychology and neuroscience. But I can’t for the life of me see any merit in telling the public that men can’t stop thinking about breasts because they’re full of the beer-equivalent of two gallons of testosterone.

[Update 3/25: plenty of other scathing critiques pop up in the blogosphere today: Language Log, Salon, and Neuronarrative, and no doubt many others...]

Feynman’s first principle: on the virtue of changing one’s mind

As an undergraduate, I majored in philosophy. Actually, that’s not technically true: I came within one credit of double-majoring in philosophy and psychology, but I just couldn’t bring myself to take one more ancient philosophy course (a requirement for the major), so I ended up majoring in psychology and minoring in philosophy. But I still had to read a lot of philosophy, and one of my favorite works was Hilary Putnam’s Representation and Reality. The reason I liked it so much had nothing to do with the content (which, frankly, I remember nothing of), and everything to do with the introduction. Hilary Putnam was notorious for changing his mind about his ideas, a practice he defended this way in the introduction to Representation and Reality:

In this book I shall be arguing that the computer analogy, call it the “computational view of the mind,” or “functionalism,” or what you will, does not after all answer the question we philosophers (along with many cognitive scientists) want to answer, the question “What is the nature of mental states?” I am thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced. Strangely enough, there are philosophers who criticize me for doing this. The fact that I change my mind in philosophy has been viewed as a character defect. When I am lighthearted, I retort that it might be that I change my mind because I make mistakes, and that other philosophers don’t change their minds because they simply never make mistakes.

It’s a poignant way of pointing out the absurdity of a view that seemed to me at the time much too common in philosophy (and which, I’ve since discovered, is also fairly common in science): that changing your mind is a bad thing, and conversely, that maintaining a consistent position on important issues is a virtue. I’ve never really understood this, since, by definition, any time you have at least two people with incompatible views in the same room, the odds must be at least 50% that any given view expressed at random must be wrong. In science, of course, there are rarely just two explanations for a given phenomenon. Ask 10 cognitive neuroscientists what they think the anterior cingulate cortex does, and you’ll probably get a bunch of different answers (though maybe not 10 of them). So the odds of any one person being right about anything at any given point in time are actually not so good. If you’re honest with yourself about that, you’re forced to conclude not only that most published research findings are false, but also that the vast majority of theories that purport to account for large bodies of evidence are false–or at least, wrong in some important ways.

The fact that we’re usually wrong when we make scientific (or philosophical) pronouncements isn’t a reason to abandon hope and give up doing science, of course; there are shades of accuracy, and even if it’s not realistic to expect to be right much of the time, we can at least strive to be progressively less wrong. The best expression of this sentiment that I know of is an Isaac Asimov essay entitled The Relativity of Wrong. Asimov was replying to a letter from a reader who took offense to the fact that Asimov, in one of his other essays, “had expressed a certain gladness at living in a century in which we finally got the basis of the universe straight”:

The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern “knowledge” is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. “If I am the wisest man,” said Socrates, “it is because I alone know that I know nothing.” The implication was that I was very foolish because I was under the impression I knew a great deal.

My answer to him was, “John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

The point being that scientific progress isn’t predicated on getting it right, but on getting it more right. Which seems reassuringly easy, except that that still requires us to change our minds about the things we believe in on occasion, and that’s not always a trivial endeavor.

In the years since reading Putnam’s introduction, I’ve come across a number of other related sentiments. One comes  from Richard Dawkins, in a fantastic 1996 Edge talk:

A formative influence on my undergraduate self was the response of a respected elder statesman of the Oxford Zoology Department when an American visitor had just publicly disproved his favourite theory. The old man strode to the front of the lecture hall, shook the American warmly by the hand and declared in ringing, emotional tones: “My dear fellow, I wish to thank you. I have been wrong these fifteen years.” And we clapped our hands red. Can you imagine a Government Minister being cheered in the House of Commons for a similar admission? “Resign, Resign” is a much more likely response!

Maybe I’m too cynical, but I have a hard time imagining such a thing happening at any talk I’ve ever attended. But I’d like to believe that if it did, I’d also be clapping myself red.

My favorite piece on this theme, though, is without a doubt Richard Feynman’s 1974 “Cargo Cult Science” commencement address at Caltech. If you’ve never read it, you really should; it’s a phenomenally insightful, and simultaneously entertaining, assessment of the scientific process:

We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.

A little further along, Feynman is even more succinct, offering what I’d say might be the most valuable piece of scientific advice I’ve come across:

The first principle is that you must not fool yourself–and you are the easiest person to fool.

I really think this is the first principle, in that it’s the one I apply most often when analyzing data and writing up papers for publication. Am I fooling myself? Do I really believe the finding, irrespective of how many zeros the p value happens to contain? Or are there other reasons I want to believe the result (e.g., that it tells a sexy story that might make it into a high-impact journal) that might trump its scientific merit if I’m not careful? Decision rules abound in science–the most famous one in psychology being the magical p < .05 threshold. But it’s very easy to fool yourself into believing things you shouldn’t believe when you allow yourself to off-load your scientific conscience onto some numbers in a spreadsheet. And the more you fool yourself about something, the harder it becomes to change your mind later on when you come across some evidence that contradicts the story you’ve sold yourself (and other people).

Given how I feel about mind-changing, I suppose I should really be able to point to cases where I’ve changed my own mind about important things. But the truth is that I can’t think of as many as I’d like. Which is to say, I worry that the fact that I still believe so many of the things I believed 5 or 10 years ago means I must be wrong about most of them. I’d actually feel more comfortable if I changed my mind more often, because then at least I’d feel more confident that I was capable of evaluating the evidence objectively and changing my beliefs when change was warranted. Still, there are at least a few ideas I’ve changed my mind about, some of them fairly big ones. Here are a few examples of things I used to believe and don’t any more, for scientific reasons:

  • That libertarianism is a reasonable ideology. I used to really believe that people would be happiest if we all just butted out of each other’s business and gave each other maximal freedom to govern our lives however we see fit. I don’t believe that any more, because the weight of empirical evidence has convinced me that libertarianism just doesn’t (and can’t) work in practice, and is a worldview that doesn’t really have any basis in reality. When we’re given more information and more freedom to make our choices, we generally don’t make better decisions that make us happier; in fact, we often make poorer decisions that make us less happy. In general, human beings turn out to be really outstandingly bad at predicting the things that really make us happy–or even evaluating how happy the things we currently have make us. And the notion of personal responsibility that libertarians stress turns out to have very limited applicability in practice, because so many of the choices we make aren’t under our direct control in any meaningful sense (e.g., because the bulk of the variance in our cognitive abilities and personalities is inherited from our parents, or because subtle contextual cues influence our choices without our knowledge, and often, to our detriment). So in the space of just a few years, I’ve gone from being a libertarian to basically being a raving socialist. And I’m not apologetic about that, because I think it’s what the data support.
  • That we should stress moral education when raising children. The reason I don’t believe this any more is much the same as the above: it turns out that children aren’t blank slates to be written on as we see fit. The data clearly show that post-conception, parents have very limited capacity to influence their children’s behavior or personality. So there’s something to be said for trying to provide an environment that makes children basically happy rather than one that tries to mould them into the morally upstanding little people they’re almost certain to turn into no matter what we do or don’t do.
  • That DLPFC is crucially involved in some specific cognitive process like inhibition or maintenance or manipulation or relational processing or… you name it. At various points in time, I’ve believed a number of these things. But for reasons I won’t go into, I now think the best characterization is something very vague and non-specific like “abstract processing” or “re-representation of information”. That sounds unsatisfying, but no one said the truth had to be satisfying on an intuitive level. And anyway, I’m pretty sure I’ll change my view about this many more times in future.
  • That there’s a general factor of intelligence. This is something I’ve been meaning to write about here for a while now (UPDATE: and I have now, here), and will hopefully get around to soon. But if you want to know why I don’t think g is real, read this explanation by Cosma Shalizi, which I think presents a pretty open-and-shut case.

That’s not a comprehensive list, of course; it’s just the first few things I could think of that I’ve changed my mind about. But it still bothers me a little bit that these are all things that I’ve never taken a public position on in any published article (or even on this blog). After all, it’s easy to change your mind when no one’s watching. Ego investment usually stems from telling other people what you believe, not from thinking out loud to yourself when you’re pacing around the living room. So I still worry that the fact I’ve never felt compelled to say “I used to think… but I now think” about any important idea I’ve asserted publicly means I must be fooling myself. And if there’s one thing that I unfailingly believe, it’s that I’m the easiest person to fool…

[For another take on the virtues of mind-changing, see Mark Crislip's "Changing Your Mind", which provided the impetus for this post.]

the genetics of dog hair

Aside from containing about eleventy hundred papers on Ardi–our new 4.4 million year-old ancestor–this week’s issue of Science has an interesting article on the genetics of dog hair. What is there to know about dog hair, you ask? Well, it turns out that nearly all of the phenotypic variation in dog coats (curly, shaggy, short-haired, etc.) is explained by recent mutations in just three genes. It’s another beautiful example of how complex phenotypes can emerge from relatively small genotypic differences. I’d tell you much more about it, but I’m very busy right now. For more explanation, see here, here, and here (you’re free to ignore the silly headline of that last article). Oh, and here’s a key figure from the paper. I’ve heard that a picture is worth a thousand words, which effectively makes this a 1200-word post. All this writing is hurting my brain, so I’ll stop now.

[Figure from the paper: a tale of dogs, their coats, and three genetic mutations]