As an undergraduate, I majored in philosophy. Actually, that’s not technically true: I came within one credit of double-majoring in philosophy and psychology, but I just couldn’t bring myself to take one more ancient philosophy course (a requirement for the major), so I ended up majoring in psychology and minoring in philosophy. But I still had to read a lot of philosophy, and one of my favorite works was Hilary Putnam’s Representation and Reality. The reason I liked it so much had nothing to do with the content (which, frankly, I remember nothing of), and everything to do with the introduction. Hilary Putnam was notorious for changing his mind about his ideas, a practice he defended this way in the introduction to Representation and Reality:
In this book I shall be arguing that the computer analogy, call it the “computational view of the mind,” or “functionalism,” or what you will, does not after all answer the question we philosophers (along with many cognitive scientists) want to answer, the question “What is the nature of mental states?” I am thus, as I have done on more than one occasion, criticizing a view I myself earlier advanced. Strangely enough, there are philosophers who criticize me for doing this. The fact that I change my mind in philosophy has been viewed as a character defect. When I am lighthearted, I retort that it might be that I change my mind because I make mistakes, and that other philosophers don’t change their minds because they simply never make mistakes.
It’s a poignant way of pointing out the absurdity of a view that seemed to me at the time much too common in philosophy (and, I’ve since discovered, is also fairly common in science): that changing your mind is a bad thing, and conversely, that maintaining a consistent position on important issues is a virtue. I’ve never really understood this, since, by definition, any time you have at least two people with incompatible views in the same room, the odds are at least 50% that any given view expressed at random is wrong. In science, of course, there are rarely just two explanations for a given phenomenon. Ask 10 cognitive neuroscientists what they think the anterior cingulate cortex does, and you’ll probably get a bunch of different answers (though maybe not 10 of them). So the odds of any one person being right about anything at any given point in time are actually not so good. If you’re honest with yourself about that, you’re forced to conclude not only that most published research findings are false, but also that the vast majority of theories that purport to account for large bodies of evidence are false–or at least, wrong in some important ways.
The fact that we’re usually wrong when we make scientific (or philosophical) pronouncements isn’t a reason to abandon hope and give up doing science, of course; there are shades of accuracy, and even if it’s not realistic to expect to be right much of the time, we can at least strive to be progressively less wrong. The best expression of this sentiment that I know of is an Isaac Asimov essay entitled “The Relativity of Wrong”. Asimov was replying to a letter from a reader who took offense at the fact that Asimov, in one of his other essays, “had expressed a certain gladness at living in a century in which we finally got the basis of the universe straight”:
The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern “knowledge” is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. “If I am the wisest man,” said Socrates, “it is because I alone know that I know nothing.” The implication was that I was very foolish because I was under the impression I knew a great deal.
My answer to him was, “John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”
The point being that scientific progress isn’t predicated on getting it right, but on getting it more right. Which seems reassuringly easy, except that it still requires us to occasionally change our minds about the things we believe in, and that’s not always a trivial endeavor.
In the years since reading Putnam’s introduction, I’ve come across a number of other related sentiments. One comes from Richard Dawkins, in a fantastic 1996 Edge talk:
A formative influence on my undergraduate self was the response of a respected elder statesman of the Oxford Zoology Department when an American visitor had just publicly disproved his favourite theory. The old man strode to the front of the lecture hall, shook the American warmly by the hand and declared in ringing, emotional tones: “My dear fellow, I wish to thank you. I have been wrong these fifteen years.” And we clapped our hands red. Can you imagine a Government Minister being cheered in the House of Commons for a similar admission? “Resign, Resign” is a much more likely response!
Maybe I’m too cynical, but I have a hard time imagining such a thing happening at any talk I’ve ever attended. But I’d like to believe that if it did, I’d also be clapping myself red.
My favorite piece on this theme, though, is without a doubt Richard Feynman’s “Cargo Cult Science”, his 1974 commencement address at Caltech. If you’ve never read it, you really should; it’s a phenomenally insightful, and simultaneously entertaining, assessment of the scientific process:
We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.
A little further along, Feynman is even more succinct, offering what I’d say might be the most valuable piece of scientific advice I’ve come across:
The first principle is that you must not fool yourself–and you are the easiest person to fool.
I really think this is the first principle, in that it’s the one I apply most often when analyzing data and writing up papers for publication. Am I fooling myself? Do I really believe the finding, irrespective of how many zeros the p value happens to contain? Or are there other reasons I want to believe the result (e.g., that it tells a sexy story that might make it into a high-impact journal) that might trump its scientific merit if I’m not careful? Decision rules abound in science–the most famous one in psychology being the magical p < .05 threshold. But it’s very easy to fool yourself into believing things you shouldn’t believe when you allow yourself to off-load your scientific conscience onto some numbers in a spreadsheet. And the more you fool yourself about something, the harder it becomes to change your mind later on when you come across some evidence that contradicts the story you’ve sold yourself (and other people).
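The hazard of off-loading your scientific conscience onto a p < .05 threshold is easy to make concrete. As a purely illustrative sketch (not anything from the original post), here’s a short simulation of the standard statistical fact that p-values are uniformly distributed when the null hypothesis is true, so the threshold alone produces a steady stream of spurious “findings” once you run enough tests:

```python
import random

random.seed(42)

# Under a true null hypothesis, p-values are uniformly distributed on [0, 1],
# so a p < .05 cutoff declares an "effect" 5% of the time even when nothing
# is there.
N = 100_000
false_positives = sum(1 for _ in range(N) if random.random() < 0.05)
print(f"False-positive rate for a single test: {false_positives / N:.3f}")  # ≈ 0.05

# But if you run 20 analyses (or peek at the data 20 ways) and report
# whatever clears the threshold, the chance of at least one spurious
# "discovery" climbs dramatically.
experiments = 100_000
tests_per_experiment = 20
at_least_one = sum(
    1 for _ in range(experiments)
    if any(random.random() < 0.05 for _ in range(tests_per_experiment))
)
print(f"P(at least one false positive in 20 tests): {at_least_one / experiments:.3f}")  # ≈ 0.64
```

The second number follows directly from 1 − 0.95²⁰ ≈ 0.64: a threshold that feels stringent for one test offers almost no protection against a motivated researcher running twenty of them.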
Given how I feel about mind-changing, I suppose I should really be able to point to cases where I’ve changed my own mind about important things. But the truth is that I can’t think of as many as I’d like. Which is to say, I worry that the fact that I still believe so many of the things I believed 5 or 10 years ago means I must be wrong about most of them. I’d actually feel more comfortable if I changed my mind more often, because then at least I’d feel more confident that I was capable of evaluating the evidence objectively and changing my beliefs when change was warranted. Still, there are at least a few ideas I’ve changed my mind about, some of them fairly big ones. Here are a few examples of things I used to believe and don’t any more, for scientific reasons:
- That libertarianism is a reasonable ideology. I used to really believe that people would be happiest if we all just butted out of each other’s business and gave each other maximal freedom to govern our lives however we see fit. I don’t believe that any more, because the weight of empirical evidence has convinced me that libertarianism just doesn’t (and can’t) work in practice, and is a worldview that doesn’t really have any basis in reality. When we’re given more information and more freedom to make our choices, we generally don’t make better decisions that make us happier; in fact, we often make poorer decisions that make us less happy. In general, human beings turn out to be really outstandingly bad at predicting the things that really make us happy–or even evaluating how happy the things we currently have make us. And the notion of personal responsibility that libertarians stress turns out to have very limited applicability in practice, because so many of the choices we make aren’t under our direct control in any meaningful sense (e.g., because the bulk of the variance in our cognitive abilities and personalities is inherited from our parents, or because subtle contextual cues influence our choices without our knowledge, and often, to our detriment). So in the space of just a few years, I’ve gone from being a libertarian to basically being a raving socialist. And I’m not apologetic about that, because I think it’s what the data support.
- That we should stress moral education when raising children. The reason I don’t believe this any more is much the same as the above: it turns out that children aren’t blank slates to be written on as we see fit. The data clearly show that post-conception, parents have very limited capacity to influence their children’s behavior or personality. So there’s something to be said for trying to provide an environment that makes children basically happy rather than one that tries to mold them into the morally upstanding little people they’re almost certain to turn into no matter what we do or don’t do.
- That DLPFC is crucially involved in some specific cognitive process like inhibition or maintenance or manipulation or relational processing or… you name it. At various points in time, I’ve believed a number of these things. But for reasons I won’t go into, I now think the best characterization is something very vague and non-specific like “abstract processing” or “re-representation of information”. That sounds unsatisfying, but no one said the truth had to be satisfying on an intuitive level. And anyway, I’m pretty sure I’ll change my view about this many more times in the future.
- That there’s a general factor of intelligence. This is something I’ve been meaning to write about here for a while now (UPDATE: and I have now, here), and will hopefully get around to soon. But if you want to know why I don’t think g is real, read this explanation by Cosma Shalizi, which I think presents a pretty open-and-shut case.
That’s not a comprehensive list, of course; it’s just the first few things I could think of that I’ve changed my mind about. But it still bothers me a little bit that these are all things that I’ve never taken a public position on in any published article (or even on this blog). After all, it’s easy to change your mind when no one’s watching. Ego investment usually stems from telling other people what you believe, not from thinking out loud to yourself when you’re pacing around the living room. So I still worry that the fact I’ve never felt compelled to say “I used to think… but I now think” about any important idea I’ve asserted publicly means I must be fooling myself. And if there’s one thing that I unfailingly believe, it’s that I’m the easiest person to fool…
[For another take on the virtues of mind-changing, see Mark Crislip’s “Changing Your Mind”, which provided the impetus for this post.]