we, the people, who make mistakes–economists included

Andrew Gelman discusses a “puzzle that’s been bugging [him] for a while”:

Pop economists (or, at least, pop micro-economists) are often making one of two arguments:

1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.

2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.

Argument 1 is associated with “why do they do that?” sorts of puzzles. Why do they charge so much for candy at the movie theater, why are airline ticket prices such a mess, why are people drug addicts, etc. The usual answer is that there’s some rational reason for what seems like silly or self-destructive behavior.

Argument 2 is associated with “we can do better” claims such as why we should fire 80% of public-schools teachers or Moneyball-style stories about how some clever entrepreneur has made a zillion dollars by exploiting some inefficiency in the market.

The trick is knowing whether you’re gonna get 1 or 2 above. They’re complete opposites!

Personally what I find puzzling isn’t really how to reconcile these two strands (which do seem to somehow coexist quite peacefully in pop economists’ writings); it’s how anyone–economist or otherwise–still manages to believe people are rational in any meaningful sense (and I’m not saying Andrew does; in fact, see below).

There are at least two non-trivial ways to define rationality. One is in terms of an ideal agent’s actions–i.e., rationality is what a decision-maker would choose to do if she had unlimited cognitive resources and knew all the information relevant to a given decision. Well, okay, maybe not an ideal agent, but at the very least a very smart one. This is the sense of rationality in which you might colloquially remark to your neighbor that buying lottery tickets is an irrational thing to do, because the odds are stacked against you. The expected value of buying a lottery ticket (i.e., the amount you would expect to end up with in the long run) is generally negative, so in some normative sense, you could say it’s irrational to buy lottery tickets.
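To make the arithmetic concrete, here's a minimal sketch in Python. The numbers are made up for illustration (a $2 ticket, a $1,000,000 jackpot, 1-in-10,000,000 odds); real lotteries differ, but the expected value comes out negative either way:

```python
# Hypothetical lottery: a $2 ticket with a 1-in-10,000,000 chance
# of winning a $1,000,000 jackpot (illustrative numbers only).
ticket_price = 2.00
p_win = 1 / 10_000_000
jackpot = 1_000_000

# Expected value = probability-weighted payoff minus the cost of playing.
expected_value = p_win * jackpot - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")  # -$1.90
```

In the long run, each ticket costs you about $1.90, which is the normative sense in which buying one is "irrational."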

This definition of irrationality is probably quite close to the colloquial usage of the term, but it’s not really interesting from an academic standpoint, because nobody (economists included) really believes we’re rational in this sense. It’s blatantly obvious to everyone that none of us really make normatively correct choices much of the time. If for no other reason than we are all somewhat lacking in the omniscience department.

What economists mean when they talk about rationality is something more technical; specifically, it’s that people manifest stationary preferences. That is, given any set of preferences an individual happens to have (which may seem completely crazy to everyone else), rationality implies that that person expresses those preferences in a consistent manner. If you like dark chocolate more than milk chocolate, and milk chocolate more than Skittles, you shouldn’t like Skittles more than dark chocolate. If you do, you’re violating the principle of transitivity, which would effectively make it impossible to model your preferences formally (since we’d have no way of telling what you’d prefer in any given situation). And that would be a problem for standard economic theory, which is based on the assumption that people are fundamentally rational agents (in this particular sense).
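The transitivity check described above is easy to sketch in code. Everything here (the items, the `prefs` mapping, the `is_transitive` helper) is a hypothetical illustration, not anything from the economics literature:

```python
# Pairwise preferences: prefs[(a, b)] = True means a is preferred to b.
# This particular set deliberately violates transitivity.
prefs = {
    ("dark chocolate", "milk chocolate"): True,
    ("milk chocolate", "skittles"): True,
    ("skittles", "dark chocolate"): True,  # the inconsistent choice
}

def is_transitive(prefs):
    """Check that a > b and b > c implies a > c for all stated pairs."""
    better = {item: set() for pair in prefs for item in pair}
    for (a, b), preferred in prefs.items():
        if preferred:
            better[a].add(b)
    for a in better:
        for b in better[a]:
            for c in better[b]:
                if c not in better[a]:
                    return False  # a > b and b > c, but not a > c
    return True

print(is_transitive(prefs))  # False
```

A decision-maker with this preference set can't be assigned any consistent utility ordering over the three items, which is exactly why the formal modeling breaks down.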

The reason I say it’s puzzling that anyone still believes people are rational in even this narrower sense is that decades of behavioral economics and psychology research have repeatedly demonstrated that people just don’t have consistent preferences. You can radically influence and alter decision-makers’ behavior in all sorts of ways that simply aren’t predicted or accounted for by Rational Choice Theory (RCT). I’ll give just two examples here, but there are any number of others, as many excellent books attest (e.g., Dan Ariely’s Predictably Irrational, or Thaler and Sunstein’s Nudge).

The first example stems from famous work by Madrian and Shea (2001) investigating the effects of savings plan designs on employees’ 401(k) choices. By pretty much anyone’s account, decisions about savings plans should be a pretty big deal for most employees. The difference between opting into a 401(k) and opting out of one can easily amount to several hundred thousand dollars over the course of a lifetime, so you would expect people to have a huge incentive to make the choice that’s most consistent with their personal preferences (whether those preferences happen to be for splurging now or saving for later). Yet what Madrian and Shea convincingly showed was that most employees simply go with the default plan option. When companies switch from opt-in to opt-out (i.e., instead of calling up HR and saying you want to join the plan, you’re enrolled by default, and have to fill out a form if you want to opt out), nearly 50% more employees end up enrolled in the 401(k).

This result (and any number of others along similar lines) makes no sense under rational choice theory, because it’s virtually impossible to conceive of a consistent set of preferences that would explain this type of behavior. Many of the same employees who won’t take ten minutes out of their day to opt in or out of their 401(k) will undoubtedly drive across town to save a few dollars on their groceries; like most people, they’ll look for bargains, buy cheaper goods rather than more expensive ones, worry about leaving something for their children after they’re gone, and so on and so forth. And one can’t simply attribute the discrepancy in behavior to ignorance (i.e., “no one reads the fine print!”), because the whole point of massive incentives is that they’re supposed to incentivize you to do things like look up information that could be relevant to, oh, say, having hundreds of thousands of extra dollars in your bank account in forty years. If you’re willing to look for coupons in the Sunday paper to save a few dollars, but aren’t willing to call up HR and ask about your savings plan, there is, to put it frankly, something mildly inconsistent about your preferences.

The other example stems from the enormous literature on risk aversion. The classic risk aversion finding is that most people require a higher nominal payoff on risky prospects than on safe ones before they’re willing to accept the risky prospect. For instance, most people would rather have $10 for sure than $50 with 25% probability, even though the expected value of the latter is 25% higher (an amazing return!). Risk aversion is a pervasive phenomenon, and crops up everywhere, including in financial investing, where it underlies the equity premium puzzle (the puzzle being that many investors prefer bonds to stocks even though the historical record suggests a massively higher rate of return for stocks over the long term).

From a naive standpoint, you might think the challenge risk aversion poses to rational choice theory is that risk aversion is just, you know, stupid. Meaning, if someone keeps offering you $10 with 100% probability or $50 with 25% probability, it’s stupid to keep making the former choice (which is what most people do when you ask them) when you’re going to make much more money by making the latter choice. But again, remember, economic rationality isn’t about preferences per se, it’s about consistency of preferences. Risk aversion may violate a simplistic theory under which people are supposed to simply maximize expected value at all times; but then, no one’s really believed that for several hundred years. The standard economist’s response to the observation that people are risk averse is to observe that people aren’t maximizing expected value, they’re maximizing utility. Utility has a non-linear relationship with expected value, so that people assign different weight to the (N+1)th dollar earned than to the Nth dollar earned. For instance, the classical value function identified by Kahneman and Tversky in their seminal work (for which Kahneman won the Nobel prize in part) is an S-shaped curve: concave for gains, convex for losses, and steeper for losses than for gains.

The idea here is that the average person overvalues small gains relative to larger gains; i.e., you may be more satisfied when you receive $200 than when you receive $100, but you’re not going to be twice as satisfied.
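A sketch of such a value function, using parameters in the neighborhood of those Kahneman and Tversky estimated (α = β = 0.88, λ = 2.25; treat the exact numbers as illustrative rather than canonical):

```python
# Stylized Kahneman-Tversky value function (illustrative parameters).
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x):
    """Subjective value of a gain or loss of x dollars."""
    if x >= 0:
        return x ** ALPHA                 # concave for gains
    return -LAMBDA * ((-x) ** BETA)       # convex, and steeper, for losses

# Diminishing sensitivity: $200 feels better than $100, but not twice as good.
print(value(200) / value(100) < 2)        # True
# Loss aversion: losing $100 hurts more than gaining $100 feels good.
print(abs(value(-100)) > value(100))      # True
```

Nothing here requires that people maximize expected dollars; it only requires that the mapping from dollars to subjective value be applied consistently.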

This seemed like a sufficient response for a while, since it appears to preserve consistency as the hallmark of rationality. The idea is that you can have people who have more or less curvature in their value and probability weighting functions (i.e., some people are more risk averse than others), and that’s just fine as long as those preferences are consistent. Meaning, it’s okay if you prefer $50 with 25% probability to $10 with 100% probability just as long as you also prefer $50 with 25% probability to $8 with 100% probability, or to $7 with 100% probability, and so on. So long as your preferences are consistent, your behavior can be explained by RCT.

The problem, as many people have noted, is that in actuality there isn’t any set of consistent preferences that can explain most people’s risk averse behavior. A succinct and influential summary of the problem was provided by Rabin (2000), who showed formally that the choices people make when dealing with small amounts of money imply such an absurd level of risk aversion that the only way for them to be consistent would be to reject uncertain prospects with an infinitely large payoff even when the certain payoff was only modestly larger. Put differently,

if a person always turns down a 50-50 lose $100/gain $110 gamble, she will always turn down a 50-50 lose $800/gain $2,090 gamble. … Somebody who always turns down 50-50 lose $100/gain $125 gambles will turn down any gamble with a 50% chance of losing $600.

The reason for this is simply that any concave function that crosses the points expressed by the low-magnitude prospects (e.g., a refusal to take a 50-50 bet with lose $100/gain $110 outcomes) will have to asymptote fairly quickly. So for people to have internally consistent preferences, they would literally have to be turning down infinite but uncertain payoffs for certain but modest ones. Which of course is absurd; in practice, you would have a hard time finding many people who would refuse a coin toss where they lose $600 on heads and win $$$infinity dollarz$$$ on tails. Though you might have a very difficult time convincing them you’re serious about the bet. And an even more difficult time finding infinity trucks with which to haul in those infinity dollarz in the event you win.
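Rabin's theorem holds for any concave utility function and requires rejection of the small gamble at all wealth levels, but the flavor of the argument can be illustrated numerically by assuming (purely for this sketch) CRRA utility and a fixed wealth level of $20,000:

```python
def u(w, rho):
    """CRRA utility; for rho > 1 it is negative and bounded above by zero."""
    return w ** (1 - rho) / (1 - rho)

def eu_gamble(wealth, loss, gain, rho):
    """Expected utility of a 50-50 lose-`loss` / gain-`gain` coin flip."""
    return 0.5 * u(wealth - loss, rho) + 0.5 * u(wealth + gain, rho)

WEALTH = 20_000.0

# Bisect for the curvature that makes the agent exactly indifferent to
# a 50-50 lose $100 / gain $110 gamble at this wealth level.
lo, hi = 1.001, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if eu_gamble(WEALTH, 100, 110, mid) > u(WEALTH, mid):
        lo = mid  # still accepts the gamble: needs more curvature
    else:
        hi = mid
rho = (lo + hi) / 2
print(f"implied risk aversion: rho = {rho:.1f}")  # roughly 18 here

# Because u is bounded above, no upside can offset the certain-side loss:
# the same agent turns down a 50-50 lose $800 / gain $1 trillion bet.
print(eu_gamble(WEALTH, 800, 1e12, rho) < u(WEALTH, rho))  # True
```

The implied curvature needed to explain refusing a trivial $100/$110 bet is so extreme that it swamps arbitrarily large upsides, which is Rabin's point.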

Anyway, these are just two prominent examples; there are literally hundreds of other similar examples in the behavioral economics literature of supposedly rational people displaying wildly inconsistent behavior. And not just a minority of people; it’s pretty much all of us. Presumably including economists. Irrationality, as it turns out, is the norm and not the exception. In some ways, what’s surprising is not that we’re inconsistent, but that we manage to do so well despite our many biases and failings.

To return to the puzzle Andrew Gelman posed, though, I suspect Andrew’s being facetious, and doesn’t really see this as much of a puzzle at all. Here’s his solution:

The key, I believe, is that “rationality” is a good thing. We all like to associate with good things, right? Argument 1 has a populist feel (people are rational!) and argument 2 has an elitist feel (economists are special!). But both are ways of associating oneself with rationality. It’s almost like the important thing is to be in the same room with rationality; it hardly matters whether you yourself are the exemplar of rationality, or whether you’re celebrating the rationality of others.

This seems like a somewhat more tactful way of saying what I suspect Andrew and many other people (and probably most academic psychologists, myself included) already believe, which is that there isn’t really any reason to think that people are rational in the sense demanded by RCT. That’s not to say economics is bunk, or that it doesn’t make sense to think about incentives as a means of altering behavior. Obviously, in a great many situations, pretending that people are rational is a reasonable approximation to the truth. For instance, in general, if you offer more money to have a job done, more people will be willing to do that job. But the fact that the tenets of standard economics often work shouldn’t blind us to the fact that they also often don’t, and that they fail in many systematic and predictable ways. For instance, sometimes paying people more money makes them perform worse, not better. And sometimes it saps them of the motivation to work at all. Faced with overwhelming empirical evidence that people don’t behave as the theory predicts, the appropriate response should be to revisit the theory, or at least to recognize which situations it should be applied in and which it shouldn’t.

Anyway, that’s a long-winded way of saying I don’t think Andrew’s puzzle is really a puzzle. Economists simply don’t express their own preferences and views about consistency consistently, and it’s not surprising, because neither does anyone else. That doesn’t make them (or us) bad people; it just makes us all people.

to each their own addiction

An only slightly fictionalized story, for my long-suffering wife.

“It’s happening again,” I tell my wife from the couch. “I’m having that soul-crushing experience again.”

“Too much work?” she asks, expecting the answer to be yes, since no matter what quantity of work I’m actually burdened with at any given moment, the way I describe it to other people when they ask is always “too much.”

“No,” I say. “Work is fine right now.”

“Had a paper rejected?”

“Pfft, no,” I say. “Like that ever happens to me!” (I don’t tell her it’s happened to me twice in the past week.)

“Then what?”

“The blog posts,” I tell her, motioning to my laptop screen. “There’s just too many of them in my Reader. I can’t keep up! I’m drowning in RSS feeds!”

My wife has learned not to believe anything I say, ever; we’ve lived together long enough that her modal response to my complaints is an arched eyebrow. So I flip my laptop around and point at the gigantic bolded text in the corner that says All Items (118). Emotionally gigantic, I mean; physically, I think it’s only like 12 point font.

“One hundred and eighteen blog posts!” I yell at absolutely no one. “I’m going to be here all night!”

“That’s because you live here,” she helpfully points out.

I’m not sure exactly when I became enslaved by my blog feeds. I know it was sometime after Carl Zimmer’s amazing post about the man-eating fireflies of Sri Lanka, and sometime before the Neuroskeptic self-published his momentous report introducing three entirely new mental health diagnoses. But that’s as much as I can tell you; the rest is lost in a haze of rapid-scrolling text, retweeted links, and never-ending comment threads. There’s no alarm bell that sounds out loud to indicate that you’ve stomped all over the line that separates occasional indulgence from outright “I can quit any time, honest!” abuse. No one shows up at your door, hands you a bucket of Skittles, and says, “congratulations! You’re hooked on feeds!”

The thought of all those unread posts piling up causes me to hyperventilate. My wife, who sits unperturbed in her chair as 1,000+ unread articles pile up in her Reader, stares at me with a mixture of bemusement and horror.

“Let’s go for a walk,” she suggests, making a completely transparent effort to distract me from my immense problems.

Going for a walk is, of course, completely out of the question; I still have 118 blog posts to read before I can do anything else. So I read all 118 posts, which turns out not to take all night, but more like 15 minutes (I have a very loose definition of reading; it’s closer to what other people call ‘seeing’). By the time I’ve done that, the internet has written another 8 new articles, so now I feel compelled to read those too. So I do that, and then I hit refresh again, and lo and behold, there are 2 MORE articles. So I grudgingly read those as well, and then I quickly shut my laptop so that no new blog posts can sneak up on me while I’m off hanging out in Microsoft Word pretending to do work.

Screw this, I think after a few seconds, and run to find my wife.

“Come on, let’s go for that walk,” I say, running as fast as I can towards my sandals.

“What’s the big rush,” she asks. “I want to go walking, not jogging; I already went to the gym today.”

“No choice,” I say. “We have to get back before the posts pile up again.”

“What?”

“I said, I have a lot of work to do.”

So we go out walking, and it’s nice and all that; the temperature is probably around 70 degrees; it’s cool and dry and the sun’s just going down; the ice cream carts are out in force on the Pearl Street mall; the jugglers juggle and the fire eaters eat fire and give themselves cancer; a little kid falls down and skins his knee but gets up and laughs like it didn’t even hurt, which it probably didn’t, because everyone knows children under seven years of age don’t have a central nervous system and can’t feel pain. It’s a really nice walk, and I’m happy we’re on it, but the whole time I keep thinking, How many dozens of posts has PZ Myers put up while I’ve been gone? Are Razib Khan and Ed Yong posting their link dumps as I think this? And what’s the over-under on the number of posts in my ‘cog blogs’ folder?

She sees me doing all this of course, and she’s not happy about it. So she lets me know it.

“I’m not happy about this,” she says.

When we get back, we each head back to our respective computer screens. I’m relieved to note that the internet’s only made 11 more deliveries, which I promptly review and discharge. I star two posts for later re-consideration and let the rest disappear into the ether of spent words. Then I open up a manuscript I’ve been working on for a while and pretend to do some real work for a couple of hours. With periodic edutainment breaks, of course.

Around 11:30 pm I decide to close up shop for the night. No one really blogs after about 9 pm, which is fortunate, or I’d never get any sleep. It’s also the reason I avoid subscribing to European blogs if I can help it. Europeans have no respect for Mountain Time.

“Are you coming to bed,” I ask my wife.

“Not yet,” she says, looking guilty and avoiding eye contact.

“Why not? You have work to do?”

“Nope, no work.”

“Cooking? Are you making a fancy meal for dinner tomorrow?”

“No, it’s your turn to cook tomorrow,” she says, knowing full well that my idea of cooking consists of a take-out menu and a telephone.

“Then what?”

She opens her mouth, but nothing comes out. The words are all jammed tightly in between her vocal cords.

Then I see it, poking out on the couch from under a pillow: green cover, 9 by 6 inches, 300 pages long. It’s that damn book!

“You’re reading Pride and Prejudice again,” I say. It’s an observation, not a question.

“No I’m not.”

“Yes you are. You’re reading that damn book again. I know it. I can see it. It’s right there.” I point at it, just so that there can’t possibly be any ambiguity about which book I’m talking about.

She gazes around innocently, looking at everything but the book.

“What is that, like the fourteenth time this year you’ve read it?”

“Twelfth,” she says, looking guilty. “But really, go to bed without me; I might be up for a while still. I have another fifty pages or so I need to finish before I can go to sleep. I just have to find out if Elizabeth Bennet and Mr. Darcy end up together.”

I look at her mournfully, quietly shut my laptop’s lid, and bid the both of them–wife and laptop–good night. My wife grudgingly nods, but doesn’t look away from Jane Austen’s pages. My RSS feeds don’t say anything either.

“Yes,” I mumble to no one in particular, as I slowly climb up the stairs and head for my toothbrush.

“Yes, they do end up together.”