Fly less, give more

Russ Poldrack writes that he will be flying less:

I travel a lot – I have almost 1.3 million lifetime miles on United Airlines, and in the last few years have regularly flown over 100,000 miles per year. This travel has definitely helped advance my scientific career, and has been in many ways deeply fulfilling and enlightening. However, the toll has been weighing on me and Miles’ article really pushed me over the edge towards action. I used the Myclimate.org carbon footprint calculator to compute the environmental impact of my flights just for the first half of 2019, and it was mind-boggling: more than 23 tons of CO2. For comparison, my entire household’s yearly carbon footprint (estimated using https://www3.epa.gov/carbon-footprint-calculator/) is just over 10 tons!

For these reasons, I am committing to eliminate (to the greatest degree possible) academic air travel for the foreseeable future. That means no air travel for talks, conferences, or meetings — instead participating by telepresence whenever possible.

I’m sympathetic to the sentiment, and have considered reducing my own travel on a couple of occasions. So far, I’ve decided not to. It’s not that I disagree with Russ’s reasoning; I’m very much on board with the motivation for cutting travel, and I think we should all do our part to help avert, or at least mitigate, the looming climate crisis. The question for me is how best to go about that. While I haven’t entirely ruled out cutting down on flying in the future, it’s not something I’m terribly eager to do. Travel is one of the most rewarding and fulfilling parts of my job, and I’m loath to stop flying unless there’s no other way to achieve the same or better outcomes.

Fortunately, there are other things one can do to try to keep the planet nice and habitable for all of us. For my part, I’ve decided that, rather than cutting back on travel, I’m going to give some money to charity. And in the next ~4,000 words, I’m going to try to convince you to consider doing the same (at least the giving to charity part; I’m not going to try to argue you out of not flying).

Now, I know what you’re thinking at this point. You’re probably thinking, what does he mean, ‘charity’? Is he talking about carbon offsets? He’s talking about carbon offsets, isn’t he! This dude is about to write a 4,000-word essay telling me I should buy carbon offsets! Like I don’t already know they’re a gigantic scam. No thanks.

Congratulations, my friend—you’re right. I do mean carbon offsets.

Well, kind of.

What I’ll really try to convince you of is that, while carbon offsets are actually a perfectly reasonable thing for a person to purchase, the idea of "offsetting" one’s lifestyle choices is probably not the most helpful way to think about the more general fight against climate change. But carbon offsets as they’re usually described—i.e., as promissory notes you can purchase from organizations that claim to suck a certain amount of carbon out of the atmosphere by planting trees or engaging in other similarly hippie activities—are a good place to start, because pretty much everyone’s heard of them. So let me start by rebutting what I see as the two (or really three) most common arguments against offsets; in the process, it’ll hopefully become clear why offsets are a bit of a red herring, best understood as just a special case of a much more general principle. The general principle being: if you want to save yourself a bunch of reading, do a little bit of research, and then give as much as you comfortably can.

Offsets as indulgences

Carbon offsets are frequently criticized on the grounds that they’re either (a) ineffective, or (b) morally questionable—amounting to a form of modern-day indulgence. I can’t claim any particular expertise in either climate science or Catholicism, but I can say that nothing I’ve read has convinced me of either argument.

Let’s take the indulgence argument first. Superficially, it may seem like carbon offsets are just a way for well-off people to buy their way out of their sins and into heaven (or across the Atlantic—whichever comes first). But, as David Roberts observed in an older Grist piece, there are some fairly important differences between the two things that make the analogy… not such a great one:

If there really were such a thing as sin, and there was a finite amount of it in the world, and it was the aggregate amount of sin that mattered rather than any individual’s contribution, and indulgences really did reduce aggregate sin, then indulgences would have been a perfectly sensible idea.

Roberts’s point is that when someone opts to buy carbon offsets before they get on a plane, the world still benefits from any resulting reduction in carbon release—it’s not like the money simply vanishes into the church’s coffers, never to be seen again, while the newly guilt-relieved traveler gets to go on their merry way. Maybe it feels like people are just taking the easy way out, but so what? There are plenty of other situations in which people opt to give someone else money in order to save themselves some time and effort—or for some other arbitrarily unsavory reason—and we don’t get all moralistic and say y’know, it’s not real parenting if you pay for a babysitter to watch your kids while you’re out, or how dare you donate money to cancer charities just so people see you as a good person. We all have imperfect motives for doing many of the things we do, but if your moral orientation is even slightly utilitarian, you should be able to decouple the motive for performing an action from the anticipated consequences of that action.

As a prominent example, none of us know what thought process went through Bill and Melinda Gates’s heads in the lead-up to their decision to donate the vast majority of their wealth to the Gates Foundation. But suppose it was something like "we want the world to remember us as good people" rather than "we want the world to be a better place". Would anyone seriously argue that the Gateses shouldn’t have donated their wealth?

You can certainly argue that it’s better to do the right thing for the right reason than the right thing for the wrong reason, but to the extent that one views climate change as a battle for the survival of humanity (or some large subset of it), it seems pretty counterproductive to only admit soldiers into one’s army if they appear to have completely altruistic motives for taking up arms.

The argument from uncertainty

Then there’s the criticism that carbon offsets are ineffective. I think there are actually two variants of this argument—one from uncertainty, and one from inefficacy. The argument from uncertainty is that there’s just too much uncertainty associated with offset programs. That is, many people are understandably worried that when they donate their money to tree-planting or cookstove-purchasing programs, they can’t know for sure that their investment will actually lead to a reduction in global carbon emissions, whereas when they reduce their air travel, they at least know that they’ve saved one ticket’s worth of emissions.

Now, it’s obviously true that offsets can be ineffective—if you give a charity some money to reduce carbon, and that charity proceeds to blow all your money on advertising, squirrels it away in an executive’s offshore account, or plants a bunch of trees that barely suck up any carbon, then sure, you have a problem. But the fact that it’s possible to waste money giving to a particular cause doesn’t mean it’s inevitable. If it did, nobody would ever donate money to any charity, because huge inefficiencies are rampant. Similarly, there would be no basis for funding clean energy subsidies or emission-reducing technologies, seeing as the net long-term benefit of most such investments is virtually impossible to predict at the outset. Requiring certainty, or anything close to it, when seeking to do something good for the world is a solid recipe for doing almost nothing at all. Uncertainty about the consequences of our actions is just a fact of life, and there’s no reason to impose a higher standard here than in other areas.

Conversely, as intuitively appealing as the idea may be, trying to cut carbon by reducing one’s travel is itself very far from a sure thing. It would be nice to think that if the net carbon contribution of a given flight is estimated at X tons of CO2 per person, then the effect of a random person not getting on that flight is to reduce global CO2 levels by roughly X tons. But it doesn’t work quite that way. For one thing, it’s not as if abstaining from air travel instantly decreases the amount of carbon entering the atmosphere. Near term, the plane you would have traveled on is still going to take off, whether you’re on it or not. So if you decide to stay home, your action doesn’t actually benefit the environment in any way until such time as it (in concert with others’ actions) influences the broader air travel industry.

Will your actions eventually have a positive impact on the air travel industry? I don’t know. Probably. It seems reasonable to suppose that if a bunch of academics decide to stop flying, eventually, fewer planes will take off than otherwise would. What’s much less clear, though, is how many fewer. Will the effective CO2 savings be anywhere near the nominal figure that people like to float when estimating the impact of air travel—e.g., roughly 2 tons for a one-way transatlantic flight in economy? This I also don’t know, but it’s plausible to suppose they won’t. The reason is that your purchasing decisions don’t unfold in a vacuum. When an academic decides not to fly, United Airlines doesn’t say, "oh, I guess we have one less customer now." Instead, the airline—or really, its automated pricing system—says "I guess I’ll leave this low fare open a little bit longer". At a certain price point, the lower price will presumably induce someone to fly who otherwise wouldn’t.

Obviously, price elasticity has its limits. It may well be that, in the long term, the airlines can’t compensate for the drop in demand while staying solvent, and academics and other forward-thinking types get to take credit for saving the world. That’s possible. Alternatively, maybe it’s actually quite easy for airlines to create new, less conscientious air travelers by lowering prices a little bit, and so the only real product of choosing to stay home is that you develop a bad case of FOMO while your friends are all out having fun learning new things at the conference. Which of these scenarios (or anything in between) happens to be true depends on a number of strong assumptions that, in general, I don’t think most academics, or even economists, have a solid grasp on (I certainly don’t pretend to).
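
To make the shape of that worry concrete, here is a toy sketch in Python (with entirely made-up numbers; the rebound fraction is a hypothetical parameter for illustration, not an empirical estimate) of how the effective savings from skipping a flight shrink as airlines re-sell the freed seats at lower fares:

```python
# Toy rebound model: hypothetical numbers, for illustration only.
# nominal_savings_tons: the headline per-passenger figure (~2 tons of CO2 for
# a one-way transatlantic economy seat, per the estimates cited in the text).
# rebound_fraction: the (unknown) share of forgone seats that airlines
# eventually manage to re-sell by lowering fares.

nominal_savings_tons = 2.0

for rebound_fraction in (0.0, 0.25, 0.5, 0.75, 1.0):
    effective_savings = nominal_savings_tons * (1 - rebound_fraction)
    print(f"rebound {rebound_fraction:.0%}: effective savings ~{effective_savings:.1f} tons CO2")
```

If the rebound fraction is near zero, staying home saves close to the full nominal figure; if it is near one, the savings mostly evaporate. The point of the preceding paragraphs is simply that it’s unclear where on that spectrum we actually sit.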

To be clear, I’m not suggesting that the net impact of not flying is negative (that would surprise me), or that academics shouldn’t cut their air travel. I’m simply observing that there’s massive uncertainty about the effects of pretty much anything one could do to try to fight climate change. This doesn’t mean we should give up and do nothing (if uncertainty about the future were a reason not to do things, most of us would never leave our house in the morning), but it does mean that naive cause-and-effect intuitions of the if-I-get-on-fewer-planes-the-world-will-have-less-CO2 variety are perhaps not the best guide to effective action.

The argument from inefficacy

The other variant of the argument is a bit stronger, and is about inefficacy rather than uncertainty: here, the idea is not just that we can’t be sure that offsetting works; it’s that we actually have positive evidence that offset programs don’t do what they claim. In support of this argument, people like to point to articles like this, or this, or this—all of which make the case that many organizations that nominally offer to take one’s money and use it to pull some carbon out of the environment (or prevent it from being released) are just not cost-effective.

For what it’s worth, I find many of these articles pretty convincing, and, for the sake of argument, I’m happy to take what many of them say about specific mechanisms of putative carbon reduction as gospel truth. The thing is, the conclusion they support is not that trying to reduce carbon through charitable giving doesn’t work; it’s that it’s easy to waste your money by giving to the wrong organization. This doesn’t mean you have to put your pocketbook away and go home; it just means you might have to invest a bit of time researching the options before you can feel comfortable that there’s a reasonable (again, not a certain!) chance that your donation will achieve its intended purpose.

This observation shouldn’t be terribly troubling to most people. Most of us are already willing to spend some time researching options online before we buy, say, a television; there’s no reason why we shouldn’t expect to do the same thing when trying to use our money to help mitigate environmental disaster. Yet, in conversation, when I’ve asked my academic friends who express cynicism about the value of offsets how much time they’ve actually spent researching the issue, the answer is almost invariably "none" or "not much". I think this is a bit of an odd response from smart people with fancy degrees who I know spend much of their waking life thinking deeply about complex issues. Academics, more than most other folks, should be well aware of the dangers of boiling down a big question like "what’s the best way to fight climate change by spending money?" to a simplistic assertion like "nothing; it can’t be done." But the fact that this kind of response is so common does suggest to me that maybe we should be skeptical of the reflexive complaint that charitable giving can’t mitigate carbon emissions.

Crucially, we don’t have to stop at a FUD-like statement like nobody really knows what helps, so in principle, carbon offsets could be just as effective as not flying. No, I think it’s trivial to demonstrate essentially from first principles that there must be many cost-effective ways to offset one’s emissions.

The argument here is simple: much of what governments and NGOs do to fight climate change isn’t about directly changing individual human beings’ consumption behaviors, but about pro-actively implementing policies or introducing technologies that indirectly affect those behaviors, or minimize their impacts. Make a list of broad strategies, and you find things like:

  • Develop, incentivize and deploy clean energy sources.
  • Introduce laws and regulations that encourage carbon emission reduction (e.g., via forest preservation, congestion pricing, consumption taxes, etc.).
  • Offer financial incentives for farmers, loggers, and other traditional industrial sources of carbon to develop alternative income streams.
  • Fund public awareness campaigns to encourage individual lifestyle changes.
  • Fund research into blue-sky technologies that efficiently pull carbon out of the atmosphere and safely sequester it.

You can probably go on like this for a long time.

Now, some of the items on this list may be hard to pursue effectively unless you’re a government. But in most of these cases, there’s already a healthy ecosystem of NGOs working to make the world a better place. And there’s zero reason to think that it’s just flatly impossible for any of these organizations to be more effective than whatever benefit you think the environment derives from people getting on fewer planes.

On the contrary: it requires very little imagination to see how, say, a charity staffed by lawyers who help third-world governments draft and lobby for critical environment laws might have an environmental impact measured in billions of dollars, even if its budget is only in the millions. Or, if science is your thing, to believe that publicly-funded researchers working on clean energy do occasionally succeed at developing technologies that, when deployed at scale, provide societal returns many times the cost of the original research.

Once you frame it this way—and I honestly don’t know how one would argue against this way of looking at things—it seems pretty clear that blanket statements like "carbon offsets don’t work" are kind of dumb—or at least, intellectually lazy. If what you mean by "carbon offsets don’t work" is the much narrower claim that most tree-planting campaigns aren’t cost-effective, then sure, maybe that’s true. My impression is that many environmental economists would be happy to agree with you. But that narrow statement has almost no bearing on the question of whether or not you can cost-effectively offset the emissions you’d produce by flying. If somebody offered you credible evidence that their organization could reduce enough carbon emissions to offset your transatlantic flight for the princely sum of $10, I hope you wouldn’t respond by saying well, I read your brochure, and I buy all the evidence you presented, but it said nothing about trees anywhere, so I’m afraid I’m going to have to reject your offer and stay home.

The fact of the matter is that there are thousands, if not tens of thousands, of non-profit organizations currently working to fight climate change. They’re working on the problem in many different ways: via policy efforts, technology development, reforestation, awareness-raising, and any number of other avenues. Some of these organizations are undoubtedly fraudulent, bad at what they do, or otherwise a waste of your money. But it’s inconceivable that there aren’t at least some charities out there—and probably a large number, in absolute terms—that are very effective at what they do, and certainly far more effective than whatever a very high-flying individual can achieve by staying off the runways and saving a couple dozen tons of CO2 per year. And you don’t even need there to be a large number of such organizations; you just need to find one of them.

Do you really find it so hard to believe that there are such organizations out there? And that there are also quite a few people whose day job is identifying those organizations, precisely so that people like you and I can come along and give them money?

I don’t.

So how should you spend your money?

Supposing you find the above plausible, you might be thinking, okay, fine, maybe offsetting does work, as long as you’re smart about how you do it—now please tell me who to make a check out to so I can keep drinking terrible hotel coffee and going to poster sessions that make me want to claw my eyes out.

Well, I hate to disappoint you, but I’m not entirely comfortable telling you what you should do with your money (I mean, if you insist on an answer, I’ll probably tell you to give it to me). What I can do is tell you what I’ve done with mine.

A few months ago, I set aside an evening and spent a few hours reading up on various climate-focused initiatives. I ended up donating money to the Clean Air Task Force and the Coalition for Rainforest Nations. Both of these are policy-focused organizations; they don’t plant tree saplings or buy anyone a clean-burning stove. They fight climate change by attempting to influence policy in ways that promote, respectively, clean air in the United States, and preservation of the world’s rain forests. They are also, not coincidentally, the two organizations strongly recommended by Founders Pledge—an organization dedicated to identifying effective ways for technology founders (but really, pretty much anyone) to spend their money for the benefit of society.

My decision to give to these organizations was motivated largely by this Founders Pledge report, which I think compellingly argues that these organizations likely offer a much better return on one’s investment than most others. The report estimates a cost of $0.02 – $0.72 per ton of CO2 emissions averted when donating to the Coalition for Rainforest Nations (the cost is somewhat higher for the Clean Air Task Force). For reference, typical estimates suggest that a single one-way economy-class transatlantic plane ticket introduces perhaps 2 – 3 tons of CO2 to the atmosphere. So, even at the conservative (i.e., expensive) end of Founders Pledge’s "realistic" range, you’d need to give CfRN only around $2 to offset that flight. Personally, I’m a skeptical kind of person, so I don’t take such estimates at face value. When I see this kind of number, I immediately multiply it by a factor of 10, because I know how the winner’s curse works. In this case, that still leaves you with an estimate of roughly $20 to offset the flight—a number I’m perfectly happy with personally, and one that seems to me quite manageable for almost anybody who can afford to get on a transatlantic flight in the first place.
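
For the numerically inclined, here is a minimal back-of-the-envelope sketch of that arithmetic in Python. All the numbers are assumptions carried over from the text above (the 2–3 ton emissions range, the Founders Pledge cost range, and my arbitrary 10x skepticism multiplier), not measured values:

```python
# Back-of-the-envelope offset arithmetic for a one-way transatlantic flight.
# Every figure below is an assumption quoted in the post, not a measurement.

flight_emissions_tons = (2.0, 3.0)   # typical estimate: 2-3 tons of CO2 per one-way economy ticket
cfrn_cost_per_ton = (0.02, 0.72)     # Founders Pledge "realistic" range for CfRN, in $ per ton averted
skepticism_factor = 10               # arbitrary personal padding for optimistic estimates

# Cost to offset one flight, taking the expensive end of both ranges.
nominal_cost = max(flight_emissions_tons) * max(cfrn_cost_per_ton)
padded_cost = nominal_cost * skepticism_factor

print(f"Nominal offset cost: ~${nominal_cost:.2f} per flight")  # ~$2
print(f"With 10x padding:    ~${padded_cost:.2f} per flight")   # ~$22
```

Even with the padding, the cost comes out on the order of $20 per flight, which is the figure I’m anchoring on above.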

Am I sure that my donations to the above organizations will ultimately do the environment some good? No.

Do I feel confident that these are the best charities out there? Of course not. It’s hard to imagine that they could be, given the sheer number of organizations in this space. But again, certainty is a wildly unrealistic desideratum here. What I am satisfied with is that I’ve done my due diligence, and that in my estimation, I’ve identified a plausibly effective mechanism through which I can do a very tiny bit of good for the world (well, two mechanisms—the other one is this blog post I’m writing, which will hopefully convince at least one other person to take similar action).

I’m not suggesting that anyone else has to draw the same conclusions I have, or donate to the same organizations. Your mileage will probably vary. If, after doing some research, you decide that in your estimation, not flying still makes the most sense, great. And if you decide that actually none of this climate stuff is likely to help, and instead, you’re going to give your money to charities that work on AI alignment or malaria, great. But at the very least, I hope it’s clear there’s really no basis for simply dismissing, out of hand, the notion that one can effectively help reduce atmospheric CO2—on a very, very tiny scale, obviously—via financial means, rather than solely through lifestyle changes.

Why stop at offsets?

So far I’ve argued that donating your money to climate-focused organizations (done thoughtfully) is a perfectly acceptable alternative to cutting back on travel, if your goal is to ultimately reduce atmospheric carbon. If you want to calculate the amount of money you’d need to give to the organization of your choice in order to offset the carbon that your travel (or, more generally, lifestyle) introduces every year, and give exactly that much, great.

But I want to go a bit further than that. What I really want to suggest is that if you’re very concerned about the environment, donating your money can actually be a much better thing to do than just minimizing your own footprint.

The major advantage of charitable giving over travel reduction, or really any kind of lifestyle change, is that there’s a much higher ceiling on what you can accomplish. When you try to fight global warming by avoiding travel, the best you can do is eliminate all of your own personal travel. That may not be trivial, and I think it’s certainly worth doing if your perceived alternative is doing nothing at all. Still, there’s always going to be a hard limit on your contribution. It’s not like you can remove arbitrarily large quantities of carbon from the environment by somehow, say, negatively traveling.

By contrast, when you give money, you don’t have to stop at just offsetting your own carbon production; in principle, you can pay to offset other people’s production too. If you have some discretionary income, and believe that climate change really is an existential threat to the human species (or some large subset of it), then on some level it seems a bit strange to say, "I just want to make sure I personally don’t produce more carbon than the average human being living in my part of the world; beyond that, it’s other people’s problem." If you believe that climate change presents an existential threat to your descendants, or at least to their quality of life, and you can afford to do more than just reduce your own carbon footprint, why not use more of your resources to try to minimize the collective impact of humanity’s past poor environmental decisions? I’m not saying anyone has a moral obligation to do that; I don’t think they do. But it doesn’t seem like a crazy thing to do, if you have some money to spare.

You can still fly less!

Before I go, let me circle around to where I started. I want to emphasize that nothing I’ve said here is intended as criticism of what Russ Poldrack wrote, or of the anti-flying movement more generally. Quite the opposite: I think Russ Poldrack is a goddamn hero (and not just for his position on this issue). If not for Russ’s post, and subsequent discussions on social media, I doubt I would have been sufficiently motivated to put my own money where my mouth is on this issue, let alone to write this post (as an otherwise fairly selfish person, I’m not ashamed to say that I wrote this post in part to force myself to give a serious chunk of cash to charity—public commitment is a powerful thing!). So I’m very much on board with the initiative: other things equal, I think cutting back on one’s air travel is a good thing to do. All I’m saying here is that there are other ways one can do one’s part in the fight against climate change that don’t require giving up air travel—and that, if anything, have the potential to exert far greater (though admittedly still tiny in the grand scheme of things) impact.

It also goes without saying that the two approaches are not mutually exclusive. On the contrary, the best-case scenario is that most people cut their air travel and give money to organizations working to mitigate climate change. But since nobody is perfect, everything is commensurable, and people have different preferences and resource constraints, I take it for granted that most people (me included) aren’t going to do both, and I think that’s okay. It seems perfectly reasonable to me to feel okay about your relationship with the environment so long as you’re doing something. I respect people who opt to do their part by cutting down on their air travel. But I’m not going to feel guilty for continuing to fly around the world fairly regularly, because I think I’m doing my part too.

what I’ve learned from a failed job search

For the last few months, I’ve been getting a steady stream of emails in my inbox that go something like this:

Dear Dr. Yarkoni,

We recently concluded our search for the position of Assistant Grand Poobah of Academic Sciences in the Area of Multidisciplinary Widget Theory. We received over seventy-five thousand applications, most of them from truly exceptional candidates whose expertise and experience would have been welcomed with open arms at any institution of higher learning–or, for that matter, by the governing board of a small planet. After a very careful search process (which most assuredly did not involve a round or two on the golf course every afternoon, and most certainly did not culminate in a wild injection of an arm into a hat filled with balled-up names) we regret to inform you that we are unable to offer you this position. This should not be taken to imply that your intellectual ability or accomplishments are in any way inferior to those of the person who we ultimately did offer the position to (or rather, persons–you see, we actually offered the job to six people before someone accepted it); what we were attempting to optimize, we hope you understand, was not the quality of the candidate we hired, but a mythical thing called ‘fit’ between yourself and ourselves. Or, to put it another way, it’s not you, it’s us.

We wish you all the best in your future endeavors, and rest assured that if we have another opening in future, we will celebrate your reapplication by once again balling your name up and tossing it into a hat along with seventy-five thousand others.

These letters are typically so warm and fuzzy that it’s hard to feel bad about them. I mean, yes, they’re basically telling me I failed at something, but then, how often does anyone ever actually tell me I’m an impressive, accomplished, human being? Never! If every failure in my life was accompanied by this kind of note, I’d be much more willing to try new things. Though, truth be told, I probably wouldn’t try very hard at anything; it would be worth failing in advance just to get this kind of affirmation.

Anyway, the reason I’ve been getting these letters, as you might surmise, is that I’ve been applying for academic jobs. I’ve been doing this for two years now, and will be doing it for a third year in a row in a few months, which I’m pretty sure qualifies me as a world-recognized expert on the process. So in the interest of helping other people achieve the same prowess at failing to secure employment, I’ve decided to share some of the lessons I’ve learned here. This missive comes to you with all of the standard caveats and qualifiers–like, for example, that you should be sitting down when you read this; that you should double-check with people you actually respect to make sure any of this makes sense; and, most importantly, that you’ve completely lost your mind if you try to actually apply any of this ‘knowledge’ to your own personal situation. With that in mind, Here’s What I’ve Learned:

1. The academic job market is really, really bad. No, seriously. I’ve heard from people at several major research universities that they received anywhere from 150 to 500 applications for individual positions (the latter for open-area positions). Some proportion of these applications come from people who have no real shot at the position, but a huge proportion are from truly exceptional candidates with many, many publications, awards, and glowing letters of recommendation. People who you would think have a bright future in research ahead of them. Except that many of them don’t actually have a bright future in research ahead of them, because these days all of that stuff apparently isn’t enough to land a tenure-track position–and often, isn’t even enough to land an interview.

Okay, to be fair, the situation isn’t quite that bad across the board. For one thing, I was quite selective about my job search this past year. I applied for 22 positions, which may sound like a lot, but there were a lot of ads last year, and I know people with similar backgrounds to mine who applied to 50 – 80 positions and could have expanded their searches still further. So, depending on what kind of position you’re aiming for–particularly if you’re interested in a teaching-heavy position at a small school–the market may actually be quite reasonable at the moment. What I’m talking about here really only applies to people looking for research-intensive positions at major research universities. And specifically, to people looking for jobs primarily in the North American market. I recognize that’s probably a minority of people graduating with PhDs in psychology, but since it’s my blog, you’ll have to live with my peculiar little biases. With that qualifier in mind, I’ll reiterate again: the market sucks right now.

2. I’m not as awesome as I thought I was. Lest you think I’ve suddenly turned humble, let me reassure you that I still think I’m pretty awesome–and I can back that up with hard evidence, because I currently have about 20 emails in my inbox from fancy-pants search committee members telling me what a wonderful, accomplished human being I am. I just don’t think I’m as awesome as I thought I was a year ago. Mind you, I’m not quite so delusional that I expected to have my choice of jobs going in, but I did think I had a decent enough record–twenty-odd publications, some neat projects, a couple of major grant proposals submitted (and one that looks very likely to get funded)–to land at least one or two interviews. I was wrong. Which means I’ve had to take my ego down a peg or two. On balance, that’s probably not a bad thing.

3. It’s hard to get hired without a conventional research program. Although I didn’t get any interviews, I did hear back informally from a couple of places (in addition to those wonderful form letters, I mean), and I’ve had hallway conversations with many people who’ve sat on search committees before. The general feedback has been that my work focuses too much on methods development and not enough on substantive questions. This doesn’t really come as a surprise; back when I was putting together my research statement and application materials, pretty much everyone I talked to strongly advised me to focus on a content area first and play down my methods work, because, they said, no one really hires people who predominantly work on methods–at least in psychology. I thought (and still think) this is excellent advice, and in fact it’s exactly the same advice I give to other people if they make the mistake of asking me for my opinion. But ultimately, I went ahead and marketed myself as a methods person anyway. My reasoning was that I wouldn’t want to show up for a new job having sold myself as a person who does A, B, and C, and then mostly did X, Y, and Z, with only a touch of A thrown in. Or, you know, to put it in more cliched terms, I want people to like me for meeeeeee.

I’m still satisfied with this strategy, even if it ends up costing me a few interviews and a job offer or two (admittedly, this is a bit presumptuous–more likely than not, I wouldn’t have gotten any interviews this time around no matter how I’d framed my application). I do the kind of work I do because I enjoy it and think it’s important; I’m pretty happy where I am, so I don’t feel compelled to–how can I put this diplomatically–fib to search committees. Which isn’t to say that I’m laboring under any illusion that you always have to be completely truthful when applying for jobs; I’m fully aware that framing your application around your strengths–and telling people what they want to hear to some extent–is a natural and reasonable thing to do. So I’m not saying this out of any bitterness or naivete; I’m just explaining why I chose to go the honest route that was unlikely to land me a job as opposed to the slightly less honest route that was very slightly more likely to land me a job.

4. There’s a large element of luck involved in landing an academic job. Or, for that matter, pretty much any other kind of job. I’m not saying it’s all luck, of course; far from it. In practice, a single group of maybe three dozen people seems to end up filling the bulk of interview slots at major research universities in any given year. Which is to say, while the majority of applicants will go without any interviews at all, some people end up with a dozen or more of them. So it’s clearly very far from a random process; in the long run, better candidates are much more likely to get jobs. But for any given job, the odds of getting an interview and/or job offer depend on any number of factors that you have little or no control over: what particular area the department wants to shore up; what courses need to be taught; how your personality meshes with the people who interview you; which candidate a particular search committee member idiosyncratically happens to take a shining to; and so on. Over the last few months, I’ve found it useful to occasionally remind myself of this fact when my inbox doth overfloweth with rejection letters. Of course, there’s a very thin line between justifiably attributing your negative outcomes to bad luck and failing to take responsibility for things that are under your control, so it’s worth using the power of self-serving rationalization sparingly.


In any case, those vacuous observations–er, lessons–aside, my plan at this point is still to keep doing essentially the same thing I’ve done the last two years, which consists of (i) putting together what I hope is a strong, if somewhat unconventional, application package; (ii) applying for jobs very selectively–only to places where I think I’d be at least as happy as I am in my current position; and (iii) otherwise spending as little of my time as possible thinking about my future employment status, and as much of it as possible concentrating on my research and personal life.

I don’t pretend to think this is a good strategy in general; it’s just what I’ve settled on and am happy with for the moment. But ask me again a year from now and who knows, maybe I’ll be roaming around downtown Boulder fishing quarters out of the creek for lunch money. In the meantime, I hope this rather uneventful report of my rather uneventful job-seeking experience thus far is of some small use to someone else. Oh, and if you’re on a search committee and think you want to offer me a job, I’m happy to negotiate the terms of my employment in the comments below.

not really a pyramid scheme; maybe a giant cesspool of little white lies?

There’s a long tradition in the academic blogosphere (and the offlinesphere too, I presume) of complaining that academia is a pyramid scheme. In a strict sense, I guess you could liken academia to a pyramid scheme, inasmuch as there are fewer open positions at each ascending level, and supply generally exceeds demand. But as The Prodigal Academic points out in a post today, this phenomenon is hardly exclusive to academia:

I guess I don’t really see much difference between academic job hunting, and job hunting in general. Starting out with undergrad admissions, there are many more qualified people for desirable positions than available slots. Who gets those slots is a matter of hard work (to get qualified) and luck (to be one of the qualified people who is “chosen”). So how is the TT any different from grad school admissions (in ANY prestige program), law firm partnership, company CEO, professional artist/athlete/performer, attending physician, investment banking, etc? The pool of qualified applicants is many times larger than the number of slots, and there are desirable perks to success (money/prestige/fame/security/intellectual freedom) making the supply of those willing to try for the goal pretty much infinite.

Maybe I have rose colored glasses on because I have always been lucky enough to find a position in research, but there are no guarantees in life. When I was interviewing in industry, I saw many really interesting jobs available to science PhD holders that were not in research. If I hadn’t gone to National Lab, I would have been happy to take on one of those instead. Sure, my life would be different, but it wouldn’t make my PhD a waste of time or a failed opportunity.

For the most part, I agree with this sentiment. I love doing research, and can’t imagine ever voluntarily leaving academia. But if I do end up having to leave–meaning, if I can’t find a faculty position when I go on the job market in the next year or two–I don’t think it’ll be the end of the world. I see job ads in industry all the time that look really interesting, and on some level, I think I’d find almost any job that involves creative analysis of very large datasets (of which there are plenty these days!) pretty gratifying. And no matter what happens, I don’t think I’d ever view the time I’ve spent on my PhD and postdoc training as a waste of time, for the simple reason that I’ve really enjoyed most of it (there are, of course, the nasty bits, like writing the Nth chapter of a dissertation–but those are transient, fortunately). So in that sense, I think all the talk about academia being a pyramid scheme is kind of silly.

That said, there is one sticking point to the standard pyramid scheme argument I do agree with, which is that, when you’re starting out as a graduate student, no one really goes out of their way to tell you what the odds of getting a tenure-track faculty position actually are (and they’re not good). The problem being that most of the professors that prospective graduate students have interacted with, either as undergraduates, or in the context of applying to grad school, are precisely those lucky souls who’ve managed to secure faculty positions. So the difficulty of obtaining the same type of position isn’t always very salient to them.

I’m not saying faculty members lie outright to prospective graduate students, of course; I don’t doubt that if you asked most faculty point blank “what proportion of students in your department have managed to find tenure-track positions,” they’d give you an honest answer. But when you’re 22 or 23 years old (and yes, I recognize some graduate students are much older, but this is the mode) and you’re thinking of a career in research, it doesn’t always occur to you to ask that question. And naturally, departments that are trying to recruit your services are unlikely to begin their pitch by saying, “in the past 10 years, only about 12% of our graduates have gone on to tenure-track faculty positions”. So in that sense, I don’t think new graduate students are always aware of just how difficult it is to obtain an independent research position, statistically speaking. That’s not a problem for the (many) graduate students who don’t really have any intention of going into academia anyway, but I do think a large part of the disillusionment graduate students often experience is about the realization that you can bust your ass for five or six years working sixty hours a week, and still have no guarantee of finding a research job when you’re done. And that could be avoided to some extent by making a concerted effort to inform students up front of the odds they face if they’re planning on going down that path. So long as that information is made readily available, I don’t really see a problem.

Having said that, I’m now going to blatantly contradict myself (so what if I do? I am large! I contain multitudes!). You could, I think, reasonably argue that this type of deception isn’t really a problem, and that it’s actually necessary. For one thing, the white lies cut both ways. It isn’t just faculty who conveniently forget to mention that relatively few students will successfully obtain tenure-track positions; many graduate students nod and smile when asked if they’re planning a career in research, despite having no intention of continuing down that path past the PhD. I’ve occasionally heard faculty members complain that they need to do a better job filtering out those applicants who really truly are interested in a career in research, because they’re losing a lot of students to industry at the tail end. But I think this kind of magical mind-reading filter is a pipe dream, for precisely the reasons outlined above: if faculty aren’t willing to begin their recruitment speeches by saying “most of you probably won’t get research positions even if you want them,” they shouldn’t really complain when most students don’t come right out and say “actually, I just want a PhD because I think it’ll be something interesting to do for a few years and then I’ll be able to find a decent job with better hours later”.

The reality is that the whole enterprise may actually require subtle misdirection about people’s intentions. If every student applying to grad school knew exactly what the odds of getting a research position were, I imagine many fewer people who were serious about research would bother applying; you’d then get predominantly people who don’t really want to do research anyway. And if you could magically weed out the students who don’t want to do research, then (a) there probably wouldn’t be enough highly qualified students left to keep research programs afloat, and/or (b) there would be even more candidates applying for research positions, making things even harder for those students who do want careers in research. There’s probably no magical allocation of resources that optimizes everyone’s needs simultaneously; it could be that we’re more or less at a stable equilibrium point built on little white lies.

tl;dr : I don’t think academia is really a pyramid scheme; more like a giant cesspool of little white lies and subtle misinformation that indirectly serves most people’s interests. So, basically, it’s kind of like most other domains of life that involve interactions between many groups of people.

CNS wrap-up

I’m back from CNS in Montreal (actually, I’m not quite back; I’m in Ottawa for a few days–but close enough). Some thoughts about the experience, in no particular order, and with very little sense:

  • A huge number of registered attendees (basically, everyone from Europe who didn’t leave for Montreal early) couldn’t make it to the meeting because of that evil, evil Icelandic volcano. As a result, large swaths of posterboard were left blank–or would have been left blank, if not for the clever “Holy Smokes! So-and-so can’t be here…” notes taped to them. So that was really too bad; aside from the fact that the Europeans missed out on the meeting, which kind of sucks, there was a fair amount of chaos during the slide and symposium sessions as speakers were randomly shuffled around. I guess it’s a testament to the organizers that the conference went off relatively smoothly despite the loss of a large chunk of the attendance.
  • The symposium I chaired went well, as far as I can tell. Which is to say, no one streaked naked through the hall, no one went grossly over time, the audience hall was full, and the three talks I got to watch from the audience were all great. I think my talk went well too, but it’s harder to say. In theory, you should be able to tell how these things go based on the ratio of positive to negative feedback you get. But since people generally won’t tell you if they thought your talk sucked, you’re usually stuck trying to determine whether people are giving you well-I-didn’t-really-like-it-but-I-don’t-want-you-to-feel-bad compliments, or I-really-liked-it-and-I’m-not-even-lying-to-your-face compliments. In any case, good or bad reception, I think the topic is a really important one, and I’m glad the symposium was well attended.
  • I love Montreal. As far as I’m concerned they could have CNS in Montreal every year and I wouldn’t complain. Well, maybe I’d complain a little. But only about unimportant things like the interior decoration of the hotel lobby.
  • Speaking of which, I liked the Hilton Bonaventure and all, but the place did remind me a lot of a 70s porn set. All it’s missing are some giant ferns in the lobby and a table lined with cocaine next to the elevators. (You can probably tell that my knowledge of 70s porn is based entirely on watching two-thirds of Boogie Nights once). Also, what the hell is on floors 2 through 12 of Place Bonaventure? And how can a hotel have nearly 400 rooms, all on the same (13th) floor!?
  • That Vietnamese place we had lunch at on Tuesday, which apparently just opened up, isn’t going to last long. When someone asks you for “brown rice”, they don’t mean “white rice with some red food dye stirred in”.
  • Apparently, Mike X. Cohen is not only the most productive man in cognitive neuroscience, but also a master of the neuroimaging haiku (admittedly, a niche specialty).
  • Sushi and baklava at a conference reception? Yes please!
  • The MDRS party on Monday night was a lot of fun, though the downstairs room at the bar was double-booked. I’m sure the 20-odd people at salsa dancing night were a bit surprised, and probably not entirely appreciative, when 100 or so drunken neuroscientists collectively stumbled downstairs for a free drink, hung out for fifteen minutes, then disappeared upstairs again. Other than that–and the $8 beers–a good time was had.
  • Turns out that assortment of vegetables that Afghans call an Afghan salad is exactly what Turks call a Turkish salad and Israelis call an Israeli salad. I guess I’m not surprised that everyone in that part of the world uses the same four or five ingredients in their salad, but let’s not all rush to take credit for what is basically some cucumber, tomato, and parsley in a bowl. That aside, dinner was awesome. And I wish there were more cities full of restaurants that let you bring your own wine.
  • The talks and posters were great this year. ALL OF THEM. If I had to pick favorites, I guess I really liked the symposium on perceptual decision-making, and several of the posters in the reward/motivation session on Sunday or Monday afternoon. But really, ALL OF THEM WERE GREAT. So let’s all give ourselves giant gold medals with pictures of brains on them. And then… let’s melt down those medals, sell the gold, and buy some scanners with the money.

oh noes! i loses at life again!

The guy in the cubicle next to me is showing signs of leaving work later than me again. Which of course makes him a better human being than me, again. This cannot stand! Lee, if you’re reading this, you’re going down. Today, I stay till midnight! Dinner be damned; I’m going to eat my desk if I have to.

i hate learning new things

Well, I don’t really hate learning new things. I actually quite like learning new things; what I don’t like is having to spend time learning new things. I find that my tolerance for the unique kind of frustration associated with learning a new skill (you know, the kind that manifests itself in a series of “crap, now I have to Google that” moments) decreases steadily as I get older.

As an undergraduate, I didn’t find learning frustrating at all; quite the opposite, actually. I routinely ignored all the work that I was supposed to be doing (e.g., writing term papers, studying for exams, etc.), and would spend hours piddling around with things that were completely irrelevant to my actual progress through college. In hindsight, a lot of the skills I picked up have actually been quite useful, career-wise (e.g., I spent a lot of my spare time playing around with websites, which has paid off–I now collect large chunks of my data online). But I can’t pretend I had any special foresight at the time. I was just procrastinating by doing stuff that felt like work but really wasn’t.

In my first couple of years in graduate school, when I started accumulating obligations I couldn’t (or didn’t want to) put off, I developed a sort of compromise with myself, where I would spend about fifteen minutes of every hour doing what I was supposed to, and the rest of the hour messing around learning new things. Some of those things were work-related–for instance, learning to use a new software package for analyzing fMRI data, or writing a script that reinvented the wheel just to get a better understanding of the wheel. That arrangement seemed to work pretty well, but strangely, with every year of grad school, I found myself working less and less on so-called “personal development” projects and more and more on supposedly important things like writing journal articles and reviewing other people’s journal articles and just generally acting like someone who has some sort of overarching purpose.

Now that I’m a worldly post-doc in a new lab, I frankly find the thought of having to spend time learning to do new things quite distressing. For example, my new PI’s lab uses a different set of analysis packages than I used in graduate school. So I have to learn to use those packages before I can do much of anything. They’re really great tools, and I don’t have any doubt that I will in fact learn to use them (probably sooner rather than later); I just find it incredibly annoying to have to spend the time doing that. It feels like it’s taking time away from my real work, which is writing. Whereas five years ago, I would have gleefully thrown myself at any opportunity to learn to use a new tool, precisely because it would have allowed me to avoid nasty, icky activities like writing.

In the grand scheme of things, I suppose the transition is for the best. It’s hard to be productive as an academic when you spend all your time learning new things; at some point, you have to turn the things you learn into a product you can communicate to other people. I like the fact that I’ve become more conscientious with age (which, it turns out, is a robust phenomenon); I just wish I didn’t feel so guilty ‘wasting’ my time learning new things. And it’s not like I feel I know everything I need to know. More than ever, I can identify all sorts of tools and skills that would help me work more efficiently if I just took the time to learn them. But learning things often seems like a luxury in this new grown-up world where you do the things you’re supposed to do before the things you actually enjoy most. I fully expect this trend to continue, so that 5 years from now, when someone suggests a new tool or technique I should look into, I’ll just run for the door with my hands covering my ears…