Yes, your research is very noble. No, that’s not a reason to flout copyright law.

Scientific research is cumulative; many elements of a typical research project would not and could not exist but for the efforts of many previous researchers. This goes not only for knowledge, but also for measurement. In much of the clinical world–and also in many areas of “basic” social and life science research–people routinely save themselves inordinate amounts of work by using behavioral or self-report measures developed and validated by other researchers.

Among many researchers who work in fields heavily dependent on self-report instruments (e.g., personality psychology), there appears to be a tacit belief that, once a measure is publicly available–either because it’s reported in full in a journal article, or because all of the items and instructions can be found on the web–it’s fair game for use in subsequent research. There’s a time-honored tradition of asking one’s colleagues if they happen to “have a copy” of the NEO-PI-3, or the Narcissistic Personality Inventory, or the Hamilton Depression Rating Scale. The fact that many such measures are technically published under restrictive copyright licenses, and are often listed for sale at rather exorbitant prices (e.g., you can buy 25 paper copies of the NEO-PI-3 from the publisher for $363 US), does not seem to deter researchers much. The general understanding seems to be that if a measure is publicly available, it’s okay to use it for research purposes. I don’t think most researchers have a well-thought-out, internally consistent justification for this behavior; it seems to almost invariably be an article of tacit belief that nothing bad can or should happen to someone who uses a commercially available instrument for a purpose as noble as scientific research.

The trouble with tacit beliefs is that, like all beliefs, they can sometimes be wrong–only, because they’re tacit, they’re often not evaluated openly until things go horribly wrong. Exhibit A on the frontier of horrible wrongness is a recent news article in Science that reports on a rather disconcerting case where the author of a measure (the Eight-Item Morisky Medication Adherence Scale–which also provides a clue to its author’s name) has been demanding rather large sums of money (ranging from $2000 to $6500) from the authors of hundreds of published articles that have used the MMAS-8 without explicitly requesting permission. As the article notes, there appears to be general agreement that Morisky is within his legal rights to demand such payment; what people seem to be objecting to is the amount Morisky is requesting, and the way he’s going about the process (i.e., with lawyers):

Morisky is well within his rights to seek payment for use of his copyrighted tool. U.S. law encourages academic scientists and their universities to protect and profit from their inventions, including those developed with public funds. But observers say Morisky’s vigorous enforcement and the size of his demands stand out. “It’s unusual that he is charging as much as he is,” says Kurt Geisinger, director of the Buros Center for Testing at the University of Nebraska in Lincoln, which evaluates many kinds of research-related tests. He and others note that many scientists routinely waive payments for such tools, as long as they are used for research.

It’s a nice article, and I think it suggests two things fairly clearly. First, Morisky is probably not a very nice man. He seems to have no compunction about charging resource-strapped researchers in third-world countries licensing fees that require them to take out loans from their home universities, and he would apparently rather see dozens of published articles retracted from the literature than suffer the indignity of having someone use his measure without going through the proper channels (and paying the corresponding fees).

Second, the normative practice in many areas of science that depend on the (re)use of measures developed by other people is to essentially flout copyright law, bury one’s head in the sand, and hope for the best.

I don’t know that anything can be done about the first observation–and even if something could be done, there will always be other Moriskys. I do, however, think that we could collectively do quite a few things to change the way scientists think about, and deal with, the re-use of self-report (and other kinds of) measures. Most of these amount to providing better guidance and training. In principle, this shouldn’t be hard to do; in most disciplines, scientists are trained in all manner of research method, statistical praxis, and scientific convention. Yet I know of no graduate program in my own discipline (psychology) that provides its students with even a cursory overview of intellectual property law. This despite the fact that many scientists’ chief assets–and the things they most closely identify their career achievements with–are their intellectual products.

This is, in my view, a serious training failure. More important, it’s an unnecessary failure, because there isn’t really very much that a social scientist needs to know about copyright law in order to dramatically reduce their odds of ending up a target of legal action. The goal is not to train PhDs who can moonlight as bad attorneys; it’s to prevent behavior that flagrantly exposes one to potential Moriskying (look! I coined a verb!). For that, a single 15-minute segment of a research methods class would likely suffice. While I’m sure someone better-informed and more lawyer-like than me could come up with a more accurate précis, here’s the gist of what I think one would want to cover:

  • Just because a measure is publicly available does not mean it’s in the public domain. It’s intuitive to suppose that any measure that can be found in a publicly accessible place (e.g., on the web) is, by default, okay for public use–meaning that, unless the author of a measure has indicated that they don’t want their measure to be used by others, it can be. In fact, the opposite is true. By default, the author of a newly produced work retains all usage and distribution rights to that work. The author can, if they are so inclined, immediately place that work in the public domain. Alternatively, they could stipulate that every time someone uses their measure, that user must, within 72 hours of use, send the author 22 green jelly beans in an unmarked paper bag. You don’t like those terms of use? Fine: don’t use the measure.

Importantly, an author isn’t under any obligation to say anything at all about how they wish their work to be reproduced or used. This means that when a researcher uses a measure that lacks explicit licensing information, that researcher is assuming the risk of running afoul of the measure author’s desires, whether or not those desires have been made publicly known. The fact that the measure happens to be publicly available may be a mitigating factor (e.g., one could potentially claim fair use, though as far as I know there’s little precedent for this type of thing in the scientific domain), but that’s a matter for lawyers to hash out, and I think most of us scientists would rather avoid lawyer-hashing if we can help it.

This takes us directly to the next point…

  • Don’t use a measure unless you’ve read, and agree with, its licensing terms. Of course, in practice, very few scientific measures are currently released with an explicit license–which gives rise to an important corollary injunction: don’t use a measure that doesn’t come with a license.

The latter statement may seem unfair; after all, it’s clear enough that most measures developed by social scientists are missing licenses not because their authors are intentionally trying to capitalize on ambiguity, but simply because most authors are ignorant of the fact that the lack of a license creates a significant liability for potential users. Walking away from unlicensed measures would amount to giving up on huge swaths of potential research, which surely doesn’t seem like a good idea.

Fortunately, I’m not suggesting anything nearly this drastic. Because the lack of licensing is typically unintentional, often, a simple, friendly email to an author may be sufficient to magic an explicit license into existence. While I haven’t had occasion to try this yet for self-report measures, I’ve been on both ends of such requests on multiple occasions when dealing with open-source software. In virtually every case I’ve been involved in, the response to an inquiry along the lines of “hey, I’d like to use your software, but there’s no license information attached” has been to either add a license to the repository (for example…), or provide an explicit statement to the effect of “you’re welcome to use this for the use case you describe”. Of course, if a response is not forthcoming, that too is instructive, as it suggests that perhaps steering clear of the tool (or measure) in question might be a good idea.

Of course, taking licensing seriously requires one to abide by copyright law–which, like it or not, means that there may be cases where the responsible (and legal) thing to do is to just walk away from a measure, even if it seems perfect for your use case from a research standpoint. If you’re serious about respecting copyright, and, upon emailing the author to inquire about the terms of use, you’re informed that those terms involve paying $100 per participant, you can either put up the money, or use a different measure. Burying your head in the sand and using the measure anyway, without paying for it, is not a good look.

  • Attach a license to every reusable product you release into the wild. This follows directly from the previous point: if you want responsible, informed users to feel comfortable using your measure, you should tell them what they can and can’t do with it. If you’re so inclined, you can of course write your own custom license, which can involve dollar bills, jelly beans, or anything else your heart desires. But unless you feel a strong need to depart from existing practices, it’s generally a good idea to select one of the many pre-existing licenses out there, because most of them have the helpful property of having been written by lawyers, and lawyers are people who generally know how to formulate sentiments like “you must give me heap big credit” in somewhat more precise language.

There are a lot of practical recommendations out there about what license one should or shouldn’t choose; I won’t get into those here, except to say that in general, I’m a strong proponent of using permissive licenses (e.g., MIT or CC-BY), and also that I agree with many people’s sentiment that placing restrictions on commercial use–while intuitively appealing to scientists who value public goods–is generally counterproductive. In any case, the real point here is not to push people to use any particular license, but just to think about it for a few minutes when releasing a measure. I mean, you’re probably going to spend tens or hundreds of hours thinking about the measure itself; the least you can do is make sure you tell people what they’re allowed to do with it.
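To make this concrete, here’s a minimal sketch of what such a statement might look like–the measure name is hypothetical, but the license is real. Suppose you’ve just developed the Hypothetical Personality Inventory, and you want researchers to be able to use it freely so long as they credit you. Appending something like the following to the measure’s documentation would do most of the work:

“The Hypothetical Personality Inventory is distributed under a Creative Commons Attribution 4.0 International (CC-BY 4.0) license. You are free to copy, redistribute, adapt, and administer this measure for any purpose, including commercial use, provided you give appropriate credit to the original authors.”

Two sentences, and prospective users no longer have to guess at your intentions–or email you to ask.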

I think covering just the above three points in the context of a graduate research methods class–or at the very least, in those methods classes slanted towards measure development or evaluation (e.g., psychometrics)–would go a long way towards changing scientific norms surrounding measure use.

Most importantly, perhaps, the point of learning a little bit about copyright law is not just to reduce one’s exposure to legal action. There are also large communal benefits. If academic researchers collectively decided to stop flouting copyright law when choosing research measures, the developers of measures would face a very different–and, from a societal standpoint, much more favorable–set of incentives. The present state of affairs–where an instrument’s author is able to legally charge well-meaning researchers exorbitant fees post-hoc for use of an 8-item scale–exists largely because researchers refuse to take copyright seriously, and insist on acting as if science, being such a noble and humanitarian enterprise, is somehow exempt from legal considerations that people in other fields have to constantly worry about. Perversely, the few researchers who do the right thing by offering to pay for the scales they use then end up incurring large costs, while the majority who use the measures without permission suffer no consequences (except on the rare occasions when someone like Morisky comes knocking on the door with a lawyer).

By contrast, in an academic world that cared more about copyright law, many widely used measures that are currently released under ambiguous or restrictive licenses (or, most commonly, no license at all) would never have attained widespread use in the first place. If, say, Costa & McCrae’s NEO measures–used by thousands of researchers every year–had been developed in a world where academics had a standing norm of avoiding restrictively licensed measures, the most likely outcome is that the NEO would have changed to accommodate the norm, and not vice versa. The net result is that we would be living in a world where the vast majority of measures–just like the vast majority of open-source software–really would be free to use in every sense of the word, without risk of lawsuits, and with the ability to redistribute, reuse, and modify freely. That, I think, is a world we should want to live in. And while the ship may have already sailed when it comes to the most widely used existing measures, it’s a world we could still have going forward. We just have to commit to not using new measures unless they have a clear license–and be prepared to follow the terms of that license to the letter.

on writing: some anecdotal observations, in no particular order

  • Early on in graduate school, I invested in the book “How to Write a Lot”. I enjoyed reading it–mostly because I (mistakenly) enjoyed thinking to myself, “hey, I bet as soon as I finish this book, I’m going to start being super productive!” But I can save you the $9 and tell you there’s really only one take-home point: schedule writing like any other activity, and stick to your schedule no matter what. Though, having said that, I don’t really do that myself. I find I tend to write about 20 hours a week on average. On a very good day, I manage to get a couple of thousand words written, but much more often, I get 200 words written that I then proceed to rewrite furiously and finally trash in frustration. But it all adds up in the long run, I guess.
  • Some people are good at writing one thing at a time; they can sit down for a week and crank out a solid draft of a paper without ever looking sideways at another project. Personally, unless I have a looming deadline (and I mean a real deadline–more on that below), I find that impossible to do; my general tendency is to work on one writing project for an hour or two, and then switch to something else. Otherwise I pretty much lose my mind. I also find it helps to reward myself–i.e., I’ll work on something I really don’t want to do for an hour, and then switch to writing something more pleasant for a while.
  • I can rarely get any ‘real’ writing (i.e., stuff that leads to publications) done after around 6 pm; late mornings (i.e., right after I wake up) are usually my most productive writing time. And I generally only write for fun (blogging, writing fiction, etc.) after 9 pm. There are exceptions, but by and large that’s my system.
  • I don’t write many drafts. I don’t mean that I never revise papers, because I do–obsessively. But I don’t sit down thinking “I’m going to write a very rough draft, and then I’ll go back and clean up the language.” I sit down thinking “I’m going to write a perfect paper the first time around,” and then I very slowly crank out a draft that’s remarkably far from being perfect. I suspect the former approach is actually the more efficient one, but I can’t bring myself to do it. I hate seeing malformed sentences on the page, even if I know I’m only going to delete them later. It always amazes and impresses me when I get Word documents from collaborators with titles like “AmazingNatureSubmissionVersion18”. I just give all my documents the title “paper_draft”. There might be a V2 or a V3, but there will never, ever be a V18.
  • Papers are not meant to be written linearly. I don’t know anyone who starts with the Introduction, then does the Methods and Results, and then finishes with the Discussion. Personally I don’t even write papers one section at a time. I usually start out by frantically writing down ideas as they pop into my head, and jumping around the document as I think of other things I want to say. I frequently write half a sentence down and then finish it with a bunch of question marks (like so: ???) to indicate I need to come back later and patch it up. Incidentally, this is also why I’m terrified to ever show anyone any of my unfinished paper drafts: an unsuspecting reader would surely come away thinking I suffer from a serious thought disorder. (I suppose they might be right.)
  • Okay, that last point is not entirely true. I don’t write papers completely haphazardly; I do tend to write Methods and Results before Intro and Discussion. I gather that this is a pretty common approach. On the rare occasions when I’ve started writing the Introduction first, I’ve invariably ended up having to completely rewrite it, because it usually turns out the results aren’t actually what I thought they were.
  • My sense is that most academics get more comfortable writing as time goes on. Relatively few grad students have the perseverance to rapidly crank out publication-worthy papers from day 1 (I was definitely not one of them). I don’t think this is just a matter of practice; I suspect part of it is a natural maturation process. People generally get more conscientious as they age; it stands to reason that writing (as an activity most people find unpleasant) should get easier too. I’m better at motivating myself to write papers now, but I’m also much better about doing the dishes and laundry–and I’m pretty sure that’s not because practice makes dishwashing perfect.
  • When I started grad school, I was pretty sure I’d never publish anything, let alone graduate, because I’d never handed in a paper as an undergraduate that wasn’t written at the last minute, whereas in academia, there are virtually no hard deadlines (see below). I’m not sure exactly what changed. I’m still continually surprised every time something I wrote gets published. And I often catch myself telling myself, “hey, self, how the hell did you ever manage to pay attention long enough to write 5,000 words?” And then I reply to myself, “well, self, since you ask, I took a lot of stimulants.”
  • I pace around a lot when I write. A lot. To the point where my labmates–who are all uncommonly nice people–start shooting death glares my way. It’s a heritable tendency, I guess (the pacing, not the death glare attraction); my father also used to pace obsessively. I’m not sure what the biological explanation for it is. My best guess is it’s an arousal-mediated effect: I can think pretty well when I’m around other people, or when I’m in motion, but if I’m sitting at a desk and I don’t already know exactly what I want to say, I can’t get anything done. I generally pace around the lab or house for a while figuring out what I want to say, and then I sit down and write until I’ve forgotten what I want to say, or decide I didn’t really want to say that after all. In practice this usually works out to 10 minutes of pacing for every 5 minutes of writing. I envy people who can just sit down and calmly write for two or three hours without interruption (though I don’t think there are that many of them). At the same time, I’m pretty sure I burn a lot of calories this way.
  • I’ve been pleasantly surprised to discover that I much prefer writing grant proposals to writing papers–to the point where I actually enjoy writing grant proposals. I suspect the main reason for this is that grant proposals have a kind of openness that papers don’t; with a paper, you’re constrained to telling the story the data actually support, whereas a grant proposal is as good as your vision of what’s possible (okay, and plausible). A second part of it is probably the novelty of discovery: once you conduct your analyses, all that’s left is to tell other people what you found, which (to me) isn’t so exciting. I mean, I already think I know what’s going on; what do I care if you know? Whereas when writing a grant, a big part of the appeal for me is that I could actually go out and discover new stuff–just as long as I can convince someone to give me some money first.
  • At a departmental seminar attended by about 30 people, I once heard a student express concern about an in-progress review article that he and several of the other people at the seminar were collaboratively working on. The concern was that if all of the collaborators couldn’t agree on what was going to go in the paper (and they didn’t seem to be able to at that point), the paper wouldn’t get written in time to make the rapidly approaching deadline dictated by the journal editor. A senior and very brilliant professor responded to the student’s concern by pointing out that this couldn’t possibly be a real problem, seeing as in reality there is actually no such thing as a hard writing deadline. This observation didn’t go over so well with some of the other senior professors, who weren’t thrilled that their students were being handed the key to the kingdom of academic procrastination so early in their careers. But it was true, of course: with the major exception of grant proposals (EDIT: and as Garrett points out in the comments below, conference publications in disciplines like Computer Science), most of the things academics write (journal articles, reviews, commentaries, book chapters, etc.) operate on a very flexible schedule. Usually when someone asks you to write something for them, there is some vague mention somewhere of some theoretical deadline, which is typically a date that seems so amazingly far off into the future that you wonder if you’ll even be the same person when it rolls around. And then, much to your surprise, the deadline rolls around and you realize that you must in fact really be a different person, because you don’t seem to have any real desire to work on this thing you signed up for, and instead of writing it, why don’t you just ask the editor for an extension while you go rustle up some motivation. So you send a polite email, and the editor grudgingly says, “well, hmm, okay, you can have another two weeks,” to which you smile and nod sagely, and then, two weeks later, you send another similarly worded but even more obsequious email that starts with the words “so, about that extension…”

    The basic point here is that there’s an interesting dilemma: even though there rarely are any strict writing deadlines, it’s to almost everyone’s benefit to pretend they exist. If I ever find out that the true deadline (insofar as such a thing exists) for the chapter I’m working on right now is 6 months from now and not 3 months ago (which is what they told me), I’ll probably relax and stop working on it for, say, the next 5 and a half months. I sometimes think that the most productive academics are the ones who are just really really good at repeatedly lying to themselves.

  • I’m a big believer in structured procrastination when it comes to writing. I try to always have a really unpleasant but not-so-important task in the background, which then forces me to work on only-slightly-unpleasant-but-often-more-important tasks. Except it often turns out that the unpleasant-but-not-so-important task is actually an unpleasant-but-really-important task after all, and then I wake up in a cold sweat in the middle of the night thinking of all the ways I’ve screwed myself over. No, just kidding. I just bitch about it to my wife for a while and then drown my sorrows in an extra helping of ice cream.
  • I’m really, really, bad at restarting projects I’ve put on the back burner for a while. Right now there are 3 or 4 papers I’ve been working on on-and-off for 3 or 4 years, and every time I pick them up, I write a couple of hundred words and then put them away for a couple of months. I guess what I’m saying is that if you ever have the misfortune of collaborating on a paper with me, you should make sure to nag me several times a week until I get so fed up with you I sit down and write the damn paper. Otherwise it may never see the light of day.
  • I like writing fiction in my spare time. I also occasionally write whiny songs. I’m pretty terrible at both of these things, but I enjoy them, and I’m told (though I don’t believe it for a second) that that’s the important thing.

academic bloggers on blogging

Is it wise for academics to blog? Depends on who you ask. Scott Sumner summarizes his first year of blogging this way:

Be careful what you wish for.  Last February 2nd I started this blog with very low expectations.  During the first three weeks most of the comments were from Aaron Jackson and Bill Woolsey.  I knew I wasn’t a good writer, years ago I got a referee report back from an anonymous referee (named McCloskey) who said “if the author had used no commas at all, his use of commas would have been more nearly correct.“  Ouch!  But it was true, others said similar things.  And I was also pretty sure that the content was not of much interest to anyone.

Now my biggest problem is time—I spend 6 to 10 hours a day on the blog, seven days a week.  Several hours are spent responding to reader comments and the rest is spent writing long-winded posts and checking other economics blogs.  And I still miss many blogs that I feel I should be reading. …

Regrets?  I’m pretty fatalistic about things.  I suppose it wasn’t a smart career move to spend so much time on the blog.  If I had ignored my commenters I could have had my manuscript revised by now. …  And I really don’t get any support from Bentley, as far as I know the higher ups don’t even know I have a blog. So I just did 2500 hours of uncompensated labor.

I don’t think Sumner actually regrets blogging (as the rest of his excellent post makes clear), but he does seem to think it’s hurt him professionally in some ways–most notably, because of all the time he spends blogging that he could spend doing something else (like revising that manuscript).

Andrew Gelman has a very different take:

I agree with Sethi that Sumner’s post is interesting and captures much of the blogging experience. But I don’t agree with that last bit about it being a bad career move. Or perhaps Sumner was kidding? (It’s notoriously difficult to convey intonation in typed speech.) What exactly is the marginal value of his having a manuscript revised? It’s not like Bentley would be compensating him for that either, right? For someone like Sumner (or, for that matter, Alex Tabarrok or Tyler Cowen or my Columbia colleague Peter Woit), blogging would seem to be an excellent career move, both by giving them and their ideas much wider exposure than they otherwise would’ve had, and also (as Sumner himself notes) by being a convenient way to generate many thousands of words that can be later reworked into a book. This is particularly true of Sumner (more than Tabarrok or Cowen or, for that matter, me) because he tends to write long posts on common themes. (Rajiv Sethi, too, might be able to put together a book or some coherent articles by tying together his recent blog entries.)

Blogging and careers, blogging and careers . . . is blogging ever really bad for an academic career? I don’t know. I imagine that some academics spend lots of time on blogs that nobody reads, and that could definitely be bad for their careers in an opportunity-cost sort of way. Others such as Steven Levitt or Dan Ariely blog in an often-interesting but sometimes careless sort of way. This might be bad for their careers, but quite possibly they’ve reached a level of fame in which this sort of thing can’t really hurt them anymore. And this is fine; such researchers can make useful contributions with their speculations and let the Gelmans and Fungs of the world clean up after them. We each have our role in this food web. … And then of course there are the many many bloggers, academic and otherwise, whose work I assume I would’ve encountered much more rarely were they not blogging.

My own experience falls much more in line with Gelman’s here; my blogging experience has been almost wholly positive. Some of the benefits I’ve found to blogging regularly:

  • I’ve had many interesting email exchanges with people that started via a comment on something I wrote, and some of these will likely turn into collaborations at some point in the future.
  • I’ve been exposed to lots of interesting things (journal articles, blog posts, datasets, you name it) I wouldn’t have come across otherwise–either via links left in comments or sent by email, or while rooting around the web for things to write about.
  • I’ve gotten to publicize and promote my own research, which is always nice. As Gelman points out, it’s easier to learn about other people’s work if those people are actively blogging about it. I think that’s particularly true for people who are just starting out their careers.
  • I think blogging has improved both my ability and my willingness to write. By nature, I don’t actually like writing very much, and (like most academics I know) I find writing journal articles particularly unpleasant. Forcing myself to blog (semi-)regularly has instilled a certain discipline about writing that I haven’t always had, and if nothing else, it’s good practice.
  • I get to share ideas and findings I find interesting and/or important with other people. This is already what most academics do over drinks at conferences (and I think it’s a really important part of science), and blogging seems like a pretty natural extension.

All this isn’t to say that there aren’t any potential drawbacks to blogging. I think there are at least two important ones. One is the obvious point that, unless you’re blogging anonymously, it’s probably unwise to say things online that you wouldn’t feel comfortable saying in person. So, despite being a class-A jackass–er, pretty critical by nature–I try to discuss things I like as often as things I don’t like, and to keep the tone constructive whenever I do the latter.

The other potential drawback, which both Sumner and Gelman allude to, is the opportunity cost. If you’re spending half of your daylight hours blogging, there’s no question it’s going to have an impact on your academic productivity. But in practice, I don’t think blogging too much is a problem many academic bloggers have. I usually find myself wishing most of the bloggers I read posted more often. In my own case, I almost exclusively blog after around 9 or 10 pm, when I’m no longer capable of doing sustained work on manuscripts anyway (I’m generally at my peak in the late morning and early afternoon). So, for me, blogging has replaced about ten hours a week of book reading/TV watching/web surfing, while leaving the amount of “real” work I do largely unchanged. That’s not really much of a cost, and I might even classify it as another benefit. With the admittedly important caveat that watching less television has made me undeniably useless at trivia night.

what do personality psychology and social psychology actually have in common?

Is there a valid (i.e., non-historical) reason why personality psychology and social psychology are so often lumped together as one branch of psychology? There are PSP journals, PSP conferences, PSP brownbags… the list goes on. It all seems kind of odd considering that, in some ways, personality psychologists and social psychologists have completely opposite focuses (foci?). Personality psychologists are all about the consistencies in people’s behavior, and classify situational variables under “measurement error”; social psychologists care not one whit for traits, and are all about how behavior is influenced by the situation. Also, aside from the conceptual tension, I’ve often gotten the sense that personality psychologists and social psychologists just don’t like each other very much. Which I guess would make sense if you think these are two relatively distinct branches of psychology that, for whatever reason, have been lumped together inextricably for several decades. It’s kind of like being randomly assigned a roommate in college, except that you have to live with that roommate for the rest of your life.

I’m not saying there aren’t ways in which the two disciplines overlap. There are plenty of similarities; for example, they both tend to heavily feature self-report, and both often involve the study of social behavior. But that’s not really a good enough reason to lump them together. You can take almost any two branches of psychology and find a healthy intersection. For example, the interface between social psychology and cognitive psychology is one of the hottest areas of research in psychology at the moment. There’s a journal called Social Cognition–which, not coincidentally, is published by the International Social Cognition Network. Lots of people are interested in applying cognitive psychology models to social psychological issues. But you’d probably be taking bullets from both sides of the hallway if you ever suggested that your department should combine their social psychology and cognitive psychology brown bag series. Sure, there’s an overlap, but there’s also far more content that’s unique to each discipline.

The same is true for personality psychology and social psychology, I’d argue. Many (most?) personality psychologists aren’t intrinsically interested in social aspects of personality (at least, no more so than in other, non-social aspects), and many social psychologists couldn’t give a rat’s ass about the individual differences that make each of us a unique and special flower. And yet there we sit, week after week, all together in the same seminar room, as one half of the audience experiences rapture at the speaker’s words, and the other half wishes they could be slicing blades of grass off their lawn with dental floss. What gives?

what’s the point of intro psych?

Sanjay Srivastava comments on an article in Inside Higher Ed about the limitations of traditional introductory science courses, which (according to the IHE article) focus too much on rote memorization of facts and too little on the big questions central to scientific understanding. The IHE article is somewhat predictable in its suggestion that students should be engaged with key scientific concepts at an earlier stage:

One approach to breaking out of this pattern, [Shirley Tilghman] said, is to create seminars in which first-year students dive right into science — without spending years memorizing facts. She described a seminar — “The Role of Asymmetry in Development” — that she led for Princeton freshmen in her pre-presidential days.

Beyond the idea of seminars, Tilghman also outlined a more transformative approach to teaching introductory science material. David Botstein, a professor at the university, has developed the Integrated Science Curriculum, a two-year course that exposes students to the ideas they need to take advanced courses in several science disciplines. Botstein created the course with other faculty members and they found that they value many of the same scientific ideas, so an integrated approach could work.

Sanjay points out an interesting issue in translating this type of approach to psychology:

Would this work in psychology? I honestly don’t know. One of the big challenges in learning psychology — which generally isn’t an issue for biology or physics or chemistry — is the curse of prior knowledge. Students come to the class with an entire lifetime’s worth of naive theories about human behavior. Intro students wouldn’t invent hypotheses out of nowhere — they’d almost certainly recapitulate cultural wisdom, introspective projections, stereotypes, etc. Maybe that would be a problem. Or maybe it would be a tremendous benefit — what better way to start off learning psychology than to have some of your preconceptions shattered by data that you’ve collected yourself?

Prior knowledge certainly does seem to play a huge role in the study of psychology; there are some worldviews that are flatly incompatible with certain areas of psychological inquiry. So when some students encounter certain ideas in psychology classes–even introductory ones–they’re forced either to change their views about the way the world works, or (perhaps more commonly?) to discount those areas of psychology and/or the discipline as a whole.

One example of this is the aversion many people have to a reductionist, materialist worldview. If you really can’t abide the idea that all of human experience ultimately derives from the machinations of dumb cells, with no ghost to be found anywhere in the machine, you’re probably not going to want to study the neural bases of romantic love. Similarly, if you can’t swallow the notion that our personalities appear to be shaped largely by our genes and random environmental influences–and show virtually no discernible influence of parental environment–you’re unlikely to want to be a behavioral geneticist when you grow up. More so than most other fields, psychology is full of ideas that turn our intuitions on their head. For many Intro Psych students who go on to study the mind professionally, that’s one of the things that makes the field so fascinating. But other students are probably turned off for the very same reason.

Taking a step back though, I think before you can evaluate how introductory classes ought to be taught, it’s important to ask what goal introductory courses are ultimately supposed to serve. Implicit in the views discussed in the IHE article is the idea that introductory science classes should basically serve as a jumping-off point for young scientists. The idea is that if you’re immersed in deep scientific ideas in your first year of university rather than your third or fourth, you’ll be that much better prepared for a career in science by the time you graduate. That’s certainly a valid view, but it’s far from the only one. Another perfectly legitimate view is that the primary purpose of an introductory science class isn’t really to serve the eventual practitioners of that science, who, after all, form a very small fraction of students in the class. Rather, it’s to provide a very large number of students with varying degrees of interest in science with a very cursory survey of the field. After all, the vast majority of students who sit through Intro Psych classes would never go on to careers in psychology no matter how the course was taught. You could mount a reasonable argument that exposing most students to “the ideas they need to take advanced courses in several science disciplines” would be a kind of academic malpractice, because most students who take intro science classes (or at least, intro psychology) probably have no real interest in taking advanced courses in the topic, and simply want to fill a distribution requirement or get a cursory overview of what the field is about.

The question of who intro classes should be designed for isn’t the only one that needs to be answered. Even if you feel quite certain that introductory science classes should always be taught with an eye to producing scientists, and you don’t care at all for the more populist idea of catering to the non-major masses, you still have to make other hard choices. For example, you need to decide whether you value breadth over depth, or information retention over enthusiasm for the course material. Say you’re determined to teach Intro Psych in such a way as to maximize the production of good psychologists. Do you pick just a few core topics that you think students will find most interesting, or most conducive to understanding key research concepts, and abandon those topics that turn people off? Such an approach might well encourage more students to take further psychology classes; but it does so at the risk of providing an unrepresentative view of the field, and failing to expose some students to ideas they might have benefited more from. Many Intro Psych students seem to really resent the lone week or two of the course when the lecturer covers neurons, action potentials and very basic neuroanatomy. For reasons that are quite inscrutable to me, many people just don’t like brainzzz. But I don’t think that common sentiment is sufficient grounds for cutting biology out of intro psychology entirely; you simply wouldn’t be getting an accurate picture of our current understanding of the mind without knowing at least something about the way the brain operates.

Of course, the trouble is that the way that people like me feel about the brain-related parts of intro psych is exactly the way other people feel about the social parts of intro psych, or the developmental parts, or the clown parts, and so on. Cut social psych out of intro psych so that you can focus on deep methodological issues in studying the brain, and you may well create students more likely to go on to a career in cognitive neuroscience. But you’re probably reducing the number of students who’ll end up going into social psychology. More generally, you’re turning Intro Psychology into Intro to Cognitive Neuroscience, which sort of defeats the point of it being an introductory course in the first place; after all, they’re called survey courses for a reason!

In an ideal world, we wouldn’t have to make these choices; we’d just make sure that all of our intro courses were always engaging and educational and promoted a deeper understanding of how to do science. But in the real world, it’s rarely possible to pull that off, and we’re typically forced to make trade-offs. You could probably promote student interest in psychology pretty easily by showing videos of agnosic patients all day long, but you’d be sacrificing breadth and depth of understanding. Conversely, you could maximize the amount of knowledge students retain from a class by hitting them over the head with information and testing them in every class, but then you shouldn’t be surprised if some students find the course unpleasant and lose interest in the subject matter. The balance between the depth, breadth, and entertainment value of introductory science classes is a delicate one, but it’s one that’s essential to consider before we can fairly evaluate different proposals as to how such classes ought to be structured.

i hate learning new things

Well, I don’t really hate learning new things. I actually quite like learning new things; what I don’t like is having to spend time learning new things. I find my tolerance for the unique kind of frustration associated with learning a new skill (you know, the kind that manifests itself in a series of “crap, now I have to Google that” moments) decreases roughly in proportion to my age.

As an undergraduate, I didn’t find learning frustrating at all; quite the opposite, actually. I routinely ignored all the work that I was supposed to be doing (e.g., writing term papers, studying for exams, etc.), and would spend hours piddling around with things that were completely irrelevant to my actual progress through college. In hindsight, a lot of the skills I picked up have actually been quite useful, career-wise (e.g., I spent a lot of my spare time playing around with websites, which has paid off–I now collect large chunks of my data online). But I can’t pretend I had any special foresight at the time. I was just procrastinating by doing stuff that felt like work but really wasn’t.

In my first couple of years in graduate school, when I started accumulating obligations I couldn’t (or didn’t want to) put off, I developed a sort of compromise with myself, where I would spend about fifteen minutes of every hour doing what I was supposed to, and the rest of the hour messing around learning new things. Some of those things were work-related–for instance, learning to use a new software package for analyzing fMRI data, or writing a script that reinvented the wheel just to get a better understanding of the wheel. That arrangement seemed to work pretty well, but strangely, with every year of grad school, I found myself working less and less on so-called “personal development” projects and more and more on supposedly important things like writing journal articles and reviewing other people’s journal articles and just generally acting like someone who has some sort of overarching purpose.

Now that I’m a worldly post-doc in a new lab, I frankly find the thought of having to spend time learning to do new things quite distressing. For example, my new PI’s lab uses a different set of analysis packages than I used in graduate school. So I have to learn to use those packages before I can do much of anything. They’re really great tools, and I don’t have any doubt that I will in fact learn to use them (probably sooner rather than later); I just find it incredibly annoying to have to spend the time doing that. It feels like it’s taking time away from my real work, which is writing. Whereas five years ago, I would have gleefully thrown myself at any opportunity to learn to use a new tool, precisely because it would have allowed me to avoid nasty, icky activities like writing.

In the grand scheme of things, I suppose the transition is for the best. It’s hard to be productive as an academic when you spend all your time learning new things; at some point, you have to turn the things you learn into a product you can communicate to other people. I like the fact that I’ve become more conscientious with age (which, it turns out, is a robust phenomenon); I just wish I didn’t feel so guilty ‘wasting’ my time learning new things. And it’s not like I feel I know everything I need to know. More than ever, I can identify all sorts of tools and skills that would help me work more efficiently if I just took the time to learn them. But learning things often seems like a luxury in this new grown-up world where you do the things you’re supposed to do before the things you actually enjoy most. I fully expect this trend to continue, so that 5 years from now, when someone suggests a new tool or technique I should look into, I’ll just run for the door with my hands covering my ears…