on writing: some anecdotal observations, in no particular order

  • Early on in graduate school, I invested in the book “How to Write a Lot”. I enjoyed reading it–mostly because I (mistakenly) enjoyed thinking to myself, “hey, I bet as soon as I finish this book, I’m going to start being super productive!” But I can save you the $9 and tell you there’s really only one take-home point: schedule writing like any other activity, and stick to your schedule no matter what. Though, having said that, I don’t really do that myself. I find I tend to write about 20 hours a week on average. On a very good day, I manage to get a couple of thousand words written, but much more often, I get 200 words written that I then proceed to rewrite furiously and finally trash in frustration. But it all adds up in the long run I guess.
  • Some people are good at writing one thing at a time; they can sit down for a week and crank out a solid draft of a paper without ever looking sideways at another project. Personally, unless I have a looming deadline (and I mean a real deadline–more on that below), I find that impossible to do; my general tendency is to work on one writing project for an hour or two, and then switch to something else. Otherwise I pretty much lose my mind. I also find it helps to reward myself–i.e., I’ll work on something I really don’t want to do for an hour, and then play video games for a while before switching to writing something more pleasant.
  • I can rarely get any ‘real’ writing (i.e., stuff that leads to publications) done after around 6 pm; late mornings (i.e., right after I wake up) are usually my most productive writing time. And I generally only write for fun (blogging, writing fiction, etc.) after 9 pm. There are exceptions, but by and large that’s my system.
  • I don’t write many drafts. I don’t mean that I never revise papers, because I do–obsessively. But I don’t sit down thinking “I’m going to write a very rough draft, and then I’ll go back and clean up the language.” I sit down thinking “I’m going to write a perfect paper the first time around,” and then I very slowly crank out a draft that’s remarkably far from being perfect. I suspect the former approach is actually the more efficient one, but I can’t bring myself to do it. I hate seeing malformed sentences on the page, even if I know I’m only going to delete them later. It always amazes and impresses me when I get Word documents from collaborators with titles like “AmazingNatureSubmissionVersion18”. I just give all my documents the title “paper_draft”. There might be a V2 or a V3, but there will never, ever be a V18.
  • Papers are not meant to be written linearly. I don’t know anyone who starts with the Introduction, then does the Methods and Results, and then finishes with the Discussion. Personally I don’t even write papers one section at a time. I usually start out by frantically writing down ideas as they pop into my head, and jumping around the document as I think of other things I want to say. I frequently write half a sentence down and then finish it with a bunch of question marks (like so: ???) to indicate I need to come back later and patch it up. Incidentally, this is also why I’m terrified to ever show anyone any of my unfinished paper drafts: an unsuspecting reader would surely come away thinking I suffer from a serious thought disorder. (I suppose they might be right.)
  • Okay, that last point is not entirely true. I don’t write papers completely haphazardly; I do tend to write Methods and Results before Intro and Discussion. I gather that this is a pretty common approach. On the rare occasions when I’ve started writing the Introduction first, I’ve invariably ended up having to completely rewrite it, because it usually turns out the results aren’t actually what I thought they were.
  • My sense is that most academics get more comfortable writing as time goes on. Relatively few grad students have the perseverance to rapidly crank out publication-worthy papers from day 1 (I was definitely not one of them). I don’t think this is just a matter of practice; I suspect part of it is a natural maturation process. People generally get more conscientious as they age; it stands to reason that writing (as an activity most people find unpleasant) should get easier too. I’m better at motivating myself to write papers now, but I’m also much better about doing the dishes and laundry–and I’m pretty sure that’s not because practice makes dishwashing perfect.
  • When I started grad school, I was pretty sure I’d never publish anything, let alone graduate, because I’d never handed in a paper as an undergraduate that wasn’t written at the last minute, whereas in academia, there are virtually no hard deadlines (see below). I’m not sure exactly what changed. I’m still continually surprised every time something I wrote gets published. And I often catch myself telling myself, “hey, self, how the hell did you ever manage to pay attention long enough to write 5,000 words?” And then I reply to myself, “well, self, since you ask, I took a lot of stimulants.”
  • I pace around a lot when I write. A lot. To the point where my labmates–who are all uncommonly nice people–start shooting death glares my way. It’s a heritable tendency, I guess (the pacing, not the death glare attraction); my father also used to pace obsessively. I’m not sure what the biological explanation for it is. My best guess is it’s an arousal-mediated effect: I can think pretty well when I’m around other people, or when I’m in motion, but if I’m sitting at a desk and I don’t already know exactly what I want to say, I can’t get anything done. I generally pace around the lab or house for a while figuring out what I want to say, and then I sit down and write until I’ve forgotten what I want to say, or decide I didn’t really want to say that after all. In practice this usually works out to 10 minutes of pacing for every 5 minutes of writing. I envy people who can just sit down and calmly write for two or three hours without interruption (though I don’t think there are that many of them). At the same time, I’m pretty sure I burn a lot of calories this way.
  • I’ve been pleasantly surprised to discover that I much prefer writing grant proposals to writing papers–to the point where I actually enjoy writing grant proposals. I suspect the main reason for this is that grant proposals have a kind of openness that papers don’t; with a paper, you’re constrained to telling the story the data actually support, whereas a grant proposal is as good as your vision of what’s possible (okay, and plausible). A second part of it is probably the novelty of discovery: once you conduct your analyses, all that’s left is to tell other people what you found, which (to me) isn’t so exciting. I mean, I already think I know what’s going on; what do I care if you know? Whereas when writing a grant, a big part of the appeal for me is that I could actually go out and discover new stuff–just as long as I can convince someone to give me some money first.
  • At a departmental seminar attended by about 30 people, I once heard a student express concern about an in-progress review article that he and several of the other people at the seminar were collaboratively working on. The concern was that if all of the collaborators couldn’t agree on what was going to go in the paper (and they didn’t seem to be able to at that point), the paper wouldn’t get written in time to make the rapidly approaching deadline dictated by the journal editor. A senior and very brilliant professor responded to the student’s concern by pointing out that this couldn’t possibly be a real problem seeing as in reality there is actually no such thing as a hard writing deadline. This observation didn’t go over so well with some of the other senior professors, who weren’t thrilled that their students were being handed the key to the kingdom of academic procrastination so early in their careers. But it was true, of course: with the major exception of grant proposals (EDIT: and as Garrett points out in the comments below, conference publications in disciplines like Computer Science), most of the things academics write (journal articles, reviews, commentaries, book chapters, etc.) operate on a very flexible schedule. Usually when someone asks you to write something for them, there is some vague mention somewhere of some theoretical deadline, which is typically a date that seems so amazingly far off into the future that you wonder if you’ll even be the same person when it rolls around. And then, much to your surprise, the deadline rolls around and you realize that you must in fact really be a different person, because you don’t seem to have any real desire to work on this thing you signed up for, and instead of writing it, why don’t you just ask the editor for an extension while you go rustle up some motivation. So you send a polite email, and the editor grudgingly says, “well, hmm, okay, you can have another two weeks,” to which you smile and nod sagely, and then, two weeks later, you send another similarly worded but even more obsequious email that starts with the words “so, about that extension…”

    The basic point here is that there’s an interesting dilemma: even though there rarely are any strict writing deadlines, it’s to almost everyone’s benefit to pretend they exist. If I ever find out that the true deadline (insofar as such a thing exists) for the chapter I’m working on right now is 6 months from now and not 3 months ago (which is what they told me), I’ll probably relax and stop working on it for, say, the next 5 and a half months. I sometimes think that the most productive academics are the ones who are just really really good at repeatedly lying to themselves.

  • I’m a big believer in structured procrastination when it comes to writing. I try to always have a really unpleasant but not-so-important task in the background, which then forces me to work on only-slightly-unpleasant-but-often-more-important tasks. Except it often turns out that the unpleasant-but-not-so-important task is actually an unpleasant-but-really-important task after all, and then I wake up in a cold sweat in the middle of the night thinking of all the ways I’ve screwed myself over. No, just kidding. I just bitch about it to my wife for a while and then drown my sorrows in an extra helping of ice cream.
  • I’m really, really, bad at restarting projects I’ve put on the back burner for a while. Right now there are 3 or 4 papers I’ve been working on on-and-off for 3 or 4 years, and every time I pick them up, I write a couple of hundred words and then put them away for a couple of months. I guess what I’m saying is that if you ever have the misfortune of collaborating on a paper with me, you should make sure to nag me several times a week until I get so fed up with you I sit down and write the damn paper. Otherwise it may never see the light of day.
  • I like writing fiction in my spare time. I also occasionally write whiny songs. I’m pretty terrible at both of these things, but I enjoy them, and I’m told (though I don’t believe it for a second) that that’s the important thing.

the neuroinformatics of Neopets

In the process of writing a short piece for the APS Observer, I was fiddling around with Google Correlate earlier this evening. It’s a very neat toy, but if you think neuroimaging or genetics have a big multiple comparisons problem, playing with Google Correlate for a few minutes will put things in perspective. Here’s a line graph displaying the search term most strongly correlated (over time) with searches for “neuroinformatics”:

That’s right, the search term that covaries most strongly with “neuroinformatics” is none other than “Illinois film office” (which, to be fair, has a pretty appealing website). Other top matches include “wma support”, “sim codes”, “bed-in-a-bag”, “neopets secret”, “neopets guild”, and “neopets secret avatars”.

I may not have learned much about neuroinformatics from this exercise, but I did get a pretty good sense of how neuroinformaticians like to spend their free time…

 

p.s. I was pretty surprised to find that normalized search volume for just about every informatics-related term has fallen sharply in the last 10 years. I went in expecting the opposite! Maybe all the informaticians were early search adopters, and the rest of the world caught up? No, probably not. Anyway, enough of this; Neopia is calling me!

p.p.s. Seriously though, this is why data fishing expeditions are dangerous. Any one of these correlations is significant at p-less-than-point-whatever-you-like. And if your publication record depended on it, you could probably tell yourself a convincing story about why neuroinformaticians need to look up Garmin eMaps…
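If you want a feel for just how easy it is to fish up an “Illinois film office”, here’s a minimal Python sketch (a toy simulation of my own devising, not anything Google Correlate actually does): generate enough random time series, and some of them are all but guaranteed to track your target nearly perfectly.

```python
import numpy as np

rng = np.random.default_rng(42)
n_weeks = 400       # length of each fake weekly search-volume series
n_terms = 10_000    # number of candidate "search terms" to fish through

# A random walk standing in for normalized searches for "neuroinformatics"
target = rng.standard_normal(n_weeks).cumsum()

# Fish through thousands of equally meaningless random walks
best_r = 0.0
for _ in range(n_terms):
    candidate = rng.standard_normal(n_weeks).cumsum()
    r = np.corrcoef(target, candidate)[0, 1]
    if abs(r) > abs(best_r):
        best_r = r

print(f"best |r| out of {n_terms:,} random series: {abs(best_r):.2f}")
# Reliably prints something north of 0.9: trending series correlate
# spuriously, and with enough candidates, *something* always matches.
```

The same logic applies whether the candidates are search terms, voxels, or SNPs; the only difference is how many of them you get to fish through.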

in which Discover Card decides that my wife is also my daughter

Ever since I opted out of receiving preapproved credit card offers, I’ve stopped getting credit card spam in the mail (yay!). But companies I have an existing relationship with still have the right to send me various offers and updates, and there’s nothing I can do about that (except throw said offers in the trash after inspecting them and deciding that, no, I do not want to purchase the premium yacht travel insurance policy that comes with a bonus free set of matching lawn gnomes and a voucher for a buy-one-get-one-free meal at the Olive Garden). Discover Card is one of these companies, and the clever devils regularly take advantage of my amicable nature by sending me all kinds of wonderful offers. Take for instance the one I received yesterday, which starts like this:

Dear Tal,

You’ve worked for years to provide a better life for your children and prepare them for a successful future. Now that they’re in college, the overwhelming cost of higher education shouldn’t stand in the way of their success. We’re ready to help.

This is undoubtedly a very generous offer, but it comes at an inconvenient time for me, because, as it so happens, I don’t have any children right now–let alone college-aged children who need their father to front them some money. Somewhere, somehow, it seems Discover Card took a left turn at Albuquerque, when all along they were trying to get to Pismo Beach:

http://www.youtube.com/watch?v=v-s-_ME8Qns#t=1m24s

Of course, this isn’t a case of human error; I very much doubt that an overworked analyst is putting in long nights at Discover combing through random customers’ accounts looking for purchases diagnostic of college attendance (you know, like Ritalin receipts). The blame almost certainly rests with an over-inclusive algorithm that combed through my purchase history and automagically decided that I fit the profile of a middle-aged man who’s worked hard for years to provide a better life for his children. (I suppose I can take solace in the fact that while Discover probably knows what brand of toothpaste I like, it must not know my age, given that there aren’t many 31-year-old men with college-aged children.)

Anyway, I spent some time pondering what purchases I’ve made that could have tripped up Discover’s parental alarm system. And after scanning several months of statements, I’m proud to report it almost certainly has something to do with the giant monthly rent charge from “CU Residence Halls” (my wife and I live in on-campus housing). Either that or the many book-and-coffee-related charges from places with names like “University of Colorado Bookstore” and “Pretentious Coffeehouse on CU Campus”.

So that’s easy enough, right? It’s the on-campus purchases, stupid! Ah, but wait! That’s only one part of the mystery! The other, perhaps more interesting, part is this: who exactly does Discover think my college-aged child is, seeing as they clearly think I’m not the one caffeinating myself at the altar of higher education? Well, after thinking about that for a while, another clear answer emerges: it’s my wife! Discover thinks I have a college-aged daughter who also happens to be my wife! There’s no other explanation; to my knowledge, I don’t live with anyone else besides my wife (though, admittedly, I don’t check the storage closet very often).

Now, setting aside the fact that such a thing would be illegal in all fifty states, my wife and I are not very amused by this. We’re mildly amused, but we’re not very amused. But we’re refraining from making too big a fuss about it, because we’re still hoping we can get our hands on some of those sweet, sweet college loans.

In the interim, here are some questions I find myself pondering:

  • Who writes the logic that does this kind of thing? I’m not asking for names; no need to rat out your best friend who works in Discover’s data mining department. I’m just curious to know what kind of background the people who come up with these things have. Artificial intelligence? Marketing research? Dental surgery?
  • How sophisticated are the rules used to screen customers for these mailings? Is there some serious business logic operating behind the scenes that happened to go wrong here, or is a well-meaning Discover employee just running SQL queries like “SELECT name, address FROM members WHERE description LIKE '%residence hall%'” on their lunch break?
  • Do credit card companies that do this kind of thing (which I imagine is pretty much all of them) actually validate their logic against test datasets (in this case, a large group of Discover members whose parental status has been independently verified), or do they just pick some criteria that seem to make sense and immediately start blanketing the United States with flyers?
  • What proportion of false positives is considered reasonable? Clearly, with any kind of program like this, some small number of customers is almost invariably going to get a letter that makes some very bad lifestyle assumptions. At what point does the risk of a backlash start to outweigh the potential for increased revenue? (There’s a toy calculation at the end of this list.) Obviously, the vast majority of people are probably going to chalk this type of thing up to a harmless error, but I imagine some small proportion of people are going to get upset and call up Discover to rant and rave about how they don’t have any children at all, and how dare Discover mine their records like this, and doesn’t Discover have any respect for them as loyal long-standing cardholders, and what’s that, why yes, of course, they’d be quite happy to accept Discover’s apology for this tragic error if it came with a two-for-one gift certificate to the Olive Garden.
  • Most importantly: is it considered fraud if I knowingly fill out an application for student loans in my lovely wife-daughter’s name?
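About that false positive question: here’s the toy calculation promised above, in Python, with every single number invented. It suggests a depressing answer, namely that the occasional mis-identified wife-daughter is probably a rounding error on the balance sheet.

```python
# Back-of-the-envelope: when does a mass mailing like this make sense?
# Every number below is made up purely for illustration.

n_mailed = 1_000_000        # customers flagged by the targeting rule
precision = 0.80            # fraction who really do have college-aged kids
response_rate = 0.02        # true parents who actually take the loan offer
revenue_per_response = 500  # value of one loan to Discover, in dollars
cost_per_letter = 0.60      # printing plus postage
backlash_rate = 0.001       # false positives annoyed enough to complain
cost_per_complaint = 50     # support time, goodwill gift certificates, etc.

true_parents = n_mailed * precision
false_positives = n_mailed - true_parents

profit = (true_parents * response_rate * revenue_per_response
          - n_mailed * cost_per_letter
          - false_positives * backlash_rate * cost_per_complaint)

print(f"expected profit: ${profit:,.0f}")
# With these made-up numbers: roughly $10k in complaint costs against
# roughly $8M in loan revenue, which is presumably why nobody at Discover
# loses sleep over the occasional very bad lifestyle assumption.
```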

what Ben Parker wants you to know about neuroimaging

I have a short opinion piece in the latest issue of The European Health Psychologist that discusses some of the caveats and limits of functional MRI. It’s a short and (I think) pretty readable piece; I touch on a couple of issues I’ve discussed frequently in other papers as well as here on the blog–namely, the relatively low power of most fMRI analyses and the difficulties inherent in drawing causal inferences from neuroimaging results.

More importantly, though, I’ve finally fulfilled my long-held goal of sneaking a Spiderman reference into an academic article (though, granted, one that wasn’t peer-reviewed). It would be going too far to say I can die happy now, but at least I can have an extra large serving of ice cream for dessert tonight without feeling guilty*. And no, I’m not going to spoil the surprise by revealing what Spidey has to do with fMRI. Though I will say that if you actually fall for the hook and go read the article just to find that out, you’re likely to be sorely disappointed.

 

* So okay, the truth is, I never, ever feel guilty for eating ice cream, no matter the serving size.

naked dense bodies provoke depression (and other tall scientific tales)

I’ve been using Mendeley for about a year now, and while there are plenty of kinks left for the developers to iron out (mostly related to the Word plug-in), I have to say I like it a lot overall. I could say more about why I like it a lot, but I won’t, because this isn’t really a post about Mendeley. Rather, it’s a post about one particular group on Mendeley (groups on Mendeley are basically curated sets of thematically related scientific articles). Specifically, the “Creatively named research papers” group.

Since the title of the group is self-explanatory, I’ll just list some of the more noteworthy entries, along with some of the corresponding notes I jotted down (you know, in case I need to refer back to these papers):

 

Naked Dense Bodies Provoke Depression

I don’t think depression is the normative response to this stimulus; this must be a case report.

 

Marvel Universe looks almost like a real social network

“We would like to mention that the actual number of collaborations is 569,770, but this value counts all collaborations in the Marvel Universe history, and while there are 91,040 pairs of characters that have only met once, other pairs have met quite often: for instance, every pair of members of the Fantastic Four has jointly appeared in around 700 comic books (more specifically, this range of collaborations of the members of the Fantastic Four runs between 668 joint appearances of the Thing and the Invisible Woman to 744 joint appearances of the Thing and the Human Torch).” (p. 7)

 

Are Analytic Philosophers Shallow and Stupid?

I’ll leave this one up to the analytic philosophers to mull over. We’ll check back on their progress in another ten or twenty years.

 

Are full or empty beer bottles sturdier and does their fracture-threshold suffice to break the human skull?

Spoiler: the answers are ’empty’ and ‘yes’, respectively.

 

A woman’s history of vaginal orgasm is discernible from her walk

I don’t want to offend anyone, so I’m going to tread very delicately here and just tiptoe away quietly.

 

Traumatic brain injuries in illustrated literature: experience from a series of over 700 head injuries in the Asterix comic books

At some point you kind of start to feel bad for the Romans.

 

Skillful writing of an awful research paper

Pretty sure I already know everything discussed in this article.

 

Chemical processes in the deep interior of Uranus

Obvious joke is obvious.

 

Japan’s Phillips Curve Looks Like Japan

A pretty remarkable article. Gregor Smith isn’t kidding; here’s Japan’s Phillips Curve:

 

Is a jumper angrier than a tree?

Possibly even better than the title of this paper is the set of papers Mendeley thinks are related, which include “The greater-than-g acceleration of a bungee jumper”, “When is a tree more than a tree?”, and my personal favorite, “The Angry, the Angrier, and the Angriest: Relationship Implications”.

 

The Penetration of a Finger into a Viscous Fluid in a Channel and Tube

It’s not often you find your finger stuck in an oil-filled Chinese finger trap, but when it inevitably does happen, you’ll be very glad you read this paper.

 

Executive Decision-Making in the Domestic Sheep

I’m a big fan of studies involving clever sheep.

 

Numerical simulation of fundamental trapped sausage modes

Alternative title: What’s the optimal amount of time to microwave a midnight snack for?

 

Accidental condom inhalation

You’re doing it wrong.

 

On the Effectiveness of Aluminium Foil Helmets: An Empirical Study

Pfft. Like anyone who wears one of these things is going to believe results published by agents of the scientific-industrial complex.

 

Experiments with genitalia : a commentary

Abstract: “There has been a recent burst of studies of the function of genitalia, many of which share several important shortcomings. Given that further studies on this topic are likely (there are probably millions of species showing rapid genital divergence), I discuss the studies critically to promote clear formulation of hypotheses and interpretation of results in the future. I also emphasize some possibly important but neglected variables, including female stimulation, phylogenetic contexts, and the behavior of male genitalia, and outline simple techniques that could improve future studies.”

 

The earth is round (p < .05)

For shame! This one has no business being in this group! It’s an excellent title to one of the best commentaries on psychological methods ever written!

 

Amusing titles in scientific journals and article citation

Yes, you’re very clever, person who added this self-referential article to the group.

 

The ethics of eating a drug-company donut

It starts with a donut, and before you know it, you’re spending your lunch break stuffing boxes full of Pfizer pens down your shirt pocket.

 

Rectal impalement by pirate ship: A case report

You’re definitely doing it wrong.

 

Anyway, I’m sure this is just a tiny fraction of the creatively-named scientific literature. If you know of (or have authored) any worthy candidates, add them to the Mendeley group–or just indulge me and post them below in the comments. Note that in this context ‘creatively named’ seems to mean humorous rather than clever. There are probably many more clever titles out there than funny ones (a trend abetted by the fact that a clever title is pretty much a prerequisite for publishing in Psychological Science at this point), but for purposes of this thread, we don’t want to hear about your naked dense bodies unless they’re funny-looking!

brain-based prediction of ADHD–now with 100% fewer brains!

UPDATE 10/13: a number of commenters left interesting comments below addressing some of the issues raised in this post. I expand on some of them here.

The ADHD-200 Global Competition, announced earlier this year, was designed to encourage researchers to develop better tools for diagnosing mental health disorders on the basis of neuroimaging data:

The competition invited participants to develop diagnostic classification tools for ADHD diagnosis based on functional and structural magnetic resonance imaging (MRI) of the brain. Applying their tools, participants provided diagnostic labels for previously unlabeled datasets. The competition assessed diagnostic accuracy of each submission and invited research papers describing novel, neuroscientific ideas related to ADHD diagnosis. Twenty-one international teams, from a mix of disciplines, including statistics, mathematics, and computer science, submitted diagnostic labels, with some trying their hand at imaging analysis and psychiatric diagnosis for the first time.

Data for the contest came from several research labs around the world, who donated brain scans from participants with ADHD (both inattentive and hyperactive subtypes) as well as healthy controls. The data were made openly available through the International Neuroimaging Data-sharing Initiative, and nicely illustrate the growing movement towards openly sharing large neuroimaging datasets and promoting their use in applied settings. It is, in virtually every respect, a commendable project.

Well, the results of the contest are now in–and they’re quite interesting. The winning team, from Johns Hopkins, came up with a method that performed substantially above chance and showed particularly high specificity (i.e., it made few false diagnoses, though it missed a lot of true ADHD cases). And all but one team performed above chance, demonstrating that the imaging data has at least some utility (though currently not a huge amount) in diagnosing ADHD and ADHD subtype. There are some other interesting results on the page worth checking out.

But here’s hands-down the most entertaining part of the results, culled from the “Interesting Observations” section:

The team from the University of Alberta did not use imaging data for their prediction model. This was not consistent with intent of the competition. Instead they used only age, sex, handedness, and IQ. However, in doing so they obtained the most points, outscoring the team from Johns Hopkins University by 5 points, as well as obtaining the highest prediction accuracy (62.52%).

…or to put it differently, if you want to predict ADHD status using the ADHD-200 data, your best bet is to not really use the ADHD-200 data! At least, not the brain part of it.
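Out of curiosity, here’s roughly what a brain-free baseline of that sort might look like (a hypothetical sketch running on simulated data; the results page doesn’t describe the Alberta team’s actual model, so the details below are mine):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fake phenotypic data standing in for the ADHD-200 variables the Alberta
# team reportedly used: age, sex, handedness, and IQ. No brains required.
rng = np.random.default_rng(0)
n = 776  # roughly the size of the ADHD-200 training release
X = np.column_stack([
    rng.uniform(7, 21, n),       # age in years
    rng.integers(0, 2, n),       # sex
    rng.integers(0, 2, n),       # handedness
    rng.normal(110, 14, n),      # full-scale IQ
])
y = rng.integers(0, 2, n)        # diagnosis (here: pure noise)

clf = make_pipeline(StandardScaler(), LogisticRegression())
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
# On this noise, accuracy hovers around chance; the punchline of the
# competition is that on the real phenotypic data, something this simple
# was apparently enough to edge out all the brain-based models.
```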

I say this with tongue embedded firmly in cheek, of course; the fact that the Alberta team didn’t use the imaging data doesn’t mean imaging data won’t ultimately be useful for diagnosing mental health disorders. It remains quite plausible that ten or twenty years from now, structural or functional MRI scans (or some successor technology) will be the primary modality used to make such diagnoses. And the way we get from here to there is precisely by releasing these kinds of datasets and promoting this type of competition. So on the whole, I think this should actually be seen as a success story for the field of human neuroimaging–especially since virtually all of the teams performed above chance using the imaging data.

That said, there’s no question this result also serves as an important and timely reminder that we’re still in the very early days of brain-based prediction. Right now anyone who claims they can predict complex real-world behaviors better using brain imaging data than using (much cheaper) behavioral data has a lot of ‘splainin to do. And there’s a good chance that they’re trying to sell you something (like, cough, neuromarketing ‘technology’).

the short but eventful magnetosensing life of cows

I’ve given several talks in the last few months about the Neurosynth framework, which is designed to help facilitate large-scale automated meta-analysis of fMRI data (see this paper, or these slides from my most recent talk). On a couple of occasions, I’ve decided to start out by talking about something other than brains. In particular, I’ve opted to talk about cows. Specifically, the cows in this study:

…in which the authors–Sabine Begall and colleagues–took Google Earth satellite imagery like this (yes, those tiny ant-like blobs are cows):

…and performed the clever trick of using Google Earth to determine that cows (and deer too!) naturally tend to align themselves along a geomagnetic north-south axis. In other words, cows have magnets in their brains! You have to admit that’s pretty amazing (unless you’re the kind of person who refuses to admit anything is amazing in the presence of other people, even though you secretly look them up and marvel at them later when you’re alone in your bedroom).

Now, superficially, this finding doesn’t actually have very much to do with any of the work I’ve done recently. Okay, not just superficially; it really has absolutely nothing to do with any of the work I’ve done recently. But the more general point I was trying to make was that advances in technology often allow us to solve scientific problems we couldn’t address before, even when the technology in question was originally designed for very different purposes (and I’m pretty confident that Google Earth wasn’t conceived as a means of studying cow alignment). That’s admittedly a bit (okay, totally) grandiose inasmuch as none of the work I’ve done on Neurosynth is in any way comparable to the marvel that is Google Earth. But, you know, it’s the principle that counts. And the principle is that we should try to use the technology we have (and here I’m just talking about the web, not billion dollar satellites) to do neat scientific things.

Anyway, I was feeling quite pleased with myself for coming up with this completely tangential introduction–so much so that I used it in two or three talks, to great success (by which I mean it confused the hell out of the audience). But then one day I made a horrible mistake. And that horrible mistake was to indulge the nagging little voice that kept saying, come now, cows with magnetic brains? really? maybe you should double-check this, just to make sure. So the last time I was about to use the cow slides, I went and did a lit search just to make sure I was still on the cutting edge of the bovine geomagnetic sensing literature. Well, as it turns out I was NOT on the cutting edge! I’d fallen off the edge! Way off! Just a few months ago, you see, this little gem popped up in the literature:

Basically the authors tried to replicate the Begall et al findings and couldn’t. They argued that the original findings were likely due to poor satellite imagery coupled with confirmation bias. So it now appears that cows don’t have the foggiest conception of magnetic fields after all. They just don’t get to join the sparrow-and-spiny-lobster club, no matter how much they whine to the bouncer at the door. Which leads me to my current predicament: what the hell should I do about the cow slides I went to the trouble of making? (Yes, this is the kind of stuff I worry about at midnight on a Wednesday after I’ve written as many job application cover letters as I can deal with in one night, and have safely verified that my Netflix Instant queue contains 233 movies I have no interest at all in watching.)

I suppose the reasonable thing to do would be to jettison the cow slides entirely. But I don’t really want to do that. It’s not like there’s a lack of nifty* and unexpected uses of technology to solve scientific problems; it’s just that, you know, I kind of got attached to this particular example. Plus I’m lazy and don’t want to revise my slides if I can help it. The last time I presented the cow slides in a talk–which was after I discovered that cows don’t know the north-south axis from a hole in the ground–I just added a slide showing the image of the Hert et al rebuttal paper you see above, and called it a “postscript”. Then I made some lame comment about how, hah, you see, just like you can use Google Earth to discover interesting new findings, you can also use it to debunk interesting spurious findings, so that’s still okay! But that’s not going to cut it; I’m thinking that next time out, I’m going to have to change things up. Still, to minimize effort, maybe I’ll keep the Google Earth thing going, but simply lose the cows. Instead, I can talk about, I don’t know, using satellite imagery to discover long-buried Mayan temples and Roman ruins. That still sort of counts as science, right?

 

 

* Does anyone still use the word ‘nifty’ in casual conversation? No? Well I like it, so there.

in which I suffer a minor setback due to hyperbolic discounting

I wrote a paper with some collaborators that was officially published today in Nature Methods (though it’s been available online for a few weeks). I spent a year of my life on this (a YEAR! That’s like 30 years in opossum years!), so go read the abstract, just to humor me. It’s about large-scale automated synthesis of human functional neuroimaging data. In fact, it’s so about that that that’s the title of the paper*. There’s also a companion website over here, which you might enjoy playing with if you like brains.

I plan to write a long post about this paper at some point in the near future, but not today. What I will do today is tell you all about why I didn’t write anything about the paper much earlier (i.e., 4 weeks ago, when it appeared online), because you seem very concerned. You see, I had grand plans for writing a very detailed and wonderfully engaging multi-part series of blog posts about the paper, starting with the background and motivation for the project (that would have been Part 1), then explaining the methods we used (Part 2), then the results (III; let’s switch to Roman numerals for effect), then some of the implications (IV), then some potential applications and future directions (V), then some stuff that didn’t make it into the paper (VI), and then, finally, a behind-the-science account of how it really all went down (VII; complete with filmed interviews with collaborators who left the project early due to creative differences). A seven-part blog post! All about one paper! It would have been longer than the article itself! And all the supplemental materials! Combined! Take my word for it, it would have been amazing.

Unfortunately, like most everyone else, I’m a much better person in the future than I am in the present; things that would take me a week of full-time work in the Now apparently take me only five to ten minutes when I plan them three months ahead of time. If you plotted my temporal discounting curve for intellectual effort, it would look like this:
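In equation form, it’s just garden-variety hyperbolic discounting, V = A / (1 + kD), where D is the number of days until the deadline; here’s a quick matplotlib rendition, with all the numbers invented:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hyperbolic discounting of effort: a task that takes A hours of real work
# feels like only A / (1 + k*D) hours when it's still D days away.
days_away = np.linspace(0, 90, 200)
A, k = 40.0, 0.5  # A = actual hours of work; k = steepness of self-delusion

plt.plot(days_away, A / (1 + k * days_away))
plt.xlabel("days until I actually have to do the thing")
plt.ylabel("perceived effort (hours)")
plt.title("my temporal discounting curve for intellectual effort")
plt.show()
```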

So that’s why my seven-part series of blog posts didn’t debut at the same time the paper was published online a few weeks ago. In fact, it hasn’t debuted at all. At this point, my much more modest goal is just to write a single much shorter post, which will no longer be able to DEBUT, but can at least slink into the bar unnoticed while everyone else is out on the patio having a smoke. And really, I’m only doing it so I can look myself in the eye again when I look myself in the mirror. Because it turns out it’s very hard to shave your face safely if you’re not allowed to look yourself in the eye. And my labmates are starting to call me PapercutMan, which isn’t really a superpower worth having.

So yeah, I’ll write something about this paper soon. But just to play it safe, I’m not going to operationally define ‘soon’ right now.

 

* Three “that”s in a row! What are the odds! Good luck parsing that sentence!

sunbathers in America

This is fiction. Kind of. Science left for a few days and asked fiction to care for the house.


I ran into my friend, Cornelius Kipling, at the grocery store. He was ahead of me in line, holding a large eggplant and a copy of the National Enquirer. I didn’t ask about it.

I hadn’t seen Kip in six months, so we went for a walk along Boulder Creek to catch up. Kip has a Ph.D. in molecular engineering from Ben-Gurion University of the Negev, and an MBA from an online degree mill. He’s the only person I know who combines an earnest desire to save the world with the scruples of a small-time mafia don. He’s an interesting person to talk to as long as you remember that he gets most of his ideas out of mail-order catalogs.

“What are you working on these days,” I asked him after I’d stashed my groceries in the fridge and retrieved my wallet from his pocket. Last I’d heard Kip was involved in a minor arson case and couldn’t come within three thousand feet of any Monsanto office.

“Saving lives,” he said, in the same matter-of-fact way that a janitor will tell you he cleans bathrooms. “Small lives. Fireflies. I’m making miniature organic light-emitting diodes that save fireflies from certain death at the hands of the human industrial-industrial complex.”

“The industrial human what?”

“Exactly,” he said, ignoring the question. “We’re developing new LEDs that mimic the light fireflies give off. The purpose of the fire in fireflies, you see, is to attract mates. Bigger light, better mate. The problem is, humans have much bigger lights than fireflies. So fireflies end up trying to mate with incandescents. You turn on a light bulb outside, and pffftttt there go a dozen bugs. It’s genocide, only on a larger scale. Whereas the LEDs we’re building attract fireflies like crazy but aren’t hot enough to harm them. At worst, you’ve got a device guaranteed to start a firefly orgy when it turns on.”

“Well, that absolutely sounds like another winning venture,” I said. “Oh, hey, what happened to the robot-run dairy you were going to start?”

“The cow drowned,” he said wistfully. We spent a few moments in silence while I waited for conversational manna to rain down on my head. It didn’t.

“I didn’t mean to mock you,” I said finally. “I mean, yes, of course I meant to mock you. But with love. Not like an asshole. You know.”

“S’okay. Your sarcasm is an ephemeral, transient thing–like summer in the Yukon–but the longevity of the firefly is a matter of life and death.”

“Sure it is,” I said. “For the fireflies.”

“This is the potential impact of my work right now,” Kip said, holding his hands a foot apart, as if he were cupping a large balloon. “The oldest firefly in captivity just turned forty-one. That’s eleven years older than us. But in the wild, the average firefly only lives six weeks. Mostly because of contact with the residues of the industrial-industrial complex. Compact fluorescents, parabolic aluminized reflectors, MR halogens, Rizzuto globes, and regular old incandescents. Historically, the common firefly stood no chance against us. But now, I am its redress. I am the Genghis Khan of the Lampyridae Mongol herd. Prepare to be pillaged.”

“I think you just make this stuff up,” I said, wincing at the analogy. “I mean, I’m not one hundred percent sure. But I’m very close to one hundred percent sure.”

“Your envy of other people’s imagination is your biggest problem,” said Kip, rubbing his biceps in lazy circles through his shirt. “And my biggest problem is: I need more imaginative friends. Just this morning, in the shower, this question popped into my head, and it’s been bugging me ever since: if you could be any science fiction character, who would you be? But I can’t ask you what you think; you have no vision. You didn’t even ask me why I was checking out with nothing but an eggplant when you saw me at the grocery store.”

“It’s not a vision problem,” I said. “It’s strictly a science fiction problem. I’m just no good at it. I’ll sit down to read a Ben Bova book, and immediately my egg timer will go off, or I’ll remember I need to renew my annual subscription to Vogue. That stuff never happens when I read Jane Austen or Asterix. Plus, I have this long-standing fear that if I read a lot of sci-fi, I’ll learn too much about the future; more than is healthy for any human being to know. There are like three hundred thousand science fiction novels in print, but we only have one future between all of us. The odds are good that at least one of those novels is basically right about what will happen. I won’t even watch a ninety-minute slasher film if someone tells me ahead of time that the killer is the girl from Ipanema with the dragon tattoo; why would I want to read all that science fiction and find out that thirty years from now, sentient goats from Zorbon will land on Mt. Rushmore and enslave us all, starting with the lawyers?”

“See,” he said. “No answer. Simple question, but no answer.”

“Fine,” I said. “If I must. Hari Seldon.”

“Good. Why?”

“Because,” I said, “unlike the real world, Hari Seldon lives in a mysterious future where psychologists can actually predict people’s behavior.”

“Predicting things is not so hard,” said Kip. “Take for instance the weather. It’s like ninety-three degrees today, which means the nudists will be out in force on the rocks by the Gold Run condos. It’s the only time they have a legitimate excuse to expose their true selves.”

We walked another fifty paces.

“See?” he said, as we stepped off a bridge and rounded a corner along the path. “There they are.”

I nodded. There they were: young, old, and pantsless all over.

“Personally, I always wanted to be Superman,” Kip said as we kept walking. He traced an S through his sweat-stained shirt. “Like every other kid I guess. But then when I hit puberty, I realized being Superman is a lot of responsibility. You can’t sit naked on the rocks on a hot day. Not when you’re Superman. You can’t really do anything just for fun. You can’t punch a hole in the wall to annoy your neighbor who smokes a pack a day and makes the whole building smell like stale menthol. You can’t even use your x-ray vision to stare at his wife in the shower. You need a reason for everything you do; the citizens of Metropolis demand accountability. So instead of being Superman, I figured I’d keep the S on the chest, but make it stand for ‘Science’. And now my guiding philosophy is to go through life always performing random acts of scientific kindness but never explicitly committing to help anyone. That way I can be a fundamentally decent human being who still occasionally pops into a titty bar for a late buffet-style lunch.”

I stared at him in awe, amazed that so much light and air could stream out of one man’s ego. I think in his mind, Kip really believed that spending all of his time on personal science projects put him on the side of the angels. That St. Peter himself would one day invite him through the Pearly Gates just to hang out and compare notes on fireflies. And then of course Kip would get to tell St. Peter, “no thanks,” and march right past him into a strip club.

My mental cataloging of Kip’s character flaws was broken up by an American White Pelican growling loudly somewhere in the sky above us. It spun around a few times before divebombing into the creek–an ambivalently graceful entrance reminiscent of Greg Louganis at the ’88 Olympics. American White Pelicans aren’t supposed to plunge-dive for food, but I guess that’s the beauty of America; anyone can exercise their individuality at any given moment. You can get Superman, floating above Metropolitan landmarks, eyeing anonymous bathrooms and wishing he could use his powers for evil instead of good; Cornelius Kipling, with ideas so grand and unattainable they crush out every practical instinct in his body; and me, with my theatrical vision of myself–starring myself, as Hari Seldon, the world’s first useful psychologist!

And all of us just here for a brief flash in the goldpan of time; just temporary sunbathers in America.

“You’re overthinking things again,” Kip said from somewhere outside my head. “I can tell. You’ve got that dumb look on your face that says you think you have a really deep thought on your face. Well, you don’t. You know what, forget the books; the nudists have the right idea. Go lie on the grass and pour some goddamn sunshine on your skin. You look even whiter than I remembered.”

amusing evidence of a lazy cut and paste job

In the course of a literature search, I came across the following abstract, from a 1990 paper titled “Taking People at Face Value: Evidence for the Kernel of Truth Hypothesis”, and taken directly from the publisher’s website:

Two studies examined the validity of impressions based on static facial appearance. In Study 1, the content of previously unacquainted classmates’ impressions of one another was assessed during the 1st, 5th, and 9th weeks of the semester. These impressions were compared with ratings of facial photographs of the participants that were provided by a separate group of unacquainted judges. Impressions based on facial appearance alone predicted impressions provided by classmates after up to 9 weeks of acquaintance. Study 2 revealed correspondences between self ratings provided by stimulus persons, and ratings of their faces provided by unacquainted judges. Mechanisms by which these links may develop are discussed.

Now fully revealed by the fire and candlelight, I was amazed more than ever to behold the transformation of Heathcliff. His countenance was much older in expression and decision of feature than Mr. Linton’s; it looked intelligent and retained no marks of former degradation. A half civilized ferocity lurked yet in the depressed brows and eyes full of black fire, but it was subdued.

 

Apparently social psychology was a much more interesting place in 1990.

Some more investigation revealed the source of the problem. Here’s the first page of the PDF:

 

So it looks to be a lazy cut and paste job on the publisher’s part rather than a looking glass into the creative world of scientific writing in the early 1990s. Which I guess is for the best, otherwise Diane S. Berry would be on the hook for plagiarizing from Wuthering Heights. And not in a subtle way either.