Still not selective: comment on comment on comment on Lieberman & Eisenberger (2015)

In my last post, I wrote a long commentary on a recent PNAS article by Lieberman & Eisenberger claiming to find evidence that the dorsal anterior cingulate cortex is “selective for pain” using my Neurosynth framework for large-scale fMRI meta-analysis. I argued that nothing about Neurosynth supports any of L&E’s major conclusions, and that they …

the weeble distribution: a love story

“I’m a statistician,” she wrote. “By day, I work for the census bureau. By night, I use my statistical skills to build the perfect profile. I’ve mastered the mysterious headline, the alluring photo, and the humorous description that comes off as playful but with a hint of an edge. I’m pretty much irresistible at this …

Internal consistency is overrated, or How I learned to stop worrying and love shorter measures, Part I

[This is the first of a two-part series motivating and introducing precis, a Python package for automated abbreviation of psychometric measures. In part I, I motivate the search for shorter measures by arguing that internal consistency is highly overrated. In part II, I describe some software that makes it relatively easy to act on this …

There is no ceiling effect in Johnson, Cheung, & Donnellan (2014)

This is not a blog post about bullying, negative psychology or replication studies in general. Those are important issues, and a lot of ink has been spilled over them in the past week or two. But this post isn’t about those issues (at least, not directly). This post is about ceiling effects. Specifically, the ceiling …

what exactly is it that 53% of neuroscience articles fail to do?

[UPDATE: Jake Westfall points out in the comments that the paper discussed here appears to have made a pretty fundamental mistake that I then carried over to my post. I’ve updated the post accordingly.] [UPDATE 2: the lead author has now responded and answered my initial question and some follow-up concerns.] A new paper in Nature Neuroscience …

The homogenization of scientific computing, or why Python is steadily eating other languages’ lunch

Over the past two years, my scientific computing toolbox has been steadily homogenizing. Around 2010 or 2011, my toolbox looked something like this: Ruby for text processing and miscellaneous scripting; Ruby on Rails/JavaScript for web development; Python/Numpy (mostly) and MATLAB (occasionally) for numerical computing; MATLAB for neuroimaging data analysis; R for statistical analysis; R for plotting …

the truth is not optional: five bad reasons (and one mediocre one) for defending the status quo

You could be forgiven for thinking that academic psychologists have all suddenly turned into professional whistleblowers. Everywhere you look, interesting new papers are cropping up purporting to describe this or that common-yet-shady methodological practice, and telling us what we can collectively do to solve the problem and improve the quality of the published literature. In …

R, the master troll of statistical languages

Warning: what follows is a somewhat technical discussion of my love-hate relationship with the R statistical language, in which I somehow manage to waste 2,400 words talking about a single line of code. Reader discretion is advised. I’ve been using R to do most of my statistical analysis for about 7 or 8 years now–ever …

A very classy reply from Karl Friston

After writing my last post critiquing Karl Friston’s commentary in NeuroImage, I emailed him the link, figuring he might want the opportunity to respond, and also to make sure he knew my commentary wasn’t intended as a personal attack (I have enormous respect for his seminal contributions to the field of neuroimaging). Here’s his very …

Sixteen is not magic: Comment on Friston (2012)

UPDATE: I’ve posted a very classy email response from Friston here. In a “comments and controversies” piece published in NeuroImage last week, Karl Friston describes “Ten ironic rules for non-statistical reviewers”. As the title suggests, the piece is presented ironically; Friston frames it as a series of guidelines reviewers can follow in order to ensure …