Much ink has been spilled in the last week or so over the so-called “tone” problem in psychology, and what to do about it. I speak here, of course, of the now infamous (and as-yet unpublished) APS Observer column by APS Past President Susan Fiske, in which she argues rather strenuously that psychology is in danger of … Continue reading There is no “tone” problem in psychology
I like to think of myself as a data-respecting guy–by which I mean that I try to follow the data wherever it leads, and work hard to suppress my intuitions in cases where those intuitions are convincingly refuted by the empirical evidence. Over the years, I’ve managed to argue myself into believing many things that … Continue reading the mysterious inefficacy of weather
A long, long time ago (in social media terms), I wrote a post defending Facebook against accusations of ethical misconduct related to a newly-published study in PNAS. I won’t rehash the study, or the accusations, or my comments in any detail here; for that, you can read the original post (I also recommend reading this … Continue reading In defense of In Defense of Facebook
This is not a blog post about bullying, negative psychology or replication studies in general. Those are important issues, and a lot of ink has been spilled over them in the past week or two. But this post isn’t about those issues (at least, not directly). This post is about ceiling effects. Specifically, the ceiling … Continue reading There is no ceiling effect in Johnson, Cheung, & Donnellan (2014)
By now you will most likely have heard about the “Many Labs” Replication Project (MLRP)–a 36-site, 12-country, 6,344-subject effort to try to replicate a variety of classical and not-so-classical findings in psychology. You probably already know that the authors tested a variety of different effects–some recent, some not so recent (the oldest one dates back … Continue reading What we can and can’t learn from the Many Labs Replication Project
I’m working on a TOP SEKKRIT* project involving large-scale data mining of the psychology literature. I don’t have anything to say about the TOP SEKKRIT* project just yet, but I will say that in the process of extracting certain information I needed in order to do certain things I won’t talk about, I ended up … Continue reading what do you get when you put 1,000 psychologists together in one journal?
You could be forgiven for thinking that academic psychologists have all suddenly turned into professional whistleblowers. Everywhere you look, interesting new papers are cropping up purporting to describe this or that common-yet-shady methodological practice, and telling us what we can collectively do to solve the problem and improve the quality of the published literature. In … Continue reading the truth is not optional: five bad reasons (and one mediocre one) for defending the status quo
The latest issue of the APS Observer features a special section on methods. I contributed a piece discussing the need for a full-fledged discipline of psychoinformatics: Scientific progress depends on our ability to harness and apply modern information technology. Many advances in the biological and social sciences now emerge directly from advances in the large-scale acquisition, … Continue reading bio-, chemo-, neuro-, eco-informatics… why no psycho-?
Andrew Gelman discusses a “puzzle that’s been bugging [him] for a while”: Pop economists (or, at least, pop micro-economists) are often making one of two arguments: 1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist. 2. People are irrational and they need … Continue reading we, the people, who make mistakes–economists included
Somehow I wound up profiled in this month’s issue of the APS Observer as a “Rising Star”. I’d like to believe this means I’m a really big deal now, but I suspect what it actually means is that someone on the nominating committee at APS has extraordinarily bad judgment. I say this in no small … Continue reading the APS likes me!