A political candidate running for regional public office asked a famous political psychologist what kind of television ads she should air in three heavily contested districts: positive ones emphasizing her own record, or negative ones attacking her opponent’s record. “You’re in luck,” said the psychologist. “I have a new theory of persuasion that addresses exactly … Continue reading The parable of the three districts: A projective test for psychologists
The annual Association for Psychological Science meeting is coming up in San Francisco this week. One of the cross-cutting themes this year is “Big Data: Understanding Patterns of Human Behavior”. Since I’m giving two Big Data-related talks (1, 2), and serving as discussant on a related symposium, I’ve been spending some time recently trying to … Continue reading Big Data, n. A kind of black magic
By now you will most likely have heard about the “Many Labs” Replication Project (MLRP)–a 36-site, 12-country, 6,344-subject effort to try to replicate a variety of classical and not-so-classical findings in psychology. You probably already know that the authors tested a variety of different effects–some recent, some not so recent (the oldest one dates back … Continue reading What we can and can’t learn from the Many Labs Replication Project
I’m working on a TOP SEKKRIT* project involving large-scale data mining of the psychology literature. I don’t have anything to say about the TOP SEKKRIT* project just yet, but I will say that in the process of extracting certain information I needed in order to do certain things I won’t talk about, I ended up … Continue reading what do you get when you put 1,000 psychologists together in one journal?
You could be forgiven for thinking that academic psychologists have all suddenly turned into professional whistleblowers. Everywhere you look, interesting new papers are cropping up purporting to describe this or that common-yet-shady methodological practice, and telling us what we can collectively do to solve the problem and improve the quality of the published literature. In … Continue reading the truth is not optional: five bad reasons (and one mediocre one) for defending the status quo
I’ve written a few posts on this blog about how the development of better online infrastructure could help address and even solve many of the problems psychologists and other scientists face (e.g., the low reliability of peer review, the ‘fudge factor’ in statistical reporting, the sheer size of the scientific literature, etc.). Actually, that general … Continue reading tracking replication attempts in psychology–for real this time
Andrew Gelman discusses a “puzzle that’s been bugging [him] for a while”: Pop economists (or, at least, pop micro-economists) are often making one of two arguments: 1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist. 2. People are irrational and they need … Continue reading we, the people, who make mistakes–economists included
A provocative and very short Opinion piece by Julien Mayor (Are scientists nearsighted gamblers? The misleading nature of impact factors) was recently posted on the Frontiers in Psychology website (open access! yay!). Mayor’s argument is summed up nicely in this figure: The left panel plots the mean versus median number of citations per article in … Continue reading how many Cortex publications in the hand is a Nature publication in the bush worth?
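The mean-versus-median gap that Mayor’s figure turns on comes from the heavy right skew of citation distributions: a few highly cited papers pull the mean far above the median. A minimal sketch with made-up numbers (not Mayor’s data; the lognormal model is just a common rough stand-in for citation counts):

```python
# Illustrative sketch (hypothetical numbers, not Mayor's data): citation
# counts are heavily right-skewed, so a journal's mean citations per
# article can sit far above its median.
import random
import statistics

random.seed(0)

# Hypothetical journal: most papers get a handful of citations,
# a few get hundreds (a lognormal is a common rough model for this).
citations = [int(random.lognormvariate(1.5, 1.5)) for _ in range(1000)]

mean = statistics.mean(citations)
median = statistics.median(citations)
print(f"mean = {mean:.1f}, median = {median}")  # mean well above median
```

Judging a journal by its mean (which is essentially what the impact factor does) therefore says more about its outliers than about its typical article.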
Let’s suppose you were charged with the important task of naming all the various subdisciplines of neuroscience that have anything to do with the field of research we now know as psychology. You might come up with some or all of the following terms, in no particular order: Neuropsychology Biological psychology Neurology Cognitive neuroscience Cognitive … Continue reading the naming of things
There’s a time-honored tradition in the social sciences–or at least psychology–that goes something like this. You decide on some provisional number of subjects you’d like to run in your study; usually it’s a nice round number like twenty or sixty, or some number that just happens to coincide with the sample size of the last … Continue reading the capricious nature of p < .05, or why data peeking is evil
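The practice the post describes — adding subjects in batches and re-testing until p dips below .05 — can be simulated directly. A minimal sketch (my own, not from the post), using a two-sample z-test with known variance under a true null effect, to show how peeking inflates the false-positive rate:

```python
# Illustrative sketch: simulate "data peeking" -- test after each new batch
# of subjects and stop as soon as p < .05 -- under a true null effect.
import math
import random

random.seed(1)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def peeked_experiment(start_n=20, step=10, max_n=100):
    """Add subjects in batches, testing after each; True if p < .05 ever."""
    x = [random.gauss(0, 1) for _ in range(max_n)]  # group 1, true effect = 0
    y = [random.gauss(0, 1) for _ in range(max_n)]  # group 2
    for n in range(start_n, max_n + 1, step):
        mx, my = sum(x[:n]) / n, sum(y[:n]) / n
        z = (mx - my) / math.sqrt(2 / n)             # known sd = 1
        if two_sided_p(z) < .05:
            return True                              # "significant" -- stop here
    return False

sims = 4000
rate = sum(peeked_experiment() for _ in range(sims)) / sims
print(f"false-positive rate with peeking: {rate:.3f}")  # well above the nominal .05
```

Each individual test holds its nominal 5% error rate, but taking the minimum p-value across repeated looks at growing data does not — which is exactly why data peeking is evil.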