Has the liberal bias in psychology contributed to the replication crisis?
While liberal bias per se is not associated with research replicability, highly politically biased findings of either slant (liberal or conservative) are less robust.
02 April 2019
By Jesse Singal
There's no simple explanation for why psychology has been hit so hard by the replication crisis – it's the result of a complicated mix of professional incentives, questionable research practices, and other factors, including the sheer popularity of the sorts of sexy, counterintuitive findings that make for great TED Talk fodder.
But that might not be the entire story. Some have also posited a more sociological explanation: political bias. After all, psychology is overwhelmingly liberal. Estimates vary and depend on the methodology used to generate them, but among professional psychologists the ratio of liberals to conservatives is something like 14:1.
A new PsyArXiv preprint first-authored by Diego Reinero at New York University – and involving an "adversarial collaboration" in which "two sets of authors were simultaneously testing the same question with different theoretical commitments" – has looked for evidence to support this explanation, and found that while liberal bias per se is not associated with research replicability, highly politically biased findings of either slant (liberal or conservative) are less robust.
The crisis-spurred-by-bias theory goes something like this: Because psychology is so dominated by liberals, a certain groupthink sets in. Findings that appeal to liberal sensibilities aren't viewed as critically – they're ushered through the scientific gatekeeping process rather than carefully evaluated. For proponents of this theory, it's easy to come up with examples: The Implicit Association Test, for example, posits a very liberal-friendly theory – "[almost] everyone's a little bit racist" – and has been hobbled, lately, by serious questions about what, exactly, it measures.
But that sort of theorising only gets one so far: If there's an actual bias helping to determine which questionable findings get published, it should show up at a more zoomed-out level.
For the new research, consisting of two studies, nearly 200 psychology findings described in published research abstracts (chosen because they have all been subject to subsequent replication attempts in the literature) were rated on a 1-5 scale from liberal to conservative – or put in a separate, no-political-valence category – by coders not otherwise affiliated with the study. In the first study the coders were a "politically balanced sample of six psychology doctoral students (including self-identified liberals, moderates, and conservatives)", and in the other they were "a larger, politically diverse sample of American residents (using an online Mechanical Turk sample)."
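To make that coding scheme more concrete, here is a minimal sketch of how ratings like these could be aggregated into a single slant score per finding. The study names, the rating values, the 1 = liberal to 5 = conservative orientation of the scale, and the simple averaging step are all illustrative assumptions; the preprint's actual data and procedure may differ.

```python
import statistics

# Hypothetical ratings from six coders for three findings, on a 1-5 scale
# (1 = strongly liberal-slanted, 5 = strongly conservative-slanted).
# Findings judged to have no political valence would sit in a separate category.
coder_ratings = {
    "racism-threat study": [1, 2, 1, 2, 1, 2],
    "secrets-and-hills study": [3, 3, 3, 3, 4, 3],
    "centerfold-erotica study": [4, 5, 4, 4, 5, 4],
}

# One simple way to get a per-finding slant score: average across coders.
slant_scores = {name: statistics.mean(ratings) for name, ratings in coder_ratings.items()}

for name, score in slant_scores.items():
    print(f"{name}: mean slant rating = {score:.2f}")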
For example, a research finding showing that the perceived threat of appearing racist caused whites to distance themselves from blacks in an experiment was coded as liberal; a study in which people burdened with secrets perceived hills to be steeper and weights to be heavier was coded as moderate; and a study which showed that exposure to centerfold erotica caused students to rate photos of opposite-sex nude people as less attractive, relative to the ratings of students in a control condition exposed to abstract art, was coded as conservative.
Overall, the majority of findings were rated as being centrist or politically neutral; among those with a bias, there were more findings rated liberal than conservative.
If a liberal political bias really has been fueling the replication crisis, it would be through allowing methodologically weaker papers to be published when they had a liberal message, and these papers would then be less likely to replicate later on (replication efforts tend to be conducted carefully enough that they would be unlikely to fall victim to simple political bias).
However, the authors found no evidence to support this idea: the ideological slant of the original research alone explained "less than 1%" of "the variance in replicability" in the first study and less than 2 per cent in the second. This would seem like a serious strike against any straightforward idea of a connection between liberal bias and psychology's replication woes.
But the researchers also tested for whether the ideological extremity of findings, liberal or conservative, was correlated with a lack of replicability. That is, they ignored whether each abstract was coded as liberal or conservative and focused instead on its "distance" from being apolitical.
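As a rough illustration of that "distance" idea – not the authors' exact operationalisation – extremity can be expressed as how far a finding's slant score falls from the scale's neutral midpoint. The example scores below and the choice of 3 as the midpoint are assumptions for the sake of the sketch.

```python
# Illustrative slant scores on a 1-5 scale (1 = liberal, 5 = conservative),
# e.g. averaged across coders as in the earlier sketch.
slant_scores = {
    "racism-threat study": 1.5,
    "secrets-and-hills study": 3.2,
    "centerfold-erotica study": 4.3,
}

# Treat 3 as the neutral midpoint (an assumption) and measure extremity as the
# absolute distance from it, so a strongly liberal finding and a strongly
# conservative finding come out as equally extreme.
NEUTRAL_MIDPOINT = 3.0
extremity = {name: abs(score - NEUTRAL_MIDPOINT) for name, score in slant_scores.items()}

for name, value in extremity.items():
    print(f"{name}: ideological extremity = {value:.2f}")
```

Under this folding, a finding rated 1 (strongly liberal) and one rated 5 (strongly conservative) both score 2.0 on extremity, which is what lets the analysis set aside the direction of the slant.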
Here's where the "no liberal bias" takeaway becomes slightly more complicated. With this technique, the researchers found that the more highly ideological a paper was, whether to the left or the right, the less likely it was to replicate – what they called an "ideological extremity effect". For instance, a strongly liberal-rated study, which showed how subtle cues could make African-American professionals feel unwelcome, failed to replicate, as did the aforementioned conservative-rated study on centerfold erotica. Overall, "more ideologically slanted research (regardless of liberal vs. conservative ideological slant) was between 34% (Study 1) and 6% (Study 2) less likely to replicate."
"Taken together our results are a starting point for a richer conversation about the role and influence of politics in science," the researchers concluded. "Our work suggests that politics may play a role in scientific replicability but not in the way many scholars have thought, as we did not find evidence of a liberal bias and instead found preliminary evidence of an ideological extremity effect." It suggests that yes, there's a link between ideology and replication-woes, but one that operates in a significantly more complicated way than some replication-crisis-theorists have posited.
Further reading
—Is the Political Slant of Psychology Research Related to Scientific Replicability? [this study is a preprint, meaning that it has not yet been subject to peer review, and the final published version may differ from the version this report was based on]
About the author
Post written by Jesse Singal (@JesseSingal) for the BPS Research Digest. Jesse is a contributing writer at BPS Research Digest and New York Magazine, and he publishes his own newsletter featuring behavioral-science talk. He is also working on a book for Farrar, Straus and Giroux about why shoddy behavioral-science claims sometimes go viral.