
Was the “crisis” in social psychology really that bad? Have things improved? Part One: the researchers’ perspective

Not perfect, but getting better, is the take within the field: a cautious optimism compared to some dire pronouncements on the state of psychology.

25 May 2017

By Alex Fradera

The field of social psychology is reeling from a series of crises that call into question the everyday scientific practices of its researchers. The fuse was lit by epidemiologist John Ioannidis in 2005, in a paper that outlined why, thanks particularly to what are now termed "questionable research practices" (QRPs), over half of all published research in the social and medical sciences might be invalid. Kaboom. This shook a large swathe of science, but the fires have burned especially fiercely in social and personality psychology, fields that marshalled their response through a 2012 special issue of Perspectives on Psychological Science that brought these concerns fully into the open, discussing replication failure, publication bias, and how to reshape incentives to improve the field. The fire flared up again in 2015 with the publication of Brian Nosek and the Open Science Collaboration's high-profile attempt to replicate 100 studies from these fields, which succeeded in only 36 per cent of cases. Meanwhile, and to the field's credit, efforts to institute better safeguards, such as registered reports, have gathered pace.
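To get a feel for where a figure like "over half" can come from, here's a back-of-the-envelope sketch of the Ioannidis argument, using illustrative numbers of our own choosing (his paper works through a range of scenarios, and also factors in bias and multiple competing teams):

```python
# A back-of-the-envelope sketch of Ioannidis's (2005) argument, with
# illustrative numbers (not his): what share of "significant" findings
# are actually true, i.e. the positive predictive value (PPV)?
alpha = 0.05   # false-positive rate of a single significance test
power = 0.35   # assumed typical statistical power of studies in the field
prior = 0.10   # assumed fraction of tested hypotheses that are really true

true_hits = prior * power          # real effects correctly detected
false_hits = (1 - prior) * alpha   # null effects wrongly flagged as real
ppv = true_hits / (true_hits + false_hits)
print(f"Positive predictive value: {ppv:.0%}")  # ~44%: most findings false
```

With low power and only a modest share of true hypotheses, fewer than half of "significant" results reflect real effects – and QRPs, by inflating the effective false-positive rate, only push that figure lower.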

So how bad did things get, and have they really improved? A new article, available as a preprint at the Journal of Personality and Social Psychology, tries to tackle the issue from two angles: first, by asking active researchers what they think of the past and present state of their field, and how they now go about conducting psychology experiments; and second, by analysing features of published research to estimate the prevalence of broken practices more objectively.

The paper comes from a large group of authors at the University of Illinois at Chicago. It was led by Matt Motyl, a social and personality psychologist who has previously published with Nosek, including on the issue of improving scientific practice, under the guidance of Linda Skitka, a distinguished social psychologist who helped create the journal Social Psychological and Personality Science and who sits on the editorial boards of many more social psychology journals.

Psychology research is the air that we breathe at the Digest, making it crucial that we understand its quality. So in this two-part series, we're going to explore the issues raised in the University of Illinois at Chicago paper, to see if we can make sense of the state of social psychology, beginning in this post with the findings from Motyl et al.'s survey of approximately 1,200 social and personality psychologists, from graduate students to full professors, mainly from the US, Europe and Australasia.


Motyl's team began by asking their participants about the state of the field now compared with 10 years ago. On average, participants believed that older research would replicate in only 40 per cent of cases – quite close to Nosek's figure – but that research being conducted now would fare better, at around 50 per cent, and that, in general, the field was improving in response to the crisis.

Motyl's team also canvassed the respondents on a range of questionable research practices: sketchy behaviours like neglecting to report all the measures taken, or quietly dropping experimental conditions from your study. Thanks particularly to work by Joseph Simmons, Leif Nelson and Uri Simonsohn, we understand just how much these practices compromise the assumptions of statistical significance testing, making it easy to produce false-positive results even in the absence of fraudulent intent. In their words, QRPs are not wrong "in the way it's wrong to jaywalk", the way researchers have often implicitly been encouraged to think of them, but "wrong the way it's wrong to rob a bank."
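To see the bank-robbery point concretely, here's a quick illustrative simulation in the spirit of Simmons and colleagues' demonstrations (our own sketch, not their code): even with no true effect anywhere, measuring two correlated outcomes and reporting whichever one "works" pushes the false-positive rate well past the nominal 5 per cent.

```python
# Illustrative simulation of one QRP: collecting two correlated outcome
# measures and reporting a "finding" if either reaches significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group = 10_000, 20
cov = [[1.0, 0.5], [0.5, 1.0]]  # two outcome measures, correlated at r = .5
false_positives = 0

for _ in range(n_sims):
    # Both groups drawn from identical distributions: NO true effect exists
    a = rng.multivariate_normal([0, 0], cov, size=n_per_group)
    b = rng.multivariate_normal([0, 0], cov, size=n_per_group)
    p1 = stats.ttest_ind(a[:, 0], b[:, 0]).pvalue
    p2 = stats.ttest_ind(a[:, 1], b[:, 1]).pvalue
    if min(p1, p2) < 0.05:  # the QRP: report whichever measure "worked"
        false_positives += 1

print(f"False-positive rate: {false_positives / n_sims:.1%}")  # ~9%, not 5%
```

Stack a few such degrees of freedom together – optional stopping, dropped conditions, flexible covariates – and, as Simmons and colleagues showed, the false-positive rate can climb past 60 per cent.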

Previous surveys of researchers' own QRP usage have uncovered high levels of admissions, as if the field were rushing to the confession box to purge its sins. Here, Motyl's team used finer-grained questioning to look at frequency (often a "yes" turned out to mean "rarely" or "once") and justification. In some cases, a researcher's justification showed that they had misinterpreted the question and were actually expressing strong disapproval of the QRP – in fact, this seemed to be the case in virtually all "confessions" of data fabrication. In other cases, the context provided by a justification painted the particular research practice in a completely different light.

For example, consider the seemingly dodgy decision to drop conditions from your study analysis. If your rationale is that the condition didn't do what you wanted it to do – in an emotion and memory study, your sad video didn't actually put participants in a sad mood, for instance – it's more problematic to keep what is effectively a bogus condition in your analysis than it is to exclude it (ideally in a principled way, according to a registered procedure). For the new survey, independent judges evaluated all the stated justifications, and felt they legitimised the "questionable" practices in 90 per cent of cases.

Discovering these misunderstandings and justifiable practices littered through the QRP data led Motyl's team to conclude that pre-explosion psychology practices weren't as derelict as once feared, although the fact that 70 per cent of respondents said they are now less likely to engage in many of these practices than they were ten years ago suggests that all was not entirely virtuous back then.

So not perfect, but getting better, is the take within the field: a cautious optimism compared to some dire pronouncements on the state of psychology. In Part Two, we'll look at the body of psychological research itself, to see if this optimism is justified.