The Reproducibility Project: Disaster or triumph for psychology?
…or both? Our editor Jon Sutton reports on today's announcement, and on how it felt for psychologists to be part of the initiative.
28 August 2015
'Ever since Brian Nosek and his colleagues first set up the Reproducibility Project in 2011 many psychologists have been twitchy.' So begins psychologist Professor Dorothy Bishop, writing in The Guardian. How are psychologists feeling today, now that the Project has announced its finding that only 36 per cent of results in psychology appear to stand up to a replication attempt? [See coverage and links on our Research Digest blog]
The reaction has so far been decidedly mixed, with some sections of the media reporting that the 'study reveals that a lot of psychology research really is just "psycho-babble"'. Science journalist Ed Yong presents a more reasoned account, pointing out that nobody really knows whether 36 per cent is good or bad. Scientists and psychologists themselves appear to be looking on the bright side: John Ioannidis, who in 2012 published his anticipation of a 53 per cent replication failure rate for psychology, writes that 'Hopefully this successful, highly informative paradigm will help improve research practices in this field. Many other scientific fields without strong replication cultures may also be prompted now to embrace replications and reproducible research practices.' Writing on Mind Hacks, psychologist Dr Vaughan Bell says the project is 'either a disaster, a triumph or both for psychology'.
We recommend you read all the coverage, starting with our Research Digest and moving on via Ed Yong. Here we take a slightly different tack: how did those who took part in the project feel about it? Two psychologists gave their reactions around the press conference and the announcement in Science.
Joshua Correll, a psychologist at the University of Colorado, Boulder: 'This is how science works. How else will we converge on the truth? Really, the surprising thing is that this kind of systematic attempt at replication is not more common. [The failure to replicate my results, by Etienne LeBel's lab at the University of Western Ontario in Canada] does not convince me that my original effects were [a] fluke. I know that other researchers have found similar patterns ... and my lab has additional data that supports our basic claim.'
Dr E.J. Masicampo: 'I led a research team that conducted one of the 100 replications, and at the same time I was an original author of one of the studies that was targeted for replication by another team.
So I'll share my experience in both of these roles and talk about how the project was conducted. The project was crowdsourced to the research community. Researchers volunteered for the project after hearing about it through various outlets such as conference presentations or psychology listservs.
In the end, enough research teams volunteered that 100 replications were conducted, with the vast majority of replication teams led by researchers with PhDs.
As research teams joined the effort, they were presented with a pool of studies drawn from three prominent psychology journals, and the replication teams selected the studies they would replicate based on their interests, expertise and available resources.
The next steps involved designing the study and then conducting it, and at both stages numerous measures were taken to maximize the quality of the replications.
When designing their studies, replication teams consulted with original authors. This ensured that the replication studies were as faithful as possible to the originals. Each study was also highly powered, meaning teams planned to run enough participants to ensure high odds of detecting the original effect.
Before data were collected, the study designs were reviewed both by the original authors and by third-party reviewers. In addition, to increase transparency, replication teams registered and uploaded their methodology to a central repository.
So this openness served to increase each replication team's accountability, and ideally the quality of their work. Replication teams then collected and analyzed their data according to the preregistered plan.
Teams then completed a write-up of the results and their interpretation, which was again reviewed by the original authors and a third party; once again, to increase transparency, final write-ups were uploaded to a central repository.
So to reiterate some of the project's features: this was a highly collaborative process, not just in terms of many teams contributing replication studies, but also in terms of the individual replications involving collaboration between replication teams and original authors.
There were third-party reviews at each stage of the process, and the entire process was transparent, with all materials registered and made publicly available. So personally, I found the project to be precisely what I hoped it would be.
As the leader of a replication team, I was excited by the scope of the project. I felt I was taking part in an important, groundbreaking effort, and this motivated me to invest heavily in the replication study that I conducted.
I also saw the project as a fun opportunity to replicate a well-respected study that I had long been fascinated by. In fact, the study I replicated is one that I teach to my undergraduates every semester.
The experience was made even better by the fact that I was able to work closely with the original authors in executing the study. As an original author whose work was being replicated, I felt that my research was being treated in the best way possible.
The replication team was competent and motivated. I was consulted at every stage of the process. Everything was transparent, and I felt confident that a best attempt at replicating my work was being made.
So in short, I think these were well-planned replications conducted by highly qualified and motivated scientists. Thus, one major benefit of this project is that it really served as a model for how to conduct high-quality replications and how to coordinate a very large number of them.'
Are you a psychologist who took part in the project? We would love to hear your views on the process. Or perhaps you are a psychologist with a view on whether that 36 per cent figure is good, bad or both? Comment below (if you are a British Psychological Society member), e-mail [email protected] or tweet @psychmag.
- See also our opinion special on replication from May 2012, and our announcement of the initiative.
Other links:
Here's why science can't make up its mind
Why so many psych studies may be false
We found only one-third of published psychology research is reliable - now what?
No, science's reproducibility problem is not limited to psychology
Coverage in the Times Higher Education
Three popular psychology studies that did not hold up
Now what?
How to rethink psychology
Psychology is not in crisis
Psychology's reproducibility problem is journalism's problem too