A new age for psychology?
Ella Rhodes reports on developments in the 'replication debate'.
04 December 2017
Psychological science may just be in the midst of a renaissance. After waves of revelations about the apparent inaccuracy of research, questionable statistical practices and the lack of successful replications of psychology studies, are there signs that the tide has turned?
In 2011 a number of events led psychology into a period of scrutiny, resulting in many of the changes we have seen in recent years. That year social psychologist Diederik Stapel was suspended from Tilburg University for fabricating data. Papers were also published illustrating how p-hacking (trying out many analyses of the same data and reporting only those that reach statistical significance) can produce false positives: for example, listening to a particular song could statistically be shown to reduce one's age!
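To see why this matters, consider a minimal simulation, a hypothetical sketch in Python rather than anything from the original paper. Two groups are drawn from the same population, so any 'significant' difference between them is a false positive; yet analysing several outcome measures and reporting only the one that 'worked' pushes the false-positive rate well above the nominal 5 per cent.

import numpy as np
from scipy import stats

# Hypothetical sketch: there is no true effect, so every 'significant' result is a false positive.
rng = np.random.default_rng(0)
n_sims, n_per_group, n_outcomes = 10_000, 20, 5

honest_hits = hacked_hits = 0
for _ in range(n_sims):
    a = rng.normal(size=(n_per_group, n_outcomes))  # group A: five outcome measures, no real effect
    b = rng.normal(size=(n_per_group, n_outcomes))  # group B: drawn from the same distribution
    pvals = [stats.ttest_ind(a[:, j], b[:, j]).pvalue for j in range(n_outcomes)]
    honest_hits += pvals[0] < 0.05    # analyse only the single pre-specified outcome
    hacked_hits += min(pvals) < 0.05  # p-hack: report whichever outcome happened to come out significant

print(f"Pre-specified outcome, false-positive rate: {honest_hits / n_sims:.3f}")          # close to 0.05
print(f"Best of {n_outcomes} outcomes, false-positive rate: {hacked_hits / n_sims:.3f}")  # roughly 0.2

Even with only five ways of analysing the same null data, roughly one in five simulated studies produces a 'significant' finding; adding further researcher degrees of freedom, such as optional stopping, covariate choices and outlier exclusions, inflates the rate further still.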
In a new preprint, the authors of that music paper, Leif Nelson (University of California, Berkeley), Joseph Simmons and Uri Simonsohn (University of Pennsylvania), turn their attention to psychology's changing face. They suggest that, thanks to the increased prevalence of data sharing, open-access publication, pre-registered studies, the publishing of negative findings and attempted replications, psychological science is experiencing a renaissance.
Nelson told us about some of the most encouraging changes in the field since 2011. While he said the most observable change had been an increase in sample sizes, with samples in the hundreds or thousands becoming relatively common, the most beneficial change had been a move towards transparent reporting: 'By no means has that change been complete (many journals are still quite lax in requesting such reporting), but I have already seen a change in expectation for the completeness with which an author should report scientific details. Going forward, I think that pre-registration will have huge positive consequences for the field. Adoption of pre-registration has been somewhat slower, but in just the last few years it has gone from a non-existent novelty to a frequent and reasonable presence in many journals. My guess is that in 10 years most published research in psychology will be pre-registered.'
Under pre-registration, a research question, methods and planned analyses are registered prior to data collection; in the registered report format, this plan is submitted to and reviewed by a journal before any data are gathered. The aim is to shift the emphasis in publishing away from a bias towards positive results and towards sound methodology and important research questions. Around 80 journals now accept registered reports in some form, thanks largely to the advocacy of Professor Chris Chambers (Cardiff University); in 2012 that number was one, with Cortex, on whose editorial board Chambers sits, the first to accept them.
Nelson and his colleagues believe that p-hacking explains how psychologists have so often used underpowered studies with small numbers of participants yet have still uncovered positive results. He told us that he had himself been guilty but that his work today, compared with 10 years ago, was entirely different: 'I would look at my data in many different ways and convince myself that the analysis that looked the best was probably the correct analysis. Honestly, I am sure that I would fall into the exact same traps today – I am no less biased and self-serving than I was 10 years ago – but there is less allowance for it. I pre-register studies. I replicate findings. I know that all of my data and materials will be posted. All of those features keep my self-serving biases in check.'
Though the point is debated, p-hacking may help explain why so few key findings in psychology are reproducible. Professor Brian Nosek, co-founder of the Centre for Open Science, and his colleagues brought widespread attention to this in their attempt to replicate 100 psychology studies, of which only 36 per cent produced a significant result when replicated. Nosek (University of Virginia), who also helped to launch the Open Science Framework, an online platform for sharing methods and data, told us that while change was afoot, much still needed to be done: 'Journals and researchers that are leading the renaissance have demonstrated that interventions such as pre-registration, badges for open practices, and registered reports can be implemented efficiently.'
Researchers are actively working to assess the effects of these interventions on the accessibility of data and the reproducibility of research. Nosek added: 'What we need now is to scale the adoption of the new practices across the psychology community and to continuously evaluate their impact. But it can't only be the leading voices that are willing to act.' A full cultural shift towards openness and reproducibility requires action by all society leaders, journal editors and researchers to adopt practices that will accelerate progress in psychology, Nosek said. 'The most exciting aspect of the renaissance is that we are turning our scientific skills to investigating and improving our own research culture and practices. The psychology of science will first improve our field, but will eventually improve all of science.'
The renaissance in psychology has also led to open, public scrutiny of some academic work on blogs and Twitter. Several researchers recently turned their attention to the work of food psychologist Brian Wansink and detailed their concerns over many of his findings and methods online. Amy Cuddy's work on power posing has also come under detailed and public scrutiny. Some have called this 'methodological terrorism' or bullying, while others argue that open debate and scrutiny are an essential part of academia.
Professor Daryl O'Connor (University of Leeds), Chair of the British Psychological Society's Research Board, said it was an exciting time for psychology, with the field leading the way in the last decade. 'Researchers have begun to embrace open science, pre-registration and large-scale replication efforts, and recognise the risks of p-hacking and other questionable research practices.' But he added: 'It is important that we continue to work collaboratively and to keep the tone of the debate collegiate, non-judgemental and supportive. As a result, our renaissance will propel psychological researchers forward by improving scientific practice and trigger new ways of working that will ultimately improve the robustness of our evidence base.'
One example of large-scale collaboration is the recently founded Psychological Science Accelerator, which is bringing together a worldwide network of labs to work on replications and other research questions. Founded by Christopher Chartier (Ashland University, Ohio), who initially hoped to create a CERN for psychology, the Accelerator already has 180 member labs in 40 countries. Anyone can submit research proposals to the Accelerator; these are reviewed and selected by a large subset of network members. The group has recently selected its first study: Ben Jones and Lisa DeBruine (University of Glasgow) proposed to test whether Oosterhof and Todorov's (2008) valence-dominance model of social perception generalises across world regions.
Chartier said he wanted to set up the network to improve the reliability and generalisability of psychological evidence: 'We have frequently seen small studies coming out of independent labs that are later difficult to reproduce in larger collaboratively collected samples. I'm hoping the Accelerator can speed up the process of confirmation in psychological science by quickly gathering huge global data sets on our important research questions. Our hope is to consistently select exciting projects from all areas of psychology that the network can then collect data for. We are a standing lab network: instead of recruiting labs anew for each data collection project, we can quickly match labs with appropriate projects. We're also geographically dispersed, so data collection will occur all around the world, not just in the traditional strongholds of North America and Europe.'
When asked what we should turn our attention to in the future in terms of methodological overhauls and changes in practice, Nelson said: 'At every turn we have been surprised by what we have come to recognise as critical. When we originally talked about transparent reporting we thought that pre-registration was ponderous and unreasonable, so we ignored it. Three years later we came to see it as useful to the point of being all but essential. If you ask me today what will be most important in the future, I would not trust myself to have a correct answer. Perhaps we are simply not good at knowing what will come next? Instead, we try to advocate for those tools which we know to be useful now while continuously thinking about other changes that might be useful in the future.'
- Find much more on these issues in our archive, and also see our recent interview with Marcus Munafo.