Dr Charlotte Pennington

Children of the replication crisis

Dr Charlotte Pennington on the importance of teaching students how open science can reform psychology.

27 June 2024

It's 2011 and a series of landmark events is about to plunge psychology into an ongoing period of deep reflection. Nine experiments appearing to show evidence for precognition (Bem, 2011) are found to be 'too good to be true' (Wagenmakers et al., 2011); Diederik Stapel admits to fabricating data in multiple influential social psychological studies (Stapel, 2012); and meta-researchers shine a light on how questionable research practices allow researchers to present almost anything as significant (John et al., 2012; Simmons et al., 2011). These events would lead Pashler and Wagenmakers (2012) to ask, in a special journal issue on the replicability of psychological science, whether the field was facing a 'crisis of confidence'.

An answer came from one of the largest replication efforts in psychology's history. In 2015, the Open Science Collaboration shared the results of a three-year-long project where they attempted to replicate 100 randomly selected studies from three prestigious psychology journals, using the original materials and high-powered designs. 

Only 36 per cent of studies replicated 'successfully', yielding significant effects in the same direction as the originals, and the replication effect sizes were around half those originally reported. Social psychology fared particularly badly, with only 25 per cent of effects replicating (Open Science Collaboration, 2015). Shockwaves spread across the discipline: psychology had indeed found itself amidst a replication crisis.

I am a child of this replication crisis. 

If only I had been taught about replication all along

I started my PhD in Experimental Social Psychology in 2013 on the topic of stereotype threat, the situational phenomenon whereby awareness of negative societal stereotypes can have a detrimental effect on performance (Steele & Aronson, 1995). The typical experimental hypothesis goes like this: if you make a person aware of a negative stereotype relating to a group with which they identify (e.g., 'women are bad at mathematics') then, on average, they will perform worse relative to a control group who are not made aware of this same stereotype.

As an undergraduate, I had been taught about this topic through novel, flashy multi-study papers and their seemingly concrete conclusions. It fascinated me, and it spoke to me: as someone who had struggled with mathematics at school, I felt that it could explain my 'fixed mindset' when it came to solving maths problems. The theory is convincing: we have all experienced stereotypes about the groups with which we identify, and it makes sense that these stereotypes influence how we feel about ourselves and how we behave.

And in this very same way, the theory was influential too – stereotype threat was put forward to explain academic achievement gaps across a range of diverse groups and multiple performance domains. Books on the topic sold in their thousands (Steele, 2010) and TED Talks drove it into public discourse.

With the literature on stereotype threat seemingly so conclusive, I jumped straight in at the deep end aiming to identify the mechanisms of the stereotype threat-performance relationship (Pennington et al., 2016). But my research journey would not be as easy as the published findings led me to believe.

In the first year of my PhD, I ran multiple studies and failed to find the typical stereotype threat effect time and time again. I piloted many different ways of evoking the effect, priming stereotypes both explicitly and implicitly, and testing multiple performance tasks. I reasoned that the primes were not strong enough and the tasks too easy. 

At the end of my second year, I decided to replicate a classic stereotype threat study, which felt like a fail-safe because either a positive or a negative result could lend support to one of two competing theories. But, again, my findings were null or inconclusive. Why could I not find the same effects that seasoned researchers could? What was I doing wrong? It must be me – I was the failure!

I tried to publish my null studies, but journal editors and reviewers weren't convinced they had a place in the literature. One reviewer suggested that I had "built a straw man and burnt it", pointing to perceived flaws in the methodological design that, they argued, explained away the null results. I wonder whether this reviewer would have written their review in the same disparaging way had they known that an early career researcher was sitting behind the computer screen.

Around the time that I completed my PhD, replications were rare and disincentivised in academic culture (see Koole & Lakens, 2012). Journals explicitly stated that they wanted "novel and important data" and "cutting-edge research". This incentive culture trickled down into research training, with my PhD centred on making "an original contribution to knowledge". With the 'publish or perish' culture ever-salient, I worried immensely about not having any publications to my name.

I found myself becoming increasingly demotivated to pursue my PhD. I lost trust in the stereotype threat literature and questioned everything I thought I knew about psychology. I became stressed, anxious, and depressed. My passion for psychology had been erased.

Open science role models

It was in 2016 that I graduated with my PhD and decided to focus on teaching, joining Lancaster University as a Teaching Associate. There I was paired with a mentor, Dr Dermot Lynott, who would reignite my passion for psychological research. 

Dermot invited me to join Lancaster's Open Science Working Group, which provided the answers to the questions I had asked myself throughout my PhD. I vividly remember going to a session on publication bias, where the speaker outlined how journals disproportionately publish positive results over null or negative ones.

Why had no one taught me about this during my undergraduate degree? Why were my lectures filled with newsworthy studies full of spin and hype? I left the talk feeling like a new door had opened, showing me a better and brighter future for psychological science. But a door I wish had always been open.

During my time at Lancaster, Dermot encouraged me to get involved in two Registered Replication Reports, gaining hands-on experience of conducting large-scale replications of classic priming studies alongside international researchers. 

These studies integrated open science practices throughout: the study protocols were submitted and approved through the Registered Report format whereby publication decisions focus on methodological rigour rather than the eventual results (see Chambers & Tzavella, 2022), and the materials and data were made publicly available via the Open Science Framework. The studies did not replicate the original effects, with a sizeable decrease in the observed effect sizes (see McCarthy et al., 2018; Verschuere et al., 2018).

On a personal level, the results of these replications felt validating. It turned out that my experiences during my PhD were perfectly normal. Studying complex phenomena is difficult, and human behaviour is among the most complex of all. There may be more (so-called) failures than successes. Yet there are still so many early career researchers who don't realise this, and the literature landscape does nothing to educate them (see Scheel et al., 2021). Both success and failure are part of a healthy scientific process, and both lead us closer to the truth.

And another truth is this – if it weren't for open science role models, I would not have continued to pursue an academic career. Open science gave me the hope that things can get better because at its heart lies transparency. 

If researchers preregister their study protocols and make their materials, code and data publicly available, then the research community can verify studies and build upon them. In this way, open science can reduce questionable research practices, or make them more detectable, realigning the incentive structures that traditionally underpin research culture (i.e., a refocus on quality over quantity). Openness can also help to reduce publication bias by making outputs more discoverable (even if a researcher does not publish a study, there is a record that it was conducted; see Ensinck & Lakens, 2023). 

But for open science reform to be sustained, we need to teach it to the next generations of students who will take our discipline forward. That means embedding it from the outset – within the higher education curriculum.

Teaching the grassroots

In the years since I completed my PhD, I have grown ever more passionate about open science. But I have noticed a discrepancy between the new ways of doing psychological research and how students are taught about research. 

If you turn the pages of any social psychology textbook, you will see sections on stereotype threat, ego depletion, mindset, and priming without any mention of the hugely mixed evidence and replication failures. The history of the replication crisis is not taught as an essential component within UK higher education, and undergraduate empirical projects still focus on novelty and independent contributions rather than rigour, open research, and big team science (see Pennington et al., 2022; Pennington, 2023).

Most open science efforts have focused on seasoned researchers, and whilst incredible resources exist that teach students elements of these practices (e.g., Azevedo et al., 2019; Chopik et al., 2018; Wagge et al., 2019), there was no comprehensive guidebook providing both a historical overview of the replication crisis and an account of how open science can fix research culture.

So, I wrote a book dedicated to my younger self starting out in psychology. A Student's Guide to Open Science: Using the Replication Crisis to Reform Psychology walks students, and their educators, through key events that sparked the replication crisis, discusses the causes and drivers of this crisis, and provides a hands-on guide to open science practices that can be digested in a 'buffet-style approach' (see Bergmann, 2023). 

It contains top tips for preprints, preregistration, Registered Reports, and open materials, data, and code, as well as pedagogical activities to reinforce learning throughout. Importantly, it also presents a critical outlook on open science, evaluating evidence from meta-research on how well these practices are working in practice and asking where inequalities still lie within research culture (are we at risk of restraining progress through #bropenscience? See Whitaker & Guest, 2020).

I hope this book becomes an essential guide to navigating the replication crisis and integrating open science into teaching and research training. But I also hope it is so much more. I want it to encourage students to pursue psychology as a field of study, allowing the best researchers to stay. By embedding the teaching of open science into the higher education curriculum, I believe that we can make these practices normative, paving the way to a stronger and more credible psychological science.

From pessimism to optimism

I have spoken to some educators who worry that teaching students about the replication crisis will make them pessimistic about the state of the field and lead them to pursue other programmes of study, contributing to a 'leaky pipeline'. I actually believe that the opposite is true. 

Teaching students about the difficulties inherent in the research process can make them critical consumers of evidence and improve their scientific literacy (Pownall et al., 2023). This allows them to apply such critical thinking to their daily lives (e.g., in the case of the COVID-19 pandemic; Besançon et al., 2021). Moreover, improving students' understanding of replication can prevent them from pursuing lines of research that lead to dead ends and waste scarce resources. If we do not teach the grassroots about the history of the replication crisis, then we risk history repeating itself.

Open science can be taught directly within psychology modules (e.g., social psychology, research methods) and also within the research process itself. Educators may want to embed consortium studies within the undergraduate dissertation, whereby students work in groups to conduct a replication or an original study, showcasing their own expertise and skills by improving the conceptualisation of the research and contributing their own research questions (see Button et al., 2016; Pennington et al., 2022).

For example, a supervisor may propose a study that investigates whether inhibitory control training is effective in reducing unhealthy eating behaviour (primary research question), with one student investigating whether this is moderated by Body Mass Index and another exploring whether this is moderated by emotional eating tendencies (secondary research questions). 

Open science practices can be embedded throughout the process, with the students co-developing a preregistration protocol and learning how to make their materials, data, and code transparent and reproducible. Along the way, the supervisor can check their students' understanding of the research design and analysis, with this feedback reaching students at a far earlier stage than in a traditional dissertation. Some researchers have even told me that trialling open research with their students eased them into integrating such practices consistently within their own research!
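To make this concrete, here is a minimal sketch of what a shared, preregistered analysis script for the dissertation example above might look like in Python. The file name and variable names (condition, bmi, snack_intake) are hypothetical placeholders for illustration, not materials from an actual project.

import pandas as pd
import statsmodels.formula.api as smf

# Load the anonymised dataset shared alongside the preregistration
# (hypothetical file name).
data = pd.read_csv("consortium_data.csv")

# Primary research question: does inhibitory control training (condition)
# reduce unhealthy eating (snack_intake)?
primary = smf.ols("snack_intake ~ condition", data=data).fit()
print(primary.summary())

# Secondary research question (one student's contribution): is the training
# effect moderated by Body Mass Index? The '*' expands to both main effects
# plus their interaction, which carries the moderation test.
moderation = smf.ols("snack_intake ~ condition * bmi", data=data).fit()
print(moderation.summary())

Because a script like this is written and shared before data collection, anyone can later verify that the reported analyses match the preregistered ones.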

The knowledge garnered from teaching open science also transfers outside of the classroom, providing students with enhanced graduate skills. For example, organising and managing data is a skill that many employers value, and writing detailed and rigorous reports can help companies reflect on what is and is not working in order to improve their revenue. One of my students told me that, in an interview for an industry job, the panel were "blown away" by her knowledge of open science and the skills she could bring to the role. She got the job!

Hopes for the future

Change is afoot, and we are seeing many advancements through open science reform. There has been an increase in researchers submitting preregistrations and Registered Reports, as well as growing support for these formats from journals and funders. The UK Reproducibility Network and its international partners promote the wide-scale adoption of open practices, and ECR-led initiatives such as ReproducibiliTEA and the RIOT Science Club provide a space for critical discussion through inclusive and friendly discourse. We also have organisations that support the integration of open science into the higher education curriculum, such as the Framework for Open and Reproducible Research Training, which is open for anybody to join!

Many of the people who are a part of these organisations have one thing in common: they too are children of the replication crisis.

And we are being listened to. In July 2021, the UK House of Commons launched an inquiry into reproducibility and research integrity, investigating the causes of non-reproducibility and exploring solutions to ensure responsible research practices. One of the recommendations put forward was for attention to be placed on teaching reproducibility, replication, and research integrity across undergraduate, postgraduate, and early career stages (UK House of Commons, 2021). This recommendation was accepted in a joint response from the government and UK Research and Innovation (UKRI), stating: 

"UKRI welcomes the committee's conclusion that attention be placed on high-quality training for all individuals involved in the research system. Continuous development is essential for everyone from students to experienced professionals, particularly to understand their role and responsibility when it comes to research integrity" (UK House of Commons, 2023, pp. 9).

I hope that we now see an action plan for the implementation of these recommendations by the British and Irish psychological societies, the charitable bodies that set the accreditation standards for psychology programmes in the UK and Ireland.

But we need more children of the replication crisis to implement these changes. We don't just need these people on the membership boards of open science reform initiatives. We need them on the boards of all journals, hiring and promotion panels, and in the reshaping of academic practice and policy (e.g., the UK's Research Excellence Framework). 

Crucially, we also need them to teach and train the next generation of students to ensure long-term, sustained research reform. Only then can we reshape the perverse incentive structures that academia has historically been built upon and tip the balance truly in favour of open science.

In ten years' time, I want to pick up a copy of my book in a dusty bookstore to find that it is heavily thumbed and annotated, but horribly outdated. No longer do we see questionable practices, publication bias and research misconduct plague the literature. No longer are replications rare and undervalued. And no longer do the best early career researchers leave academia because hype and spin are more incentivised than rigour and transparency.

Dr Charlotte R. Pennington is a Lecturer in Psychology at Aston University and the author of A Student's Guide to Open Science: Using the Replication Crisis to Reform Psychology. You can follow Charlotte on X, Bluesky, and Mastodon: @drcpennington