
A recipe for (attempting to) replicate existing findings in psychology

11 November 2013

By Christian Jarrett

Regular readers of this blog will know that social psychology has gone through a traumatic time of late. Some of its most high-profile proponents have been found guilty of research fraud. And some of the field's landmark findings have turned out to be less robust than hoped. This has led to soul-searching, and one proposal for strengthening the discipline is to encourage more replication attempts of existing research findings.

To this end, some journals have introduced dedicated replication article formats and pressure is building on others to follow suit. As the momentum for reform builds, an international team of psychologists has now published an open-access article outlining their "Replication Recipe" – key steps for conducting and publishing a convincing replication.

This is an important development because when high-profile findings have failed to replicate, there's been a tendency in recent times for ill-feeling and controversy to ensue. In particular, on more than one occasion the authors of the original findings have complained that the failed replication attempt was of poor quality or not suitably similar to the original methods. In fact, it's notable that one of the co-authors of this new paper is Ap Dijksterhuis, who published his own tetchy response to a failed replication of his work earlier this year.

Dijksterhuis and the others, led by Mark Brandt at Tilburg University (coincidentally the institution of the disgraced psychologist Diederik Stapel), outline five main ingredients for a successful replication recipe:

  1. Carefully defining the effects and methods that the researcher intends to replicate;
  2. Following as exactly as possible the methods of the original study (including participant recruitment, instructions, stimuli, measures, procedures, and analyses);
  3. Having high statistical power [this usually means having a large enough sample];
  4. Making complete details about the replication available, so that interested experts can fully evaluate the replication attempt (or attempt another replication themselves);
  5. Evaluating replication results, and comparing them critically to the results of the original study.

To help the would-be replicator, the authors have also compiled a checklist of 36 decisions that should be made, and items of information that should be collated, before the replication attempt begins. It appears as table 1 in their open-access article and they've made it freely available for completion and storage as a template on the Open Science Framework.

Here are some more highlights from their paper:

In deciding which past findings are worth attempting to replicate, Brandt and his team urge researchers to choose based on an effect's theoretical importance, its value to society, and the existing confidence in the effect.

They remind readers that a perfect replication is of course impossible – replications inevitably will happen at a different time, probably in a different place, and almost certainly with different participants. While they recommend contacting the authors of the original study in order to replicate the original methodology as closely as possible, they also point out that a replication in a different time or place may actually require a change in methodology in order to mimic the context of the original. For instance, a study originally conducted in the US involving baseballs might do well to switch to cricket balls if replicated in the UK. They also note that something as seemingly innocuous as the brand of the computer monitor used to display stimuli could have a bearing on the results.

Another important point they make is that replicators should set out to measure any factors that they anticipate may cause the new findings to deviate from the original. Not only will this help achieve a successful replication, it will also further scientific understanding by establishing boundary conditions for the original effect.
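As an illustration of what that might look like in practice (this is my sketch, not the authors' own analysis), a measured contextual factor can be entered as a moderator of the replicated effect. The variable names and data file below are hypothetical:

```python
# Minimal sketch (not from the paper): testing whether a measured contextual
# factor moderates the replicated effect. Variable and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant: the original manipulation ('condition'),
# the outcome measure, and a factor the replicators anticipated might
# change the effect (e.g. participant pool), here called 'context'.
df = pd.read_csv("replication_data.csv")

model = smf.ols("outcome ~ condition * context", data=df).fit()
print(model.summary())

# A significant condition:context interaction would point to a boundary
# condition on the original effect, rather than a simple failure to replicate.
```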

On their point about having enough statistical power, Brandt and his colleagues urge replicators to err on the side of caution, towards having a larger sample size. Where calculating the necessary statistical power is tricky, they suggest a simple rule of thumb: aim for 2.5 times the sample size of the original study.
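As a rough illustration of both routes – a formal power calculation and the 2.5-times heuristic – here is a short Python sketch using statsmodels. The effect size and the original sample size are invented placeholders, not figures from the paper:

```python
# Rough sketch of a power analysis for a two-group replication attempt.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group sample size needed to detect the original effect (here assumed
# to be Cohen's d = 0.5) with 80% power at alpha = .05.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"n per group for 80% power: {n_per_group:.0f}")

# The authors' rule of thumb when power is hard to calculate:
# aim for 2.5 times the original study's sample size.
original_n = 40  # hypothetical sample size of the original study
print(f"Rule-of-thumb replication n: {2.5 * original_n:.0f}")
```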

It's not enough to say simply that a replication has failed or succeeded, Brandt and colleagues also advise. Instead, replicators should conduct two tests: first, establish whether the effect of interest was statistically significant in the new study; and second, establish whether the findings from the attempted replication differ statistically from the findings of the original. A meta-analysis that combines results from the original study and the replication attempt is also recommended.
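To make the logic concrete, here is a hedged sketch of that kind of analysis, assuming both studies report a correlation coefficient: a significance test of the replication effect, a Fisher z test of whether it differs from the original, and a simple fixed-effect meta-analytic combination. The numbers are invented for illustration and the specific tests are my choice, not prescribed by the paper:

```python
# Sketch: evaluating a replication of a correlational effect.
import numpy as np
from scipy.stats import norm

r_orig, n_orig = 0.35, 40     # hypothetical original study
r_rep,  n_rep  = 0.10, 100    # hypothetical replication attempt

# Fisher r-to-z transform and standard errors
z_orig, z_rep = np.arctanh(r_orig), np.arctanh(r_rep)
se_orig, se_rep = 1 / np.sqrt(n_orig - 3), 1 / np.sqrt(n_rep - 3)

# Test 1: is the effect statistically significant in the new study?
p_rep = 2 * norm.sf(abs(z_rep / se_rep))

# Test 2: does the replication result differ from the original?
z_diff = (z_orig - z_rep) / np.sqrt(se_orig**2 + se_rep**2)
p_diff = 2 * norm.sf(abs(z_diff))

# Fixed-effect meta-analysis (inverse-variance weights) of the two estimates
w = np.array([1 / se_orig**2, 1 / se_rep**2])
z_combined = np.sum(w * np.array([z_orig, z_rep])) / np.sum(w)
r_combined = np.tanh(z_combined)

print(f"replication p = {p_rep:.3f}, difference p = {p_diff:.3f}, "
      f"combined r = {r_combined:.2f}")
```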

"By conducting high-powered replication studies of important findings we can build a cumulative science," the authors conclude. "With our Replication Recipe, we hope to encourage more researchers to conduct convincing replications that contribute to theoretical development, confirmation and disconfirmation."

Further reading

Mark J. Brandt, Hans IJzerman, Ap Dijksterhuis, Frank J. Farach, Jason Geller, Roger Giner-Sorolla, James A. Grange, Marco Perugini, Jeffrey R. Spies, and Anna van 't Veer (2013). The Replication Recipe: What Makes for a Convincing Replication? Journal of Experimental Social Psychology. DOI: 10.1016/j.jesp.2013.10.005