
Registered reports: what are they, and why should journal authors consider them?
Dr Pamela Jacobsen from the University of Bath explains more about registered reports, which reduce publication bias and improve reproducibility of published studies.
02 May 2024
Picture the scene. After months of planning, gallons of coffee, and weeks of tedious data collection, the big moment has arrived. You execute your pre-specified analysis plan on the beautifully cleaned dataset of your fully powered study. You run the analysis code and peer at the results to find… no statistically significant findings!
If you're thinking this sounds like a science horror story, you're not alone. After all, researchers are rewarded for novel, exciting, ground-breaking results, which tell a nice clean story. Top-ranked journals are full of such studies - despite increasing evidence that such findings often fail to replicate (Ioannidis, 2005), or worse still, are based on questionable research practices (Munafò et al., 2017), including outright fraud in rare cases (Smith, 2006).
The sad truth remains that the rewards for producing robust and reliable findings based on sound methods and analysis are virtually non-existent when those findings are null. This contributes to the so-called 'file-drawer' problem, whereby studies whose findings don't support the experimental hypothesis languish in a filing cabinet (whether virtual or physical) because of the difficulty of bringing them successfully to publication.
This evidently should not be the case; after all, the results are the one thing researchers are not supposed to have direct control over, yet they are often what judgements of a study's worthiness are based on.
One solution to this problem is the use of registered reports, which were first offered as an article type by Cortex in 2012. They were subsequently introduced to the BPS journals, starting with the British Journal of Neuropsychology in 2018 and extending to the remainder of the portfolio in 2020.
The UK Reproducibility Network (UKRN) defines registered reports as 'a type of journal article that involves peer review of the background, study design, methods, and analysis plan (i.e., the stage one manuscript) before data are collected' (Stewart et al., 2020).
Once the study is completed and analysed, the results and discussion sections are added to produce a stage two manuscript, which goes through another round of peer review to ensure compliance with the original plan, including transparent reporting of any deviations from the pre-registered plan where relevant.
Most importantly, having accepted a stage one manuscript (an 'in-principle acceptance'), the journal commits to publishing the stage two manuscript no matter what the findings, so long as the researchers have followed their agreed study plan. The benefits of such a system are immediately apparent in alleviating 'p-value anxiety'. Because the research team know that publication does not depend on the results, there are also reduced incentives to indulge in questionable research practices such as p-hacking (also known as data-dredging, or fishing).
Another important benefit is the opportunity to receive peer review at the time when it is most useful – i.e. before the data have been collected. It's all very well being told at the end of a study that you should have used a different type of experimental stimulus, or included some other measure of a key construct, but these problems are not fixable without access to a handy time machine.
Despite these many benefits, registered reports remain under-used. There are many reasons for this – not least of all, lack of awareness of the format, or lack of knowledge about which journals offer registered reports (a very helpful journal database is kept updated on the Open Science Framework for this very purpose).
Even armed with this knowledge, researchers can be reluctant to try out a registered report due to fear of the unknown, or concerns about the process being unduly cumbersome or time-consuming. I can relate to these understandable concerns, having gone through the process myself for the first time a few years ago.
I was working with some DClinPsy trainees to develop an experimental study investigating mindfulness in a voices simulation study. We submitted a stage one registered report to Psychology and Psychotherapy: Theory, Research and Practice and received a revise-and-resubmit decision within six months. We found the reviewers' comments very helpful and constructive. They led us to make key changes to the experimental induction task and to add measures, including a manipulation check, none of which could have been implemented after the data had been collected.
We submitted the revision in July 2020, and received an in-principle acceptance two months later. You may have spotted that this timeline coincided with a certain global pandemic that caught us all unawares, which led to considerable delays with data collection. However, we persevered and completed data collection two years later in 2022. The process of data analysis went very smoothly as we were able to simply follow our well thought-out (and peer-reviewed) analysis plan.
We then only needed to write up the results and discussion sections, and the stage two manuscript was ready for submission. No further revisions were required, and the paper was promptly accepted for publication in April 2023 (Jones et al., 2023).
Although pandemic-related delays were clearly a confounding factor in this case, my general impression was that, although some work was front-loaded before data collection could begin, overall the publication process took about the same time as (or less than) a traditional, non-registered-report format. I particularly appreciated the shorter lag in bringing the paper to publication once the final manuscript was completed.
I highly recommend trying out a registered report, as the process brings many benefits. Based on my own experience, we felt as a research team that we learnt many helpful things from the process, and it helped us embed open science practices throughout the whole research cycle in an integrated way. Oh, and in case you're wondering, we got null findings!
References:
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124.
Jones, B., Muddle, S., Jenkins, T., Kitapci, N., & Jacobsen, P. (2023). Mindfulness for voices: An experimental analogue study of the effect of manipulating response style to simulated voices in a non-clinical population. Psychology and Psychotherapy: Theory, Research and Practice, 96(3), 778-792.
Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie Du Sert, N., Simonsohn, U., Wagenmakers, E. J., Ware, J. J., & Ioannidis, J. P. A. (2017). A manifesto for reproducible science [Review]. Nature Human Behaviour, 1(1), Article 0021.
Smith, R. (2006). Research misconduct: the poisoning of the well. Journal of the Royal Society of Medicine, 99(5), 232-237.
Stewart, S., Rinke, E., McGarrigle, R., Lynott, D., Lunny, C., Lautarescu, A., Galizzi, M., Farran, E., & Crook, Z. (2020). Pre-registration and Registered Reports: a Primer from UKRN.