Instilling scientific rigour at the grassroots
A letter from our March edition advocates consortium-based undergraduate projects.
08 February 2016
There is increasing awareness of the problem of unreliable findings across social, psychological and biomedical research. The 'publish or perish' culture, and the bias towards generating novelty and positive results, may incentivise running multiple small studies measuring multiple outcomes. This, combined with flexible analytical procedures, can generate a large number of positive results, but many will be false positives. These positive results are disproportionately rewarded with publication, potentially leading to grant funding and career advancement. Current incentive structures therefore perpetuate poor practice.
Changing these incentives requires a cultural shift in both thinking and practice. Improved doctoral and postdoctoral research methods training is vital (Munafò et al., 2014). However, changing scientific culture can begin at the undergraduate level, instilling the principles of transparency and scientific rigour at the grassroots.
British undergraduate psychology courses have an assessed research component. Given the timescale and resources available, student projects are often small, suffering from many of the associated problems, such as low power to detect genuine effects, and increased likelihood of finding false ones (Button et al., 2013; Ioannidis, 2005). The sheer number of these projects, coupled with the potential for undisclosed analytic flexibility (Simmons et al., 2011), means that many student projects will generate positive but unreliable findings. If these are published, the student will be at a career advantage, allowing the culture of rewarding chance results over robust methods to take root.
Potential solutions pioneered in clinical trials include pre-registration of study protocols, transparent reporting of methods and results, and designing studies with sufficient statistical power. However, some of these (e.g. statistical power) require resources beyond those available for the typical student project.
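As a rough illustration of the scale involved, consider a simple two-group comparison. The sketch below, in Python, assumes an illustrative small-to-medium effect (Cohen's d = 0.3), a 5 per cent significance level and the conventional 80 per cent power target; these figures are chosen for illustration rather than drawn from the studies cited.

```python
# Illustrative only: participants needed per group for an independent-samples
# t-test, assuming Cohen's d = 0.3, alpha = .05 and 80% power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")  # ~175, i.e. ~350 in total
```

A total sample of around 350 participants is typically well beyond what a single undergraduate project can recruit within one academic year.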
A solution widely used in genetics is collaboration (Munafò & Flint, 2014). Individual student assessment and limited access to populations of interest may hinder extensive collaboration within a university, but it could be achieved across universities. As part of a significant collaborative effort, students would benefit from sharing and learning best practice through experience, whilst contributing to a genuinely valuable piece of research. Academics would benefit from aligning research teaching with practice. We acknowledge that many academics already achieve this by embedding student projects into ongoing larger studies. However, such practice is limited by the availability of suitable larger studies and departmental policies.
Drawing on best practices from clinical trials and genetic consortia, psychologists from the universities of Bath, Bristol, Cardiff and Exeter are assessing the feasibility of an innovative consortium-based approach to undergraduate projects, to improve training and research quality.
In brief, academics and their students form a consortium. The research question, protocol and analysis plan are developed collaboratively, publicly pre-registered prior to data collection, and rolled out across the participating centres. Consortium meetings before and after data collection are carefully designed to integrate training with opportunities for creative input. For example, at the post-data meeting, the students present their dissertation results based solely on the data from their centre. The academics subsequently present the pooled analysis, facilitating a discussion of key principles such as sampling variation and site-specific effects, and illustrating how pooling resources increases both statistical power and precision. Conclusions are mutually agreed in preparation for wider dissemination, using inclusive authorship conventions adopted by genetic consortia.
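To make the pooled-analysis point concrete, here is a short sketch under assumed numbers (a hypothetical 40 participants per group per centre, four centres, and a true effect of d = 0.3; none of these figures come from the feasibility study itself).

```python
# Illustrative only: power and precision for one centre versus the pooled
# sample across four centres, assuming n = 40 per group per centre and d = 0.3.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_single, n_centres = 40, 4

for label, n in [("single centre", n_single), ("pooled", n_single * n_centres)]:
    power = analysis.power(effect_size=0.3, nobs1=n, alpha=0.05)
    se = math.sqrt(2 / n)  # rough standard error of the standardised effect estimate
    print(f"{label}: power ~{power:.2f}, standard error ~{se:.2f}")
```

Under these assumptions, pooling four centres roughly triples the power and halves the standard error of the effect estimate, which is precisely the sampling-variation lesson the post-data meeting is designed to make tangible.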
Consortium-based projects are both flexible and scalable. Following the initial feasibility study, and in line with evidence-based practice, the next step is to conduct a larger trial of the approach to test its effectiveness for improving both training and research quality outcomes. If you are interested in being part of this initiative please contact Dr Katherine Button ([email protected]) for more information.
Katherine S. Button
Department of Psychology, University of Bath
Natalia S. Lawrence
Mood Disorders Centre, University of Exeter
Chris D. Chambers
School of Psychology, Cardiff University
Marcus R. Munafò
School of Experimental Psychology, University of Bristol
References
Button, K.S., Ioannidis, J.P.A., Mokrysz, C. et al. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.
Ioannidis, J.P.A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
Munafò, M.R. & Flint, J. (2014). The genetic architecture of psychophysiological phenotypes. Psychophysiology, 51(12), 1331–1332.
Munafò, M.R., Noble, S., Browne, W.J. et al. (2014). Scientific rigor and the art of motorcycle maintenance. Nature Biotechnology, 32(9), 871–873.
Simmons, J.P., Nelson, L.D. & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.