
‘Grimpact’: psychological researchers should do more to prevent widespread harm

Researchers carefully evaluate ethics for study participants, but Alon Zivony argues we need to consider wider guidelines for socially responsible science.

17 May 2024

In a recent study, US researchers claimed to have found that Black and Hispanic Americans are not more likely to be fatally shot by police officers than White Americans. Unsurprisingly, the study got a lot of attention, with over 100 news outlets covering it and millions of people discussing it on social media. Meanwhile, scientists criticised its analyses, arguing they were so flawed that they invalidated the conclusions entirely. At first, the authors rejected the criticisms.

But then, something almost unprecedented happened: in response to the public debate, the authors decided to retract their paper due to 'the continued use' of the work as 'support for the idea that there are no racial biases in fatal shootings, or policing in general'. In other words, this highly visible paper was retracted not because of flaws in the methodology, but because of ethical concerns about its adverse impacts on society.

Questions about ethics have always been part of science. From the Hippocratic Oath to the Declaration of Helsinki, scientists have debated how scientific freedoms should be balanced against the wellbeing of individuals affected by research. Ethics boards worldwide regulate scientific studies to make sure that as little harm as possible is caused to the people participating in them. Recently, more and more scientists have called for more extensive ethical considerations. They argue that science doesn't only directly affect participants but can also negatively impact society, or, to use the term coined by Gemma Derrick and Paul Benneworth, it can have 'grimpacts'.

Although some recent papers aim to assist researchers in characterising or reducing grimpacts, these harms remain a challenging problem. Ethics boards do not regulate grimpacts, and most researchers get no training on identifying or addressing them. It is often impossible to reach a consensus on what constitutes a grimpact. Worse, even when there is broad agreement that a study can result in grimpacts, scientific institutions have limited options for addressing them.

Harm to a group

This situation creates a paradox: harm to the few is strictly regulated, but harm to the many is not. Let me give you two hypothetical scenarios to illustrate the point. A researcher conducts a study that offers monetary incentives based on performance, and invites Jewish people from the community. The researcher tells the participants: 'I'm assuming that being Jewish, you'll be particularly motivated to perform well, you know, because Jewish people are more driven by monetary rewards.' 

This statement refers to the harmful stereotype that Jewish people are inherently greedy, which is often used to justify antisemitism. In this case, the participants would have every right to file a complaint with the departmental ethics board (as a Jewish person, I surely would), and they would have a strong case. Ethical guidelines dictate that researchers must treat participants with respect and avoid causing them mental distress.

Now let's imagine a different scenario where the researcher treats the Jewish participants with respect, but then publishes an article concluding that 'Jewish people are more driven by monetary rewards'. In this case, the researcher will most likely cause harm to a much larger group of people. Yet, since the researcher did not disrespect any of the participants during the study, the university ethics board and the paper's publisher can do very little to address complaints arising from this work.

This example may seem extreme, but a recent string of retracted papers suggests that such situations are disturbingly common. If you've been following scientists on social media, you may have come across one or more of the following studies: a study about religiosity, intelligence, and violence that relied on an assumption that the average IQ in some countries in Africa falls below the threshold for intellectual disability; a study labelling women surgeons who post personal photos on social media as unprofessional; and a study suggesting people with a higher BMI are dishonest.

Scientists and the general public criticised these papers for promoting stigmatising or harmful ideas that can negatively impact the affected groups. Following the criticisms, all of these papers were retracted (the details behind these retractions are covered by the excellent website Retraction Watch). But why were these papers retracted? Surely the editorial teams of these journals were aware of the criticisms, and yet there is no mention of the studies' potential for harm in the retraction notices. Instead, the only justifications given for the retractions were the studies' methodological flaws. The reason for the omission is simple: the guidelines that regulate the conditions for retraction (those of COPE, the Committee on Publication Ethics) do not mention negative impacts.

Therefore, editors cannot rely on such reasons to justify a retraction. Had there been nothing wrong with these papers' methods, the editors' hands would have been tied, no matter how harmful the ideas or policies the papers promoted.

The Nature Human Behaviour controversy

Science can be used to promote harmful ideas, sometimes deliberately and cynically. Many believe journals and other scientific institutions should have tools to deal with such cases. However, agreeing on which cases should be addressed and what actions should be taken is not so easy, as a recent controversy has shown.

Recently, the prestigious journal Nature Human Behaviour (NHB) published an editorial outlining a new ethical guideline for scientists seeking to publish in NHB and for editors (in NHB and beyond) tasked with deciding the fate of potentially harmful papers. The NHB guideline extends the principle of harm reduction from research participants to all those who may be indirectly impacted by a study.

Specifically, the guideline aims to protect groups from harm resulting from stigmatisation and discrimination, and from the promotion of harmful policies. Editors who adhere to this guideline can ask authors of a potentially harmful paper to use more cautious language. They may also invite reviewers to carefully examine the ethical aspects of the research to provide advice on these issues. Importantly, editors have the discretion to require a higher evidentiary threshold for claims that may be harmful.

Insisting on scientific rigour for studies with a potential for high impact seems like common sense to me. Most of us would agree that before a new over-the-counter drug is released for public consumption, it should be thoroughly studied for its effects and side effects. After all, we cannot allow people to get hurt because of a single sloppy study. Similarly, it seems prudent to ensure that the findings of a study with significant social impact are actually correct and communicated in a way that reduces its potential for harm.

So, what is so controversial about the NHB guideline? Its most contentious suggestion is that, to prevent negative indirect impacts, some studies should not be published (or should be retracted) even if they are methodologically sound. This suggestion drew harsh criticism, with some denouncing the guideline as a corruption of science.

Critics of the NHB guideline raise difficult questions. What happens if a study accidentally finds a result that some people find stigmatising? Should the publication of a methodologically sound study be prevented because it can be potentially harmful? In a second editorial, the NHB editors address this point, saying that the guideline will be used judiciously and will not be used to suppress controversial or offensive views. Even so, it remains unclear what criteria can be used to decide what constitutes harm. The NHB guideline is vague on this issue, leaving open the possibility of political bias and unfair treatment of studies whose conclusions do not align with the editors' political views.

The guideline's vagueness also raises the possibility of retractions for papers that don't align with popular political views. If a paper gets negative public attention after publication, editors may feel pressured to retract it. Arguably, they can always find reasons why the paper was ethically flawed in the first place. Altogether, critics argue that the NHB guideline is contrary to the principle of impartiality in science. In turn, if people begin to see science as a tool for advancing political agendas, trust in science institutions may erode, ultimately harming the scientific community and society as a whole.

Despite some inflammatory rhetoric, the NHB guideline's critics raise valid concerns. Let me add one more. I think the guideline places too heavy a burden on individual scientists, rather than on scientific institutions. Currently, most scientists get no education on the broad ethical implications of their research, so it is unrealistic to expect them to make such considerations. While editors can help guide researchers during the review process and in how they communicate their results, these considerations are best made earlier, when the study is being designed. In my opinion, promoting broad ethical considerations in science requires much more support from universities and funding agencies. Their involvement is crucial for any changes to our scientific culture to take root.

What should we do?

The NHB controversy shows that more work is needed before we can agree on what regulations should be imposed on scientists in the name of broad ethical considerations. Maybe we never will. So, what should we do in the meantime? Recently, my colleagues and I published a paper that tries to address this question. We argued that while placing ethical responsibility on individual researchers is a difficult task, we can still take action to improve our decision-making when it comes to social responsibility.

Many scientists want to be more ethical in their research but often lack the training to make such considerations. To that end, we suggested ten relatively straightforward recommendations that follow the life cycle of a study, from inception to publication. Importantly, we don't view our recommendations as a prescriptive code of conduct. Instead, we offer them as aids for scientists who wish to be more socially responsible but are unsure how to go about it. Here are a few of our suggestions (see Zivony et al., 2023, in the key sources for the complete list).

Many of the obvious problems that scientists face when it comes to social issues are caused by a lack of knowledge. When studying a social group that we don't belong to, or an issue that doesn't directly affect us, we may overlook important information that is common knowledge to those we are studying. Even worse, we often overestimate how informed we are. Therefore, we recommend seeking out the perspectives of individuals who are part of that group or may be indirectly impacted by the study.

This perspective is especially useful if such advice is sought early in the planning stage and if it is provided by scientists who are part of the relevant group. 'Insider' insight can help bridge our knowledge gap, enhance our understanding of potential impacts, and improve our study's design to address potential limitations before we even start collecting data. Of course, being part of a social group doesn't automatically grant you access to all relevant knowledge, so it's always a good idea to seek multiple perspectives when studying a sensitive issue.

Another recommendation is to learn about and address the social context relevant to our study. This is important during the planning stage and when communicating what we found. Without properly considering the social context, we invite incomplete or even entirely incorrect interpretations of our results. 

For example, suppose we conduct a study that compares different social groups. It may be a good idea to provide context by highlighting how these groups differ in terms of socioeconomic factors, access to services, discrimination, and other relevant variables. Failing to do so can lead people to mistakenly believe that any differences observed are inherent and unchangeable rather than a reflection of a complex interplay of factors.

A third theme of our recommendations centres on scientific rigour. As I said earlier, all scientific studies should be rigorous, but rigour is especially important for studies that could cause harm. There are many ways to enhance the rigour of our research: preregistration, addressing limitations in advance, accurate reporting, transparency, and insisting on a thorough peer-review process. By implementing these measures, we can minimise the chances that our findings would be unreliable or invalid, which could negatively affect people's lives or result in inadequate policies.

Unfortunately, adhering to these recommendations may come at a cost. In the current hyper-competitive academic climate, researchers (especially early-career researchers) are pressured to publish as many high-impact papers as possible to secure funding and a steady job. To publish in prestigious journals, researchers are pushed to pursue flashy or controversial findings and to overstate the reliability of their conclusions. Meanwhile, ensuring that our studies are socially responsible takes time and effort that might not be rewarded in the current academic culture. Addressing these issues requires structural changes in academia.

Nevertheless, I hope that our recommendations can help researchers consider the potential grimpacts of their work and help them take steps to become more socially responsible.

The pursuit of empirical truths and better theories is essential for both scientific and social progress. As such, if a study is methodologically sound, the mere potential for its misuse might not provide a strong-enough justification to prevent its publication. At the same time, ethics and science are not in opposition to each other, and there is always more we can do to be more socially responsible. In that spirit, I view the NHB guideline as a step in the right direction. It may have been clumsy, but first steps often are. Perhaps the best we can do for now is further the discussion by taking a few more clumsy steps.

Dr Alon Zivony is a lecturer and researcher in the Department of Psychology at the University of Sheffield.

Key sources

Clark, C.J., Winegard, B.M., Beardslee, J., et al. (2020). RETRACTED: Declines in religiosity predict increases in violent crime—but not among countries with relatively high average IQ. Psychological Science, 31(2), 170-183.
Derrick, G.E., Faria, R., Benneworth, P., et al. (2018, November). Towards characterising negative impact: Introducing Grimpact. In 23rd International Conference on Science and Technology Indicators (pp. 1199-1213). Leiden University, CWTS.
Hardouin, S., Cheng, T.W., Mitchell, E.L., et al. (2020). RETRACTED: Prevalence of unprofessional social media content among young vascular surgeons. Journal of Vascular Surgery, 72(2), 667-671.
Nature Human Behaviour Editorial (2022). Science must respect the dignity and rights of all humans. Nature Human Behaviour, 6, 1029–1031. 
Nature Human Behaviour Editorial (2022). Why and how science should respect the dignity and rights of all humans. Nature Human Behaviour, 6, 1321–1323.
Polizzi di Sorrentino, E., Herrmann, B., & Villeval, M.C. (2020). RETRACTED: Dishonesty is more affected by BMI status than by short-term changes in glucose. Scientific Reports, 10(1), 12170.
Zivony, A., Kardosh, R., Timmins, L., & Reggev, N. (2023). Ten simple rules for socially responsible science. PLOS Computational Biology, 19(3), e1010954.