Fact checks work — whether you believe in them or not
Warnings about potentially false claims work to reduce belief in and sharing of misinformation, even if we don't trust fact-checkers, according to new research.
15 October 2024
With the rise of fake news, many of us have become increasingly anxious about whether the information we read online is legitimate. To address this, many online news platforms and social media sites now work with third-party fact-checking organisations, in an attempt to reassure readers that what they are reading is not misinformation. But what if someone doesn't trust fact-checkers either?
According to a new study, that might not matter as much as you'd think. Writing in Nature Human Behaviour, two Massachusetts Institute of Technology researchers find that warning labels on social media posts can slow the spread of misinformation even among people who have little or no trust in fact-checkers.
After an initial study, designed to establish a scale for measuring trust in fact-checking, the team performed a meta-analysis encompassing data from 21 studies, which included a total of 14,133 adults with social media accounts. First, participants answered questions on how much they trusted fact-checking organisations, before being randomly split into two groups: one, a control, saw headlines with no fact-checking labels, while the other was presented with the same headlines, but with warnings on those that were potentially false.
To ensure political balance, some headlines were more supportive of Democrats, others Republicans, and some were neutral. Participants were then asked how accurate they thought each headline was, and how likely they would be to share it.
Overall, warning labels reduced belief in false headlines by nearly 30%, indicating that they do indeed work as intended. There were, however, some differences between individuals. People who trusted fact-checkers more were, fairly unsurprisingly, more likely to believe warnings; for each 1-point increase on the trust in fact-checking scale developed by the team, there was a 22% increase in how well the labels reduced belief in the presented fake news.
Yet even among those with the least trust, labels still worked. When the team looked at the participants with the lowest 25% of scores on the trust scale, warning labels reduced belief in fake headlines by around 20%. Among those with trust scores of zero, indicating no trust whatsoever in fact-checking organisations, labels still reduced belief by 12.9%.
A similar pattern also emerged when it came to sharing content. Overall, warning labels reduced the intention to share potentially false information by nearly 25%; for those with the lowest levels of trust in fact-checking organisations, this figure was 16.7%. So, while content warning labels may not have the same level of efficacy across the board, they do go some way to reducing misinformation, even for fact-checking sceptics.
The team notes that these results illustrate an instance of a "discrepancy between attitudes and actual behaviours": people might say they don't trust fact-checkers but, nonetheless, changed their behaviour when something was flagged as false. They suggest two reasons this may be the case: first, that people may believe a particular warning label whilst remaining generally sceptical of fact-checkers; and second, that people worry about reputational damage, making them less likely to share content labelled as misinformation.
As for implications from the study, the team stresses that continued investment in fact-checking organisations is vital. Their findings lend strength to the value of such efforts — no matter how much they might be decried.
Read the paper in full:
Martel, C., & Rand, D. G. (2024). Fact-checker warning labels are effective even for those who distrust fact-checkers. Nature Human Behaviour. https://doi.org/10.1038/s41562-024-01973-x