How much are readers misled by headlines that imply correlational findings are causal?
"Being breastfed may make children behave better."
21 April 2017
By Alex Fradera
What do you take from this hypothetical headline: "Reading the Research Digest blog is associated with higher intelligence"? How about this one: "Reading this blog might increase your intelligence"? According to science writing guides like HealthNewsReview.org, taking a correlational finding like the first from a peer-reviewed article and reporting it for the public in the second wording is a sin of exaggeration: it makes the relationship appear more causal than the evidence warrants.
Yet this happens a lot. A 2014 British Medical Journal (BMJ) analysis found these exaggerations to be rife in media coverage of correlational studies, with 81 per cent of news articles committing the sin. Dismayingly, one-third of press releases were also guilty, even though these normally involve editorial input from communications professionals, and often from the scientists themselves, who should really know better. Reading about this, we might conclude that science communicators of all stripes need to get more familiar with best practice for describing causality.
However, the authors of that BMJ analysis began to wonder whether readers actually interpret such headlines literally, or whether they draw their own conclusions. Their research group has now tested this question in a paper in the Journal of Experimental Psychology, and the findings suggest that while science writers need to pick up their game, science-writing guides also have some catching up to do.
Rachel Adams and her colleagues at Cardiff University presented 71 online participants, average age of 27, with real and modified headlines drawn from news stories about science, politics and sport. Take the following examples:
- (A) Being breastfed makes children behave better
- (B) Being breastfed is linked to better behaviour in children
- (C) Being breastfed is associated with better behaviour in children
Style guides place causal claims on a ladder: a simple causal statement (A) sits at the top, ambiguous causal statements like (B) somewhere below, and (C) on the bottom rung, because it makes a correlational rather than a causal claim. But when the participants reviewed a series of these kinds of statements, rating in each case how strongly the headline implied that the first thing caused the second, a different picture emerged. They gave the first type of headline high scores for causality, at 75 or more out of 100, but they scored headlines in the style of B and C similarly, at around the 50/100 mark, suggesting they saw both as implying a moderate amount of causality. This was true across all topics, and was unaffected by the participants' level of science education.
In further experiments the researchers examined the effect of modifying a causal claim with a modal verb – may, can, might or could – e.g. "Being breastfed may make children behave better." Modal verbs introduce some uncertainty, and the style guides recommend them when research demonstrates causality but the claim may not be watertight, for instance when it is based on small or unusual samples. Participants saw "can make" as a fairly strong statement, weaker than a pure causal claim but above all the others, such as "might make" – much as style guides suggest. But whereas the style guides treat "may make" as a stronger causal statement than equivalent headlines using "might" or "could", participants saw these three modifiers as interchangeable and understood them to severely weaken the causal claim, rating them at, or even below, the causality of a purely correlational statement.
Adams' team concluded that readers do not work with a finely graded hierarchy of causality, but with three broad categories: direct cause, can cause, and a large bucket of moderate-cause statements that includes weaker modals, ambiguous statements and even correlation itself. This suggests we should be a bit more forgiving of press offices: when the team re-analysed the 2014 BMJ data, judging exaggeration by how readers were likely to have interpreted the claims, only one-fifth of articles counted as exaggerations. Style guides may want to reconsider their recommendations.
One final point: the research group also looked at the uptake of science coverage – how much a piece was read and shared – and found no evidence that exaggeration did anything to boost it. That's heartening for science writers who would rather tell the truth in an engaging way than resort to hype.