Five reasons it’s so hard to think like a scientist
20 June 2017
Thinking like a scientist is really hard, even for scientists. It requires putting aside your own prior beliefs, evaluating the quality and meaning of the evidence before you, and weighing it in the context of earlier findings. But parking your own agenda and staying objective is not the human way.
Consider that even though scientific evidence overwhelmingly supports the theory of evolution, a third of Americans think the theory is "absolutely false". Similarly, the overwhelming scientific consensus is that human activity has contributed to climate change, yet around a third of Americans doubt it.
We Brits are just as blinkered. In a recent survey, over 96 per cent of teachers here said they believed pupils learn better when taught via their preferred learning style, even though scientific support for the concept is virtually non-existent. Why is it so hard to think like a scientist? In a new chapter in the Psychology of Learning and Motivation book series, Priti Shah at the University of Michigan and her colleagues have taken a detailed look at the reasons, and here I've pulled out five key insights:
We're swayed by anecdotes
When making everyday decisions, such as whether to begin a new treatment or sign up to a particular class at uni, most of us are influenced more powerfully by personal testimony from a single person than by impersonal ratings or outcomes averaged across many people. This is the power of anecdote to dull our critical faculties. In a study published last year, Fernando Rodriguez and his colleagues asked dozens of students to evaluate scientific news reports that drew inappropriate conclusions from weak evidence. Some of the reports opened with an anecdote supporting the inappropriate conclusion, while others lacked an anecdote and acted as a control condition. Regardless of their level of university training or knowledge of scientific concepts, the students were less competent at critically evaluating the reports when they opened with an anecdote. "Anecdotal stories can undermine our ability to make scientifically driven judgements in real-world contexts," the researchers said. Of course much health and science news in the mainstream media is delivered via anecdotes, increasing the likelihood that news consumers will swallow any claims whole.
We're overconfident
Confronted with a scientific claim, another reason many of us find it hard to reflect on it scientifically is that we overestimate our comprehension of the science. A study from 2003 asked hundreds of university students to read several science news stories, interpret them and rate their own understanding. The students made many interpretative errors – for example, confusing correlation with causation – even though they thought they had a good understanding. This is redolent of a survey from the 1980s of thousands of British and American citizens: nearly 60 per cent stated they were moderately or very well-informed about new science findings, and yet far fewer were able to answer easy questions about elementary science. Part of the problem seems to be that we infer our understanding of scientific text from how well we have comprehended the language used. This means that popular science stories written in layman's language can contribute to false confidence. This "fluency bias" can also apply to science lectures: a recent study found that students overestimated the knowledge they'd derived from a science lecture when it was delivered by an engaging speaker.
We're biased by our prior beliefs
This obstacle to scientific objectivity was demonstrated by a now-classic study from the 1970s in which participants were asked to evaluate scientific research that either supported or conflicted with their prior beliefs. For instance, one of the to-be-evaluated studies supposedly showed that murder rates tended to be lower in US states with the death penalty. Participants demonstrated an obvious bias in their evaluations. For example, if they supported capital punishment, they tended to evaluate the death penalty study favourably, whereas if they were against capital punishment, they were more likely to see the study's flaws. Scientific skills offer little protection against this bias; in fact, they can compound it. A 2013 study asked participants to evaluate a piece of research on gun control. Participants with greater numeracy skills were especially biased: if the findings supported their existing beliefs, they were generous in their evaluation, but if the findings went against their beliefs, they used their skills to (in the words of Shah et al) "slam" the findings – a phenomenon dubbed "identity-protective cognition".
We're seduced by graphs, formulas and meaningless neuroscience
It doesn't take a lot to dazzle the average newspaper or magazine reader using the superficial props of science, be that formulas, graphics or jargon. Consider a study due for publication soon (Ibrahim et al, cited in the new chapter): researchers asked their participants to consider a news story about a correlational study into genetically modified foods that was either consistent with the bulk of past research showing that they are safe, or inconsistent with it, suggesting that they are harmful. Additionally, the story either was or was not accompanied by a scatterplot of the new findings. When the news story included a graphic visualisation of the correlational evidence that was inconsistent with the weight of past research (i.e. it implied a possibility of harm), participants were far more likely to interpret the new evidence as showing that genetically modified foods cause harm than if they had read the same story without a graphic. "This is especially worrisome," write Shah et al, "since it demonstrates how easily people can be convinced by new data, regardless of the actual scientific merit of the result." Similar research into readers' critical skills has shown that they are blinded in a comparable manner by gratuitous neuroscience jargon and meaningless formulas.
Being smart isn't enough
Even expert researchers suffer from the human foibles that undermine scientific thinking. Their critical faculties are contaminated by their agenda, by their ultimate motives for doing their experiments. This is why the open science revolution occurring in psychology is so important: when researchers make their methods and hypotheses transparent, and they pre-register their studies, it makes it less likely that they will be diverted, or even corrupted, by confirmation bias (seeking out evidence to support their existing beliefs).
Take the example of systematic reviews in psychotherapy research: a recent analysis found that the conclusions of many are spun in a way that supports the researchers' own biases. Other times, the whole scientific publishing community, from journal editors down to science journalists, seems to switch off its critical faculties because it happens to agree with the message to emerge from a piece of research.
In their chapter, Shah and her colleagues point out that raw cognitive ability (IQ) is not a good predictor of a person's ability to think like a scientist. More relevant is mental attitude, such as a person's "need for cognition" and their ability or motivation to override gut instinct and reflect deeply. On a positive note, these mental dispositions may be more malleable, that is, more trainable, than basic intelligence. But we'll need plenty of solid evidence to test that.