
How to evaluate an argument like a scientist

Participants evaluated arguments based on their internal consistency and quality rather than on their own prior knowledge or opinion.

11 January 2016

By Christian Jarrett

From the pontifications of the politician on the nightly news, to the latest tabloid health scare, we're constantly bombarded by other people's arguments – their attempts to make a particular claim based on some kind of evidence. How best to evaluate all these assertions and counter-assertions? Some insights come from a new study in the journal Thinking & Reasoning that compared the argument evaluation strategies of scientists (advanced doctoral students and post-docs in psychology) with those used by first-year undergrad psychology students.

Sarah von der Mühlen and her colleagues presented the 20 undergrads and 20 psychologists with two passages of text, approximately 400 words long, about smoking and addiction, each containing a mix of plausible and implausible arguments (note that the implausible arguments were grammatically and superficially sound; their weaknesses lay in the reasoning itself).

There were several elements to the task: All the participants were asked to identify the different components of the arguments and to judge the plausibility of the arguments. They were specifically told to evaluate the arguments based on their internal consistency and quality, not based on their own prior knowledge or opinion. The participants were also interviewed afterwards about what they'd thought of the task, the strategies they'd used to evaluate the arguments, and whether the arguments contained any of a list of fallacies, such as being circular. For one of the texts, the participants were asked to speak their thoughts out loud as they evaluated the arguments, granting the researchers immediate insight into their evaluation strategies.

As you might expect, the psychologists were better than the students at judging the plausibility of the arguments (achieving roughly 80 per cent vs. 70 per cent accuracy). Their advantage was greatest when it came to spotting weak or implausible arguments (they caught nearly 80 per cent of these vs. 60 per cent caught by the students). The psychologists, who took more time to judge plausibility, were also better at breaking down the structure of the arguments, especially at recognising what's known as the argument "warrant" – the link made between the claim and the evidence cited to support that claim.

From analysing the participants' out-loud thoughts and their comments at interview, the researchers established that at least part of the psychologists' advantage came from actually following the instructions: far more often than the students (over 40 per cent of the time vs. around 12 per cent), they made their judgments by considering the internal consistency of the arguments and by checking for logical fallacies, such as being circular, containing a contradiction, using a wrong example, citing a false dichotomy, or overgeneralising. By contrast, the students more often relied on their intuition (approximately 43 vs. 27 per cent of the time), as revealed by comments like "I don't know why, but that just doesn't sound plausible to me", and on their prior opinions or knowledge.

Psychologists and other scientists aren't usually given formal training in argument logic and analysis, but the researchers think they probably pick up a lot of relevant analytical skills through their training and the social aspects of being a scientist. Further analysis suggested that a greater awareness of the formal structure of arguments, and of the range of argument fallacies, helped the psychologists better evaluate the arguments used in this study. However, bear in mind that the study was cross-sectional, so we can't be sure this knowledge caused their better performance – for example, perhaps being the kind of person who takes on post-doctoral science studies makes you better at judging arguments, and/or maybe the psychologists were simply more motivated to excel at the task and follow the instructions.

Another limitation of this research is that the students and psychologists were assessing arguments in a context at least partly related to their domain of expertise or study (though note that no prior knowledge was required to judge the plausibility of the arguments). It would be interesting to know how well the psychologists' argument evaluation skills would extend to other topics. For now, though, what this research reveals is that when it comes to evaluating arguments, people find it very difficult to put aside their gut instincts and their prior opinions and knowledge, and to judge arguments in a logical way, based on their actual quality and coherence. Although we think of scientists as highly knowledgeable experts, their greater skill at evaluating arguments actually seems to come from their ability to set aside what they know and to judge an argument on its merits.

Further reading

von der Mühlen, S., Richter, T., Schmid, S., Schmidt, E., & Berthold, K. (2015). Judging the plausibility of arguments in scientific texts: A student–scientist comparison. Thinking & Reasoning, 1–29. DOI: 10.1080/13546783.2015.1127289