‘Fraud exploits tendencies that serve us well most of the time’
Stephen Lea (Emeritus Professor at the University of Exeter) asks Daniel Simons and Christopher Chabris about their new book, ‘Nobody’s Fool: Why We Get Taken In And What We Can Do About It’.
21 November 2023
I should start by saying that I enjoyed this book tremendously, and it has required a considerable effort of will to formulate some challenges to you!
Your fundamental thesis is that we are susceptible to fraud and other kinds of deception because we, necessarily, rely on heuristics rather than engaging in optimal decision making. I wholly agree with this, not least because I have said it myself on many occasions. However, I'd like to turn your own argument against you, as it were, and ask whether the Habits that you identify as making us vulnerable to deception, and the Hooks that are used to deceive us, might be helping us to accept your argument. So let's look at them in turn together.
Thank you for the kind words about Nobody's Fool. We're glad to hear that it resonates with your own work! It's our perspective as psychologists who primarily study cognitive limitations on the question of why people so often get fooled.
Did we use the tricks we warn about to seduce our readers into accepting our arguments? Inevitably. Even in a 327-page book with 326 footnotes, it's not possible to provide all the relevant information, justify every assertion, uncover all the assumptions, and examine every study.
You'll have to take our word for it that we aren't trying to deceive our readers into believing falsehoods, nor are we running a long con that will culminate with a scam so perfectly designed that it takes in only those people who have read and implemented our anti-fraud tips…
Will do! But I'd like us to examine your Hooks in turn, if that's OK, starting with 'Focus': 'We tend to make decisions using the information before us, ignoring irrelevant or distracting information'. So, what are you leading us to focus on? Evidently, the risks involved in our short cuts to efficient decision-making. And what does that mean we are ignoring?
All deception involves believing something that is untrue, but there are many reasons we could do that. We think heuristics differ from habits. The idea of a heuristic implies an active decision process that's incomplete and non-optimal, but good enough. We only used the word heuristic when referring to what someone else has called a heuristic (e.g. Gigerenzer's recognition heuristic). We instead focus on habits because they do not necessarily involve an active, deliberate process or result in a decision. In our view, habits apply more broadly than heuristics do. In our use of the term, habits are essential features of how we process information, whether we are making decisions or not. We don't actively decide to focus only on the information in front of us and to ignore what we're missing – this is just how cognition works.
Still, our tendency to focus much of the time means that we are often open to deception. The case of shrouded attributes illustrates the difference: When companies make it hard to obtain important information about their products, we lack the information necessary to make an optimal decision (e.g., understanding the total cost of ownership rather than the immediate purchase price). But our tendency to focus on the information we have is not a strategy, shortcut, or rule of thumb for decision making. It's a habit that we don't even realise we're following.
Can we be deceived without any heuristic being involved?
Certainly. Someone stealing our wifi or hacking an otherwise trustworthy vendor to steal our information can result in deception that harms us (e.g. our credit card number is used to buy goods or services fraudulently). But hacks that we are unaware of are not 'scams' in the sense we use in the book. We focused on cases in which our cognitive tendencies can lead us to be directly fooled.
OK, the second Hook is 'Prediction': 'To make sense of our world, we rely on our experiences to predict what will happen next'. So, how are you building on our experiences rather than letting us be open to surprise?
Effective communication depends on drawing connections to what people know. We tried to illustrate how deception relies on habits and hooks by choosing examples that would have features that were familiar enough to be understandable to non-experts. The challenge is ensuring that the examples we picked are 'representative'. If we selected atypical examples only because they supported a point, we would be deceiving our readers (a common practice for some writers).
Of course, successful narratives also require mystery or unexpected turns. If our examples were all well-known and our arguments entirely expected and obvious, readers wouldn't learn anything from our book. We hope the surprise in this case comes both from learning about new, interesting examples and from gaining insight into how consistently our habits and hooks influence our beliefs and our risk of being deceived.
The third Hook is 'Commitment': 'When we commit to an assumption or belief, we rarely reconsider it later'. So, do you reconsider your core assumption, that it is habits and heuristics that expose us to deception?
We reconsidered many of our assumptions while working on the book, and we revised our framework several times… giving ourselves more work in the process, but hopefully making it more useful to readers. Undoubtedly there are other frameworks that could address the question of why we are vulnerable to deception, and we are open to criticism of our own approach. Still, it's hard to imagine a cognitive framework for deception that doesn't involve the scammers exploiting habits that usually make sense. If people are actually aware of how their habits make them vulnerable and rationally accept that they are taking a risk when they hand over their savings to a Ponzi schemer or when they accept what a politician tells them, they should be less surprised and regretful when they find out they have been duped.
'Efficiency': Shouldn't we always be challenging you with follow-up questions about the case-histories you give us? And what should those follow-up questions be?
Should our readers ask more questions about the stories we present? Of course. But they should come up with their own questions! We wanted to present more details about many of these amazing cases, and to include more examples, but as an editor once told us, the challenge in writing for a general audience is to keep the ship moving from each port of call to the next, not to take extended shore leaves.
Good questions to ask in general might be 'is this case typical?' or 'how often does that really happen?' Those are questions we asked ourselves while thinking about our framework. They are hard to answer definitively because we only learn about the existence of a fraud when someone notices and reports it. And people can be reluctant to admit having been fooled. In some cases, they might not even know! Because investigators and researchers only see a selected sample of frauds – those that have been caught – it is hard to know the actual prevalence of fraud in any domain (including, unfortunately, in science).
'Consistency': Shouldn't there be some outliers – some frauds or other deceptions that do not depend on any of the habits or hooks you identify? What would those look like?
This is a fair point. We selected examples that illustrate our key arguments, and perhaps we tried too hard to fit all of our examples into our framework and left out examples that didn't fit. We tried to select stories that were representative of common patterns of deception, typical of the techniques used, and potentially interesting and unfamiliar to our readers. Still, we left hundreds of stories on the cutting room floor in going from our research notes to a final book.
It's an interesting thought experiment to imagine frauds that work by getting people to not apply their usual cognitive tendencies. Is it even possible to develop a fraud that relies on people seeking and obtaining all of the relevant information? In general, though, it's hard for us to imagine a scam that relies on none of our habits and hooks (and not even truth bias), but maybe a reader of this interview will suggest some!
'Familiarity': Following some of the reasoning in your book, would I have found reading it more persuasive than if I'd read eight separate journal articles, say, by different people, arguing in different ways for the same conclusion as you? Am I right to be more persuaded by your repeated arguments than I would be by eight independent witnesses?
We don't doubt that you might have found eight journal articles by eight different authors more persuasive than our book. All else equal, multiple independent sources are good – emphasis on independent. But scammers may realise this and arrange things to make it seem that multiple independent sources have vetted or validated something, when that's not the case at all (see the Theranos case, for example, or the many companies whose websites list dozens of references to the scientific literature that don't actually support their dubious claims).
We agree that a consistency of voice and argument works to persuade in part by promoting a disarming sense of familiarity. Short of producing an edited volume rather than writing our own book, we're not sure what we could have done to avoid this sort of persuasive 'trick'. And we're glad to hear you weren't taken in.
'Precision': 'People treat precision as a sign of rigor and realism and vagueness as a sign of evasion'. So, what are you doing when you refer, for example, to the 7367 participants tested by Meyer et al. (2015) on p.89? It's not untrue, but are you writing that way because it's more persuasive than saying 'over 7000'?
Precise numbers are not always deceptive, and a correct precise statement is valuable. In fact, for all of the hooks, we find the information appealing for good reasons: Usually, hooks like precision, consistency, and familiarity are associated with understanding or trustworthiness. Only when people use them in the absence of an underlying ground truth do they become deceptive. Deceivers can make overly precise or concrete claims without any ground truth, so we should be careful not to accept them without checking first (at least when the stakes are high). Hopefully, if readers do check our precise statements, they will find them to be correct!
There is a more general issue with pseudoprecision – giving detail that is really no more than noise, such as reporting average response times in fractions of a millisecond when the recording instrument can't be that precise. We didn't address this form of false precision in as much detail, although we did discuss the GRIM test that detects impossible percentages. By reporting exactly 7367 participants in the Meyer et al. study, we might have made the example more persuasive than others that we rounded, but accurate precision isn't deceptive, even if it is more convincing.
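The logic behind the GRIM test is simple enough to sketch: a mean of n integer responses, when multiplied back out by n, must correspond to an integer total, so some reported means are arithmetically impossible for a given sample size. Here is a minimal illustration in Python; the function name and interface are our own, not taken from the book or from the original GRIM paper.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM check: could a mean of n integer values, rounded to
    `decimals` places, equal reported_mean?"""
    # The sum of n integer responses is itself an integer, so only
    # integer totals near reported_mean * n need to be checked.
    target = reported_mean * n
    for total in (int(target) - 1, int(target), int(target) + 1):
        if abs(round(total / n, decimals) - reported_mean) < 10 ** -(decimals + 3):
            return True
    return False

# A mean of 5.18 from 28 integer responses is possible (145 / 28 ≈ 5.1786),
# but 5.19 is not: no integer total divided by 28 rounds to 5.19.
print(grim_consistent(5.18, 28))  # True
print(grim_consistent(5.19, 28))  # False
```

Note that with a sample as large as the 7367 participants mentioned above, virtually any reported mean passes this granularity check, so the test is mainly informative for small samples.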
'Potency': '… We find potency unduly persuasive, when in reality, we should be wary whenever anyone claims that a big effect can come from a small cause'. So, isn't the difference between using heuristics and using optimal decision making often quite small in practice – too small to justify checking everything, as you argue in your conclusion?
If we had argued that all fraud can be explained by the proverbial one weird trick, you would be right to be wary. Our explanatory framework is a collection of cognitive tendencies, not a single overarching mechanism. There are even more factors that explain the prevalence of fraud independent of our own foibles. We deliberately did not address the many economic, emotional, social, and historical forces that contribute. Instead, we focused on the factors that individual people can control if they know to look for them.
We also did not claim that we can eliminate all fraud through the 'one simple trick' of noticing our habits and hooks. That would be a deceptive, overly potent promise. Moreover, applying our suggested anti-deception strategies to every interaction and decision wouldn't be healthy. A hypertrophied skepticism is not a solution because it would render us needlessly critical in entirely benign contexts. Still, a major reason why fraud works is that it exploits tendencies that serve us well most of the time. The toughest challenge may be spotting the situations and decisions that pose a major risk of negative consequences that we would want to avoid if we could.
Nobody's Fool: Why We Get Taken In and What We Can Do About It by Daniel Simons & Christopher Chabris (£25, Basic Books) is available now.