‘I took the plunge and chose the risky option’
Lance Workman meets Gerd Gigerenzer, Director of the Max Planck Institute for Human Development and the Harding Center for Risk Literacy in Berlin.
19 November 2015
I'm very interested to learn about your work on decision making, but can I start by going back to a former life? I gather you started out playing banjo in a Dixieland jazz band – how did that come about?
I always loved music, and being a musician helped me become financially independent from my parents at the age of 17. My first instrument was the accordion, then guitar and banjo.
How did you make the transition from successful jazz musician to academic psychologist?
After getting my PhD, I had to decide – should I continue playing music on stage or leave that behind me and aim for an academic career? As a musician, I was raking in good money, much more than an assistant professor earns. Music was the safe option for me, and academia the risky one. At that point it was anyone's guess whether I would ever get a professorship. But I took the plunge and chose the risky option.
I'm glad to see the risky option paid off. You're currently director of the interdisciplinary Center for Adaptive Behavior and Cognition (ABC) at the Max Planck Institute. What sort of questions does this group seek to answer?
We study decision making under uncertainty, that is, in situations where it is impossible to know all options, consequences and probabilities in advance – just like my decision between the world of music and academia. Most psychologists, and almost all behavioural economists, restrict their research to situations in which all of this is known. We try to open decision making up by developing and studying models of heuristics that can be superior to so-called rational models in the real world where uncertainty reigns.
The ABC contains a very wide range of academics – do you ever find you speak a number of different scientific languages and come at things from really quite different angles?
The very idea – and the success – of the ABC Research Group is to bring together young, adventurous scientists from different disciplines to work on the same topic. The group currently consists of about 35 researchers who come from about 10 disciplines. Because everyone looks at the same issue from different angles, we learn from one another and can approach the study of heuristics and rationality with a wide range of useful methodological tools. And last but not least, I learn something new almost every day from my researchers, so I am never bored.
When I interviewed Daniel Kahneman he explained how he sees human heuristics as giving rise to various irrational cognitive biases and weaknesses. You have a different view?
I do. Kahneman studies the deviations of human judgement from rational choice models or logic. In his view, when there is a discrepancy, it's we who should be blamed and never the rational choice model. I make a distinction between situations of 'risk' where we can calculate the best option with certainty (as in monetary gambles) and situations of 'uncertainty', where we do not know all alternatives, consequences and probabilities – such as how to invest your money and whom to trust. In uncertain situations, it is an illusion to believe that the standard models of logic, probability or rational choice can define what a good decision is.
What I have shown is that under uncertainty, simple heuristics can make better predictions than complex statistical models such as multiple regression or other complex 'rational' models. That clashes with Kahneman's belief that heuristics are always second-best. Whereas the heuristics-and-biases programme relied on labels such as 'availability' or 'affect' heuristics, or the near-empty notion of 'System 1 and 2', we used formal models of heuristics to show the effectiveness of simplicity, so-called less-is-more effects. Without such scientifically precise models, the older research programme was not equipped to find these exciting results, nor could it develop a study of ecological rationality that specifies the conditions under which a given heuristic succeeds or fails. After all, we humans are not as dim-witted as the heuristics-and-biases research makes us appear.
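To make 'formal models of heuristics' concrete: one of the best-known models from Gigerenzer's group is take-the-best (Gigerenzer & Goldstein, 1996), which compares two options on one cue at a time, in order of cue validity, and decides on the first cue that discriminates. A minimal Python sketch follows, with the cue values and validity ordering invented for illustration rather than taken from the group's studies:

```python
# Minimal sketch of the take-the-best heuristic (Gigerenzer & Goldstein, 1996):
# to infer which of two objects scores higher on a criterion, check cues one
# at a time in order of validity and decide on the first cue that discriminates.
# The cue values and validity ordering below are invented for illustration.

def take_the_best(obj_a, obj_b, cues_by_validity):
    """Return 'a', 'b', or 'guess' for the inference 'which is larger?'."""
    for cue in cues_by_validity:          # most valid cue first
        va, vb = obj_a.get(cue, 0), obj_b.get(cue, 0)
        if va != vb:                      # first discriminating cue decides
            return 'a' if va > vb else 'b'
    return 'guess'                        # no cue discriminates

# Hypothetical example: which city has more inhabitants?
city_a = {'has_airport': 1, 'is_capital': 0, 'has_university': 1}
city_b = {'has_airport': 1, 'is_capital': 1, 'has_university': 0}
cues = ['is_capital', 'has_airport', 'has_university']  # assumed validity order

print(take_the_best(city_a, city_b, cues))  # -> 'b'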
You relate the current problems we have with handling probabilities to our evolutionary past when we faced quite different challenges?
That is one aspect. But it is too general for making precise predictions. We have derived a number of quite precise insights. For instance, many experiments by others used single-event probabilities to demonstrate human cognitive fallacies, such as the question 'Is Linda more likely to be (1) a bank teller or (2) a bank teller and active in the feminist movement?'. Most people choose (2), which Tversky and Kahneman attributed to not understanding the basics of probability theory (the so-called conjunction fallacy). Yet in statistics proper there is a debate about whether single-event probabilities belong to the domain of probability theory at all. Whatever the answer, one should not immediately put the blame on people without questioning a controversial norm. Ralph Hertwig and I (1999, Journal of Behavioral Decision Making) turned the single-event question into a frequency question ('Think of 100 people like Linda: How many are (1) bank tellers? How many are (2) bank tellers and active in the feminist movement?'), which made the so-called fallacy more or less disappear. Thus, the problem is not that people fail to understand the conjunction rule of probability, but rather that researchers fail to think critically about norms of reasoning. We have documented that many of the so-called biases reflect careless thinking on the part of researchers.
One area you have tried to improve, through the use of natural frequencies, is the way that medical doctors understand test results. Why do you think these highly trained professionals get things wrong and how can they learn to improve?
In the good old days (and even in some backward textbooks today) the message was spread that people cannot think the Bayesian way. Many otherwise competent doctors have considered themselves mathematical duds and avoided statistics when they could. In a 1995 Psychological Review article, Ulrich Hoffrage and I reported the results of our experiments, which showed for the first time that the problem is not simply in people's minds but in the way the information is framed. That is, misunderstandings arise through the use of conditional probabilities. When we replaced these with what we termed 'natural frequencies', much of the confusion evaporated. And we could identify the reason – natural frequencies facilitate Bayesian computations, being a format that corresponds to the way people learned information before the invention of books and probability theory.
We then applied these findings to the medical field, where confusion can have critical effects. First, we showed that most doctors do not understand, say, what the chances are that a woman has breast cancer if she has a positive screening mammogram. For instance, in one study with 160 gynaecologists, only 21 per cent understood that the probability of breast cancer given a positive screening mammogram is only about 1 in 10. Most believed it is between 80 and 90 per cent! As mentioned, the problem is not simply in some biases in doctors' minds, but in the widespread use of conditional probabilities to communicate risk. So we taught doctors how to translate conditional probabilities into natural frequencies. With the help of natural frequencies, 87 per cent of the doctors finally understood the correct numbers.
You published a book recently called Risk Savvy: How to Make Good Decisions, in which you claim anyone can learn to make better decisions for their health, finances, family and business without needing to consult an expert. If you could give me just one tip to improve decision making, what would it be?
Don't buy financial products you don't understand. If everyone on both sides of the Atlantic had followed this rule before the last financial crisis, the results would not have been as devastating. Simple rules can be much more effective than rating agencies' complex and misleading risk calculations and banks' value-at-risk calculations. These calculations border on astrology. I am currently working with the Bank of England on a project, 'simple heuristics for a safer world of finance'.
I'll certainly look out for that! You have conducted a great deal of research in decision making and in cognitive psychology more generally, published widely and headed a number of research centres. I'm wondering if you have any unfulfilled ambitions?
Oh yes, dozens of them. I want to better understand the heuristics in the 'adaptive toolbox' of experts and laypeople, and how they develop over a lifetime and through training. I also want to better understand the ecological rationality of heuristics, that is, to describe the environmental conditions in which a heuristic is better than, say, a complex calculation, and vice versa. This requires a formal study of models of heuristics and environments. And finally, there is a larger dream. I would like to help create a society where most people are risk savvy and can make intelligent choices on their own. We do not need more 'nudging' or blaming… we should instead help people to take their lives into their own hands.