Making up the mind
Chris Frith, winner of the Society’s 2008 Book Award with Making Up the Mind, on how his ideas have developed, and the surprising implications.
14 October 2009
Although I originally trained as a clinical psychologist, in the days when this was a 13-month course leading to a diploma, my subsequent career has been devoted to research, although always in a medical setting. I have been particularly fortunate in that I have worked in multidisciplinary groups. Tea-time conversations, in those days when there was still tea-time, covered everything from synapses and neurotransmitters to consciousness and expressed emotion.
My work on schizophrenia began almost by chance when Hans Eysenck assigned me this topic for the new edition of his Handbook of Abnormal Psychology. My research aimed to understand the typical symptoms associated with this diagnosis in terms of neuropsychology. It took a while to dawn on me that, even as a psychologist interested in behaviour and introspection, I was actually studying how the brain works. Whatever the ultimate causes of schizophrenia, there is a neural basis to symptoms such as hallucinations and delusions. In my book The Cognitive Neuropsychology of Schizophrenia (Frith, 1992) I developed some ideas about the cognitive basis of hallucinations and delusions and speculated on their neural correlates.
At this time I was incredibly fortunate to be given the chance to move to the MRC Cyclotron Unit, then the only place in the UK with access to the newly developed brain-imaging technique of positron emission tomography (PET). Using this technique, to which our group added functional magnetic resonance imaging (fMRI) after moving to the Wellcome Trust Imaging Centre at UCL, I was able to look directly at the neural basis of cognitive processes. (What we measure with PET and fMRI is not neural activity, but changes in blood flow that occur as a result of changes in neural activity.)
These were wonderful new techniques, but using them properly required considerably more thought than was applied initially. It soon became obvious that applying these new imaging techniques to the study of schizophrenia would be fruitless, since we knew so little about the relationship between brain function and mental activity. Ironically, considering that we are measuring objective, physical activity, the restrictions on what you can do while lying in a brain scanner make it easier to study subjective experiences than objective behaviour. In many of the early studies volunteers lay in the scanner and imagined making movements (Stephan et al., 1995) or imagined seeing faces (O'Craven & Kanwisher, 2000).
How do we know about the world?
Another unexpected advantage of scanning is the ability to look at brain activity that occurs without awareness.
Of course, there are elegant behavioural paradigms for looking at effects of unconscious processing, but once you have asked your volunteers whether they noticed some stimulus that was irrelevant to the task they were performing, their focus of attention is irrevocably altered. With the scanner you can measure whether stimuli outside awareness elicit brain activity without ever mentioning these stimuli to the volunteers. Such studies confirmed, in a particularly striking manner, how much brain processing goes on without awareness (Rees et al., 2002). This observation is critical for research aimed at uncovering the neural correlates of consciousness (NCC). A key question that still remains to be answered is, what is the difference between the neural activity associated with consciousness and the neural activity that is not (Frith et al., 1999)?
I thought that studying schizophrenia would provide answers to understanding consciousness, but, instead, I realised that it is the study of consciousness that will help us to understand schizophrenia. False perceptions such as hallucinations are disorders of consciousness. However, these false perceptions are, at first sight, difficult to understand in terms of brain function. All the evidence we have about the state of the world comes through our senses via our brain. It is easy to understand how damage to the brain can impair our perception. For example, damage to the colour area (V4) of the visual brain means that colour is no longer available for perception. As a result the patient has a visual world without colour (Zeki, 1990). However, hallucinations are experiences in the absence of any signals coming from the senses. Why should our brain create such experiences and how does it do this?
The Helmholtz/Bayes framework
I have already mentioned one clue to the answer to this question – the large amount of brain activity that goes on without any associated conscious experience. It took me a long time to realise the significance of this clue myself, and it was this realisation that led to the main ideas in Making Up the Mind.
Some of this brain activity is involved in creating our perceptions. This idea was originally proposed by Helmholtz (1866), who talked about our brain's 'unconscious inferences'. Helmholtz had made two important observations. First, that there was a long time, in terms of neural transmission (~200ms), between a signal striking the senses and the emergence of a conscious percept. Second, that sensory signals are essentially crude and ambiguous. He concluded that perception depends upon the brain making unconscious inferences and that these inferences take time. The perception of depth is an obvious case where inferences have to be made. Just from its size on the retina, we can't tell whether we are looking at a small object nearby or a large object far away. We need to use other cues, like motion parallax.
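As a toy illustration (my own, not from the article) of why size on the retina underdetermines the world: the visual angle an object subtends depends only on the ratio of its size to its distance, so very different objects can project identical images.

```python
# Why retinal size is ambiguous: a sketch. The visual angle subtended by
# an object depends on size / distance, so a small, near object and a
# large, far object can project the same image. Numbers are arbitrary.
import math

def visual_angle(size_m, distance_m):
    """Visual angle in degrees subtended by an object of a given size."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

near = visual_angle(size_m=0.1, distance_m=1.0)    # 10 cm object at 1 m
far = visual_angle(size_m=1.0, distance_m=10.0)    # 1 m object at 10 m
print(near, far)  # identical angles: the retina alone cannot tell them apart
```

The ambiguity is only resolved by bringing in further cues, such as motion parallax, exactly the kind of unconscious inference Helmholtz had in mind.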
Helmholtz's idea was strongly promoted by Richard Gregory and others during the rise of cognitive psychology in the latter part of the 20th century when the process of perceptual inference was referred to as 'analysis by synthesis'. Today the same idea dominates theories of perception and is referred to as 'predictive coding' (Yuille & Kersten, 2006).
In addition to Helmholtz, a key precursor of these theories of perception was the Revd Th. Bayes (1763/1958). Bayes' theorem is concerned with the relationship between evidence and beliefs. In a Bayesian framework, beliefs are expressed as probabilities. If I have a strong belief about the state of the world, then I consider the probability of that being the true state of the world to be high. Perception is a belief about the state of the world, or, in other words, an estimate of the state of the world. We can never know the true state of the world, but we can test and improve our estimates by acting on the world and collecting new evidence from our senses. Bayes' theorem tells us how much we should change our beliefs about the world given this new evidence.
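Bayes' theorem itself is simple to state: the updated (posterior) belief is proportional to the prior belief multiplied by the likelihood of the new evidence. A minimal numerical sketch, with purely hypothetical probabilities:

```python
# Illustrative sketch of Bayesian belief updating; all numbers are
# hypothetical, chosen only to show how the arithmetic works.
# Hypothesis H: "the object I am looking at is nearby".
prior = 0.5                # P(H): belief before the new evidence arrives
p_e_given_h = 0.8          # P(E|H): probability of the evidence if H is true
p_e_given_not_h = 0.2      # P(E|not H): probability of the evidence otherwise

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(posterior)  # 0.8: the evidence has strengthened the belief
```

The same arithmetic, applied over and over as new evidence arrives, is what lets an estimate of the state of the world be continually tested and improved.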
This Helmholtz/Bayes framework had a number of interesting implications, and I suspected that many people might be quite shocked by them.
- Our experience of having a direct perception of the world is an illusion. This illusion is created by our lack of awareness of the inferences being made by our brain.
- There is no qualitative difference between perceptions and beliefs. A perception is a belief about the world that we hold to have extremely high probability.
- Perceptions are created by combining bottom-up sensory signals with top-down prior beliefs.
- Our perceptions are an estimate of the state of the world and never the true state of the world. However, we can constantly improve our estimate by making and testing predictions. For survival it is more important to be able to predict the state of the world than to have a very good estimate of what it was in the past. Furthermore, for survival all that matters is that our model of the world makes useful predictions.
In this framework, hallucinations are no longer such strange phenomena. All our perceptions are hallucinations, in the sense that they are created by our brain. However, our perceptions are hallucinations that are strongly constrained by reality. These constraints derive from the evidence provided by our senses, but also from our prior beliefs. Furthermore, in this framework, there is no essential difference between hallucinations and delusions. Both result from the assessment of evidence constrained by prior expectations.
And here might lie the critical defect in schizophrenia that leads to hallucinations and delusions. The constraints of prior expectations seem no longer to apply (Fletcher & Frith, 2009).
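To make this concrete, here is a minimal sketch (my own illustration, using Gaussian assumptions that the article does not spell out) in which the percept is modelled as a precision-weighted average of prior expectation and sensory signal. When the prior's weight collapses, the percept simply tracks the signal, noise and all:

```python
# Sketch of precision-weighted combination of prior and evidence under
# Gaussian assumptions. The posterior mean is a weighted average of prior
# mean and sensory signal, each weighted by its precision (1 / variance).
# All numbers are hypothetical.

def combine(prior_mean, prior_var, signal, signal_var):
    """Posterior mean and variance for two Gaussian sources of belief."""
    w_prior = 1.0 / prior_var      # precision of the prior expectation
    w_signal = 1.0 / signal_var    # precision of the sensory evidence
    mean = (w_prior * prior_mean + w_signal * signal) / (w_prior + w_signal)
    var = 1.0 / (w_prior + w_signal)
    return mean, var

# A reasonably confident prior pulls a noisy signal towards expectation.
print(combine(prior_mean=0.0, prior_var=1.0, signal=4.0, signal_var=1.0))
# -> (2.0, 0.5): the percept lies between prior and signal

# With an almost flat prior (huge variance) the percept simply follows
# the signal, however noisy it is.
print(combine(prior_mean=0.0, prior_var=1e6, signal=4.0, signal_var=1.0))
# mean ~= 4.0
```

On this sketch, a failure of prior constraints does not switch perception off; it leaves the inference machinery running, but dominated by whatever the senses (or spontaneous neural activity) deliver.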
The problem with psychology
But are these rather complex ideas a suitable topic for a popular book about psychology? Psychology is different from other sciences in many ways, but the most important difference is that everyone has their own intuitions about psychology. This includes psychologists: we all use folk psychology all the time. With disciplines like physics or molecular genetics we accept that we know little or nothing about the subject and respect the experts who do. The psychologist who makes some exciting new discovery is told either that everybody knew that already or else that it must be nonsense. To persuade people of the importance of psychology we need to choose some strong belief in folk psychology and provide sufficient evidence to convince people that they are wrong in holding this belief.

It seemed to me that my work on perceptions and hallucinations could provide the basis for a persuasive book about psychology. Furthermore, my work with brain imaging could provide evidence that people find especially compelling. For some reason brain-imaging studies do seem to capture the imagination of the public, or at least of the press. Experimental psychologists are, quite rightly, annoyed when phenomena that they have been working on for years are picked up by the press as having been recently 'discovered' through a brain-imaging study. In Making Up the Mind I try to show by examples how behavioural experiments are just as important as brain-imaging studies in telling us about relationships between brain and mind.
We all have the strong belief that we have a direct perception of the world. This is because we have no awareness of all the inferences being made by our brain. In writing Making Up the Mind I aimed to show people that this belief is wrong.
I wanted to show how psychologists have arrived at this conclusion and how, through experiments, they create the evidence that supports this conclusion.
At the same time I wanted to combat the persistent dualist denial that there is any relation between the physical world of the brain and the mental world of the mind. After all, this is what makes neuropsychology the most exciting and difficult of the sciences. This is the discipline where the mind and the brain come together.
We are all connected
There is a second popular illusion that I wanted to confront in Making Up the Mind. This is an illusion about our social world. A striking feature of the symptoms associated with schizophrenia is the extent to which they are about other people. If you have hallucinations they are likely to consist of voices talking to you or about you. If you have delusions they are likely to be about people communicating with you, maligning you or controlling your actions. In The Cognitive Neuropsychology of Schizophrenia I suggested that people with a diagnosis of schizophrenia would have problems with social cognition and, in particular, with theory of mind or mentalising tasks, in which people have to infer the intentions and beliefs of others. This prediction has been largely confirmed (Harrington et al., 2005).

Since that time, however, there has been a dramatic increase in interest in, and studies of, the 'social brain'. This upsurge of interest has been driven, in part, by the discovery of mirror neurons, first noticed in monkeys (Rizzolatti & Craighero, 2004). These neurons became more active when a monkey performed a particular action and also when the monkey saw the experimenter performing the same action. A number of different mirror systems have now been identified in humans as well. We all tend to imitate (Dimberg et al., 2000) and share the emotional expressions of others (Wicker et al., 2003). If we see someone else being touched, brain activity occurs in the area of somatosensory cortex that would be activated if we ourselves were touched in the same way (Blakemore et al., 2005). Of particular interest is the tendency we have to covertly imitate each other's movements and gestures, known as the chameleon effect (Chartrand & Bargh, 1999). In some experiments, one participant in the interaction is instructed to covertly imitate the other. The results show that being imitated makes us like the person we are interacting with and makes us more likely to give money to charity afterwards (van Baaren et al., 2004).
This social mirroring has important functions. It makes us less selfish and more cooperative. It also increases alignment between people, which enhances communication (Pickering & Garrod, 2004). For me, however, the key observation from these experiments is that this mirroring mostly happens without our being aware of it. Except in rare cases of synaesthesia, we are not aware of the activity in our own sensory cortex when we see someone else being touched. Also, the prosocial effects of being imitated would almost certainly disappear if we became aware that we were being imitated (Lakin & Chartrand, 2003). As a result of this lack of awareness we feel much more independent of others than we really are. Because all this activity is hidden from us, we do not realise how embedded we are in the social world. We feel that we are independent agents.
Freedom and responsibility
Since Libet's classic experiment (Libet et al., 1983) showing that brain activity precedes the conscious decision to act there have been ever-more frequent claims that neuroscience has shown that free will is an illusion (Wegner, 2003). Most recently, Soon et al. (2008) reported that brain activity measured up to 10 seconds beforehand could predict which action a person was going to perform. The discovery of social mirroring and its effects also suggests that our actions are much more constrained than we realise. However, whether or not we have free will, we have a strongly felt experience of being free agents. We feel that our intentions cause our actions and that we could have chosen to do something different if we had wanted to. This feeling of being an agent, whether or not it is illusory, has a very important role in our social interactions. From an early age we make a distinction between deliberate acts and accidents (e.g. Behne et al., 2005), and this distinction is associated with the idea of responsibility.
The importance of responsibility can be observed even in simple laboratory games involving trust and reciprocity (Fehr & Gächter, 2002). In these games the players can invest money in the group or keep it for themselves. Money invested in the group gains interest (this game was developed before the credit crunch!) and is then shared among all the members of the group. Thus investing increases the amount owned by the group as a whole, but slightly reduces the amount held by the investor. As long as many people invest then everyone gains. However, there are always a few individuals (free riders) who realise they can gain even more by benefiting from the investments of others and not investing themselves. With repeated rounds of such a game, overall investments decrease as members stop investing since they don't see why they should support the free riders. As a result the group as a whole loses out. Fehr and Gächter (2002) showed that this problem could be resolved by allowing the players to punish the free riders. A player can pay a small amount of money to have another player fined. This is known as altruistic punishment since it has a cost. When this sanction is introduced into the game the amount of free riding declines and the amount of investment increases. As a result the group as a whole benefits (see also Gurerk et al., 2006).
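The mechanics of such a game are easy to sketch. The following toy single-round version is my own illustration; the endowment, interest multiplier and cost-to-fine ratio are made up, not Fehr and Gächter's actual experimental values.

```python
# Illustrative sketch of one round of a public goods game with
# altruistic punishment. All parameters are hypothetical.
ENDOWMENT = 20        # money each player starts the round with
MULTIPLIER = 1.6      # invested money "gains interest" before being shared
PUNISH_COST = 1       # what each punisher pays...
PUNISH_FINE = 4       # ...to have a free rider fined this much

def play_round(investments, punish_free_riders=False):
    """Return each player's payoff given how much each invested."""
    pot = sum(investments) * MULTIPLIER
    share = pot / len(investments)       # the pot is shared equally
    payoffs = [ENDOWMENT - inv + share for inv in investments]
    if punish_free_riders:
        for i, inv in enumerate(investments):
            if inv == 0:                 # a free rider
                for j in range(len(investments)):
                    if j != i and investments[j] > 0:
                        payoffs[j] -= PUNISH_COST   # the punisher pays a cost
                        payoffs[i] -= PUNISH_FINE   # the free rider is fined
    return payoffs

# Three investors and one free rider: without sanctions the free rider
# does best, which is why investment collapses over repeated rounds.
print(play_round([10, 10, 10, 0]))
# [22.0, 22.0, 22.0, 32.0]: the free rider earns most

# With altruistic punishment, free riding no longer pays.
print(play_round([10, 10, 10, 0], punish_free_riders=True))
# [21.0, 21.0, 21.0, 20.0]
```

Note that punishment is costly to the punisher too, which is what makes it altruistic: the punishers accept a small loss in order to make free riding unprofitable for everyone's benefit.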
What has all this to do with responsibility? Tania Singer and her colleagues (2006) found that punishment in these economic games is only applied to people who we believe are acting freely and deliberately. Punishment was not applied when players were told that other players were not choosing their responses, but simply following a sheet of instructions.
In addition the emotional response, measured by fMRI, was greater to the faces of players who persistently cooperated or defected than it was to those who simply followed instructions (Singer et al., 2004).
I believe that these data show that our sense that we are each of us responsible for our actions has a vital role in developing the sanctions that enable the good of the group to take priority over individual advantages.
BOX - Reacting quickly
The problem with writing about science is that the most exciting results always appear just after your manuscript has gone to press. This was certainly the case with Making Up the Mind…
In a recent study by Roman Liepelt and colleagues, participants were asked to lift their first or second finger as quickly as possible in response to a visual cue. If the participants could see a picture of a hand in which these same fingers were held down in clamps, their responses were slower even though their own fingers were completely free of restraint.
It seems that even reaction time, the mainstay of experimental psychology, has a strong social component.
Liepelt, R. et al. (2009). Contextual movement constraints of others modulate motor preparation in the observer. Neuropsychologia, 47, 268–275.
We live in exciting times
The discipline of social cognitive neuroscience has flourished dramatically in the last few years. Nearly every week a new experiment is reported revealing novel cognitive mechanisms underpinning social interactions and group behaviour (for example, see box). We are even beginning to get clues to the kinds of computational mechanisms that might enable us to read each other's intentions (Hampton et al., 2008). In comparison to other sciences, we psychologists are lucky to be living in such exciting times.
Chris Frith is at the Wellcome Trust Centre for NeuroImaging at UCL and the Interacting Minds Project, University of Aarhus
[email protected]
References
Bayes, T. (1958). Studies in the history of probability and statistics: IX. Thomas Bayes' essay Towards Solving a Problem in the Doctrine of Chances. Biometrika, 45, 296–315. (Original work published 1763)
Behne, T., Carpenter, M., Call, J. & Tomasello, M. (2005). Unwilling versus unable: Infants' understanding of intentional action. Developmental Psychology, 41, 328–337.
Blakemore, S.J., Bristow, D., Bird, G. et al. (2005). Somatosensory activations during the observation of touch and a case of vision-touch synaesthesia. Brain, 128, 1571–1583.
Chartrand, T.L. & Bargh, J.A. (1999). The chameleon effect: The perception–behavior link and social interaction. Journal of Personality and Social Psychology, 76, 893–910.
Dimberg, U., Thunberg, M. & Elmehed, K. (2000). Unconscious facial reactions to emotional facial expressions. Psychological Science, 11, 86–89.
Fehr, E. & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415, 137–140.
Fletcher, P.C. & Frith, C.D. (2009). Perceiving is believing: A Bayesian approach to explaining the positive symptoms of schizophrenia. Nature Reviews Neuroscience, 10, 48–58.
Frith, C.D. (1992). The cognitive neuropsychology of schizophrenia. Hove: Lawrence Erlbaum.
Frith, C.D., Perry, R. & Lumer, E. (1999). The neural correlates of conscious experience: An experimental framework. Trends in Cognitive Sciences, 3, 105–114.
Gurerk, O., Irlenbusch, B. & Rockenbach, B. (2006). The competitive advantage of sanctioning institutions. Science, 312, 108–111.
Hampton, A.N., Bossaerts, P. & O'Doherty, J.P. (2008). Neural correlates of mentalizing-related computations during strategic interactions in humans. Proceedings of the National Academy of Science USA, 105, 6741–6746.
Harrington, L., Siegert, R.J. & McClure, J. (2005). Theory of mind in schizophrenia: A critical review. Cognitive Neuropsychiatry, 10, 249–286.
Helmholtz, H. von (1866). Handbuch der Physiologischen Optik. Leipzig: Voss.
Lakin, J.L. & Chartrand, T.L. (2003). Using nonconscious behavioral mimicry to create affiliation and rapport. Psychological Science, 14, 334–339.
Libet, B., Gleason, C.A., Wright, E.W. & Pearl, D.K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106(3), 623–642.
O'Craven, K.M. & Kanwisher, N. (2000). Mental imagery of faces and places activates corresponding stimulus-specific brain regions. Journal of Cognitive Neuroscience, 12, 1013–1023.
Pickering, M.J. & Garrod, S. (2004). Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences, 27, 169–190 (discussion 190–226).
Rees, G., Kreiman, G. & Koch, C. (2002). Neural correlates of consciousness in humans. Nature Reviews Neuroscience, 3, 261–270.
Rizzolatti, G. & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Singer, T., Kiebel, S.J., Winston, J.S. et al. (2004). Brain responses to the acquired moral status of faces. Neuron, 41, 653–662.
Singer, T., Seymour, B., O'Doherty, J.P. et al. (2006). Empathic neural responses are modulated by the perceived fairness of others. Nature, 439, 466–469.
Soon, C.S., Brass, M., Heinze, H.J. & Haynes, J.D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11, 543–545.
Stephan, K.M., Fink, G.R., Passingham, R.E. et al. (1995). Functional anatomy of the mental representation of upper extremity movements in healthy subjects. Journal of Neurophysiology, 73, 373–386.
van Baaren, R.B., Holland, R.W., Kawakami, K. & van Knippenberg, A. (2004). Mimicry and prosocial behavior. Psychological Science, 15, 71–74.
Wegner, D.M. (2003). The illusion of conscious will. Cambridge, MA: MIT Press.
Wicker, B., Keysers, C., Plailly, J. et al. (2003). Both of us disgusted in my insula: The common neural basis of seeing and feeling disgust. Neuron, 40, 655–664.
Yuille, A. & Kersten, D. (2006). Vision as Bayesian inference: Analysis by synthesis? Trends in Cognitive Sciences, 10, 301–308.
Zeki, S. (1990). A century of cerebral achromatopsia. Brain, 113(6), 1721–1777.