
Can neuroscientists read your mind?

An exclusive extract from 'Sex, Lies, and Brain Scans', by Barbara J. Sahakian and Julia Gottwald.

12 November 2019

A British Psychological Society Book Award winner, Sex, Lies, and Brain Scans by Barbara Sahakian and Julia Gottwald is out in paperback from Oxford University Press on 28 November. They have shared this exclusive chapter with us.

 

Every one of us is a mind-reader. We do it every day, and our ability to communicate and cooperate with our social group depends on it. What does my boss think of me? Is my partner happy? Is my son going to run after his football and into the busy street? You might not be aware of it, but you are processing a great deal of social information whenever you interact with another person. Your brain works like a detective conducting an investigation into another person's thoughts and mental state, based on evidence from, for example, facial expression, body language, tone of voice, and your previous knowledge of the person. These factors form a bigger picture and most of us are quite skilled at coming to the right conclusion – reading the mind. We only become aware of our investigative skills when we observe people who lack them. Social cognition is greatly impaired in some psychiatric conditions such as autism spectrum disorder. The majority of patients have difficulty understanding their peers' thoughts and feelings. They read facial cues less accurately, have trouble putting themselves into another person's shoes, and are less able to form relationships. This shows how important social cognition is for our everyday lives.

Even though we are skilled mind-readers, we are far from perfect. Mistakes are common and some people are impressively capable of hiding their true feelings. People in some professions, such as politicians, actors, and poker players, have to be particularly good at hiding their emotions. What if there were an objective, accurate, scientific method of reading someone's mind? Such a scenario could be both dream and nightmare: though undoubtedly useful, in the wrong hands such technology holds the potential for abuse.

How Much Mind-Reading Is Really Possible Today?

Some scientific studies have tested the possibility of mind-reading, and this has attracted much press attention. Newspaper headlines claimed that we now have a 'brain scan that can read people's intention', or 'So our minds CAN be read: magnetic scanner produces these actual images from inside people's brains.' But what is the scientific evidence? Many studies try to predict the actions of participants. Subjects perform tasks while their brain activity is recorded by an fMRI machine. The scientists often use an approach called machine learning, in which a computer is not given a pre-programmed solution to a problem. Instead, the model learns from a training set of data fed into it and improves over time, allowing it to make predictions about new data. This approach is highly useful, because it makes the model flexible and driven by the data rather than by the researcher's hypothesis. Machine learning technology is already used in many areas, such as the optimization of online search engines or self-driving cars. But it is also a valuable tool for research.
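
To make the training-and-prediction workflow concrete, here is a minimal sketch in Python with scikit-learn. The 'voxel patterns' are synthetic stand-ins for real fMRI data, and the trial and voxel counts are invented for illustration; the studies discussed in this chapter used their own, far more elaborate pipelines.

```python
# A minimal sketch of the machine-learning workflow described above, using
# synthetic data in place of real fMRI recordings. All numbers (200 trials,
# 500 voxels) are illustrative assumptions, not values from any study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500
# Pretend each trial is a voxel activation pattern carrying a weak class-dependent signal.
labels = rng.integers(0, 2, size=n_trials)            # e.g. 0 = "press left", 1 = "press right"
signal = np.outer(labels - 0.5, rng.normal(size=n_voxels)) * 0.8
patterns = rng.normal(size=(n_trials, n_voxels)) + signal

# The training set teaches the model; the held-out test set checks whether it
# generalises to brain activity it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    patterns, labels, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on unseen trials: {model.score(X_test, y_test):.2f}")
```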

In the early stages of mind-reading experiments, John-Dylan Haynes and colleagues from the Max Planck Institute for Human Cognitive and Brain Sciences were able to predict whether you intended to add or to subtract two numbers presented to you, or whether you were going to press a left or a right button. While this may not make a Hollywood thriller, it was a remarkable achievement back in 2007 and 2008. Since then, the techniques and computing power have improved, allowing the decoding of more elaborate processes. One basic way to 'eavesdrop on one's thoughts' is to find out which nouns a person is thinking about. In 2008, Tom Mitchell and his colleagues at Carnegie Mellon University reported a breakthrough in the field of mental state decoding. They invited nine participants to an fMRI experiment in which each of them was presented with sixty different nouns. The subjects had to imagine the properties of every word when it was presented; for example, when they saw the word 'castle' they might imagine 'knight', 'cold', and 'stone'. The patterns of brain activity for all nouns apart from two were fed into a computer model. The model then predicted the pattern of activity for the two excluded words, based on what it had learned from the other fifty-eight patterns. Afterwards, it was given the two scans that had been left out and matched them with the two words. On average, the computer was right more than seven times out of ten. If that seems too easy, the model was then trained with fifty-nine out of sixty words, presented with a new activity pattern, and asked to predict the matching word – this time, out of a pool of 1001 nouns. The model performed equally well in this scenario. Impressive, isn't it? Bear in mind, though, that this does not make the computer a perfect mind-reader. It needs a lot of training, and in about a third of the cases it was still wrong. And our thoughts are a lot more complex than just single words. In order to decode a mental state, we need more sophisticated techniques.
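
The matching step at the heart of that leave-two-out test can be sketched in a few lines. Here the 'predicted' and 'observed' patterns are random vectors invented for illustration, and cosine similarity stands in for whatever similarity measure the original analysis used.

```python
# A toy version of the leave-two-out matching step: the model has produced
# predicted activation patterns for the two held-out nouns, and we ask which
# pairing of predictions to observed scans fits better.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_two(pred_a, pred_b, scan_1, scan_2):
    """Return the pairing of predictions {A, B} to {scan 1, scan 2} with higher total similarity."""
    straight = cosine(pred_a, scan_1) + cosine(pred_b, scan_2)
    swapped = cosine(pred_a, scan_2) + cosine(pred_b, scan_1)
    return "A->1, B->2" if straight >= swapped else "A->2, B->1"

rng = np.random.default_rng(1)
true_a, true_b = rng.normal(size=500), rng.normal(size=500)
# Predictions and scans are both noisy versions of the underlying true patterns.
pred_a = true_a + 0.5 * rng.normal(size=500)
pred_b = true_b + 0.5 * rng.normal(size=500)
scan_1 = true_a + 0.5 * rng.normal(size=500)
scan_2 = true_b + 0.5 * rng.normal(size=500)

print(match_two(pred_a, pred_b, scan_1, scan_2))   # expected: "A->1, B->2"
```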

Jack Gallant's group from the University of California, Berkeley has developed a highly impressive method to reconstruct a film clip that a subject was watching, based purely on the fMRI recordings. The subjects – in this case, three researchers who are also authors of the paper – first watched a set of film trailers known to the computer. It was therefore possible to associate the trailers with brain activation patterns. After this training stage, the subjects watched a second set of clips, but this time the computer had no information about the content of the film trailers. Instead, the model used the known associations to form a reconstruction of the clip. The reconstructions were blurry and not very detailed… Brain signals are still too complex, and fMRI technology is not capable of capturing very rapid neurotransmission. While considerable advances are still required, it is nonetheless remarkable what we are able to achieve at present. Technology is constantly evolving and this might be just the beginning.
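
The core idea can be caricatured as follows: an encoding model (here a simple linear map, an assumption made purely for illustration) predicts brain activity from clip features, and a new scan is 'reconstructed' by averaging the clips from a large library whose predicted activity best matches the observed activity. Everything below is synthetic; the published pipeline is far more sophisticated.

```python
# A heavily simplified sketch of library-based reconstruction: score candidate
# clips by how well their predicted brain activity matches an observed scan,
# then average the best matches into a crude (blurry) reconstruction.
import numpy as np

rng = np.random.default_rng(2)
n_features, n_voxels, n_library = 64, 300, 1000

encoding_W = rng.normal(size=(n_features, n_voxels))    # stands in for a learned encoding model
library_features = rng.normal(size=(n_library, n_features))
library_predicted = library_features @ encoding_W       # predicted activity for each candidate clip

# Observed activity while watching an unknown clip (here: library clip 42 plus noise).
observed = library_predicted[42] + 5.0 * rng.normal(size=n_voxels)

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidates by correlation between predicted and observed activity,
# then average the features of the top matches.
scores = np.array([corr(p, observed) for p in library_predicted])
top = np.argsort(scores)[-20:]
reconstruction = library_features[top].mean(axis=0)

print("best-matching clip:", top[-1])                   # ideally clip 42
print("similarity of reconstruction to true clip:",
      round(corr(reconstruction, library_features[42]), 2))
```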

One possible application of reconstructing film clips is to reconstruct our 'inner films': our dreams and memories. A Japanese research team led by Tomoyasu Horikawa tried its luck with the former in 2013. They scanned the brain activity of three people while they were falling asleep and entering the dreaming stage, then woke them up and asked for a description of the dreams. The subjects had to repeat this more than 200 times to give the researchers a good pool of data. Among the dreams were ordinary experiences ('I saw a scene in which I ate or saw yoghurt'), but also some unusual scenarios ('I saw something like a bronze statue […] on a small hill'). Key words were assigned to different categories (for example, 'food' or 'geological formations'). Subjects were then shown photos from each of these categories and again their brain activity was measured. These data were fed into the model. The computer compared the activation elicited by seeing an image while awake with the activation during the dream and predicted whether a given content was present in the dream or not ('Did the person dream about food?'). The model does not work perfectly, but it is reasonably accurate: it was right on average three times out of five. One important weakness is the lack of objectivity. Participants had to describe their dream contents to the researchers and these reports form an important data set for the study. But who knows how accurately we remember our own dreams? To date, there is no objective way of measuring this, so we have to rely on subjective reports. Nonetheless, the study illustrates an exciting way of analysing internal representations.
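
In schematic form, this amounts to training a per-category classifier on awake image-viewing scans and then applying it to activity recorded just before waking. The sketch below uses synthetic patterns and an assumed 'food' category purely to illustrate the logic, not the study's actual analysis.

```python
# A schematic version of the dream-decoding logic: train a classifier on brain
# activity recorded while viewing pictures of a category vs. other pictures,
# then apply it to pre-awakening activity. All data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_voxels = 400

def patterns(n, category_present, template):
    base = rng.normal(size=(n, n_voxels))
    return base + (0.6 * template if category_present else 0.0)

food_template = rng.normal(size=n_voxels)

# Training data: viewing food pictures vs. viewing other pictures while awake.
X_train = np.vstack([patterns(80, True, food_template), patterns(80, False, food_template)])
y_train = np.array([1] * 80 + [0] * 80)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Pre-awakening activity from a dream that (in this toy example) did involve food.
dream_scan = patterns(1, True, food_template)
print("probability the dream involved food:",
      round(float(clf.predict_proba(dream_scan)[0, 1]), 2))
```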

As we have discovered, neuroscientists are able to make good predictions about simple and straightforward thoughts. But this leaves out a big and important component of our mental state – our emotions. When your boss says he is happy with your work, a thought identification device may confirm that it is actually your work he is thinking about (and not his afternoon golf game). But does that tell us that he is indeed happy? We need a different type of information to be sure: a peek at his emotions. Karim Kassam and his colleagues from Carnegie Mellon University have taken that peek successfully, using actors from the local community for their study. The actors were asked to put themselves into nine emotional states (anger, disgust, envy, fear, happiness, lust, pride, sadness, and shame) while in the scanner. They induced these emotions by imagining scenarios they had developed before the scan. Rather than merely pretending, they were asked to immerse themselves actively in the feelings over more than a hundred trials, presented in a random order.

While the emotionally drained actors recovered, their data were fed into a computer model which could learn and improve its assessments with experience. The model was able to identify the correct emotions of a subject on average four out of five times when comparing the scan patterns with previous trials of the same subject. That is already remarkable, but here comes an amazing twist: the model was still correct on average seven out of ten times when comparing the neural activity of one subject with the scans from other individuals. Thus emotions seem to have a similar neural basis among individuals (or at least among different actors). This seems to be more true for emotions like anger and less so for shame, but overall the model predicted all nine emotions with impressive accuracy.
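
The cross-subject test described here corresponds to leave-one-subject-out cross-validation: a classifier trained on the other actors' scans tries to label the held-out actor's emotions. The sketch below uses invented data, subject counts, and effect sizes to show the structure of that analysis, not to reproduce the study.

```python
# Leave-one-subject-out cross-validation on synthetic "emotion" patterns:
# each emotion has a shared neural signature plus a per-actor idiosyncrasy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(4)
emotions = ["anger", "disgust", "envy", "fear", "happiness", "lust", "pride", "sadness", "shame"]
n_subjects, trials_per_emotion, n_voxels = 6, 10, 300

templates = rng.normal(size=(len(emotions), n_voxels))   # shared signature per emotion

X, y, groups = [], [], []
for subj in range(n_subjects):
    subject_shift = 0.3 * rng.normal(size=n_voxels)       # idiosyncratic component per actor
    for e, template in enumerate(templates):
        for _ in range(trials_per_emotion):
            X.append(template + subject_shift + rng.normal(size=n_voxels))
            y.append(e)
            groups.append(subj)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Each fold trains on five actors and tests on the sixth.
scores = cross_val_score(LogisticRegression(max_iter=2000), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print("accuracy per held-out actor:", np.round(scores, 2))
```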

The Limitations

While these recent advances are certainly remarkable, they only work within strict limits. There is no 'one-size-fits-all' approach, especially for complex thoughts, because our brains are like every other part of the body: they vary between individuals. Your neural representation of eating a yoghurt (especially while being ashamed) might be different from that of your neighbour. To date, computers cannot handle this variability without being trained. The participants in the described studies underwent functional imaging for extended periods of time; they saw a large number of video clips or were awoken from their dreams annoyingly often. This initial training stage is important for the model to adapt to your personal activation patterns – to 'get to know you'. Therefore, it is also impossible today to use these techniques 'undercover' – you would certainly know if you spent hours in a big noisy fMRI machine, repeatedly drifting in and out of sleep.

Moreover, the widespread use of potential mind-reading would not be practical at present. The equipment is expensive, heavy, and not portable. It requires a special isolated room, trained personnel, and complex analyses. You have to lie still in an MRI scanner for a relatively long period of time before researchers or clinicians can get a good scan of your brain. This is a challenge for young children, but some adults also struggle to lie still for any length of time – especially those with motor symptoms, such as patients with attention deficit hyperactivity disorder (ADHD). Could we have much faster machines where only a part of the head needs to be immersed in the scanner? Techniques such as magnetoencephalography (MEG) make this possible. In contrast to fMRI, MEG measures the magnetic fields produced by the brain rather than blood flow. These magnetic fields change quickly and the technique is able to detect these rapid changes. In fact, the temporal resolution of MEG is considerably higher than that of fMRI. But nothing comes without a price: tracing the brain activity back to a precise location is much harder with MEG. Researchers sometimes combine these techniques to get the best of both, but this approach is not very practical, since it requires multiple scans and the integration of big data sets. We are still looking for the 'holy grail' of neuroimaging: a temporally and spatially precise method that is also cheap, safe, easy, and portable.

There are some ideas about how to overcome at least the issue of portability. Some newer techniques, including diffuse optical tomography (DOT), which uses light rather than magnets, are in development. The accuracy of these new systems seems to be catching up with the current gold standard in the field – fMRI.

Reading personal thoughts in great detail using fMRI is science fiction for now. The experimental conditions have to be tightly controlled and the context well defined. The computer predictions reflect blurry shapes of a film someone has watched or the presence or absence of food in their dream, to give just two examples. Adrian Owen was involved in the remarkable work on patients in the vegetative state that we discussed in Chapter 1. He asked the world's top neuroimagers to identify what he was doing, via a scan of his brain, taken while he was lying in an fMRI scanner performing a particular task. Here are their answers:

1. Remembering something

2. Tracking a stimulus on the screen

3. Shifting attention from one thing to another

4. Deciding which of two responses to make

5. Doing sudoku

6. Switching attention

7. Tapping a finger in response to a stimulus

8. Counting

9. Looking at disgusting pictures

10. Nothing

What was Adrian Owen really doing? He was telling a lie! Not a single expert could identify the correct activity. They clearly did not lack creativity, but reading someone's mind from a brain scan without any information about the context is currently impossible. Mind-reading does not work in isolation and even the most sophisticated machine has to get to know the subject first. But the field is moving ahead rapidly. Advances in machine learning techniques and new imaging methods could overcome the current limitations of mind-reading. In the future it might be possible to know what your favourite politician is really thinking, no matter how good his acting skills are.

Good vs Bad Mind-Reading?

Most researchers working on technological refinements such as the development of new imaging techniques or better computer modelling have worthy, ethical aims. Better technology would make brain scans safer, easier, and cheaper. They could be used to advance our understanding of the brain and inner thought processes. In some cases, they could also enable us to communicate with people who have lost their ability to speak, due to being in a coma or to muteness. However, there is also the potential for abuse, especially if a large number of people suddenly have access to this technique. When do we have the right to keep our thoughts private? When does the government, an organization, or an individual have the right to read our thoughts? Is it ethical to use mind-reading techniques, for example, at an airport to screen for terrorists? Many people are already uncomfortable with full-body scans: what if a machine had access to your thoughts and emotions?

Clear ethical guidelines for applications of brain scans and potential mind-reading will be needed. We as a society will have to debate where we want to draw the line between responsible, beneficial usage of the techniques and what constitutes their abuse.