
Making a difference

Mihaly Csikszentmihalyi, Uta Frith and Richard Nisbett reveal the roots and fruits of their most famous contributions, introduced by Robert Sternberg; plus online extras, from Adrian Furnham, Michael Gazzaniga and Susan Fiske.

06 February 2017

When I was a graduate student, I often wondered how I could get from where I was as an unknown quantity to where eminent scientists like my advisers were. I was not even sure, at that point, what it was that the field of psychological science looked for to recognise a scientist as 'eminent'. In Scientists Making a Difference, I book-end the 100 contributions with some thoughts of my own on the characteristics of both the work that led to eminence, and of the scientists themselves. If you want to be in the next generation of eminent scientists, think about:  

Impact: The force of the ideas in terms of changing the ways people think or the things people do. Some scientists have impact by moving current paradigms forward in leaps and bounds. Others change the direction in which a field is moving. And still others propose that in the future the field has to start over. Low-impact work is not work people disagree with; it is work that people do not even bother to cite because they do not believe it of sufficient importance to talk about. 

Quality: One component is whether one seeks to study big, important problems or just tiny, unimportant ones. A second component is how well one studies the problems one chooses – does one study them in a rigorous and even elegant fashion, or in a sloppy, ill-considered way that makes it difficult to draw conclusions? A third component is how well one communicates one's data. Does one recognise what is important in one's data, and present it so people can understand it, or does one communicate in a way that no one can or wants to understand, with the result that the work never achieves its potential? Not all work of excellent quality is extremely high-impact. A scientist could do elegant, innovative and top-quality work that just happens to be on problems different from those of interest to many other scientists.

Quantity: Scientists who are more eminent are generally not 'one-idea' scholars, who have a great idea, only then to disappear into obscurity. Rather, they keep coming up with new ideas that keep them productive. Some scientists cast a sceptical eye on the idea of quantity, believing that highly productive scientists, at least in terms of quantity of work, often just turn out lots of articles or books that are of lesser quality. But Dean Simonton, one of the foremost scientists of all time in the field of creativity, has shown this folk conception of quantity to be a myth. In fact, there is a high correlation between quality and quantity in scientific work.

Visibility: This is different from quality or impact. Someone can be highly visible for doing inferior work. But for the most part, visibility is correlated with eminence. Eminent researchers tend to be those whom other scientists, and sometimes laypeople, have heard of. And to achieve impact, one often needs visibility, so that scientists are aware of one's work and thus in a position to cite it.

Ethics: Nothing destroys a career, even an eminent one, faster than unethical behaviour in scientific work. Accomplish your goals while always adhering to the highest standards of scientific ethics.

In our book (written under the auspices of the Federation of Associations in Behavioral and Brain Sciences), we asked some of the most eminent psychologists of the modern era what psychological science looks like as it goes on in their heads. The list of invitees was based on rankings compiled by Ed Diener and colleagues (2014), using criteria such as number of major awards, total number of citations, and pages of textbooks devoted to the scholars' work. Such lists, of course, would have different members depending on the criteria used, but the list seemed to us as good a basis for recognition as any we could find. We invited all living members of that list to tell us what they consider their most important scientific contribution and why, how they got the idea, how it matters for the world, and what they would like to see next. We have tried to capture personal stories of the authors' involvement in, and excitement about, the scientific process.
We hope you enjoy the three extracts, which the editor of The Psychologist selected. You can find three more in this online version, and, of course, 100 in the book itself!

The rediscovery of enjoyment
Mihaly Csikszentmihalyi

Personally, I think that my most important conceptual contribution to science might turn out to be the work I have done trying to adapt evolutionary theory to human development, and particularly to creativity; and in terms of methodology, the work with the Experience Sampling Method, or ESM, which has resulted in the beginnings of a systematic phenomenology that I expect will be widely used in psychology. I must repeat, however, that this is only my personal opinion. In terms of how others evaluate my work, I am quite sure that those who have heard about it at all would single out the concept of flow as being my main contribution – however small. As defined here, flow is complete absorption in what one is doing.

So let's talk about flow. I think flow is important mainly for two reasons: (a) it is an essential aspect of life that almost everyone recognizes as something they have experienced, yet had no name for or way to understand; and (b) recognizing the phenomenon I ended up calling flow added a new perspective to understanding human behavior, a perspective that eventually helped establish the sub-field of Positive Psychology.

My original interest in this phenomenon probably started when, as a child, I was caught up by the tragic events of World War II. The stupid cruelty around me was hard to tolerate and impossible to understand. My two older brothers disappeared – the oldest snatched away from his family to spend years in Soviet prison camps, the younger one drafted out of college and killed in the defense of Budapest.  Nobody knew what was going to happen. Powerful, wealthy, well-educated men acted like frightened children. Daily air raids chased us into basement shelters, and buildings crumbled in flames up and down the streets. I was ten years old while all this was going on, and could not figure out how grown-up people I had assumed to be rational and in control of their lives could suddenly become so clueless.

One small island of rationality remaining was that I had just learned how to play chess. The game was like an oasis in which rules provided predictable outcomes to one's actions. I remember that once I made the opening moves in a game, the 'real' world seemed to disappear, and I could plan the future without having to fear irrational violence. In retrospect, I realize that losing myself in the game constituted a denial of the larger reality of my surroundings; yet the experience of order within chaos that playing chess provided left a lasting – albeit almost entirely unconscious – impression: It suggested an alternative possibility to the senseless reality that humankind had chosen when opting for war.

It took several decades for this seed of intuition to bear any fruit. After the war, when we became refugees fleeing the approaching Soviet armies, the problems of daily survival were too pressing to allow much thought about the human condition in general. Just having enough food so as not to go to bed on an empty stomach was a success. Still, whenever I could, I also read extensively from the work of philosophers, spiritual leaders, historians, and political scientists, hoping to find an explanation as to why a race that could build nuclear bombs and devise trigonometry was unable to find a peaceful and fair way to live. But I did not have much time to devote to such questions. By the time I was fifteen years old, I had to drop out of school and start working at various jobs. My father, who had become the head of the Hungarian Embassy in Italy, resigned his post in 1948 when the Soviets imposed a Communist government in Hungary. So I had to wait tables at a restaurant in Rome, pick peaches for canning near Naples, manage a hotel in Milan, and lead trains full of pilgrims to Lourdes and Fatima. In the meantime, however, I stumbled on some books by C.G. Jung, and discovered psychology. Reading Jung's work suggested the possibility that psychology might be a key to understanding why human beings behave so strangely.

I decided to study psychology, but there were a few obstacles to overcome. First, at this time psychology in Italy was taught only as part of doctoral programs in medicine or philosophy. Second, even if there had been courses in psychology I could take at the University, I would not have been able to take them, given that I had quit school when I was fifteen. To get around these obstacles I decided to apply for a visa to the United States, start a college career while I worked at night, and then become a psychologist. It took a few years to get a visa, but in 1956, just as I turned 22 years of age, I finally was able to set foot in America – specifically Chicago, Illinois, where I arrived with $1.25 in my pocket, but full of good intentions.

Working 8 hours each night was not very pleasant, but then going to school each day (I did pass the high school equivalency exams, so I was admitted to the University of Illinois at Chicago) made it an exhausting drill that left no time for anything else. What was worse, however, was that the psychology taught in the 1950s was nothing like the books by Jung I had read . . . much about rats, nothing about the human spirit. It looked like coming to the USA had been a mistake; the miserable life in Chicago was much more miserable than the miserable life in Rome had been, and the psychology I had come to study turned out to be a dud.

After two years of college, I decided to transfer to the University of Chicago. By then I knew the school's reputation, and had read some work by its teachers that promised to be quite rat-free.

It turned out to be a good decision; at Chicago I became a student of Jacob W. Getzels, who wrote about values and creativity, and soon we started to write articles together. In 1963 I was writing my dissertation in psychology on a group of young artists, trying to understand how they moved from a blank canvas to a finished painting. During this study, I was struck by the fact that these young artists were deeply involved with the process of painting, to the point of not eating or sleeping for long periods of time, but as soon as the painting was finished, they just seemed to immediately lose interest in it, and stack it against a wall with all the other canvases they had painted before. What made this behavior so interesting was the fact that it seemed to contradict the generally accepted paradigm of psychology. According to behaviorist theories, people, like other organisms, were motivated to behave by the expectation of a desirable external state, such as food or the cessation of electric shocks. The young artists, however, knew that their work was very unlikely to be noticed or bought – yet as soon as they finished one painting, they were eager to start a new one. Clearly they were not motivated by having the painting, or by selling it for money; instead, it looked very much like what motivated them was making the painting.

From that point on, I became more and more focused on understanding this apparent anomaly in human behavior. What did the artists get from the process of painting that made them so eager to engage in the activity for its own sake? There were similarities between the artists' work and children's play, and for several years I studied various play-forms to see whether there was a pattern underlying the different forms of play as well as the creative process of artists. The result of all these studies – which were facilitated by many students, friends, and colleagues – has been the recognition that some of the best moments in human life are the results of acting in ways that express who we are, what we are good at doing – as athletes, artists, thinkers, mothers, healers – or simply, as just good human beings. This way of acting is what I ended up calling flow, and the history of how the idea evolved has been chronicled elsewhere, and need not be repeated here.

The recognition that human beings are motivated by the intrinsic rewards of the flow experience, and not just by external rewards, has had a reasonably strong impact on psychology, and on society as a whole. The book I wrote 25 years ago for a general audience, Flow, has been translated into 23 different languages, plus two different translations in Portuguese and in Chinese. An extensive flow research network has been organized in Europe. IPPA, the International Positive Psychology Association, which was built on conversations I had with Professor Martin Seligman of the University of Pennsylvania and now has tens of thousands of members among psychologists all over the globe, was strongly influenced by flow research.

Outside of academic psychology, flow has also found an unusual number of applications. In the field of education, it has influenced U.S. magnet schools, Montessori, and public education; studies of flow and learning in schools have been conducted in Denmark, Japan, Korea, Hungary, Finland, and France. Clearly young people all over the world learn more when they can experience flow in the process. In business, organizations that have adopted flow as a management tool have reported very encouraging results in the United States, Sweden, and South Korea. Makers of computer games have modeled many of their products on flow theory, and an interactive management simulation game based on flow has won the Gold Medal at the U.S. Serious Play Association meetings in 2013.

And this is clearly just a start. The possible uses of the flow perspective to improve the quality of life are truly innumerable. It is poetic justice, perhaps, that an idea and concern born amidst the ruins of an inhuman conflict should blossom into a set of practices that help make the world a better place.

Why study autism?
Uta Frith

I first met autistic children as a trainee clinical psychologist, and I was captivated for life. I thought them hauntingly mysterious. How could they do jigsaw puzzles straight off, and yet never even respond to my simple requests to play with them? What was going on? How could they be tested? Here was a challenge that cried out for basic research.

My mentors, Beate Hermelin and Neil O'Connor, knew how to do elegant experiments with children who hardly had any language and were more than a little wild. I was elated when they offered to supervise me, and I got my dream job in their lab after I finished my PhD. I was hooked on the experimental study of cognitive abilities and disabilities in young children with autism and I wanted to know how they differed from other children. One of the innovations that O'Connor and Hermelin had introduced me to was the mental-age match. They argued that comparing bright and intellectually impaired children would get us nowhere. The brighter would do better, and this told us nothing that we didn't know already. Instead, they compared, say, 8-year-old children who on psychometric tests had a mental age of 4, with 4-year-old, typically developing children with a mental age of 4.

I was proud of one memory experiment I did during my apprenticeship as a PhD student. We observed that autistic children often had a remarkable facility in remembering words by rote. This allowed us to compare autistic and non-autistic children who had the same short-term memory span. What we found gave me a key insight: Typically developing children could remember many more words when these words were presented in the form of sentences than if the same words were presented in a jumbled-up fashion, but autistic children failed to show this advantage. I followed up this finding in experiments with binary sequences that had a clear structure (e.g. abababab) versus those without (e.g. aababaaa). The results suggested that structure or 'meaning' allowed stimuli to be packaged into bigger units and thereby extended memory span. Did autistic children not see meaning in the way other children did, I wondered? Did meaning not exert the same dynamic force in their information processing?

This question occupied me for a long time. Some years later, it became a theory that I termed 'weak central coherence.' Briefly, the information we process is usually pulled together by a strong drive to cohere. We like things to make sense, we like a narrative, we like the big picture. In autism, I proposed, this drive is less strong. The downside is that individuals with autism do not see the forest for the trees. But there is also an upside: Not being hampered by a strong drive for central coherence could actually give you far better attention to detail. You are not lured away by an overall Gestalt to forget about its constituents, and you won't fall prey to certain perceptual illusions. For the first time, here was a way to think about autism not just in terms of disabilities but also in terms of special talents.

As I was developing this idea, I was worried that in all our experiments we were missing the social features of autism. My search for a glitch in processing social information would have been a hopeless quest, had it not been for Alan Leslie and Simon Baron-Cohen. Alan had asked the exciting question of how young infants were able to understand pretend play while they were still learning about the real world. How on earth could they distinguish which was what? This reminded me of a finding nobody had paid much attention to: Autistic children show little, if any, pretend play. Alan proposed a cognitive mechanism that could underpin the ability to decouple representations of an event so that they could become second-order representations. They could then be freely embedded into an agent's mental states: the agent can wish, pretend or believe the original event. Could it be that the decoupling mechanism was missing in autistic individuals? In that case, they should not be able to understand that another person can have a false belief.

Why should this matter? Beliefs and other mental states, such as pretense, wishes, and knowledge, are what enable us to predict what others are going to do. We don't predict this on the basis of the physical state of affairs. So, John will open his umbrella because he believes it is raining, regardless of whether it is actually raining. Tracking mental states is grist to the mill of our everyday folk psychology, also known as Theory of Mind. To be able to talk about this ability, we coined a new word, mentalizing.

Simon, Alan, and I were excited to find out more about this ability. One of the tasks we developed was the Sally-Anne task. It is played out with two dolls, Sally and Anne. Sally has a marble and puts it in her basket. She then leaves the scene. While she is out, Anne takes the marble from the basket and puts it into her box. Sally comes back and wants to play with her marble. The critical question is: 'Where will she look for the marble?' The right answer is, of course, 'in the basket', because that is where she believes it is.

The results amazed us, as they were so clear-cut: Typical 4-year-olds and older learning-disabled children passed this task, while autistic children didn't. They had failed to understand that Sally had a false belief and therefore made the wrong prediction of where she was going to look. This and other experiments threw new light on the social communication problems in autism: If you don't understand mental states, you won't understand deception or get the point of most jokes. You won't get the point of keeping secrets, nor will you understand any narratives that depend on 'she doesn't know that he knows' scenarios. It would limit ordinary social interactions in just the way that interactions with autistic people are limited.

With the advent of the new neuroimaging methods, we could now try to visualize this cognitive mechanism in the brain. One of the pioneers in neuroimaging was my husband, Chris Frith, and he and his colleagues were sufficiently interested to set up a then still daring series of studies. We designed stories, cartoons, and animated triangles, which could be presented in carefully matched conditions that either did or did not require mentalizing. This allowed us to see a difference in brain activity in critical regions, forming a mentalizing network. Other labs replicated this.

One disappointment was that we could not immediately see what was different in the brains of autistic people during mentalizing. But to unravel this required many studies by many people in many different labs. This led us to a better understanding of mentalizing, and has already resulted in differentiating two forms: an apparently innate and unconscious form, and an acquired conscious form that is influenced by culture. This second form can be acquired by autistic people through compensatory learning.

Is there a lesson from my studies beyond the world of autism? I believe that the studies have demonstrated the usefulness of the cognitive level of explanation. The purely behavioral level is not sufficiently transparent for us to deduce the underlying causes; there are just too many. But we can predict what behaviors might arise if a particular cognitive process were faulty. This was the point of the Sally-Anne test: Nobody before had observed that autistic children failed to understand false beliefs. The beauty of this result was that it suddenly made sense of a range of hitherto unconnected behavioral observations, such as the poverty of pretend play, the inability to tell lies, and the incomprehension of irony.

Our concept of autism has changed enormously since the 1960s. There are likely to be many different phenotypes hidden in the autism spectrum. It is now time to split up subgroups and relate specific cognitive processes to specific causes, in the brain and in characteristic patterns of behavior. Mentalizing is not all there is to being social. There are other cognitive processes that underpin our social behavior that might be faulty and give rise to different problems and possibly different forms of autism. We simply need the right theoretical glasses to see differences in the spectrum, which are now blurred. Whether these subgroups conveniently map onto specific biological causes is another question. It is likely that there are hundreds of genetic and other biological causes, too many to make meaningful subgroups. At the behavioral level, each individual is in a class of his or her own. In contrast, at the cognitive level, there is a nexus, which might hold a manageable handful of phenotypes. My money is on cognition.

The incredible shrinking conscious mind 
Richard E. Nisbett

In the first experiment I ever conducted, I gave people a placebo and told some of the subjects it would cause heart palpitations, rapid breathing, and sweaty palms. These are the symptoms people experience when they're undergoing strong emotion. I then gave subjects a series of steadily increasing electric shocks, with instructions to tell me when the shocks became too painful to bear. I anticipated that subjects who were told the pill would cause arousal would mistakenly attribute their shock-produced arousal to the pill. They would consequently find the shock less aversive and would be willing to take more of it than control subjects who could only assume their arousal was being produced by the shock. And that was the finding. After removing the electrodes I asked the subjects in the arousal-instruction condition who had taken a great deal of shock why they had taken so much. A typical answer would be, 'Well, I used to build radios and I got a lot of shocks so I guess I got used to it.' I might then say, 'Well, I can see why that might be. I wonder if it occurred to you that the pill was causing you to be physiologically aroused.' 'Nope, didn't think about the pills and didn't think about the arousal.' I would then tell them what the hypothesis was. They would nod politely and say they were sure that would work for a lot of people. 'But see, I used to make radios…'

It was perfectly clear that subjects had no idea of what had gone on in their heads. At the time, believe it or not, this claim seemed to most people to be quite radical. There was just a bedrock presumption that thought is basically linguistic. To show this assumption was mistaken, and that quite elaborate cognitive processes can go on without people's awareness of them, I began to do experiments with Tim Wilson in which we would manipulate some aspect of the environment that would affect subjects' behavior in some way. For example, we might have people examine an array of nightgowns and tell us which they preferred. The order of the nightgowns had a big impact on preference: the later in the array, the more the subject liked them. Subjects had no idea this was true and when we asked them if the order of the nightgowns could have had an influence on their preference, they would look at us as if we were crazy. Or we would have subjects participate in a study in which they were to memorize word pairs, for example, parrot-bread. Later the subjects participated in 'another study' – this one on word association. They were to say the first word that came to mind when we asked them to provide an example of a category we suggested. Subjects who had studied the word pair ocean-moon were much more likely to respond with 'Tide' when asked to name a detergent than subjects who had not studied that particular pair. When asked why they came up with Tide, subjects were likely to say, 'that's what my mother uses,' or 'I like the Tide box.'

By now psychologists have done hundreds and hundreds of experiments where we find people engaging in unconscious cognitive processes. People tend to reject a persuasive communication when there is a 'fishy' smell in the room where they read it. People choose more orange products in a consumer survey if they're answering with an orange pen. People are more likely to put the coffee money in the honest box if there is a picture of a human face above it.

It became clear to me fairly early that we have no direct access to any kind of cognitive process that goes on. We understand that perceptual and memory processes are hidden in a black box, yet claim mistakenly that we can 'see' the thoughts that produce judgmental and behavioral responses. But the unconscious nature of these thoughts shouldn't seem surprising when you consider the question, 'why should we have access to our cognitive processes?' Being able to observe the complicated inference processes that go on would use up a lot of valuable real estate in the brain.

We nevertheless feel that we have access to our cognitive processes. Why is that? It's because I know I was seeing X and thinking about Y and then I did Z, and any fool would know that if you're seeing X and thinking Y you're going to do Z, because that's the sensible thing to do. 'I saw the squirrel and I put on the brakes because I didn't want to hit it.' 'I was nervous because I had to give a talk.' And in such cases I'm usually right. It's just that I'm right because I know what was in my (conscious) mind and I know what people do or feel in those circumstances. But a correct theory about process needn't come from inspection of process. (Plus, I hate to tell you, but we have a tremendous number of incorrect theories about our cognitive processes. And in novel situations we're as likely to call on one of those as to apply a correct theory.)

If you believe what I'm saying here, it's scary. I'm constantly being influenced by things I hardly notice – many of which I would rather not be influenced by. Moreover, I can't know by direct inspection why I believe anything I believe or do anything I do.

But there it is, and we've got to make the best of it. We can start with the fact that if we have very imperfect knowledge of why we do what we do, we're better off knowing it than not. In my book Mindware: Tools for Smart Thinking I spell out some of the advantages of knowing how little we know about why we do what we do. It's valuable to know that I should be less confident about the accuracy of my opinions than I'm inclined to be. That makes me more likely to consider other people's possibly more valid views. And it makes it more likely that I will try to encounter people and objects in as many circumstances as possible. Abraham Lincoln once said, 'I don't like that man. I must get to know him better.' To which I'd add, see him in as many different contexts as I can. We are particularly blind about the extent to which other people influence us. Realizing this makes it clear how important it is that you carefully select the folks you choose to hang around with.

A big advantage of believing that the unconscious mind is what's driving the bus is that you can make more effective use of it than you do. There's evidence that the conscious mind can make poor decisions – in part because it attends overly much to stimuli that can be described in words. Hard-to-verbalize stimuli may get less of a say than they should. So think over a decision consciously and then hand it over to the unconscious mind before signing off.

A second big advantage is that the unconscious mind can actually learn things that the conscious mind can't. Pawel Lewicki and his coworkers showed subjects a box divided into four quadrants and asked them to predict, for dozens of trials, where an X would appear. There were rules – extremely complicated ones – determining where an X would come up on each trial. Subjects learned those rules but had no inkling of what the rules were or even that there were any. They said they 'just got a feel for where the X would come up.' So expose yourself to situations where you know there's something to learn even though it's not obvious just what it is.

A third big advantage is that the unconscious mind is a great problem-solver – if you give it a chance. Brewster Ghiselin has collected essays by some of the greatest thinkers, artists and scientists in history reporting on how they came up with their ideas. It turns out that the greatest discoveries are produced by the unconscious mind. Solutions typically appear out of nowhere when the thinker is occupied with some unrelated task or relaxing in a cafe.

What's true for geniuses is also true for the rest of us working on mundane problems. But you can't just tell your unconscious mind to go solve a problem. You have to do your homework and then pass the results along to the unconscious. Sit down and think about what the problem is and make a rough sketch of what a solution might look like. The noted writer John McPhee has said that he has to begin a draft of an article, no matter how crummy it is, before the real work on the article can begin. Once you do knock off a hasty sketch, you may subsequently be doing very little conscious thinking about the article, but your unconscious mind is working for you for free 24/7. A good way to kick off the process of writing, McPhee says, is to write a letter to your mother telling her what you're going to write about!

Conclusions

Robert Sternberg

Reviewing the contributions in the book, we found that the contributors, collectively, show the characteristics that theories of creativity identify as critical for doing creative work. What are at least some of these characteristics? Hard work. Willingness to formulate an extended program of research. Willingness to set their own, often idiosyncratic paths. Willingness to surmount obstacles. Above-average analytical intelligence. Intellectual curiosity. Openness to new experiences. Intellectual honesty and courage. Collaborative skills. Willingness to take intellectual risks. Taste in scientific problems. Finding what you love to do. Communication and persuasion skills. Tolerance of ambiguity. Self-efficacy. Knowing your strengths and weaknesses. Picking a niche. And when all is said and done, luck plays a much greater part in success than any of us would like to admit.

We all are malleable. If you think you can develop these characteristics within yourself, you are on your way, if not to becoming one of the top psychological scientists, at least toward becoming a distinguished psychological scientist who can take great pride in his or her accomplishments.

Key sources

Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper & Row. 

Csikszentmihalyi, M. (2003). Good business: Flow, leadership and the making of meaning. New York: Viking.

Diener, E., Oishi, S., & Park, J.-Y. (2014). An incomplete list of eminent psychologists of the modern era. Archives of Scientific Psychology, 2, 20–32. (http://psycnet.apa.org/journals/arc/2/1/20.html)

Frith, U. (1989, 2003 2nd edn). Autism: Explaining the enigma. Oxford: Wiley-Blackwell.

Frith, U. (2012). Why we need cognitive explanations of autism. Quarterly Journal of Experimental Psychology, 65(11), 2073–2092.

Kaufman, J.C. & Sternberg, R.J. (Eds.) (2010). Cambridge handbook of creativity. New York: Cambridge University Press.

Nisbett, R.E. (2015). Mindware: Tools for smart thinking. New York: Farrar, Straus & Giroux.

Nisbett, R.E. & Wilson, T.D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259. 

Simonton, D.K. (2002). Great psychologists and their times: Insights into psychology's history. Washington, DC: American Psychological Association.

Sternberg, R.J. & Lubart, T.I. (1995). Defying the crowd: Cultivating creativity in a culture of conformity. New York: Free Press.

Sternberg, R.J., Fiske, S.T. & Foss, D.J. (Eds.) (2017). Scientists making a difference. Cambridge: Cambridge University Press.


Online extras

Whether You Think You Can, or You Think You Can't – You're Right 
Adrian Furnham

A modest contribution to science

I have never been a good judge of reactions to my work. Some of what I think are my most innovative ideas and experiments seem to attract little attention. I have to rely on others and such things as citation counts to get some idea of the contribution. I consider my research on the psychology of culture shock, the psychology of alternative medicine, and the psychology of money to be my best work, but I am going to discuss another issue: self-estimated intelligence. I believe this is important for science as much as for application.

I got the idea from reading a very short paper by a fellow academic at Edinburgh University. Halla Beloff, an early feminist and social psychologist, wrote a short, semi-academic paper in the British Psychological Society journal, The Psychologist. She reported in 1992 that she had asked students to estimate their own and their parents' IQ scores and found striking sex differences, which she attributed to the modesty training given to girls.

I thought it might be really interesting to replicate this finding, which I did. It seemed surprising to me that the difference was so large, particularly among bright students who had benefited from the women's liberation movement and from much gender equality legislation.

This, in turn, led me to do around 40 studies looking first at cross-cultural differences, then at differences in estimates of multiple intelligences as well as at the relationship between psychometrically measured ("actual") intelligence and self-assessments.

I started simply by showing students the well-known bell curve of intelligence and asked them to "honestly and accurately" estimate their score. With colleagues I collected data in countries from Argentina to Zambia, always showing a sex difference.

Then I started asking about different types of intelligence, using first Gardner's model but then others, including Sternberg's and Cattell's work on different types/facets of intelligence. I then got the subjects to estimate how they would do on the various tests that make up the best IQ tests, like the Stanford-Binet or the Wechsler Adult Intelligence Scale. This provided a finer-grained analysis, and it appeared that the sex differences in estimation were mainly about mathematical and spatial types of intelligence.

In the first study Beloff had asked students to estimate their parents' scores. In a number of studies we asked people to estimate scores not only of parents but also of grandparents as well as of their children and peers. We did one study on people estimating the intelligence of famous people. I remember being very surprised to find in many studies done in many countries, including China and Japan, that parents, on average, thought their sons brighter than their daughters. This could of course lead to self-fulfilling prophecies, with parents investing more in their sons than daughters.

Journal editors soon pointed out that it was important to look at the relationship between "actual" test-derived intelligence and self-estimates. Were males, they asked, overestimating their test scores and females underestimating theirs, or were both true at the same time? The answer seemed to be "both," but there was more evidence of male hubristic over-estimation than of female humble under-estimation.

We did notice that the over- or under-estimation effects were reduced if the subjects made the estimations soon after taking an intelligence test. Currently we are looking at giving people accurate or distorted feedback on their scores and seeing what effect that has. However, we have concluded that you cannot use estimates as a good proxy for actual scores: that is, people's self-estimates are not accurate enough to be used as a substitute for getting an accurate test score.

The work has generated some interest, including a few meta-analyses. Others have picked up the baton.

The psychology of self-estimated intelligence

To summarise, my findings from over 40 studies in this area are:

First, males of all ages and backgrounds tend to estimate their (overall) general intelligence about 5 to 15 IQ points higher than do females. Those estimates are always above average, and usually around one standard deviation above the norm. This difference is much larger than the actual differences one finds in test manuals. Males tend most to "suffer from" hubristic over-estimation.

Second, when judging "multiple intelligences," based on Gardner's model, males estimate their spatial and mathematical (numerical) intelligence higher but emotional intelligence lower than do females. On some multiple intelligences (verbal, musical, bodily-kinesthetic), there is little or no sex difference. When they consider the traditionally defined intelligences, which are usually verbal, mathematical, and spatial intelligence, people of all ages and cultures believe males are more intelligent.

Third, people believe these sex differences occur across the generations: people believe their grandfather was/is more intelligent than their grandmother; their father more than their mother; their brothers more than their sisters; and their sons more than their daughters. That is, throughout the generations in one's family, males are judged as more intelligent than females. This trend is, however, more noticeable in males than in females. People estimate their own scores lower (3-5 points) than those of their children, but 3-5 points higher than those of their parents and 8-15 points higher than those of their grandparents. The sex difference in estimation remains consistent across the generations. In this sense, unless they believe that IQ scores change, people believe there are generational effects: that people (their relatives) are getting brighter with every generation.

Fourth, estimated sex differences are cross-culturally consistent. While Africans tend to give higher estimates, and Asians lower estimates, there remains an estimated sex difference across all cultures. Differences seem to lie in cultural definitions of intelligence as well as norms associated with humility and hubris.

Fifth, the correlation between self-estimated and test-generated IQ is positive and low, in the range of r = .2 to r = .5, suggesting that you cannot use self-assessments as proxies for actual IQ test scores. Some people are accurate, but there are too many outliers who seriously over- or underestimate their test scores and ability.

Sixth, with regard to outliers, those who score high on IQ but give low self-estimates tend nearly always to be female, while those with the opposite pattern (high estimates, low scores) tend to be male.

Seventh, most people say, in the abstract, that they do not think there are sex differences in intelligence, despite the fact that we always find females give lower self-estimates, on average, than do males. We also found that those who said they have taken IQ tests and received feedback seem to give higher self-estimates. This may be because brighter people choose to do tests or are at institutions that do IQ testing.

Where next?

I am not sure there is anything one could grandly see as a theory in this area. However, I would very much welcome the following research. I would like to further explore beliefs about intelligence, such as Dweck's mindset theory, in relation to self-estimated intelligence. Dweck has distinguished between Entity theorists, who essentially believe that intelligence cannot be increased, and Incremental theorists, who believe that with sustained effort everyone can become more intelligent.

Most of all, I would like longitudinal research on a large sample, tracing the relationship between "actual" psychometrically assessed intelligence and self-estimated intelligence over time, to explore causal relations. I could then ask whether self-estimates have an impact on test achievement or the other way around, or indeed both.

 I would like to explore the relationship between self-rated intelligence, general measures of self-confidence, and life success variables such as health, income, and relationship stability. I would also like to explore ways of giving people feedback on their intellectual capabilities to help them ultimately explore and exploit their potentials.

Some useful References

Freund, P.H., & Kasten, N. (2012). How smart do you think you are? A meta-analysis on the validity of self-estimates of cognitive ability. Psychological Bulletin, 138, 296-321.

Furnham, A. (2001). Self-estimates of intelligence: culture and gender difference in self and other estimates of both general (g) and multiple intelligences. Personality and Individual Differences, 31, 1381-1405.

Szymanowicz, A., & Furnham, A. (2011). Gender differences in self-estimates of general, mathematical, spatial, and verbal intelligence: Four meta-analyses. Learning and Individual Differences, 21, 493–504.

Try it and assume nothing

Michael S. Gazzaniga

For the last 100 years, neuroscience has been in its Dodge City stage: chock full of unruly and unfettered "shoot 'em up" cowboys and outlaws. Why has neuroscience been so unruly? No sheriffs. Unlike many other fields, it has not been disciplined by an agreed-upon set of next questions to be answered. As a result, neuroscience, with its unrestrained and unfettered researchers at large poking about in whatever has interested them, has accumulated information about nervous systems (from insects to humans) at an astonishing rate. As I have related before, so much information is stacking up, however, that if one adopted the intellectual style of first learning all there is to know about a topic before studying new dimensions of it, then future progress would be stopped dead in its tracks, like so many cowboys on the streets of Dodge.

No place was as unfettered and unrestrained as Roger W. Sperry's lab at Caltech. Perhaps the premier brain scientist of the last century, he must have told me a hundred times, "Try it.  And don't read the literature until after you have made your observations.  Otherwise you can be blinded by pre-existing dogma." This is how we operated in those delicious free-ranging days in his lab exploring the unknown. If we had an idea, we "tried it." We were also spurred on by other greats wandering the halls. Linus Pauling (recipient of two Nobel Prizes) once stumbled across me in the midst of an experiment, which resulted in the take-away lesson: "Assume nothing."

As an undergrad, I was interested in neural specificity.  In 1960, I lucked out getting a summer NSF fellowship to study with Sperry, but soon was captivated by all the experiments being done in his lab on "split-brain" animals, primarily on cats and monkeys. The results were almost unbelievable: If one side of the brain was trained to do a sensory task, the other side of the brain didn't have a clue about it. I jumped in with both feet and went to work on rabbits and was hooked.

In cats and monkeys, in order to train one half of the brain only, two things had to be done. First, the optic chiasm was divided down the middle. Thus, information exposed to one eye was projected only to the ipsilateral half brain. These animals learned a task quickly and were easily able to perform it through both the trained and the untrained eye. If, in addition, the corpus callosum and anterior commissure were also sectioned, the split-brain phenomenon presented itself. With this additional sectioning, the untrained hemisphere remained ignorant of the task learned by the other half brain. The learned information, it appeared, had normally been communicated through this second set of fibers, which were now cut. Incredibly, it seemed as if there were two mental systems cohabitating side by side in one head. Riveting as these findings were in monkeys and cats, when considered in the context of human behavior, they didn't seem relevant. Could a left hand not know what the right hand is holding? Totally ridiculous.

I was captivated: captivated by Caltech, by Sperry, by research, and by all the questions that split-brain research had engendered. Before I knew it, I was back at Caltech starting graduate school the following summer. Sperry gave me my marching orders: I was to design and prepare a set of studies for a human patient, W.J., who was being worked up by a neurosurgical resident, Joseph E. Bogen. W.J. was a World War II veteran who, disabled from a parachute jump, had been knocked out by a blow from the butt of a German rifle. He was left with intractable epilepsy, suffering several grand mal seizures a week. Bogen's research had suggested that sectioning his corpus callosum might decrease his seizures. W.J. was desperate and willing to try desperate measures. He'd already proved he was brave.

With all that we now know, it may not seem like it, but at the time, our project was beyond daring. This was wild and crazy stuff. No one seriously entertained the notion that the mind could be split, nor that we would actually find evidence that it could be. Weeks earlier, a patient with agenesis of the corpus callosum (a rare congenital condition in which there is a complete or partial absence of the corpus callosum) had come through the lab, and testing showed nothing out of the ordinary. Neither had anything out of the ordinary been found in a series of "split-brain" patients in New York, who had been tested 20 years earlier by Andrew Akelaitis, a talented neurologist. Neither Sperry, one of the world's greatest neurobiologists, nor certainly I, a new greenhorn graduate student, had any significant experience examining patients. Who did we think we were? On paper, it might have seemed to be a fool's game and a waste of time. But at Caltech it wasn't, because the attitude was "try it" and "assume nothing."

The adventure began slowly enough. The pre-operative testing held no surprises.  W.J.'s two hemispheres were normally connected: Each hand knew what was in the other, and each visual cortex seamlessly connected to the other.  The very thought it could be otherwise was stupefying and could barely be considered.  After all the studies were completed, we put the work aside, focused on other projects, and waited until W.J. had his surgery. A few months later, W.J. had recovered nicely from his surgery and his seizures were under control. He was ready and eager to be tested again. That made two of us: I was equally eager.

The day arrived for the first post-surgical testing.  Pasadena was bright and sunny as W.J.'s wife rolled him up to the entrance of the biology building on San Pasquale Avenue. He still wore a helmet for protection (in case he had a seizure) and was using a wheelchair to get around. I wondered, "Will this World War II veteran reveal a deep secret?"  It didn't seem likely and no one thought so: The Triumphal March from Aida didn't kick in as we walked down the hall to the lab. In fact, I, the greenhorn, was left alone to do the testing, which started out routinely, as I have recently described: 

MSG: Fixate on the dot.

W.J.: Do you mean the little piece of paper stuck on the screen?

MSG: Yes, that is a dot. . . . Look right at it.

W.J.: Okay.

I make sure he is looking straight at the dot and flash him a picture of a simple object, a square, which is placed to the right of the dot for exactly 100 milliseconds. By being placed there the image is directed to his left half brain, his speaking brain. (This is the test I had designed that had not been given to Dr. Akelaitis's patients.)

MSG: What did you see?

W.J.: A box.

MSG: Good, let's do it again. Fixate the dot.

W.J.: Do you mean the little piece of tape?

MSG: Yes, I do. Now fixate.

Soon, however, things got more interesting:

Again I flash a picture of another square but this time to the left of his fixated point, and this image is transmitted exclusively to his right brain, a half brain that does not speak.  Because of the special surgery W.J. had undergone, his right brain, with its connecting fibers to the left hemisphere severed, could no longer communicate with his left brain. This was the telling moment. Heart pounding, mouth dry, I asked,

MSG: What did you see?

W.J.: Nothing.

MSG: Nothing? You saw nothing?

W.J.: Nothing.

My heart races. I begin to sweat. Have I just seen two brains, that is to say, two minds working separately in one head? One could speak, one couldn't. Was that what was happening?

W.J.: Anything else you want me to do?

MSG: Yes, just a minute.

I quickly find some even simpler slides that project only single small circles onto the screen. Each slide projects one circle but in different places on each trial. What would happen if he were just asked to point to anything he saw?

And this is when things became mind-boggling:

MSG: Bill, just point to what stuff you see.

W.J.: On the screen?

MSG: Yes and use either hand that seems fit.

W.J.: Okay.

MSG: Fixate the dot.

A circle is flashed to the right of fixation, allowing his left brain to see it. His right hand rises from the table and points to where the circle has been on the screen. We do this for a number of trials where the flashed circle appears on one side of the screen or the other. It doesn't matter. When the circle is to the right of fixation, the right hand, controlled by the left hemisphere, points to it. When the circle is to the left of fixation, it is the left hand, controlled by the right hemisphere, that points to it. One hand or the other will point to the correct place on the screen. That means that each hemisphere does see a circle when it is in the opposite visual field, and each, separately from the other, can guide the arm and hand it controls to make a response. Only the left hemisphere, however, can talk about it. Oh, the sweetness of discovery.

Thus begins a line of research that, twenty years later, almost to the day, will be awarded the Nobel Prize.

I could barely contain myself and doubt that Christopher Columbus felt any more exhilarated by his discovery than I felt by mine. Fifty years of intense research commenced on that day probing the underlying brain mechanisms for human conscious experience.

References

Gazzaniga, M.S. (2014). The split-brain: Rooting consciousness in biology. PNAS, 111(51), 18093–18094.
Gazzaniga, M.S. (2015). Tales from both sides of the brain: A life in neuroscience. New York: HarperCollins.
Myers, R.E. & Sperry, R.W. (1958). Interhemispheric communication through the corpus callosum. Archives of Neurology and Psychiatry, 80, 298–303.

How warmth and competence inform your social life
Susan T. Fiske
Princeton University

Successful scientific contributions are equal parts serendipity, synthesis, integrity, and passion. My most-cited papers all have reflected ideas that happened to be in the right place at the right time—they pulled together competing perspectives, strived for scientific integrity, and were fueled by moral outrage. Of my most-cited work, our model for the two fundamental dimensions of social cognition, warmth and competence, best illustrates these principles.

When we are making sense of other people, we need to know immediately what the other individual or group intends toward us: Are they friend or foe? If they are on our side, then they seem trustworthy, sincere, friendly—in short, warm. If they are against us, they seem none of these. After inferring their intent, we need to know how capable they are of acting on it. If they have high status, we infer they are capable, skilled, effective—in short, competent.

This simple two-dimensional space has synthesis and predictive validity going for it. The combinations of warmth x competence describe common stereotypic responses to all kinds of people, whether at work or in societal groups around the world. For example, in most countries, their own citizens and their middle class generally are viewed as both warm and competent; people are proud of them. Homeless people and undocumented immigrants are stereotypically viewed as neither, and people report being disgusted by them. The mixed combinations are unique to this model: Older or disabled people are stereotyped as warm but incompetent, and people pity them. Rich or business people stereotypically come across as competent but cold; people envy them. Each combination elicits not only its distinctive emotional prejudices but also a distinct behavioral response from the rest of society. So the model describes distinct stereotypes that predict both emotional prejudices and discriminatory tendencies.

The stereotype content model has proved useful in dozens of countries around the world, allowing us to compare across cultures. For example, any group without an address—migrants, refugees, Roma (gypsies), Bedouins (desert nomads), hobos—all are viewed with distrust and contempt across the globe. Likewise, rich people are envied—resented as competent but cold everywhere, regardless of whether being rich is simply associated with social class or with an outsider entrepreneurial group (e.g., Jewish or Chinese people in various times and places). The model also describes people's relationships with animals (e.g., predators are competent but cold; prey are stupid) and with corporations (e.g., everybody loves Hershey's; everyone is disgusted by BP). And individual people fit this warmth-by-competence space, according to our work and related work by others. All this evidence has synthesis (conceptual integration) and scientific integrity (rigorous evidence), as evidenced by peer-reviewed publications from our lab and others.

Serendipity and passion come into play as well. As immigration and globalization increase, all countries (especially developed democracies) are coping with greater diversity than ever before. Although American and European psychological scientists have studied stereotyping and prejudice for decades, the U.S. research focused on Black-White relations (and to a lesser extent, anti-Jewish bias), and the EU research focused on national identities. In the 21st century, we need to understand biases based on all kinds of ethnic, religious, gender, sexual, age, class, and ability distinctions. (We won't be out of work any time soon.) At this time of societal flux, our model went beyond existing blunt ingroup-outgroup analyses to describe some apparently universal, applicable principles. Although related ideas had been kicking around the field, our model jelled at an opportune moment.

My own passion for this topic comes from a sense of moral outrage about people mistreating (even killing) other people because of arbitrary group categories. Stereotypes and prejudices are historical accidents of who allies or competes with whom, and what their relative status is. People underestimate how circumstantial these predictors of stereotype content are, as our model and research have demonstrated. What are accidents of immigration at a particular place and time get generalized to an entire ethnicity. Similarly, gender roles get essentialized as biological destiny. And so forth. These fundamentally unfair societal processes can yield to scientific analysis, careful theory, and rigorous evidence. Moral outrage is not enough, of course. Science prevails.

The idea for this stereotype content model came from several sources. On an implicit level, I had been acquainted with the warmth and competence dimension since graduate school. My dissertation manipulated a target person's apparent sociability and civic competence. My readings had built on Solomon Asch's 1946 research, which manipulated warm-cold traits among a list of competence traits, and on Seymour Rosenberg's 1968 identification of two dimensions—social good-bad and task good-bad—in trait impressions. Shortly after graduate school, I published a paper with Robert Abelson and Donald Kinder on impressions of presidential candidates, which used integrity and trustworthiness as the two trait predictors of voting intention. In the 1990s, Peter Glick and I published the ambivalent sexism inventory, which identified two major types of sexist prejudice: hostility toward women seen as competitive and threatening (i.e., competent but cold) and subjective benevolence toward women seen as cooperative and subordinate (i.e., warm but incompetent). If you squint, these all are precedents for the warmth and competence dimensions, but I did not see that until afterwards.

My more explicit development of these ideas came when I was organizing the stereotyping, prejudice, and discrimination literature for a 1998 handbook chapter. The extant research, it seemed to me, focused on social cognitive processes presumed to be the same for all groups: in the U.S., as noted, mainly Black-White relations, but extended without qualification to, e.g., Latinos, women, and gay men. Our own analyses of gender relationships had already suggested that they differ from, for example, race relations, because men and women have greater interdependence, among other societal factors. So I started thinking about whether there are different kinds of outgroups and whether one could make a systematic and psychologically meaningful typology that would predict other variables.

Upon going public in talks describing our preliminary data, I discovered other people were thinking about parallel issues: Marilynn Brewer and Michelle Alexander, Andrea Abele and Bogdan Wojciszke, Steven Neuberg and Cathy Cottrell, among others. Over a career, I have learned that no one owns an idea, only the published work—especially empirical—on it. Also, in a way, it is intellectually reassuring when others are discovering related phenomena. It suggests that there's a there there.

Presumably real, the apparently universal dimensions of warmth and competence matter for our field and for the world beyond academia. For our field, they suggest that researchers should not gloss over the differences among various outgroups, clusters of which have particular relationships to each other and distinct resulting perceptions. Not all outgroups are equivalent. For applied work, this means that diversity programs and human resource management must acknowledge the systematic variety of outgroups subject to distinct patterns of bias. On a broader scale, the apparently universal dimensions of social cognition mean that any seemingly intent-having entity will be seen in these ways: human individuals, human groups, corporations, nations, animals, robots, nature, gods. Accordingly, if these dimensions are universal, how early do children apply them? Some work suggests that infants distinguish good actors from bad ones, as well as higher-status, more-competent actors from lower-ranked ones. Some researchers are investigating whether even dogs and primates make these same distinctions.

More work remains, besides exploring extensions. We and others have explored the psychometrics of these dimensions. Andrea Abele has proposed that competence comprises both agency and capability, while warmth consists of both trustworthiness and sociability. Also, we and others are exploring patterns in how the dimensions are used across cultures. For example, East Asian cultures seem not to place societal reference groups in the high-high quadrant, as Westerners do, perhaps because of norms about modesty. In other work, income inequality predicts a greater number of groups in the two mixed quadrants, consistent with the idea that inequality requires more complex explanations. Other cultural variants are likely to emerge. For example, societies in conflict (e.g., conducting a war, or a civil war) may be more polarized than societies at peace.

Other future directions could pursue the neural signatures of these phenomena. We did find distinct responses to allegedly disgusting targets, with less medial prefrontal cortex activation and less conscious reflection directed at considering their minds. And envied targets evoked physiological responses and self-reports consistent with Schadenfreude (malicious glee) at their misfortunes. If the dimensions and their combinations are indeed universal—and perhaps evolutionarily old—then the other quadrants and the warmth/competence dimensions themselves might have characteristic neural manifestations. Stay tuned.

Further Reading

Fiske, S. T. (2015). Intergroup biases: A focus on stereotype content. Current Opinion in Behavioral Sciences, 3, 45–50.

Fiske, S. T., Cuddy, A. J., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82, 878–902.