
Seeing speech in unexpected places: Mouths, machines and minds

Ruth Campbell delivered the C.S. Myers Lecture at the Society’s Annual Conference in Belfast, April 1999.

18 September 1999

This article is about what is going on when we watch faces talking — how we perceive the facial actions produced in speech. Why should psychologists be interested in this? There are two reasons.

Firstly, seeing facial actions improves speech understanding. The most obvious example is in noisy conditions when we are trying to follow a conversation – at a party or in a noisy café, for instance. Under these conditions, seeing the speaker can generate a gain in speech understanding equivalent to increasing the auditory signal-to-noise ratio by up to 20 decibels (Sumby & Pollack, 1954). Computational scientists are now attempting to develop programs that not only convert text to speech, but convert text to a (virtual) seen speaker in action. If this succeeds, the virtual speaker should be very helpful in noisy environments such as the cockpit of a fighter plane, where the pilot is bombarded with acoustic signals while trying to follow auditory instructions or take in speech-based information. It could also give its inventors an edge in a wide range of media simulations.

Secondly, although very few of us are good lipreaders, seeing speech is one of the things that we all do better than we think we do. The most convincing example of this is the 'McGurk' effect (McGurk & MacDonald, 1976), in which we think we have heard a speech sound (phoneme) that was actually delivered 'by eye'. This can occur when the phoneme we hear is dubbed, in synchrony, onto an incongruent seen phoneme (see Figure 1).
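As a rough guide to the scale of that first finding (a standard decibel identity, not a figure from the lecture itself): decibels express power ratios logarithmically, so

    gain (dB) = 10 log₁₀(P_signal / P_noise),   and   10^(20/10) = 100,

meaning a 20 dB gain is equivalent to roughly a hundredfold increase in signal power relative to the noise.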
