
Photo ID vs sequence: why order matters

Robin Kramer on two psychological concepts and their real-world applications: one he feels is overrated, one underrated.

12 September 2022

Overrated: Photo ID

Whether it's trying to gain entry to a club when underage or something far more nefarious, IDs are often used illegally when 'proving' someone's identity to strangers. And this is a key part of why this method of deception often works: when the person checking the ID is unfamiliar with you, they are substantially less accurate in determining whether the photograph on the document does, in fact, depict your face (Young & Burton, 2018). Face matching, as professionals refer to it, is difficult when the person is unfamiliar because the checker has no knowledge of how that person's face can look across different situations. Just think about how lighting or a facial expression can alter an image, or how a photo taken years ago no longer looks much like you.

We might think that limiting ourselves to official passport and driving licence photographs, taken front-on and with neutral expressions, would address this issue. Unfortunately, face matching performance remains error-prone (Kramer et al., 2019). In fact, as an aside, matching photographs displaying smiles might even be easier and lead to fewer mistakes (Mileva & Burton, 2018). Perhaps it is only our university students, who represent the majority of participants in this area of research, who struggle so much? Problematically, we know that professional passport officers also make plenty of mistakes on such tasks (White et al., 2014). Evidence even shows that spending longer in the job doesn't result in fewer mistakes.

So is there a way to reduce the number of errors made? One route may be to improve the design of the ID itself. A single, neutral, front-on image may not be the best choice if we hope to convey how much an unfamiliar person's face can vary. However, the evidence suggests that presenting a photograph taken in profile as well as, or instead of, a front-on photograph doesn't improve performance (Kramer & Reynolds, 2018). Similarly, providing several images of a face with the aim of conveying information about that face's variability also doesn't consistently help with the task (Ritchie et al., 2020, 2021; White et al., 2014).


Some researchers have hypothesised that the use of averages – computer-generated images created from several photos of the same face – may improve face matching, because these 'wash away' any extraneous information while maintaining the features that make you identifiably 'you'. Unfortunately, this technique has also failed to produce consistent benefits (Ritchie et al., 2018, 2020; White et al., 2014).
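To make the averaging idea concrete, here is a minimal sketch of pixel-wise image averaging in Python. It is illustrative only: the file names are hypothetical, it assumes the photos have already been aligned and cropped to the same size, and published pipelines typically also warp each image to a common face shape first, which is omitted here.

```python
# A minimal sketch of a face 'average': pixel-wise averaging of aligned photos.
# Assumes the images have already been aligned and cropped to identical sizes;
# the file names below are hypothetical.
import numpy as np
from PIL import Image

photo_files = ["face_01.jpg", "face_02.jpg", "face_03.jpg"]

# Stack the aligned photos into one array: (n_images, height, width, channels).
stack = np.stack([np.asarray(Image.open(f).convert("RGB"), dtype=np.float64)
                  for f in photo_files])

# Averaging keeps what is stable across photos (identity cues) and
# washes out what varies (lighting, expression, camera).
average = stack.mean(axis=0).astype(np.uint8)

Image.fromarray(average).save("face_average.jpg")
```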

So, for now, it remains unclear how we might improve the ID document itself.

The other route to better face matching performance might be to target the personnel. Perhaps we can train them to be more accurate on the task? Here, initial findings suggest that the professional training programmes currently in use fail to produce improvements in performance (Towler et al., 2019). These courses typically emphasise a feature-by-feature comparison (concentrating on individual features rather than the face 'as a whole'), and this strategy has likely failed to boost performance because it hasn't been clear which features should be prioritised. For example, recent evidence suggests that focusing on eyebrow comparisons may be useful, although this might depend on the particular photosets under consideration (Megreya & Bindemann, 2018). Extending this notion of which features are important, Towler and colleagues (2021) implemented a training intervention in which participants were instructed to prioritise the ears and facial marks, since these had previously been shown to be most useful according to facial examiners (experts whose job it is to carry out these kinds of facial comparisons). As a result, accuracy increased by around 6 per cent post-intervention. While promising, this improvement is noticeably smaller than one might have hoped.

As we can see, researchers have failed to improve face matching performance by focusing on the documents themselves, and training people to be better at matching has had limited success. Of course, there is a large range of natural abilities when it comes to face matching, and the existence of 'super-recognisers' – those who happen to perform at the top of the pile when it comes to face perception tasks (Russell et al., 2009) – has provided a simple route to improvements… recruit those who are already excellent!

More broadly, many security agencies use other forms of identification (e.g. fingerprints, voices, and other biometrics). As technological capabilities continue to advance, our need to involve humans in this process will likely (and perhaps wisely) diminish. Whoever or whatever is making the judgements, my advice would be towards a greater reliance on other sources of information alongside, or perhaps even instead of, the overrated face images.

Underrated: Sequence

Order matters. Psychologists have known for years that the presentation order of items can influence the likelihood of those items being remembered. In what are termed primacy and recency effects, the first and last items in a to-be-learned list benefit the most from their positions (Glanzer & Cunitz, 1966). Indeed, we can probably all remember situations in which we're introduced to several people at once and the names in the middle of these intros are quickly forgotten. However, what may be less obvious is that order matters beyond memory – the item immediately preceding the current one can shape how that current item is perceived and judged.

Known as sequential effects or serial dependence, this topic has received attention in recent years and researchers are beginning to understand some of the underlying mechanisms. For instance, the previous item in a sequence may alter how we perceive the current item through direct comparison (a perceptual bias). If the line I just saw looked particularly long, then I may perceive the current one as shorter than it really is. In addition, if I'm required to respond to these items in some way (e.g. by providing ratings), then my current response may be influenced by the response I gave to the previous item (a response bias). For example, having just pressed '4', some inertia or uncertainty in responding may result in my current response being nearer to '4' than it would otherwise have been. As a result, these biases may cause my response to be drawn towards (assimilation) or pushed away from (contrast) the one I gave previously, with several parameters determining which occurs.
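To illustrate these two outcomes, here is a toy simulation (my own sketch for illustration, not a model from any of the studies cited): each rating is nudged towards or away from the previous response, and the sign of a single weight determines whether assimilation or contrast emerges in the data.

```python
# Toy simulation of sequential bias in ratings: purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ratings(n_trials=1000, weight=0.3, noise=0.5):
    """weight > 0 pulls each rating towards the previous response (assimilation);
    weight < 0 pushes it away (contrast)."""
    true_values = rng.uniform(1, 7, n_trials)   # 'objective' quality of each item
    ratings = np.empty(n_trials)
    ratings[0] = true_values[0] + rng.normal(0, noise)
    for t in range(1, n_trials):
        unbiased = true_values[t] + rng.normal(0, noise)
        # The current response is nudged relative to the previous response.
        ratings[t] = unbiased + weight * (ratings[t - 1] - unbiased)
    return true_values, ratings

for label, w in [("assimilation", 0.3), ("contrast", -0.3)]:
    true_values, ratings = simulate_ratings(weight=w)
    errors = ratings[1:] - true_values[1:]
    # Positive correlation: errors follow the previous response (assimilation);
    # negative correlation: errors move away from it (contrast).
    r = np.corrcoef(errors, ratings[:-1])[0, 1]
    print(f"{label}: correlation of rating error with previous response = {r:.2f}")
```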


Researchers have typically investigated these biases within the confines of controlled laboratory experiments. Here, the results have been mixed when it comes to determining whether assimilation or contrast is evident in the data. This confusion likely arises because several factors affect which kind of bias emerges. For instance, how long each stimulus is presented onscreen may have an impact – assimilative effects are associated with shorter presentation times (e.g. less than one second; Xia et al., 2016) and contrast effects with longer ones. The length of the interval between stimulus presentations also seems to play a role, with longer intervals appearing to decrease the magnitude of the bias (Xia et al., 2016). Since these parameters tend to vary across experiments, their results often show different patterns of bias.

In addition to the timings of the experiment, researchers have argued that perceived similarity between the stimuli plays an important role in sequential effects (Mussweiler, 2003). If the previous stimulus and the current one are seen as sufficiently similar (or dissimilar), then information supporting that perception becomes more accessible, resulting in assimilation (or contrast). For instance, when gymnastics judges (and regular participants) in a laboratory experiment were told that the gymnasts in a sequence shared the same nationality, their judgements showed assimilation, but when the nationalities were thought to differ, a contrast effect was found – higher scores were given after judges had previously seen weaker performances from athletes of a different nationality (Damisch et al., 2006).

This result is both interesting and troublesome, as it raises the question: do we suffer from these sequential biases in our everyday decision-making? The answer, perhaps reassuringly for scientists everywhere (but less so for athletes and others who are affected), is that these effects are observable across many real-world domains. For example, the gymnast study also used data from the Olympic Games, where you'd hope each competitor would be judged independently. These biases are also present in Olympic synchronised diving scores (Kramer, 2017), although here they took the form of a contrast effect. Problematically, such sequence effects likely play a role in any sport in which athletes compete one after the other and are subjectively scored by human observers (so not the high jump, for example).
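For readers curious about how such dependence shows up in data, here is a deliberately simple sketch: it just asks whether the current score is predicted by the one before it. The scores are made up, and a real analysis, like those in the studies above, would also control for the quality of each performance and for drift across the competition.

```python
# A crude check for sequential dependence in a column of judged scores.
# The scores below are hypothetical; a real analysis would also control
# for performance quality and for overall drift across the event.
import numpy as np

scores = np.array([8.5, 8.0, 7.5, 9.0, 8.5, 7.0, 7.5, 8.0, 9.5, 9.0])

current, previous = scores[1:], scores[:-1]

# Lag-1 correlation: positive values are consistent with assimilation
# (scores drawn towards the preceding score), negative with contrast.
lag1_r = np.corrcoef(current, previous)[0, 1]

# The equivalent slope from a simple lagged least-squares regression.
slope, intercept = np.polyfit(previous, current, deg=1)

print(f"lag-1 correlation: {lag1_r:.2f}, regression slope: {slope:.2f}")
```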

Of course, athletes aren't the only victims when it comes to these types of biases. Researchers pooled data from several Pop Idol television series across eight countries and found evidence of an assimilation effect in performance evaluations (Page & Page, 2010). In other words, if you perform after a weak contestant, you are more likely to be evaluated poorly than if you perform after a strong contestant. Even more relevant to our everyday lives, there is also evidence of a contrast effect in speed dating decisions (driven almost entirely by male evaluators; Bhargava & Fisman, 2014), as well as an assimilation effect when rating the quality of essays (Attali, 2011).

It is clear that sequential biases can be found across a variety of seemingly unrelated facets of our lives. Where evaluations are given to items in a sequence, we are prone to making comparisons between the current and previous item. Humans are biased in ways that we are only now starting to understand. If this seems unfair to you – why should your essay receive a lower mark simply because the one before it wasn't very good? – perhaps being aware of this underrated psychological quirk could be the first step towards turning it to your advantage.

Dr Robin Kramer is a Senior Lecturer in Psychology at the University of Lincoln.

Key sources

Photographic identification

Kramer, R.S.S., Mohamed, S. & Hardy, S.C. (2019). Unfamiliar face matching with driving licence and passport photographs. Perception, 48(2), 175-184.

Kramer, R.S.S. & Reynolds, M.G. (2018). Unfamiliar face matching with frontal and profile views. Perception, 47(4), 414-431.

Megreya, A.M. & Bindemann, M. (2018). Feature instructions improve face-matching accuracy. PLoS ONE, 13(3), e0193455.

Mileva, M. & Burton, A.M. (2018). Smiles in face matching. British Journal of Psychology, 109(4), 799-811.

Ritchie, K.L., Kramer, R.S.S., Mileva, M., Sandford, A. & Burton, A.M. (2021). Multiple-image arrays in face matching tasks with and without memory. Cognition, 211, 104632.

Ritchie, K.L., Mireku, M.O. & Kramer, R.S.S. (2020). Face averages and multiple images in a live matching task. British Journal of Psychology, 111(1), 92-102.

Ritchie, K.L., White, D., Kramer, R.S.S. et al. (2018). Enhancing CCTV. Applied Cognitive Psychology, 32, 671-680.

Russell, R., Duchaine, B. & Nakayama, K. (2009). Super-recognizers: People with extraordinary face recognition ability. Psychonomic Bulletin & Review, 16(2), 252-257.

Towler, A., Kemp, R.I., Burton, A.M. et al. (2019). Do professional facial image comparison training courses work? PLoS ONE, 14(2), e0211037.

Towler, A., Keshwa, M., Ton, B., Kemp, R.I. & White, D. (2021). Diagnostic feature training improves face matching accuracy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(8), 1288-1298.

White, D., Burton, A.M., Jenkins, R. & Kemp, R. (2014). Redesigning photo-ID to improve unfamiliar face matching performance. Journal of Experimental Psychology: Applied, 20(2), 166-173.

White, D., Kemp, R.I., Jenkins, R., Matheson, M. & Burton, A.M. (2014). Passport officers' errors in face matching. PLoS ONE, 9(8), e103510.

Young, A.W. & Burton, A.M. (2018). Are we face experts? Trends in Cognitive Sciences, 22(2), 100-110.

Sequential effects

Attali, Y. (2011). Sequential effects in essay ratings. Educational and Psychological Measurement, 71(1), 68-79.

Bhargava, S. & Fisman, R. (2014). Contrast effects in sequential decisions: Evidence from speed dating. Review of Economics and Statistics, 96(3), 444-457.

Damisch, L., Mussweiler, T. & Plessner, H. (2006). Olympic medals as fruits of comparison? Assimilation and contrast in sequential performance judgments. Journal of Experimental Psychology: Applied, 12, 166-178.

Glanzer, M. & Cunitz, A.R. (1966). Two storage mechanisms in free recall. Journal of Verbal Learning and Verbal Behavior, 5(4), 351-360.

Kramer, R.S.S. (2017). Sequential effects in Olympic synchronized diving scores. Royal Society Open Science, 4, 160812.

Mussweiler, T. (2003). Comparison processes in social judgment: Mechanisms and consequences. Psychological Review, 110(3), 472-489.

Page, L. & Page, K. (2010). Last shall be first: A field study of biases in sequential performance evaluation on the Idol series. Journal of Economic Behavior & Organization, 73(2), 186-198.

Xia, Y., Leib, A.Y. & Whitney, D. (2016). Serial dependence in the perception of attractiveness. Journal of Vision, 16(15), 28.