Fingerprint matching is biased by the assessor’s prejudices
Study could help to explain real-world miscarriages of justice.
14 June 2016
By Alex Fradera
When we think of crime scene forensics, it's easy to view it as the objective end of criminal investigation. Witnesses waffle, suspects slide around from the truth, and jurors can be misled by emotive evidence. But the physical evidence simply is what it is. Yet forensic work requires human judgment, and this opens the door to human error: for example, a tendency to evaluate evidence differently depending on background information. Now a new study in Law and Human Behavior suggests that investigators are more likely to match evidence to the wrong suspect when that suspect fits their prejudices.
Laura Smalarz of Williams College and her colleagues asked 225 American undergraduates (88 per cent white, 70 per cent women) to appraise evidence connected to a fictitious crime. Participants first read a mock police report outlining either a molestation of a child in a park, or a string of identity thefts across the city. They then looked at information about a suspect: their photo, name, and other biographic details, as well as a magnified image of their fingerprint and another supposedly found at the scene of the crime. The participants' task was to judge whether these two very similar prints were a true match. In fact, the prints always differed, meaning the suspect should have been exonerated, rather than incriminated, by the evidence.
For some participants, the suspect was depicted photographically as a white man and labelled Steve Johnson, for others it was Mei Lee, an Asian woman. And this suspect information mattered. Overall, participants were quite accurate in spotting that the prints didn't match, exonerating Mei Lee about 70 per cent of the time for both crimes, and at a similar rate for Steve Johnson when he was suspected of identity theft. But when considered in the context of child molestation, Steve Johnson's prints were considered incriminating by fully half the participants.
Smalarz and her colleagues had predicted this result because in an earlier pilot study they found that child molestation conjures strong associations with white men. Most likely it was these associations that led participants in the main study to make "false positive" errors of judgment (seeing a fingerprint match when there wasn't one), whether by putting less effort into disproving their intuition, or actually finding it harder to detect the discrepancies between the prints. Identity theft, meanwhile, wasn't found to be strongly associated with stereotypes about race or gender in the pilot study, likely explaining why the main study showed no evidence of biased responding in this context.
These results wouldn't be so worrying if people knew they were being biased – at least then they could try to self-correct. But the participants gave no indication of being conscious that their expectations were influencing their judgments. In fact, their self-ratings of impartiality were marginally higher when investigating Steve Johnson's role in the child molestation, suggesting that if anything, they saw themselves as acting more objectively in that instance. While it's true that the participants in this study were not forensic professionals, it's also the case that professionals often prove as vulnerable as laypeople to cognitive and social biases that skew their judgments – for example, past research has shown that judges give harsher sentences when they're hungry and that professionals can be as poor as the public at detecting lies.
Smalarz's team argue their new findings may help to explain real-world miscarriages of justice, such as when suspicion for the 2004 Madrid train bombings fell on a convert to Islam who had professionally defended convicted terrorists, seemingly validated by a fingerprint match… that turned out to be wrong. Real-life forensic systems aren't hermetically sealed; the FBI fingerprint system contains contextual information such as mug shots, biographies, and other content similar to this simulation. The researchers argue that the forensic process would be improved by managing more systematically when such information is exposed to an investigator, keeping most of it veiled and only revealing it when the circumstances demand.
Further reading
Smalarz, L., Madon, S., Yang, Y., Guyll, M., & Buck, S. (2016). The perfect match: Do criminal stereotypes bias forensic evidence analysis? Law and Human Behavior. DOI: 10.1037/lhb0000190