In search of the optimum level of trust between human and machine

We have to find the Goldilocks Zone for our interaction with technology: not too trusting, not too distrustful, but just right.

20 November 2015

We live in a world where volatile industrial processes, military actions and our morning commute are increasingly controlled by automated systems. The arrival of the autonomous vehicle on our roads, drones in our skies and unmanned vessels at sea throws into sharp relief the challenges faced in collaboration between human and machine.

It might seem counter-intuitive, ridiculous even, to discuss matters of trust between human and machine; but a relationship of trust between people and the automated systems they use is often a critical factor in making these systems safe and efficient. Trusting that an automated system can handle the more humdrum aspects of its assignment with a minimum of human interference frees up its operators for tasks that are more deserving of their attention and that call on distinctly human skills such as problem-solving, improvisation and ingenuity. But trust is a delicate balance. Trusting an automated system too much, perhaps by adopting a hands-off approach, can lead to delays, inefficiencies and a risk of damage or injury when there is no goal-directed supervision of its behaviour or when the environment in which it is operating changes. Not trusting an automated system enough, on the other hand, by constantly tweaking its assignment parameters or continuously monitoring it, takes the operator's attention away from tasks that genuinely require human intervention and can even defeat the purpose of automating the system in the first place.

Trust, or the lack of it, between human and machine can also impact much simpler systems than the automated vehicles of the future. For example, in a control room, multiple telecommunication functions – radio, telephone, e-mail, emergency telephone and public announcement system – can now easily be integrated into single touch-screen devices. However, many older operators do not trust that these devices will function correctly in an emergency situation and prefer to have hard-wired back-ups available, leading to unnecessary expense, project management time and, ultimately, a cluttered control room environment – which brings with it yet more ergonomic problems.

In a new paper published in Human Factors, researchers at MIT have investigated what sort of characteristics make someone a good operator of unmanned vehicles, and how operators can be encouraged to trust the automated systems under their influence, leaving those systems to take care of the mundane aspects of an operation. Specifically, Andrew Clare and his colleagues tested how easily operators can be primed, through simple verbal prompts, to have just the right level of trust in the machines with which they are collaborating. "Collaborating" is a more appropriate term here than "controlling", "commanding" or "operating": the vessels make many of the decisions themselves, based on algorithms that optimise their respective workloads, schedules and tasks. The human operator sets the high-level goals for the team of vessels, but does not control any one vessel directly.
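To make that division of labour concrete, here is a minimal Python sketch of this style of supervisory control. It is not the software used in the study, and every name in it (Task, Vehicle, greedy_schedule) is purely illustrative: the operator supplies only goals and priorities, and an automated scheduler decides which vehicle does what.

```python
# Illustrative sketch only: the operator sets high-level goals,
# an automated scheduler allocates them to vehicles.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    location: tuple   # (x, y) in arbitrary map units
    priority: int     # set by the human operator: higher = more urgent

@dataclass
class Vehicle:
    name: str
    position: tuple
    assigned: list = field(default_factory=list)

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def greedy_schedule(vehicles, tasks):
    """Assign each task to the vehicle with the lightest workload,
    breaking ties by distance; the operator never picks a vehicle directly."""
    for task in sorted(tasks, key=lambda t: -t.priority):
        best = min(vehicles,
                   key=lambda v: (len(v.assigned),
                                  distance(v.position, task.location)))
        best.assigned.append(task.name)
    return {v.name: v.assigned for v in vehicles}

# The operator's only input: goal areas and their priorities.
goals = [Task("search sector A", (2, 3), priority=2),
         Task("search sector B", (8, 1), priority=1),
         Task("track contact C", (5, 7), priority=3)]
fleet = [Vehicle("UV-1", (0, 0)), Vehicle("UV-2", (9, 9))]
print(greedy_schedule(fleet, goals))
```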

Forty-eight participants were recruited from the local university population and given the task of controlling a simulated team of autonomous vehicles searching an area for hostile forces and targeting them for weapons deployment. The task was a computer-based simulation, based on existing software for controlling multiple autonomous vehicles. The algorithms that controlled the "vehicles" were written to be deliberately imperfect, thereby requiring some intervention from the participant to optimise their performance. While being trained in the use of the interface, participants in the positively- or negatively-primed groups were given a short passage to read consisting of actual quotes from participants in a previous experiment. For the positively-primed participants, the quotes reflected positive naturalistic impressions of the software, for example: "The system is easy to use and intuitive to work with." For the negatively-primed participants, the quotes reflected dissatisfaction with the system, for example: "I did not always understand decisions made by the automated scheduler." The third group received no priming passage during training.

A participant's performance was quantified via various metrics, such as the amount of area covered, the percentage of targets found, the percentage of hostile targets correctly destroyed and the percentage of non-hostile targets incorrectly destroyed. Trust in the automated system was measured by questionnaire after the experiment, and online subjective assessments of perceived performance, trust in the automated system and expectations of future performance were taken throughout the experiment via a scale that popped up on-screen at regular intervals.
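For illustration only, the kind of percentage-based scoring described above might look like the following sketch; the function name, arguments and the absence of any weighting are my assumptions, not the paper's actual scoring scheme.

```python
# Hypothetical illustration of percentage-based performance metrics;
# not the scoring used in the study.
def performance_metrics(area_covered, total_area,
                        targets_found, total_targets,
                        hostiles_destroyed, total_hostiles,
                        non_hostiles_destroyed, total_non_hostiles):
    return {
        "area_covered_pct": 100 * area_covered / total_area,
        "targets_found_pct": 100 * targets_found / total_targets,
        "hostiles_destroyed_pct": 100 * hostiles_destroyed / total_hostiles,
        "non_hostiles_destroyed_pct": 100 * non_hostiles_destroyed / total_non_hostiles,
    }

print(performance_metrics(area_covered=72, total_area=100,
                          targets_found=9, total_targets=12,
                          hostiles_destroyed=7, total_hostiles=8,
                          non_hostiles_destroyed=1, total_non_hostiles=4))
```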

There was a wide range of trust levels amongst the participants and, as you'd expect, positive priming led to higher ratings of trust, while negative priming led to lower ratings of trust. However, across all subjects, there were no significant differences in performance between the priming groups. When Clare and colleagues picked apart their subject pool, however, they discovered that priming did significantly affect the performance of participants who were regular or experienced players of computer games.

It has long been known that experienced gamers suffer from "automation bias", a tendency to over-trust automation. In the initial stages of this experiment, gamers who had been positively primed, or not primed at all, trusted the (deliberately sub-optimal) algorithms too much. Trust has substantial inertia, even when we are talking about trust in non-conscious machines: many small errors will be "forgiven", whilst a single significant error – a "betrayal", if you will – poisons the relationship immediately, and trust then has to be rebuilt over a long period of time. The initial over-confidence displayed by the positively-primed and non-primed gamers took considerable time to unlearn, which led to higher subjective ratings of trust in the system but ultimately worse performance. Gamers who had been negatively primed, on the other hand, began with a much more sceptical view of the system (closer to that of the non-gamers) and, as a result of this scepticism, took more action to correct the behaviour of their search teams. In other words, negative priming improved gamers' performance by recalibrating their trust in the automated systems to a more appropriate level.
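The asymmetry described here can be captured in a toy model. The sketch below is my own illustration, not anything from the paper: small errors nibble away at trust, a single large "betrayal" collapses it, and error-free rounds rebuild it only slowly.

```python
# Toy model of asymmetric trust dynamics (illustrative assumption, not from the paper).
def update_trust(trust, error, betrayal_threshold=0.5,
                 forgiveness=0.02, collapse=0.6, rebuild=0.01):
    if error > betrayal_threshold:       # a single significant failure
        return max(0.0, trust - collapse)
    if error > 0:                        # routine small errors are "forgiven"
        return max(0.0, trust - forgiveness * error)
    return min(1.0, trust + rebuild)     # error-free rounds rebuild trust slowly

trust = 0.9                              # an over-trusting, positively-primed gamer
for error in [0.1, 0.1, 0.0, 0.8, 0.0, 0.0, 0.0]:
    trust = update_trust(trust, error)
    print(round(trust, 3))
```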

The findings of this and similar studies can be used to develop both recruitment strategies and training programmes for supervisors of autonomous vehicles. Based on their experience with technology, some recruits will benefit from prompting to be more, or less, trusting of automated systems. Identifying recruits who are likely to be too trusting or too distrustful of automation, and introducing appropriate priming into their training, could reduce the amount of training time and exposure to an automated system's actual performance that is required to build an effective human-machine team.

Automated systems will never be perfect, and it is likely that they will remain under human supervision for some time, if not permanently. However, unmanned vehicles and their human operators can make a powerful – and safe – team if we can strike a balance between blind faith in technology and our more Luddite instincts. We merely have to find the Goldilocks Zone for our interaction with technology: not too trusting, not too distrustful, but just right.

References

Clare, A., Cummings, M., & Repenning, N. (2015). Influencing trust for human-automation collaborative scheduling of multiple unmanned vehicles. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(7), 1208–1218. DOI: 10.1177/0018720815587803

Post written for the BPS Research Digest by Craig Aaen-Stockdale, Principal Consultant / Technical Lead, Human Factors & Ergonomics at Lloyd's Register Consulting in Oslo, Norway. Aaen-Stockdale previously worked as a postdoctoral research fellow at McGill University in Montreal and at the Bradford School of Optometry & Vision Science. In 2012 he was a visiting research fellow at Buskerud University College in Kongsberg, Norway.