
Who's to blame when partially automated vehicles crash?

Drivers of these vehicles are assigned most of the blame for crashes which, the researchers argue, they can’t reasonably avoid.

18 October 2022

By Emma Young

When a traditional, non-automated car hits a pedestrian, and there are no mitigating circumstances, the public is clear on who is to blame: the driver. Swap that for a fully autonomous vehicle, and the answer is different but still clear: the manufacturer. But what if the driver is using a partially automated vehicle, of which there are now many different models on the market?

A new online study in Scientific Reports finds that drivers of these vehicles are assigned most of the blame for crashes which, the team argues, they can't reasonably avoid. This work is important because, as Niek Beckers at Delft University of Technology and colleagues point out, public opinion on this matter could shape future vehicle design and legislation.

Partially automated vehicles cannot operate without a driver, but they do take over driving tasks for long periods. For example, a partially automated car might take charge of steering and acceleration. However, the team writes, "such automation is still brittle and can fail unexpectedly." If it does, the driver is supposed to immediately take over. In fact, under the terms of use of a partially automated vehicle, the human is required to supervise the driving and be ready to take control at any time, the team notes.

Beckers and his colleagues set out to explore how people would assign blame in different crash scenarios involving a vehicle like this. A total of 250 participants read one of five vignettes in which the partial automation failed unexpectedly, requiring the driver to take control. In one of these vignettes, the driver was not distracted at the time of failure, while the other vignettes featured different types and durations of distractions at this time. For example, in one, the driver was looking down at a display of the day's news (which the team took to be an 'intentional' distraction). In another, the driver was distracted by thinking about dinner (an unintentional distraction, in their view).

The participants used 100-point scales to assign levels of blame for the crash to the driver, the vehicle itself and the manufacturer. They also used 100-point scales to report on how 'aware' they thought the driver was of the automation failure and the need for them to take control, as well as how able they thought the driver was to actually take control at that moment.

The results showed that participants assigned more blame to a distracted driver than to a non-distracted one. The type of distraction, whether deliberate (looking at a screen, say) or unintentional (letting the mind wander), didn't matter.

The participants also perceived the distracted drivers to have less awareness of the situation (the need for them to take control) and, relatedly, judged them less able to take control as required. So even though participants saw distracted drivers as less able to avert a crash, these drivers received more of the blame than a non-distracted driver.

The participants also wrote about their reasons for their decisions. The team's analysis of the answers suggests that many believed that the driver of the partially automated vehicle should have anticipated that the automation could fail. Some argued that the driver had made the choice to drive this type of vehicle, and had failed to supervise the automation as they were meant to do.

But is it really reasonable to expect constant vigilance from the drivers of partially automated vehicles? The team argues that it is not. "Even highly trained pilots struggle with supervising autopilot systems for prolonged periods," they point out, and research on drivers has also shown that automation reduces vigilance. Driver distractions should therefore be expected, the team argues, and, contrary to the views of the participants in their study, should reduce rather than increase the degree of blame a driver takes for a crash. They dub the difference between the blame that is assigned to a driver and the blame that driver should actually shoulder a 'culpability gap'.

"We argue that the responsibility attributed to a driver should be consistent with their ability to control the automated vehicle," the team writes. (Of course, the participants did feel that this control was reduced for the distracted drivers.) "If that ability is impacted by the using the automation, responsibility should shift from the driver to the automation (or, by proxy, its manufacturer)."

How might the culpability gap be closed? Providing more information about what can reasonably be expected of drivers of partially automated vehicles might help, the team writes.

As they note, there is a lot of debate about how to address the issue of drivers failing to monitor automation and intervene when necessary. Whatever routes are pursued, "we argue that the well-understood limitations in human abilities have to be accepted as they are," the team writes. This implies that legal, not just public, blame for crashes involving partially automated vehicles should be more carefully considered.