Maybe Self-Driving Car Ambitions Aim Too High; Trust Is Needed First, Researchers Say


Automotive News

January 11, 2019

By Pete Bigelow

Greater safety benefits from self-driving cars would be achieved faster if automakers focused on establishing trust and ensuring cooperation between man and machine rather than on the distant prospect of fully autonomous cars curtailing crashes, say the authors of a white paper on self-driving vehicle safety.

The vast majority of traffic crashes—94%—are caused by human error and behavior, according to federal statistics. Self-driving vehicle proponents have used this as a rallying cry, arguing that traffic carnage will be dramatically reduced once humans are removed from their traditional driving roles and automated systems take control.

New research casts doubt on that ambition. Replacing humans with automated systems is more nuanced than it appears, and such a switch would not necessarily lead to a drop in deaths and injuries, according to the white paper published by Veoneer in December.

“There will be humans in the vehicle collaborating for as many years as we can see into the future,” said Ola Boström, vice president of research at Veoneer, the technology company spun off from global supplier Autoliv in 2018.

Error will remain

That may sound like a letdown, especially with the round of fully autonomous demonstrations slated for the annual CES technology showcase this week in Las Vegas and market expectations of self-driving deployments in 2019.

On the contrary, Boström and others say there are compelling reasons to keep humans in the driving loop paired with advanced automated systems. Humans, it turns out, are pretty adept drivers, crashing approximately once every 120,000 miles, according to the Insurance Institute for Highway Safety.

When humans do crash, Veoneer says, it’s not because of their innate humanity. It’s because of mistakes in judgment or a lack of situational awareness. Human error does not occur in a vacuum; it can be a symptom of underlying problems that autonomous vehicles will be susceptible to repeating.

“It’s an oversimplification to believe that if you remove human drivers, you’re going to reduce 94% of accidents,” said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology AgeLab whose studies of driver behavior and advanced driver-assist systems helped inform the Veoneer white paper. “I’m not saying we can’t make a major dent in accidents. Human error is the largest contributing factor, but the nature of that error will shift.”

Human error will be infused into crashes long before vehicle occupants step into their cars—it will be designed into automotive software and more complex infrastructure. Error will also remain in the way humans and machines quickly and instinctively interact in what Veoneer calls “moments of truth.”

The right decision

In these split-second moments in which imminent hazards can be averted, humans must trust automated systems to make the right decisions. In return, these systems must decipher a human driver’s readiness to intervene, as well as respond to a range of driver skill levels and human emotions.

“Successfully addressing these factors could well be the key to determining consumer adoption of autonomous-driving tools, not to mention delivering on the safety benefit promised from their use,” researchers wrote in the paper, “Creating Trust in Mobility.”

Automakers are off to an underwhelming start in establishing that trust. Consumer surveys show that motorists often turn off safety features such as lane-keeping assist because they beep too often. At the other extreme, drivers can over-trust the technology, resulting in crashes such as several involving the Tesla Autopilot driver-assist system.

“We need to calibrate trust through system understanding,” Reimer said.

Keeping drivers in the loop requires that they know more than just whether an automated system is on or off. Even automated systems will require monitoring.

“One of the myths of automation is that less human expertise will be needed,” Reimer said. The best systems, he said, will leverage what humans are good at—making reasonable, split-second decisions based on incomplete information—and mitigate what we’re not so good at—repeating the same task mile after mile and keeping our attention on high alert.

Flying lessons

One way to create those systems is to draw on lessons learned in aviation. After a rash of airline crashes in the late 1970s and early 1980s was blamed on human error, industry experts largely pushed toward full automation in cockpits. But as the industry built better technology, it also emphasized pilots’ skills and trained them to handle breakdowns in automation.

“Human error can even be considered something good because if you can capture it and understand it, that can help you design a cockpit for better collaboration between pilot and machine,” Boström said.

Toyota is perhaps the automaker that has come closest to articulating and developing such a system. The company’s Guardian mode doesn’t fit with conventional notions or levels of autonomy; rather, it’s a suite of advanced safety systems that sit passively in the background, activating only to thwart an imminent problem.

Veoneer is developing a broadly similar system, though one that’s far more active and collaborative on an everyday basis.

The company has rolled out the third generation of its Learning Intelligent Vehicle platform, called LIV 3.0. Researchers are gathering data on driver gaze, emotion, cognitive load, drowsiness, hand position and posture, and fusing that with information on the external environment to figure out how best to enhance interactions between human and machine.

Accomplishing that may be as essential to the “moonshot” of self-driving ambitions as fully autonomous travel itself.

“When we put a man on the moon in the ’60s, that was also a journey, and a lot of good technologies came out of that,” Boström said. “Eventually, fully autonomous will come out, but it’s not necessarily the goal.”

Copyright 2019 Crain Communications. All Rights Reserved.
