Narrowing the Self-Drive Trust Divide
Are we learning to trust self-drive vehicle technology?
In January 2018 the AAA released the findings of its latest annual survey of driver attitudes in the US. Sixty-three percent of American drivers report feeling afraid to ride in a fully self-driving vehicle. This is a significant portion of the country’s driving population – but it’s a notable decrease from the 78% of drivers who reported similar misgivings in the AAA’s previous survey released in early 2017. The change equates to 20 million more US drivers who trust riding in a self-drive vehicle.
The study findings indicate that Millennial and male drivers are the most trusting of autonomous technologies, with only half reporting they would be afraid to ride in a self-driving car.
Still, developers of autonomous vehicle technology can’t afford to be complacent. Only 13% of drivers in the AAA survey report that they would feel safer sharing the road with a self-driving vehicle, while nearly half (46%) would actually feel less safe. Others say they are indifferent (37%) or unsure (4%).
My colleague Iyad Rahwan, the AT&T Career Development Professor in the MIT Media Lab, has studied this trust issue. In a recent MIT News interview he says that trust in autonomous vehicles “will determine how widely they are adopted by consumers, and how tolerated they are by everyone else.”
In many ways we are entering new territory, Rahwan points out. Self-drive vehicles are not passive objects; they are proactive, have autonomy and are adaptive. And they can learn behaviors that may be different from the ones originally programmed for.
Several difficult challenges will have to be overcome if society is to bridge this trust gap. One is technical: how to build artificial intelligence (AI) systems capable of driving a car safely. There also are legal and regulatory questions to answer. For example, who is liable for different kinds of faults? [i]
A third class of challenges is psychological in nature: people must feel comfortable putting their lives in the hands of AI. Naturally, the type of trip matters. People moved with little fanfare from human-operated elevators to autonomous ones. Yet few individuals would fly on an autonomous airplane, even though modern airplanes are effectively drones.
Rahwan sees three important psychological barriers. One is general concern over the ethical dilemmas associated with self-driving vehicles – for example, how to weigh passenger safety against the safety of pedestrians. His research suggests that people believe autonomous vehicles should minimize overall harm, but prefer to buy cars that always prioritize their own interests.
A second issue is that people may overestimate the risk of dying in a crash caused by an autonomous vehicle, even if these cars are, on average, safer than conventional models. In other words, people don’t always reason about risk in an unbiased way. Consider the headlines generated by a single air crash, despite commercial aviation’s remarkable safety record (2017 was the seventh straight year in which nobody died in a crash on a United States-certificated scheduled airline operating anywhere in the world), compared with the public’s acceptance of over 40,000 deaths and millions of serious injuries caused by road accidents in the US every year. Fear of flying is nonetheless more prevalent than fear of driving.
A third factor is a lack of transparency about what self-drive vehicles are thinking; it can be difficult for humans to predict the behavior of these machines. A precondition of trust is predictability.
Given these issues, it is critical that the industry makes realistic promises about self-drive technology, believes Rahwan. Setting very high expectations can be a recipe for disaster.
Makers of automated vehicles also must be careful to build cars that appeal to buyers’ sensibilities.
A survey of more than 22,000 consumers carried out by management consulting firm Deloitte found that US consumer interest in advanced vehicle automation has increased steadily over recent years. Of the 32 features tested in the study, the top five among US consumers are safety-related. These include technologies that recognize the presence of objects on the road and avoid collisions, and that automatically steer the driver away from dangerous driving situations.
Interestingly, more glitzy features that help buyers to manage their daily lives were perceived as less useful. Examples are the ability to automatically pay tolls and control automated systems in homes. Deloitte argues that one of the main reasons these features are less appealing is that many consumers are already comfortable using their smartphones to accomplish such tasks.
Another factor to consider is the presence of automated trucks on roads. As I argue in my post A Slow Merge for Truck Automation, makers of commercial vehicles must navigate a host of speed bumps before self-drive trucks are plying our highways. It may be a long time before we see fully automated, dock-to-dock truck operation. In the meantime, we will see hybrid solutions, such as E2E (exit-to-exit) highway autonomous truck operation complemented by driver-controlled local delivery, in addition to some operational improvements such as platooning and assisted driving.
Today, the idea of self-drive trucks sharing the road hardly inspires confidence in car drivers. However, as consumer trust in self-drive technology builds and drivers become more accustomed to automation in their own vehicles, the image of a driverless commercial rig will surely become less alarming.
[i] The technical, legal and safety implications of self-drive technology will be discussed in detail at the 2018 Crossroads conference organized by the MIT Center for Transportation & Logistics. The event will take place on April 17, 2018, at MIT, Cambridge, MA.