Tesla Crashes Exemplify Deadly AI Co-Pilot Scenarios Every AI Developer Must Avoid
Chunka Mui
Futurist and Innovation Advisor @ Future Histories Group | Keynote Speaker and Award-winning Author
In a nod to the hope and hype sweeping across industries, the organizers of CES 2024, the annual consumer electronics extravaganza in Las Vegas, have announced their next show is going “All in on AI.”
CES 2024 is slated to be a super conference. It promises that 3,500+ exhibitors covering more than 40 product categories will show how “AI is playing an active role across the global industry,” “taking problem solving to epic new levels,” and “revolutionizing the user experience in everything from vehicle technology and healthcare to home security and agriculture.”
But, after the show and as those exhibitors take their products to market, I worry about the dangers of an increasingly common approach towards achieving “epic new levels of problem solving” by “revolutionizing the user experience.” I worry about AI co-pilots.
AI co-pilots like the recently announced Microsoft Copilot, GitHub Copilot, and Cordi health co-pilot seem sensible. Rather than automating human tasks, AI co-pilots keep human users “in the loop.” They watch over the user experience, gathering context, documenting, providing knowledge, and helping as the experience requires while keeping the human user in charge.
Early on, AI co-pilots are likely to play lesser roles, offering relevant information, advice, and assistance in limited situations. As AI capabilities advance, however, AI co-pilots will take on a larger share of situations, automating mundane cases and identifying complex cases that require human (or AI) attention and intervention. This trajectory will deliver ever more power and is, as Microsoft CTO Kevin Scott observed, “why AI co-pilots could be the start of an industrial revolution for knowledge work.”
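To make that escalation pattern concrete, here is a minimal, hypothetical sketch in Python of the triage loop described above: automate the routine, high-confidence cases and escalate everything else to the human in the loop. The Suggestion class, the handle function, and the confidence threshold are all illustrative assumptions, not drawn from any real co-pilot product.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """One proposed action from a hypothetical AI co-pilot."""
    action: str
    confidence: float   # the model's self-reported confidence, 0.0 to 1.0
    high_stakes: bool   # e.g., safety-critical or hard to undo

CONFIDENCE_THRESHOLD = 0.95  # illustrative cutoff, not a recommended value

def handle(suggestion: Suggestion) -> str:
    """Automate only routine, high-confidence cases; escalate the rest to the human."""
    if suggestion.high_stakes or suggestion.confidence < CONFIDENCE_THRESHOLD:
        return f"ESCALATE to human: {suggestion.action} (confidence {suggestion.confidence:.2f})"
    return f"AUTOMATE: {suggestion.action}"

if __name__ == "__main__":
    print(handle(Suggestion("autocomplete boilerplate code", 0.99, False)))
    print(handle(Suggestion("brake for crossing traffic", 0.60, True)))
```

The design worth noticing is the explicit escalation path; the rest of this article is about what happens to the user's attention when that path is almost never exercised.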
AI developers, however, need to guard against an insidiously dangerous scenario along this trajectory: AI co-pilots so good that they can handle most, but not all, situations and thereby lull users into a false sense of trust. This is especially dangerous when relatively rare instances of failure or misuse can cause significant harm to users and bystanders.
This is not a hypothetical problem, as a number of fatal user experiences involving Tesla’s AI-powered Autopilot driving system illustrate.
Though the system is named “Autopilot,” Tesla’s user terms and conditions warn of significant limitations and state that the driver remains ultimately responsible for driving the car. A warning even flashes on the Tesla’s screen when Autopilot is initiated:
Please keep your hands on the wheel. Be prepared to take over at any time.
Still, Tesla is not shy about trumpeting the capabilities of its system, including claiming that Autopilot is “safer than a human-operated vehicle.” This, a number of former transportation officials and other experts say, has created a false sense of security, as demonstrated by many YouTube videos of Tesla owners’ silly antics and, less amusingly, by a number of serious and sometimes deadly crashes.
In one case extensively reported by the Washington Post, a Tesla operating under Autopilot plowed into a semi-truck, shearing off the car’s top portion as it slid under the truck’s trailer. The driver was killed on impact.
When the driver activated Autopilot, he was travelling at 69 mph in a 55-mph speed zone. He was on a highway with cross-traffic, a situation Autopilot is not designed to handle. Two seconds after he started Autopilot, the driver’s hands were no longer on the steering wheel. Just a few seconds later, a semi-truck crossed in front of the Tesla from a side road. Tesla’s forward-facing cameras captured images of the truck. But, Autopilot’s AI vision system did not recognize the truck as a threat. Neither Autopilot nor the human driver activated the brakes as the car passed under the trailer.
A Washington Post analysis of federal data found that vehicles guided by Autopilot have been involved in more than 700 crashes, at least 19 of them fatal, since its introduction in 2014. The crash described above is one of at least ten active lawsuits involving Tesla’s Autopilot. Clearly, Autopilot did not perform as the drivers hoped. I assume Tesla engineers had hoped for better as well. The legal and liability outcomes hinge in large part on whether drivers are solely responsible when things go wrong with Autopilot, or whether Tesla should also bear some of the responsibility.
Another high-profile crash in 2021 involved a drunk Tesla owner crashing into parked emergency vehicles at more than 55 mph. According to a Wall Street Journal analysis, Tesla Autopilot had successfully driven the impaired driver for 45 minutes under normal highway driving conditions. Autopilot, however, then did not slow down or otherwise adjust as the car passed several stopped emergency vehicles with flashing lights on the side of the road. Seconds later, Autopilot was late in recognizing several stopped police cars in its own lane, also with flashing lights. Just 37 yards and 2.7 seconds before the crash, Autopilot attempted to veer and hand control back to the human driver. This was much too late. The Tesla plowed into the police cars at 54 mph, injuring the driver and five police officers. The officers are suing Tesla, but Tesla claims that the fault lies with the driver.
Years ago, long before these two accidents, I wrote an article for Forbes exploring whether Tesla was “racing recklessly towards driverless cars.” A critique offered by Don Norman, the renowned cognitive scientist and user experience designer, was that Tesla was being reckless because it was failing to design adequately for the key issue in human/robot interaction.
Norman argued that the most dangerous model of automation is the “mostly-but-not-quite-fully-automated” kind. When there is little for users to do, attention wanders. So, the more reliable the automation, the less likely the driver is to respond in time to take corrective action. Norman observed that this pattern has been demonstrated by “numerous studies over the past six decades by experimental psychologists.”
Norman faulted Tesla’s approach in several ways, including inadequate sensors, lack of understanding of how real drivers operate, lack of careful testing, and lack of driver training. For Norman, Tesla’s history of responding quickly as safety concerns arose highlighted the inadequacy of its initial testing.
Good for Tesla, but it shows how uninformed they are about real-world situations. Tesla thinks that a 1 in a million chance of a problem is good enough. No. Not when there are 190 million drivers who drive 2.5 trillion miles.
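To make the arithmetic in Norman’s objection concrete, here is a minimal back-of-the-envelope calculation. It assumes his “1 in a million” figure means one failure per million miles driven, an interpretation the quote itself does not spell out:

$$2.5 \times 10^{12}\ \text{miles} \times \frac{1\ \text{failure}}{10^{6}\ \text{miles}} = 2.5 \times 10^{6}\ \text{failures}$$

Under that reading, the 2.5 trillion miles Norman cites would produce roughly 2.5 million failures, which is why a one-in-a-million error rate is nowhere near good enough at fleet scale.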
It is unclear how much Tesla took earlier critiques like Norman’s to heart, though its more recent track record indicates it has more work to do.
Tesla was clearly an early adopter of the AI co-pilot approach, however. As companies and developers across other industries pursue this approach, they have an opportunity to learn from experiences like Tesla’s, pay closer heed to experts like Norman, and avoid making the same kinds of mistakes. Tesla’s track record, despite its immense expertise, experience, and resources, highlights the difficulty of the challenge.
Don’t get me wrong. I’m on record as an optimist about how AI and other rapidly advancing technologies are ushering in the biggest personal and professional opportunities of our lives, including how they could improve transportation, reshape healthcare, transform insurance, and address climate change.
But, with great power comes great responsibility.
One clear responsibility is understanding the danger, not just the power, of “mostly-but-not-quite-fully-automated” AI co-pilots. Doing so is critical for safely and responsibly taking problem solving to epic new levels and revolutionizing the user experience.
--