Death Race 2000: How Safe Will Autonomous Vehicles Need To Be?
The technology behind autonomous navigation is developing quite rapidly. From our early investment in Cruise, we’ve gotten a vantage point on just how much progress has been made in the past three years alone. But if I’m confident about the tech, I’m less sure about the target. That is, how “safe” do autonomous vehicles need to be? And what is today’s reality? There were 35,092 automotive deaths in the US in 2015 (on a per capita basis, down significantly from the 1970s).
So is the target number for autonomous vehicles 35,091 deaths per year? That seems rational, but it would require that we *not* overreact to headlines blaming deaths on errors by the technology.
Or is the number some smaller fraction of current fatalities — say 10,000 deaths per year — on the theory that, emotionally, the benefits will need to be quantitatively significant in order to make drivers (and regulators) comfortable with giving up control to the machines?
Maybe, controversially, we should be able to tolerate more deaths per year in the move to autonomy. Certainly if you look at deaths per capita over the past 100 years, there were eras with rates nearly 3x today’s! And if autonomous vehicles add value in aggregate because they support faster, more efficient, more relaxed shipping and travel, shouldn’t we tolerate more danger in return?
Beyond the general psychology of safety and autonomy, where we end up on this “how safe” spectrum will also influence two important factors: how quickly we attempt to move from 0% to 100% autonomous, and the role of insurance.
In terms of rollout velocity, it’s fairly noncontroversial to suggest that if every car were autonomous (and the cars “talked” to each other continuously while maintaining a shared or similar “safety” ruleset), the roads would be safer than with a 50/50 mix. Has anyone seen a graph that plots autonomous vehicles as a percentage of active vehicles against projected automotive accident rate? These forecasts will certainly influence the regulatory and financial incentives that accelerate autonomous density once the technology crosses mainstream viability. It’s a classic “it’ll happen slowly, then very quickly” dynamic.
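Absent such a graph, here’s a minimal back-of-the-envelope sketch of how one might be generated, assuming (purely for illustration) that accident risk comes from pairwise vehicle interactions, with made-up relative weights for human-human, human-AV, and AV-AV encounters. None of these numbers come from real data; the shape of the curve is the only point.

```python
# Toy model: projected relative accident rate vs. autonomous-vehicle share.
# The risk weights are hypothetical placeholders, not real statistics. The
# human-AV weight is kept high on the assumption that a human driver near an
# AV is still nearly as unpredictable as one near another human.

def projected_accident_rate(av_share: float,
                            human_human_risk: float = 1.0,
                            human_av_risk: float = 0.9,
                            av_av_risk: float = 0.1) -> float:
    """Relative accident rate for a fleet where av_share of vehicles are
    autonomous, assuming random mixing and risk driven by pairwise encounters."""
    h = 1.0 - av_share
    a = av_share
    return (h * h * human_human_risk
            + 2 * h * a * human_av_risk
            + a * a * av_av_risk)

if __name__ == "__main__":
    for pct in range(0, 101, 10):
        rate = projected_accident_rate(pct / 100)
        print(f"{pct:3d}% autonomous -> relative accident rate {rate:.2f}")
```

With these placeholder weights, most of the safety gain arrives late: at a 50% autonomous share the relative rate is still roughly 0.7, and it only falls toward 0.1 as the fleet approaches full autonomy, which is one way to formalize the “slowly, then very quickly” intuition.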
The second question I have is about the role of insurance. As distasteful as it may seem, we do today have ways to value a life based on a number of factors including age, race, vocation, geography and so on. The “how safe” question might be moot (excluding corporate negligence) if we’re comfortable with allowing insurance to fill the gap between desired safety and actual safety. That is, let’s say autonomous vehicles were twice as unsafe as cars today. Do we need as a society to wait until safety improves, or can we use financial compensation as a mechanism to bridge the gap? And whose insurance? The driver’s, as it sits today? Or will the manufacturers (Tesla, GM, etc.) need to carry liability insurance as a passthrough if it’s decided that the driver wasn’t at fault in an accident but the algorithm was?
My Blog | Twitter | Snapchat | Medium | LinkedIn | Facebook | Instagram
Sales Manager at DeJong Operations Management & Consulting LLC
Psychologically speaking, I think it comes down to a concept of control - in a normal vehicle, a person accepts responsibility for their actions and (in a lot of ways) determines their own fate based on their skill. In an AV one has no control, meaning that the onus for safety falls squarely on the manufacturer's shoulders. The closest current equivalent that I can imagine would be the fact that car manufacturers are currently held liable in the event of a mechanical failure (recalls and settlements). If EVERY wreck were due to manufacturer defects, failures, or programming, that lends a general air of unreliability along with a huge impact on the insurance industry. How good do the odds need to be before you'll trust a machine? If one in a thousand kills someone? One in ten thousand? Great post, interesting questions!
Hunter, theoretically insurance should be able to fill in the gap between desired and actual safety while deploying AVs... however, when it comes down to your direct family (your partner, parents and/or kids), how much of a gap is acceptable?
Head of ML Operations Engineering for #GenAI
It is just alarming how easily terms like "financial compensation" and "value a life" are thrown around when we are talking about people. To the point: (1) in the well-known Tesla accident the car had seven seconds to react; (2) Tesla advises you to put your hands on the wheel and keep your eyes on the road. The latter shows that even Tesla itself does not trust its car to take you home on its own. Otherwise they would say: "Relax, read your paper while we take you home." Finally, insurance companies have lobbied to enforce speed limits just to keep their profit margins safe. Does anyone think they would accept paying double the costs just so someone could brag "I have the AI taking us home, baby"? Maybe, maybe not! But what is real tech and what is useless tech needs to be decided at some point.
Java | Spring Boot | AWS | Azure DevOps | TypeScript
People get much more scared of technology mistakes than of human mistakes. A human mistake is 'easily' corrected by the next person learning from someone else's mistake. We still don't trust technology to 'learn' from previous mistakes as fast as humans do. Autonomous cars will become the norm in a few years, car accidents will go down, and insurance companies will lose a big chunk of their business.