The Fallacy behind Self-Driving Cars
Driver and Passenger generated using MidJourney. Edited using Dall-E. Used under MidJourney Commercial License.

Autonomous cars are seen as one of the ultimate destinations of AI research, and major corporations like Alphabet (Google), Nvidia, and Tesla are investing heavily in R&D. Yet I am afraid that many assumptions about self-driving cars are simply untrue. Not only do I think we are decades away; it is an open question whether we will ever get there. Let's examine the fallacies behind the belief that autonomous cars are just years away.

Driving Just as Good as Humans

Sergey Brin, co-founder of Google (later Alphabet), once said about their autonomous car project: "We don't claim that the cars are going to be perfect. Our goal is to beat human drivers." Implied in that quote is the logic that once machines beat humans at driving, we should let the machines drive. At first glance that seems entirely logical, right? But is it?

A recruitment poster for self-driving cars
Poster Art generated using MidJourney. Edited Manually.

In 2021, there were 42,915 fatalities from motor vehicle accidents on American roads. That is approximately 117 deaths per day. From that we can conclude that humans might not be very good at driving. But we can also do a thought experiment. Would we accept 117 deaths per day... or, say, 100 deaths per day from accidents with self-driving cars? At 100 deaths per day, or 36,500 per year, autonomous cars would be statistically better than humans at driving. Call me a pessimist, but I do not think there will be any acceptance for that number of fatalities. The question then is: how many fatalities per year will we accept from self-driving cars? 0... 10... 100... 1,000... 10,000?

What we can conclude from a little napkin math and a thought experiment is that the bar is not beating human drivers... it is beating human drivers by a considerable margin.
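The napkin math above is simple enough to check in a few lines. This sketch just restates the article's numbers; the 100-deaths-per-day figure is the article's hypothetical, not a real projection:

```python
# Napkin math: 2021 US road fatalities versus a hypothetical
# self-driving fleet that "beats" humans statistically.
fatalities_2021 = 42_915
per_day_human = fatalities_2021 / 365            # ~117.6 deaths/day

per_day_av = 100                                 # hypothetical AV figure
per_year_av = per_day_av * 365                   # 36,500 deaths/year

improvement = 1 - per_year_av / fatalities_2021  # ~15% fewer deaths

print(f"Human drivers: {per_day_human:.1f} deaths/day")
print(f"Hypothetical AVs: {per_year_av:,} deaths/year ({improvement:.0%} better)")
```

Even a fleet that is statistically ~15% safer still leaves 36,500 deaths a year on the table, which is the crux of the acceptance problem.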

Because the average driver is pretty bad at driving, maybe what we psychologically mean by "better than human" is better than the average driver... perhaps even better than a professional driver?

A fake boxing match poster with a match between an average driver and a professional driver
Subjects generated by MidJourney. Edited Manually. Used under MidJourney Commercial License.

It is surprisingly hard to find any data on how much better a professional driver is than an average one. In fact, professional drivers are often in more accidents, simply because they spend much more time on the road. There is also no common definition of what a professional driver is. But it is safe to say that if I stepped into an autonomous taxi, my expectation would be that the car drives better than a good limo driver.

Self-Driving Cars will have different kinds of accidents

Another misconception about autonomous cars is that a self-driving car and a human-operated car will have the same types of accidents. This is another fallacy. To examine it, we need to take a step back and look at AI in general. An autonomous car uses several AI systems to make traffic decisions: object detection systems that recognize cars, pedestrians, bicyclists, road signs, lane markings, and so on; systems that predict the movement of objects; systems that plan routes; and a ton of others. All AI systems make mistakes, because they operate on confidence. Internally, the signal passed from one system to another is along the lines of: "I am 90% sure that is a pedestrian, 80% sure it's a child." Thus AI makes mistakes that are not bugs. Well, humans make mistakes as well, so if the AI makes fewer mistakes than a human, it must be better than a human, right?
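A toy sketch of how confidence flows between subsystems makes this concrete. The function names and numbers here are invented for illustration and do not reflect any real autonomy stack:

```python
# Toy illustration: each AI subsystem emits a label plus a confidence,
# and downstream systems must act on both. All names/values invented.
def detect_object():
    # Detector: "I am 90% sure that is a pedestrian."
    return ("pedestrian", 0.90)

def classify_object(label):
    # Classifier: "I am 80% sure it's a child."
    return ("child", 0.80)

label, p_detect = detect_object()
subtype, p_class = classify_object(label)

# Naively assuming independence, the planner's belief that there is
# a child ahead is only the product of the two confidences: 0.72.
p_child = p_detect * p_class
print(f"{label}/{subtype}: confidence {p_child:.2f}")
```

The point is that uncertainty compounds as it propagates: two individually confident subsystems can still leave the planner with a much weaker belief, and none of this is a bug.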

Not exactly. Let's hypothetically say that the AI makes a mistake 10% of the time, and a human makes a mistake 10% of the time. One could claim that, statistically, the AI performs as well as humans on the task. With me so far? Well, in reality, the 10% of cases the AI gets wrong are different from the 10% that humans get wrong.

Machines and humans make different mistakes
Drawings generated by MidJourney. Manually Edited. Used under MidJourney Commercial License.

Since humans and machines make different mistakes, they will end up having different kinds of accidents. This fact leads to quite a few logical conclusions:

  • A significant portion of accidents involving self-driving cars will be perceived as avoidable by human drivers.
  • The accidents that humans would have gotten into, but the machine did not, will only show up in aggregated statistics. Humans are unlikely to notice, or give the machine credit for, accidents that never happened.
  • Many accidents involve two parties, with only one party causing the accident. In a hypothetical situation with 50% autonomous and 50% manually driven cars, could the difference in which mistakes are made lead to more accidents rather than fewer, even though machines are as good as humans in isolation?
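The third point can be illustrated with a toy model. The 10% failure rates and the perfectly disjoint failure sets are hypothetical, chosen only to make the effect visible:

```python
# Toy model: humans and machines each mishandle 10% of scenarios,
# but *different* scenarios (disjoint failure sets, by construction).
N = 100_000                                    # hypothetical scenarios

human_fails   = lambda i: i % 10 == 0          # scenarios 0, 10, 20, ...
machine_fails = lambda i: i % 10 == 5          # scenarios 5, 15, 25, ...

human_rate   = sum(human_fails(i) for i in range(N)) / N     # 0.10
machine_rate = sum(machine_fails(i) for i in range(N)) / N   # 0.10

# In a two-party encounter between a human and a machine, a scenario
# is dangerous if EITHER party mishandles it. Because the failure
# sets are disjoint, a mixed pair is exposed on 20% of scenarios.
mixed_exposure = sum(human_fails(i) or machine_fails(i)
                     for i in range(N)) / N                  # 0.20

print(human_rate, machine_rate, mixed_exposure)
```

Each agent is "equally good" in isolation, yet a mixed fleet faces twice the exposure in encounters between the two, because neither party's strengths cover the other's weaknesses.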

Our Psychological Barriers to Adoption

A self-driving car cannot be punished, cannot feel remorse, cannot ask for forgiveness. And just as it is hard to blame a machine for an accident, because the machine has no operator, it is hard to trust one.

An electronic device that looks like a brain
Photo generated by MidJourney. Used under Commercial License.

Trusting a machine that makes mistakes, mistakes that might potentially kill you, is a huge psychological step that we might not be ready for as humans. I tend to say that technological adoption is slowed by psychological adoption. In the case of self-driving cars, I am pretty confident that the psychological adoption of the technology will slow it down tremendously.

So where are we then?

Looking at all this together, I believe we need to achieve airplane-grade safety before we will see any mass adoption of self-driving technology. The technology is likely to be available for years, exceeding human-level performance, and still not see widespread use. To achieve airplane-level safety, there needs to be a rethink of infrastructure: smart sensors in roads and signs, interconnects between cars and the infrastructure, and centralized traffic control systems that can signal the cars. Only then will the psychological barriers be overcome - and that is decades away.

#autonomouscars #autonomousdriving #selfdriving #ai

I made the images and illustrations for this article using MidJourney and some Dall-E magic. It was a ton of fun and will definitely be the subject of a future article.

Shankar Bhaduri

GTM Leader/Start Up/Scale up/Board Advisor/Mentor

1 year ago

Great article Magnus Revang. In my humble assessment, a hybrid system is more likely, where AI assists a human driver and avoids, say, 60-80% of accidents. Completely autonomous cars may remain a distant possibility, limited to controlled environments and tracks that can take the drudgery out of the majority of journeys.

Rob McDougall

President and CEO of Upstream Works Software | Helping to Operationalize AI in the Contact Center | Agent Desktop Expert | Business Technology Advisor

1 year ago

Late last year, due to a lack of progress on commercialization, Ford and VW pulled out of the self-driving game with Argo, which promptly shut down. Many good points here, and there are others. Business Week has a great article about it (https://www.bloomberg.com/news/features/2022-10-06/even-after-100-billion-self-driving-cars-are-going-nowhere#xj4y7vzkg but paywalled).

Very plausible analysis that I completely dig! Awesome images. I will have to take Dall-E and MidJourney for a spin!

Rolf Frydenberg

Daglig leder (CEO) at Manag-E Nordic AS

1 year ago

I completely agree with you, Magnus. But #AI still has its uses in cars, as driver assist systems. And in some situations it might be allowed to control the journey, along special highways where the sensors and traffic control systems are in place.
