Self-driving cars still have a long road ahead of them
AI hype must not mislead us: It will be a long while before they are fed sufficient data for road safety
Siddharth Pai is co-founder of Siana Capital, a venture fund manager.
Tesla is recalling more than 2 million vehicles, nearly all the electric cars it has sold in the US to date, to fix a flawed system designed to ensure drivers pay attention when they use ‘Autopilot,’ a driver-assistance module that holds the promise of giving us self-driving vehicles one day. According to Wired, like many advanced driver-assistance systems, Autopilot requires the driver to keep their hands on the steering wheel, although some drivers have worked out that the system can be fooled by hanging a weight from the wheel. In extreme cases, drivers have been found in the back seat of their vehicles while Autopilot was in charge. Moreover, the system does not immediately disengage when it senses that the steering wheel has been left unattended. At highway speeds, this delay means the vehicle could travel for over a kilometre before the system reacts to a driverless situation.
For now, rather than physically recalling vehicles, Tesla plans to send a software update in an attempt to fix the problem, according to documents posted on 13 December by the US National Highway Traffic Safety Administration (NHTSA). The recall comes in the midst of an ongoing two-year NHTSA investigation into crashes caused by Autopilot, of which there have evidently already been 19. Driverless cars need artificial intelligence (AI) software fed with copious amounts of carefully categorized data to be effective. Anything that is sloppily characterized can easily cause the machine to draw wrong conclusions. In the realm of autonomous or self-driving automobiles, for instance, scores upon scores of low-skilled workers pore over endless hours of video footage in order to label everything a car might encounter, such as road signs, stop lights, pedestrians, bicyclists and other motor vehicles, so as to correctly categorize the data that these automobiles’ artificially intelligent driving programs rely on.
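To make the labelling process concrete, here is a minimal, hypothetical sketch in Python of what a single annotated video frame might look like once human labellers have done their work; the category names, fields and coordinates are invented for illustration and do not reflect any carmaker's actual format.

```python
from dataclasses import dataclass

@dataclass
class LabelledObject:
    """One object an annotator has marked in a single video frame."""
    category: str                        # e.g. "stop_light", "pedestrian", "bicyclist"
    bbox: tuple[int, int, int, int]      # (x, y, width, height) in pixels

# A hypothetical frame after human labelling: every relevant object is
# assigned a category that the driving software's perception model trains on.
frame_annotations = [
    LabelledObject("stop_light", (410, 120, 30, 80)),
    LabelledObject("pedestrian", (615, 300, 60, 160)),
    LabelledObject("other_vehicle", (200, 340, 220, 140)),
]

# Sloppily characterized data is the problem flagged above: if the pedestrian
# here were mislabelled as, say, a road sign, the model would quietly learn
# the wrong lesson from this frame.
pedestrians = [o for o in frame_annotations if o.category == "pedestrian"]
print(f"{len(pedestrians)} pedestrian(s) labelled in this frame")
```

Multiply this by millions of frames and the scale of the labelling effort, and the cost of getting it wrong, becomes apparent.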
In the absence of real-world data, AI programmers have been known to make up artificial data stores in order to feed their programs with enough data to operate. This ‘dummy data’ can be useful in some arenas, but exceedingly dangerous in others. Where does one go to study the circumstances under which a self-driving automobile, like the Tesla that killed its occupant in 2016, might have another such accident? Enough instances of that crash haven’t occurred, and so the data that is needed doesn’t exist. Building predictive models here without data is risky. It is not a ‘neural network’; it’s neurotic!
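As a rough illustration of the ‘dummy data’ idea, and nothing more, here is a hypothetical Python sketch that fabricates crash-scenario records from a random generator; every field name and value range is an assumption made purely for this example.

```python
import random

random.seed(42)  # reproducible, but entirely made-up, data

def make_dummy_scenario() -> dict:
    """Fabricate one synthetic crash-scenario record."""
    return {
        "speed_kmph": round(random.uniform(40, 120), 1),
        "obstacle": random.choice(["crossing_truck", "road_debris", "stalled_car"]),
        "lighting": random.choice(["day", "dusk", "night"]),
    }

# A thousand synthetic records standing in for real-world crashes that have
# rarely or never happened.
dummy_data = [make_dummy_scenario() for _ in range(1000)]
print(dummy_data[0])

# The danger noted above: any model trained on these records can only reflect
# the assumptions baked into the generator, not how real crashes unfold.
```

The sketch shows why such data is tempting (it is cheap and plentiful) and why it is dangerous (it encodes the programmer's guesses, not reality).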
According to Bloomberg, by 2022, over $100 billion had already been spent on trying to sort out the myriad problems associated with creating truly autonomous vehicles that can navigate the open road. Bloomberg recounts a hilarious story of a line of self-driving cars constantly using one poor woman’s driveway to perform K-turns (three-point turns, as we know them in India), which allow a car to change direction while it is still in the middle of a street. But the woman’s driveway isn’t a street; it is her private property. Evidently, the cars performing this intrusive manoeuvre came from Google’s driverless-car subsidiary, Waymo. The hapless lady complained several times to Google about these non-stop incidents, but to no avail, until, in exasperation, she called a local TV station and a news crew broadcast a video of the hilarious scene. Soon after, her private property was clear. Waymo disputes that its tech failed and said in a statement that its vehicles had been “obeying the same road rules that any car is required to follow.”
While AI technologists would have us believe that computers are a lot better than human beings at making decisions, that is simply not true. They can run mathematical calculations and operations faster than we can, for sure, and today’s AI can spit out predictions from models that use enormous amounts of data. But, to be clear, these are mathematical models that are a couple of hundred years old, even if they were codified for computer use only around 40 years ago. These models are bereft of certain types of intelligence. For instance, if I see crows on the road, I am unlikely to hit the brakes, as I would if I were approaching a human or a larger animal, say a cow. I know that crows are likely to fly away as they sense my vehicle approaching, whereas a human or a cow may not get out of the way soon enough. As an aside, I drove from Coonoor to Coimbatore the other day, and knew I should freeze at the sight of a bison.
An autonomous car of today might see the crows as an obstruction and swerve out of the way in response, as programmed, even though this would be dangerous to other drivers on the same road who presumably wouldn’t make the same mistake. It will take many more years of labelling data for such potential instances and then feeding all of it into AI programs for these vehicles to come anywhere close to a human’s ability to drive a vehicle on an open road.
The only place I have seen autonomous vehicles work with any level of reliability is in tightly controlled situations, such as in and around a warehouse, or on a highly automated factory floor, where repetitive and predictable vehicle movements can be automated.
This proves, yet again, that the automation of tasks using AI is not foolproof by any stretch of the imagination, and we should resist attempts by technologists to brush safety aside. By pressing Tesla on Autopilot, US traffic safety authorities are doing just that.
This article first appeared in print in Mint and online on Livemint.