Heavy Foot: Accelerating the Pace of Autonomous Vehicle Deployment

Autonomous vehicle technology has been in development for years, but it's not fully commercialized yet because it's simply not good enough. Autonomous vehicles (AVs) pose a threat to human safety so long as they are not as good as humans at detecting obstacles and making split-second decisions in the face of unpredictable events - a person deciding to jaywalk, a cyclist swerving to avoid a pothole, or the driver of a non-autonomous vehicle looking at a text message for a moment too long. Concerns about human safety are delaying the proliferation of AVs, but I'll propose a solution that puts autonomous vehicles on the road sooner. I'll also make some comments on the effectiveness and limitations of artificial intelligence in powering the digital 'brains' behind AVs.

An AV is a vehicle engineered so that a technology platform navigates it from A to B with no human intervention beyond the passenger specifying a destination (and it won't be long before your synced Google Calendar sends that information to the vehicle for you). An AV can transport people or goods, and we now see them on the roads in innovation centres such as Silicon Valley, with humans carefully supervising from the driver's seat.

As a product manager, I would love to own the responsibility of bringing an awesome product such as the AV to market as soon as humanly possible, so I am frustrated and impatient with the speed at which the industry seems to be advancing toward that end. AV technology is going to change our world with a force rivalled by few recent innovations, and AVs are going to take over as the leading product of the sharing economy. In consummate product-manager spirit, at the end of this article I propose a way to get AVs onto our roads sooner than today's pace of development suggests is possible.

There is no shortage of funds available to ready AVs for commercialization, so why hasn't it happened yet?

It's obviously about safety. The idea that an autonomous 'self-thinking' machine could harm human beings makes us uncomfortable. Engineers can get AVs operating at very high safety rates, but not at perfect rates yet. So when will we (governments and industry) decide that these vehicles are safe enough? It's a tough question. Over one million people perished on roads worldwide in 2018, according to the WHO. Once commercialized, the proliferation of AV technology will cause this number to drop. But despite this improvement, some of these fatalities will inevitably be caused by AVs - by machines. We scarcely bat an eyelash anymore when we hear about a road death involving human drivers, but a single, recent death involving an AV drew wide media attention last year. Perhaps in the future every vehicle on the road will be an AV, and pedestrians and cyclists will have their own space. Theoretically, under those conditions there could be zero AV-induced road deaths. But until such a time, we are going to have to get comfortable with the occasional fatality. If the software engineers do their jobs well, these deaths will far more often be the fault of the humans sharing the roadways than of the AVs.

So what are engineers doing to eliminate AV-caused accidents? Machine learning is heavily used in this effort. One difference between machine learning and more primordial forms of computer intelligence is that older software was essentially a large collection of rules, inputs, and outputs, all of which could be communicated to a machine with no ambiguity.
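To make the contrast concrete, here is a minimal, purely illustrative sketch: the first function is a hand-written rule in the old style, while the second has its behaviour fitted from labelled examples. The function names, threshold, and toy data are my own assumptions for illustration, not anything drawn from a real AV stack.

```python
# Illustrative contrast between hand-coded rules and learned behaviour.
# All names, thresholds, and data here are hypothetical.
from sklearn.linear_model import LogisticRegression

# Old style: an explicit rule the programmer spells out with no ambiguity.
def should_brake_rule_based(distance_m: float, speed_mps: float) -> bool:
    # Brake if time-to-obstacle drops below a hard-coded 2-second threshold.
    return speed_mps > 0 and (distance_m / speed_mps) < 2.0

# Machine-learning style: no rule is written down; it is fitted from labelled examples.
X = [[50.0, 10.0], [10.0, 15.0], [80.0, 20.0], [5.0, 8.0]]  # (distance_m, speed_mps)
y = [0, 1, 0, 1]                                            # 1 = a human driver braked here
model = LogisticRegression().fit(X, y)

def should_brake_learned(distance_m: float, speed_mps: float) -> bool:
    return bool(model.predict([[distance_m, speed_mps]])[0])
```

The learned version behaves sensibly only to the extent that its examples cover the situations it will face - which is exactly why the labelled data discussed below matters so much.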

But the intelligent software that propels AVs has to do something that old software didn't do: interpret the 'visual' landscape. AV brains have to identify objects on our streets by means of cameras and sensors: cars, bicycles, potholes, cyclists, shopping carts, mailboxes, curbs, driveways and many of the other expected and less-expected objects on the road. And how does machine learning 'learn' this? It learns by analyzing millions of images of these objects. When you complete a 'CAPTCHA' exercise on a website where you are asked to identify all the images of a tree (for example) in a grid of nine images, you are one of the millions of people helping a machine learn. One of the challenges with visual interpretation is that machines don't understand the content of an image nearly as well as humans do. Many factors introduce huge variability into images: proximity to the object, lighting, camera resolution, exposure settings, air quality, and camera shake, to name just a few. A person with decent vision can interpret the gamut of objects found on the road far more effectively than a machine.
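As a rough illustration of what 'learning from millions of labelled images' looks like in code, here is a minimal sketch that fine-tunes a pre-trained image classifier on a hypothetical folder of labelled street-object crops. The folder layout and class names are assumptions made for the example; real AV perception pipelines are vastly larger and more sophisticated than this.

```python
# Minimal sketch: teaching a classifier to recognise street objects from labelled images.
# The data folder and its class names are hypothetical stand-ins for crowd-labelled crops,
# e.g. data/fire_hydrant/*.jpg, data/cyclist/*.jpg, data/mailbox/*.jpg
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pre-trained on generic photos and re-train only its final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the labelled images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The crowd-sourced labels play the same role here as the CAPTCHA answers described above: they are the 'ground truth' the network is repeatedly corrected against.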

By harnessing the power of millions of humans interpreting images of trees, cars, trucks, crosswalks, and traffic lights, AV software is rapidly learning to identify obstacles on the road. Is this really intelligence and learning? I can't help but think that even though software engineers have inculcated the concept of a fire hydrant into a machine by showing it countless images of that object, this sort of learning is rote learning rather than learning by inference and deduction - what I would call real intelligence.

We're calling it machine learning, but are we teaching machines how to learn? We can all attest to the uniformity of Silicon Valley's amply-photographed streets, trees, sidewalks and other mainstays of the road. It's a great start as a basis for developing software that safely navigates an AV. But will engineers have to CAPTCHA the rest of the world - which is so unlike Silicon Valley - before AVs are roadworthy? How much more colossal a machine learning effort will be required before AVs can share the roads safely in cities that have intersections that look like this:

An intersection just outside of Silicon Valley

What can be done, then, to bring AV technology to market quickly and without making our roads more dangerous? My solution is an iterative one, but it delivers immediate results. The AV industry should divert more of its focus from passenger transport to highway transportation of goods by semi-trailer trucks. Uber, Waymo, and every other company researching this technology should be acquiring plots of land immediately off highway exits at city limits. On these plots of land, human drivers would wait for autonomously driven semi-trailers to arrive from the highway, at which point a driver boards the cab and takes control of the vehicle to bring it those last few complicated miles through the city to its final destination. Drivers would also bring empty and full trailers to those highway access points, where AV technology can take over and send the vehicle to its next destination along the highway.

Highways contain far fewer variables for software to interpret than city streets do. There are no fire hydrants, parked cars, cyclists, or double-parked vehicles. And given the height of a semi-trailer, sensors mounted towards the top of the trailer should easily be able to scan quite far ahead for sudden traffic changes and dangers. I believe that the limited and controlled environment of a highway is the ideal venue in which to bring AV technology to market first, and I think that reliable and effective software for this purpose can be ready very soon - if it isn't available already. Once commercialized, the ample data collected from AV semi-trailer journeys will be invaluable to companies working on passenger transportation, and will help them reach their objective of autonomous passenger transportation sooner.
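To put rough numbers on how far ahead a highway-mounted sensor needs to see, here is a back-of-the-envelope sketch; the speed, deceleration, and reaction-delay figures are assumptions chosen for illustration, not specifications from any real vehicle or AV programme.

```python
# Back-of-the-envelope stopping-distance check for a highway scenario.
# All figures are illustrative assumptions, not real vehicle specifications.

def stopping_distance_m(speed_kmh: float, decel_mps2: float, reaction_s: float) -> float:
    """Distance covered during the reaction delay plus the braking distance v^2 / (2a)."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2 * decel_mps2)

# A loaded truck at 100 km/h, braking at ~3 m/s^2, with 0.5 s of sensing/actuation delay:
needed = stopping_distance_m(100, 3.0, 0.5)
print(f"Clear road required ahead: ~{needed:.0f} m")  # roughly 140-145 m
```

Under these assumed numbers, a sensor suite that reliably detects stopped traffic a couple of hundred metres ahead leaves a comfortable margin - a far easier target on an open highway than at a cluttered urban intersection.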

Ojas Sinha

Engineer | Writer | Zalando


Highways are definitely the less fatal zone for humans, since the traffic pattern is more organised, but that also limits the learning: the software sees only a few generic patterns each day to learn from.
