Are You Smarter than Your Car?
A fully autonomous electric bus deployed at the Hong Kong Science Park

Who should be in control, you or your car? 

As 2017 drew to a close, I thought about what I've learned in twenty years of wrestling with robots (and the issues that surround them). I often say we need to put the human front and center, but I recently realized that I have not been putting my money where my mouth is. At one point I had a dozen human factors professionals working with me, but on recent projects the number has dipped to zero. Meeting recently with Don Norman at UCSD, I realized that I need to reach out more to my colleagues in human-centered design. If we don't ask the right questions at the start, it is unlikely we will be happy with the result. In considering the design problem of creating autonomous roadways, the primary question Dr. Norman and I discussed is a very basic one: who should be in control, you or your car?

I worked on my first funded project to develop a fully autonomous robot in 1997. In that particular project I succeeded in creating a robot that could vacuum an entire office building with no human intervention, but as with so many autonomous robot projects, it failed to solve the real problem: effectively cleaning floors. My goal at the time was proving full autonomy -- the ability of the robot to work "all by itself." The actual vacuum mechanism was an afterthought, and the robot really was not very effective at cleaning the building. It seems to me that many of the strategies for autonomous driving fall into a similar trap. Companies intent on removing the human driver may lose focus on the real problems, like urban congestion and greenhouse gas emissions. To maximize impact, roboticists may need to look beyond the robot to the system-level issues and opportunities.

From a system design perspective, the foremost issue is how to share control between the human and the robot. While most researchers and car companies are focused on the capability of the vehicle, I spend my time wondering about the ability of the human to cope with a "mixed-initiative" system. In a mixed-initiative system, neither the human nor the robot is necessarily "in charge." Instead, both have roles and responsibilities and must work together as a team. It sounds like a reasonable prospect, but getting the allocation of roles right is not easy and depends largely on the situation. How the system should work in a snowstorm on a desolate mountain pass may be different from how it should function on the Los Angeles 405.
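
To make the idea concrete, here is a minimal sketch of how a mixed-initiative policy might shift authority with the situation. The contexts, thresholds, and roles are illustrative assumptions of mine, not the logic of any deployed system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    HUMAN_LEADS = auto()  # human steers; AI monitors and advises
    SHARED = auto()       # human provides intent; AI handles fine control
    AI_LEADS = auto()     # AI drives; human supervises and can veto

@dataclass
class Context:
    visibility_m: float     # estimated sensing/visual range in meters
    map_confidence: float   # 0..1 confidence in prior map data
    traffic_density: float  # vehicles per kilometer of lane

def allocate(ctx: Context) -> Role:
    """Illustrative mixed-initiative policy: authority is an output
    of the situation, not a fixed property of the system."""
    # Degraded sensing or a stale map (the snowstorm on a mountain
    # pass): human judgement leads, AI assists.
    if ctx.visibility_m < 50 or ctx.map_confidence < 0.3:
        return Role.HUMAN_LEADS
    # Dense stop-and-go traffic (the LA 405): fast, precise machine
    # reactions lead, human supervises.
    if ctx.traffic_density > 60:
        return Role.AI_LEADS
    return Role.SHARED

print(allocate(Context(visibility_m=30, map_confidence=0.2, traffic_density=5)))
# -> Role.HUMAN_LEADS
```

The point is not the particular thresholds, but that "who is in charge" becomes a decision the system revisits continuously rather than a constant baked in at design time.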

Those advocating for a fully autonomous roadway argue that the human will mess everything up. One can’t drive for very long on today’s highways without noticing evidence for this line of thought. Who doesn’t want efficiency, especially when we sit stuck in traffic? However, before we concede control, we should consider what we are signing up for. Do you really want your car driven by servers in Silicon Valley, run by corporations that want to monetize your data? From an ethics perspective, the concern is that focusing too narrowly on system efficiency without regard to the human individual could lead to unintended loss of privacy, control, and ultimately individual choice.

In some areas the supremacy of AI is already at our doorstep. In fact, while we’ve been worried about individual robots like the Terminator, it seems we’ve conceded a great deal of control to disembodied AI. Uber Pool is a great idea, but its growth means that the legions of Uber drivers can no longer choose whom to pick up. The system tells them when to turn left and when to turn right. They can opt out, but then they don’t get paid. In this context, it seems a bit like AI systems are turning humans into robots. We surrendered control all too readily.

I might be willing to surrender some of my rights for the benefits of greater efficiency, but AI is by no means a panacea. Ride sharing was touted as a means to reduce congestion, but that does not seem to be the case. An increasing number of cities report that current ride sharing can make downtowns more congested. Inaccurate GPS may direct cars to the wrong corner, and when drivers blindly follow the instructions of the servers in Silicon Valley, they often drive in circles. Despite the increasing number of safety systems in cars, accidents are going up rather than down. Many human factors experts think the increase may be because people are all too happy to depend on the AI in the car to pay attention for them. When a crisis does occur, there may not be enough time for the human to shift from watching Stranger Things to coping with an impending car crash.

As a roboticist I have other, more technical, concerns as well. When the environment degrades (e.g. a thunderstorm, blizzard, earthquake, explosion, or parade), today’s AI systems fail, and fail hard. Even commonplace occurrences like roadway construction, faded lane markings, and fog can cause problems. Working with the military for many years, I sent robots into caves, bunkers, tunnels, and minefields, and let’s just say even the best AI does not improvise nearly as well as the soldiers. Imagine the air is filled with acrid smoke from a nearby wildfire that is roaring toward you at 75 miles per hour. Do you want to be driven to safety by a timid AI system, with a top-notch corporate legal team in the metaphorical backseat?

When DARPA launched the Grand Challenge, its stated goal was to jettison the human, but it turned out that, in practice, soldiers didn’t want to be taken out of the loop. When the original competition was conceived, I argued hard that the human should be front and center as part of the system. I wanted a mixed-initiative system in which the human and the AI system had different but complementary roles. That didn’t sound as compelling to the media or to defense leadership. Like the car companies today, they wanted fully autonomous AI that wrested the steering wheel from human hands. Quelling my vigorous arguments, a top DARPA official visiting my facility told me that once a vehicle could drive without the human, the “hard problem” would be off the table. DARPA was wrong. The hard problem wasn’t autonomous driving. We can already do that. The hard problem is how to insert autonomy into our battlefields, our roadways, and our lives. The question should not be whether humans or robots are better drivers. Rather, the question should be how to collaborate effectively.

Perhaps to work within a world of human drivers, self-driving cars need to go a bit native. Google engineers now admit that their systems are programmed to exceed the speed limit in order to keep pace with the cars around them. They figure it is more important to blend smoothly into human traffic than to strictly follow the rules of the road. I’ve tried to explain that exact logic to police officers, with less than satisfying results. Matching the speed of those around you is safer, but it also might get you a ticket. It’s an interesting glimpse into the problem of being human. We are told to think for ourselves, but also to follow the rules. Will we make robots just as conflicted as their human counterparts? According to CMU self-driving expert John Dolan, it will be safer if they act like humans and bend the rules to suit our purposes. Should robots follow what we practice or what we preach?

Another major issue is cost -- not only the high cost of the sensors, but also the cost of pre-mapping all the roads. The pre-mapping approach most companies rely on requires that the car recognize and localize itself within street-view data that has been painstakingly captured and modeled. In this approach the entire world is just one big searchable model, and someone must constantly update that model and provide access -- for a price. Even more troubling, the map-based, centralized approach is inherently susceptible to infiltration by bad actors who could manipulate the model and wreak havoc on everyone using the infrastructure. In The Phantom Menace, the Trade Federation’s love of centralized control results in a situation where knocking out a single control ship is enough to render all of its droids useless. Do we really want centralized control to be the model for our highway system? The key is to strike an appropriate balance between centralized input and individualized control.

This is easy to say, but the hard question is who gets the final say. In a conflict, will it be the centralized controller, the onboard AI system, or the human driver? Right now the human is king. We can ignore our smartphones and even ignore the stop signs in a crisis. We take this for granted, but the near future could see a significant shift. In 2015, Elon Musk raised the possibility that the government may someday make it illegal for humans to drive cars. Later, he clarified that "Tesla is strongly in favor of people being allowed to drive their cars and always will be." I can only assume he gasped this out directly after his chief marketing officer’s hands were removed from his neck. The point still holds: what if robots become so much better at driving that it is eventually unthinkable for humans to drive a car at all? Is it unethical to allow humans to drive cars if it means that they waste billions of hours and kill approximately 40,000 people per year (just in the US)?

After long road trips, my wife usually asks me for an update on when she can buy the first autonomous car. Consulting firm McKinsey & Co. predicts that by 2030, 15 percent of car sales will be fully autonomous. Hans-Werner Kaas, a partner at McKinsey, explained: “What we are going through is the most unprecedented time of disruptive change in the automotive industry as it transforms itself into a mobility industry.” If recent press releases are to be taken seriously, Elon Musk and the CEO of General Motors want to give you the gift of self-driving cars quite a lot sooner.

Some people view the advent of the autonomous car as a sudden miracle, but the truth is that researchers have been working on this challenge in earnest for decades. In 1995, Dean Pomerleau and Todd Jochem sat in a 1990 Pontiac Trans Sport minivan while their AI system drove 2,797 of the 2,849 miles from Pittsburgh to San Diego. After completing their “No Hands Across America” trip, they ended up on the Tonight Show with Jay Leno. To me, the most amazing thing was that they built the software for under $20,000. Today, that is not enough money to fund a proposal for such an ambitious project.

While I worked at the Idaho National Laboratory, an autonomous Jeep drove itself day in and day out, performing radiation surveys for environmental monitoring across large areas of the desert. Less than a decade after the “No Hands” trip, we had managed to remove not only the hands but the whole human from the car. It was a lot easier to automate these systems in the desert than in a city because of the absence of humans. What I did not realize at the time is that this was not only due to the lack of humans outside the car. Getting the human out of the driver’s seat actually made the problem easier.

My work with robots has furnished ample opportunity to learn about the dangers of humans and robots working together. In 2002 we developed an intelligent robot that could make its own decisions while mapping radiation levels inside contaminated facilities slated for decontamination or decommissioning. The robot could say “no” to the operator in order to protect itself, and the AI system was given control over the close-in maneuvering that operators often struggled with. If the human drove poorly, the system could literally take control away from the human. Some participants liked the system, and their overall performance improved dramatically, but many of the most experienced operators fought with the robot, which dramatically reduced performance. Watching how well the autonomous system drove itself when left to its own devices, it seemed to us that the humans were the problem.

On the other hand, when my robots found themselves handling weapons of mass destruction, I learned quickly that the solution is not to eliminate the human. The human was responsible for making incredibly difficult judgements that took into account the safety of many people. There was simply no way a robot could be asked to make those judgements. The human also had much more contextual understanding of the propagation of the contaminants and the experience to know what to look for. We kept the human out of the danger area and let the robot drive itself, but the human remained in the management and decision-making role.

The best solutions balance human and robot initiative comfortably. Fifteen years ago I co-founded the International Human-Robot Interaction Conference in hopes that we could get this point across, but as I read the news today I hear many advocating for exactly the opposite. Most of the car companies view “fully autonomous” driving as the holy grail and want to engineer the car of the future without a steering wheel at all. I really hope that this dualistic thinking, which pits human against machine, will give way to the rich possibilities of "context-sensitive shared control." In many environments, humans provide superior contextual understanding and offer high-level intent while machines react quickly, smoothly, and precisely. Depending on the context, however, roles can shift as needed.

I like a model where humans retain the right to hold a steering wheel but lose the right to ram into other people, cars, or bike lanes. I imagine the highway of the future as an interconnected organism in which individual cells move on their own while remaining held within a framework of peer-to-peer interactions with their nearest neighbors. This would convert the highway system into a swarm, where vehicles of all different makes, models, and levels of autonomy are virtually tethered to one another. In the system I envision, you would have the right to control your vehicle or to flick the switch for your choice of proprietary AI. No matter where control comes from, the behavior of the car would be hemmed in by figurative “springs” connecting your vehicle to those around it. The goal is not to limit your freedom but to make sure you "play well with others." When there is no congestion, the system should let you open up the throttle and have some fun. Hopefully, when we look back a few years from now, the idea of a static speed limit will seem ridiculous. Some will chafe at the idea that the system limits their ability to drive with impunity; but really, as you sit stuck in traffic today, how much freedom do you enjoy?
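
For the curious, here is a toy sketch of what one of those figurative “springs” might look like as a control law: a one-dimensional spring-damper coupling that nudges a vehicle toward a safe gap behind its leader. The gains and gap values are made-up illustrations, not a production controller:

```python
def coupling_accel(gap_m, rel_speed_mps, desired_gap_m=30.0,
                   k_spring=0.4, k_damper=0.8, max_accel_mps2=3.0):
    """Virtual spring-damper between a vehicle and the one ahead.

    gap_m:         current distance to the leading vehicle (m)
    rel_speed_mps: leader speed minus our speed (m/s); positive
                   means the gap is opening
    Returns a bounded acceleration command (m/s^2). The spring term
    pulls the gap toward the desired spacing; the damper term bleeds
    off relative speed so the correction stays smooth.
    """
    accel = k_spring * (gap_m - desired_gap_m) + k_damper * rel_speed_mps
    return max(-max_accel_mps2, min(max_accel_mps2, accel))

# Too close and still closing: the spring commands firm braking.
print(coupling_accel(gap_m=15.0, rel_speed_mps=-4.0))  # -3.0 (clamped)
# Comfortable gap, matched speeds: the coupling stays out of the way.
print(coupling_accel(gap_m=30.0, rel_speed_mps=0.0))   # 0.0
```

Notice that the coupling never dictates a destination; it only shapes behavior relative to the neighbors, which is exactly the "play well with others" constraint.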

If we want smart cars, it seems we may need smart roads. This shouldn’t come as a surprise, because infrastructure has always been essential for enabling human drivers. Why would we not extend the same courtesy to robotic drivers? We need to create a robot-readable world where humans and AI systems can both benefit from a new generation of peer-to-peer connectivity and positioning. Instead of talking to cell towers, cars will talk to each other directly. Instead of positioning themselves relative to satellites in space, cars will localize in relation to their nearest neighbors and to roadside equipment. In a peer-to-peer system, behavior stems from the local environment, making it easier to adapt to a particular environment and situation.

As engineers and decision-makers contemplate a move away from the global positioning paradigm, it may leave them with a strange feeling of vertigo. Like pheromones in an ant colony, our world will fill with digital signposts, or “tags.” Some tags will be anchored into the world as infrastructure, built into light bulbs, traffic lights, and signs. These infrastructure tags will enhance or simply replace GPS, feeding into existing software applications that already expect GPS coordinates. In this tag-enabled universe, peer-to-peer positioning can guide both humans and robots, creating a common framework for shared understanding and collaboration.
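
As a rough illustration of how positioning from tags rather than satellites could work, the sketch below recovers a 2D position from ranges to a few anchored tags using standard least-squares multilateration. The tag locations and measured ranges are invented for the example:

```python
import numpy as np

def locate(anchors, ranges):
    """Least-squares 2D position fix from ranges to known tag anchors.

    anchors: (n, 2) tag positions in a shared local frame (m)
    ranges:  (n,) measured distances to each tag (m)
    Subtracting the last anchor's range equation from the others
    linearizes the problem (standard multilateration); needs n >= 3.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    ref, r_ref = anchors[-1], ranges[-1]
    A = 2.0 * (anchors[:-1] - ref)
    b = (r_ref**2 - ranges[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(ref**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three roadside tags at surveyed positions; the car measures ranges.
tags = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0)]
true_pos = np.array([40.0, 30.0])
dists = [np.linalg.norm(true_pos - t) for t in tags]
print(locate(tags, dists))  # ~[40. 30.]
```

Because the output is just a coordinate in a known frame, it can feed existing software that expects GPS-style positions, which is what makes tags a drop-in enhancement rather than a rip-and-replace.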

For the automotive industry, these mesh networks of reliable relative ranging will let diverse vehicles share a backbone of safe, coordinated motion. This makes the autonomy problem much easier for individual vehicles and also makes it possible to orchestrate multiple vehicles as a team. In addition to supporting cars, it enables many new forms of autonomous shared mobility. Single-person pods, low-speed electric vehicles, autonomous cars, human-driven cars, and autonomous ride-sharing systems can virtually snap together despite their different manufacturers and control systems. Working together to anticipate traffic-light changes, these vehicles can smoothly accelerate in unison and perform coordinated, predictive braking when necessary. Imagine a pod that comes to meet you right at the gate as you disembark from the airplane. It carries you through the airport and merges with another pod that has your luggage before dropping you off directly at the train station or at the curb for a car pickup.
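
A back-of-the-envelope sketch shows why coordinated, predictive braking matters. If each driver must wait to see the brake lights ahead, reaction delays stack up hop by hop down the chain; if a "brake now" message is relayed over a vehicle-to-vehicle mesh, the whole chain begins braking almost at once. The delay figures here are illustrative assumptions:

```python
def brake_start_times(n_vehicles, hop_delay_s):
    """Time at which each vehicle in a chain begins braking after the
    lead vehicle brakes at t=0, when the signal propagates one hop
    (vehicle to vehicle) at a time."""
    return [i * hop_delay_s for i in range(n_vehicles)]

# Drivers reacting to brake lights vs. a relayed 'brake now' message.
human = brake_start_times(8, hop_delay_s=1.5)   # ~1.5 s reaction per driver
mesh = brake_start_times(8, hop_delay_s=0.02)   # ~20 ms per radio hop
print(f"last car starts braking at {human[-1]:.1f} s (humans) "
      f"vs {mesh[-1]:.2f} s (mesh)")
# -> last car starts braking at 10.5 s (humans) vs 0.14 s (mesh)
```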

The future won’t be all about self-driving cars, but rather about how to get you from point A to point B as smoothly, safely, and efficiently as possible. You may well argue about who should control your car, but I hope that autonomous shuttles will be an easier sell. Whatever comes next, remember that you are the focus and that the system should adapt to your needs, not the other way around.

#DonNorman #UCSDDesignLab #autonomouscars #autonomousdriving #selfdriving #autonomousrobots

Perry Howell

President-Owner CSS, CSS-Mindshare

4y

Great article, David. When AI and biology are blended and biology becomes stubborn, will AI relent or override? How will this be managed? For example, a human driver knows that the autonomous vehicle next to him will slow if cut off, so the maneuver gets executed. When does, or should, the AI take over? One of the benefits of autonomous vehicles is the reduction of poor road citizenship (as in the example) and of impaired or incompetent driving. This tips the AI/biology scale toward AI. Does this then turn into a long-term (un)training program for humans, in that no future humans are competent enough to drive? AI wins. This is a much broader topic. Thanks for the article.

Malcolm Sutherland

CBT Therapist | BABCP (Accredited)

6y

Interesting article - and the conceptual points extend beyond autonomous driving. When we add 'tech' to human roles, who wins and who loses? The example of Uber Pool made me think about the ethical dilemma of removing choice from human activities, including work. A smooth-running world in which people lose their roles (and fail to find others) is not a pleasant prospect. The idea of a human controlling the wheel (but not being allowed to crash, etc.) is appealing.

Mary Dzeniec

HR at Acme Dynaflex

6y

As a Chicagoan, I wonder about prospects for *intentional* disruption of automated vehicles. Imagine your car taking you through a dicey part of town, where a group of residents finds it great fun to ambush such cars, causing them to screech to a stop -- just for laughs, or possibly much, much worse. Can the AI be instructed to avoid such areas? Technically, sure. But just imagine the lawsuits from the "revruns", "community organizers", and legions of activist lawyers.

Simon Fellin

Making people and things move

6y

What is the difference between an autonomous car without a steering wheel and an autonomous shuttle? Of course, the world could agree on a separate infrastructure for autonomous vehicles, and that would make it a lot easier for makers to develop them. But do we have to, and is it worth the effort?

Chong Jin Lee

Learning & Innovations

6y

Birds, fish, and insects do it (decide which way to go). I wonder how much thought that requires? It seems they do it well, even at peak traffic.
