Self-Driving Cars: 'My Program Made Me Do It'

I recently came across a fascinating piece in MIT Technology Review entitled “Why Self-Driving Cars Must Be Programmed to Kill.”

The story zeroes in on the thorny question of “how the car should be programmed to act in the event of an unavoidable accident.” It asks: “Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?”

The article posed a hypothetical crisis: Ten people suddenly appear in the street. The driver, traveling at high speed and unable to stop in time, can save his own life by hitting some or all of the pedestrians, or risk it by veering out of control.
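To see how stark that choice becomes once somebody has to write it down, here is a minimal, purely hypothetical sketch in Python of the three crash-response policies the MIT piece lists. The policy names, the toy head-count rule, and the swerve-or-stay outcomes are my own illustration; no real autonomous-driving stack exposes anything like this.

import random
from enum import Enum, auto

class CrashPolicy(Enum):
    MINIMIZE_TOTAL_HARM = auto()  # sacrifice the occupants if that saves more lives overall
    PROTECT_OCCUPANTS = auto()    # "protect the occupants at all costs"
    RANDOM_CHOICE = auto()        # "choose between these extremes at random"

def choose_maneuver(policy, occupants_at_risk, pedestrians_at_risk):
    """Return 'stay' (risk the pedestrians) or 'swerve' (risk the occupants).

    A toy decision rule for illustration only: a real system would weigh far
    more than two head counts, and no carmaker publishes a policy like this.
    """
    if policy is CrashPolicy.PROTECT_OCCUPANTS:
        return "stay"
    if policy is CrashPolicy.MINIMIZE_TOTAL_HARM:
        return "swerve" if pedestrians_at_risk > occupants_at_risk else "stay"
    return random.choice(["stay", "swerve"])

# The hypothetical above: ten pedestrians in the road, one person in the car.
print(choose_maneuver(CrashPolicy.MINIMIZE_TOTAL_HARM, 1, 10))   # -> swerve
print(choose_maneuver(CrashPolicy.PROTECT_OCCUPANTS, 1, 10))     # -> stay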

The point is that, sooner or later, automotive engineers will have to help a robotic car make ethical decisions like these. If so, who chooses the more ethical course?

While pondering this, I suddenly asked myself, “Why are we insisting that a self-driving car make life-or-death decisions?”

Of course, I want to save as many innocent pedestrians as possible. But I don’t want to harm my passengers, either. Certainly, I don’t want to die.

We’re posing an extreme situation in which there is no single “right” answer.  Given that uncertainty, why not let the car do the moral dirty work? Afterwards, the survivors can survey the carnage and say, “Well, the car was just following its program.”

But right now, according to the digital logic of the technology, the only choice is to “save the occupants at all costs.” Life, and most of its ethical choices, is more complicated than that. Many – surveys say most – drivers would risk their own lives to save others. (The catch is that most of those surveyed were answering the question as pedestrians, rather than as owners of self-driving cars.)

Programmable cars
One solution, as the autonomous car becomes more and more software-intensive, is to envision a driverless car that’s more flexible and personal – pre-programmed to behave according to its owner’s tendencies.

On its face, the idea of an autonomous car reading its owner’s mind and playing God is disturbing. But at the current rate of technological change, some version of this eventuality seems inevitable. The self-driving car’s programmability already empowers a human in the driver’s seat to make decisions despite not physically driving the car. The question now is how to make that decision-making process more human.

In less critical situations, I’m now thinking that this “programmability” can make driverless cars more “fun to drive.”

When I get inside my new driverless car, shouldn’t I be able to choose how my car drives that day? As my friend Brian Santo suggested, “Maybe you get a rules-based drop-down menu, like in Microsoft Word!”

Choices of driving style could range from that of an old lady (really cautious) to a cut-throat take-no-prisoners Boston cab driver, with lots of variations in between.
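As a back-of-the-envelope sketch of what such a preset menu might look like in code (every parameter name and number below is invented for illustration, not taken from any real vehicle API):

from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingStyle:
    """A hypothetical owner-selectable preset; all fields are illustrative."""
    name: str
    speed_margin_kph: float       # how far above the posted limit the car may go
    following_gap_s: float        # time gap to the car ahead, in seconds
    max_lateral_accel_g: float    # how hard the car will corner
    yields_to_mergers: bool

# A drop-down menu of presets, from "old lady" to "Boston cab driver".
PRESETS = {
    "cautious":   DrivingStyle("cautious",    0.0, 3.0, 0.20, True),
    "balanced":   DrivingStyle("balanced",    5.0, 2.0, 0.30, True),
    "aggressive": DrivingStyle("aggressive", 10.0, 1.2, 0.45, False),
}

def select_style(choice: str) -> DrivingStyle:
    # Fall back to the cautious preset if the owner picks something unknown.
    return PRESETS.get(choice, PRESETS["cautious"])

print(select_style("aggressive").following_gap_s)   # -> 1.2

In this sketch a preset is nothing more than a bundle of constraints handed to the planning software; the car still does the driving.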

Yes, by purchasing a self-driving car, I might no longer do most of the steering and stopping.  But, ideally, I can still “direct” the car to match my personal style — or mood.

— Junko Yoshida, Chief International Correspondent, EE Times


Shelly Stalnaker

Retired Editor/Writer

8 years ago

Excellent points...and this is why such development efforts should encompass a wide range of people, so that we uncover and encounter these types of issues. Simply solving the engineering aspects of a new technology application does not guarantee it is practical, desirable, or reasonable in the real world. Avoiding the "Why didn't we think of that?" result is one of the hardest challenges in innovation.

Ian Chen

Leader, Entrepreneur and Executive

8 years ago

This is looking at self-driving cars through the lens of a human-driven car. We should differentiate the self-driving programs that are used for driver assistance from those that do not allow human intervention. The latter can open up new design options that could create cars that would never go so fast that they cannot avoid things that show up at the edge of their sensor range. It's the human driver who doesn't know his limits.

Sean Alebo

Chief Innovation Officer at Selling To Parents

8 years ago

Wow, I never looked at it this way!
