Autonomous Vehicles & The Modern Trolley Problem
Michelle Sandford
Developer Engagement Lead @ Microsoft - Azure Data Science & AI Certified GAICD
Many, many, many years ago (let's not get into how many. Let's just say many and leave it at that) I did my undergraduate degree at Durham University (Trevelyan College) in Philosophy. At the time I chose the degree - well, let's say the degree chose me more than I chose it - I wasn't really sure what Philosophy was, except to say it was an opportunity to go to one of the best universities in the country, which was the path I was seeking.
However, I loved the subject almost immediately. In our first semester, René Descartes' idea that we cannot know whether any of this is real or whether we are dreaming even now caught my imagination. How many times have you woken from sleep, gasping for breath, heart hammering, because the world of sleep had you so convinced that what was happening to you was real, only to wake and realise it was not? I remember the sleepovers of my youth - the Nightmare on Elm Street movies put me off that whole genre for life because the idea was too real for me. How can we ever know what is a dream and what is reality? The philosophy gave me comfort, though: if we cannot know whether the waking world or the dreaming world is true, maybe neither is. So there is no need to stress. No need to panic. It might all be a dream. So just do the best you can. What other choice is there?
My favourite specialism was Ethics and Moral Philosophy, which I studied with Professor Geoffrey Scarre, who still teaches today although, wisely, he is partly retired. (I did mention this was many years ago.) One of the topics we explored in his class was the Trolley Problem.
The Trolley Problem is a thought experiment: an onlooker can save five people in danger of being hit by a trolley (or tram, or train) by diverting it onto another line, where it will hit only one rail worker instead. The trade-off to consider: what should I do? And what sacrifices, if any, are acceptable?
There are many variations on this problem. If you say obviously save the five, you are actively choosing to kill one. Would that be acceptable in a hospital scenario, where a healthy person walks in off the street and you have five dying people in your ward who could be saved with her organs?
Now you really start to see the problem. It's not just about saving the most people (Utilitarianism); you also have to think about whether you play an active or passive role in saving lives or causing death. You have to consider whether there is an ultimate truth - a good or right action to take in any circumstances - or whether that "truth" changes depending on where you are standing.
The Trolley Problem becomes even more relevant in today's world when applied to autonomous vehicles, because the driver might not be the one making the active decision to swerve away from the person in front of them and into a tree - it might be what the programming dictates. What does the algorithm dictate about saving lives or causing deaths? What priorities are coded into its design? How are its choices weighted?
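To make that question concrete, here is a deliberately simplified sketch, in Python, of what "weighting choices" could look like in code. Everything in it - the names, the weights, the whole structure - is invented for illustration; no real vehicle's software is this simple, and none that I know of works this way.

```python
# A hypothetical sketch of outcome weighting, for illustration only.
# The ethical crux hides in weight_per_life: even setting every weight
# to 1.0 is already a moral decision, not a neutral default.
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    lives_at_risk: int
    weight_per_life: float  # how the design "values" each life


def expected_harm(outcome: Outcome) -> float:
    """Score an outcome; lower is 'better' under this utilitarian scheme."""
    return outcome.lives_at_risk * outcome.weight_per_life


def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the least weighted harm."""
    return min(outcomes, key=expected_harm)


swerve = Outcome("swerve into the tree", lives_at_risk=1, weight_per_life=1.0)
stay = Outcome("stay on course", lives_at_risk=5, weight_per_life=1.0)
print(choose([swerve, stay]).description)  # -> swerve into the tree
```

Notice what the sketch quietly erases: `choose` treats swerving and staying the course as interchangeable options, so the active/passive distinction that makes the Trolley Problem hard simply disappears from the code. That, too, is a philosophical position.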
For a vehicle to be approved for sale in each country, you would expect its programming to be lawful and to meet the rules and regulations of that country. Of course, not all rules are written into law. In many cases, even where religion and culture are held separate from the State, should something happen that causes a death, what would the people say and do? Whom would they hold responsible?
In some countries the elderly are revered above all others for their wisdom and knowledge and for what they have already given to their people. In others, new life provides the greatest and best hope and is valued for what it will bring. In some places a child has no rights in law until they are 10 or 12 or 16. Would the programming of an autonomous vehicle be different depending on which country it is in?
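Purely as a thought experiment, with every value invented, a region-dependent "ethics configuration" might look something like this - the same vehicle shipping with different moral weights per jurisdiction:

```python
# A hypothetical, invented sketch of per-country policy configuration.
# No manufacturer publishes anything like this; the point is that
# shipping such a table would mean shipping a moral judgement,
# and someone would have signed off on every number in it.
REGION_POLICIES = {
    "country_a": {  # a culture that reveres the elderly (values invented)
        "weight_elderly": 1.5,
        "weight_child": 1.0,
    },
    "country_b": {  # a culture that prioritises new life (values invented)
        "weight_elderly": 1.0,
        "weight_child": 1.5,
    },
}


def load_policy(country_code: str) -> dict:
    """Select the ethical 'configuration' a vehicle would run under."""
    return REGION_POLICIES[country_code]


print(load_policy("country_a"))
```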
The simplicity of the original Trolley Problem came from the fact that the person who switched the trolley on the tracks was somewhat removed from the action. They could look away, pretend they saw nothing until it was too late. In the modern problem, the programmer is also somewhat removed. The code they wrote many years before, perhaps as part of a team, makes an active choice to swerve, which causes death to some whilst saving others. But where is that person now? And how responsible are they?
I think you start to see that Responsible AI is a whole new topic with so many things to be considered - by us as individuals, as companies and organisations, and as people and governments. We cannot stand back and let others decide; we must each think deeply and work through these thorny topics. At Microsoft we say someone is always responsible for the decisions an AI takes. We cannot simply step back and blame it on the machines.
Next week I'll be speaking at NDC Conferences Oslo on the topic of Responsible AI and the Modern Trolley Problem. If you'd like to watch the recording, it can be found here:
If you want the TL;DR, here is a clip I show in my session of the Azure Percept Obstacle Avoidance Robot avoiding LEGO people and not targeting my Shih Tzu, Leo the Lion, at all.*
[* No Shih Tzus were harmed in the making of this video. The LEGO people suffered no permanent damage, although their feelings may have been bruised.]
Michelle Sandford is a Developer Engagement Lead at Microsoft in Australia. She works with emerging communities such as students, data scientists, and AI/ML engineers.