Utilitarian vs Libertarian Self-Driving Cars: Which is Superior?
Mariia Shcherbakova
AI&Data, Regulation and Project Management | International Telecommunication Union (ITU) | Master of Science in Applied AI | Master of Law | Alumni of WIPO-UNIGE S’21
Ethics is an important part of human life. Most of the decisions we make are based on our experiences, prejudices and our own moral understanding of what is ethically right and wrong, which makes us biased. Recently, the concept of bias and biased AI has been attracting more and more attention. "Biased AI" means that a machine learning-based data analysis system exhibits a bias towards certain groups of people. As the world as a whole is gradually changing toward equality and acceptance, people expect robots to become models of impartiality and overcome prejudices.
However, because the systems are programmed by humans, biases are difficult to eliminate immediately, and it is only after the systems are implemented that the problems become apparent. Artificial intelligence systems that make racist, sexist, and other biased decisions have been the subject of public debate, including the bias of self-driving cars.
Self-driving cars have been criticized because of autonomous decision-making about the value of human life. In a situation of unavoidable accidents that could result in loss of life, an autonomous car must decide what action to take, which also involves making decisions about the casualties that may occur. The common "trolley problem", which is handled differently by people depending on their biases and philosophical perspectives, is expected to be solved impartially by autonomous cars.
Ethical implications of biased AI in autonomous vehicles
To break down why there are difficulties in answering the question of what ethical standards an AI system in self-driving cars should follow, it is important to understand how and when AI bias arises and what it leads to.
To begin with, there are three stages of AI development in which bias can arise. The first stage is framing the problem (Hao, 2019). The very first thing developers do when creating an AI system is to define the goal they want to achieve with it, and a poorly defined goal can cause bias in AI decision-making. For example, when creating models of self-driving cars, developers should ask themselves what the main goal is; here the likely answer is 'to make driving safer'. If the goal is set incorrectly, such as 'to make life easier for customers', the system will not be built to minimize accidents but rather to focus on comfort.
The second stage is collecting data. A lot can go wrong here. First, unrepresentative data means there is not enough diverse data, or too much data of one type, to learn from. For self-driving cars, this means there may not be enough data on accidents, how they happen, and actions to prevent or minimize damage. Second, data reflecting existing prejudices are data that are historically biased, leading to biased results. In our case, such data could show that most drivers turn left at the time of an accident (a fictitious example). Such data may cause the system to automatically select a left turn, which is neither helpful nor safe.
The third and last stage is preparing the data, where the algorithm itself may produce biased results. In this stage, developers choose which attributes they want the machine to take into account and which they want it to ignore. In the self-driving car example, these attributes might be objects around the car, their characteristics and specific situations. If we tell the algorithm to treat pedestrians as obstacles not to be collided with, it will avoid them. This step is the most difficult, because it requires ethical decisions by the developers about the unavoidable accidents the system will face. The algorithm must be programmed for such situations, otherwise the car's actions may be unexpected. The questions then arise: what decisions will developers program? Will all developers program self-driving cars to make the same decisions? Will developers' assessments of the value of human life be ethically correct?
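As a purely illustrative sketch of this attribute-selection step (the attribute names and values below are invented for the example, not taken from any real AV system), the developers' choice of what the model may "see" can be expressed as a simple filter:

```python
# Illustrative only: the attributes a developer might expose to, or
# withhold from, a decision model. Withholding an attribute (e.g. gender)
# is itself an ethical choice made at the data-preparation stage.

ALL_ATTRIBUTES = {
    "object_type": "pedestrian",
    "distance_m": 12.0,
    "relative_speed_mps": 8.3,
    "gender": "female",        # withheld below: using it could encode bias
    "estimated_age": 34,       # withheld below for the same reason
}

# The developers' (hypothetical) decision about what the model considers.
CONSIDERED = {"object_type", "distance_m", "relative_speed_mps"}

def prepare_features(raw: dict) -> dict:
    """Keep only the attributes the developers chose to let the model see."""
    return {k: v for k, v in raw.items() if k in CONSIDERED}

features = prepare_features(ALL_ATTRIBUTES)
# 'gender' and 'estimated_age' never reach the model
```

The point of the sketch is that the filter itself encodes an ethical stance: whatever is excluded here cannot influence the car's decision later, for better or worse.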
Due to such complex issues, governments have had to step in to regulate AI-based systems. There are different discussions depending on the industry in which AI is used. We will focus on the regulation of self-driving car systems.
To begin with, regulatory standards are developed to address legal issues. This raises the question of whether those standards will coincide with what is ethically correct. In order to bring the concepts of regulation and ethics closer together, the Geneva-based International Telecommunication Union (ITU) conducted a survey called the "Molly Problem" to understand people's expectations of self-driving cars (Burkhalter, 2021). One might conclude that the results show which self-driving car decisions the public votes "ethical" or "unethical". However, the survey only gives regulators an indication of public opinion and does not mean that it will fully coincide with the regulations they issue.
Moreover, the ITU, the UN specialized agency for information and communication technology, and UNECE, the UN commission responsible for global road traffic regulations, are working together to standardize the behavior of 'AI drivers' (Setting the standards for autonomous driving, 2021). The ITU considers improving road safety one of its objectives. In theory, the ITU's work should facilitate the introduction of self-driving cars and help overcome the biased decisions the system can make.
However, it is only through practical examples that people can learn and form their own opinions about the technology. Judging by the sad experience humanity has had so far, the percentage of people who accept the technology is very low. According to a 2021 study, 74% of respondents did not trust self-driving cars and did not believe they can perform better than a human driver (Othman, 2021). The same study found that only 6.9% of respondents had no misgivings about self-driving systems.
The first death caused by a self-driving car occurred in 2018, when an Uber test vehicle struck and killed a woman (Dormehl & Edelstein, 2019). This sparked the first strong mistrust of the new technology and public debate about the legal and ethical responsibility of the creators and operators of self-driving cars. Research shows that the percentage of public distrust of self-driving cars correlates with the number of accidents involving autonomous vehicles (AVs). (Annexe figure 1)
Overall accident statistics are also unfavorable for autonomous cars, with 9.1 accidents per million miles traveled for self-driving cars, compared to 4.1 for conventional cars (Autonomous Vehicle Statistics, 2021). However, accidents involving self-driving cars produce fewer casualties than those involving conventional cars.
In light of such statistics, the dilemma of introducing self-driving cars has not yet been resolved. Governments are taking small steps to bring society closer to accepting and understanding the technology through the introduction of new regulations. But what philosophical perspective will a government rely on when setting regulations for self-driving cars? Before revealing the differences between the two perspectives on the dilemma under discussion, it is worth explaining the trolley problem in relation to self-driving cars.
Trolley problem: self-driving car edition
This is an ethical dilemma in which a runaway trolley is barreling down a hill and cannot stop because its brakes have failed. If it continues on its track, it will hit and kill five people working on the track ahead. A person standing beside a lever can divert the trolley onto a side track, preventing it from hitting the five workers but killing the one person standing on the side track (Sandel, 2011). The dilemma shows how one can knowingly sacrifice one person's life to save the lives of five.
In reference to autonomous vehicles, before considering what an already programmed self-driving vehicle would do, it is important to look at the two approaches to ethical preprogramming: universal ethics and user-selected programming. Under universal programming, all autonomous vehicles are programmed to react in the same way in cases where harm is unavoidable (Tay, 2019). With this approach, if everyone drives self-driving cars that use the same standards accepted by the public, one cannot accuse the program of bias (Hong et al., 2020). Under user-selected programming, the artificial intelligence is more often called biased, because every self-driving vehicle system is designed in its own way and different systems may take different factors into account when making a decision. Many scientists have used the trolley problem to understand the dilemma of self-driving vehicles and their acceptance from different philosophical perspectives.
To better illustrate the decision-making process, the trolley problem can be presented as follows. Suppose a self-driving vehicle with one passenger is driving toward a bridge when a bus carrying three passengers suddenly swerves into the self-driving vehicle's lane, while five people are walking along the bridge. What would the autonomous vehicle do, or what would its programmer design it to do? First, it could swerve into the five pedestrians, saving its own passenger as well as the bus passengers; second, it could hit the bus, sacrificing its own passenger and the bus passengers but saving the pedestrians; third, it could swerve into the bridge, sacrificing its own passenger but saving all the bus passengers and pedestrians.
Utilitarianism
Utilitarianism considers the impacts of actions and how those impacts promote happiness or pain. It always chooses the alternative whose consequences produce the greatest good for the greatest number, hence maximizing utility (Sandel, 2011). The decision-maker tries to think through all the possible good or bad consequences of an action and then weighs them against each other to determine which action will lead to the most positive outcome. In utilitarianism, the end justifies the means. The theory faces several criticisms, for instance: how should possible outcomes be weighed, and how far into the future should they be considered?
Despite the many issues with this philosophical perspective, surveys show that robots are expected to be utilitarian, sacrificing one person for the good of many, and are blamed more when they fail to do so (Malle & Scheutz, 2015). In some cases, however, the utilitarian solution would be to endanger the robot's owner for the sake of saving others, making such technology almost impossible to sell.
Applying this theory to the self-driving vehicle bridge dilemma, a utilitarian programmer would ensure that deaths are minimized at all costs. This points to the action in which the autonomous vehicle swerves into the bridge, sacrificing its passenger but saving the three bus passengers and the five pedestrians. As unfair as this is to the passenger in the AV, it is the right thing to do according to a utilitarian programmer (Hayry, 2013).
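The utilitarian rule described above can be sketched as a cost-minimizing choice over the three bridge options. The action names and fatality counts below are illustrative assumptions drawn from the scenario, not a real AV control policy:

```python
# Illustrative sketch of a utilitarian chooser for the bridge dilemma.
# Each hypothetical action is mapped to the number of expected fatalities
# described in the scenario above.

BRIDGE_OPTIONS = {
    "swerve_into_pedestrians": 5,  # kills the five pedestrians
    "hit_the_bus": 4,              # kills the AV passenger and three bus passengers
    "swerve_into_bridge": 1,       # kills only the AV passenger
}

def utilitarian_choice(options: dict) -> str:
    """Pick the action whose consequences minimize total loss of life."""
    return min(options, key=options.get)

print(utilitarian_choice(BRIDGE_OPTIONS))  # -> swerve_into_bridge
```

Note how the rule has no concept of whose life is lost, only how many; that indifference to the passenger's identity is exactly what makes the utilitarian car hard to sell, as discussed above.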
A utilitarian self-driving vehicle faces several problems, such as demographic concerns. Assuming a self-driving vehicle can perform functions such as facial recognition, detecting gender, weight and age, should it consider these factors before making decisions in urgent scenarios, which would make the decision-making more biased? Considering gender, for example, should a utilitarian vehicle avoid collisions involving women by targeting men instead, which would not promote equality? Doing so would be morally and ethically wrong.
A utilitarian community would accept and benefit from a universal standards approach for AVs, as it would help eliminate potential biases programmed by car manufacturers. Without universal standards, each autonomous driving system developer has its own databases for training the AI and its own approach to decision-making algorithm development. This means that bias problems can arise during the data collection and data preparation stages. First, there is a high probability that the data used by a developer was unbalanced or carried biases that did not show up at the beginning, causing biased decisions. Second, when preparing the data during algorithm development, the programmer may not have considered all the possible situations on the road, as there are simply too many of them, causing unexpected and biased decisions. These stages, which every self-driving AI system developer goes through, cannot be regulated and inspected by the authorities thoroughly enough to reduce the probability of such problems to zero. With universal standards regulated by specialized bodies, a utilitarian society can be confident in the decision-making process of any AV, regardless of the car producer, as this would result in the greater good.
According to the study, 76% of surveyed people agree that the car should always decide to minimize loss of life for all involved parties, and only 48% think that the car should minimize negative impacts first for its passengers and then, if possible, for others (Karnouskos, 2020). Consequently, car manufacturers and technology developers are expected to take into account aspects such as the common good of society and minimizing human losses and negative consequences. In particular, the survey strongly supports minimizing loss of life, which is consistent with the general expectation that self-driving cars will minimize accidents and save human lives, potentially better than human drivers do. Thus, for society to believe in this technology, accident statistics involving AVs must improve, which would be a step toward proving that AI drivers are better than humans. Regarding developers valuing life unethically, research shows that "in the confined scope of unavoidable collisions in road traffic, simple value-of-life models approximate human moral decisions well" (Sutfeld & Gast, 2017). However, such an approach might lead to biased calculations of the value of human life. The ethics commission in Germany, for example, opposes a utilitarian decision-making process that evaluates human life by factors such as gender, age, or physical and mental condition (Report of the Ethics Committee, 2017).
Based on the above, a utilitarian society will accept autonomous cars for mass use if they are proven to be better than human drivers, as this will mean fewer deaths from accidents and greater happiness for society. In addition, a utilitarian society will agree to purchase an autonomous car knowing that the priority is to provide the greatest good rather than to save its own passenger, as this decision can be justified by the community's shared ethical perspective.
Libertarianism
This ethical theory places individual freedom above all. It views people as sovereign, with the right to make their own decisions and control their own bodies at all times (Sandel, 2011). Libertarians view individuals as people who act voluntarily and cannot be forced to do something against their will; here freedom means doing things of one's own free will rather than being compelled, regardless of the circumstances. Regarding the bridge dilemma, if the system decides to sacrifice its passenger to save the rest, libertarians consider this ethically wrong (Goodall, 2014), because the passenger of a self-driving car is not involved in the decision-making process; his or her death therefore occurs against his or her will. A libertarian would solve the dilemma this way: it does not matter who suffers from the action, as long as human freedom remains the priority. Therefore, the algorithm used in an autonomous vehicle must consider the freedom and decisions of its passengers before acting.
Additionally, returning to the trolley problem, libertarians would argue that the individual standing by the lever should not simply act on his own judgment and redirect the trolley away from the five workers; rather, he should first ask the one person on the side track whether they are willing to sacrifice their life for the others, regardless of the situation.
In light of this information, it can be concluded that the libertarian community will not accept a universal standards approach. This is due to the fact that the freedom and will of each passenger will be prioritized over the common good. Under universal standards, passengers must accept the decision-making process, regardless of whether they want the car to take a different action. In libertarianism, all people should be given the freedom to choose which decision-making process the car should follow. Therefore, when buying a self-driving car, passengers could be asked a series of questions about what decisions the system should make in different types of situations. In a libertarian society, however, a person can act freely as long as his actions do not harm others, which means that a passenger cannot be asked a question, such as who he would harm in the event of an imminent accident. The "questionnaire" solution can only be accepted by libertarians if the decision-making process can be personalized without the passenger deciding to consciously do harm. Even so, passengers will be asked to take responsibility for the actions of their car on the road, because they themselves have chosen the decisions that the system executes.
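The "questionnaire" idea described above can be sketched as follows. The preference names and the allowed set are entirely hypothetical; the only point illustrated is the libertarian constraint that choices encoding deliberate harm to others are rejected, while other personalizations are accepted:

```python
# Hypothetical sketch of owner-selected ("questionnaire") programming.
# Preferences that would deliberately target others are not offered,
# matching the libertarian constraint that freedom ends where harm
# to others is consciously chosen.

ALLOWED_PREFERENCES = {
    "protect_occupants_first",  # minimize harm to the car's own passengers
    "minimize_total_harm",      # fall back to a casualty-minimizing rule
    "brake_and_stay_in_lane",   # take no discriminating evasive action
}

def register_preference(preference: str) -> str:
    """Accept an owner preference only if it does not encode deliberate harm."""
    if preference not in ALLOWED_PREFERENCES:
        raise ValueError(f"preference not permitted: {preference!r}")
    return preference

register_preference("protect_occupants_first")   # accepted
# register_preference("always_target_group_x")   # would raise ValueError
```

The design choice mirrors the essay's argument: the owner personalizes the policy and thereby takes responsibility for it, but the menu of options is curated so that no selectable policy amounts to consciously choosing a victim.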
According to the study, only 46% of surveyed people think that the car should take a decision considered moral by its owner (and not necessarily by others) (Karnouskos, 2020). This suggests that a slight majority of the public opposes people choosing their own car's decisions. In a libertarian society, however, the statistics would differ, showing support for owners having the freedom to control their cars' decisions, except for decisions involving deliberate harm to others.
The question of adopting self-driving cars is a bit more difficult to answer in a libertarian society because of the potential harm to others if this technology is used. As a general rule, libertarians are free to make their own choices, so it would be right to accept the technology for mass use. As long as people have the freedom to do what they want and not be forced to do it, AVs can be used. This means that the production of traditional cars cannot be prohibited, and both options should be available for people to choose from.
However, there are two possible reasons why self-driving cars might not be accepted in a libertarian society. First, the developers may not come up with a way to let owners choose car decisions without the direct question of "who should die and who should survive"; libertarians cannot accept that people would deliberately choose to harm others. Second, it is impossible for owners to pre-select responses to all possible situations, so some decisions will still be made by the programmed algorithm, and people will still not be completely free to choose their car's decisions.
Conclusion
Self-driving cars are a revolutionary technology, and their mass use will change the way people travel and will even affect employment in the mobility sector. The notion of autonomous vehicles has existed for more than 70 years and still, as studies show, the public is not yet ready to accept the technology for everyday use.
One of the biggest questions of self-driving car implementation is the decision-making process that vehicles will follow and that the public will accept and agree on. Many surveys are being conducted by organizations in order to understand what the public expects from AVs and how they should act in different situations. Research shows that society expects robots to be utilitarian more than to follow any other ethical theory. However, it can be assumed that not all programmed decisions will be utilitarian once self-driving cars are in mass use. Decisions such as saving one's own passengers before others are expected to be programmed so that society feels safe buying such cars.
The introduction of universal standards for AVs is based on the ethical theory that society holds. Utilitarians would feel more comfortable with universal standards, but libertarians would not accept such a notion.
General acceptance of the mass use of AVs is supported by utilitarians if AI drivers are safer than humans. For libertarians, acceptance of the technology depends on being able to choose their cars' decisions in advance while not deliberately harming others. Moreover, according to surveys, society is not yet close to accepting AVs. Car manufacturers and government organizations have a long way to go to prove the safety, usefulness and convenience of self-driving cars to the public and achieve the universal goal of making driving safer.
References
Autonomous Vehicle Statistics. (2021). Gerber Injury Law. Retrieved January 8, 2022, from https://gerberinjurylaw.com/autonomous-vehicle-statistics/
Burkhalter, D. (2021). On the road to regulating self-driving cars. SwissInfo. Retrieved January 8, 2022, from https://www.swissinfo.ch/eng/on-the-road-to-regulating-self-driving-cars/47042470
Dormehl, L., & Edelstein, S. (2019). Sit back, relax, and enjoy a ride through the history of self-driving cars. Digital Trends.
Goodall, N. J. (2014). Ethical decision making during automated vehicle crashes. Transportation Research Record, 2424(1), 58-65. https://journals.sagepub.com/doi/abs/10.3141/2424-07
Hao, K. (2019). This is how AI bias really happens—and why it's so hard to fix. MIT Technology Review. Retrieved January 7, 2022, from https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/amp/
Hayry, M. (2013). Liberal Utilitarianism and Applied Ethics. Routledge.
History of Autonomous Cars. (2021). Tomorrow's World Today. Retrieved January 7, 2022, from https://www.tomorrowsworldtoday.com/2021/08/09/history-of-autonomous-cars/
Hong, J. W., Wang, Y., & Lanz, P. (2020). Why is artificial intelligence blamed more? Analysis of faulting artificial intelligence for self-driving car accidents in experimental settings. International Journal of Human-Computer Interaction, 36(18), 1768-1774. https://www.tandfonline.com/doi/abs/10.1080/10447318.2020.1785693
Karnouskos, S. (2020). Self-Driving Car Acceptance and the Role of Ethics. IEEE Transactions on Engineering Management, 67(2), 252-265. https://doi.org/10.1109/TEM.2018.2877307
Malle, B., & Scheutz, M. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction.
Othman, K. (2021). Public acceptance and perception of autonomous vehicles: a comprehensive review. AI and Ethics, 1, 355-387. https://link.springer.com/article/10.1007/s43681-021-00041-8#article-info
Report of the Ethics Committee. (2017). Automatisiertes und Vernetztes Fahren. BMVI. Retrieved January 8, 2022, from https://www.bmvi.de/SharedDocs/DE/Publikationen/DG/bericht-der-ethik-kommission.html
Sandel, M. J. (2011). Justice: What's the Right Thing to Do? Allen Lane.
Setting the standards for autonomous driving. (2021). ITU. Retrieved January 8, 2022, from https://www.itu.int/hub/2021/03/setting-the-standards-for-autonomous-driving/
Sutfeld, L., & Gast, R. (2017). Using virtual reality to assess ethical decisions in road traffic scenarios: Applicability of value-of-life-based models and influences of time pressure. Frontiers in Behavioral Neuroscience, 11.
Tay, K. (2019). Ethical Implications for Autonomous Vehicles. Doctoral dissertation, University of Wyoming. https://hdl.handle.net/20.500.11919/4409
Zhang, Q. (2021). Individual Differences and Expectations of Automated Vehicles. International Journal of Human-Computer Interaction. https://www.tandfonline.com/doi/abs/10.1080/10447318.2021.1970431?src=&journalCode=hihc20