Autonomous Weapons & Ethics: Understanding the Issue
Harriet Gaywood
An expert in PR, strategic communications, and crisis management with over 25 years of experience in China and APAC.
(Prepared as part of a submission for the MA in International Affairs, King’s College London, February 2022)
Overview
How can war ever be ethical? This seemingly paradoxical question has been asked for over two thousand years in the jus ad bellum (Just War) tradition, whether in formally declared wars or informal conflicts. The tradition, which is accepted by most industrialized nations (Raden 2021), provides a structure for argument and theoretical debate, whether pacifist or realist (Whetham 2010: 11). In the 19th century, the military theorist Clausewitz distinguished between ‘absolute war’ and ‘real’ war, recognizing that the latter would be “restricted by a range of factors, including friction and the influence of policy” (Lonsdale 2010: 34). As he observed, weapons and the character of war continually evolve even though the nature of war remains unchanged (Mewett 2014).

Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear weapons (Lee 2021). Australia, the US, China, Israel, South Korea, Russia, and the UK are known to be actively researching and developing autonomous weapons (Raden 2021), with the US and China leading a “nation-states arms race” (Pew 2021), closely followed by Russia. In 2017, Russian President Vladimir Putin attracted worldwide attention when, speaking to students about AI, he said “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender” (Caughill 2017). So why is there such concern? This essay focuses specifically on the ethics associated with lethal autonomous weapon systems (LAWS), including the artificial intelligence (AI) that enables them.
Defining LAWS and AI
Firstly, what are lethal autonomous weapon systems (LAWS)? Many technologies are being developed with autonomy as the eventual goal but are not considered LAWS. Definitions vary in wording but are unified in highlighting the absence of human involvement. For example, the anti-autonomous-weapons NGO the Future of Life Institute defines them as follows:
“Slaughterbots, also called “lethal autonomous weapons systems” or “killer robots”, are weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without human intervention” (Future of Life Institute, November 30, 2021).
The philosopher Peter Asaro is careful to distinguish an automated weapon from an autonomous weapon, defining the latter as “any system that is capable of targeting and initiating the use of potentially lethal force without direct human supervision and direct human involvement in lethal decision-making” (Asaro 2012: 689).
Meanwhile, the U.S. Department of Defense (DoD) continues to use its 2012 definition: “A weapon system that, once activated, can select and engage targets without further intervention by a human operator.” The DoD also distinguishes between autonomous weapons, human-supervised autonomous weapons (which keep a human “on the loop” with an override switch), and semi-autonomous weapons (which engage only targets selected by a human operator) (DoD Directive 3000.09: 13-14).
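The practical difference between these categories is where, if anywhere, a human decision sits in the engagement sequence. A minimal sketch of that distinction, written for illustration only (the names, structure, and logic are hypothetical and correspond to no real system), might look like this:

```python
# Toy model of the human "in the loop" / "on the loop" / "out of the loop"
# distinction drawn in DoD Directive 3000.09. Entirely hypothetical.
from enum import Enum, auto

class ControlMode(Enum):
    SEMI_AUTONOMOUS = auto()    # human "in the loop": selects every target
    HUMAN_SUPERVISED = auto()   # human "on the loop": can override or abort
    FULLY_AUTONOMOUS = auto()   # no further human intervention once activated

def may_engage(mode: ControlMode, target_selected_by_human: bool,
               human_abort_signal: bool) -> bool:
    """Return True if engagement is permitted under the given control mode."""
    if mode is ControlMode.SEMI_AUTONOMOUS:
        # Engage only targets a human operator has explicitly selected.
        return target_selected_by_human
    if mode is ControlMode.HUMAN_SUPERVISED:
        # The machine selects targets, but a human can still stop it.
        return not human_abort_signal
    # Fully autonomous: the ethical debate centers on the fact that
    # nothing in this branch consults a human at all.
    return True
```

Simple as the sketch is, it makes the stakes concrete: the three modes differ only in which human signal, if any, the final decision depends on.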
For this essay, artificial intelligence (AI) in the context of LAWS is defined as “a growing resource of interactive, autonomous, and self-learning agency, which can be used to perform tasks that would otherwise require human intelligence to be executed successfully” (Floridi & Cowls 2019, in Taddeo et al. 2021: 1709).
Legal Conventions & Frameworks
If a nation does not have competitive military capabilities, how can it maintain its national security? What is ethical and unethical in weapons development at the national and international levels? The jus ad bellum principles can be applied to LAWS, considering traditional criteria such as self-defense and the need to protect society; proportionality (linking war, and the use of LAWS within it, to the objectives of war); and lawful authority (KEATS 2022). These ethical constraints must also be accompanied by a legal framework covering not only jus ad bellum but also jus in bello (conduct in war); otherwise “war is nothing more than the application of brute force, logically indistinguishable from mass murder” (Bellamy 2006, quoted in Whetham 2010: 1).
Jus ad bellum provides a framework for international law regarding LAWS, but defining principles such as discrimination and proportionality can be difficult. International Humanitarian Law (IHL) articulates the principles of distinction and proportionality, specifically prohibiting attacks whose expected harm is excessive compared to the anticipated military advantage. In 2021, the 55 countries meeting under the UN Convention on Certain Conventional Weapons (CCW), originally adopted in 1980, questioned whether the IHL wording on proportionality was adequate, given what it requires of humans. Shouldn’t LAWS follow the same rules of engagement (ROE) set out by international conventions? Reed (2022: 83) argues that jus ad bellum cannot be applied to LAWS because they are inherently malum in se (wrong in itself). Horowitz (2016: 26) asks whether “the use of LAWS could comply broadly with the protection of life in war, a core ethical responsibility for the use of force; whether LAWS can be used in ways that guarantee accountability and responsibility for the use of force; and whether there is something about machines selecting and engaging targets that makes them ethically problematic”. Asaro (2012: 701) is more direct: in the interest of respecting human dignity, “justice itself cannot be delegated to automated processes”.
Ethical Theory
Deontological ethical theory, generally associated with the philosopher Immanuel Kant, follows ‘universal moral laws’ such as “do not steal”. It stresses that the ends do not justify the means: actions should be judged against rules and norms rather than by their outcomes. By contrast, in consequentialist theory, acts are considered in their relevant context and by what they achieve rather than in isolation (Whetham 2010: 12). The best-known branch of consequentialism is utilitarianism, a theory developed by the philosophers Jeremy Bentham and John Stuart Mill. In a utilitarian approach, the “optimal decision favors the best outcome for the largest number of people” (KEATS 2022). An example was US President Truman’s decision to drop atomic bombs on Hiroshima and Nagasaki in 1945, “justified in consequentialist terms, as fewer people died as a result of these two bombs than would have in the expected conventional invasion of Japan” (Whetham 2010: 13).
Deontology and consequentialism are important ethical theories for considering LAWS, since political theories such as realism argue that, at the national and international level, “universal moral principles cannot be applied to the actions of the state”, holding a Machiavellian belief that men are essentially evil (Lonsdale 2010: 30). Despite this, Lonsdale argues that elements of realism do have some moral basis, explaining that “The notion that war serves policy, and therefore does not have its own independent rationale, can be considered a statement of morality”.
Why are lethal autonomous weapon systems (LAWS) causing so much concern?
Trust in machines
Wars are generally unpredictable and chaotic, while humans are fundamentally “fallible, emotional and irrational” (Emery 2021), which affects their ability to make rational decisions in conflict situations. So perhaps machines are more reliable and might make better decisions than humans. Others, however, would argue that humans are simply prone to “automation bias”: trust in and dependency on technology, and a “tendency to defer” to machines (Zerilli et al. 2019: 563).
Critics have pointed to the inability of machines to discriminate in decision-making, leading to errors such as targeting a school rather than a military academy. Rather than being trustworthy, machines are in fact fragile because they depend on how they are programmed and trained to operate in differing scenarios; Asaro (2012: 688) argues that the probability of error is therefore high. Tucker (December 2021) illustrated the challenge of “brittle AI”: a small adjustment to the conditions under which recognition of surface-to-surface missiles was tested caused accuracy to drop from around 90% to just 25%. This is one of the reasons why initial concerns about LAWS focused on their compliance with international humanitarian law (IHL) and the requirements of the Geneva Conventions.
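This failure mode, often called distribution shift, is easy to reproduce outside any weapons context. The following minimal sketch uses synthetic data (not anything from Tucker’s report) to show a model that scores well under the conditions it was trained for and falls to roughly chance when those conditions change slightly:

```python
# Illustrative only: a classifier that is accurate on data matching its
# training conditions collapses when the inputs are rotated, standing in
# for something like a changed sensor angle. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=0)

def make_data(n_per_class):
    """Two Gaussian classes separated along the first feature axis."""
    x0 = rng.normal(loc=[-1.5, 0.0], scale=1.0, size=(n_per_class, 2))
    x1 = rng.normal(loc=[+1.5, 0.0], scale=1.0, size=(n_per_class, 2))
    return np.vstack([x0, x1]), np.array([0] * n_per_class + [1] * n_per_class)

X_train, y_train = make_data(1000)
model = LogisticRegression().fit(X_train, y_train)

# Test data drawn from the same distribution as the training data:
# accuracy is around 93% (the classes overlap slightly).
X_test, y_test = make_data(1000)
print("matched conditions:", accuracy_score(y_test, model.predict(X_test)))

# A "small adjustment to conditions": rotate the inputs 90 degrees.
# The labels have not changed, but the learned decision boundary is now
# uninformative, so accuracy falls to roughly chance (about 50%).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print("shifted conditions:", accuracy_score(y_test, model.predict(X_test @ R.T)))
```

The point is not the particular numbers but that the model gives no warning of the collapse: it returns confident predictions either way, which is precisely the worry when such systems sit in a targeting chain.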
Empathy & Emotion
LAWS are said to result in fewer casualties by removing humans from conflict situations. However, this could dehumanize targets and erode respect for the value of human life and dignity. Such detachment is reinforced by LAWS, which “will allow leaders and soldiers not to feel ethically responsible for using military force because they do not understand how the machine makes decisions and they are not accountable for what the machine does” (Horowitz 2016: 30). This raises the question of how far a human can be removed from a targeting decision and still meet the jus in bello criteria. Horowitz (2016: 32) explains that LAWS are not moral actors, arguing that “people have the right to be killed by someone who made the choice to kill them”, and that killing in this situation is therefore unethical. In 2019, UN Secretary-General António Guterres told a meeting of autonomous weapons experts that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law” (UN 2019).
Accountability
In democratic states, the military is held accountable for its actions. A recent research paper by Stop Killer Robots (December 2021) sets out the range of weapon types, with wide-ranging capabilities, that can operate without a human operator. Ethical theory therefore needs to be considered and implemented by international organizations and individual states. Snyder (in Lonsdale 2010: 30) argues that the norms referenced in deontology may include ‘sovereignty’, but that these are likely to be put to one side “when something of real significance is at stake”. Similarly, Lonsdale (2010: 35) argues that cultural and peacetime values may not always carry over into wartime; Kenneth A. Grady of The Algorithmic Society calls this “situational ethics” (Pew 2021). Asaro (2012: 688) argues that regardless of the situation, “there is a duty upon individuals and states in peacetime, as well as combatants, military organizations, and states in armed conflict situations, not to delegate to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy.”
Cultural differences
If ethical principles can’t be agreed upon, they can’t be applied. “Unanimity is rarely broader than a single culture, profession or social group,” explains Stephen Downes (Pew 2021). The challenge is compounded by the fact that “modern AI is based on applying mathematical functions on large collections of data”, not on principles or rules; on this basis, Downes argues, “the ethics of AI will be an extension of our own ethics.” Kenneth A. Grady adds that even if ethics can be defined in one country, the fluid, cross-border nature of AI means the definition in another country may differ. The attention paid to ethics suggests they matter to society, yet Marc Brenman (IDARE) argues: “As societies, we are very weak on morality and ethics generally. There is no particular reason to think that our machines or systems will do better than we do” (Pew 2021). If Brenman is correct, then relying on LAWS and their AI capabilities could carry a high level of risk.
Ethical Considerations differ between Big Tech and Governments
The challenge is not only the development of military technology but the process and who develops it. In the past, technology was researched by military institutes; now it is often developed by commercial, civilian organizations contracted by governments to work on specific projects. This context matters because accountability is a key part of the ethical debate. Developers may have less interest in, or less recognition of, the ethical implications of a technological breakthrough, but corporations, particularly those with global interests, may raise ethical objections to certain projects. For example, in 2018 the Future of Life Institute gathered over 200 organizations and 3,000 individual signatories (including Elon Musk, Google DeepMind’s founders, and the CEOs of various robotics companies) to a pledge that they would “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons” (Future of Life Institute, June 5, 2018).
Is it appropriate for developers to be held responsible for their products? Heaven (2021) argues that governments must provide ethical guidelines to partners such as Big Tech and non-military organizations. The US Department of Defense, for example, has partnered with Amazon and Microsoft, and on Project Maven sought to develop AI for surveillance (Google employees refused to take part, saying “We believe that Google should not be in the business of war”); it released ethical guidelines for AI in 2020 (DoD 2020). Although the guidelines offer a process, they do not cover topics such as LAWS, and this gap has created a trust issue between the US military and its potential civilian partners.
Global values or just democratic values?
A lack of agreement on ethics makes it difficult to establish international conventions on the use of LAWS. There is not only the technological challenge of “unpredictable algorithms interacting between countries” (Horowitz 2016: 28) but, more fundamentally, as Randall Myers of TechCast Global (Pew 2021) says, “American, European and Chinese governments and Silicon Valley companies have different ideas about what is ethical. How AI is used will depend on your government’s hierarchy of values among economic development, international competitiveness, and social impacts.” As LAWS evolve, so do morals and value systems; Jim Witte (Center for Social Science Research, George Mason University) cautions against viewing ethics and morals as static (Pew 2021).
In its Interim Report (November 2019), the US National Security Commission on Artificial Intelligence outlined three components of ethical and trustworthy AI for national security. In addition to ethical design, development, and the protection of rights, the technology “must align with America’s democratic values and institutional values” (NSCAI 2019: 48). Meanwhile, organizations such as NATO advocate “NATO-mation” to ensure that “its values, ethical stances and moral commitments will remain central in a rapidly changing security environment” (Gilli 2019: 1). So can the values of such organizations be aligned? Adam Clayton Powell III (USC Annenberg Center on Communication Leadership and Policy) argues that governments are more concerned with self-preservation than with ethics (Pew 2021).
Conclusion
Concerns about LAWS appear to be justified, and there is a sense of urgency because the technology is already in use. A letter by the UN Panel of Experts on Libya (UN 2021: 17) records the first known use of LAWS in combat, in Libya in March 2020, involving an STM Kargu-2 and other loitering munitions. In May 2021, Israel used drone-swarm weapons for the first time (Stop Killer Robots 2021).
In the interests of protecting the rights of humanity and promoting justice, decision-making in war cannot be replaced by algorithms. With development in the hands of commercial organizations, the technology is becoming cheaper and easier to obtain, which means non-state actors will also be able to acquire it. The Future of Life Institute (November 30, 2021) argues that this technology brings a threat of “proliferation, rapid escalation, unpredictability, and even the potential for weapons of mass destruction.” It is therefore essential that the UN Convention on Certain Conventional Weapons (CCW) reaches a consensus between states during its meetings in 2022. Without an international legal framework to support ethical considerations, the nation that leads in technology will drive the ethics of war.
Bibliography
Gómez de Ágreda, Ángel (2020) “Ethics of autonomous weapons systems and its applicability to any AI systems” Telecommunications Policy 44(6): 101953. https://doi.org/10.1016/j.telpol.2020.101953
Asaro, Peter (2012) “On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making” International Review of the Red Cross 94: 687-709. doi:10.1017/S1816383112000768
Bendett, Samuel (2018) “In AI, Russia is hustling to catch up” Defense One, April 4, 2018 https://www.defenseone.com/ideas/2018/04/russia-races-forward-ai-development/147178/ Accessed February 22, 2022
Bruun, Laura “Autonomous weapon systems: what the law says – and does not say – about the human role in the use of force” Humanitarian Law and Policy Blog, November 11, 2021 https://blogs.icrc.org/law-and-policy/2021/11/11/autonomous-weapon-systems-law-human-role/
Caughill, Patrick “Vladimir Putin: Country that leads in AI development “Will be the Ruler of the World”” Futurism, September 2, 2017 https://futurism.com/vladimir-putin-country-that-leads-in-ai-development-will-be-the-ruler-of-the-world Accessed February 24, 2022
Defense Innovation Board “Campaign for an AI-ready force” October 31, 2019 https://media.defense.gov/2019/Oct/31/2002204191/-1/-1/0/CAMPAIGN_FOR_AN_AI_READY_FORCE.PDF
Department of Defense “Autonomy in Weapon Systems” DIRECTIVE 3000.09 Issued 20 November, 2012 (updated 2017) https://irp.fas.org/doddir/dod/d3000_09.pdf
DoD “DOD adopts ethical principles for artificial intelligence” Feb 24, 2020 https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/
Emery, John R. (2021) Algorithms, AI, and Ethics of War, Peace Review, 33:2, 205-212, DOI: 10.1080/10402659.2021.1998749
Etzioni, Amitai & Eren Etzioni “Pros and Cons of Autonomous Weapons Systems” Military Review, May-June 2017 https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/ Accessed February 25, 2022
Future of Life Institute “Lethal Autonomous Weapons Pledge” June 5, 2018 https://futureoflife.org/2018/06/05/lethal-autonomous-weapons-pledge/ Accessed February 25, 2022
Future of Life Institute “An Introduction to the issue of lethal autonomous weapons” November 30, 2021 https://futureoflife.org/2021/11/30/an-introduction-to-the-issue-of-lethal-autonomous-weapons/ Accessed February 20, 2022
Future of Life Institute “10 reasons why autonomous weapons must be stopped” November 27, 2021 https://futureoflife.org/2021/11/27/10-reasons-why-autonomous-weapons-must-be-stopped/ Accessed February 20, 2022
Gilli, Andrea “Preparing for “NATO-mation”: the Atlantic Alliance toward the age of artificial intelligence” NATO Defense Policy Brief No. 4 February 2019
Heaven, Will Douglas, “The Department of Defense is issuing AI ethics guidelines for tech contractors” MIT Technology Review, November 16, 2021
Horowitz, Michael C. (2016) “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons” Daedalus 145(4): 25-36. https://doi.org/10.1162/DAED_a_00409
International Committee of the Red Cross, Customary IHL Database, Rule 14 (Proportionality in Attack) https://ihl-databases.icrc.org/customary-ihl/eng/docindex/v1_rul_rule14
KEATS (2022) “Unit 3: Ethics and AI” 7SSDJ187 Strategy in the Age of Artificial Intelligence (21-22 000001), King’s e-Learning and Teaching Service
Lee, Kai-Fu “The Third Revolution in Warfare” The Atlantic, September 2021 https://www.theatlantic.com/technology/archive/2021/09/i-weapons-are-third-revolution-warfare/620013/ Accessed February 24, 2022
Mewett, Christopher “Understanding War’s Enduring Nature Alongside its Changing Character” War on the Rocks, January 21, 2014 https://warontherocks.com/2014/01/understanding-wars-enduring-nature-alongside-its-changing-character/
National Security Commission on Artificial Intelligence (NSCAI) “Interim Report” (November 2019) https://www.nscai.gov/wp-content/uploads/2021/01/NSCAI-Interim-Report-for-Congress_201911.pdf
Neslage, Kevin “Does "Meaningful Human Control" Have Potential for the Regulation of Autonomous Weapon Systems?” University of Miami National Security and Armed Conflict Law Review, April 2019 https://repository.law.miami.edu/cgi/viewcontent.cgi?article=1092&context=umnsac
Patterson, Dillon E. “Ethical implications for lethal autonomous weapons” Harvard Kennedy School Belfer Center for Science & International Affairs, June 2020 https://www.belfercenter.org/publication/ethical-imperatives-lethal-autonomous-weapons Accessed February 26, 2022
Pax for Peace “Increasing autonomy in weapons systems: 10 examples that can inform thinking” December 2021 https://paxforpeace.nl/media/download/Increasing%20autonomy%20in%20weapons%20systems%20-%20FINAL.pdf Accessed February 22, 2022
Pew Research Center “Worries about developments in AI” June 16, 2021 https://www.pewresearch.org/internet/2021/06/16/1-worries-about-developments-in-ai/
Lonsdale, D. (2010) “A View from Realism”, Chapter 2 in Whetham, D. (ed.) Ethics, Law and Military Operations. Bloomsbury Publishing Plc
Palmer, G.G. (2020), "AI Ethics: Four Key Considerations for a Globally Secure Future", Masakowski, Y.R. (Ed.) Artificial Intelligence and Global Security, Emerald Publishing Limited, Bingley, pp. 167-176. https://doi.org/10.1108/978-1-78973-811-720201010
Raden, Neil “AI ethics have consequences - learning from the problem of autonomous weapons systems” Diginomica, July 5, 2021 https://diginomica.com/ai-ethics-have-consequences-learning-problem-autonomous-weapons-systems Accessed February 22, 2022
Reed, E.D. (2022) “Truth, Lies and New Weapons Technologies: Prospects for Jus in Silico?” Studies in Christian Ethics 35(1): 68-86. doi:10.1177/09539468211051240
Stop Killer Robots “Historic opportunity to regulate killer robots fails as a handful of states block the majority” December 17, 2021 https://www.stopkillerrobots.org/news/historic-opportunity-to-regulate-killer-robots-fails-as-a-handful-of-states-block-the-majority/
Snyder, Glenn H. (1996) “Process variables in neorealist theory” Security Studies 5(3): 167-192. DOI: 10.1080/09636419608429279
Taddeo, M., McNeish, D., Blanchard, A. et al. (2021) “Ethical Principles for Artificial Intelligence in National Defence” Philosophy & Technology 34: 1707-1729. https://doi.org/10.1007/s13347-021-00482-3
Tozman, Gökhan “The rise of China in the cyber domain: the case for cyber security intelligence for NATO” SC 138 The College Series No. 19, December 2021, NATO Defense College
Tucker, Patrick “This Air Force Targeting AI Thought It Had a 90% Success Rate. It Was More Like 25%” Defense One, December 9, 2021 https://www.defenseone.com/technology/2021/12/air-force-targeting-ai-thought-it-had-90-success-rate-it-was-more-25/187437/ Accessed February 25, 2022
UN “Machines Capable of Taking Lives without Human Involvement Are Unacceptable, Secretary-General Tells Experts on Autonomous Weapons Systems” March 25, 2019 https://www.un.org/press/en/2019/sgsm19512.doc.htm
UN “Letter dated 8 March 2021 from the Panel of Experts on Libya Established pursuant to Resolution 1973 (2011) addressed to the President of the Security Council” United Nations Digital Library, 8 March, 2021 https://digitallibrary.un.org/record/3905159#record-files-collapse-header
Whetham, D. (2010). Ethics, law and military operations. Bloomsbury Publishing Plc
Zerilli, J., Knott, A., Maclaurin, J. et al. (2019) “Algorithmic Decision-Making and the Control Problem” Minds & Machines 29: 555-578. https://doi.org/10.1007/s11023-019-09513-7