AI ethics has consequences - learning from the problem of autonomous weapons systems
Neil Raden
Author, Advisor, Mathematician. Thinkers360 Global Thought Leader/Influencer in AI, Analytics, Predictive Analytics, National Security, GenAI, International Relations, Design Thinking, InsurTech, Quantum, and Health Tech
SUMMARY:
AI ethics can seem like an academic exercise - that's not the case for autonomous weapons. A defining AI ethics issue for our time is heating up.
First of all, I want to state for the record that I have never played a video game that involved violence or war. I think the last time I played a "video game" was Flight Simulator. As a result, I suspect some readers are much more familiar with intensive and fanciful warfare than I am.
Still, recently, I've been part of discussions with the Department of Defense and organizations that advise, consult and criticize the DoD on the topic of AI in warfare.
It is complicated to introduce AI ethics via the violence and killing of war. However, it is also undeniable that the U.S., its allies, and its opponents are all rapidly developing AI for their weapons systems. Anyone who cares about applying AI ethics to real-world scenarios should have a stake in this one.
One argument for AWS (not Amazon in this case, but Autonomous Weapon Systems) is a projected reduction in civilian casualties. I don't see a compelling argument for this. Automated or not, these weapons still blow things up: people, buildings, hospitals, bridges.
In every major conflict, it is civilians who take the brunt of war. In 2003, the Oxford economist Paul Collier stated in a World Bank research report, Breaking the Conflict Trap: Civil War and Development Policy, that taking fatalities and population displacements together, nearly 90 percent of the casualties resulting from armed conflict in modern wars were civilians. Most estimates suggest that some 75 million people died in WWII, including about 20 million military personnel and 40 million civilians.
The ethics of autonomous weapons systems
In principle, most industrialized nations have a widely accepted concept, the "Just War" theory. It addresses the ethics of civilian casualties with a (ridiculous) concept of proportionality: in Just War theory, if the projected good to be achieved by war is proportional to the ultimate destruction from the violence, the war is justified. Who makes this estimate? I've written about the various ethics and moral philosophy threads in many articles related to AI. This is an obvious conclusion of utilitarianism, the ethical system which holds that the morally correct action is the one that does the most good. I'm afraid I have to disagree with this.
I have lots of esteemed company. Moral philosophers vigorously contest this approach to war on moral grounds. I find it ridiculous that you can project the most good from killing and displacing millions of innocent people. As long as we're on the topic of ethics, moral absolutists believe that every action is inherently right or wrong. One such rule is that non-combatants cannot be attacked because they are, by definition, not partaking in combat. Thus, by the absolutist view, only enemy combatants can be attacked. The philosopher Thomas Nagel advocates this absolutist rule in his essay "War and Massacre."
What do war-making organizations think AWS is capable of? First of all, Paul Virilio wisely quipped, "The invention of the ship was also the invention of the shipwreck." There is no question that the development of AWS will provoke, and already has provoked, a new and deadly arms race. And it's a shame that the central driver of it is AI, which is still a flawed, uncertain technology. As a result, assigning responsibility for irresponsible or unlawful acts executed by AWS is murky: the AI engineers, the defense contractor, the chain of command, or even the machine itself (punished, perhaps, by withholding an oil change)? This accountability gap would make it difficult to ensure justice, especially for victims.
The U.S. defines Autonomous Weapons Systems (AWS) in the Department of Defense publication The Ethics of Autonomous Weapon Systems as "a weapon system(s) that, once activated, can select and engage targets without further intervention by a human operator." Since the crucial distinguishing mark of human reasoning is the capacity to set ends and goals, AWS suggests for the first time the possibility of eliminating the human operator from the battlefield. Therefore, the development of AWS technology on a broad scale represents the potential for a transformation in the structure of war that is qualitatively different from previous military technological innovations.
Unbundling autonomous weapons issues - point versus counterpoint
Point: Deciding who lives or dies without human intervention (assuming that intervention would be more merciful than a machine, which is not assured) crosses a moral threshold.
Counterpoint: We do not know whether machines could have even a facsimile of the compassion necessary to make complex ethical choices.
Point: The US, China, Israel, South Korea, Russia, and the UK are all developing AWS that can select and attack targets. Parity at least guarantees security.
Counterpoint: Without international treaties, we could be entering a worldwide arms race. Nuclear weapons can destroy the world, which carries a strong motivation not to use them; autonomous weapons would merely make a mess of it, so mutually assured destruction is not a likely deterrent.
Point: AWS offers the ability to replace troops with machines.
Counterpoint: It could hasten the decision to start hostilities and shift the burden of conflict even further onto civilians. AWS is likely to make horrible mistakes with unanticipated consequences that could cause tensions to rise unnecessarily.
My take
Mass killing of combatants and non-combatants, massive destruction of property and infrastructure, and the poisoning of the environment are pretty shaky ground for ethics, especially for AI, which we all hope will develop for the betterment of the world, not its destruction.
But the train has already left the station. So the only hope, and it's probably a slim one, is to keep humans in the loop over targeting and execution decisions, and to ban the development of fully autonomous weapons through international law and treaty.
Here is an ethical principle: technology organizations, public or private, including the people developing AI and robotics, should pledge never to be part of the development of fully autonomous weapons. There may be hope, as some have done exactly that. Via Lethal Autonomous Weapons Systems: Recent Developments:
In June 2018, Google came under fire as thousands of its employees signed a petition urging the company to cease involvement in Project Maven - a contract with the Department of Defense to develop artificial intelligence for analyzing drone footage (which Google employees feared could one day facilitate the development or use of LAWS). Facing pressure from employees and technology experts across the globe, Google subsequently announced its decision not to renew its contract for Project Maven and vowed not to 'design or deploy AI … [for] technologies that cause or are likely to cause overall harm.' In July 2018, over 200 organizations and 3,000 individuals (including Elon Musk, Google DeepMind's founders and CEOs of various robotics companies) followed suit, pledging to 'neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.'
But as Thomas J. Watson Sr., the founder of IBM, and Henry Ford both found, there is always money to be made doing the wrong things - in their case, making money helping Hitler.