Introduction to ethics in the use of AI in war: Part 2
Abhishek Gupta
Founder and Principal Researcher, Montreal AI Ethics Institute | Director, Responsible AI @ BCG | Helping organizations build and scale Responsible AI programs | Research on Augmented Collective Intelligence (ACI)
Building on Part 1 of this article, let's dive into some more ideas in the discussion of ethics in the use of AI in war.
In Part 1, I covered:
- the basics: autonomous weapons systems, semi-autonomy, full autonomy, lethal use, and non-lethal use
- the potential advantages and costs
If you haven't had a chance to read the first part yet, I strongly encourage you to do so, as we will build on the definitions explained there to discuss the issues in this one.
Let's dive into:
- Current limitations of ethics principles
- Key Issues - Part 1
1. Current limitations of ethics principles
The discussions surrounding the ethics of the use of AI in war, though constantly evolving, still have a few key gaps. Without addressing them, we risk using these systems in ways that cause significant harm, and the costs outlined in the previous article will start to outweigh the potential benefits much more heavily.
A. Varying degree of emphasis on AI safety and reliability
At the moment, given how the issues are framed in this subdomain, AI safety and reliability receive varying degrees of emphasis. This is a problem because many unintended and emergent behaviours can arise from the use of AI in the theatre of war, especially unintended escalations caused by misread intent and aims when programmatic entities interact with each other in a volatile environment.
And this is before we account for adversarial manipulation of AI systems on the battlefield: confounding examples crafted with the explicit aim of eliciting edge-case broken behaviours from the system, which can trigger a cascade of other actions leading to escalation and unnecessary harm.
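To make the idea of a confounding (adversarial) example concrete, here is a minimal, purely illustrative Python sketch. Everything in it is an assumption made up for the example: the tiny logistic-regression "classifier", its weights, the input, and the perturbation budget bear no relation to any real system. It only shows the general mechanism by which a small, deliberately crafted change to an input can push a model toward the wrong decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny, made-up "classifier": weights of a trained logistic regression
# that scores an input as "threat" (1) or "non-threat" (0).
w = np.array([1.5, -2.0, 0.5])   # hypothetical learned weights
b = -0.25                        # hypothetical learned bias

def predict(x):
    return sigmoid(w @ x + b)    # probability of class "threat"

# A benign input the model currently classifies as non-threat.
x = np.array([-0.4, 0.9, 0.1])
y = 0.0                          # true label: non-threat

# Fast Gradient Sign Method (FGSM): nudge the input in the direction
# that most increases the model's loss, within a small budget epsilon.
p = predict(x)
grad_x = (p - y) * w             # gradient of cross-entropy loss w.r.t. x
epsilon = 0.8                    # perturbation budget (illustrative only)
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean input     -> P(threat) = {predict(x):.3f}")
print(f"perturbed input -> P(threat) = {predict(x_adv):.3f}")
# A small, structured change flips the decision toward "threat",
# which is the kind of confounding example described above.
```

The same basic pattern, scaled up to perception models operating on sensor data, is what makes adversarial robustness such a pressing concern for fielded systems.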
B. Underfunding of long-term AI security research
There are certain issues in this domain that still require further examination: adversarial robustness, along with other long-run security implications such as the impact of the degree of autonomy on the dynamics of human-machine interaction in war. These are under-explored, under-funded, and under-emphasized because of the more immediate focus on some of the ethical challenges that the use of AI poses.
This is not to say that those challenges are unimportant; they most certainly are. But insufficient investment in this crucial long-term security research will leave us with limited capabilities to protect ourselves. This is especially true when actors who do not believe in abstaining from the use of AI systems in war choose to deploy them, raising the stakes for other state actors who are unprepared to tackle the impacts of those systems on the battlefield.
C. Soft Law is not enough
Most current discussions on this subject push for soft law approaches in the interim, until we arrive at binding legal instruments, which precedent in the domain of nuclear security has shown can take a long time to bear fruit. While there is no harm in advocating such soft law and self-regulatory approaches, the likelihood that state and non-state actors who don't believe in these ideas in the first place will adhere to them is very low. This only weakens the position, on the international stage, of those states that do align with these ethical principles and are likely to adhere to them.
D. Convergence on key themes but still divorced from concrete implementations
As is the case with the larger AI ethics domain, since 2017 there has been a great deal of work done in the theoretical analysis of the issues in the space — so much so that it may have become detrimental to actually moving from principles to practice.
The work on ethics in the use of AI in war is in similar waters: there is a lot of high-level convergence on the key ideas, but still a wide gap from the actual, current capabilities and limitations of AI systems as they apply to war. We are running into the classic problem of proposals that either under- or over-regulate how such systems should be governed. We may also end up focusing on the wrong areas, either because the core issues are poorly understood or because there is limited understanding of what the systems are actually capable of, falling into the classic trap of overestimating capabilities in the short run and underestimating them in the long run.
E. Lack of firm governance structures
There is also fragmentation in governance approaches that puts the entire modus operandi for dealing with AI in the theatre of war at risk. Some argue for distinct governance mechanisms because of the unique challenges that AI poses, while others argue for folding this governance into existing structures and empowering those in the current chain of command to understand and address the challenges that come with the use of AI.
I am a strong advocate for using existing chain-of-command structures as the governance mechanism, because that allows us to leverage best practices that have taken decades to form and have been battle-tested and honed over the years. It also has the advantage of keeping the locus of accountability and control with the humans who have been trained to operate in a war environment, and it keeps at our disposal existing legal mechanisms, courts, and laws to adjudicate violations that occur due to machines, humans, or a combination of both.
F. Potentially incorrect policy precedents
As such systems are rushed onto the battlefield, potentially out of fear of missing out or of being out-maneuvered by an adversary, we risk setting incorrect policy precedents for future uses and for even more advanced versions of these systems, which can fail in unexpected ways.
The policy precedent problem potentially extends beyond warfighting to other mission-critical domains, where we risk normalizing the rushing of systems with significant impacts on human life into production and use while deferring judgement on whether the benefits outweigh the costs.
2. Key Issues - Part 1
There is a large set of issues when it comes to the automation of warfare; some are technological, others organizational and societal in nature. All of them warrant further examination to ensure that we are building systems that might help obviate their use in the first place, or at least help make warfare more humane by minimizing unnecessary harm.
A. Lack of ability to recall the system
When a system is imbued with enough autonomy to make decisions independently (say, when disconnected from control), without failsafe mechanisms and under evolving uncertainty, there is a high risk of unintended behaviour as it encounters challenges on the battlefield, both physical and virtual.
Building on the subject of inadvertent escalation, this is a prime candidate for a situation where our inability to recall the system can lead to additional harm, especially given the pace of decision-making and execution that automated systems have compared to humans. This can create a runaway feedback loop with other automated systems, both friendly and adversarial, that further compounds the issues in the space.
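As a purely hypothetical sketch of what a recall failsafe could look like, the snippet below models a control loop that refuses to keep acting once it has lost contact with its operators for too long. The `CommandLink` class, the timeout value, and the "hold" behaviour are all assumptions invented for illustration, not a description of any real system; the point is simply that without such a mechanism, an autonomous loop keeps executing its last instructions even after humans can no longer intervene.

```python
import time
from dataclasses import dataclass

@dataclass
class CommandLink:
    """Hypothetical uplink to human controllers (illustrative only)."""
    last_heartbeat: float  # timestamp of the last confirmed contact

    def seconds_since_contact(self) -> float:
        return time.monotonic() - self.last_heartbeat

RECALL_TIMEOUT_S = 30.0  # assumed threshold; in practice a policy decision

def control_step(link: CommandLink, proposed_action: str) -> str:
    """Decide what the system does on each loop iteration.

    Without a check like this, the system would keep executing
    proposed_action indefinitely after losing contact, i.e. it
    could no longer be recalled.
    """
    if link.seconds_since_contact() > RECALL_TIMEOUT_S:
        # Failsafe: revert to a non-escalatory holding behaviour
        # instead of continuing to act on stale instructions.
        return "hold_position_and_await_contact"
    return proposed_action

# Toy usage: the link went silent 45 seconds ago.
stale_link = CommandLink(last_heartbeat=time.monotonic() - 45.0)
print(control_step(stale_link, "engage_next_waypoint"))
# -> "hold_position_and_await_contact"
```

Even a simple timeout like this is only a partial answer, since an adversary can deliberately jam the link to force the holding behaviour, but it illustrates the kind of recall guarantee that is often missing from the discussion.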
B. Rigid targeting
One of the key skills that trained human officers have is the ability to assimilate new information and reorient targeting and strategic decision-making as that information comes to light. This might be challenging for automated systems that haven't been adequately battle-tested and whose capabilities are therefore constrained (sometimes rightfully so) to the targets they learned during the training phase of the AI development lifecycle.
But this rigidity becomes a liability when emergent circumstances on the battlefield, through diplomatic or other changes, warrant a re-evaluation and the system cannot update its internal representation to incorporate the new information, potentially leading to unnecessary casualties.
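One way to frame the rigidity problem in engineering terms is to treat targeting information as perishable. The sketch below is a toy illustration built entirely on invented assumptions (the `TargetDirective` structure, the validity window, and the ceasefire flag are all hypothetical); it only shows the general pattern of refusing to act on information that may have been overtaken by events and forcing the decision back to a human.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class TargetDirective:
    """Hypothetical directive issued by human commanders (illustrative)."""
    target_id: str
    issued_at: datetime
    ceasefire_in_effect: bool = False

# Assumed policy: directives expire quickly so that diplomatic or other
# changes force a fresh human re-evaluation rather than stale execution.
MAX_DIRECTIVE_AGE = timedelta(hours=6)

def may_engage(directive: TargetDirective, now: datetime) -> bool:
    if directive.ceasefire_in_effect:
        return False   # new information overrides old orders
    if now - directive.issued_at > MAX_DIRECTIVE_AGE:
        return False   # too old: require re-authorization by a human
    return True

now = datetime.now(timezone.utc)
old_order = TargetDirective("T-17", issued_at=now - timedelta(days=2))
print(may_engage(old_order, now))  # False: stale, must go back to a human
```

The expiry window itself is a policy choice rather than a technical one, which is exactly why these design decisions belong in the governance discussion rather than being left to implementers.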
C. Democratizing the ability to render harm
Finally, and perhaps most importantly, automated systems democratize (in a bad way) the ability to render harm. We have seen this dynamic play out in the information ecosystem, where bots manipulate the conversation and nudge the direction of discourse in ways that harm communities, further polarizing them and presenting them with only a sliver of the actual state of the information ecosystem. Essentially, a small number of actors have been able to scale their disinformation efforts with the help of automation.
Automated weapons can do the same in the physical world, with the potential to cause far greater damage, by giving a small group tools that let it operate on par with smaller nation states (perhaps not the largest militaries, which retain additional advantages from sprawling infrastructure and access to resources). This is highly problematic because it can further destabilize the world and lead us to a place where unexpected threats emerge from small actors who have managed to get their hands on these automated weapons.
Conclusion
I had hoped to cover all the material for this introduction in two parts, but there is a bit more to the discussion, so please stay tuned for Part 3 of this article, which will cover the following:
- Key Issues - Part 2
- Open Questions
In the meantime, I would be delighted to hear from you on what you think can be done to achieve more comprehensive coverage of the ethics principles and key issues in the use of AI in war.
More such ideas and insights are featured in The AI Ethics Brief, put together by the team at the Montreal AI Ethics Institute.