Introduction to the ethics in the use of AI in war: Part 3
Abhishek Gupta
Founder and Principal Researcher, Montreal AI Ethics Institute | Director, Responsible AI @ BCG | Helping organizations build and scale Responsible AI programs | Research on Augmented Collective Intelligence (ACI)
Building on Part 1 and Part 2 of this series, let's dive into some more ideas in the discussion of ethics in the use of AI in war.
In Part 1, I covered:
- Quick basics covering autonomous weapons systems, semi-autonomy, full autonomy, lethal use, and non-lethal use
- The potential advantages and costs
In Part 2, I covered:
- Current limitations of ethics principles
- Key issues - Part 1
If you haven't had a chance to read the first part and second part yet, I strongly encourage you to do so as we will build on the definitions that were explained in those parts to discuss the issues in this one.
Let's dive into:
- Key issues - Part 2
- Open Questions
1. Key Issues - Part 2
There are some more key issues that bear further examination as we think about whether such systems should be employed in practice. By taking a deeper look at the points raised below, we can arrive at a more nuanced understanding of the potential advantages and costs. Such an approach makes decision-making more robust, especially in the face of urgent and emergent scenarios where we might be pressed to make rapid decisions; having a methodology and reasoning that have been adequately discussed ahead of time will enable us to make better choices.
A. Unreliability of systems
AI systems are inherently probabilistic in nature. Combined with the uncertainty and rapidly evolving circumstances in a theatre of war, we end up with emergent behaviour from the system that leads to unreliability, at least from the perspective of trying to get a stronger grip on the potential pathways through which such systems could respond.
We are faced with a "fog of AI" that makes it hard to assure that the system will behave reliably when it is bombarded with unexpected inputs, whether those are lobbed at it intentionally (adversarial examples) or arise unintentionally.
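To make the adversarial-examples point concrete, here is a minimal sketch of how a small, deliberately chosen nudge to an input can flip a model's decision near its decision boundary. The classifier, its weights, and the input below are entirely made up for illustration; this is a toy logistic model, not a description of any real system.

```python
# Toy illustration of why probabilistic models can be unreliable near a
# decision boundary: a tiny, deliberately chosen perturbation (in the spirit
# of an adversarial example) flips the predicted label even though the input
# barely changes. All weights and inputs are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.2, -0.8, 0.5])    # hypothetical trained weights
b = -0.12                         # hypothetical bias
x = np.array([0.30, 0.40, 0.10])  # a borderline input near the boundary

p_original = sigmoid(w @ x + b)

# Nudge each feature slightly in the direction that most increases the score
# (the sign of the gradient of the score with respect to the input).
epsilon = 0.05
x_perturbed = x + epsilon * np.sign(w)

p_perturbed = sigmoid(w @ x_perturbed + b)

print(f"original score:  {p_original:.3f} -> label {int(p_original > 0.5)}")
print(f"perturbed score: {p_perturbed:.3f} -> label {int(p_perturbed > 0.5)}")
```

Running this, the original input sits just below the 0.5 threshold while the perturbed one sits just above it, so the predicted label flips on a change of only 0.05 per feature. Real systems are far more complex, but this fragility near decision boundaries is the same phenomenon that makes assurance so difficult.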
B. Lack of consensus for an international treaty
The current climate is one strewn with doubts and divergent views. Nation states have met through fora like the Convention on Certain Conventional Weapons (CCW), but the deliberations on lethal autonomous weapons systems (LAWS) have gone on for years with little or no consensus on what direction an international treaty could take.
In the recently published recommendations from the National Security Commission on Artificial Intelligence (NSCAI) in the US, this lack of consensus is exemplified by arguments for why the US should perhaps continue to pursue investment in this space, given the lack of a global ability to monitor and verify adherence to a ban on the development and use of such weapons systems.
C. Lack of unanimity in condemnation of LAWS
None of this development and deployment would be possible without the efforts of academic and industry labs that work on the various subfields of AI that end up powering some of these systems. Because of the multifarious uses of AI across industries, a technique like object detection can help identify all the pictures on your phone that have your dog in them, but it can also be repurposed (with changes, of course) to identify targets on a battlefield.
Perhaps a clarion call from researchers and practitioners to refuse to work on applications that have a strong likelihood of being reused in warfare can buy us more time while we figure out the governance and technical measures that would allow for safe use (if we ever do decide that it makes sense to use such systems in this context).
2. Open Questions
While the key issues highlighted in this article and the previous one shed some light on the things to consider, there are some overarching questions that warrant empirical and theoretical research before we can even apply the nuances that we sought to elicit from the key issues discussion. The goal with these open questions is to provide our community with some potential research directions, not just in the use of AI in war, but also in adjacent domains where similar concerns might arise over time.
A. What is meaningful human control?
There are so many graduated markers when it comes to meaningful human control that it continues to remain an open question. In particular, something that will need further research is the co-evolutionary aspect of what meaningful human control means. Specifically, as novel technology diffuses across the world, the basic level of competency and understanding, along with expectations for the capabilities and limitations of the system, evolve as well. This becomes relevant as new cohorts of operators enter the domain and work alongside veteran operators in the field, leading to disparate skill levels working side by side. In that case, training programs would need to be tailored to meet all of those needs, and the evaluation protocols would perhaps also need to be adapted so that certification happens in an adaptive manner that addresses those differences.
To evade the "human token" problem, we would also need to consider the interactions between the human and the machine in dynamic environments, identifying the places where each component defers to the other in a manner that leverages their unique strengths.
Finally, what counts as "meaningful" should be articulated in a consistent manner across the different agencies that might be utilizing these systems in the field. A unified understanding will help to develop some of the mechanisms mentioned above in a way that helps all the actors across the different agencies move forward in a synchronized manner, rather than leaving behind laggards who can compromise the efficacy of the work done by the early adopters in the field.
B. How to set boundaries between lethal and non-lethal uses?
This is a question for legal experts and military operators: a clear differentiation between these two notions will be essential in providing appropriate guidance to the law-making process and in helping to bring about consensus across the different nation states and other actors as well.
The grey zones at the moment, as we discussed in Part 1 and Part 2, are a great source of consternation, not only because of the ethical conundrums posed by such systems but also because of the kind of standards and practices that should be applied to the procurement and operation of these systems.
In addition, this would have a bearing on the meaningful human control aspect as well, because it will call for a differentiated approach that makes it clear when we have to be more or less rigorous in allowing the system to operate autonomously.
C. What are the spillover impacts on other fields if autonomy is normalized here?
This is a much larger question that perhaps needs to be contended with by any field that is seriously considering the use of autonomous systems. The reason to think about spillover impacts is that AI is inherently multi-use, and its subdomains mingle a great deal through the open-access and open-source publishing model. In addition, with governance frameworks still at a nascent stage of development, we are entering an era where treaties, laws, regulations, and standards will become codified and set us down particular paths.
If we start to normalize certain uses and take for granted some of the collateral damage that might arise from the use of such systems in the interest of the "greater good" (a very utilitarian view), then we will make it easier to justify the use of riskier and riskier systems by the same logic, without going through the other checks and balances, like community consultations and interdisciplinary deliberations, before developing and deploying these systems.
Conclusion
I'll keep the closing remarks relatively short since we've had a chance to walk through a lot of the nuances related to the use of AI in war. The decision to use it, as is the case with the use of AI in any high-stakes scenario, boils down to having adequate information about the capabilities and limitations of the system and then utilizing that information in a deliberate manner to inform our decision-making. Establishing guardrails that can govern our behaviour ahead of time, rather than being reactionary in the face of urgent and emergent circumstances, will also help us give sufficient consideration to the potential advantages and costs of utilizing AI in war. The points raised in this series are just a starting point, and I encourage all my fellow practitioners to spend time diving even further into the ideas from this series so that we can do our part in deploying these technologies in a responsible manner.
Hopefully this series of articles on introducing the ethics of using AI in war has been useful. Please feel free to leave your comments and any resources that you've found useful in the comments section here. Looking forward to a productive conversation!