The New Art of War – AI changes the game
Technology is advancing at a staggering pace, and it is having a profound effect on the way militaries operate.
However, military supremacy won’t be achieved through the interplay of advanced capabilities alone; it will be achieved through the ability to capture and synthesise content to make better decisions, faster than an adversary.
The art of war is therefore knowing more than your opponent and employing intelligent systems to provide a calculative advantage – strategically, operationally and tactically.
By intelligent systems, I mean those powered by Artificial Intelligence (AI) which use the entire information ecosystem to make assumptions, test them, learn autonomously, predict events and provide recommendations.
A host of new challenges
Greater chaos, complexity and competition in today’s environment mean that militaries will increasingly test other actors and compete in the ‘grey areas’ – the space between war and peace, in which militaries deploy innovative technologies that depart from traditional conventions and create new, non-linear and less predictable effects.
Given these challenges, the allied community has begun investing more in the development of AI. These initiatives aim to enhance operational advantages on and off the battlefield, improve the accuracy and confidence of decisions, and enhance certainty over outcomes – the idea being that AI may tame the nature of war.
Project Maven, led by the US Department of Defense, involves the deployment of AI to process drone footage in the fight against terrorism. Now in its second year, the project is yielding impressive results, demonstrating that AI is rapidly maturing as a powerful technology with seemingly limitless applications. It has proven its ability to automate routine tasks while also augmenting human capacity with new insight.
The Royal Australian Air Force’s own experimental capability, the Loyal Wingman, is another great example. Semi-autonomous drones are intended to provide a force multiplier by projecting power forward and keeping manned platforms safe. While still under development, the Loyal Wingman is planned to carry AI-powered capabilities that self-calibrate to support intelligence, surveillance, reconnaissance and electronic warfare missions.
With great power comes great responsibility – the crux of the issue
AI has raised many concerns, particularly around ethics and legality. For AI to become responsibly integrated into force posture, it needs to be accompanied by the right measures and safeguards.
For this reason, Explainable AI (XAI) has become a popular topic in 2019. XAI is defined as the ability of AI to be transparent, interpretable and explainable. The intent of XAI is to produce more explainable AI models while maintaining a high level of performance. It also aims to enable human users to understand, appropriately trust, and effectively manage the emerging generation of AI partners. This premise is outlined in Technology Vision 2019.
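To make the idea of interpretability concrete, here is a toy sketch of one of the simplest forms an explainable model can take: a linear score whose output decomposes into per-feature contributions, so a human can see exactly why the model produced a given answer. The feature names and weights are entirely hypothetical, chosen purely for illustration – this is not any real defence system or Accenture model.

```python
# Toy sketch: a linear scoring model whose output can be broken down
# into per-feature contributions. The transparency comes from the
# additive structure: score = bias + sum of (weight x feature value).

def explain_linear(weights, bias, features):
    """Return the overall score plus each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical threat-scoring weights and observations (illustrative only).
weights = {"speed": 0.5, "heading_change": 1.2, "signal_strength": 0.8}
features = {"speed": 2.0, "heading_change": 1.0, "signal_strength": 0.5}

score, contributions = explain_linear(weights, bias=0.1, features=features)
print(f"score = {score:.2f}")
for name, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {c:+.2f}")
```

Real XAI research targets far more complex models than this, but the goal is the same: an attribution a human operator can inspect, challenge and trust.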
It goes without saying that this territory is largely uncharted.
New intelligent systems will impact the future operating environment, and it’s our responsibility to make sure they are accompanied by strong frameworks – especially when decisions over human life are involved. Prudence is required so that our servant doesn’t become our executioner.
Taking a step forward
At Accenture we’re developing AI which is transparent in its strengths and weaknesses and can provide confidence in its future behaviour. We’re producing more explainable and contextualised models and are committed to fostering innovation that drives the development of 5th generation capabilities for Australia and its allies.
The new art of war is about harnessing the entire information ecosystem to help military commanders apply the right solution to the right problem at the right time, in real time. Because intelligent systems that can win in the decision space will likely prevail in the battlespace.
I believe this will be a game changer. If new intelligent systems stand up to the most rigorous scrutiny, they will allow militaries to prevent conflicts when possible, keep others winnable when not, and as technology alters the strategic calculus, make better decisions in the ‘grey areas’.
A bumpy road ahead
We haven’t yet found a silver bullet for managing, let alone integrating, the vast amounts of sensor data from legacy platforms into new intelligent systems. This will be a key challenge moving forward. Personally, though, I look forward to the challenges ahead, especially in achieving mission-focused outcomes for our clients Down Under and across the Five Eyes.
I’m keen to hear your views on AI and its applications, so please leave a comment.
Senior Research Fellow, Royal United Services Institute
The idea of XAI is an interesting one and could get at one of the crucial enablers of AI exploitation: how we generate trust between human and non-human actors. Without that trust, the potential of AI is unlikely to be realised, as its wings are likely to be clipped. Whilst clearly the power relationship is different, we have experience of human and non-human trust being generated with animals. It would be interesting to see what lessons we could learn from the experience of animal (and human) training in trust building between human and artificial intelligence.
Security Engineer at NTT Data Australia
Hi Adam, thanks for sharing your thoughts on how AI will change the art of war. I have my own thoughts on this from a technical implementation perspective, having previously taught a class on artificial intelligence at university and used a form of AI during my PhD research many years ago.

Thinking about how AI technology has progressed recently: around eleven years ago (when I was doing my PhD), AI as a technology already existed, but its implementation was limited by the inability to process data at large scale. Around 4-5 years ago the scalability issue was solved through the emergence of big data analytic platforms, such as Hadoop. These platforms made it easier to implement data processing at scale and began to incorporate machine learning algorithms, which gave machines the ability to learn from large amounts of data.

The main challenge I see in meeting the characteristics you describe for “explainable AI” is coming up with an AI model or an approach that allows the knowledge learned by the AI to be explained and represented to a human. As mentioned in the linked article on Explainable AI, basically all machine learning algorithms are a “black box” to the user, with no way for the user to determine how the algorithm developed a model of the input data and came to provide the results seen. I think part of the solution for explainable AI will involve some form of knowledge representation that makes the semantics of the learned model understandable to humans. Given that humans best gain new knowledge through words and pictures, I think the ideal solution will come through some form or combination of natural language representation and data visualisation.

I’m not completely across the latest research on making the learned model of a machine learning algorithm representable to a human, so it will be interesting to know whether there have been breakthroughs in this area.
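The natural-language-representation idea in the comment above can be sketched very simply: once a model’s decision can be reduced to an explicit rule, that rule can be rendered as a sentence. The rule, feature names and threshold below are hypothetical, purely to show the representation step, not any particular system or research result.

```python
# Toy sketch: rendering an explicit learned decision rule as a
# natural-language explanation a human can read. The mapping from
# comparison operators to words is the "knowledge representation" step.

def rule_to_sentence(feature, op, threshold, outcome):
    """Turn a single threshold rule into an English explanation."""
    op_words = {">": "exceeds", "<": "is below", ">=": "is at least"}
    return (f"The input was classified as '{outcome}' because "
            f"{feature} {op_words[op]} {threshold}.")

print(rule_to_sentence("heading_change", ">", 0.9, "evasive"))
# → The input was classified as 'evasive' because heading_change exceeds 0.9.
```

Extracting such rules from a deep model faithfully is the hard research problem; the rendering itself, as shown, is the easy part.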
Visiting Fellow Griffith Asia Institute, Associate Fellow Royal United Services Institute (RUSI)
Much debate on this topic rests on the assumption that capture data + synthesise content = better decisions. Is that always true for every type of problem? For logistic support, equipment monitoring and the like, where boundaries are sharply defined, OK. However, for operational tasks on a confusing battlespace – where what is important is unknown, constantly changing, of uncertain dimension and seen for the first time – is the assumption true?