AI-enabled Autonomy in the Battlefield: Difficult questions that deserve attention
Amir Husain
Founder: SparkCognition, SkyGrid, Navigate | Author: The Sentient Machine, Gen AI for Leaders, Hyperwar | Board: UT Austin PAIB & CS, WorldQuant Predictive, SpecFive, Global Venture Bridge
Sydney Freedberg expressed his views on autonomous military technologies in InsideDefense.com after interviewing Gen. Allen and me as a follow-up to our recent publication, "On Hyperwar". Since its appearance in the U.S. Naval Institute Proceedings journal, the ideas we expressed in "Hyperwar" have triggered quite a reaction, both online and off. While I don't completely agree with all the characterizations in Sydney's article, it is well-written, thought-provoking and focuses attention on an important debate - a debate we need more of, in greater depth, venturing far beyond the simplistic portrayals and hackneyed phrases so commonly associated with these questions.
The truth is, AI is coming to the battlefield. Multiple countries are on this path, and game theory explains the inevitability of such developments. Unilaterally disavowing further development will not prevent the arrival of this technology, nor, sadly, will such disavowals be trusted by other nations and organizations.
Rather than turning a blind eye to what is coming, or pretending that a human will always be in the loop, it might be best to roll up our sleeves and get to work developing technologies and frameworks that allow for the safer use of AI. If things turn kinetic, can autonomous weapon systems actually be used to minimize collateral damage? Can such systems afford options short of nuclear annihilation or massive aerial campaigns that impact non-combatants in brutal ways? Can AI make the battlefield LESS deadly? These are difficult questions, and while there are those who argue for blanket bans, I simply don't believe blanket bans can work; the history of weapons proliferation bears witness. The path I have personally chosen is to contribute as much as I can to enabling safe and explainable autonomy. In collaboration with Gen. John Allen (USMC ret.), Prof. Bruce Porter (two-time Chair of UT Austin CS) and many other leading scientists, strategists and technologists, we aim to be part of a meaningful conversation about these difficult topics, while redoubling our efforts and amplifying our investments toward the development of frameworks that enable explainability and safe autonomy in AI systems.