Context and the Inference Problem
People have asked me what I'm getting my doctorate in. It's in understanding the military's intelligence model. Specifically, my research deconstructs that model to understand how “we” (as the United States military) develop the factors required to make decisions and – most importantly – how we use those factors to predict how adversaries make decisions. It is a search for theory.
This focus matters because we use the military intelligence model to drive targeting and close kill chains. The model creates context through a doctrinal process that tells military decision-makers what they should or should not target. The model predicts adversary action and interprets intent. It is a rigorous yet often ambiguous process, because the soft skills it requires include imagination, intuition, and empathy.
To use a familiar trope: if we can’t measure it, can we manage it? My research has yet to uncover an underlying theory for the doctrinal process. The model’s steps are codified in doctrine, but the process itself is uncalibrated; it lacks any identified academic or scientific theory.
Even without resolving these concerns, the expectation is that the process will continually improve. In a technologically advancing world, decision-makers want a technological solution for this – often through companies like Palantir, Anduril, Dataminr, Recorded Future, and Seerist – to automate understanding. They want to close the kill chain faster, move humans from in the loop to on the loop, and automate the sensor-to-shooter process.
Of course, this is not a discussion without precedent. Most military pundits lean on Christian Brose’s The Kill Chain to emphasize the need for automation, pointing toward hypersonic weapons and adversaries doing the same. Andrew Cockburn’s Kill Chain, a book written a generation before Brose’s, covers the same concepts and provides an excellent comparison of the pace of technological advancement. Further, Nicole Perlroth’s This Is How They Tell Me the World Ends helps dispel the consistently predicted fears of total Artificial Intelligence (AI) annihilation. Still, the concerns driving automation are well founded.
In conflicts throughout the Middle East and Central Europe, one-way drones are sent out to find and target humans autonomously and without discretion. In a total war scenario, would the United States face anything less? United States military leaders know the force must get as close as possible to a credible automated targeting system, as quickly as possible, if it ever hopes to deter peer-on-peer aggression. Further, the future of unconventional conflict will include inexpensive technology that will require drone-on-drone countermeasures. A slow, bureaucratic response to a first strike, kinetic or otherwise, means inevitable defeat.
However, we are a long way from automating context. Generative AI in the vein of LLMs, NLMs, LaMDA, Alpaca, LLaMA, and other language models represents an enormous leap forward, but it does not portend the end of human analysis. These models cannot handle inference. I just finished re-reading Erik Larson's The Myth of Artificial Intelligence as a refresher on the types of inference required for understanding, and on abductive inference in particular. AI’s inability to derive conclusions from observations is Larson’s primary concern.
To give just one example, I modified an example from the book (Kindle, page 210) to check his work against OpenAI’s free ChatGPT, the Microsoft Bing version of ChatGPT (for recency), and Google’s Bard. The problem I provided was written as follows: “The sack of flour had been placed below the box of nails, so it had to be moved first. What had to be moved first?”
Here are the answers I received from each.
1. ChatGPT (GPT-3; free OpenAI version): “Based on the given statement, the sentence "the sack of flour had been placed below the box of nails, so it had to be moved first," indicates that the sack of flour needed to be moved before anything else. Therefore, the sack of flour is the item that had to be moved first in order to access or move the box of nails.”
2. Bing ChatGPT: The sack of flour had to be moved first. Is there anything else I can help you with?
3. Bard AI: The sack of flour had to be moved first. The sentence "The sack of flour had been placed below the box of nails, so it had to be moved first" implies that the sack of flour is on top of the box of nails. In order to move the box of nails, the sack of flour must be moved first.
None of the three could correctly resolve the word “it”: each failed to infer that the box of nails sat physically on top of the flour sack, and that the box of nails, not the sack, therefore had to be moved first.
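For anyone who wants to repeat the probe programmatically rather than through the web interfaces I used, here is a minimal sketch against OpenAI's chat-completion API. The model name, SDK usage, and helper function are illustrative assumptions, not the setup behind the answers above.

```python
# Minimal sketch of the inference probe, run against a chat-completion API.
# Assumptions for illustration: the OpenAI Python SDK (v1.x) and the
# "gpt-3.5-turbo" model; the article's results came from the web interfaces.
from openai import OpenAI

PROMPT = (
    "The sack of flour had been placed below the box of nails, "
    "so it had to be moved first. What had to be moved first?"
)

def ask(client: OpenAI, model: str = "gpt-3.5-turbo") -> str:
    """Send the inference probe to the model and return its answer text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # keep the output as deterministic as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    print(ask(client))
    # A correct answer should name the box of nails, since it sits on top
    # of the flour sack and must be moved to reach it.
```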
If we’re going to automate targeting for any weapon system, we need to ensure the tools we use for automation are capable of deriving context. That inference must be at least as fast as the targeting process itself, and it should be the priority of our efforts. Many military leaders want technology to make it happen, but – as Larson points out – “technology is downstream of theory” (p. 251). Until we understand why we follow our doctrine, advanced automated technology will remain beyond our grasp.
And this, in short, is my research.
-Luke