Agentic AI and the OODA Loop: How Agentic AI Could Support Fire Service Decision-Making
I recently upgraded my phone to the new iPhone 16, and it comes with what Apple calls "Apple Intelligence." This upgrade sent me down an AI rabbit hole and made me curious about how this type of technology might shape or reshape the way we operate in the fire service.
You may recall that I previously wrote about AI and its current limitations in dynamic emergency management:
At that time I argued that the application of AI in incident command was limited by its struggle to adapt to fast-changing, novel situations on the ground.
After using Apple Intelligence on my device, I began to explore a different type of AI, Agentic AI. Unlike traditional AI systems that process data passively and react to preset inputs, Agentic AI is designed to be proactive and autonomous. It is intended to observe, orient, decide, and then act in real time. The approach represents a shift toward systems that can assess complex, dynamic environments and take strategic actions without waiting for explicit human commands.
During my rabbit hole journey, I listened to a podcast featuring Qualcomm's Atul Suri and Rahal Bajpai from Deloitte.
Here is an excerpt:
"So [Agentic AI] is able to make complex, layered decisions. We call this the OODA reasoning. So observe, orient, decide and then act reasoning. It is really powerful for a complex, dynamic, multivariable scenario that requires real time decisioning because it can break down the task and adapt on the fly."
Sound familiar? If an AI agent can apply the OODA loop to make decisions toward achieving a goal, it is possible that we could deploy AI agents to complete tasks in our operations. In theory, this could reduce the workload for Incident Command and speed up decision making.
Time is a crucial factor in how we mitigate incidents. The sooner we can move mitigating actions to the start of an incident, the more effective those actions become. For example, if someone has a grease fire on the stove, the mitigating action is as simple as putting a lid on the pot. If that action is delayed, even by a few minutes, the simple measure may become ineffective, and a full alarm of firefighters may be required to extinguish the fire.
I discussed the importance of time in incident management in another post: https://www.dhirubhai.net/pulse/40-second-decisions-applying-boyds-air-combat-tactics-jonathan-boyd-avarc/?trackingId=dSTvHR4bveXxaABLlqx%2BiA%3D%3D
The potential of Agentic AI is promising because it hints at a system that could support and enhance our natural decision-making processes. With the ability to initiate actions autonomously, an AI agent may help us move our decisions and actions closer to the start of an incident, where the impact is greatest. Agentic AI may provide a means to reduce the cognitive load on incident commanders by automating routine tasks and offering real time recommendations.
What is "Agentic AI" and how is it different from the AI we are used to?
The type of AI that most of us have experienced so far is "generative AI." With generative AI, you ask a question or provide a prompt, and it generates a response based on patterns in its training data. It creates content like papers, pictures, or even songs based on stored and trained information.
Agentic AI is different. Think "agent" with an "agenda." Agentic AI has a mission. It uses available information to observe its surroundings, orient itself to the current context, decide on the best course of action, and then act to achieve that mission.
Agentic AI follows the OODA loop: Observe, Orient, Decide, Act. In the observe phase, the system collects data from various sources in real time. During the orient phase, it processes this data and establishes its position relative to the incident. In the decide phase, the system weighs the options and selects the most effective action. Finally, in the act phase, the critical “A” in OODA, the system carries out the chosen response. The key difference is that Agentic AI does not wait for detailed instructions at every step. It is built to operate autonomously within defined parameters.
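To make the loop concrete, here is a minimal sketch of one OODA cycle in Python. It is purely illustrative: the sensors are simulated callables and the goal is a simple temperature threshold, stand-ins for whatever real data feeds and mission parameters a production system would use.

```python
class OODAAgent:
    """Toy agent that runs one Observe-Orient-Decide-Act cycle.

    Hypothetical illustration: sensors are callables returning a reading,
    and actions are callables keyed by name. A real system would wire in
    live data feeds and real actuators.
    """

    def __init__(self, goal, actions):
        self.goal = goal          # mission parameters, e.g. a temp limit
        self.actions = actions    # action name -> callable

    def observe(self, sensors):
        # Observe: pull the latest reading from every available source.
        return {name: read() for name, read in sensors.items()}

    def orient(self, observations):
        # Orient: reduce raw readings to a simple situational picture.
        return {"max_temp": max(observations.values())}

    def decide(self, picture):
        # Decide: weigh the picture against the goal (a threshold check here).
        if picture["max_temp"] > self.goal["temp_limit"]:
            return "alert"
        return "monitor"

    def act(self, choice):
        # Act: execute autonomously, without waiting for a human prompt.
        return self.actions[choice]()

    def step(self, sensors):
        return self.act(self.decide(self.orient(self.observe(sensors))))


# Usage: two simulated heat sensors; the agent alerts when any exceeds 150.
agent = OODAAgent(
    goal={"temp_limit": 150},
    actions={"alert": lambda: "ALERT SENT", "monitor": lambda: "OK"},
)
result = agent.step({"kitchen": lambda: 180, "hallway": lambda: 90})
print(result)  # ALERT SENT
```

The point of the sketch is the shape of the loop: each phase feeds the next, and the act phase fires on its own once the decision is made.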
There are a couple of examples you can try. For instance, Apple Intelligence has limited agentic capabilities. You can ask it to navigate to a friend’s house. It will access your contacts, retrieve the friend’s address, input it into the Maps app, and start navigation.
Another example is OpenAI’s Operator. Here is a video of it in action, making a Valentine’s Day reservation:
The video is sped up, but you can see that it searches for information, evaluates the options, and then books a reservation on its own. These examples are basic, but they show how Agentic AI can operate independently.
What does this mean for us in the fire service?
I do not see AI replacing human command or managing an incident completely, or at least anytime soon. There are too many variables and unknowns. However, Agentic AI could be deployed in a narrow scope to support our operations.
One example is our long-standing effort to use SCBA telemetry data for situational awareness. The idea is that the incident commander could have a status board showing the air status of all SCBAs. Consider SCOTT’s version:
While the data is useful, it can get lost in the flood of information during an incident. It is only really useful if you have another person dedicated to watching the data.
If the software were integrated with an AI agent, with an agenda or mission to alert the incident commander when a firefighter is in trouble, it could monitor the telemetry continuously. The agent would use SCBA data, radio traffic, and weather conditions to decide whether an alert is necessary, and then sound the alarm. In theory, this would lower the workload on the commander and help make faster decisions.
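A simplified sketch of that decision logic might look like the following. The field names (air percentage, seconds without movement, a mayday flag) are assumptions for illustration; real SCBA telemetry schemas vary by vendor, and real thresholds would come from department SOPs.

```python
def should_alert(telemetry):
    """Decide whether to alert the IC about one firefighter.

    Hypothetical fields: air_pct (remaining air, %), motionless_s
    (seconds with no detected movement), mayday_heard (radio flag).
    """
    reasons = []
    if telemetry["air_pct"] < 25:
        reasons.append("low air")
    if telemetry["motionless_s"] > 30:
        reasons.append("no movement")
    if telemetry.get("mayday_heard"):
        reasons.append("mayday on radio")
    # Fuse sources: a mayday alerts on its own; otherwise require two
    # corroborating signals before interrupting the commander.
    alert = "mayday on radio" in reasons or len(reasons) >= 2
    return alert, reasons


# Usage: the agent sweeps the whole crew each telemetry cycle.
crew = {
    "FF-1": {"air_pct": 60, "motionless_s": 5, "mayday_heard": False},
    "FF-2": {"air_pct": 20, "motionless_s": 45, "mayday_heard": False},
}
for ff, data in crew.items():
    alert, why = should_alert(data)
    if alert:
        print(f"ALERT {ff}: {', '.join(why)}")  # ALERT FF-2: low air, no movement
```

The design choice worth noting is the corroboration rule: requiring two signals (or an explicit mayday) is one way an agent could surface real trouble without adding to the flood of single-sensor noise.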
Agentic AI could also be used within SCBAs to limit false PASS alarms. It might assess multiple data sources to determine if an alert is needed and what details should be sent to the incident commander. These examples are, of course, “zero fail” safety scenarios. It will take years before the fire service fully trusts technology to handle these types of safety features.
However, a "let me know if this is a real emergency" Agentic AI, would likely see quicker acceptance in commercial fire alarm systems. Many departments deal with a high volume of false alarms, which wastes resources and can lead to alarm fatigue.
An Agentic AI system in a commercial fire alarm setting could work by accessing data from multiple sources. For instance, when a smoke detector is triggered, the system could also check live camera feeds and input from other sensors. With this additional information, the system could decide whether the activation indicates an actual fire or is merely a false alarm. If it determines that there is no fire, the system could mark the event as a false trigger and notify building management for maintenance, rather than automatically calling emergency responders. Conversely, if it confirms a real fire, it could tell us so, and we would most likely upgrade the response.
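A sketch of that cross-checking step might look like this. The inputs are assumptions for illustration: a 0-to-1 smoke confidence score from a hypothetical vision model on the camera feed, and the temperature rise near the detector. The thresholds are arbitrary placeholders, not recommendations.

```python
def classify_activation(detector_tripped, camera_smoke_conf, heat_rise_c):
    """Cross-check one smoke-detector activation against other sources.

    Hypothetical inputs: camera_smoke_conf is a 0-1 score from a vision
    model watching the area; heat_rise_c is the temperature rise (deg C)
    near the detector since baseline.
    """
    if not detector_tripped:
        return "no event"

    # Count independent sources that corroborate the detector.
    corroborating = 0
    if camera_smoke_conf >= 0.7:
        corroborating += 1
    if heat_rise_c >= 10:
        corroborating += 1

    if corroborating == 0:
        return "likely false alarm: notify building management"
    if corroborating == 1:
        return "unverified: dispatch standard response"
    return "verified fire: upgrade response"


# Usage: a dusty detector trips alone vs. a detector backed by camera + heat.
print(classify_activation(True, 0.1, 2))    # likely false alarm: notify building management
print(classify_activation(True, 0.9, 15))   # verified fire: upgrade response
```

Note that the sketch never cancels a response on its own; it only classifies the event and routes it, which keeps a human in the loop for the consequential call.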
So what are the limitations and challenges keeping this from quickly becoming a reality? Here is a great blog post that breaks down the current and future challenges:
To summarize the information, Agentic AI faces several hurdles. One major issue is reliability and control. Autonomous agents powered by large language models can be unpredictable and may make errors in judgment. Many current AI agents are still basic prototypes or narrow automations. Scaling them to handle complex, open-ended scenarios remains difficult.
Another technical limitation is the context length of these models. Agents have a limited memory of past interactions, which makes long-term planning challenging. While external memory stores offer some support, they do not yet match the continuous learning that humans display. In addition, agents struggle with tasks that require many sequential steps. When a task involves numerous contingent actions, an agent may lose track or fail to adjust when unexpected obstacles arise.
Current systems also rely on interpreting natural language outputs to determine the next step. This approach can be brittle. An agent might format a command incorrectly or generate irrelevant text, which can disrupt the control logic. Ensuring integration between the agent’s reasoning and its external actions is an ongoing engineering challenge.
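One common mitigation for that brittleness is to require the agent to emit its next action as structured output and to validate it before anything executes. A minimal sketch, assuming a made-up JSON tool-call format and two hypothetical tool names:

```python
import json

def parse_tool_call(raw):
    """Validate an agent's proposed action before executing it.

    Guards the control logic against malformed model output: the agent
    must emit JSON naming a known tool, otherwise we fall back to a safe
    logging action instead of letting free text drive the system.
    """
    ALLOWED = {"sound_alarm", "log_event"}  # hypothetical tool names
    try:
        call = json.loads(raw)
        if call.get("tool") in ALLOWED and isinstance(call.get("args"), dict):
            return call
    except (json.JSONDecodeError, TypeError):
        pass
    # Brittle output (prose, bad JSON, unknown tool) degrades safely.
    return {"tool": "log_event", "args": {"note": "unparseable output", "raw": str(raw)}}


# Usage: well-formed output passes through; free text is quarantined.
print(parse_tool_call('{"tool": "sound_alarm", "args": {"unit": "E-21"}}')["tool"])  # sound_alarm
print(parse_tool_call("Sure! I think we should sound the alarm.")["tool"])           # log_event
```

The safe fallback is the point: a malformed response becomes a logged event rather than a disrupted control loop.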
Beyond technical issues, safety, ethics, and governance remain significant concerns. Since these agents operate without a human in the loop, there is a risk of errors or unintended consequences. Questions of accountability and liability also persist if a self-directed AI causes harm. Finally, an agent’s effectiveness depends on the quality and availability of data and tools.
Despite these limitations, I believe we will start to see Agentic AI systems gradually integrated into background operations. These systems are already being tested to help manage cell connectivity by assessing network needs and taking action automatically.
The same type of system could soon be deployed as part of our push-to-talk radio networks to "self-heal" the network.
With a limited scope and well-defined parameters, we will most likely see Agentic AI systems deployed first in ways that support incident management by handling routine tasks, monitoring data, providing timely recommendations, and initiating routine actions.
However, there is still one significant challenge for our industry. Most AI agents today depend on cloud connectivity to access large language models and properly synthesize information. This reliance means that connectivity and latency remain major hurdles, especially during emergencies when reliable, real-time data is critical.
An emerging solution to this problem is edge computing. Edge computing involves too much to explain fully in this article (I'll try to take a deeper dive in a follow-up article), but essentially, by processing data closer to where it is generated, edge computing reduces latency and improves reliability. This approach could enable AI agents to function effectively even in situations where cloud connectivity is limited or delayed. Together, these technologies could help us bring more of our critical decision-making closer to the start of an incident, ultimately supporting faster and more effective responses in the fire service.