Why AI Agents Aren't Agents

Copyright 2025 Kurt Cagle/The Cagle Report

One of the big stories in 2024 was that "2025 Would Be The Year of Agentic AI". Lots of ink was spilt, many TED conferences were hosted, and analyst reports appeared galore claiming that AI was now replacing the lowly programmer and that agents were the next big thing.

Eh, not quite. First, agents are not exactly new. Back in the 1980s, when I first encountered systems and cybernetic theory (which was rooted in the 1960s), agents were a hot topic and a big part of the discussion about emergent behaviours. Take a flock of birds, for instance. Each bird acts as an independent entity, grubbing for worms, resting, hopping about, and cawing to defend its territory. However, in many species, when some external event occurs (a bird gets spooked, for instance), the flock takes off, coordinating its movements seemingly magically so that the birds don't collide with one another in mid-air.

What was so fascinating about this is that the collaboration did not take place at a central level - no one bird was telling every other bird what to do. Instead, each bird would adjust its behaviour based on the actions of the birds in its immediate proximity, using comparatively simple algorithms (if your neighbour at your 2 o'clock moves, then you move the same way; if you have no neighbour at your 2 o'clock, continue flying straight). Each bird was relatively autonomous, but it changed its behaviour when specific events occurred. The birds were agents, and the collective of agents was known as a swarm, flock, or hive.
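The two local rules just described can be sketched in a few lines. This is a minimal illustration, not a faithful boids implementation: each bird looks only at neighbours within a fixed radius and nudges its heading toward their average, with no central coordinator anywhere. The names (`Bird`, `local_steer`) and the radius are my own choices for the sketch.

```python
import math
from dataclasses import dataclass

@dataclass
class Bird:
    x: float
    y: float
    heading: float  # radians

def local_steer(bird: Bird, flock: list, radius: float = 5.0) -> float:
    """Return a new heading based only on nearby neighbours.
    No bird ever sees the whole flock; coordination is purely local."""
    neighbours = [
        other for other in flock
        if other is not bird
        and math.hypot(other.x - bird.x, other.y - bird.y) < radius
    ]
    if not neighbours:
        return bird.heading  # no neighbour at your 2 o'clock: fly straight
    # circular mean of the neighbours' headings
    avg = math.atan2(
        sum(math.sin(n.heading) for n in neighbours),
        sum(math.cos(n.heading) for n in neighbours),
    )
    # nudge partway toward the local average rather than snapping to it
    return bird.heading + 0.5 * (avg - bird.heading)
```

Run this once per bird per tick and flock-level coordination emerges, even though each call touches only local state - which is exactly the cybernetic point.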

This cybernetic definition of agent was very straightforward:

  • Autonomous. The agent performed certain actions independently of those around it.
  • Event Driven. An agent would initiate an action when a certain event occurred and would continue performing that action until another event happened to change that action.
  • Stateful. The agent retained a current state, regardless of its mode of interaction, and that current state was consistent and independent (an NPC in a game that is killed remains dead for the next PC that encounters it).
  • Contextual. The actions of each agent are dictated in part by the external state (context), which also implies that agents have the means to detect their external context (senses).
  • Distributed. An agent system is perforce distributed - an agent may talk periodically to its mothership, but it can't guarantee that connection is always available and should consequently be able to act within local networks.

Contemporary game design is a perfect example of cybernetic agents in action. When you play The Sims (or really ANY contemporary RPG or simulation), you are working with an agentic system.

This is part of the reason that so many game designers are pissed at the current AI hype: to hear it told, AI invented agents. It didn't; the industry appropriated the term because it sounded cool. What it "invented" was a glorified form of web services, which are essentially threaded asynchronous services over a web protocol.

There are two different kinds of AI programs out there - custom GPTs and services, and ironically, the first is (marginally) more agentic than the second (despite the second being called Agentic).

In a CustomGPT, you can set a particular set of conditions that, when fulfilled, execute a command. For instance,

> Any time the user uploads a JSON file, read that file, parse it, and for every entry in that file that starts with the term "Character", create a character based upon the data for that entry.

This has at least some of the hallmarks of an agentic system - an event is defined (a user uploads the JSON file), there is some degree of statefulness (this condition persists until it is overridden), and it is contextual (the action is aware of other changes in the session). It isn't really autonomous, however, because it can only take action when prompted.
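Made explicit, the CustomGPT instruction above is just an event handler bound to an upload event. The sketch below assumes a hypothetical `create_character` callback supplied by the caller; nothing here is an actual CustomGPT API, it simply shows the rule's shape as code.

```python
import json

def on_file_upload(filename: str, content: str, create_character) -> list:
    """Handle an upload event: if it's JSON, build a character for
    every top-level entry whose key starts with 'Character'."""
    if not filename.endswith(".json"):
        return []  # event doesn't match the trigger condition
    data = json.loads(content)
    return [
        create_character(value)
        for key, value in data.items()
        if key.startswith("Character")
    ]
```

Note that the function only ever runs when something calls it - which is precisely the autonomy gap: the rule persists, but nothing fires without an external prompt.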

Moreover, the agent (in effect, the session) does not by itself retain a state outside of the boundaries of the session, and the sessions perforce time out after a certain point. Additionally, the AI retains memory (session state) in its context, and once that context is overrun, older imperatives or contextual information gets lost—the state decays.

Now, this does not mean that a session object (in effect, the quasi-agent) cannot persist that information in some other form in a knowledge graph and then reconstitute it later, but this is outside the scope of the AI itself - it is, in effect, using external web service threads to act as its memory or perform specific actions (such as initiating a transaction).

Most of what is called AI Agents today are, in effect, orchestrated service calls to AI endpoints mixed with non-AI calls. An AI may generate the specific orchestration, but that orchestration is still executed by an external (non-AI) orchestrator service that handles the asynchronous call management and ultimately packages it as a synchronous response.
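This orchestration pattern is easy to see in miniature. In the sketch below, the two "endpoints" are stand-ins I made up (real systems would call an LLM API and a database); the point is that the intelligence, if any, is in the endpoints, while the "agent" is really the orchestrator fanning out asynchronous calls and repackaging them as one synchronous-looking answer.

```python
import asyncio

# Hypothetical stand-ins for an AI endpoint and a non-AI service call.
async def call_llm(prompt: str) -> str:
    await asyncio.sleep(0)  # stands in for network latency
    return f"summary of: {prompt}"

async def call_database(query: str) -> str:
    await asyncio.sleep(0)
    return f"rows for: {query}"

async def orchestrate(task: str) -> str:
    """The 'agent' is really this function: it fans out async calls,
    some to AI endpoints and some not, then packages the results
    into a single synchronous response."""
    rows, summary = await asyncio.gather(
        call_database(task),
        call_llm(task),
    )
    return f"{summary} | {rows}"

result = asyncio.run(orchestrate("quarterly sales"))
```

Nothing in `orchestrate` is stateful, event-driven, or autonomous: it runs once, returns, and ceases to exist - which is the argument in a nutshell.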

Please note: there is nothing wrong with this. It is still an incredibly powerful paradigm, and if you trust the AI to generate valid content through all stages of the process, you can create very powerful distributed programs this way (in AWS these are called Lambdas, and they've become the bedrock of Amazon's systems). However, these are NOT cybernetic agents.

So, what would a true agentic system look like in an AI context? Again, let's go back to the above working definition:

  • Autonomous. A true agent would have an identity (a URL or IRI) that remains consistent over the lifespan of that agent, which would be comparatively long.
  • Event Driven. A true agent would be event-driven based upon both internal and external state changes. Is it 4 pm? Retrieve the mail. 5 pm? Walk the dog. Did a package that was needed for a project arrive? Start the project if there are enough pieces to begin; otherwise, wait. OpenAI recently announced a first step in that direction, but it still seems kludgy.
  • Stateful. A true agent retains its state from one event to the next. This is a big ask for a centralized system because when an agent goes quiescent, it saves all of its state until the next event (a form of encapsulation). This takes a lot of storage (especially for a conversational AI), and is one reason why swarms of agents tend to be fairly simple. In contemporary Agentic AI terms, most of the storage maintenance is still in the hands of the programmer (it's outside the remit of any given thread of conversation), which is a big reason why programmers will not disappear anytime soon.
  • Contextual. This one is pretty complex, because it means that it has to be able to identify, contact, and get at least some state from other AI Agents. This can be done with some form of pub/sub event system, where one agent subscribes to the state of another agent and then receives a notification back when the relevant state changes are made. Of course, this means that the agent has to periodically wake up and check its own messages, which again implies long-term existence.
  • Distributed. This is a side effect of autonomy. A true agent should be able to get local external state from an immediate network of other agents. Ideally, such an agent is, in fact, not controlled by a mothership but only gets higher-priority messages from it.
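Putting three of those properties together - stable identity, state that survives quiescence, and event-driven behaviour via pub/sub - gives something like the sketch below. The class name, the JSON-file persistence, and the event names are all illustrative assumptions; a real implementation would persist to a knowledge graph or database and subscribe over a network protocol.

```python
import json
from pathlib import Path

class Agent:
    """Sketch of a cybernetic agent: an IRI identity that outlives any
    one session, state encapsulated to disk on quiescence, and
    handlers fired by subscribed events."""

    def __init__(self, iri: str, store: Path):
        self.iri = iri  # identity persists across the agent's lifespan
        self.store = store
        # reconstitute prior state, if any, on waking up
        self.state = json.loads(store.read_text()) if store.exists() else {}
        self.handlers = {}

    def on(self, event: str, handler):
        """Subscribe a handler to a named event (a minimal pub/sub)."""
        self.handlers.setdefault(event, []).append(handler)

    def notify(self, event: str, payload):
        """Fire handlers for an event, then encapsulate state before
        going quiescent again."""
        for handler in self.handlers.get(event, []):
            handler(self, payload)
        self._persist()

    def _persist(self):
        self.store.write_text(json.dumps(self.state))
```

An agent constructed later with the same IRI and store wakes up with the earlier state intact - the property that today's session-bound "agents" lack.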

There are several implications of this, one being that a true agentic system only occurs when you have a local agent running on a local system and a remote agent on a different system. Moreover, the agents within a local network can assume (or elect) different roles (including coordinator or manager) in response to exigent conditions.

A second consequence of this is the capability of such agents to participate in multi-user conversations. Most current AI agents are passive in such conversations - they monitor the conversation from multiple participants but generally do not act unless certain very specific commands are initiated as part of the stream, and they go out of existence once a given session is ended.

An actual agent, on the other hand, would have persistence beyond the session, would, in effect, grow in terms of its knowledge based upon prior conversations, and would be able to manage various artefacts produced in sessions across sessions. This is certainly doable, but not based on today's working definition of AI Agents.

Final Thoughts

What I think will need to happen at some point is for game designers and AI producers to spend some time talking with one another, and for the AI Tech Bros to actually listen to what the game designers and developers have to say rather than just practise performative posturing. They might learn a thing or two.

In Media Res,


Bots! They never do what you tell them to.

Kurt Cagle

Editor, The Cagle Report

If you want to shoot the breeze or have a cup of virtual coffee, I have a Calendly account at https://calendly.com/theCagleReport. I am available for consulting and full-time work as an ontologist, AI/Knowledge Graph guru, and coffee maker.


Yup, it does help! Coffee ain't cheap.

I've created a Ko-fi account for voluntary contributions, either one-time or ongoing. If you find value in what I write, whether articles like this, technical pieces, or just general thoughts about work in the 21st century, I ask that you contribute something to keep me afloat so I can continue writing.


