Brilliant Yet Blind: The Missing Wisdom in AI Agents
Julian Seidenberg (PhD)
Head of Artificial Intelligence at Datch Ltd and Narrative Ltd
Sam is a planner at a large electricity distribution company. His company has just deployed an AI Agent with an IQ of 130 that promises to let him plan twice as many jobs in the same amount of time. Sam is very excited, because the limiting factor for getting more essential maintenance work done is the lack of skilled planners. If these AI Agents work as promised, there will be fewer outages, and work crews will be more efficient.
Sam does a test and asks the Agent to build a job pack for a simple maintenance work order he needs to plan. The Agent goes away for a minute, then comes back with a folder full of documents and a detailed summary of the work to be done. Brilliant!
Next, he tries it on a slightly more complex work order for upgrading a transformer. The Agent goes away and … huh? It attached documents for the old transformer being removed from service, not the new transformer due to be installed. The summary of work is an overly long, flowery essay instead of bullet points; no one is going to read that. And it included nothing about safety. Doesn't the Agent know that the last time someone worked on that pole, they nearly impaled themselves on the nearby tree branches? Frustrated, Sam concludes that AI Agents are too stupid to be useful.
Why Wasn’t the Agent Helpful?
The world is abuzz with the promise of AI Agents. The Large Language Models (LLMs) powering them are smarter than the average human, are experts in every domain of human knowledge, and use internal chain-of-thought techniques to do complex reasoning before taking any action. The promise is that soon we will have Artificial General Intelligence (AGI), and then AI Agents will be able to do all the jobs in the world, while humans can sit back, relax, and collect their universal basic income.
But there appears to be a disconnect between the promise and reality. What is this disconnect?
Intelligence vs. Wisdom
Intelligence: one’s ability to think logically, memorize, and analyze information.
Wisdom: one’s willpower, common sense, judgment, perception, and intuition.
AI has incredible intelligence but a distinct lack of wisdom. LLMs can do many things, but they are fundamentally still next-word prediction machines. Humans, by contrast, have deep context about the physical world, learned through experience, and that context manifests as wisdom. For this reason, the power of AI is best paired with humans who can use their perceptive abilities to guide, focus, and temper the AI's raw intelligence. The future of AI is therefore likely one of augmentation rather than outright replacement.
Blindness Without Context
To make the right decisions, an AI needs to understand what is happening in the physical world with a deep level of insight and nuance. Even the smartest AI in the world will make dumb decisions without the correct context. That context can take the form of historic records of work, pictures as visual references, detailed documentation, and lists of staff expertise. All of these disparate sources of information need to be mapped, structured, and usefully integrated into the AI's context, as the sketch below illustrates.
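As a rough illustration of what that integration might look like, here is a minimal Python sketch. Everything in it is hypothetical: the ContextBundle structure, the canned data, and the prompt layout are stand-ins for real integrations with asset, document, and HR systems, not any particular product's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    """Structured context handed to the agent alongside the work order."""
    work_history: list[str] = field(default_factory=list)    # past jobs on this asset
    hazards: list[str] = field(default_factory=list)         # known site dangers
    documents: list[str] = field(default_factory=list)       # manuals, standards, drawings
    staff_expertise: list[str] = field(default_factory=list) # who is qualified for what

def build_context(asset_id: str) -> ContextBundle:
    # In a real system each lookup would query an EAM system, a document
    # store, an HR system, etc. Canned data keeps the sketch runnable.
    return ContextBundle(
        work_history=["2023-04: replaced cross-arm on pole P-1042"],
        hazards=["Overhanging tree branches within 1 m of pole P-1042"],
        documents=["new_transformer_install_manual.pdf"],
        staff_expertise=["J. Rivera: certified for live-line transformer work"],
    )

def render_prompt(work_order: str, ctx: ContextBundle) -> str:
    """Flatten the structured context into the text the LLM actually sees."""
    return "\n\n".join([
        f"WORK ORDER:\n{work_order}",
        "PRIOR WORK ON THIS ASSET:\n" + "\n".join(ctx.work_history),
        "KNOWN HAZARDS:\n" + "\n".join(ctx.hazards),
        "REFERENCE DOCUMENTS:\n" + "\n".join(ctx.documents),
        "AVAILABLE EXPERTISE:\n" + "\n".join(ctx.staff_expertise),
    ])

print(render_prompt("Upgrade transformer on pole P-1042", build_context("P-1042")))
```

The point is the shape, not the specifics: the agent sees hazards and work history alongside the order itself, which is exactly the context Sam's Agent was missing.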
Autonomy vs. Repeatability Trade-Off
Business processes need to be repeatable. Frontline workers often shadow more experienced colleagues for years to learn the nuances of how to execute a given task repeatably. Human workers can also use their common-sense knowledge to adapt to new, unforeseen circumstances.
This ability to autonomously adapt to new situations is a key aspect of an AI Agent; without it, the Agent is just a glorified batch job or robotic process automation (RPA). However, making the correct decision in such circumstances requires learned experience, and that experience is often tacit knowledge in workers' heads, which makes it very difficult for an AI to acquire.
An AI Agent, without the benefit of years of on-the-job training, will complete a complex task in a different way every time it attempts it. This is a fundamental trade-off: the more autonomous an AI Agent is, the less repeatable it is. It is therefore critical to design AI Agents with sufficient guardrails to manage this trade-off.
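One way to add such guardrails, sketched below under assumed names (call_agent here is a stub that returns canned JSON rather than calling a real model), is to force every run through the same deterministic checks: require a fixed output schema, retry with explicit feedback when a check fails, and escalate to a human after a bounded number of attempts.

```python
import json

REQUIRED_KEYS = {"summary_bullets", "attached_documents", "safety_notes"}
MAX_ATTEMPTS = 3

def call_agent(work_order: str, feedback: str = "") -> str:
    # Stub for a real LLM call; returns fixed JSON so the sketch runs
    # without an API key. A real agent would vary from run to run.
    return json.dumps({
        "summary_bullets": ["Isolate feeder", "Swap transformer at pole P-1042"],
        "attached_documents": ["new_transformer_install_manual.pdf"],
        "safety_notes": ["Trim overhanging branches before climbing"],
    })

def validate(plan: dict) -> list[str]:
    """Deterministic checks every job pack must pass, however the agent
    happened to reason on this particular run."""
    missing = REQUIRED_KEYS - plan.keys()
    if missing:
        return [f"missing sections: {sorted(missing)}"]
    if not plan["safety_notes"]:
        return ["no safety notes for a field job"]
    return []

def plan_job(work_order: str) -> dict:
    feedback = ""
    for _ in range(MAX_ATTEMPTS):
        plan = json.loads(call_agent(work_order, feedback))
        problems = validate(plan)
        if not problems:
            return plan                     # guardrails satisfied
        feedback = "; ".join(problems)      # retry with explicit feedback
    raise RuntimeError(f"Escalate to a human planner: {feedback}")

print(plan_job("Upgrade transformer on pole P-1042"))
```

The agent stays free to reason autonomously inside the loop, but what leaves the loop is constrained to a repeatable shape.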
Datch’s Approach
Successfully integrating AI Agents into the workforce will take a lot of smart engineering. Companies like Datch are designing AI Agents carefully to have the knowledge of an experienced worker, the context of all of a company’s sources of information, and a human-in-the-loop workflow to ensure human wisdom can intervene at the right moments to guide the AI back onto the right track.
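This article does not describe Datch's implementation, but a human-in-the-loop checkpoint can be as simple as the generic sketch below: low-risk steps proceed automatically, while high-risk ones pause until a person approves. The step names and risk labels are invented for illustration.

```python
def human_checkpoint(step: str, action: dict) -> bool:
    """Pause and ask a person to approve or reject the proposed action.
    A production system would route this through a task queue or UI;
    input() keeps the sketch self-contained."""
    print(f"Agent proposes at step '{step}': {action}")
    return input("Approve? [y/n] ").strip().lower() == "y"

def run_with_oversight(steps: list[tuple[str, dict]]) -> None:
    for name, action in steps:
        if action.get("risk") == "high" and not human_checkpoint(name, action):
            print(f"Step '{name}' rejected; returning control to the planner.")
            return
        print(f"Executing step '{name}'.")

run_with_oversight([
    ("attach_documents", {"risk": "low"}),
    ("finalize_job_pack", {"risk": "high"}),  # human wisdom intervenes here
])
```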
This kind of AI Agent can be an asset in the workforce. It can augment human work, increase quality and efficiency, and lead us to a better future.
Interested to learn more? Get in touch with Datch.