My prediction for 2025: Agents will fall short of expectations in most cases, but will provide competitive edge in select, practical implementations

Happy New Year :)

I have a contrarian prediction for 2025:

Agents will fall short of expectations in most cases, but they will provide a competitive edge in select, practical implementations - because people forget the 'autonomous' in autonomous AI agents

I see a lot of optimism and hype around agents - mainly from three sources: vendors, consultancies and startups.

However, I have two reasons for my contrarian view:

1) We run a course on Agentic workflows at the #universityofoxford, where we always take a pragmatic view, and

2) I am actually implementing agents in a large-scale deployment, and we are recruiting.

So I have an incentive to be pragmatic.

To put this in context

Crossing the Chasm is a concept from Geoffrey A. Moore's book of the same name. It describes the challenge of transitioning from early adopters of a technology product to capturing the mainstream market. The "chasm" is the gap between the early adopters and the early majority. Early adopters are willing to take risks on unproven technology for competitive or visionary reasons, while the early majority wants solid proof that the technology delivers consistent, practical value. Many high-tech products fail to "cross the chasm" because they cannot bridge this gap.

The single biggest question to ask is:

Do we need autonomy for this task, and is it viable and safe, especially in light of other options? Forgetting the word 'autonomous' is convenient but impractical.

There is a spectrum of possibilities for a given problem, i.e.

Machine Learning - Deep Learning - LLMs - LLM-assisted non-autonomous agents (e.g. copilots) - and finally, autonomous AI agents
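To make the spectrum concrete, here is a minimal sketch (my own illustration, not an established framework) that treats it as an ordered scale of autonomy, with the pragmatic default being the least autonomous option that solves the problem:

```python
from enum import IntEnum

class Approach(IntEnum):
    """Rough ordering of the spectrum by degree of autonomy (illustrative only)."""
    MACHINE_LEARNING = 1       # classic ML: regression, trees, forecasting
    DEEP_LEARNING = 2          # neural networks for perception / prediction
    LLM = 3                    # single prompt / response, the human drives the loop
    LLM_ASSISTED_COPILOT = 4   # the LLM suggests, the human approves each step
    AUTONOMOUS_AGENT = 5       # the system plans and acts with minimal oversight

def pragmatic_choice(viable_options: list[Approach]) -> Approach:
    """Pick the least autonomous option that can solve the problem."""
    return min(viable_options)

# Example: if a plain LLM and a copilot can both do the job, prefer the plain LLM.
print(pragmatic_choice([Approach.LLM_ASSISTED_COPILOT, Approach.LLM]).name)
```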

By definition, autonomous AI agents are ... autonomous.

I often see this example given for AI agents:

I want to write a report

I engage with an interactive agent

That engages with a 'reasoning' agent

The reasoning agent breaks the task down into sub-tasks and hands them to specialised agents, e.g. an editing agent, a translating agent, etc.

All these agents scurry around, collaborate, and get your task done by implementing components and sharing them with the interactive agent

The fallacy in this scenario is that you could perform the same task using ChatGPT / LLMs directly - in other words, the scenario does not uniquely call for autonomy, and hence does not call for an autonomous AI agent
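To make the fallacy concrete, here is a minimal sketch (illustrative only; the LLM call is a stub, not any particular vendor's API) contrasting the multi-agent pipeline above with a single, well-structured LLM call that produces the same report:

```python
# Illustrative sketch only: call_llm is a stub, not a real vendor API.

def call_llm(prompt: str) -> str:
    """Stand-in for any chat/completions call (hosted or local model)."""
    return f"[LLM output for: {prompt}]"

# --- Multi-agent version: interactive -> reasoning -> specialised agents ---
def reasoning_agent(task: str) -> list[str]:
    # Break the task into sub-tasks (a fixed decomposition, for illustration).
    return [f"Draft a report on {task}",
            f"Edit the draft report on {task} for clarity",
            f"Translate the edited report on {task} into French"]

def specialised_agent(sub_task: str) -> str:
    # Each specialised agent is ultimately just another LLM call with a narrower prompt.
    return call_llm(sub_task)

def interactive_agent(task: str) -> str:
    # Collect the sub-task outputs and hand them back to the user.
    return "\n".join(specialised_agent(s) for s in reasoning_agent(task))

# --- Single-LLM version: one structured prompt, the human stays in the loop ---
def single_llm(task: str) -> str:
    return call_llm(f"Write, edit and translate into French a report on {task}.")

task = "Q1 sales performance"
print(interactive_agent(task))   # multi-agent orchestration
print(single_llm(task))          # same outcome, no autonomy required
```

Nothing in this workflow uniquely requires the agents to plan and act on their own; the decomposition can live inside a prompt, with the human still driving the loop.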

Implementing autonomous AI agents involves several challenges spanning technical, ethical, and operational domains.

Technical Challenges

  • Data Availability and Quality
  • Complex Decision-Making
  • Robustness and Adaptability
  • Real-Time Processing
  • Multi-Agent Collaboration
  • Explainability

Ethical and Social Challenges

Operational Challenges

  • Integration
  • Scalability
  • Maintenance and Updates
  • Security

Regulatory and Legal Challenges

  • Compliance
  • Liability
  • Standardization

Cultural and Human Interaction Challenges

  • Human-Agent Interaction
  • Cultural Sensitivity
  • Resistance to Adoption

With this background

Tasks that are complex enough to justify the use of autonomous AI agents often involve environments where:

  • Decisions must be made quickly and adaptively.
  • Variables are numerous and unpredictable.
  • Human involvement is costly, inefficient, or dangerous.
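As a rough heuristic (my own sketch, with deliberately crude criteria rather than a formal method), these conditions can be turned into a checklist that only recommends an autonomous agent when all three hold and no simpler option will do the job:

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    """Illustrative checklist for a candidate use case (criteria deliberately crude)."""
    needs_fast_adaptive_decisions: bool   # decisions must be made quickly and adaptively
    environment_unpredictable: bool       # variables are numerous and unpredictable
    human_in_loop_costly_or_unsafe: bool  # human involvement is costly, inefficient or dangerous
    simpler_option_sufficient: bool       # could an LLM, a copilot or classic ML do the job?

def recommend_approach(task: TaskProfile) -> str:
    if task.simpler_option_sufficient:
        return "Use the simpler option (LLM, copilot, classic ML or rules)."
    if (task.needs_fast_adaptive_decisions
            and task.environment_unpredictable
            and task.human_in_loop_costly_or_unsafe):
        return "An autonomous AI agent may be justified, subject to safety and compliance review."
    return "Keep a human in the loop; autonomy is not clearly warranted."

# Example: report writing fails the test; real-time fraud triage might pass it.
print(recommend_approach(TaskProfile(False, False, False, True)))
print(recommend_approach(TaskProfile(True, True, True, False)))
```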

Based on these criteria, and in light of the alternatives, there are specialised use cases in financial services, cybersecurity, logistics and, in future, robotics.

However, most of the currently proposed use cases will struggle because

  1. The business model is unclear
  2. Alternatives exist - many from LLMs themselves
  3. Autonomy itself introduces risk

Hence

Agents will fall short of expectations in most cases, but they will provide a competitive edge in select, practical implementations - because people forget the 'autonomous' in autonomous AI agents

If you are interested in AI agents, see our course on Agentic workflows at the #universityofoxford, where we always take a pragmatic view.

I am also implementing agents in a large-scale deployment, and we are recruiting.

Happy New Year!

Tania Peitzker

AI investor, technologist, consultant, Emerging Tech educator & writer. Now living in the #fediverse! Reach out to me via Mastodon, SKRED, ginlo, WIMI, kMeet, Olvid, Pleroma, mave, Jet-Stream & SomniumSpace

2 months ago

Fascinating, Ajit! Indeed #agentic workflows have been long established - with varying levels of success - through the multitude of legacy #chatbots & their #naturlanguageprocessing integrations over 2-3 decades now. For instance, many "semi #autonomous" bots were built into ERPs and CRMs (thinking #SAP, #oracle, #microsoft & other B2B SaaS solutions). I have written about the "evolution" of such Conversational AI interfaces in my 2020 book of case studies, published by Business Expert Press in New York: #usesandrisksofbusinesschatbots. Its 2025 sequel is The Climate Impacts of AI and the Internet: Future-Proofing Ourselves and Emerging Technologies. BTW, could I interview you for my next book on LLMs, Gen AI, Next Gen/Emerging Tech & the climate crisis? Am working on this research in #switzerland #germany #california, #usa, the #DACH & #APAC regions, so the #europeanunion, #eea markets as well as in the #UK. Am in #oxford next week if you had time to record a meeting in person. Will direct message you about this. Happy New Year, best wishes, Dr Tania Peitzker, Adj. Prof. USV.

Dave Duggal

Founder and CEO @EnterpriseWeb

2 months ago

Ajit Jaokar - Agree with your points and thesis, but digging deeper - isn't the fundamental problem that Agentic AI inherits all the weaknesses of LLMs (inaccurate, inconsistent, inexplicable, insecure, high latency, expensive, energy and resource consumption). Leaning on an LLM for reasoning and planning seems ill-conceived. Consider that in many use cases, classic AI, ML and RL, as well as good old-fashioned analytics and rules, will provide faster, better, cheaper results. While LLMs complement and extend capabilities, it takes a big leap and a lot of hubris to suggest it is a new intelligence/logic layer. My prediction is that Neuro-Symbolic approaches will grab the spotlight for providing practical, enterprise-grade solutions for agent-based automation (safe, secure, performant, scalable, cost and energy efficient). Thanks for the post. Would love to hear your thoughts. Happy New Year!

Totally agree. The possibility of agent jacking is also to be considered, especially in sensitive control areas, and agent XAI is not easy to mature.

Adipta Roy, PMP

Delivery Leader Accountable for P&L, Delivery, Account growth, SLAs and staffing

2 months ago

Happy New Year, Ajit Jaokar. Nice write-up. However, I would like to know about the contract-building part between two or more agents. Please share your views.

Sankar Krishnan

Data | AI | Agentic AI | Practice @ DXC.Tech

2 months ago

Not to worry. Pretty much everyone knows this. "Agents" is a sound bite that is just a few months old now, and along with it have come Agentic AI, AgentOps, Agentic Mesh and whatever else. We cannot call "the king naked" lest we be considered to have no clue. Nobody even knows how a contract between two agents works, forget doing a complex workflow autonomously. A lot of standards need to emerge, and even definitions need to settle (what exactly is autonomous, and what is the role of the human in an autonomous system, like L1-L5 in cars) and so on. 2025 will also be a year of a lot of talk, with significant developments in these areas. And maybe in 2026, we will implement something.
