Part One: The AI Agent Era Begins

A Move to Thinking Slow?

By Joe McKenna

The AI landscape has shifted dramatically over the past two years, and for those of us entangled in the machinery of business decision-making, it can no longer be ignored.

Whether you are an investor, a founder of a fledgling start-up, harbouring ambitions to be one, offering counsel in the boardroom, guiding a product as an owner, or steering the helm of digital innovation, what’s happening in AI is far from a passing novelty. It’s something solid, something that demands our attention.

Yet the question persists: how does one sift the glittering illusion from solid ground?

The answer, of course, lies in the toil of experiment and reason – the tools of those who have seen many a dream float away on the wind. At Eclipse AI, we are not mere heralds; we are the first to step into the unknown. We build, test, and entangle ourselves with these technologies before bringing their lessons to others, grounded in hard-won knowledge, not ephemeral excitement.

Whatever decision we face next, it must be shaped with AI in mind, for it has already woven itself into the fabric of our world, changing its pattern whether we notice or not.

So below is an article, non-patronising, and I hope as enlightening and educational for you as it has been for me. Let its knowledge cure confusion and create an appetite for AI innovation in your mind.

This article is Part 1 (of two) where I discuss:

  • My education (thanks to the tech elves at Eclipse AI) on AI’s shift from prediction to reasoning, ushering in the era of the AI Agent

  • The new wave of "killer apps" that is rising—tools built to lead this charge

  • How crucial the consultant is for the customer

  • How ServiceNow plays a central role in your AI future

Generative AI's Act o1: The Agentic Reasoning Era Begins

Two years into the generative AI revolution, research is progressing the field from “thinking fast” (rapid-fire, pre-trained responses) to “thinking slow” (reasoning at inference time). This evolution is unlocking a new cohort of agentic applications.

In the relentless march of AI, it is crucial that we, especially in the tech sector, educate ourselves. We owe it to both our teams and customers to guide them towards the most effective AI strategies. I’ve committed to immersing myself in this revolution—not to master every detail, but to grasp its strategic impact. After all, investment is finite and must be spent wisely, and wisdom demands understanding. I will leave mastery of the detail to the skilled people at Eclipse AI.

So, let’s take a moment to reflect on where we stand. I turned to a few experts for insight, and here’s how they see it:

With ChatGPT-4, you get a model that responds swiftly. Ask it to draft an email or solve a maths problem, and it will produce an answer based on patterns it has learned. It’s fast, often accurate, but it doesn’t truly "think" in real time. For example, with a complex maths problem, it may provide the correct answer instantly, but without real deliberation. This is why each chat ends with a reminder: it may not always be right.

With ChatGPT-O1, we enter a new era—thinking slow. This model no longer relies solely on past patterns; it pauses, reasons, and tackles problems step by step. Give it a tough maths problem, and instead of rushing an answer, it breaks it down, recalculates if needed, and refines its response. Unlike its predecessors, it has learned to reflect before responding. This newfound ability to reason in real time makes it far more capable with complex tasks, from coding to scientific queries to decision-making.
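
To make the "fast versus slow" distinction concrete, here is a minimal Python sketch using the OpenAI SDK. It assumes an API key in the environment and uses "gpt-4o" and "o1-preview" purely as illustrative stand-ins for a pattern-matching model and a reasoning model; it is not a description of how either model works internally.

    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    prompt = "A train leaves at 09:10 and arrives at 11:45. How long is the journey?"

    # "Thinking fast": a chat model answers in a single pass from learned patterns.
    fast = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(fast.choices[0].message.content)

    # "Thinking slow": a reasoning model spends extra inference-time compute
    # working through the problem before it commits to an answer.
    slow = client.chat.completions.create(
        model="o1-preview",
        messages=[{"role": "user", "content": prompt}],
    )
    print(slow.choices[0].message.content)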

This leap alone shows just how fast everything is shifting, and the foundations are being laid down at an alarming pace. It’s entirely possible that the ship has already sailed on some fronts, but I believe there’s still time, still opportunity—if we shift our approach.

We need to pivot and position ourselves at the bleeding edge of this movement. The tide may have already gone out on the foundations of AI, but there are some waves we can ride.

Strategic Openings: Steering Through the Next Wave of AI – From Foundational Models to Machines That Think

As a businessperson, I’ve realised the profound impact AI will have—not just in the boardroom, but in early-stage ventures and tech companies. This isn’t just about me; it’s a shift affecting everyone, at every level.

The generative AI market is settling, with giants like Microsoft, OpenAI, AWS, Anthropic, Meta, and Google DeepMind emerging as dominant players. While the battle isn’t over, the market is taking shape, and "next-token prediction", the process behind models like ChatGPT, is becoming cheaper and more accessible, widening AI’s reach.
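
For readers who like to see the mechanics, the sketch below shows what "next-token prediction" actually is: a loop that repeatedly asks a model for a probability distribution over the next token and samples from it. It assumes the Hugging Face transformers library and uses the small GPT-2 model purely for illustration.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = "The generative AI market is"
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    for _ in range(20):
        with torch.no_grad():
            logits = model(input_ids).logits             # scores for every token in the vocabulary
        probs = torch.softmax(logits[0, -1], dim=-1)     # distribution over the *next* token only
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token from that distribution
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

    print(tokenizer.decode(input_ids[0]))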

Now, as the LLM market matures, a new frontier emerges: the reasoning layer. Inspired by AlphaGo and grounded in human decision-making studies, this layer brings deliberate, "System 2" thinking to AI. No longer about quick pattern-matching, AI systems are evolving to reason through tasks, solving problems with real thought.

New cognitive architectures are shaping how AI delivers these reasoning abilities, moving us into an era of intelligent agents. With the foundation in place, higher-order reasoning will scale, and a new wave of "killer apps" is rising—tools built to lead this charge. This is where the opportunity lies, for all of us, to harness the next wave of AI progress.

Navigating the Frontier: Leading Founders, Giants, Investors, and Clients Through AI’s Next Great Opportunity

Crucially, we must ask ourselves: what does this mean for AI founders and established software giants? How will this shift reshape the playing field for both the new entrants and those already entrenched in the industry? Where do investors, always eager to identify the next breakthrough, now see the most promising opportunities in the generative AI stack?

These are not abstract musings but urgent concerns for anyone in tech. For founders, the rise of reasoning AI opens the door to solutions beyond mere predictions, tapping into real cognitive power. Incumbent software firms must adapt and integrate these models or risk obsolescence. Investors, meanwhile, are already backing the pioneers who will shape this new phase of AI—where true intelligence, not mere imitation, takes form.

For consultants and clients alike, the key question is clear: how will we navigate this chaotic and complex landscape? As new technologies and architectures emerge rapidly, the consultant's role becomes essential—guiding customers through this bewildering yet exciting terrain, ensuring they not only survive but thrive in the AI revolution.

These questions aren’t just problems—they are opportunities. A chance to rethink investments, break from old habits, and pivot towards new products. It’s a moment to realign strategy, sparking digital transformation for yourself and your clients. More than that, it’s an invitation to carve a new path in this shifting landscape. The doors are open; the time to step through is now.

Strawberry Fields Forever: The Most Important Model Update of 2024

The most significant model update of 2024 comes from OpenAI with "O1," or as it’s affectionately known, "Strawberry." This update is more than just a reassertion of OpenAI’s dominance in the model quality rankings—it represents a genuine shift in architecture. For the first time, we have a model with true general reasoning capabilities, made possible through inference-time compute.

I’ve come to understand that inference-time compute is like the kind of thinking that gives you a real shot at acing an exam—because instead of rushing in, you pause, reason through each answer before committing it to paper. You’re thinking it over, weighing up the question, figuring out the best response.

In contrast, training-time compute is more like rote memorisation. You’ve learned how to write things down, but you’re hoping the exam lines up neatly with what you’ve stored in your head. You don’t really grasp the subject—you just know how it looks.

The problem is, if the question doesn’t match exactly what you’ve memorised, you’re stuck, fumbling. You can see how relying on training-time compute alone leaves you hoping for the best, but unlikely to ace the exam.

Agentic Reasoning: System 2 Thinking Gives Reinforcement Learning New Life

Pre-trained models predict the next token using vast datasets and training-time compute. While basic reasoning emerges with enough data, it’s limited. "Strawberry" changes this by using inference-time compute, allowing the model to pause and reason before responding, leading to more nuanced insights.

AlphaGo’s "stop and think" approach, where the AI simulates future moves, serves as a perfect analogy: the longer it reflects, the better it performs. LLMs like ChatGPT O1 still struggle with open-ended tasks such as creative writing, though they excel in structured areas like coding and maths.

Reinforcement learning is key to Strawberry’s refined reasoning. By backtracking when stuck and reimagining problems, it mirrors human problem-solving, approaching tasks like geometry and programming with fresh perspectives. Its creators are now testing new reward functions to sharpen its cognitive skills, much like training a dog with treats.
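
As a loose illustration of what a "reward function" means here (and emphatically not OpenAI's actual training setup), the toy Python below scores a model's worked solution with 1 when its final answer matches a known result and 0 otherwise; a reinforcement-learning loop would then nudge the model towards the reasoning traces that score well.

    import re

    def extract_final_answer(completion: str) -> str | None:
        """Pull the last number out of a model's worked solution."""
        numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
        return numbers[-1] if numbers else None

    def reward(completion: str, ground_truth: str) -> float:
        """1.0 if the final answer is verifiably correct, else 0.0."""
        return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

    print(reward("Step 1: 12 * 7 = 84. Step 2: 84 + 6 = 90. Answer: 90", "90"))  # 1.0
    print(reward("I think the answer is 91", "90"))                              # 0.0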

Once thought to have reached its limit, reinforcement learning has been reinvigorated. The boundaries of AI reasoning have expanded, signalling the dawn of a new, more advanced layer of intelligence. Read more here.

A New Scaling Law: The Inference Race is On

The O1 revelation brings a striking new insight—a fresh scaling law. We’re familiar with the old rule: the more compute and data you throw into training LLMs, the better they perform. Simple, predictable. But with O1, a new dimension emerges: the more compute you give it at inference time, the better it reasons.
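
One simple, well-known way to "spend" more compute at inference time is to sample several independent solutions and keep the answer most of them agree on (often called self-consistency). The Python sketch below illustrates that general idea only; the model name and prompt are assumptions for illustration, and this is not the mechanism OpenAI uses inside O1.

    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    def solve_with_more_thinking(question: str, samples: int = 8) -> str:
        """More samples means more inference-time compute and, usually, a steadier answer."""
        finals = []
        for _ in range(samples):
            response = client.chat.completions.create(
                model="gpt-4o",   # illustrative model choice
                temperature=1.0,  # encourage diverse reasoning paths
                messages=[{
                    "role": "user",
                    "content": question + "\nThink step by step, then give only the final answer on the last line.",
                }],
            )
            finals.append(response.choices[0].message.content.strip().splitlines()[-1])
        # Majority vote over the final answers: doubling `samples` roughly doubles the compute spent.
        return Counter(finals).most_common(1)[0][0]

    print(solve_with_more_thinking("What is 17 * 24?"))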

Now imagine—what happens when the model doesn’t just think for seconds, but for hours, days, even decades? The possibilities boggle the mind. Could it solve the Riemann Hypothesis? Or perhaps Asimov’s Last Question? (Don’t ask me what this is, but the tech folk in my circle are buzzing with excitement.)

This shift moves us from the era of massive pre-training clusters to the age of inference clouds—where compute expands with the task. The world is shifting, and we are moving with it, even if we don’t yet have all the answers.

ServiceNow & Eclipse AI: From Prediction to Reasoning

With this shift, ServiceNow is positioning itself where it can provide the most value to customers, and it will undoubtedly take the lead in AI agents. At Eclipse AI, we have toiled and experimented with ServiceNow’s Now Assist, a system bolstered by generative AI.

It was a measured pursuit of real value – value that can flow into the arteries of global operations and local businesses alike, making the ordinary run smoother and the complex easier to navigate.

The likes of AI-powered search, automatic summarisation of endless chat logs, and article generation after each issue is resolved – these are not whimsical frills, but essential lifelines for global enterprises and local companies.

With its Xanadu release, ServiceNow illustrates a pivotal shift in AI’s direction, from simple prediction to reasoning, planning, and autonomous action. AI is no longer passive—it now understands context, learns, and acts with purpose.

Gartner Magic Quadrant for AI: The Agentic Era in Action

But let’s not be deceived; we are still standing at the shore of this new ocean. On the horizon are intelligent assistants and agents that will change the way we, as individuals, engage with the world around us. Such tools will not merely support but intertwine with the fabric of leadership and decision-making.

At Eclipse AI, our ServiceNow practice is central to our strategy in this evolving AI landscape. Our vision aligns with ServiceNow’s ambition: to deliver cutting-edge AI technology that offers real value, grounded in a clear and deep understanding of the technology and our customers' needs.

With every new magic, there’s the temptation to wield it everywhere, indiscriminately. Our path is narrower. We are guided by one question: will this truly create value? Each investment in AI must do more than dazzle; it must reduce risk, enhance experience, and lead to real, measurable outcomes.

ServiceNow’s position as a Leader in the 2024 Gartner Magic Quadrant for AI in IT Service Management speaks volumes. It signals to us, and the wider market, that this platform is the key to unlocking the most valuable investments in AI, guiding us towards a future of smarter, more effective IT services and more.

In this new wave of "killer apps", ServiceNow is one of the tools built to lead the charge. This is where the opportunity lies, for all of us, to harness the next wave of AI progress.

Conclusion

The spoils will go to those who remain focused, who see AI not as a fleeting wonder, but as a tool – a tool to solve human problems, built on sound data, ethical frameworks, and the one constant of all business: a relentless focus on those who use and benefit from it. The promise is vast, yes, but only educated and disciplined hands will make it real.

We are ahead of the game in AI, learning for you, so that we can guide you to value. Get in touch and we will be thrilled to help.

Look out for Part 2 of this exciting topic.

[Images generated by Magick.ai from the talented Mark Scott and Mike Bahr]

