The Opportunities and Risks of Agentic AI

Agentic AI is the buzzword floating around tech circles, hailed as the next big leap. You might be hearing about it from consultants or vendors, who are quick to emphasize its potential to revolutionize industries like banking by squeezing efficiency and cost savings out of investments in large language models. But here's where things get interesting and a little risky.

For those of us following the evolution of AI, it's hard not to be intrigued. Agentic AI is a step beyond the generative AI we've grown familiar with. In its simplest form, this technology allows AI to act autonomously, making decisions and executing tasks with minimal human input. "Agentic" simply means acting as an agent: the AI isn't just responding to prompts, it's taking action based on them.

Until now, most banks implementing generative AI have kept a human in the loop—reviewing the AI’s output for errors or hallucinations (a fun term for when the AI makes things up), tweaking the code, or having customer service agents screen AI recommendations before taking action. But agentic AI shifts this dynamic, potentially removing that safety net.

Recently, some companies have reached a comfort level where they let AI make decisions independently, without human oversight. While no U.S. bank has yet publicly admitted to using agentic AI in full production, the wheels are in motion: there are already funded projects aimed at this very shift.

What is Agentic AI?

Agentic AI operates like a brain behind the scenes. You might hear it referred to as a "foundation model operating system" or, more simply, an "AI agent." It’s capable of parsing a user’s prompt, breaking it into tasks, and executing those tasks. Imagine a customer facing a financial emergency, needing to move $1,000 but lacking enough funds in their checking account. Instead of asking a human banker for help, an AI agent could automatically check savings, search for a credit line, or assess other options. All of this is done without a person giving explicit instructions.
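
To make that concrete, here's a rough sketch of the plan-and-execute loop in Python. Everything in it is hypothetical: the helper functions, the hard-coded balances, and the plan itself stand in for the LLM planner and banking integrations a real agent would rely on.

```python
# Minimal sketch of an agentic plan-and-execute loop (all names hypothetical).
# A real system would have an LLM produce the plan; here it is hard-coded.

def check_savings(amount: float) -> dict:
    """Pretend to look up the savings balance and see if it covers the shortfall."""
    savings_balance = 1500.00  # stand-in for a core-banking lookup
    return {"option": "transfer_from_savings", "feasible": savings_balance >= amount}

def find_credit_line(amount: float) -> dict:
    """Pretend to check whether an existing credit line could cover the amount."""
    available_credit = 800.00  # stand-in for a credit-system lookup
    return {"option": "draw_on_credit_line", "feasible": available_credit >= amount}

def plan(prompt: str) -> list:
    """Stand-in for the LLM planner: break the request into candidate tasks."""
    return [check_savings, find_credit_line]

def run_agent(prompt: str, amount: float) -> dict:
    """Execute each planned task and act on the first feasible option."""
    for task in plan(prompt):
        result = task(amount)
        if result["feasible"]:
            return result  # an autonomous agent would execute this option here
    return {"option": "escalate_to_human", "feasible": False}

print(run_agent("I need to move $1,000 but my checking account is short", 1000.00))
```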

What sets this technology apart from more traditional automation tools, such as business-process automation, is that it understands and acts on natural language, a difference made possible by large language models.

Real-World Applications

We’re seeing practical examples, like NetXD's "Edge AI" platform. Imagine asking Edge AI how much interest you've earned across all accounts. The system, connected to your accounts via APIs, could respond with detailed information, maybe even suggest higher-yield accounts, and (with your permission) open a new account for you in seconds. It might even automate monthly bill payments and balance splits, managing your finances without needing constant input from you.
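
Here's a toy illustration of what that API-backed aggregation could look like. The function names, account fields, and rates are assumptions for the sake of the sketch, not NetXD's actual interface.

```python
# Hypothetical sketch of answering "how much interest have I earned across all accounts?"
# fetch_accounts() and the account fields are assumptions, not a real banking API.

def fetch_accounts() -> list:
    """Stand-in for API calls to each linked account."""
    return [
        {"name": "Checking", "interest_ytd": 4.12, "apy": 0.01},
        {"name": "Savings", "interest_ytd": 61.50, "apy": 1.50},
    ]

def total_interest(accounts: list) -> float:
    """Sum year-to-date interest across every linked account."""
    return sum(a["interest_ytd"] for a in accounts)

def suggest_higher_yield(accounts: list, market_apy: float = 4.25) -> list:
    """Flag accounts earning well below a (hypothetical) market rate."""
    return [a["name"] for a in accounts if a["apy"] < market_apy]

accounts = fetch_accounts()
print(f"Interest earned year to date: ${total_interest(accounts):.2f}")
print("Could earn more in a higher-yield account:", suggest_higher_yield(accounts))

# Opening a new account would be a separate, permissioned action:
user_consented = False  # the agent should only proceed if the customer explicitly agrees
if user_consented:
    pass  # e.g., call a (hypothetical) open_account() endpoint
```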

This kind of autonomy could completely disrupt the banking sector, and I believe that within two or three years, banks will be scrambling to implement this technology.

But as with any groundbreaking innovation, there are concerns. Agentic AI has the potential to transform personal finance by automating complex decisions about where and how to optimize customer funds, with the goal of improving both user experience and financial outcomes. Still, it's early days: the fintech Bud has spent the last year just getting the technology to work consistently.

There’s a reason why many banks are keeping quiet about their trials with agentic AI—it comes with significant risks. Letting AI handle tasks independently opens the door to all kinds of errors. Imagine an AI tasked with high-frequency trading: it could cause a flash crash or unintentionally manipulate markets. Worse, malicious actors could use the same tools to engage in market manipulation or cyberattacks.

There is another potential issue: AI agents acting on behalf of companies might all receive the same information simultaneously, triggering large-scale reactions like bank runs. If multiple firms use the same AI to manage their treasury operations, a single signal could prompt all of them to withdraw funds at once. That's not an exaggeration; it's a risk we need to take seriously.

Even simple actions like paying bills could lead to unintended consequences, such as overdrawing accounts or incurring fees. And while customers or employees may be asked to confirm steps, we all know how easy it is to mindlessly click 'OK' on our devices. This complacency could cause problems to slip through the cracks.

Despite the risks, it's hard not to be excited about where agentic AI could take us. We’re not quite there yet, at least not in banking. But some companies are already deploying similar systems in healthcare and working on generative AI transformation programs in global investment banks.

The challenge, particularly in regulated industries like finance, is ensuring transparency. If AI makes decisions, those decisions can’t be locked in a "black box." Regulators need to know why an AI agent acted the way it did. But this challenge doesn’t mean agentic AI is impossible -just that it will take time.
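
One practical way to keep decisions out of the black box is to have the agent write a structured audit record for every action it takes, capturing what it saw and why it acted. The sketch below shows the idea; the field names are my own assumptions, not any regulatory standard.

```python
# Minimal sketch of an auditable decision record (field names are assumptions).
import json
from datetime import datetime, timezone

def record_decision(agent_id: str, action: str, inputs: dict, rationale: str) -> str:
    """Capture what the agent did, what it saw, and why, in a reviewable form."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "rationale": rationale,  # e.g., the model's stated reasoning or the rule that fired
    }
    line = json.dumps(entry)
    # A production system would append this to tamper-evident storage; here we just print it.
    print(line)
    return line

record_decision(
    agent_id="treasury-agent-01",
    action="transfer_from_savings",
    inputs={"requested_amount": 1000.00, "savings_balance": 1500.00},
    rationale="Savings balance covers the shortfall; no credit draw needed.",
)
```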

People often compare agentic AI to self-driving cars. We’re building toward fully autonomous systems, but we're not ready to let go of the wheel just yet. For now, humans will stay in the loop, monitoring, reviewing, and occasionally correcting course. In the next five years, we may see the technology improve to a level where we can trust it more, but fully autonomous AI in finance? It’s still a ways off.

Ultimately, banks that are looking to dive into agentic AI will need to ensure they've laid the right groundwork: robust security, well-defined processes, and intelligent kill switches in case things go awry. Much like driving on a winding road, you’ll want strong brakes and lane control before you let the AI take the wheel.
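
A kill switch doesn't have to be exotic. At its simplest, it's a wrapper that checks limits before any action and halts the agent the moment a threshold is breached. The limits and exception below are illustrative assumptions, a sketch of the pattern rather than a production control.

```python
# Hypothetical kill-switch wrapper: halt the agent when limits are breached.

class KillSwitchTripped(Exception):
    """Raised when the agent must stop and hand control back to a human."""

class GuardedAgent:
    def __init__(self, max_single_transfer: float = 5000.00, max_daily_total: float = 10000.00):
        self.max_single_transfer = max_single_transfer
        self.max_daily_total = max_daily_total
        self.daily_total = 0.0
        self.halted = False

    def execute_transfer(self, amount: float) -> str:
        if self.halted:
            raise KillSwitchTripped("Agent already halted; human review required.")
        if amount > self.max_single_transfer or self.daily_total + amount > self.max_daily_total:
            self.halted = True  # flip the kill switch before anything moves
            raise KillSwitchTripped(f"Transfer of ${amount:.2f} exceeds configured limits.")
        self.daily_total += amount
        return f"Transferred ${amount:.2f} (daily total ${self.daily_total:.2f})"

agent = GuardedAgent()
print(agent.execute_transfer(1000.00))   # within limits, proceeds
try:
    agent.execute_transfer(9500.00)      # breaches the limits, trips the switch
except KillSwitchTripped as exc:
    print("Kill switch:", exc)
```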

Rachin Ahuja

Business Development Manager at E42.ai

1 month ago

The balance between the transformative potential of agentic AI and its associated risks is crucial for organizations to navigate. While the ability of these AI agents to enhance efficiency and decision-making is promising, the challenges of transparency, accountability, and ethical considerations cannot be overlooked. At E42.ai, we believe that a thoughtful approach to integrating agentic AI, with robust governance frameworks, will be essential for maximizing benefits while minimizing risks. Looking forward to seeing how this technology evolves. https://bityl.co/SIsw

Joseph Neumeyer

Atlas Light Company - internet 4.0

1 month ago

This is a thoughtful post, thank you. Yeah, AI systems - especially unattended - are a recipe for disaster if they aren't structured well. But like any technology, containment, security, and system mechanics are what make or break the tech. Nuclear reactors have safety measures, cars have safety systems, AI isn't so different.
