How much to supervise AI agents?

AI agents are systems for taking actions. Unlike chatbots, they use large language models to orchestrate complex problem-solving activities involving planning, reasoning, and even interactions with other AI agents. Think of them as highly efficient virtual co-workers. Billions of them will soon join the workforce and change the output of companies. The purpose of an AI agent is to get something done, such as processing an employee’s leave request, engaging a consumer with personalized recommendations as a sales assistant, or identifying the best candidates for a pharmaceutical executive to recruit for a clinical trial.

AI agents can become a control layer around all kinds of transactions, replacing many of the complex interfaces and workflows that characterize enterprise software platforms. Their simplicity and usefulness will directly challenge both traditional software-as-a-service vendors and technology leaders unprepared for their rapid proliferation.

How much freedom or autonomy should you give AI agents? If we always put humans in the loop, we are unlikely to attain the true benefits of AI transformation. The right level of autonomy depends on the risks involved and on how well we understand those risks. Too much autonomy, and brand, reputation, customer relationships, and even financial stability are at risk. One approach is to wait until a general regulatory and commercial consensus on agentic AI emerges. Alternatively, for those bold enough, uncertainty itself can be used as a decision-making tool for deciding what to do next.

Agent autonomy is an intricate and difficult problem. Impose excessive supervision and you lose your productivity gains. Yet in many cases, supervision is precisely what is needed to avoid disaster. Since the emergence of generative AI, there have been enough examples of lax governance and control to make leaders wary, from an auto dealer chatbot offering a car for a dollar to an airline bot that hallucinated policies that did not exist. To avoid that problem, organizations are building AI agents that can leverage internal systems and data. That is a double-edged sword. Agents may be less likely to make things up if they rely on internal systems and data. But as they become more trusted, they will also have a growing influence over life-altering actions such as approving loans, protecting critical infrastructure from cyberattacks, and hiring or firing staff.

A straightforward solution to the AI safety problem is to put a human in the loop for any decision with serious consequences. Curiously, such an approach can lead to perverse outcomes. Think about Waymo, formerly the Google self-driving car project, which provides autonomous taxi services. It is hard to imagine a risk more significant than a machine carrying people at high speed and potentially making snap decisions that impact other human lives. Yet no human supervisor can meaningfully review a steering or braking decision that must be made in milliseconds, and programming for every eventuality on the road is both impractical and ethically challenging.

New technologies typically arrive together with ethical challenges. You might be familiar with the classic philosophical dilemmas inspired by such difficult trade-offs, known as “trolley problems”. The original version involves a runaway trolley headed toward five people tied to the tracks. The driver’s choices are to do nothing, in which case all five people will be killed, or to divert the trolley, which would kill one person who is standing in the way. Trolley problems, which challenge us to choose between two undesirable outcomes, are likely to become more commonplace in the modern world, not just for self-driving cars but also in other areas such as healthcare. Moral dilemmas become even more complex when the decision is delegated to a robotaxi rather than a human decision-maker.

Three types of problems when determining agent autonomy

1. Complicated problems

Complicated problems can be detailed and challenging to manage. However, they can also be fully defined and documented, which makes them ideal candidates for high autonomy with minimal supervision. They suit rule-based, deterministic systems such as robotic process automation. Once set up and running, these systems can be left alone and checked occasionally to confirm they are still operating within acceptable parameters.
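To make that concrete, here is a minimal sketch of the pattern: a deterministic rule set handling a well-defined task, plus an occasional health check. The invoice-routing scenario, names, and thresholds are all hypothetical illustrations, not a reference to any particular RPA product.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    amount: float
    vendor_verified: bool

def route_invoice(inv: Invoice) -> str:
    """Deterministic rules: the same input always yields the same decision."""
    if not inv.vendor_verified:
        return "hold_for_verification"
    if inv.amount <= 10_000:
        return "auto_approve"
    return "manager_queue"

def within_parameters(auto_approved: int, total: int, max_rate: float = 0.9) -> bool:
    """The occasional check: is the auto-approval rate still within bounds?"""
    return total == 0 or auto_approved / total <= max_rate

print(route_invoice(Invoice(amount=4_200.0, vendor_verified=True)))  # auto_approve
print(within_parameters(auto_approved=95, total=100))                # False: investigate
```

Because the rules are explicit, the system makes no judgment calls at run time, and supervision reduces to the periodic parameter check.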

2. Ambiguous problems

Ambiguous problems can have many variables with indeterminate values, and that makes simple automation difficult. However, since the variables are largely known, you can improve your ability to predict outcomes and make the right decision as you gather more data. A self-driving car navigating an unexpected obstacle on the road is an example of an ambiguous problem that can be clarified with more context and information. Other examples range from identifying fraudulent transactions and answering questions from drug regulators to assigning the right financial advisor to a banking customer based on transactional and behavioral data. AI agents won’t always get it right, but they learn fast. Humans can help AI agents improve with feedback, freeing us to do more valuable things with our time.
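A minimal sketch of that feedback loop, assuming a hypothetical fraud-screening agent: when the model is confident it acts alone, and when it is not, it escalates to a person and records the human’s call as feedback. The threshold and all names are illustrative assumptions, not any specific framework’s API.

```python
from typing import Callable

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune as error data accumulates
feedback_log: list[tuple[dict, str, str]] = []  # (case, agent_call, human_call)

def screen_transaction(
    case: dict,
    model: Callable[[dict], tuple[str, float]],  # returns (decision, confidence)
    ask_human: Callable[[dict], str],
) -> str:
    """Act autonomously when confident; escalate and learn when not."""
    decision, confidence = model(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return decision  # known variables, high confidence: agent decides
    human_decision = ask_human(case)  # indeterminate values: a person decides
    feedback_log.append((case, decision, human_decision))  # data for retraining
    return human_decision

# Stubbed usage: a toy model and reviewer stand in for real components.
verdict = screen_transaction(
    {"amount": 9_999},
    model=lambda c: ("flag_as_fraud", 0.62),
    ask_human=lambda c: "approve",
)
print(verdict, feedback_log)
```

The design choice worth noting is that every escalation produces labeled data, so the share of cases needing a human should fall over time.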

3. Uncertain problems

Uncertain problems are the most challenging because they are difficult to define. More data does not help, because what you lack is knowledge of the domain itself. Examples include a pandemic with no established treatment protocols or reliable detection tests, the fragility of global supply chains during events like a ship blocking the Suez Canal or attacks on vessels in the Red Sea, and even endemic issues such as poverty, climate change, or homelessness. It is dangerous to grant high autonomy to AI agents in the face of such uncertainty because there is little to nothing in their training data to prepare them to make good decisions. Humans are better prepared to tackle these kinds of issues, with their adaptability, originality, and resilience when things go sideways.
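One way to put the three categories to work, sketched below under the assumption that problems can be classified up front, is an explicit policy table tying each problem type to a supervision level. The labels and policy descriptions are hypothetical, illustrating the shape of such a policy rather than prescribing one.

```python
from enum import Enum

class ProblemType(Enum):
    COMPLICATED = "complicated"  # defined and documented
    AMBIGUOUS = "ambiguous"      # known variables, indeterminate values
    UNCERTAIN = "uncertain"      # the domain itself is poorly understood

# Hypothetical mapping from problem type to supervision level.
AUTONOMY_POLICY = {
    ProblemType.COMPLICATED: "high autonomy, periodic audits",
    ProblemType.AMBIGUOUS: "supervised, escalate on low confidence",
    ProblemType.UNCERTAIN: "human-led, agent assists only",
}

def supervision_for(problem: ProblemType) -> str:
    return AUTONOMY_POLICY[problem]

print(supervision_for(ProblemType.UNCERTAIN))  # human-led, agent assists only
```

Making the policy explicit also makes it auditable, which matters more than the specific levels chosen.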

The future of AI agents will depend on our ability to build trustworthy systems that can make and execute decisions at a massive scale, whether those decisions come from humans or machines. For technologists, that will require thinking about AI governance in broader terms than implementing technical guardrails.

To sum it up, as Nvidia CEO Jensen Huang said:

“In a lot of ways, the IT department of every company is going to be the HR department of AI agents in the future.”

Designing an effective AI agent is not so different from becoming a better leader. Making good decisions is important, but not as valuable as being able to think about decision-making at an organizational scale. Rather than focusing on the outcomes of specific judgment calls, we need to improve the overall process of evaluating and executing decisions. This approach, which shifts accountability from individual decision-makers to those who design and manage AI-powered systems, has the potential to spark a cultural shift within organizations — far beyond mere process improvement.


