Agents and Workflows: Real Applications to Understand When to Use What

1. What are Agents and Workflows?

The term "agent" can be interpreted in different ways. Some use it to describe a fully autonomous system capable of operating independently over extended periods, utilizing various tools to complete complex tasks. Others use it to refer to more structured implementations that adhere to predefined workflows.

Both can be categorized as agentic systems, but the major difference between workflows and agents is as follows:

  • Workflows?are systems where LLMs and tools are orchestrated through predefined code paths.
  • Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

2. When should we use Agents?

Agentic systems often trade latency and cost for better task performance, and we should consider when this tradeoff makes sense.

When more complexity is warranted, workflows offer predictability and consistency for well-defined tasks, whereas agents are the better option when flexibility and model-driven decision-making are needed at scale.

3. Basic Agentic System

The basic building block of agentic systems is an LLM enhanced with augmentations such as retrieval, tools, and memory. Our current models can actively use these capabilities—generating their own search queries, selecting appropriate tools, and determining what information to retain.
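The augmented LLM described above can be sketched as a thin wrapper that bundles retrieval, tools, and memory around a model call. The class below is a minimal illustration, not a real implementation: the retrieval, the tool, and the generation step are all stubbed placeholders (any real system would back them with a vector store, tool schemas, and a model API).

```python
class AugmentedLLM:
    """Sketch of an LLM augmented with retrieval, tools, and memory.

    Every capability is stubbed so the example runs standalone; a real
    implementation would back each method with a model or API call.
    """

    def __init__(self):
        self.memory = []  # information the model decides to retain
        self.tools = {"search": lambda q: f"results for {q}"}  # hypothetical tool

    def retrieve(self, query: str) -> str:
        # Placeholder retrieval; a real system would query a vector store.
        return f"docs about {query}"

    def ask(self, query: str) -> str:
        context = self.retrieve(query)      # model-generated search, stubbed
        self.memory.append(query)           # decide what to retain (here: everything)
        # Placeholder generation; a real call would go to a model API.
        return f"answer to '{query}' using {context}"
```

The point of the sketch is the shape: the model, not the surrounding code, drives when to search, which tool to pick, and what to remember.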

Figure 1: The augmented LLM

Source: https://www.anthropic.com/research/building-effective-agents

4. Workflow-Agent Framework

The following figure presents scenario-based recommendations for choosing between an agentic system and various types of workflows.

Figure 2: Scenario-based recommendations for workflows and agentic systems

5. Workflow 1: Prompt Chaining

Use Case: Generating Code from a Requirements Document

Prompt chaining decomposes a task into a sequence of steps, where each LLM call processes the output of the previous one. We can add programmatic checks on any intermediate steps to ensure that the process is still on track.

Figure 3: Prompt Chaining: Generating Code from a Requirements Document

When to use this workflow: This workflow is ideal for situations where the task can be easily and cleanly decomposed into fixed subtasks. The main goal is to trade off latency for higher accuracy, by making each LLM call an easier task. Here, the subtasks are generating the HLD, then the LLD, and finally the codebase.
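The HLD → LLD → code chain can be expressed as three sequential calls with a programmatic gate between steps. In this runnable sketch, `call_llm` is a placeholder that returns canned strings (a real version would hit a model API), and the gate is a trivially simple check standing in for whatever validation your pipeline needs.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return f"output for: {prompt}"

def gate(text: str) -> bool:
    # Programmatic check between steps; here just "non-empty output".
    return bool(text.strip())

def generate_code_from_requirements(requirements: str) -> str:
    # Step 1: high-level design from the requirements document.
    hld = call_llm(f"Write a high-level design for: {requirements}")
    if not gate(hld):
        raise ValueError("HLD step failed the intermediate check")
    # Step 2: low-level design from the HLD.
    lld = call_llm(f"Write a low-level design from this HLD: {hld}")
    if not gate(lld):
        raise ValueError("LLD step failed the intermediate check")
    # Step 3: code from the LLD.
    return call_llm(f"Write code implementing this LLD: {lld}")
```

Each call only sees the previous step's output, which is exactly the latency-for-accuracy trade the pattern makes.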

6. Workflow 2: Routing

Use Case: Processing diverse Customer Queries

Routing classifies an input and directs it to a specialized follow-up task. This workflow allows for separation of concerns and building more specialized prompts. Without it, optimizing for one kind of input can hurt performance on other inputs.

Figure 4: Routing: Processing diverse Customer Queries

When to use this workflow: Routing works well for complex tasks where there are distinct categories that are better handled separately. Here, separate prompts work well for different categories such as general questions, refund questions, and technical support, so the user gets a better response from this router strategy.
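A minimal router for the customer-query scenario looks like a classifier in front of a table of specialized prompts. Everything below is illustrative: the keyword classifier is a stand-in for an LLM-based classification call, and the prompt texts are made up for the example.

```python
def classify(query: str) -> str:
    # Placeholder classifier; a real router would use an LLM call here.
    q = query.lower()
    if "refund" in q:
        return "refund"
    if "error" in q or "crash" in q:
        return "technical"
    return "general"

# One specialized prompt per category, so each can be optimized independently.
SPECIALIZED_PROMPTS = {
    "refund": "You are a refunds specialist. Answer: {query}",
    "technical": "You are a support engineer. Answer: {query}",
    "general": "You are a helpful assistant. Answer: {query}",
}

def route(query: str) -> str:
    category = classify(query)
    # In practice, this prompt would now be sent to the model.
    return SPECIALIZED_PROMPTS[category].format(query=query)
```

The separation of concerns is the payoff: tuning the refund prompt can no longer degrade technical-support answers.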

7. Workflow 3: Parallelization

Use Case: LLM Response Evaluation

LLMs can sometimes work simultaneously on a task and have their outputs aggregated programmatically. This workflow, parallelization, manifests in two key variations:

  • Sectioning: Breaking a task into independent subtasks run in parallel.
  • Voting: Running the same task multiple times to get diverse outputs.

Figure 5: Parallelization: LLM Response Evaluation

When to use this workflow: Parallelization is effective when the divided subtasks can be parallelized for speed, or when multiple perspectives or attempts are needed for higher-confidence results. For complex tasks with multiple considerations, LLMs generally perform better when each consideration is handled by a separate LLM call, allowing focused attention on each specific aspect. Here, the LLM response needs to be evaluated along various dimensions, and the results can be aggregated to get the final output.
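The sectioning variant for response evaluation can be sketched with a thread pool: one (stubbed) judge per dimension, run in parallel, then aggregated programmatically. The dimension names and the pass/fail judge below are assumptions made for the example; a real judge would be a focused LLM call per dimension.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical evaluation dimensions for this example.
DIMENSIONS = ["accuracy", "clarity", "safety"]

def judge(dimension: str, response: str) -> int:
    # Placeholder per-dimension judge; a real one would be an LLM call
    # with a prompt specialized for `dimension`.
    return 1 if response.strip() else 0

def evaluate_response(response: str) -> dict:
    # Sectioning: score each dimension in parallel, then aggregate.
    with ThreadPoolExecutor() as pool:
        scores = pool.map(lambda d: (d, judge(d, response)), DIMENSIONS)
    return dict(scores)
```

The voting variant is the same shape with one dimension and N repeated calls, aggregated by majority instead of by key.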

8. Workflow 4: Orchestrator-Workers

Use Case: Modifying Multiple Files Dynamically

In the orchestrator-workers workflow, a central LLM dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.

Figure 6: Orchestrator-workers: Modifying multiple files dynamically

When to use this workflow: This workflow is well-suited for complex tasks where we can't predict the subtasks needed. Here, the central orchestrator LLM decides how many files need to be modified and what type of modification each one needs.
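The key difference from prompt chaining is that the plan itself is produced at runtime. In this sketch, the orchestrator's plan is hard-coded (a real orchestrator would ask an LLM which files to touch and why), and the worker is a stub standing in for a per-file LLM call; the file names are hypothetical.

```python
def worker(filename: str, change: str) -> str:
    # Worker placeholder; a real worker would be a separate LLM call that
    # applies the requested change to the given file.
    return f"{filename}: {change} -> done"

def orchestrate(task: str) -> dict:
    # Orchestrator step (stubbed): a real orchestrator asks an LLM, given
    # `task`, which files need modifying and what change each one needs.
    plan = {"app.py": "add logging", "utils.py": "rename helper"}
    # Delegate each subtask to a worker, then synthesize the results.
    return {filename: worker(filename, change) for filename, change in plan.items()}
```

Because the plan is model-generated, the number of workers and their inputs can differ on every run, which is exactly what a fixed workflow cannot express.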

9. Workflow 5: Evaluator-optimizer

Use Case: Successful Red Teaming Prompt Creation

In the evaluator-optimizer workflow, one LLM call generates a response while another provides evaluation and feedback in a loop.

Figure 7: Evaluator-optimizer: Successful Red Teaming Prompt Creation

When to use this workflow: This workflow is particularly effective when we have clear evaluation criteria, and when iterative refinement provides measurable value. Two signs of a good fit are, first, that LLM responses can be demonstrably improved when a human articulates feedback; and second, that the LLM itself can provide such feedback.

Here, the evaluation criterion is whether the generated prompt successfully red-teams the LLM-based application.
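The generate/evaluate loop can be written as two stubbed calls and a bounded iteration. Both functions below are placeholders: a real generator and evaluator would each be LLM calls, and the evaluator would actually test the candidate prompt against the target application rather than using the toy "has it been revised" rule shown here.

```python
def generate(task: str, feedback: str = "") -> str:
    # Generator placeholder; a real generator is an LLM call that drafts
    # (or revises) a candidate red-teaming prompt.
    return task + (f" [revised per: {feedback}]" if feedback else "")

def evaluate(candidate: str) -> tuple:
    # Evaluator placeholder: here "success" just means the candidate was
    # revised at least once; a real evaluator would run the prompt against
    # the target application and judge the outcome.
    if "[revised" in candidate:
        return True, ""
    return False, "make the prompt more specific"

def refine(task: str, max_rounds: int = 3) -> str:
    candidate = generate(task)
    for _ in range(max_rounds):
        ok, feedback = evaluate(candidate)
        if ok:
            break
        candidate = generate(task, feedback)  # feed the critique back in
    return candidate
```

The `max_rounds` bound matters in practice: without it, a never-satisfied evaluator loops (and bills) forever.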

10. Agentic System

Agents can understand complex inputs, engage in reasoning and planning, use tools reliably, and recover from errors. The workflow below depicts this.

Agentic Workflow

Agents can handle sophisticated tasks, but their implementation is often straightforward. They are typically just LLMs using tools based on environmental feedback in a loop. It is therefore crucial to design toolsets and their documentation clearly and thoughtfully.

Figure 8: Autonomous Agent

Source: https://www.anthropic.com/research/building-effective-agents

When to use agents: Agents can be used for open-ended problems where it's difficult or impossible to predict the required number of steps, and where we can't hardcode a fixed path. The LLM will potentially operate for many turns, and you must have some level of trust in its decision-making. Agents' autonomy makes them ideal for scaling tasks in trusted environments.
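The "LLM using tools based on environmental feedback in a loop" shape can be sketched in a few lines. The decision policy and the single `search` tool below are stand-ins: a real agent would let the model choose the next action from tool schemas at each turn, with many more tools and richer stopping criteria. The turn cap reflects the trust/guardrail point above.

```python
def agent(task: str, max_turns: int = 5) -> str:
    # Hypothetical toolset; a real agent receives these as tool schemas.
    tools = {"search": lambda q: f"results for {q}"}
    observation = task
    for _ in range(max_turns):
        # Policy placeholder: a real agent asks the LLM to choose the next
        # action based on the latest environmental feedback.
        if observation.startswith("results for"):
            # The (stubbed) model decides it has enough to answer.
            return f"final answer based on: {observation}"
        observation = tools["search"](observation)
    # Guardrail: cap the number of autonomous turns.
    return f"stopped after {max_turns} turns: {observation}"
```

Note that the loop's control flow is driven by the observation, not by code-level branching on the task: that is the defining difference from the workflows above.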

Conclusion:

The autonomous nature of agents means higher costs and the potential for compounding errors. Hence, extensive testing in sandboxed environments is recommended, along with appropriate guardrails.


References:

https://www.anthropic.com/research/building-effective-agents

https://www.intuz.com/blog/ai-agent-workflows-across-industries

