Reflection Agent with LangGraph
1) What Is Reflection?
Reflection is a prompting strategy used to improve the quality and success rate of agents and similar AI systems. It involves prompting an LLM to reflect on and critique its past actions, sometimes incorporating additional external information such as tools and observations.
To achieve reflection in LLM-based systems, we use multiple LLM agents. Some generate content (the generative role of LLMs). Others reflect on the generated content and give constructive criticism, which is then used to improve the next generation in the cycle. Let's take a look at the types of reflection we can work with. Keep in mind that there may be many more approaches than the ones covered in this article.
Agentic Workflows with LangGraph
In LangGraph, we can define the agentic AI workflow as a graph consisting of LangChain chains. Each chain represents a single workflow step and usually consists of a single AI interaction (but that's not a rule). At the end of each step, we return new state variables. LangGraph passes those variables as input to the next step or uses them in conditional statements to decide what to do next.
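The state-passing pattern described above can be emulated without LangGraph itself: each step is a function that receives the current state and returns new state variables, and a router picks the next step based on those variables (in real LangGraph code, `StateGraph`, `add_node`, and `add_conditional_edges` play these roles). The node names and logic below are illustrative, not LangGraph's API.

```python
# Minimal emulation of LangGraph's state-passing pattern. Each node
# returns a dict of new state variables that gets merged into the
# shared state; the router inspects the state to pick the next node.

def plan(state):
    # Planner decides whether our data can answer the question.
    answerable = "orders" in state["question"]
    return {"answerable": answerable}

def write_sql(state):
    return {"sql": "SELECT COUNT(*) FROM orders"}  # placeholder query

def respond(state):
    return {"answer": f"Ran: {state['sql']}"}

def refuse(state):
    return {"answer": "I don't have access to that information."}

NODES = {"plan": plan, "write_sql": write_sql,
         "respond": respond, "refuse": refuse}

def route(current, state):
    # Conditional edges: the planner's output decides the path.
    if current == "plan":
        return "write_sql" if state["answerable"] else "refuse"
    if current == "write_sql":
        return "respond"
    return None  # terminal node

def run(question):
    state = {"question": question}
    node = "plan"
    while node is not None:
        state.update(NODES[node](state))  # merge new state variables
        node = route(node, state)
    return state
```

The merge-and-route loop at the bottom is what LangGraph provides for you; your own code only supplies the node functions and the routing conditions.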
In our example, we create an agentic AI workflow consisting of the following steps:
1. Plan the action: decide which data to retrieve, or whether to skip the question entirely.
2. Generate an SQL query.
3. Execute the query using a pre-defined Python function.
4. Generate a human-readable response from the query results.
This workflow includes autonomous action planning and decision-making: the AI decides what data to retrieve, or skips the question entirely if we don't have access to the required information.
We have also broken the task into smaller steps, each handled by a specialized AI agent: one agent generates SQL queries, another generates a human-readable response, and a third plans the action.
Executing the query is a step implemented as a pre-defined Python function; only the function's input, the SQL query itself, is AI-generated.
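A sketch of what such a pre-defined execution step might look like, using the standard-library `sqlite3` module: the function's logic is fixed Python code, and only the `sql` string (which the agent would produce) is AI-generated. The `orders` table and its data are purely illustrative.

```python
import sqlite3

# Pre-defined workflow step: only the `sql` argument would come from
# the SQL-generating agent; the execution logic is plain Python.

def execute_query(conn: sqlite3.Connection, sql: str) -> list[tuple]:
    # Execute the AI-generated query and return the rows as tuples.
    return conn.execute(sql).fetchall()

# Illustrative in-memory database for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 9.99), (2, 14.50)])

rows = execute_query(conn, "SELECT COUNT(*), SUM(amount) FROM orders")
```

In production you would also validate or sandbox the AI-generated SQL (e.g. allow only read-only statements) before executing it.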