LangGraph - What is LangGraph and usage of LangGraph

Today I am going to write about LangGraph. In this article I will cover what LangGraph is and how it is used.

So, let's start. LangGraph, as the name suggests, is a stateful graph: a network of nodes connected by edges. The nodes are agents, and the edges pass information between the nodes. Edges can be normal or conditional.

LangGraph's objective is to orchestrate language agents into a workflow that achieves a certain output. In essence, LangGraph extends static Retrieval-Augmented Generation (RAG) by adding nodes and message passing between nodes to accomplish a task.

Let's look into the main components of LangGraph.

  • Stateful Graph - the graph carries a state object that is passed to every node and updated as the workflow runs.
  • Nodes - functions that call an LLM, or plain Python functions.
  • Edges - normal or conditional connections that pass information between nodes.

A conditional edge is only traversed when some rule or condition is satisfied. A minimal sketch of these components is shown below.
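To make this concrete, here is a minimal, hypothetical sketch (all names here are invented for illustration, not from the article's application): a typed state, a plain-Python node, and a conditional edge that loops until a condition is met.

from typing import TypedDict

from langgraph.graph import END, StateGraph

class State(TypedDict):
    count: int

def increment(state: State) -> State:
    # A node is just a function that receives the state and returns updates.
    return {"count": state["count"] + 1}

graph = StateGraph(State)
graph.add_node("increment", increment)
graph.set_entry_point("increment")
# Conditional edge: loop back into the node until count reaches 3, then stop.
graph.add_conditional_edges(
    "increment",
    lambda s: "again" if s["count"] < 3 else "done",
    {"again": "increment", "done": END},
)
app = graph.compile()
print(app.invoke({"count": 0}))  # {'count': 3}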

Following are some features that make LangGraph very powerful and unique.

  • LangGraph enables workflows driven by LLMs. Just consider how powerful this is: workflows enabled by AI.
  • LangGraph enables human-in-the-loop, so humans can be brought into the application to review outputs or take actions. This is a very powerful capability.
  • LangGraph is easy to use, so business adoption is fast.

Having covered what LangGraph is, let's dive into developing a practical application of LangGraph.

We will create a multi-agent system using LangGraph. The objective of this multi-agent system is to write a technical blog.

The graph for this application is shown in the diagram referenced under Image Credit at the end of this article.

Let's dive into the components of this multi-agent LangGraph.

  • Two main team nodes: the Research Team and the Document Team.
  • The Research Team Node can either search for content through the Search Node or retrieve content through the RAG Node.
  • The Document Team Node edits and writes the content when neither the Search Node nor the RAG Node finds it.
  • The Supervisor Node supervises the Research and Document Team Nodes: if, for a given input, the research side does not produce the required output, the Supervisor Node routes the work to the Document Team to create the document. This is KEY.
  • Input is the input instruction / prompt.
  • Output is the result of the flow.

Let's now look at the high-level composition of the Research Team Node (to understand LangGraph).

  • RAG Node - retrieves, augments, and generates information from the indexed document.
  • Search Node - uses the Tavily search engine, built specifically for AI agents / LLMs, to search for content.

To construct the Research Team, we first create the Agent Node (graph), followed by the Supervisor Node.

The high-level code for the Agent, Supervisor, Research, RAG, and Search nodes is given below. (The code comes from the NLP | LangGraph | Multi-Agent RAG notebook on kaggle.com, credited at the end of this article.)

Agent Node:

An important point is that the code below uses the ChatOpenAI LLM. The body shown here follows the cited notebook's approach: build a prompt, create an OpenAI functions agent, and wrap it in an AgentExecutor so it can serve as a node.

def create_agent(llm: ChatOpenAI, tools: list, system_prompt: str) -> AgentExecutor:
    # Per the cited notebook: build a prompt, create a function-calling agent,
    # and wrap it in an AgentExecutor so it can be invoked as a node.
    prompt = ChatPromptTemplate.from_messages([
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])
    agent = create_openai_functions_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools)
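As a hypothetical usage (the tool, model name, and variable names here are assumptions, not from the original), the search agent used later in this article could be built like this:

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

# Assumes OPENAI_API_KEY and TAVILY_API_KEY are set in the environment.
llm = ChatOpenAI(model="gpt-4")
tavily_tool = TavilySearchResults(max_results=5)
search_agent = create_agent(
    llm,
    [tavily_tool],
    "You are a research assistant who searches for up-to-date information.",
)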

Supervisor Node:

An important point in the code below is the function definition that routes based on a condition. This is KEY.

def create_team_supervisor(llm: ChatOpenAI, system_prompt, members) -> str:
    """An LLM-based router."""
    options = ["FINISH"] + members
    function_def = {
        "name": "route",
        "description": "Select the next role.",
        "parameters": {
            "title": "routeSchema",
            "type": "object",
            "properties": {
                "next": {
                    "title": "Next",
                    "anyOf": [
                        {"enum": options},
                    ],
                },
            },
            "required": ["next"],
        },
    }
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt),
            MessagesPlaceholder(variable_name="messages"),
            (
                "system",
                "Given the conversation above, who should act next?"
                " Or should we FINISH? Select one of: {options}",
            ),
        ]
    ).partial(options=str(options), team_members=", ".join(members))
    return (
        prompt
        | llm.bind_functions(functions=[function_def], function_call="route")
        | JsonOutputFunctionsParser()
    )        
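Hypothetically, the returned runnable can be invoked with the running message list and yields only the parsed routing decision (llm is assumed to be a ChatOpenAI instance, and the member names and message below are invented for illustration):

from langchain_core.messages import HumanMessage

router = create_team_supervisor(llm, "Route between workers.", ["Search", "RAG"])
decision = router.invoke({"messages": [HumanMessage(content="Find the paper.")]})
print(decision)  # e.g. {'next': 'Search'}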

Research Team Node: this node has two tools: the pre-packaged Tavily Search tool for the Search Node, and a custom tool we will create for the RAG Node.

For the Research Team we can use any LLM. Note that the LLM for the Research Team and the one used in the Agent Node defined above can be different.

Research Team Agent Code:

supervisor_agent = create_team_supervisor(
    llm,
    "You are a supervisor tasked with managing a conversation between the"
    " following workers:  Search, PaperInformationRetriever. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH.",
    ["Search", "PaperInformationRetriever"],
)        
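Both node wrappers below rely on a small agent_node helper that is not shown in the article's snippets. A minimal sketch, following the cited notebook: invoke the agent on the current state and repackage its output as a named message.

from langchain_core.messages import HumanMessage

def agent_node(state, agent, name):
    # Run the agent on the shared state and append its output as a message.
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}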

Tavily Search Node Code:

search_node = functools.partial(agent_node, agent=search_agent, name="Search")        

RAG Node Code:

research_node = functools.partial(agent_node, agent=research_agent, name="PaperInformationRetriever")        
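For completeness, here is a hedged sketch of how the research_agent referenced above might be assembled (the retriever construction, the docs variable, and all names are assumptions for illustration, not the notebook's exact code):

from langchain.tools.retriever import create_retriever_tool
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# `docs` is assumed to be the paper's text, already loaded and split.
retriever = FAISS.from_documents(docs, OpenAIEmbeddings()).as_retriever()
retriever_tool = create_retriever_tool(
    retriever,
    "paper_information_retriever",
    "Retrieves relevant passages from the indexed paper.",
)
research_agent = create_agent(
    llm,
    [retriever_tool],
    "You answer questions using passages retrieved from the indexed paper.",
)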

We have now created our Research Team supervisor agent and the Tavily Search and RAG nodes.

Let's define the Research Team graph and its conditional edges to create the Research Team agent, which will plug into the Supervisor Agent.
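The graph below is built over a ResearchTeamState schema. A minimal sketch of such a state, with field names following the cited notebook:

import operator
from typing import Annotated, List, TypedDict

from langchain_core.messages import BaseMessage

class ResearchTeamState(TypedDict):
    # Each node's returned messages are appended (operator.add) to this list.
    messages: Annotated[List[BaseMessage], operator.add]
    team_members: List[str]  # names of the worker agents
    next: str                # the supervisor's most recent routing decision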

Research Team graph generation code (please note the conditional edges):

research_graph = StateGraph(ResearchTeamState)

research_graph.add_node("Search", search_node)
research_graph.add_node("PaperInformationRetriever", research_node)
research_graph.add_node("supervisor", supervisor_agent)        
research_graph.add_edge("Search", "supervisor")
research_graph.add_edge("PaperInformationRetriever", "supervisor")
research_graph.add_conditional_edges(
    "supervisor",
    lambda x: x["next"],
    {"Search": "Search", "PaperInformationRetriever": "PaperInformationRetriever", "FINISH": END},
)        

Finally, set the supervisor as the entry point and compile the graph:

research_graph.set_entry_point("supervisor")
chain = research_graph.compile()        

We wrap our graph in a small LangChain Expression Language (LCEL) chain so it can be invoked with a plain string.
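A hedged sketch of that wrapper, following the cited notebook: convert a plain string into the graph's state format, then pipe it into the compiled graph.

from langchain_core.messages import HumanMessage

def enter_chain(message: str):
    # Convert a raw user string into the state dict the graph expects.
    return {"messages": [HumanMessage(content=message)]}

# LCEL coerces the plain function into a runnable and chains it with the graph.
research_chain = enter_chain | chain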

Let's use the graph.

for s in research_chain.stream(
    "What are the main takeaways from the paper 'Extending Llama-3's Context Ten-Fold Overnight'?"
    " Please use Search and PaperInformationRetriever!",
    {"recursion_limit": 100},
):
    if "__end__" not in s:
        print(s)
        print("---")
Sample (truncated) streamed output:

{'supervisor': {'next': 'Search'}}
{'Search': {'messages': [HumanMessage(content='It appears there is an issue with the search engine at the moment, which is preventing me from retrieving the information you requested. Unfortunately, I cannot directly access the content of the paper "Extending Llama-3\'s Context Ten-Fold Overnight" without the search tool.\n\nHowever, I can suggest some typical takeaways you might expect from a paper with such a title. Generally, a paper discussing the extension of a model\'s context (like a hypothetical Llama-3, which seems to be a play on large language models such as GPT-3) would likely cover topics such as:\n\n1. **Methodology**: How the context was technically extended. This could involve new training techniques, data augmentation methods, or architectural changes to the model. \n\n2. **Performance Improvements**        
{'supervisor': {'next': 'PaperInformationRetriever'}}        
{'PaperInformationRetriever': {'messages': [HumanMessage(content='The paper "Extending Llama-3\'s Context Ten-Fold Overnight" describes a significant increase in the context length capability of the Llama-3-8B-Instruct model, from 8,000 tokens to 80,000 tokens. This enhancement was facilitated through a technique called QLoRA fine-tuning. Remarkably, the team managed to accomplish this expansion using only 3.5K synthetic training samples that were generated by GPT-4.\n\nKey takeaways from the paper include:\n\n1. The context length of Llama-3 was extended ten-fold using a novel fine-tuning technique.\n2. The process involved the generation of a small set of synthetic training samples by a powerful language model, GPT-4.\n3.         

You can now see how powerful LangGraph is. There are numerous business applications of LangGraph, such as conversational agents, generating insights from multiple documents, and other insight-generation workflows.

Thanks. Hope you all have a good read.

Disclaimer: The opinions / views expressed above are the author's own and have no bearing on, or affiliation with, the author's current or any past employer.

Image Credit:

https://medium.com/@venugopal.adep/introducing-langgraph-crafting-intelligent-language-agents-just-got-easier-2aebdf730b78

Code Credit:

https://www.kaggle.com/code/yannicksteph/nlp-langgraph-multi-agent-rag



