LangGraph - What LangGraph Is and How to Use It
Today I am going to write about LangGraph. In this article I will cover what LangGraph is and how it is used.
So, let's start. LangGraph, as the name suggests, is a stateful graph: a network of nodes connected by edges. The nodes are agents, and the edges pass information between them. Edges can be normal or conditional.
LangGraph's objective is to orchestrate language agents in a workflow that achieves a desired output. In essence, LangGraph extends static Retrieval-Augmented Generation (RAG) by introducing nodes and message passing between them to accomplish a task.
Let's look into the main components of LangGraph: the shared state, the nodes, and the edges. A conditional edge is traversed only when some rule or condition is satisfied, while a normal edge is always traversed; the sketch below illustrates both.
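To make these components concrete, here is a minimal sketch of a tiny LangGraph: a shared state, two nodes, a normal edge, and a conditional edge. All the names here (State, draft, review, route) are illustrative assumptions, not part of the application built later in this article.

from typing import TypedDict

from langgraph.graph import END, StateGraph

class State(TypedDict):
    text: str

def draft(state: State) -> State:
    return {"text": state["text"] + " draft"}

def review(state: State) -> State:
    return {"text": state["text"] + " reviewed"}

def route(state: State) -> str:
    # Conditional edge: loop back to "review" until the text is long enough.
    return "review" if len(state["text"]) < 20 else END

graph = StateGraph(State)
graph.add_node("draft", draft)
graph.add_node("review", review)
graph.add_edge("draft", "review")             # normal edge: always traversed
graph.add_conditional_edges("review", route)  # conditional edge: decided at runtime
graph.set_entry_point("draft")
app = graph.compile()
print(app.invoke({"text": "topic:"}))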
Features such as statefulness, support for cycles, and conditional routing between nodes are what make LangGraph very powerful and unique.
Having covered what LangGraph is, let's dive into developing a practical application of LangGraph.
We will create a multi-agent system using LangGraph. The objective of this multi-agent system is to write a technical blog.
The graph for this system is shown in the image below.
Let's dive into the components of this multi-agent LangGraph.
Let's now look at the high-level construction of the Research Team node (to understand LangGraph).
To construct the Research node, we first need to create the Agent node (graph), followed by the Supervisor node.
The high-level code for the Agent, Supervisor, Research, RAG, and Search nodes is given below. (The code is adapted from "NLP | LangGraph | Multi-Agent RAG" on kaggle.com.)
Agent Node:
An important point is that the code below uses the ChatOpenAI LLM.
def create_agent(
    llm: ChatOpenAI,
    tools: list,
    system_prompt: str,
) -> AgentExecutor:
    ...  # builds a function-calling agent executor; a completed sketch follows
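Since only the signature is shown in the source, here is one plausible completion, a minimal sketch following the pattern of the referenced Kaggle notebook; the prompt layout and helper choices are assumptions, not the author's exact code.

from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

def create_agent(
    llm: ChatOpenAI,
    tools: list,
    system_prompt: str,
) -> AgentExecutor:
    # The prompt carries the conversation so far plus the agent's scratchpad.
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt),
            MessagesPlaceholder(variable_name="messages"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_functions_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools)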
Supervisor Node:
The important point in the code below is the function definition used to route based on a condition. This is key.
from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

def create_team_supervisor(llm: ChatOpenAI, system_prompt: str, members: list):
    """An LLM-based router: picks the next worker, or FINISH."""
    options = ["FINISH"] + members
    # OpenAI function schema the LLM is forced to call, constraining its
    # answer to one of the routing options.
    function_def = {
        "name": "route",
        "description": "Select the next role.",
        "parameters": {
            "title": "routeSchema",
            "type": "object",
            "properties": {
                "next": {
                    "title": "Next",
                    "anyOf": [
                        {"enum": options},
                    ],
                },
            },
            "required": ["next"],
        },
    }
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt),
            MessagesPlaceholder(variable_name="messages"),
            (
                "system",
                "Given the conversation above, who should act next?"
                " Or should we FINISH? Select one of: {options}",
            ),
        ]
    ).partial(options=str(options), team_members=", ".join(members))
    return (
        prompt
        | llm.bind_functions(functions=[function_def], function_call="route")
        | JsonOutputFunctionsParser()
    )
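A note on the design: JsonOutputFunctionsParser turns the model's forced "route" function call into a plain dict such as {'next': 'Search'}. The conditional edge we define later reads this 'next' key (via lambda x: x["next"]) to decide where control flows, which is exactly what makes the supervisor a router.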
Research Team Node: This node will have two tools: Tavily Search, a pre-packaged tool used by the Search node, and a custom retrieval tool we will create for the RAG node.
For the Research Team we can use any LLM. Note that the LLM for the Research Team and the one for the Agent node defined above are different.
Research Team Agent Code:
supervisor_agent = create_team_supervisor(
    llm,
    "You are a supervisor tasked with managing a conversation between the"
    " following workers: Search, PaperInformationRetriever. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH.",
    ["Search", "PaperInformationRetriever"],
)
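Here llm is assumed to be an already-instantiated chat model; for example (the model name is an assumption, and any function-calling-capable chat model should work):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4-1106-preview")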
Tavily Search Node Code:
search_node = functools.partial(agent_node, agent=search_agent, name="Search")
RAG Node Code:
research_node = functools.partial(agent_node, agent=research_agent, name="PaperInformationRetriever")
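The two functools.partial calls above assume an agent_node helper plus the two worker agents. Here is a minimal sketch of what these could look like, following the referenced notebook; the tool names tavily_tool and retriever_tool and the prompt strings are assumptions:

import functools

from langchain_core.messages import HumanMessage

def agent_node(state, agent, name):
    # Invoke the wrapped agent on the shared state and return its output
    # as a named message appended to the team's conversation.
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}

# Assumed tools: tavily_tool (pre-packaged Tavily search) and
# retriever_tool (the custom RAG tool mentioned above).
search_agent = create_agent(llm, [tavily_tool], "You search the web for up-to-date information.")
research_agent = create_agent(llm, [retriever_tool], "You retrieve information from the paper.")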
Now we have created our Research Team supervisor along with the Tavily Search and RAG nodes.
Let's define the Research Team graph and its conditional edges to create the Research Team agent, which will plug into the Supervisor agent.
Research Team graph generation code (please note the conditional edges):
research_graph = StateGraph(ResearchTeamState)
research_graph.add_node("Search", search_node)
research_graph.add_node("PaperInformationRetriever", research_node)
research_graph.add_node("supervisor", supervisor_agent)
research_graph.add_edge("Search", "supervisor")
research_graph.add_edge("PaperInformationRetriever", "supervisor")
research_graph.add_conditional_edges(
    "supervisor",
    lambda x: x["next"],
    {"Search": "Search", "PaperInformationRetriever": "PaperInformationRetriever", "FINISH": END},
)
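The ResearchTeamState used above is not defined in the article; following the usual LangGraph pattern (and the referenced notebook), it would be a TypedDict along these lines. Treat this as a sketch:

import operator
from typing import Annotated, List, TypedDict

from langchain_core.messages import BaseMessage

class ResearchTeamState(TypedDict):
    # operator.add makes messages accumulate as each node appends its output.
    messages: Annotated[List[BaseMessage], operator.add]
    # Worker names, and the supervisor's routing decision, which the
    # conditional edge reads via x["next"].
    team_members: List[str]
    next: str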
Finally, set the supervisor as the graph's entry point and compile it:
research_graph.set_entry_point("supervisor")
chain = research_graph.compile()
We then wrap the compiled graph in a LangChain Expression Language (LCEL) chain so it can be called with a plain string; a sketch of that wrapper follows.
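The stream call below uses research_chain, which the article does not define; following the referenced notebook, the wrapper would look roughly like this (enter_chain is an assumed name):

from langchain_core.messages import HumanMessage

def enter_chain(message: str) -> dict:
    # Convert a plain string into the state dict the graph expects.
    return {"messages": [HumanMessage(content=message)]}

# Piping a plain function into a runnable wraps it automatically (LCEL).
research_chain = enter_chain | chain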
Let's use the graph.
for s in research_chain.stream(
    "What are the main takeaways from the paper 'Extending Llama-3's Context Ten-Fold Overnight'? Please use Search and PaperInformationRetriever!",
    {"recursion_limit": 100},
):
    if "__end__" not in s:
        print(s)
        print("---")
{'supervisor': {'next': 'Search'}}
{'Search': {'messages': [HumanMessage(content='It appears there is an issue with the search engine at the moment, which is preventing me from retrieving the information you requested. Unfortunately, I cannot directly access the content of the paper "Extending Llama-3\'s Context Ten-Fold Overnight" without the search tool.\n\nHowever, I can suggest some typical takeaways you might expect from a paper with such a title. Generally, a paper discussing the extension of a model\'s context (like a hypothetical Llama-3, which seems to be a play on large language models such as GPT-3) would likely cover topics such as:\n\n1. **Methodology**: How the context was technically extended. This could involve new training techniques, data augmentation methods, or architectural changes to the model. \n\n2. **Performance Improvements**
{'supervisor': {'next': 'PaperInformationRetriever'}}
{'PaperInformationRetriever': {'messages': [HumanMessage(content='The paper "Extending Llama-3\'s Context Ten-Fold Overnight" describes a significant increase in the context length capability of the Llama-3-8B-Instruct model, from 8,000 tokens to 80,000 tokens. This enhancement was facilitated through a technique called QLoRA fine-tuning. Remarkably, the team managed to accomplish this expansion using only 3.5K synthetic training samples that were generated by GPT-4.\n\nKey takeaways from the paper include:\n\n1. The context length of Llama-3 was extended ten-fold using a novel fine-tuning technique.\n2. The process involved the generation of a small set of synthetic training samples by a powerful language model, GPT-4.\n3.
So now you can see how powerful LangGraph is. It has numerous business applications, such as conversational agents, generating insights from multiple documents, and other insight-generation workflows.
Thanks. I hope you enjoyed the read.
Disclaimer: The opinions and views expressed above are the author's own and have no bearing on, or affiliation with, the author's current or any past employer.