Visual LangGraph Generator
Image 1: A Corrective RAG visual graph

A fun little project to visually generate graph boilerplate, inspired by langgraph-engineer. The key difference? We're building the graph on a canvas, like assembling Lego bricks!

For me, it's so much easier to think and build visually. While IPython is great for visualizing a graph after generation, wouldn't it be cooler to see it before we start coding? As an added bonus, this approach gives us much more clarity on the kind of graph we want to build. Don't like something? No problem – just erase and start over. Want to add a new edge? Simply draw one in. You get the idea – it's way easier to do this visually than through code.

Here's how it was done:

  1. A React front-end to create a canvas for placing nodes, edges, conditional edges, and agent states. This was the most challenging part.
  2. During graph build, a dictionary stores every element and its associated relationships. This blueprint is crucial, as the LLM will rely on it to create the graph's Python code (a sketch of such a blueprint appears after this list).
  3. Once the visual graph build is complete, the dictionary is serialized to JSON and sent to a FastAPI-based backend.
  4. The JSON is fed to a creator LLM that generates the first draft of the graph's Python code. It has context from common LangGraph examples (again inspired by langgraph-engineer) such as simple 2-node graphs, conditional graphs, subgraphs, agent supervisor, etc. The richer the context, the better the output.
  5. The creator LLM's output is then fed to a reviewer LLM, which critiques, corrects, and generates the final version. Alternatively, we can also feed the reviewer LLM's feedback back to the creator LLM for regeneration using a 2-node graph (see the second sketch after this list).
  6. Voilà! The graph boilerplate code is ready. While my initial goal was simply to create graph boilerplate, I discovered far more value in the process itself. It provides remarkable clarity and insights into the kind of graph we want to build, going well beyond just generating code.
  7. As for the LLMs used: GPT-4o was the creator LLM, and Claude 3.5 Sonnet was the reviewer LLM. I tried multiple other LLMs (didn't try Llama 405B), and none could follow the prompt instructions accurately enough to generate the complex graph structures and conditional logic required for LangGraph implementations.
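
To make steps 2–4 concrete, here's a rough sketch of what the serialized blueprint for the Corrective RAG graph in Image 1 could look like. The key names (state, nodes, edges, conditional_edges) are purely illustrative – the actual format produced by the canvas may differ.

blueprint = {
    "state": {
        "question": "str",
        "generation": "str",
        "documents": "List[str]",
    },
    "nodes": ["retrieve", "grade", "transform_query", "web_search", "generate"],
    "entry_point": "retrieve",
    "edges": [
        ["retrieve", "grade"],
        ["transform_query", "web_search"],
        ["web_search", "grade"],
        ["generate", "END"],
    ],
    "conditional_edges": [
        {
            "source": "grade",
            "condition": "relevant_docs_decision",
            "mapping": {"generate": "generate", "transform_query": "transform_query"},
        }
    ],
}

On the backend, a FastAPI route can accept this payload directly as the request body and hand it to the LLM pipeline. A minimal sketch – generate_graph_code is a hypothetical helper standing in for the creator/reviewer chain from steps 4 and 5:

from fastapi import FastAPI

api = FastAPI()

@api.post("/generate")
async def generate_code(blueprint: dict) -> dict:
    # Hypothetical helper wrapping the creator and reviewer LLM calls
    # code = generate_graph_code(blueprint)
    code = "..."
    return {"code": code}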

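The creator/reviewer loop from step 5 can itself be expressed as a tiny LangGraph. This is a minimal sketch under my own assumptions – call_creator_llm and call_reviewer_llm are hypothetical placeholders, and each node returns only the fields it updates:

import operator
from typing import Annotated, List, TypedDict

from langgraph.graph import END
from langgraph.graph.state import StateGraph

class CodegenState(TypedDict):
    blueprint_json: str
    draft_code: str
    feedback: Annotated[List[str], operator.add]
    approved: bool

def creator(state: CodegenState) -> dict:
    # e.g. draft = call_creator_llm(state["blueprint_json"], state["feedback"])
    return {"draft_code": ""}

def reviewer(state: CodegenState) -> dict:
    # e.g. critique = call_reviewer_llm(state["draft_code"])
    # return {"feedback": [critique.comments], "approved": critique.ok}
    return {"feedback": [], "approved": True}

def review_decision(state: CodegenState) -> str:
    return "accept" if state["approved"] else "revise"

codegen = StateGraph(CodegenState)
codegen.add_node("creator", creator)
codegen.add_node("reviewer", reviewer)
codegen.set_entry_point("creator")
codegen.add_edge("creator", "reviewer")
codegen.add_conditional_edges(
    "reviewer",
    review_decision,
    {"revise": "creator", "accept": END}
)
codegen_app = codegen.compile()
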
Another example below:

Image 2: The most common 2-node graph (equivalent of create-react-agent)
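
For reference (hand-written here, not generator output), the 2-node graph in Image 2 boils down to an agent node and a tools node connected in a loop. A minimal skeleton in the same style as the generated code further below, with the model and tool calls left as placeholder comments:

import operator
from typing import Annotated, TypedDict

from langgraph.graph import END
from langgraph.graph.state import StateGraph

class ReactState(TypedDict):
    messages: Annotated[list, operator.add]

def agent(state: ReactState) -> dict:
    # Call the LLM and return only the new message; operator.add appends it
    # e.g. return {"messages": [call_model(state["messages"])]}
    return {"messages": []}

def tools(state: ReactState) -> dict:
    # Execute any tool calls requested by the last message and append the results
    # e.g. return {"messages": run_tools(state["messages"][-1])}
    return {"messages": []}

def should_continue(state: ReactState) -> str:
    # Route to "tools" if the last message contains tool calls, otherwise finish
    return "end"

react_graph = StateGraph(ReactState)
react_graph.add_node("agent", agent)
react_graph.add_node("tools", tools)
react_graph.set_entry_point("agent")
react_graph.add_conditional_edges("agent", should_continue, {"tools": "tools", "end": END})
react_graph.add_edge("tools", "agent")
react_app = react_graph.compile()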

Below is the boilerplate code generated in the first iteration for a Corrective RAG use case (refer to Image 1). Pretty good for a complex graph on the first try. Claude even gave pointers on how to develop each node and conditional edge – pretty cool.

import json
import logging
import operator
from typing import Annotated, List, TypedDict

from langgraph.graph import END
from langgraph.graph.state import StateGraph

class AgentState(TypedDict):
    question: str
    generation: str
    documents: Annotated[List[str], operator.add]

def retrieve(state: AgentState) -> AgentState:
    # Implement the logic for the retrieve function
    # For example:
    # state['documents'] = retrieve_documents(state['question'])
    return state

def grade(state: AgentState) -> AgentState:
    # Implement the logic for the grade function
    # For example:
    # state['grade'] = grade_documents(state['documents'])
    return state

def transform_query(state: AgentState) -> AgentState:
    # Implement the logic for the transform_query function
    # For example:
    # state['question'] = transform_query(state['question'])
    return state

def web_search(state: AgentState) -> AgentState:
    # Implement the logic for the web_search function
    # For example:
    # state['documents'].extend(web_search(state['question']))
    return state

def generate(state: AgentState) -> AgentState:
    # Implement the logic for the generate function
    # For example:
    # state['generation'] = generate_answer(state['question'], state['documents'])
    return state

def relevant_docs_decision(state: AgentState) -> str:
    # Implement the logic to decide the next step based on relevant documents
    # For example:
    if len(state['documents']) > 0:
        return "generate"
    else:
        return "transform_query"

graph = StateGraph(AgentState)

graph.add_node("retrieve", retrieve)
graph.add_node("grade", grade)
graph.add_node("transform_query", transform_query)
graph.add_node("web_search", web_search)
graph.add_node("generate", generate)

graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "grade")

graph.add_conditional_edges(
    "grade",
    relevant_docs_decision,
    {
        "transform_query": "transform_query",
        "generate": "generate"
    }
)

graph.add_edge("transform_query", "web_search")
graph.add_edge("web_search", "grade")
graph.add_edge("generate", END)

app = graph.compile()

# Usage example:
initial_state = AgentState(question="What is the capital of France?", generation="", documents=[])
result = app.invoke(initial_state)

# Key changes and improvements:
# 1. Corrected the implementation of conditional edges.
# 2. Simplified node names to use function names directly.
# 3. Improved the structure of the graph to better reflect the intended workflow.
# 4. Added placeholder logic in node functions to illustrate their purpose.
# 5. Removed unused imports and variables.
# 6. Added a usage example to demonstrate how to invoke the compiled graph.        
