Simulating Conversations with Agent-Based Models in LangGraph

Ever wondered how to simulate dynamic interactions between characters like Jim and Pam from "The Office"? With agent-based models in LangGraph, you can create engaging, personality-driven simulations powered by Large Language Models (LLMs).

Introduction

Agent-based modeling is a powerful approach for simulating interactions between autonomous agents, each with its own behavior and state. Combining it with LLMs lets us create agents that follow predefined rules while generating natural language responses that reflect complex personalities.

In this article, we'll explore how to use LangGraph to simulate a conversation between two beloved characters, Jim Halpert and Pam Beesly. We'll cover creating agents with distinct personalities, constructing the conversation graph, and managing the simulation loop.

Creating Agents with Distinct Personalities

To simulate a believable conversation, each agent must have a well-defined personality. In LangGraph, we achieve this by crafting a system prompt for each agent, which guides the LLM's responses.

For example, here's how we define Pam's persona:

from langchain_core.messages import AIMessage, SystemMessage

class PamNode:
    def __init__(self, llm):
        self.llm = llm  # chat model used to generate Pam's replies
        self.system_message = self.create_system_message()

    def create_system_message(self):
        return SystemMessage(
            content="You are Pam Beesly, a whimsical and playful character deeply in love with Jim. Respond accordingly."
        )

By setting up the system prompt in the create_system_message() method, we ensure that all of Pam's responses align with her character traits. Similarly, we can define Jim's persona, capturing his wit and charm.
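Jim's node mirrors Pam's; only the system prompt changes. Here is a minimal sketch, written as a subclass of PamNode to reuse the constructor; the exact wording of Jim's prompt is illustrative, not the tutorial's:

class JimNode(PamNode):
    def create_system_message(self):
        # Illustrative prompt; tune the wording to taste
        return SystemMessage(
            content="You are Jim Halpert, witty and charming, deeply in love with Pam. Respond accordingly."
        )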

The Core Component of the Agent: The act() Method

The primary function of each agent is encapsulated in its act() method, which generates responses based on the current state of the conversation. For instance, Pam's act() method retrieves the conversation history from the ConversationState object, passes it to the LLM, and returns Pam's response as a state update.

def act(self, state: ConversationState):
    """
    Generate Pam's response based on the conversation state.
    """
    # Prepend the system prompt so the LLM stays in character
    conversation_history = [self.system_message] + state.messages
    # Generate the reply from the full history
    response = self.llm.invoke(conversation_history)
    response_content = response.content.strip()
    # Print the reply so the exchange is visible while the simulation runs
    print("------------------------")
    print(response_content)
    # Return the reply as a state update; the operator.add reducer appends it
    return {"messages": [AIMessage(content=response_content)]}

This method is central to the agent’s behavior. By accessing the conversation state, it enables Pam’s responses to stay contextually accurate, contributing to the seamless flow of dialogue. The method gathers all prior messages, generates Pam’s reply using the LLM, and then appends her response to the conversation state.

Building the Conversation Graph

LangGraph allows us to structure the flow of the conversation using nodes and edges. Each node represents a function that updates the conversation state, and the edges define the sequence of interactions.

First, we define the conversation state:

import operator
from typing import Annotated, Sequence

from pydantic import BaseModel, Field

class ConversationState(BaseModel):
    messages: Annotated[Sequence[AIMessage], operator.add] = Field(default_factory=list)

This state keeps track of all messages exchanged between the agents. The Annotated type hint attaches operator.add to the field as a reducer, telling LangGraph to append each node's new messages to the existing sequence rather than replace it.
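To see the reducer in isolation: operator.add on two lists is plain concatenation, which is effectively what LangGraph does with each node's return value. A small illustration, with made-up message contents:

history = [AIMessage(content="Hey Pam!")]
update = [AIMessage(content="Hey Jim!")]
merged = operator.add(history, update)  # same as history + update
assert [m.content for m in merged] == ["Hey Pam!", "Hey Jim!"]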

Next, we create the agents and the graph:

from langgraph.graph import StateGraph, START, END

# Create Jim and Pam agents; llm is an initialized chat model (e.g., ChatOpenAI)
jim = JimNode(llm)
pam = PamNode(llm)

# Create the state graph over the ConversationState schema
graph = StateGraph(ConversationState)

# Add nodes for Jim and Pam with their respective actions
graph.add_node("Jim", jim.act)
graph.add_node("Pam", pam.act)

# Define conversation flow: Start -> Jim -> Pam -> End
graph.add_edge(START, "Jim")
graph.add_edge("Jim", "Pam")
graph.add_edge("Pam", END)

# Compile the graph into a runnable
conversation_graph = graph.compile()

This setup ensures that the conversation starts with Jim, followed by Pam, and then ends, forming a complete exchange.
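Before running anything, it can be worth sanity-checking the wiring. A quick sketch (print_ascii() relies on the optional grandalf package; draw_mermaid() returns a plain string):

# ASCII rendering of the compiled topology: __start__ -> Jim -> Pam -> __end__
conversation_graph.get_graph().print_ascii()

# Alternatively, emit a Mermaid diagram definition
print(conversation_graph.get_graph().draw_mermaid())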

Managing the Simulation Loop

To simulate multiple exchanges between Jim and Pam, we implement a loop that repeatedly invokes the conversation graph. Driving the repetition from outside the graph avoids the recursion limit LangGraph enforces when a graph loops back on itself.

# Initialize the conversation state
state = ConversationState()

# Simulation loop: simulate 3 exchanges between Jim and Pam
for _ in range(3):
    # Each invocation runs one full Start -> Jim -> Pam -> End cycle
    state = conversation_graph.invoke(state)

This loop runs the entire cycle defined in the conversation graph multiple times. After the invoke() method completes a cycle, the updated state is passed to the next iteration, letting the conversation progress naturally while keeping the full exchange history in the ConversationState object.
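When the loop finishes, state holds the full transcript. A minimal sketch for printing it, assuming invoke() returns the state values as a dict (recent LangGraph behavior; older versions may hand back the Pydantic model, in which case use state.messages):

# Replay the accumulated exchange: 3 iterations x 2 agents = 6 messages
for message in state["messages"]:
    print(message.content)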

Utilizing the Runnable Interface

LangGraph builds on LangChain's Runnable interface, which standardizes how components such as LLMs, custom chains, and compiled graphs are invoked. Methods like invoke(), batch(), and stream() give every component the same calling convention for inputs and outputs.

For instance, both the LLM and the conversation graph use the invoke() method:

response = llm.invoke(prompt)
state = conversation_graph.invoke(state)

This consistency simplifies the integration of various components and enhances code readability.
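The other Runnable methods work the same way on both objects. A hedged sketch of batch() and stream(), with illustrative prompts:

# Run several prompts through the LLM in one call
responses = llm.batch(["Describe Jim in one sentence.", "Describe Pam in one sentence."])

# Stream the graph's execution node by node; each chunk is a partial state update
for chunk in conversation_graph.stream(ConversationState()):
    print(chunk)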

Conclusion

By leveraging LangGraph and LLMs, we can create personality-driven simulations that mimic real-world interactions. This approach opens up possibilities for developing conversational agents and storytelling applications.

This article is the first section of the tutorial Chatbot Workbench: Developing and Testing Chatbots using LangGraph and Agent-Based Simulation. To explore the full tutorial and access the code, visit Chatbot Workbench on GitHub.
