Concept: Building PromptLab with MCP and LangGraph

Anthropic's Model Context Protocol (MCP) is shaping up to be a foundational standard for connecting AI systems to external tools. It lets the community build tools and plug them into first-party clients such as Cursor or Claude Desktop. Want to extend the functionality of an existing client? Simply connect your MCP server to the ecosystem, and a working system gains new capabilities.

In my previous blog post, I presented the concept of an MLflow MCP Server, which lets your AI system query your model repository using requests expressed in plain English.

The Core Idea

We've all experienced the frustration of getting mediocre responses from AI systems. Often, the issue isn't the AI's capabilities but the quality of our prompts. A well-structured prompt can dramatically improve output quality.

This is where PromptLab comes in. It bridges the gap between casual queries and optimized prompts by automatically enhancing user inputs with appropriate structures, parameters, and guidance.

Imagine this: You're casually asking questions to an AI system, and behind the scenes, PromptLab detects the content type, extracts key parameters, and transforms your simple query into an expertly crafted prompt.

For example, if you ask:

Write a short essay about climate change impacts on coastal cities        

PromptLab transforms it into:

Write a well-structured essay on the impacts of climate change on coastal cities, focusing on the environmental, economic, and social consequences. Your essay should include:
- An engaging introduction that sets the stage and presents your main argument
- 2-3 body paragraphs, each with a clear topic sentence and supporting examples or data
- Smooth transitions between paragraphs to maintain the flow of your argument
- A conclusion that summarizes the key points and offers insights into potential solutions or mitigation strategies

Ensure that your essay is well-researched, analytical, and thought-provoking.        

The difference is dramatic—and so are the results.

The Workflow: MCP + LangGraph

PromptLab combines two powerful frameworks:

  1. MCP for template management
  2. LangGraph for orchestration

The workflow consists of five key stages: classification, enhancement, validation, adjustment, and generation.


Workflow design

When a user submits a query, the system first classifies the content type (essay, email, technical, creative). Then, it connects to our MCP server to retrieve the appropriate template, fills in the extracted parameters, and validates that the enhanced prompt maintains the original intent. If validation passes, it generates the final response; if not, it automatically adjusts the prompt.
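The state that flows through this pipeline and its first node can be sketched as follows. The field names and the keyword heuristic are my assumptions, not the project's actual code; a real classifier would presumably call an LLM, with the heuristic here as a runnable stand-in:

```python
from typing import Optional, TypedDict


class QueryState(TypedDict, total=False):
    # Hypothetical state schema; the article does not show its exact fields.
    user_query: str
    content_type: Optional[str]
    enhanced_prompt: Optional[str]
    validation_passed: Optional[bool]
    response: Optional[str]


def classify_query(state: QueryState) -> dict:
    """Classify the query into one of the four content types.

    A real implementation would call an LLM; this keyword lookup is a
    stand-in so the node is runnable in isolation.
    """
    query = state["user_query"].lower()
    keyword_map = {
        "essay": ["essay"],
        "email": ["email", "mail"],
        "technical": ["code", "api", "technical"],
        "creative": ["story", "poem", "creative"],
    }
    for content_type, keywords in keyword_map.items():
        if any(k in query for k in keywords):
            return {"content_type": content_type}
    return {"content_type": "technical"}  # default bucket for unmatched queries
```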

Building PromptLab: A Step-by-Step Guide

Step 1: Designing the Template System

The heart of PromptLab is its template system. I started by creating a YAML-based configuration that allows users to create and modify prompt templates:

templates:
  essay_prompt:
    description: "Generate an optimized prompt template for writing essays."
    template: |
      Write a well-structured essay on {topic} that includes:
      - A compelling introduction that provides context and states your thesis
      - 2-3 body paragraphs, each with a clear topic sentence and supporting evidence
      - Logical transitions between paragraphs that guide the reader
      - A conclusion that synthesizes your main points and offers final thoughts
      
      The essay should be informative, well-reasoned, and demonstrate critical thinking.
    parameters:
      - name: topic
        type: string
        description: The topic of the essay
        required: true        

This approach allows us to:

  • Externalize prompt structures in a human-readable format that the community can extend with new templates
  • Define the required parameters for each template
  • Update templates without changing code
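Under the hood, the YAML file can be parsed with PyYAML and each template filled in with a small helper. This is a minimal sketch: `apply_template` matches the name used in the server code below, while `load_templates` and the exact YAML handling are assumptions:

```python
def apply_template(template_def: dict, params: dict) -> str:
    """Fill a template's {placeholders}, enforcing required parameters."""
    for p in template_def.get("parameters", []):
        if p.get("required") and p["name"] not in params:
            raise ValueError(f"Missing required parameter: {p['name']}")
    return template_def["template"].format(**params)


def load_templates(path: str) -> dict:
    """Parse the YAML config into a dict of template definitions.

    Uses PyYAML (an assumed dependency); imported lazily so the module
    still works when templates are supplied another way.
    """
    import yaml

    with open(path) as f:
        return yaml.safe_load(f)["templates"]
```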

Step 2: Creating the MCP Server

With our templates defined, the next step was building an MCP server that exposes these templates as callable tools:

from mcp.server.fastmcp import FastMCP

# Initialize MCP server (templates and apply_template come from the
# YAML loading code in Step 1)
mcp = FastMCP(
    name="persona",
    instructions="I provide specialized templates for enhancing prompts based on content type."
)

@mcp.tool()
def essay_prompt(topic: str) -> str:
    """
    Generate an optimized prompt template for writing essays.
    
    Args:
        topic: The topic of the essay
    
    Returns:
        An enhanced prompt template for writing an essay
    """
    if "essay_prompt" in templates:
        return apply_template(templates["essay_prompt"], {"topic": topic})
    else:
        return f"Write a well-structured essay about {topic}."        

The MCP server loads templates from our YAML file and registers each as a callable tool. This creates a clean separation between template management and the server implementation.

Step 3: Building the Processing Pipeline with LangGraph

With the MCP server providing templates, I needed a workflow to process user queries. I chose LangGraph for its powerful state management and conditional branching capabilities:

# Create the graph
workflow = StateGraph(QueryState)

# Add nodes
workflow.add_node("classify", classify_query)
workflow.add_node("enhance", enhance_query)
workflow.add_node("validate", validate_query)
workflow.add_node("adjust", adjust_query)
workflow.add_node("generate", generate_response)

# Define edges
workflow.add_edge(START, "classify")
workflow.add_edge("classify", "enhance")
workflow.add_edge("enhance", "validate")
workflow.add_conditional_edges(
    "validate",
    route_based_on_validation,
    {
        "adjust": "adjust",
        "generate": "generate"
    }
)        
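The conditional edge needs a routing function. A minimal sketch, assuming the validation node records its verdict in a `validation_passed` field; the remaining wiring shown in comments is also an assumption:

```python
def route_based_on_validation(state: dict) -> str:
    """Return the name of the next node based on the validation verdict."""
    return "generate" if state.get("validation_passed") else "adjust"


# Assumed remaining wiring: adjusted prompts are re-validated, and
# generation ends the run.
#   workflow.add_edge("adjust", "validate")
#   workflow.add_edge("generate", END)
#   app = workflow.compile()
#   result = app.invoke({"user_query": "Write a short essay ..."})
```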

The workflow consists of five key steps:

  1. Classification: Determines what type of content the user is requesting (essay, email, technical, creative)
  2. Enhancement: Calls the appropriate template from the MCP server
  3. Validation: Ensures the enhanced prompt maintains the original intent
  4. Adjustment: Refines the prompt if validation fails
  5. Generation: Creates the final response using the optimized prompt
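The validation step can be approximated even without an LLM judge. The sketch below checks that the enhanced prompt still contains the key terms of the original query; the field names and the token-overlap heuristic are assumptions, standing in for whatever intent check the real node performs:

```python
def validate_query(state: dict) -> dict:
    """Check that the enhanced prompt preserves the original query's key terms.

    A real implementation would ask an LLM to judge intent; this
    token-overlap heuristic is a runnable stand-in.
    """
    stopwords = {"a", "an", "the", "about", "on", "write", "short", "to", "my"}
    key_terms = {w for w in state["user_query"].lower().split() if w not in stopwords}
    prompt = state["enhanced_prompt"].lower()
    missing = [t for t in key_terms if t not in prompt]
    return {"validation_passed": not missing}
```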

Step 4: Running the System

With all components in place, using the system is straightforward:

Start the MCP server:

python promptlab_server.py        

Send a natural-language query using the client:

python promptlab_client.py "Write a short essay on climate change"
Trace of enhanced query generation and final result

What's Next

As AI becomes more integrated into our workflows, the quality of our prompts becomes increasingly important. By separating templates from code and using a standardized protocol for connections, we can create more maintainable, extensible AI systems that produce better results for all users—regardless of their prompt engineering expertise. Future enhancements could include:

  • Adding more specialized templates for different domains
  • Adding support for multi-modal prompts
  • Implementing a feedback loop to improve templates based on user ratings

GitHub repository

Feel free to extend it.

Update: I have made some changes to the project to make it more scalable. More details here: https://www.dhirubhai.net/pulse/promptlab-latest-updates-rahul-pandey-iedwf
