Geek Out Time: Model Context Protocol (MCP) and the Future of AI Tooling - Upgrading Previously-Built Multi-Agent Financial Advisor Copilot
(Also on Constellar tech blog: https://medium.com/the-constellar-digital-technology-blog/geek-out-time-upgrade-previously-built-ag2-financial-copilot-powered-by-deepseek-into-an-d6fb43829266)
In my previous blog (https://medium.com/the-constellar-digital-technology-blog/geek-out-time-building-a-multi-agent-financial-advisor-copilot-with-ag2-formerly-autogen-98acd2e4b54c), I explored how AutoGen 2 (AG2) allows developers to move beyond LLM chat and into real-world automation. I created a Financial Advisor Copilot capable of sending birthday gifts, scheduling meetings, and submitting claims using specialized AI agents (with dummy functions standing in for the real integrations).
But what if those agents could be called like services? What if you could treat each model as a compute node, accepting structured input, operating in context, and returning clean, actionable results?
That’s where Model Context Protocol (MCP) comes in.
And in this geek-out, I am going to build exactly that inside a single Google Colab notebook: a live, programmable AI backend using FastAPI.
What Is Model Context Protocol (MCP)?
Model Context Protocol (MCP) is an emerging design pattern — and potentially a future standard — for treating AI models like services with structured APIs.
As a16z describes in their deep dive, MCP takes inspiration from the Language Server Protocol (LSP). It standardizes the way models receive inputs, understand context, and execute tasks. This allows models to function not just as assistants but as programmable agents with real capabilities.
MCP payloads look like this:
{
  "model": "engagement-agent",
  "task": "send_birthday_gifts",
  "input": {
    "client_ids": ["123", "456"]
  },
  "context": {
    "requester": "admin_bot",
    "priority": "high",
    "timestamp": "2025-03-22T10:00:00Z"
  }
}
Each component is explicit: what to do, who’s doing it, and under what circumstances.
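MCP here is a pattern, not a ratified spec, so there is no official schema to import. If you want the contract enforced at the API edge, a small Pydantic model (FastAPI already ships with Pydantic) is one way to do it. This is just a sketch of my own that mirrors the payload above:

from typing import Any, Dict, Optional
from pydantic import BaseModel

class MCPContext(BaseModel):
    requester: str
    priority: str = "normal"
    timestamp: Optional[str] = None

class MCPRequest(BaseModel):
    model: str             # which agent should handle the task
    task: str              # e.g. "send_birthday_gifts"
    input: Dict[str, Any]  # task-specific arguments
    context: MCPContext    # who is asking, and under what circumstances

# Validate the example payload (raises a ValidationError if malformed)
req = MCPRequest(
    model="engagement-agent",
    task="send_birthday_gifts",
    input={"client_ids": ["123", "456"]},
    context={"requester": "admin_bot", "priority": "high",
             "timestamp": "2025-03-22T10:00:00Z"},
)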
Step-by-Step: Building an MCP API with AG2 in Google Colab
We’ll wrap our Copilot in a REST API using FastAPI and run the whole thing inside a Colab notebook — no deployment required.
1. Install Dependencies
!pip install fastapi uvicorn nest-asyncio autogen requests
2. Patch Colab’s Async Loop
import nest_asyncio
nest_asyncio.apply()
3. Set API Key
import os
os.environ["DEEPSEEK_API_KEY"] = "your_deepseek_api_key"
4. Define Agents
We create four AG2 agents: a UserProxy that executes tool calls, an EngagementAgent for client outreach, an OperationAgent for scheduling and paperwork, and a FinancialAdvisor for advice. Each LLM-backed agent uses DeepSeek’s API.
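For example, the EngagementAgent is a ConversableAgent pointed at DeepSeek’s OpenAI-compatible endpoint (the full definitions of all four agents are in the complete code at the end of this post):

from autogen import ConversableAgent
import os

engagement_agent = ConversableAgent(
    name="EngagementAgent",
    system_message="You assist with client engagement tasks like sending birthday gifts and renewal reminders.",
    llm_config={
        "config_list": [{
            "model": "deepseek-chat",
            "api_key": os.environ["DEEPSEEK_API_KEY"],
            "base_url": "https://api.deepseek.com/v1",
        }]
    },
)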
5. Define Agent Functions Using model_context
@user_proxy.register_for_execution()
@engagement_agent.register_for_llm(description="Send birthday gifts")
def send_birthday_gifts(model_context=None):
    client_ids = model_context.get("client_ids", [])
    return {
        "status": "success",
        "results": [f"Sent gift to client {cid}" for cid in client_ids]
    }
Similar functions are defined for renewals, meetings, and claims.
6. Build a Router
class MCPRouterAgent:
    def route(self, model_context):
        return {
            "engagement-agent": engagement_agent,
            "operation-agent": operation_agent,
            "fa-agent": fa_agent,
        }.get(model_context.get("model"))
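A quick sanity check, assuming the agents from step 4 are already defined:

# Route the example payload to its agent
payload = {"model": "engagement-agent", "task": "send_birthday_gifts"}
agent = MCPRouterAgent().route(payload)
print(agent.name)  # -> EngagementAgent

Note that this condensed router returns None for an unknown model key; the full version at the end of the post raises a ValueError instead.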
7. Set Up Group Chat and Manager
groupchat = GroupChat(agents=[user_proxy, fa_agent, engagement_agent, operation_agent], messages=[], max_round=10)
manager = GroupChatManager(groupchat=groupchat, llm_config={...})
8. Start FastAPI App in Colab
from fastapi import FastAPI, Request
import uvicorn
import threading

app = FastAPI()
mcp_router = MCPRouterAgent()

@app.post("/mcp")
async def handle_mcp(request: Request):
    model_context = await request.json()
    selected_agent = mcp_router.route(model_context)
    user_proxy.initiate_chat(manager, message=str(model_context), max_turns=5)
    return {"status": "processing", "agent": selected_agent.name}

def run_server():
    uvicorn.run(app, host="0.0.0.0", port=8001)

threading.Thread(target=run_server).start()
9. Test the API Inside Colab
import time, requests

time.sleep(2)  # give the server a moment to start

response = requests.post("http://127.0.0.1:8001/mcp", json={
    "model": "engagement-agent",
    "task": "send_birthday_gifts",
    "input": {"client_ids": ["123", "456"]},
    "context": {
        "requester": "admin_bot",
        "priority": "high",
        "timestamp": "2025-03-22T10:00:00Z"
    }
})
print("Response:", response.json())
Output: Live API Interaction
Here’s what actually happened when we POSTed that payload:
UserProxy (to chat_manager):
{ "model": "engagement-agent", "task": "send_birthday_gifts", ... }
Next speaker: EngagementAgent
Suggested tool call: send_birthday_gifts
Executing function...

Response:
{
  "status": "success",
  "results": [
    "Sent gift to client 123",
    "Sent gift to client 456"
  ]
}
Just like that, your AG2 agent has become a live service endpoint.
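The other routes work the same way. Here is the same kind of call against the operation-agent (the client name is just a made-up example):

response = requests.post("http://127.0.0.1:8001/mcp", json={
    "model": "operation-agent",
    "task": "schedule_client_meeting",
    "input": {"client_name": "Alice Tan"},
    "context": {"requester": "admin_bot", "priority": "normal",
                "timestamp": "2025-03-22T10:05:00Z"}
})
print(response.json())  # {'status': 'processing', 'agent': 'OperationAgent'}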
Full code:
# Step 1: Install Dependencies
!pip install fastapi uvicorn nest-asyncio autogen requests
# Step 2: Import Required Modules
import nest_asyncio
import uvicorn
from fastapi import FastAPI, Request
from autogen import ConversableAgent, GroupChat, GroupChatManager, UserProxyAgent
import requests
# Patch the event loop for Colab
nest_asyncio.apply()
# Step 3: Set DeepSeek API Key
import os
os.environ["DEEPSEEK_API_KEY"] = "xxxxxx" # Replace with your actual DeepSeek API key
# Step 4: Define Your AG2 Agents
# User Proxy Agent
user_proxy = UserProxyAgent(
    name="UserProxy",
    human_input_mode="NEVER",  # Automate without human input
    code_execution_config={"work_dir": "coding"},
)
# Engagement Agent
engagement_agent = ConversableAgent(
    name="EngagementAgent",
    system_message="""You assist with client engagement tasks like sending birthday gifts and renewal reminders.
You can:
1. Send birthday gifts to clients with upcoming birthdays
2. Send policy renewal reminders to clients
Be warm, personable, and focused on strengthening client relationships.""",
    llm_config={
        "config_list": [
            {
                "model": "deepseek-chat",  # Replace with the correct DeepSeek model name
                "api_key": os.environ["DEEPSEEK_API_KEY"],  # DeepSeek API key
                "base_url": "https://api.deepseek.com/v1",  # Replace with the correct DeepSeek API endpoint
            }
        ]
    },
)
# Operation Agent
operation_agent = ConversableAgent(
    name="OperationAgent",
    system_message="""You assist with operational tasks like scheduling meetings and handling paperwork.
You can:
1. Schedule client meetings
2. Submit claim forms
Be efficient and provide clear confirmations when tasks are completed.""",
    llm_config={
        "config_list": [
            {
                "model": "deepseek-chat",  # Replace with the correct DeepSeek model name
                "api_key": os.environ["DEEPSEEK_API_KEY"],  # DeepSeek API key
                "base_url": "https://api.deepseek.com/v1",  # Replace with the correct DeepSeek API endpoint
            }
        ]
    },
)
# Financial Advisor Agent
fa_agent = ConversableAgent(
    name="FinancialAdvisor",
    system_message="""You provide financial advice and insights to clients.
Be professional, accurate, and client-focused.""",
    llm_config={
        "config_list": [
            {
                "model": "deepseek-chat",  # Replace with the correct DeepSeek model name
                "api_key": os.environ["DEEPSEEK_API_KEY"],  # DeepSeek API key
                "base_url": "https://api.deepseek.com/v1",  # Replace with the correct DeepSeek API endpoint
            }
        ]
    },
)
# Step 5: Define Task Functions
# Engagement Agent Functions
@user_proxy.register_for_execution()
@engagement_agent.register_for_llm(description="Send birthday gifts to clients")
def send_birthday_gifts(model_context: dict = None) -> dict:
    # Handle missing or invalid model_context
    if model_context is None:
        model_context = {}
    # Extract input data
    client_ids = model_context.get("client_ids", [])  # Directly access client_ids from model_context
    # Simulate sending gifts
    results = []
    for client_id in client_ids:
        results.append(f"Sent gift to client {client_id}")
    return {"status": "success", "results": results}
@user_proxy.register_for_execution()
@engagement_agent.register_for_llm(description="Send renewal reminders to clients")
def send_renewal_reminders(model_context: dict = None) -> dict:
    # Handle missing or invalid model_context
    if model_context is None:
        model_context = {}
    # Extract input data
    client_ids = model_context.get("client_ids", [])
    # Simulate sending reminders
    results = []
    for client_id in client_ids:
        results.append(f"Sent renewal reminder to client {client_id}")
    return {"status": "success", "results": results}
# Operation Agent Functions
@user_proxy.register_for_execution()
@operation_agent.register_for_llm(description="Schedule a client meeting")
def schedule_client_meeting(model_context: dict = None) -> dict:
    # Handle missing or invalid model_context
    if model_context is None:
        model_context = {}
    # Extract input data
    client_name = model_context.get("client_name", "Unknown Client")
    # Simulate scheduling
    return {"status": "success", "message": f"Meeting scheduled with {client_name}"}
@user_proxy.register_for_execution()
@operation_agent.register_for_llm(description="Submit a claim form")
def submit_claim_form(model_context: dict = None) -> dict:
    # Handle missing or invalid model_context
    if model_context is None:
        model_context = {}
    # Extract input data
    client_name = model_context.get("client_name", "Unknown Client")
    claim_type = model_context.get("claim_type", "General")
    # Simulate submitting a claim
    return {"status": "success", "message": f"{claim_type} claim submitted for {client_name}"}
# Step 6: Define the MCP Router Agent
class MCPRouterAgent:
    def route(self, model_context):
        model = model_context.get("model")
        if model == "engagement-agent":
            return engagement_agent
        elif model == "operation-agent":
            return operation_agent
        elif model == "fa-agent":
            return fa_agent
        else:
            raise ValueError("Unknown model/agent")
# Step 7: Set Up Group Chat and Manager
groupchat = GroupChat(
    agents=[user_proxy, fa_agent, engagement_agent, operation_agent],
    messages=[],
    max_round=10,
)
manager = GroupChatManager(
    groupchat=groupchat,
    llm_config={
        "config_list": [
            {
                "model": "deepseek-chat",  # Replace with the correct DeepSeek model name
                "api_key": os.environ["DEEPSEEK_API_KEY"],  # DeepSeek API key
                "base_url": "https://api.deepseek.com/v1",  # Replace with the correct DeepSeek API endpoint
            }
        ]
    },
)
# Step 8: Start FastAPI Server
app = FastAPI()
mcp_router = MCPRouterAgent()
@app.post("/mcp")
async def handle_mcp(request: Request):
    model_context = await request.json()
    selected_agent = mcp_router.route(model_context)
    user_proxy.initiate_chat(
        manager,
        message=str(model_context),
        max_turns=5,
    )
    return {"status": "processing", "agent": selected_agent.name}
# Step 9: Run the FastAPI Server in the Background
import threading
def run_server():
    uvicorn.run(app, host="0.0.0.0", port=8001)

# Start the server in a separate thread
server_thread = threading.Thread(target=run_server)
server_thread.start()
# Step 10: Test the API Locally
import time
# Wait for the server to start
time.sleep(2)
# Local server URL
url = "http://127.0.0.1:8001/mcp"
# Payload
payload = {
    "model": "engagement-agent",
    "task": "send_birthday_gifts",
    "input": {
        "client_ids": ["123", "456"]
    },
    "context": {
        "requester": "admin_bot",
        "priority": "high",
        "timestamp": "2025-03-22T10:00:00Z"
    }
}
# Send POST request
response = requests.post(url, json=payload)
# Print response
print("Status Code:", response.status_code)
print("Response JSON:", response.json())
What Changed from the Previous Version?
| Feature | AG2 Copilot v1 | MCP API in Colab |
| --- | --- | --- |
| Input Method | CLI prompts | JSON via HTTP POST |
| Routing | Chat-based flow | Router via model key |
| Context | Embedded in prompt | Structured context field |
| Execution | Inline chat commands | API-triggered tool calls |
| Integration | Manual | Programmatic and composable |
| Deployment | Local only | API-ready, testable, portable |
We have evolved from an LLM “copilot” to a modular, programmable agent service.
Summary
This is what the future of AI infrastructure looks like — agents that talk, think, and act, all within interoperable systems.
Model Context Protocol isn’t just a spec; it’s a pattern you can implement today. AG2 makes it possible. Google Colab makes it accessible.
Next steps? Add authentication, plug into CRMs, or move to Hugging Face Spaces for live demos. The future is programmable.
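As a taste of that first step, here is a minimal API-key guard using a FastAPI dependency; the header name and key store are placeholders of my own, not part of the code above:

from fastapi import Depends, Header, HTTPException

VALID_API_KEYS = {"demo-key-123"}  # placeholder; load from a real secret store

async def require_api_key(x_api_key: str = Header(...)):
    # FastAPI maps this parameter to the X-Api-Key request header
    if x_api_key not in VALID_API_KEYS:
        raise HTTPException(status_code=401, detail="Invalid API key")

# Then protect the endpoint by adding the dependency to its decorator:
# @app.post("/mcp", dependencies=[Depends(require_api_key)])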
Happy coding!