Understanding Agentic AI Frameworks: LangGraph and CrewAI
In the fast-changing world of artificial intelligence, agents are emerging as essential components, capable of autonomously performing tasks and making decisions using advanced machine learning models. Central to their operation is the orchestrator, typically a large language model (LLM). This LLM serves as the cognitive core, empowering agents with capabilities such as reasoning, decision-making, reflection, and adaptability, in addition to orchestration.
Among the various open-source frameworks designed to facilitate agent orchestration via LLMs, two notable examples are CrewAI and LangGraph.
CrewAI: Role-Based Simplicity
CrewAI specializes in enabling AI agents to operate as a cohesive unit, assuming distinct roles and sharing goals. The object-oriented encapsulation of agents, tasks, tools, roles, processes, etc. is very well defined in CrewAI. It offers a straightforward, developer-friendly approach to orchestrating workflows.
Key Features:
- Role-based agents defined by a role, goal, backstory, and tools
- Clean object-oriented abstractions for agents, tasks, tools, and processes
- Task execution managed by a crew, with sequential and hierarchical processes
- A simple, developer-friendly API for orchestrating multi-agent workflows
Below is CrewAI code for a research agent and a writer agent; the main goal is to research a topic on the internet and write a technical blog post.
Example Code in CrewAI [more examples here]:
import json

from crewai import Agent, Crew, Task
from crewai_tools import SerperDevTool

# Define the research agent with specific attributes
research_agent = Agent(
    role="Research Analyst",  # The role of the agent
    goal="Find and summarize information about specific topics",  # The goal of the agent
    backstory="You are an experienced researcher with attention to detail",  # Background information for context
    tools=[SerperDevTool()]  # Tools available for the agent to use (e.g., search engines)
)

# Define the content writer agent with specific attributes
content_writer_agent = Agent(
    role="Article Writer",  # The role of the agent
    goal="Write a structured and informative technical blog from the text provided",  # The goal of the agent
    backstory="You are a skilled technical writer",  # Background information (required by Agent)
    verbose=True,  # Enable detailed output for debugging purposes
    memory=True  # Enable memory to maintain context across tasks
)

# Define a research task for the research agent
research = Task(
    description="""
        Extract key insights, ideas, and information from AI topics
        related to technology and self-improvement.
    """,  # Task description detailing what the agent needs to do
    expected_output="""
        A concise report on AI and technology, containing key insights
        and recommendations in bullet points.
    """,  # The format and structure of the expected output
    agent=research_agent,  # Assign this task to the research agent
    output_file="researcher_tasks.md"  # The file where the task output will be saved
)

# Define a blog writing task for the content writer agent
write_blog = Task(
    description="""
        Write an engaging blog post based on the research on AI advancements.
    """,  # Task description for the content writer
    expected_output="""
        A full blog post of around 500 words with citations from all the URLs.
    """,  # Expected output format and content
    agent=content_writer_agent,  # Assign this task to the content writer agent
    output_file="writer_tasks.md"  # The file where the blog post will be saved
)

# Create a crew that will manage agents and tasks
crew = Crew(
    agents=[research_agent, content_writer_agent],  # List of agents working in the crew
    tasks=[research, write_blog],  # List of tasks to be executed by the crew
    verbose=True  # Enable detailed logging for crew activities
)

# Begin execution of all tasks in the crew
crew_output = crew.kickoff()

# Print the raw output from the crew tasks
print(f"Raw Output: {crew_output.raw}")
# If the output is available in JSON format, print it in a readable way
if crew_output.json_dict:
    print(f"JSON Output: {json.dumps(crew_output.json_dict, indent=2)}")
# If the output is available as a Pydantic model, print it
if crew_output.pydantic:
    print(f"Pydantic Output: {crew_output.pydantic}")
This simplicity makes CrewAI accessible and user-friendly, especially for small multi-agent systems. The only caveat is that the entire dialog or system workflow depends on the decisions of the LLM; there is no way to introduce determinism.
LangGraph: Graph-Based Customization
LangGraph takes a fundamentally different approach, representing agent interactions as a graph. This enables the modeling of complex workflows, including cyclical processes and multi-layered feedback loops, making it highly suitable for research and experimental projects that require extensive customization. It is also similar to how dialog management systems have historically been built with finite state machines, for example in IVR systems or enterprise workflow processes.
Key Features:
- Workflows modeled as graphs of nodes and edges over an explicit shared state
- Support for cycles, branching, and multi-layered feedback loops
- Conditional routing for deterministic control over agent hand-offs
- Fine-grained customization suited to research and experimental projects
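The example code below relies on a shared `AgentState` and a `router` function that the snippet itself does not define. Here is a minimal, stdlib-only sketch of what they might look like; the field names and routing criterion are illustrative assumptions, not LangGraph APIs:

```python
from typing import List, TypedDict

# Illustrative shared state passed between graph nodes (field names are assumptions).
class AgentState(TypedDict):
    messages: List[str]  # running conversation / intermediate outputs

# Illustrative router: inspects the state and returns an edge label.
def router(state: AgentState) -> str:
    # If the last message requests a tool call, branch to the tool node;
    # otherwise hand off to the writer agent.
    if state["messages"] and "TOOL:" in state["messages"][-1]:
        return "call_tool"
    return "continue"
```

Whatever the router returns must match a key in the mapping passed to `add_conditional_edges`, which is how the graph decides which edge to follow.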
Example Code in LangGraph [reference]:
from langgraph.graph import END, START, StateGraph

# Build the workflow graph over the shared agent state
workflow = StateGraph(AgentState)

# Add nodes to the workflow graph
workflow.add_node("research_agent", research_node)  # The researcher node
workflow.add_node("content_writer_agent", writer_node)  # The writer agent node
workflow.add_node("call_tool", tool_node)  # The tool-execution node

# Add conditional edges to determine the workflow path based on the router's output
workflow.add_conditional_edges(
    "research_agent",  # From the researcher node
    router,  # Router function that determines the next step
    {
        "continue": "content_writer_agent",  # Hand off to the writer agent
        "call_tool": "call_tool"  # Execute a tool first
    }
)

# Add direct edges to the workflow graph
workflow.add_edge("call_tool", "research_agent")  # Return tool results to the researcher
workflow.add_edge("content_writer_agent", END)  # Mark the writer agent node as the endpoint
workflow.add_edge(START, "research_agent")  # Set the starting point to the researcher node

# Compile the workflow graph to make it executable
graph = workflow.compile()
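Conceptually, executing the compiled graph behaves like the finite state machines mentioned earlier: run the current node, consult the router at conditional edges, and follow the matching edge until the end marker is reached. A stdlib-only sketch of that loop (the node functions and names here are stand-ins for illustration, not LangGraph itself):

```python
# Minimal state-machine loop mimicking how a compiled graph is traversed.
# All node functions and names are illustrative placeholders.

def research_node(state):
    state["messages"].append("research notes")
    return state

def writer_node(state):
    state["messages"].append("draft blog post")
    return state

def router(state):
    # Route to the writer once research notes exist.
    return "continue" if "research notes" in state["messages"] else "call_tool"

END = "__end__"
nodes = {"research_agent": research_node, "content_writer_agent": writer_node}
conditional = {"continue": "content_writer_agent", "call_tool": "call_tool"}
direct_edges = {"content_writer_agent": END}

def run(state, start="research_agent"):
    current = start
    while current != END:
        state = nodes[current](state)
        if current == "research_agent":  # conditional edge: ask the router
            current = conditional[router(state)]
        else:  # direct edge: follow it unconditionally
            current = direct_edges[current]
    return state

final = run({"messages": []})
```

The loop makes the determinism argument concrete: every transition is either a fixed edge or a router decision you wrote yourself.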
Choosing the Right Framework
Use LangGraph When:
- The workflow is complex or cyclical, or parts of it need explicit, deterministic control flow
- You need fine-grained customization, as in research and experimental projects
Use CrewAI When:
- A simple role-based team of agents fits the problem
- You want a quick, developer-friendly setup without designing a graph
Attached is also a good example combining CrewAI and LangGraph, where part of the workflow is orchestrated via the nodes/graphs available in LangGraph to make it deterministic, and part is agentic via the CrewAI APIs.
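One way to picture this hybrid pattern: deterministic steps run as plain graph-node functions, and one node delegates to a crew when open-ended, agentic work is needed. The sketch below uses a stdlib stand-in for the crew (`FakeCrew` and all other names are illustrative assumptions, not CrewAI or LangGraph APIs; in practice the agentic node would call a real crew's `kickoff()`):

```python
# Illustrative hybrid workflow: deterministic nodes as plain functions,
# one "agentic" node delegating to a crew-like object.

class FakeCrew:
    """Stand-in for a CrewAI Crew; returns a canned result."""
    def kickoff(self, inputs):
        # A real crew would run its agents here.
        return f"blog post about {inputs['topic']}"

def validate_input(state):
    # Deterministic node: normalize the topic before any LLM is involved.
    state["topic"] = state["topic"].strip().lower()
    return state

def agentic_node(state, crew):
    # Agentic node: hand control to the crew for the open-ended work.
    state["result"] = crew.kickoff({"topic": state["topic"]})
    return state

state = {"topic": "  Agentic AI  "}
state = validate_input(state)            # deterministic part of the workflow
state = agentic_node(state, FakeCrew())  # agentic part of the workflow
```

The deterministic node guarantees its invariants on every run; only the delegated step inherits the LLM's variability.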
Importance of Real-World Testing
Ultimately, the true measure of a system lies in its ability to solve real-world use cases and address customer challenges effectively. It is therefore essential to rigorously test the accuracy and performance of both frameworks in practical, real-world scenarios. Benchmarking them under diverse conditions and use cases uncovers their strengths, reveals areas for improvement, and shows their true potential to deliver value. Such testing is not just a step but a cornerstone of building reliable and impactful solutions.
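A simple starting point for such a comparison is a harness that runs each framework's pipeline on the same test cases and records latency plus a task-level pass/fail. A minimal stdlib sketch (the two placeholder pipelines and the exact-match scoring criterion are assumptions; in practice each would invoke a compiled LangGraph graph or a CrewAI crew, with a domain-appropriate quality check):

```python
import time

# Placeholder "pipelines" standing in for the two frameworks under test.
def pipeline_a(case):
    return case["input"].upper()

def pipeline_b(case):
    return case["input"]

def benchmark(pipeline, cases):
    """Run a pipeline over shared test cases; record pass count and wall time."""
    results = {"passed": 0, "total": len(cases), "seconds": 0.0}
    for case in cases:
        start = time.perf_counter()
        output = pipeline(case)
        results["seconds"] += time.perf_counter() - start
        if output == case["expected"]:  # task-level pass/fail check
            results["passed"] += 1
    return results

cases = [{"input": "hello", "expected": "HELLO"}]
report_a = benchmark(pipeline_a, cases)
report_b = benchmark(pipeline_b, cases)
```

Because both pipelines see identical cases, the resulting reports are directly comparable across frameworks.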