New AutoGen Release - v0.4.4: Serializable Agent Configuration, Support for Azure Hosted Models and More

We just released v0.4.4 of the AutoGen library with several important developer experience improvements:

Full release announcement here.

ICYMI - we showed how to replicate the capabilities of the OpenAI Operator agent in under 50 lines of code with AutoGen! Code here.


Serializable Configuration and State Management

You can now serialize and deserialize both configurations and states of agents and teams. This enables:

  • Saving agent configurations as JSON for version control and deployment
  • Maintaining persistent chat sessions across server restarts
  • Managing state in distributed applications
  • Loading pre-configured agent teams, improving reproducibility

The feature works with both individual agents and team configurations, supporting all standard AutoGen components, including RoundRobinGroupChat.

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Create a two-agent team: an assistant and a critic.
model_client = OpenAIChatCompletionClient(model="gpt-4o")
assistant = AssistantAgent(
    "assistant",
    model_client=model_client,
    system_message="You are a helpful assistant.",
)
critic = AssistantAgent(
    "critic",
    model_client=model_client,
    system_message="Provide feedback. Reply with 'APPROVE' if the feedback has been addressed.",
)
# Stop the conversation once the critic replies with APPROVE.
termination = TextMentionTermination("APPROVE", sources=["critic"])
group_chat = RoundRobinGroupChat(
    [assistant, critic], termination_condition=termination
)
# Run the group chat (not shown here).
# Dump the team configuration, e.g. to save as a JSON file.
config = group_chat.dump_component()
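
As a rough end-to-end sketch (assuming the v0.4.x load_component, save_state, and load_state APIs, and that dump_component returns a pydantic ComponentModel), the configuration and conversation state can be persisted and restored like this, inside an async context:

import json

# Save the team configuration as JSON, e.g. for version control or deployment.
with open("team_config.json", "w") as f:
    f.write(config.model_dump_json())

# Later (e.g. after a server restart), rebuild the team from the configuration.
restored_team = RoundRobinGroupChat.load_component(config)

# State works the same way: save after a run, load to resume the chat session.
state = await group_chat.save_state()  # JSON-serializable mapping
with open("team_state.json", "w") as f:
    json.dump(state, f)
await restored_team.load_state(state)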

I am excited to see how people use this feature!


New Client for Azure AI Hosted Models

The new Azure AI client provides direct access to Azure and GitHub-hosted models, including:

  • Phi-4
  • Mistral models
  • Cohere models

Implementation requires only an Azure endpoint and a GitHub token for authentication. The client handles model-specific configurations and API interactions automatically. Note that we already support Azure OpenAI models; this expands support to the broader set of bring-your-own-model (BYOM) models hosted on Azure.

import os

from autogen_ext.models.azure import AzureAIChatCompletionClient
from azure.core.credentials import AzureKeyCredential

client = AzureAIChatCompletionClient(
    model="Phi-4",
    endpoint="https://models.inference.ai.azure.com",
    # To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
    # Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
    # Describe the model's capabilities so AutoGen knows what it supports.
    model_info={
        "json_output": False,
        "function_calling": False,
        "vision": False,
        "family": "unknown",
    },
)
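
For a quick sanity check, here is a minimal usage sketch (assuming an async context; UserMessage comes from autogen_core.models):

from autogen_core.models import UserMessage

# Send a single user message to the hosted model and print the reply.
result = await client.create(
    [UserMessage(content="What is the capital of France?", source="user")]
)
print(result.content)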


Improved Model Client Caching

A new default in-memory cache is now available for ChatCompletionClient. Key features:

  • No external cache service required
  • Automatic caching of repeated queries
  • Cache state persistence across sessions
  • Simple implementation with existing model clients

from autogen_core.models import UserMessage
from autogen_ext.models.cache import ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Create a model client.
client = OpenAIChatCompletionClient(model="gpt-4o")

# Create a cached wrapper around the model client (defaults to an in-memory store).
cached_client = ChatCompletionCache(client)

# Call the cached client.
result = await cached_client.create(
    [UserMessage(content="What is the capital of France?", source="user")]
)
print(result.content, result.cached)
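
Calling the cached client again with the same messages should be served from the in-memory cache, which you can confirm via the cached flag on the result (a small sketch following the example above):

# Repeat the same request: the response now comes from the cache, not the API.
cached_result = await cached_client.create(
    [UserMessage(content="What is the capital of France?", source="user")]
)
print(cached_result.cached)  # Expected: True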


Other Updates

  • Improved Console for Magentic-One
  • Documentation updates
  • Bug fixes


Full release details here - https://github.com/microsoft/autogen/releases/tag/v0.4.4


