Agent of Agents
I've written about the importance of various AI architectures of the near future; today I want to focus on the agent of agents. As Google Agentspace and tools like LangGraph continue to evolve, we now have agents, agents using tools, and agents performing actions. Using a tool is equivalent to a programmer calling a program or a function to manipulate and change data, while actions allow agents to impact the world around them via integration calls.
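As a rough illustration of that split (the function names here are hypothetical, not from any particular framework): a tool transforms data and hands it back, while an action reaches out and changes something outside the agent.
import requests
from PIL import Image

# A "tool": pure data manipulation, nothing outside the process is touched.
def resize_image(image: Image.Image, width: int, height: int) -> Image.Image:
    return image.resize((width, height))

# An "action": an integration call that affects the outside world.
def publish_image(image_bytes: bytes, endpoint: str) -> int:
    response = requests.post(endpoint, data=image_bytes)
    return response.status_code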
Trying to build a single agent to do too many tasks is the architectural equivalent of a monolith. To avoid the monolithic approach, you can create an agent that knows about, and can forward information to, other agent specialists. This is similar to the message router pattern, where the router forwards requests to a downstream agent that handles the details of the request.
User -> (Agent of Agents [router / supervisor]) -> Agent Specialist.
Another visual analog is thinking of the switchboard operators of old who would manually connect a caller to their destination.
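Here is a minimal sketch of that switchboard in plain Python, before any framework enters the picture; the specialists and the explicit intent key are made up for illustration, and in practice an LLM would do the classification rather than a hard-coded field.
# Hypothetical specialists, keyed by the kind of work they handle.
def handle_resize(request: dict) -> dict:
    return {"result": f"resized to {request['size']}"}

def handle_inpaint(request: dict) -> dict:
    return {"result": f"inpainted: {request['prompt']}"}

SPECIALISTS = {"resize": handle_resize, "inpaint": handle_inpaint}

# The router / supervisor: work out what the caller wants, then connect them.
def route(request: dict) -> dict:
    specialist = SPECIALISTS.get(request["intent"])
    if specialist is None:
        return {"result": "no specialist available for this request"}
    return specialist(request)

route({"intent": "resize", "size": "1024x1024"})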
This is important because it allows the developer to modularize their stack, and here a little thinking can go a long way.
/project root
    /router-agent
    /sub-agent-01
    /sub-agent-02
        agent.py
        /tools
        /actions
    agent.py
The top-level agent is where the router and sub-agents are stitched together using LangGraph.
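To give a feel for the stitching, the top-level agent.py might simply import the node definitions each package exposes before building the graph. This is only a sketch: the module names (underscored so they are importable) and the exported symbols are assumptions about the layout above, not a prescribed API.
# project root agent.py -- hypothetical imports, assuming each folder is a Python package
from router_agent.agent import router_def, should_continue
from sub_agent_01.agent import sub_agent_01_def
from sub_agent_02.agent import sub_agent_02_def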
To make this concrete, let's think about image generation and manipulation as a set of agents:
from langgraph.graph import StateGraph

# Create the graph and the router (AgentState and the node definitions live elsewhere)
graph = StateGraph(AgentState)
graph.add_node("router", router_def)
# Add the sub-agents
graph.add_node("manipulator", manipulator_def)  # Things like resize, recolor, etc.
graph.add_node("in_painter", inpainter_def)  # Tools and actions to call Imagen 3
# Wire the graph together
graph.set_entry_point("router")
graph.add_conditional_edges("router", should_continue)  # Router picks the next sub-agent, or ends
graph.add_edge("manipulator", "router")  # Sub-agents hand control back to the router
graph.add_edge("in_painter", "router")
....
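The snippet leaves two pieces implied: the routing function used by add_conditional_edges and compiling the graph. A minimal sketch of both follows; the state key next_step and the routing rule are assumptions about how the router records its decision, not the only way to wire this up.
from langgraph.graph import END

# Routing rule for add_conditional_edges: read the decision the router wrote
# into state and return the name of the next node, or END to stop.
def should_continue(state: dict) -> str:
    next_step = state.get("next_step", "end")
    if next_step == "manipulate":
        return "manipulator"
    if next_step == "inpaint":
        return "in_painter"
    return END

# Compile once; the resulting app behaves like any other runnable.
app = graph.compile()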
Here we can see the creation of a general-purpose agent for handling all of my personal image requests.
The goal is an interactive workflow that looks something like this:
> User: uploads an image and asks: Can you resize this image to 1024 x 1024?
> System: Absolutely, here you go <displays new image>
> User: great, can you correct the color? I'd like it to be a little darker and the reds to stand out a bit more.
> System: Here are X to choose from, which do you like?
...
> User: I really like this last one, can you add a dozen puppies to the background running around?
> System: Here are X to choose from, which do you like?
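A rough sketch of how that back-and-forth could sit on top of the compiled graph, assuming the graph state exposes a messages channel (an assumption, not shown in the snippet above); carrying history across turns, for example with a checkpointer, is left out for brevity:
from langchain_core.messages import HumanMessage

# Each user turn is routed by the supervisor to the right sub-agent;
# the sub-agent's reply comes back as the last message in the result.
while True:
    user_text = input("> User: ")
    result = app.invoke({"messages": [HumanMessage(content=user_text)]})
    print("> System:", result["messages"][-1].content)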
And as we continue to progress, it will become more than just a simple chat and dialog flow: system controls like color adjusters, focus, etc. will be added on the fly by the models, changing the way we interact with the internet as we know it today.
Looking ahead to Google Cloud Next '25, Agentspace will see the next explosion in AI tools. Agentspace is one of the catalysts unlocking enterprise users' potential, as it brings a single pane of glass to agents and, more explicitly, to agent-of-agents architectures.
I hope to see you at Next '25, where there will be a lot of fun Retail and other industry-relevant topics, along with amazing announcements that will positively impact our individual productivity.