Separation of Control in XMPro MAGS: Why It Can Be Trusted
Pieter van Schalkwyk
CEO at XMPRO, Author - Building Industrial Digital Twins, DTC Ambassador, Co-chair for AI Joint Work Group at Digital Twin Consortium
The fundamental architecture of XMPro MAGS (Multi-Agent System) incorporates a crucial design principle—separation of control from execution. This separation creates a robust security model that establishes trust through structural constraints rather than through mere promises about behavior. To understand why this matters, we must first examine how the Agent Memory Cycle and control mechanisms function as distinct components in the system.
The Agent Memory Cycle: Thinking Without Direct Action
The Agent Memory Cycle represents the "thinking" component of the MAGS system. It processes observations, generates reflections, and formulates plans—but crucially, it cannot directly execute actions in external systems.
When the Agent Memory Cycle produces a plan, it outputs this in PDDL (Planning Domain Definition Language) format, which is essentially a structured representation of intent rather than execution capability.
PDDL is a standardized planning language that describes states, goals, and possible actions in a formal, declarative manner. It specifies what should happen but lacks the mechanisms to make those things happen in external systems.
This limitation is by deliberate design. The Agent Memory Cycle can think about taking actions, but it cannot actually implement them without the mediation of another system component—the XMPro DataStream.
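To make the distinction concrete, here is a minimal sketch of the idea that a PDDL plan is declarative data describing intent. The domain, goal predicates, and parsing logic below are illustrative assumptions, not actual XMPro MAGS identifiers or formats:

```python
import re

# Hypothetical PDDL-style plan: pure text describing *what* should be true.
# Nothing in this structure can touch an external system by itself.
pddl_plan = """
(define (problem restore-pressure)
  (:domain pump-station)
  (:goal (and (valve-open v1) (pump-running p2))))
"""

def extract_intent(plan_text: str) -> list[str]:
    """Parse goal predicates out of the plan text.

    The plan states intent only; acting on these predicates is
    delegated to a separately configured execution component.
    """
    return re.findall(r"\(([a-z-]+ [a-z0-9]+)\)", plan_text)

print(extract_intent(pddl_plan))  # ['valve-open v1', 'pump-running p2']
```

The point of the sketch: the agent's output is inert data. Some other component must choose to interpret it, and that component is where control lives.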
XMPro DataStreams: The Control Envelope
XMPro DataStreams function as the "control envelope" around the entire system, providing the permissions, guard rails, and execution pathways through which any agent action must pass.
The pattern for all XMPro MAGS processes follows these macro steps:
Step 1 - Input Data To An Agent
Step 2 - Agent Team Executes Memory Cycle In An XMPro DataStream
When an agent's Memory Cycle produces a PDDL plan, this plan must be interpreted and executed through a DataStream that has been deliberately configured with specific permissions and capabilities. The Agent cannot override this as it has no executive powers.
Step 3 - Action Automation In DataStream
The DataStream applies "hard guard rails" that determine which planned actions can actually be executed, under what conditions, and with what limitations.
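The guard-rail step above can be sketched as a gatekeeper that every planned action must pass before execution. The action names, the allowlist, and the limit values are hypothetical, chosen only to illustrate the pattern:

```python
# Assumed, illustrative control-envelope configuration: which actions
# may execute, and under what limits. Not the actual XMPro DataStream API.
ALLOWED_ACTIONS = {
    "adjust-setpoint": {"max_delta": 5.0},  # bounded changes only
    "send-alert": {},                       # always permitted
}

def execute_via_datastream(action: str, params: dict) -> str:
    """Execute a planned action only if it passes the hard guard rails."""
    rules = ALLOWED_ACTIONS.get(action)
    if rules is None:
        return f"REJECTED: '{action}' is not in the control envelope"
    if "max_delta" in rules and abs(params.get("delta", 0)) > rules["max_delta"]:
        return f"REJECTED: '{action}' exceeds configured limits"
    # Only at this point would the external system actually be touched.
    return f"EXECUTED: {action} {params}"

print(execute_via_datastream("adjust-setpoint", {"delta": 3.0}))   # executed
print(execute_via_datastream("adjust-setpoint", {"delta": 50.0}))  # rejected
print(execute_via_datastream("shutdown-plant", {}))                # rejected
```

Note the design choice: the rejection happens in the execution layer's configuration, so even a plan containing "shutdown-plant" never reaches a live system.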
Tools: Capability Without Direct Control
Similarly, the tool usage within MAGS follows this same principle of separation. When an agent's Memory Cycle decides to use a tool, it cannot directly invoke system capabilities. Instead, it must request the tool's use through a controlled interface. The DataStream then evaluates this request against predefined rules and permissions before allowing any action to proceed.
This creates a situation where tools are available but not directly accessible to the agent. The agent can request their use, but cannot bypass the control mechanisms that govern their application.
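One common way to implement "available but not directly accessible" is a registry behind a mediating interface, so the agent holds only a tool name, never the function itself. The tools, roles, and registry shape below are assumptions for illustration:

```python
# Hypothetical tools; an agent never receives these functions directly.
def read_sensor(tag: str) -> str:
    return f"value of {tag}"

def write_setpoint(tag: str, value: float) -> str:
    return f"{tag} set to {value}"

# The registry pairs each tool with the roles permitted to invoke it.
TOOL_REGISTRY = {
    "read_sensor": (read_sensor, {"monitor-agent", "control-agent"}),
    "write_setpoint": (write_setpoint, {"control-agent"}),
}

def request_tool(agent_role: str, tool_name: str, *args):
    """Mediate a tool request against predefined rules before any call."""
    entry = TOOL_REGISTRY.get(tool_name)
    if entry is None:
        return "DENIED: unknown tool"
    tool, allowed_roles = entry
    if agent_role not in allowed_roles:
        return f"DENIED: {agent_role} may not use {tool_name}"
    return tool(*args)

print(request_tool("monitor-agent", "read_sensor", "PT-101"))       # allowed
print(request_tool("monitor-agent", "write_setpoint", "PT-101", 4.2))  # denied
```

Because the agent only ever calls `request_tool`, the permission check cannot be skipped, regardless of what the agent's plan asks for.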
Why This Creates Trust
This architectural separation creates trustworthiness through structural constraints rather than behavioral promises. Even if an agent's Memory Cycle were to generate plans with problematic actions, those actions simply cannot be executed without passing through the DataStream's control mechanisms.
The trust is built into the architecture itself, not dependent on the perfect behavior of the agent.
Practical Implementation for Organizations
Organizations implementing XMPro MAGS can build their trust model on this separation rather than on agent behavior alone.
By focusing on the DataStream as the control mechanism—separate from the agent's planning capabilities—organizations can create AI systems that have sophisticated reasoning capabilities while maintaining rigorous control over their actual impact on systems and data.
Conclusion
The separation between the Agent Memory Cycle and the execution control in XMPro MAGS represents a fundamental architectural choice that enables trustworthy AI. The agent can think, plan, and request—but the DataStream determines what actually happens. This model allows organizations to leverage the capabilities of advanced AI while maintaining explicit, transparent control over its actions.
This separation principle—thinking in one component, permission and execution in another—provides the foundation for responsible AI deployment where trust is built through structure rather than promises.
Pieter van Schalkwyk is the CEO of XMPro, specializing in industrial AI agent orchestration and governance. Drawing on 30+ years of experience in industrial automation, he helps organizations implement practical AI solutions that deliver measurable business outcomes while ensuring responsible AI deployment at scale.
About XMPro: We help industrial companies automate complex operational decisions. Our cognitive agents learn from your experts and keep improving, ensuring consistent operations even as your workforce changes.
Our GitHub Repo has more technical information if you are interested. You can also contact me or Gavin Green for more information.
Read more on MAGS at The Digital Engineer