Building Trust in Agentic AI: The Case for Model Supply Chain Transparency
Vincent Caldeira
Chief Technology Officer, APAC at Red Hat · Technical Oversight Committee Member at FINOS · Green AI Committee Member at Green Software Foundation · Technical Advisor at OS-Climate · Technology Advisor at U-Reg
Introduction
As AI systems transition from standalone models to autonomous, agentic systems, the need for trust, transparency, and risk-aware design has never been more critical. These intelligent agents, powered by Large Language Models (LLMs) and multi-agent orchestration, are increasingly making decisions that impact businesses, individuals, and society at large. However, trust in these systems cannot be assumed: it must be designed, measured, and continuously reinforced at the system level, not just at the model level.
One of the key enablers of AI trustworthiness is model supply chain transparency: a framework that allows organisations to assess and verify the provenance, safety, and alignment of the AI components used in complex systems. Without clear insight into how AI models are built, trained, and deployed, it becomes nearly impossible to conduct a risk-based analysis of system requirements. This blog explores why model supply chain transparency is essential, how it supports risk alignment in agentic AI, and best practices for designing trustworthy AI ecosystems.
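To make this concrete, consider what a provenance check might look like in practice. The sketch below verifies a model artifact against a manifest before it is loaded; the manifest schema, field names, and file paths are illustrative assumptions rather than an established standard (a real deployment would use a signed, standardised format such as an AI bill of materials).

```python
# Minimal sketch: verifying a model artifact against a provenance manifest.
# The manifest schema here is hypothetical, loosely inspired by AIBOM and
# model-card ideas; it is not an existing standard.
import hashlib
import json
from pathlib import Path

REQUIRED_FIELDS = {"name", "version", "sha256", "base_model", "training_data", "license"}

def sha256_of(path: Path) -> str:
    """Stream the file so large model weights are never loaded into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(manifest_path: Path, artifact_path: Path) -> list[str]:
    """Return a list of problems; an empty list means the artifact checks out."""
    entry = json.loads(manifest_path.read_text())
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - entry.keys()]
    if "sha256" in entry and sha256_of(artifact_path) != entry["sha256"]:
        problems.append("artifact hash does not match manifest (possible tampering)")
    return problems

if __name__ == "__main__":
    issues = verify_model(Path("model.manifest.json"), Path("model.safetensors"))
    for issue in issues:
        print("FAIL:", issue)
    print("OK" if not issues else f"{len(issues)} issue(s) found")
```

A check like this only establishes integrity and completeness of metadata; assessing the safety and alignment claims recorded in the manifest still requires evaluation. But without the manifest, there is nothing to assess.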
The Growing Complexity of AI Supply Chains
Modern AI systems are no longer monolithic; they are composed of multiple interconnected models, APIs, and components, including external data sources and tools. Each component carries its own provenance, dependencies, and failure modes, and every integration point introduces new risk factors that must be assessed.
These challenges underscore why model supply chain transparency is crucial, and why the industry must standardise AI supply chain visibility, ensuring that models are built with accountability and risk alignment in mind.
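One way to picture standardised supply chain visibility is a system-level bill of materials: every model, tool, and data source in an agentic system declared as a component with a provenance pointer, so that gaps can be surfaced automatically. The schema and component names below are hypothetical, intended only to illustrate the idea.

```python
# Sketch of a system-level "AI bill of materials": an agentic system declared
# as a graph of components (models, tools, data sources), each carrying a
# provenance pointer. The schema is illustrative, not an existing standard.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str                      # "model" | "tool" | "data_source"
    provenance: str | None = None  # e.g. URL of a signed manifest; None = unknown
    depends_on: list[str] = field(default_factory=list)

def unverified(components: dict[str, Component], root: str) -> list[str]:
    """Walk the dependency graph and report components lacking provenance."""
    seen, stack, flagged = set(), [root], []
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        comp = components[name]
        if comp.provenance is None:
            flagged.append(name)
        stack.extend(comp.depends_on)
    return flagged

# Hypothetical system: an agent, a CRM tool, and a knowledge-base index.
system = {
    "support-agent": Component("support-agent", "model",
                               provenance="https://example.org/manifests/agent.json",
                               depends_on=["crm-tool", "kb-index"]),
    "crm-tool": Component("crm-tool", "tool", provenance=None),
    "kb-index": Component("kb-index", "data_source",
                          provenance="https://example.org/manifests/kb.json"),
}
print("unverified components:", unverified(system, "support-agent"))
# -> unverified components: ['crm-tool']
```

The value is less in the data structure than in the discipline: a component without provenance becomes a concrete, reviewable finding rather than an unknown unknown.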
Why Risk-Based Analysis is Critical for Agentic AI
Unlike traditional AI models that produce outputs on request, agentic AI systems act autonomously in pursuit of high-level goals. This shift from reactive to proactive AI necessitates a new approach to risk assessment. Organisations deploying multi-agent orchestration and function-calling frameworks must evaluate how much autonomy each agent is granted, which tools and data it can reach, and at what points human oversight applies.
A risk-aligned AI system does not simply execute functions—it understands its limitations, communicates uncertainty, and allows for human oversight when necessary.
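As a hedged illustration of what such risk alignment might look like in code, the sketch below gates function calls by a declared risk tier: low-confidence requests are declined, and high-risk actions are escalated to a human rather than executed. The tool names, risk tiers, and confidence threshold are assumptions for the example, and calibrating the confidence signal itself is a hard problem this sketch does not solve.

```python
# Hedged sketch of risk-aligned function calling: each tool is registered with
# a risk tier, and the dispatcher declines or escalates rather than executing
# blindly. Tool names, tiers, and thresholds are illustrative assumptions.
from enum import Enum
from typing import Callable

class Risk(Enum):
    LOW = 1     # read-only, reversible
    MEDIUM = 2  # writes data, recoverable
    HIGH = 3    # irreversible or external side effects

REGISTRY: dict[str, tuple[Callable[..., str], Risk]] = {}

def tool(name: str, risk: Risk):
    """Decorator that registers a function as a callable tool with a risk tier."""
    def register(fn):
        REGISTRY[name] = (fn, risk)
        return fn
    return register

@tool("lookup_order", Risk.LOW)
def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped"

@tool("issue_refund", Risk.HIGH)
def issue_refund(order_id: str) -> str:
    return f"refund issued for {order_id}"

def human_approves(name: str, kwargs: dict) -> bool:
    # Stand-in for a real approval workflow (review queue, ticketing, etc.);
    # deny by default so unattended runs fail safe.
    return False

def dispatch(name: str, confidence: float, **kwargs) -> str:
    fn, risk = REGISTRY[name]
    if confidence < 0.6:  # hypothetical threshold; real calibration is nontrivial
        return f"declined: confidence {confidence:.2f} too low for {name}"
    if risk is Risk.HIGH and not human_approves(name, kwargs):
        return f"escalated: {name} held for human review"
    return fn(**kwargs)

print(dispatch("lookup_order", confidence=0.9, order_id="A123"))
print(dispatch("issue_refund", confidence=0.95, order_id="A123"))
```

The design choice worth noting is that the default path is refusal or escalation: the agent has to earn the right to act, rather than acting unless stopped.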
Best Practices for Enhancing AI System Trust
To ensure AI systems are trustworthy, organisations must embed safety measures at every stage of the AI lifecycle, from data sourcing and model selection through evaluation, deployment, and ongoing monitoring.
By integrating safeguards throughout the lifecycle, organisations can proactively design for trust rather than retrofit safety features after deployment. Viewed through established implementation patterns, such as Emerging Patterns in Building GenAI Products by Martin Fowler and Bharani Subramaniam of ThoughtWorks, embedding trust by design will be increasingly important to deploying AI successfully at enterprise scale in the years to come.
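As one concrete expression of trust by design, the sketch below frames trust checks as release criteria in a deployment pipeline: provenance verification, offline evaluations, and a red-team suite must all pass before an artifact ships. The check names, thresholds, and hard-coded results are placeholders for real tooling, not prescribed values.

```python
# Illustrative pre-deployment gate: trust checks run as release criteria
# rather than being retrofitted after deployment. Check names, thresholds,
# and results are placeholders for real tooling.
from typing import Callable, NamedTuple

class Check(NamedTuple):
    name: str
    passed: bool
    detail: str

def check_provenance() -> Check:
    # In practice: verify the artifact against its signed manifest (see above).
    return Check("provenance", True, "manifest hash verified")

def check_eval_scores() -> Check:
    score, threshold = 0.87, 0.85  # hypothetical harness output and quality bar
    return Check("offline evals", score >= threshold, f"score={score}, bar={threshold}")

def check_red_team() -> Check:
    failures = 0  # hypothetical count from an adversarial test suite
    return Check("red-team suite", failures == 0, f"{failures} failing probes")

def release_gate(checks: list[Callable[[], Check]]) -> bool:
    """Run every check, report results, and pass only if all succeed."""
    results = [c() for c in checks]
    for r in results:
        print(f"[{'PASS' if r.passed else 'FAIL'}] {r.name}: {r.detail}")
    return all(r.passed for r in results)

if __name__ == "__main__":
    ok = release_gate([check_provenance, check_eval_scores, check_red_team])
    raise SystemExit(0 if ok else 1)  # non-zero exit blocks the pipeline
```

Wiring the gate to a non-zero exit code means a failed trust check blocks the pipeline the same way a failed unit test would, which is what designing for trust, rather than retrofitting it, looks like in practice.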
Conclusion: Trust as a System-Level Imperative
As AI transitions from models to systems, organisations must adopt a holistic approach to trust and transparency. This requires transparency across the model supply chain, risk-based analysis of agent behaviour, and human oversight designed into systems from the start.
Ultimately, trust is not a feature; it is a foundation. To ensure AI systems are safe, effective, and aligned with human values, we must design for trust at every level, from data and models to decisions and deployments.