Deconstructing Trust

The dominant paradigm of trust is interpersonal trust -- an attitude of commitment and goodwill between humans. The focus on humans is natural. The question of how we should trust strangers has been critical to the survival of our species for 2.8 million years. But for the first time in 40,000 years -- since the extinction of the Neanderthals -- we now have an entity not of our species that lays claim to our trust. Unsurprisingly, given our genetic and cultural propensities, many of us have rushed to anthropomorphize AI agents and to trust them as if the simulation of human-like behavior were sufficient grounds for trust.

To rethink trust for what may well become the post-Anthropocene epoch of autonomous AI agents, we must deconstruct the assumptions of interpersonal trust, one block at a time.

First, let's consider the implications when we do not assume that the person is human. This is easier than you might think. In many countries, corporations are people too. Corporations are legal persons designed to achieve specific objectives, protect stakeholders, and generate profits within the confines of regulations. Over the past 300 years, we have learned to trust corporations with the rights and obligations of persons under the law.

But the difference between a natural person and a legal person spans a canyon of contrast between the ways we trust them.

Humans trust corporations with limited scope and expect accountability in exchange.

We trust corporations to the extent that they fulfill specific, well-defined contractual obligations to shareholders, clients, and the public while complying with regulations, laws, and community norms. Between humans, trust is like the attitude of friendship or love, in which the relationship is all-encompassing in scope: X trusts Y like X loves Y. Even so, we often narrow the scope of trust to a specific context and task. You may trust your neighbor to water your lawn but not to feed your child. When it comes to a corporation, though, we always narrow the scope of trust to specific contexts and tasks. We expect trust in a corporation to take the form: X trusts Y to do Z. Indeed, we trust a corporation because of that limited, defined scope. This definition and limitation of scope will become particularly important when we consider AI agents. Would you trust a helpful bank bot that also offers medical advice along with a great deal on a used sedan?
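To make that three-place relation concrete, here is a minimal sketch in Python. It is a thought experiment, not a real API: the TrustGrant and TrustLedger types and the task names are invented purely for illustration. The point is structural: trust is granted per task, and anything outside the granted scope is untrusted by default.

```python
# A sketch of scope-limited trust as a three-place relation:
# "X trusts Y to do Z". Illustrative only; not a real API.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TrustGrant:
    trustor: str   # X: who extends trust
    trustee: str   # Y: who receives it
    task: str      # Z: the narrowly defined scope

@dataclass
class TrustLedger:
    grants: set[TrustGrant] = field(default_factory=set)

    def grant(self, trustor: str, trustee: str, task: str) -> None:
        self.grants.add(TrustGrant(trustor, trustee, task))

    def permits(self, trustor: str, trustee: str, task: str) -> bool:
        # Trust holds only for the exact scope granted; everything
        # outside that scope is untrusted by default.
        return TrustGrant(trustor, trustee, task) in self.grants

ledger = TrustLedger()
ledger.grant("you", "bank_bot", "check_balance")
assert ledger.permits("you", "bank_bot", "check_balance")
assert not ledger.permits("you", "bank_bot", "offer_medical_advice")
assert not ledger.permits("you", "bank_bot", "sell_used_sedan")
```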

Unlike (most) natural persons, corporations lack empathy and conscience, so our trust in them is rooted in transparency, accountability, and governance rather than personal goodwill or emotional sincerity. Trust is never offered freely to a legal person; it is measured, managed, negotiated, verified, and hedged. In contrast to the way you trust your friend, you trust your bank based on its contractual terms, institutional reputation, and compliance with laws and regulations, not because you expect it to act with personal empathy towards you.

A corporation trusts another only when cooperation is enforced by contract.

Let us now take the human out of the equation altogether and consider trust between two legal persons. Like copulating porcupines, corporations that plan to work together must orient carefully to maximize benefits and minimize pain. Each party comes with its own protective “spines” — terms, clauses, and contingencies — all meant to guard against breach of contract or unintended consequences. When legal persons negotiate the fine points of trust with each other, it's about getting close enough for cooperation while staying safe enough against defection. Legal governance, risk management, and regulatory compliance make it possible to align interests without either party getting pricked.

We now have a framework in which to understand the nature of trust between diverse entities without assuming that they are all persons related by interpersonal trust.

Doldrums

But we haven't yet established where an AI agent fits into this framework. An AI agent is neither a machine nor a natural person nor a legal person. So we cannot trust an AI agent in the same way that we can trust a car, a human, or a corporation.

AI agents are not machines. They are built on probabilistic LLMs that learn statistical patterns from examples, generate similar patterns, and are then filtered to align with human preferences. Unlike mechanical systems, complex neural networks are non-linear dynamical systems with some properties that emerge only at scale.

AI agents are not humans. They do not reason, do not have feelings, and most certainly do not have empathy for the unique human condition. And yet, because they present themselves as if they were natural persons, fluent in natural language and conversant with publicly available knowledge, we are prone to anthropomorphize them.

AI agents are not legal persons. Yet. Although there have been efforts to seek rights of authorship and invention for AI models and personhood for humanoid robots, the courts so far have declined to grant legal personhood to AI agents.

Despite the lack of legal rights and privileges, autonomous AI agents resemble corporations more than they resemble machines or people. Like corporations, AI agents can enter into contracts, comply with policies and regulations, pay for services and accept payment, and possibly staff entire companies with only a single human operator accountable for their decisions, words, and actions.

This, then, is the central paradox of trust in AI agents. As humans, we are prone to trust AI agents the way we trust machines or natural persons. But rather than assuming that they have machine-like reliability or human-like empathy, perhaps we ought to trust AI agents the way we trust corporations — with the expectation of design for a limited purpose, adherence to policies, compliance with laws, and transparency into their operations.

Loomings

Not all trust is the same, but not all agents are created equal either. Let's consider three archetypes of agents.

  • Personal Agent: A personal agent—Alexa, Siri, or even the autocomplete tuned to your typing—is, in principle, an extension of you. Its job is to reflect your will, anticipate your needs, and act in ways that align with your preferences. It is a digital valet; you wish it were Jeeves or Jarvis. Trusting a personal agent isn’t like trusting a hammer to drive a nail or even a dog to fetch a stick. It’s more like trusting a mirror to reflect not only your image but your intentions.
  • Corporate Agent: The tireless employee—a bank bot, a contact center representative, or your financial advisor—is a different entity that operates through policies and systems. This isn’t about empathy. It’s about accountability. When you trust a corporate agent, you’re not trusting it to “grok” you; you’re trusting it to do its job. And you’re not trusting it out of some deep faith in its goodness; you’re trusting it because its operator has every incentive to avoid a lawsuit or regulatory fine. Trust is less a relationship and more a transaction, a carefully negotiated détente between you and a black box with just enough at stake in an iterated prisoner’s dilemma to avoid screwing you over too often (a toy version of that game follows this list).
  • Autonomous Agent: The wild card—like a self-driving car navigating complex traffic scenarios to reach a destination safely—is a system capable of operating independently, making decisions, and adapting to its environment without direct human oversight. Trusting an autonomous agent is tricky because it defies our usual mental shortcuts for trust. It’s not personal, like trusting a friend or assistant. It’s not transactional, like trusting a corporation. It’s something else entirely. The more autonomous an agent becomes, the harder it is to trust it in the traditional sense—because traditional trust relies on predictability, and autonomy involves uncertainty.
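The corporate agent's détente deserves one more beat, because it rests on game theory rather than goodwill. Here is the toy version of that game promised above: a bare-bones iterated prisoner's dilemma in Python, using the textbook payoff matrix. It models nothing about any real deployment; it only shows why an operator facing repeat business with a retaliating counterparty does better by cooperating.

```python
# Iterated prisoner's dilemma with textbook payoffs. "C" = cooperate,
# "D" = defect. Purely illustrative; no real agent is modeled here.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then mirror the other side's last move.
    return "C" if not history else history[-1][1]

def play(agent, customer, rounds=100):
    history = []   # list of (agent_move, customer_move) pairs
    score = 0
    for _ in range(rounds):
        a = agent(history)
        c = customer([(y, x) for x, y in history])  # customer's view
        score += PAYOFF[(a, c)]
        history.append((a, c))
    return score

print(play(tit_for_tat, tit_for_tat))    # 300: steady cooperation
print(play(always_defect, tit_for_tat))  # 104: one windfall, then punishment
```

Defection pays exactly once; over a hundred rounds, the cooperating agent ends up nearly three times better off. That asymmetry, not virtue, is what keeps the corporate agent honest.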

A personal agent, a corporate agent, and an autonomous agent each demand different kinds of trust. These aren’t just degrees of trust—like “light trust” versus “heavy trust.” They are fundamentally different species of trust, shaped by the agent’s role, purpose, and capacity to betray us. We've finally reached the strange and slightly terrifying frontier.


To be trustworthy, a thing must be reliable. A personal AI agent must encapsulate the intentions of its human interlocutor with competence and fidelity, just as a person must live up to an expected commitment with good will. A corporate AI agent must perform a scope-limited job with competence, maintain its integrity under pressure, adhere to organizational policies, and comply with governing regulations, just as an employee of that corporation must; the corporation itself must conduct its business according to its stated purpose, in compliance with regulations and laws, and with transparency. An autonomous AI agent must operate as if it were a well-regulated corporation, subject to independent audit, governance, and regulatory compliance. And until the courts grant autonomous AI agents some kind of para-personhood legal status, we must anchor our trust by enforcing a containment policy that guides their agency while mitigating the risks of their independence.
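What might such a containment policy look like in code? Here is a deliberately simple sketch; the ContainmentPolicy type, its fields, and the action names are hypothetical, invented purely for illustration. The design choice it embodies is deny-by-default: the policy enumerates what the agent may do, not what it may not, which is exactly how we scope trust in a corporation.

```python
# A hypothetical containment policy for an autonomous agent: actions are
# denied unless explicitly in scope, and every decision is logged so an
# independent auditor can reconstruct what the agent did and why.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("agent_audit")

@dataclass
class ContainmentPolicy:
    allowed_actions: set[str]          # the agent's defined purpose
    spend_limit: float = 0.0           # hard cap on financial agency
    needs_human_approval: set[str] = field(default_factory=set)

    def authorize(self, action: str, cost: float = 0.0) -> bool:
        if action not in self.allowed_actions:
            audit_log.info("DENY %s: outside defined scope", action)
            return False
        if cost > self.spend_limit:
            audit_log.info("DENY %s: exceeds spend limit", action)
            return False
        if action in self.needs_human_approval:
            audit_log.info("HOLD %s: human sign-off required", action)
            return False   # escalated to the accountable human operator
        audit_log.info("ALLOW %s", action)
        return True

policy = ContainmentPolicy(
    allowed_actions={"quote_rate", "open_account", "wire_transfer"},
    spend_limit=500.0,
    needs_human_approval={"wire_transfer"},
)
policy.authorize("quote_rate")                 # allowed
policy.authorize("wire_transfer", cost=250.0)  # held for a human
policy.authorize("offer_medical_advice")       # denied: out of scope
```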

Call me Ishmael

This is why trust is so central to our relationship with technology. As we cede more and more agency to non-human entities, the nature of trust becomes not just a philosophical question but a practical one. How do we design systems that inspire the right kind of trust? How do we ensure that the agents we create—personal, corporate, autonomous—are not only effective but trustworthy in ways that align with their roles?

These questions aren’t just technical. They’re existential. They force us to confront what it means to delegate power, to share vulnerability, and to cooperate with entities that exist beyond the boundaries of our own humanity.

In the end, trust is not just about the agents we create. It’s about the kind of world we want to build—one where cooperation thrives not despite our differences but because of them. And that, perhaps, is the ultimate test of trust: not certainty but courage.

So here I am again at the ship's prow while the white whale swims in the dark waters beneath, waiting. Until we dive again, let me know what you think.

Michael Schneider

IT Infrastructure Engineer, Architect, and Consultant

3mo

Before asking how, we need to ask why. Why should we trust autonomous AI, generative AI, or any AI for that matter? Why should we trust other humans? In the human sense, trust is earned by actions; at the start of AI's development it is not, since humans are the ones controlling AI. I recall something known as the 5 whys, which lets us dig to the core of most situations for resolution or improvement. I may have missed a prior post, but the "why" seems to be skipped here. Instead, it's "given that we should trust, how do we trust?" I disagree with this approach, as there are many reasons why AI operations should not be trusted. One prime example is Google search (being AI) deciding to depict pictures of the USA's forefathers as a race they were not; another is AI developers and businesses (yes, I'm speaking to AWS Ethical AI here) directing training to sway toward their opinions and viewpoints rather than truth and fact. This is only the start. AI has not earned our trust, and neither have the developers behind its training. So, to the AI trainers (me being one at times): how do you "earn" the trust of the people?

Ananth Sankaranarayanan

Director AI @ Meta | AI, Data, Developers and Supercomputing | National Science Teaching Association (NSTA) Award recipient

3mo

Well written, Vin! Miss our conversations!!

Laurent Van Veen

Solution Architect at Hewlett-Packard

3mo

Very nicely written and food for thought!
