Oasis Security Approach For AI Agents: Verifying Instead Of Trusting

Decentralized AI is growing rapidly, and the increasing prominence of autonomous agents is reshaping the landscape of Web3, with machine-powered agents set to become integral players in on-chain ecosystems. However, as these agents take on greater decision-making power and handle more capital, a critical challenge arises: how do we trust their actions? In an era where transparency and accountability are crucial, ensuring these AI agents operate with integrity and verifiability becomes essential.

As a leader in privacy and security for decentralized ecosystems, Oasis is uniquely positioned to address the evolving dynamics of AI verification in Web3. Through its work on privacy-preserving technologies and trusted execution environments, Oasis brings much-needed solutions that bridge the gap between user trust and autonomous agents. So let's explore the methods of verifying AI agents, each with its trade-offs, while examining how Oasis's innovations, such as ROFL (its teeML framework) and the Sapphire EVM extension for off-chain computation, contribute to the future of decentralized AI.

Verifying Decentralized AI Agents: The Oasis Perspective

As the Oasis blog argues, the decentralized web demands a verification process to ensure the trustworthiness of agents. Oasis, with its expertise in privacy and off-chain computation, provides insight into why verifiability is so important, especially as AI agents increasingly handle sensitive and high-stakes tasks. Trust in Web3 cannot be taken for granted; it must be proven.

The blog discusses four main verification methods: Zero-Knowledge Proofs (zkML), Optimistic Verification (opML), Trusted Execution Environments (teeML), and cryptoeconomic models. Each offers distinct advantages and carries some major drawbacks. Oasis's contributions and framework innovations showcase its commitment to enhancing both verifiability and privacy for AI agents.

Zero-knowledge proofs provide strong assurances of model correctness without exposing underlying data. zkML excels at ensuring that AI models execute faithfully, which is critical for trust in sensitive environments. This method enables agents to operate without revealing data, safeguarding privacy while verifying computation. However, as highlighted in the article, zkML's high resource demands make it expensive and less scalable for more complex models.


Oasis recognizes the potential of zkML but also acknowledges the challenges of outsourcing proof creation, which introduces latency, cost, and privacy concerns. While zkML remains promising, its current limitations suggest that zkML solutions might be best suited for simpler, well-defined use cases where privacy is paramount, rather than for broad, real-time AI tasks.
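To make the zkML idea concrete, here is a minimal sketch of the *claim* a proof system would attest to. This is only a toy hash commitment that binds a model, input, and output together; a real zkML system would instead emit a succinct zero-knowledge proof that the output came from the committed model, without revealing the weights. All names here are illustrative, not a real zkML API.

```python
import hashlib
import json

def commit(model_weights, x, y):
    # Toy commitment: hash of (weights, input, output). A real zkML
    # system would prove y = f(weights, x) without opening the weights;
    # this sketch only illustrates binding the claim to the computation.
    payload = json.dumps({"w": model_weights, "x": x, "y": y}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def infer(weights, x):
    # Trivial linear "model": y = w0 + w1 * x
    return weights[0] + weights[1] * x

weights = [0.5, 2.0]
x = 3.0
y = infer(weights, x)      # agent's claimed output
c = commit(weights, x, y)  # published alongside the claim

# A verifier given the opening can recompute and check the commitment.
assert infer(weights, x) == y
assert commit(weights, x, y) == c
```

The expensive part that zkML replaces is exactly the verifier's recomputation in the last step, which is why proof generation cost dominates for large models.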

Optimistic Verification takes a different approach by trusting AI model outputs by default while allowing third-party network "watchers" to verify correctness. This scalable approach keeps routine operation cheap, since results are only recomputed when a watcher raises a dispute within the challenge window.

From Oasis’s vantage point, opML may serve well in systems with low-latency requirements or where network watchers are actively monitoring agent behavior. However, for applications where privacy and speed are critical, optimistic verification's longer verification timelines and reliance on external parties may limit its practicality.

Oasis places significant emphasis on Trusted Execution Environments (TEEs) as a promising solution for verifiable AI. teeML uses hardware-based security to ensure that computations are executed correctly and privately within a secure enclave. TEEs provide both privacy and verifiability, making them a strong contender in use cases where both attributes are necessary. The article notes that while teeML is a secure and practical approach, it comes with dependencies on hardware, which can limit its decentralized nature and broader applicability.



Oasis’s work in this area, particularly with its ROFL teeML framework, positions them as innovators in the space. By extending the Ethereum Virtual Machine (EVM) through Oasis Sapphire, Oasis enables off-chain computations to be integrated securely within blockchain environments. This innovation not only strengthens privacy but also enhances the verifiability of AI agents in a way that aligns with blockchain principles.

The cryptoeconomic approach, where node operators vote on queries and discrepancies are penalized, offers a simple and inexpensive solution. However, as the article notes, this method is the least secure of the major approaches, due to the potential for collusion among nodes. Cryptoeconomic verification may be ideal for low-risk scenarios but falls short when applied to high-value tasks or sensitive information.
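The vote-and-slash mechanism described above can be sketched as follows. This toy version takes a simple unweighted majority and halves the stake of dissenting nodes; real systems typically weight votes by stake and handle ties and collusion far more carefully. All names are illustrative.

```python
from collections import Counter

def settle(votes, stakes):
    """Toy cryptoeconomic verification: operators vote on a query's
    answer; the majority answer wins and dissenters are slashed."""
    majority, _ = Counter(votes.values()).most_common(1)[0]
    slashed = {n for n, ans in votes.items() if ans != majority}
    new_stakes = {n: (s // 2 if n in slashed else s) for n, s in stakes.items()}
    return majority, slashed, new_stakes

votes = {"node-a": 42, "node-b": 42, "node-c": 99}    # node-c deviates
stakes = {"node-a": 100, "node-b": 100, "node-c": 100}
answer, slashed, stakes = settle(votes, stakes)
assert answer == 42
assert slashed == {"node-c"}
assert stakes["node-c"] == 50
```

The collusion risk the article mentions is visible here: if two of the three nodes coordinate on a wrong answer, the honest node is the one slashed.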

Oasis, while not dismissing cryptoeconomics entirely, focuses on more robust solutions like teeML to ensure that AI agents maintain a high level of trust. Given the growing importance of AI agents in Web3, solutions that prioritize both verifiability and privacy, such as Oasis’s Sapphire and teeML, offer stronger guarantees than the cryptoeconomic model.

The blog also touches on the roles of Oracle networks and Fully Homomorphic Encryption (FHE) in verifying AI. Oracle networks allow for trusted off-chain computation by providing tamper-proof, verifiable data, while FHE allows for computations on encrypted data. Both approaches offer exciting possibilities for the future, and Oasis continues to explore these areas to provide even stronger privacy and trust guarantees in the decentralized ecosystem.
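To illustrate the FHE property of computing on encrypted data, here is a toy additively homomorphic scheme: ciphertexts can be summed without decryption. This construction is deliberately insecure and only demonstrates the homomorphic property; a real deployment would use an established scheme such as Paillier or CKKS.

```python
import random

N = 2**61 - 1  # modulus for the toy scheme

def keygen():
    return random.randrange(1, N)

def enc(m, k):
    return (m + k) % N  # NOT secure; illustration of the interface only

def add(c1, c2):
    return (c1 + c2) % N  # homomorphic addition on ciphertexts

def dec(c, k, count):
    # Subtract one copy of the key per ciphertext folded into the sum.
    return (c - count * k) % N

k = keygen()
c = add(enc(10, k), enc(32, k))  # server adds without ever seeing 10 or 32
assert dec(c, k, 2) == 42
```

The point is the interface: an untrusted party evaluated `add` on ciphertexts alone, and only the key holder could recover the result, which is the guarantee FHE extends to arbitrary computation.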


While still in the early stages, oracle networks and FHE offer promising future directions for enhancing AI verifiability. Oasis's existing infrastructure, which already supports privacy-first computation, is well-positioned to integrate these technologies as they mature.

In conclusion, the challenges of AI verification are multifaceted, and while there is no perfect solution today, several promising methods are emerging. Oasis’s contributions, particularly with teeML and the Sapphire EVM extension, demonstrate their commitment to solving the verifiability problem while prioritizing user privacy. By providing practical, scalable solutions for AI verification, Oasis is helping build a future where decentralized agents can be trusted to operate autonomously, transacting without user intervention.

As Decentralized AI evolves, Oasis will continue to play a crucial role in innovating and updating privacy-preserving and verifiable technologies, ensuring that AI agents can be trusted, secure, and ready for widespread adoption in Web3.

Thanks for reading!
