Building Responsible Decentralized AI: Insights from d/acc Day with Vitalik Buterin, Juan Benet & Dawn Song at UC Berkeley
As the crowd settled into UC Berkeley's Banatao Auditorium, the air buzzed with anticipation. We were about to witness a rare convergence of brilliant minds tackling one of the most pressing questions of our time: How do we build responsible AI in a decentralized world?
The panelists represented a perfect storm of blockchain and AI expertise: Ethereum co-founder Vitalik Buterin, Protocol Labs founder Juan Benet, and UC Berkeley professor Dawn Song, moderated by a general counsel from the Decentralized AI Society.
The d/acc Vision: More Than Just Acceleration
"We do not want a world that tries to prevent problems by stagnating," Vitalik stated early in the discussion, setting the tone for what would follow. "I don't actually think the steady state utopia is something that exists. I think when there is no overall progress, you get more economic and social tension."
This wasn't just a defense of technological progress—it was an articulation of the d/acc philosophy that has become Buterin's calling card: decentralized, defensive, and democratic acceleration.
But what does this mean in practice? Buterin broke it down:
"We want to create a world where a very small group of people—or in a future, not people—don't end up controlling a huge amount of power. We don't want a highly centralized system from which there is no escape."
He pointed to the real-world evidence for this concern, noting how "even protesting has become much higher cost because of facial recognition" in authoritarian regimes. Technology, when centralized, becomes a tool for control rather than liberation.
The Cybersecurity Inflection Point
Dawn Song brought the conversation into sharp focus on immediate risks, particularly in cybersecurity.
"AI is a dual technology. It can help both attacker side and defender side," she explained. "Our analysis shows that AI is going to actually, unfortunately, help attackers more in the near term."
This sobering assessment didn't lead to pessimism, however. Instead, Song outlined promising research directions in proactive defense through what she called "secure by construction" approaches: using AI itself to build systems whose security properties are formally verified.
"I do strongly believe that we are hitting an inflection point now with Frontier AI advancements. We can actually use AI to build proof agents that can help automate theorem proving for program verification."
Rather than just using AI to generate code—which Song noted now accounts for about 50% of code in many companies—she advocated for "verifiable code generation," where AI creates not just the code but also its formal specification and proof of correctness.
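To make the idea concrete, here is a minimal, hypothetical sketch in Lean 4 of what such an artifact could look like. The function `absVal` and the theorem `absVal_nonneg` are illustrative names chosen for this example, not anything produced or discussed on the panel; the point is that the deliverable bundles code, a formal specification, and a machine-checked proof together.

```lean
-- Hypothetical sketch of a "verifiable code generation" artifact:
-- the output is not just code, but code plus a formal specification
-- plus a proof that the checker can verify mechanically.

-- The generated code: integer absolute value.
def absVal (n : Int) : Int :=
  if n < 0 then -n else n

-- The formal specification and its proof: the result is never negative.
theorem absVal_nonneg (n : Int) : 0 ≤ absVal n := by
  unfold absVal
  split
  · omega  -- case n < 0: the goal 0 ≤ -n follows by linear arithmetic
  · omega  -- case ¬ n < 0: the goal 0 ≤ n follows directly
```

In Song's framing, an AI "proof agent" would fill in the proof portion automatically, so a human reviewer only has to trust the proof checker and confirm that the stated theorem actually captures the intended security property.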
The Centralization Dilemma
Juan Benet articulated the fundamental tension at the heart of AI development: the tradeoff between open, decentralized development and security.
"There's a long conversation between should you build extremely advanced AI systems, especially AGI or ASI, in a fully open, decentralized setting or more closed setting," Benet explained.
The benefits of openness are obvious to the tech community: diffusing power and capabilities globally, decreasing unfair advantages of early movers, and enabling a pluralistic vision of the future. But the risks are equally clear:
"When you create this kind of open environment, you also open up the landscape for attackers being able to use extremely powerful and capable models to cause massive disruption."
Despite his open-source background, Benet admitted that in this case he has "been mostly on the other side." The potential risks of fully open development of superintelligent systems, he suggested, include "extinction level risks."
But centralization brings its own dangers—the potential for a small group to "exploit the massive capability expansion" and the geopolitical risks of national AI races.
Finding Middle Ground in a Complex Landscape
As the discussion continued, it became clear that none of the panelists saw an easy path forward. Both complete centralization and unrestricted openness carry existential risks.
Buterin suggested focusing on "tools and things that allow higher degrees of human-computer interaction" rather than autonomous AI agents. "I don't actually think that creating independent life forms is even necessarily the optimal thing from our point of view."
Song highlighted the importance of "agents that work in the best interest of users instead of the companies that create them"—a radical shift from the current paradigm where models are "trained with objectives to maximize revenue for the companies."
Benet emphasized the need for verification and security at every step of the AI development process, from training to deployment, while acknowledging the limitations: "We barely have a functioning internet... talking about being able to contain superhuman intelligence with that kind of infrastructure, it's just not very likely."
The Precipice of Something Extraordinary
As the panel drew to a close, the moderator posed a provocative question: "Have we already opened Pandora's Box? And would we even know if we have?"
Buterin's response was philosophical: "It depends on which layer of the box you're talking about... We opened one about two million years ago with fire. There was the one before with biogenesis."
Song called for "more scientific understanding" and "science and evidence-based AI policy," noting that even within the AI community there's fragmentation about fundamental risks and approaches.
But it was Benet who captured the extraordinary moment we find ourselves in:
"Space-time is really, really big, and we don't have any evidence of other living or intelligence systems outside of this planet... As best as we can tell, we have a really, really special thing, and we're in the precipice of something very significant. It's eerily weird to be alive in this particular moment in time."
The upside, he suggested, is "enormous"—exploring the universe, unlocking the beauty of math and science, enabling humanity to "upgrade." But the path forward is treacherous, filled with "sharp rocks," and if we fail, it could be "game over for purpose and meaning."
"It's a very hopeful moment in our species' history," Benet concluded, "and we get to write the future."
As the audience filed out of Banatao Auditorium, the weight of this responsibility—and opportunity—hung in the air. The d/acc vision offers a framework, but the implementation will require all of us.
About the Event
d/acc Day at UC Berkeley was led by Vitalik Buterin, who presented his vision for a more thoughtful approach to technological acceleration—one that prioritizes decentralization, defense, and democracy. The event was co-hosted by Berkeley RDI, a leading academic voice on safe, responsible, and decentralized AI and blockchains since Fall 2021.