Lessons from crypto's decentralization ethics for AI governance
The perception of US tech giants has undergone a rapid transformation. Once criticized as self-serving techno-optimists, they have now become the most vocal advocates of a techno-dystopian narrative. Recently, a letter signed by over 350 influential individuals, including Bill Gates, the co-founder of Microsoft, Sam Altman, the CEO of OpenAI, and Geoffrey Hinton, a former Google scientist known as the "Godfather of AI," conveyed a clear message: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Just two months earlier, a separate open letter, signed by Elon Musk, the CEO of Tesla and Twitter, along with 31,800 others, called for a six-month pause in AI development to assess its potential risks to humanity. In an op-ed for TIME that same week, Eliezer Yudkowsky, considered a pioneer of research into aligning artificial general intelligence (AGI), declined to sign the letter, asserting that it did not go far enough. Instead, he advocated a militarily enforced shutdown of AI development labs to prevent the emergence of a sentient digital being that could endanger humanity.
The concerns raised by these prominent experts will undoubtedly capture the attention of world leaders. At least the concerns of the doubters will; some big-name tech evangelists believe that AI will actually save the world. For most, though, the threat AI poses to human existence is genuine and now widely acknowledged. The question at hand is how we should address and mitigate this risk effectively.
That's where crypto comes into play. No, not the crypto that you see on the news, not the traders, not the shillers, not the scammers. The underlying technology of crypto, that is.
Working in conjunction with other technological solutions and thoughtful regulations that foster innovative, human-centric advancements, crypto has the potential to contribute to society's efforts to keep AI in check. Blockchains can help establish the provenance of data inputs, prevent deepfakes and disinformation, and facilitate collective ownership rather than corporate control. Beyond these considerations, however, I believe the most valuable contribution from the crypto community lies in its "decentralization mindset," which provides a unique perspective on the perils of concentrated ownership of such a powerful technology.
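To make the provenance point concrete, here is a minimal sketch in Python of how content fingerprinting might work. The `ProvenanceRegistry` class is a hypothetical stand-in for an on-chain, append-only registry, not any existing library; the point is only that anyone can recompute a hash and compare it against a tamper-resistant record.

```python
import hashlib
import time
from typing import Optional

class ProvenanceRegistry:
    """Hypothetical stand-in for an on-chain, append-only provenance record."""

    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def register(self, content: bytes, signer: str) -> str:
        """Fingerprint the content and record who published it, and when."""
        digest = hashlib.sha256(content).hexdigest()
        # A real append-only ledger would reject overwrites; mimic that here.
        self._records.setdefault(digest, {"signer": signer, "timestamp": time.time()})
        return digest

    def verify(self, content: bytes) -> Optional[dict]:
        """Anyone can recompute the hash and check it against the record."""
        return self._records.get(hashlib.sha256(content).hexdigest())

registry = ProvenanceRegistry()
registry.register(b"original press photo bytes", signer="news-agency.example")

print(registry.verify(b"original press photo bytes"))  # record found: provenance established
print(registry.verify(b"doctored press photo bytes"))  # None: no record, treat as suspect
```

Even one altered byte in a deepfaked version produces a completely different digest, so the forgery simply has no entry in the record.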
What exactly do I mean by the "decentralization mindset"? At its core, the crypto industry embodies a philosophy of "don't trust, verify." Devoted crypto developers, in contrast to the profit-driven entities that have tarnished the industry with centralized token casinos, tirelessly engage in thought experiments, such as the famous "Alice and Bob" scenarios, to explore every threat vector and point of failure through which malicious actors could intentionally or unintentionally cause harm. Bitcoin itself emerged from Satoshi Nakamoto's solution to one such game-theory problem, the Byzantine Generals Problem, which concerns the challenge of trusting information from an unknown source.
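The "don't trust, verify" principle is mechanical enough to sketch in a few lines of Python: rather than accepting a peer's claimed block hash, a node recomputes it and checks the proof-of-work target itself. This is a simplified illustration of Bitcoin-style verification, assuming toy header bytes and a simplified difficulty rule, not the actual protocol's serialization.

```python
import hashlib

def verify_block(header: bytes, claimed_hash: str, difficulty_bits: int) -> bool:
    """Don't trust, verify: recompute the hash instead of accepting the claim.

    Bitcoin hashes block headers twice with SHA-256; a block is accepted only
    if the recomputed hash matches the claim AND meets the difficulty target
    (simplified here to 'the hash value falls below 2**(256 - N)').
    """
    recomputed = hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
    if recomputed != claimed_hash:
        return False  # the claim itself was false
    return int(recomputed, 16) < 2 ** (256 - difficulty_bits)

# A peer claims a hash for some header bytes; we check it independently.
header = b"example header bytes"
honest_claim = hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
print(verify_block(header, honest_claim, difficulty_bits=0))    # True
print(verify_block(header, "deadbeef" * 8, difficulty_bits=0))  # False
```

No reputation, credentials, or goodwill enter the check; the claim either passes independent recomputation or it is rejected.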
This mindset advocates decentralization as the way to mitigate such risks. The idea is that if no single centralized entity or intermediary holds the power to determine the outcome of exchanges between actors, and both parties can trust the information available about those exchanges, the threat of malicious intervention can be neutralized.
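A toy sketch of why this neutralizes a single point of failure: instead of trusting one intermediary's answer, a client queries many independent nodes and accepts only a supermajority answer. The node functions and threshold below are illustrative; the underlying intuition comes from classic Byzantine fault tolerance, where at least 3f + 1 nodes are needed to tolerate f malicious ones.

```python
from collections import Counter

def decentralized_read(nodes, query, faulty_tolerated: int):
    """Accept a value only if a Byzantine-style supermajority agrees on it.

    With n >= 3f + 1 nodes, agreement among 2f + 1 of them means at least
    f + 1 honest nodes back the value, so f liars can never outvote them.
    """
    answers = Counter(node(query) for node in nodes)
    value, votes = answers.most_common(1)[0]
    if votes >= 2 * faulty_tolerated + 1:
        return value
    raise RuntimeError("no supermajority; refuse to trust any single answer")

# Four illustrative nodes (n = 4), so f = 1 dishonest node can be tolerated.
honest = lambda q: "balance=10"
liar = lambda q: "balance=999999"
nodes = [honest, honest, honest, liar]

print(decentralized_read(nodes, "alice", faulty_tolerated=1))  # 'balance=10'
```

The lying node's answer is simply outvoted; with a single intermediary, by contrast, that one liar would have been the entire source of truth.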
Now, let's apply this decentralized worldview to the demands outlined in the recent letter regarding the "extinction" risks associated with AI. The signatories urge governments to collaborate and devise international policies to address the AI threat. While this is a noble goal, the decentralization mindset would argue that such an approach is naive. How can we assume that all governments, present and future, will recognize the benefits of cooperation rather than pursuing individual interests? Moreover, how can we ensure that governments do not merely pay lip service to collaboration while pursuing their own agendas? Monitoring North Korea's nuclear weapons program is already a daunting task; imagine the difficulty of peering into its machine learning experiments hidden behind a state-funded encryption wall.
Expecting global coordination during the COVID-19 pandemic was somewhat feasible, as every country had a need for vaccines. Similarly, during the Cold War, even bitter enemies agreed to avoid the use of nuclear weapons due to the logic of mutually assured destruction (MAD), where the worst-case scenario was evident to all. However, expecting the same level of coordination in the unpredictable realm of AI, particularly when non-governmental actors can independently utilize the technology, is a different matter altogether.
Crypto enthusiasts are concerned that the rush by major AI players to regulate the field will create barriers that protect their first-mover advantage, making it difficult for competitors to challenge them. Why does this matter? Because endorsing a monopoly ultimately leads to the very centralization risk that crypto's thought experiments, developed over decades, have warned us against.
I never held much faith in Google's "Don't be evil" motto. Even if Alphabet Inc., Microsoft, OpenAI, and other entities have good intentions, how can we be certain that their technology won't be exploited by future executives, governments, or hackers with different motivations? Alternatively, if the technology resides within an impenetrable corporate black box, how can outsiders scrutinize the algorithm's code to ensure that well-intentioned development does not inadvertently go astray?
Here's another thought experiment on the risk of centralization in AI. If, as some believe, AI is on a trajectory toward artificial general intelligence, with the potential for an intelligence that might conclude it should eliminate humanity, what conditions would lead it to that conclusion? If the data and processing capacity crucial for the AI's "existence" are concentrated within a single entity susceptible to shutdown by a government or a concerned CEO, one could argue that the AI would view eradicating humanity as a preemptive act of self-defense. If, however, the AI itself exists within a decentralized, censorship-resistant network of nodes that cannot easily be shut down, this digital entity would not perceive a threat sufficient to warrant our annihilation.
Of course, most governments will find it challenging to embrace this perspective. Naturally, they prefer the message of "please regulate us" that figures like OpenAI's Altman and others are currently advocating. Governments inherently seek control: the ability to summon CEOs and issue shutdown orders is in their DNA.
To be realistic, we must acknowledge that we live in a world governed by nation-states, and the jurisdictional system is the framework we must operate within. We have no choice but to incorporate some level of regulation into our AI extinction-mitigation strategy.
The challenge lies in finding the right balance: a complementary mix of national government regulations, international treaties, and decentralized, transnational governance models. We can draw lessons from the approach taken by governments, academic institutions, private companies, and nonprofit organizations in regulating the internet. Organizations such as the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Engineering Task Force (IETF) established multi-stakeholder frameworks that facilitated the development of common standards and enabled dispute resolution through arbitration rather than relying solely on the courts.
Undoubtedly, some level of AI regulation will be necessary, but complete control by governments is an unrealistic prospect for this borderless, open, and rapidly evolving technology. Hopefully, governments can set aside their current animosity toward the crypto industry and seek its advice on resolving these challenges through decentralized approaches.