How to Spot an Existential Threat
Michael Anton Dila
Designer of strategic conversations, agent of AI Diplomacy, and originator/advocate of System 3.
Can we all just take a breath? There is no small number of reasons we might all fear for the future of humanity and our planet. Extinctions are on the rise, and changes in the global climate suggest growing threats to existing ways of life, including our own. There are emerging geopolitical threats, "local" conflicts with global effects from Ukraine to the Middle East to the South China Sea. And, of course, there's AI, and the often repeated (though rarely explained) idea that there's a 5-10% chance that AI could lead to the extinction of humanity.
Listening to Elon Musk talking with Andrew Ross Sorkin at the New York Times DealBook Summit (https://youtu.be/2BfMuHDfGJI?si=ZR82h0WvI94SRJpV), I was surprised to hear Elon refer to Douglas Adams' Hitchhiker's Guide to the Galaxy. He used it to draw out the lesson that the hard part of our human predicament, at any given point, is to figure out what the right questions are.
In the Hitchhiker's Guide we discover that the Earth is actually a huge organic computer, conceived by the universe's most intelligent beings (the mice, as it turns out) to calculate the answer to the question of "life, the universe, and everything." The answer, as it turns out, is 42. On receiving this answer, the mice realize that they were never sufficiently clear about what the question was (which, in a savage twist of irony, turns out to be, "what do you get when you multiply 6 by 9?").
My point is not that we should all relax and spend more time appreciating cosmic jokes, but that we should take Elon's point: that getting to the right questions is both the hardest and the most important work we face. On the technical front, AI is likely on the path to further breakthroughs. The research, science and technology have certainly appeared to be gathering a head of steam in the last few years. What emerges will likely surprise us. Hopefully, the surprises won't leave us as flat-footed as the mice in The Hitchhiker's Guide.
Our greatest existential threat, however, is at least as likely to be our inability to govern our technology as it is to be the result of a runaway superintelligence. In fact, these are really versions of the same fear. The challenge for governance is that there is no body or institution that is trusted, competent and authorized to govern pervasive and ubiquitous technology. This is why we have not come to shared norms, goals and action on the climate crisis. It is why nuclear detente has been an insufficient deterrent to conflicts short of total war.
In the case of climate destruction, warfare and the existential threat of "a technology that decides" to leave humanity behind, it is "our" choices about how to live that are the root cause of these threats. Yet most of us have little or no input into the choices and decisions that elites in business and government make that affect all of our lives.
OpenAI was founded not only with an awareness of the potential to misuse and abuse powerful technology, but with an intention to institute a design that would prevent such harms. We can have a good faith belief that these were authentic intentions and still be worried that those intentions can be undone. Nazi Germany and Apartheid South Africa each had rule of law and systems of justice. In both countries those systems were co-opted, corrupted and made complicit in the crimes of those regimes. Precisely what ought to horrify us most in such examples is not that they operated outside the law, but that their evil intent became law.
How we spot the dangers of existential threat is only part of our problem. We need to design practical paths to effective action to confront such threats. If the intent of the OpenAI board when it fired Sam Altman was to challenge such a threat, then it failed. If, as its own mission assures us, OpenAI intends to create AGI that benefits all of humanity, then is it, in fact, equipped to carry out that mission?
Hélène Landemore and John Tasioulas have suggested that "we" may be a crucial element in ensuring that OpenAI (and others) effectively carry out their stated missions (https://iai.tv/articles/we-need-to-democratize-ai-helene-landemore-john-tasioulas-auid-2680). They believe that the deliberative reasoning capacity of citizen assemblies can be adapted to integrate with the governance systems of private companies. This is an untested hypothesis, but it informed an effort OpenAI undertook earlier this year, when it held a competition for ideas to make democratic inputs into the governance of AI possible. In fact, I led a team that made a submission, and though ours was not among the ten awarded grants by OpenAI, we were among thousands encouraged to think about and design solutions for these challenging problems.
How do we ensure not only survival, but the ethically effective operation of democracy, under conditions of existential threat? That may not be the question we face, but it is certainly one we should be asking more often.