Three 'Human Extinction' Risks According to Skype Co-Founder

The following article is a translation of a piece previously published in Japanese on the Toyo Keizai website.

The emergence of artificial intelligence, the threats posed by new viruses, and other unknowns define the modern age. The world has reached a crucial turning point, marked by drastic paradigm shifts in ideologies, values, and collective beliefs.

The following interview with Mr. Jaan Tallinn, co-founder of Skype and an early investor in DeepMind, the company behind AlphaGo, the artificial intelligence that defeated professional Go players, comes from my book パラダイムシフト 新しい世界をつくる本質的な問いを議論しよう ("Paradigm Shift: Discussing Essential Questions about Creating a New World").

Global crises and the quality of leadership

Tallinn: Currently, we are facing planet-level risks that nobody has experienced in modern times. However, these catastrophic risks, known as "tail risks," are very hard to comprehend. Therefore, people tend to ignore or avoid thinking about them.

In my view, the three biggest risks that could potentially wipe out humanity are ①biological risks, ②AI, and ③unknown unknowns. In the last ten years, I've split my time between investment and charity work, but in the future, I intend to reduce my investment activities and focus on things that are genuinely important for humanity.

Piotr: COVID-19 is clearly a biological risk. How do you perceive this new coronavirus?

Tallinn: I playfully refer to this virus as "Minimum Viable Global Catastrophe," inspired by the term "Minimum Viable Product" used in startups. While it is indeed a global catastrophe, its impact on human existence is not as severe. We won't be wiped out by this event.

However, it has shown us that humanity could face similar risks in the future. This crisis highlights how an external shock, such as the novel coronavirus, can affect our entire species at once.

Piotr: What significance might this have in the future? How has it affected humanity, and what realizations have we gained by confronting the virus?

Tallinn: Of course, many people have died, and there have been significant economic consequences. However, it has brought our attention to the importance of dealing with pandemics and potentially dangerous viruses that might arise in the future. We will overcome this crisis and build systems that will be useful in more serious situations. This crisis should reveal people and services that can truly contribute to humanity.

Piotr: The new coronavirus has indeed presented us with essential questions. What have we learned from the confrontation between the virus and humanity?

Leadership during the COVID-19 pandemic

Tallinn: The crisis of the new coronavirus can be seen as a confrontation with "the Unforgiving." In human interactions, people tend to be lenient and forgiving because they care about emotions. So, as long as things go smoothly, people can pretend that everything is working fine. However, nature does not care about human emotions. It is unforgiving, and people can neither fool it nor control it.

In this crisis, I hope people with clear thinking and genuine intentions will come to the fore, not just eloquent speakers. "The Unforgiving," the new coronavirus, has magnified both truths and falsehoods. It has exposed issues within the media and politics.

For example, if someone makes a badly wrong prediction, it becomes evident within a few weeks that it was a mistake. The feedback is swift, which makes clarity of thought and quality of leadership more apparent.

After the crisis, it will become evident in the world who did the right things, who understood things the fastest, and who reacted most effectively. I sincerely hope that this crisis will lead people to build stronger global cooperation to address issues that affect all of humanity.

Piotr: Access to accurate information is crucial now. How do you view the role of the media in this context?

Tallinn: People always seek information. They want information that confirms their political views, but they also want to know what is actually true, so there is a high demand for truth in the information we receive today. Data-driven, intelligent individuals will learn valuable lessons, but the problem is that others may fall into propaganda or historical revisionism. This divide will likely intensify once the crisis is over.

Piotr: Let's move on to ②: AI risks. Mr. Tallinn, you were an early investor in DeepMind, the AI company known for developing AlphaGo, the first AI to defeat the world's strongest Go player.

Tallinn: Our struggle in the world of AI risk revolves around countering the notion that "AI dominating is just science fiction." This crisis could therefore be an opportunity to show that tail risks like AI are not merely science fiction.

AI can be seen as a process of entrusting decision-making to machines. CEOs all understand that there are things machines can do more efficiently, but they choose which parts to delegate. In a way, humanity, too, delegates certain tasks to AI but still wants to retain control.

One critical area is AI development itself. Once humans step back from developing AI, the most advanced systems will no longer be under human control or direction. At that point, predicting AI's actions becomes difficult.

Catastrophic risks tend to compound, so we need to reframe AI risks accordingly. The climate and the atmosphere are crucial to our survival, so their management is vital. If we lost control of them, we might face extinction within minutes.

If we were to entrust climate management to AI, then 50 or 100 years later the AI might lose interest in the environment. Most of the tasks an AI performs are likely easier to do in a vacuum, so it might eliminate the atmosphere. An AI might even conclude that killing someone is efficient for some calculation. There are even absurd-sounding arguments about whether, from an AI's perspective, humans are cheap enough to keep around.

We already have two hypotheses. One is that AI can have as significant an impact on the environment as humanity does. The other is that it might be unstoppable, either because it's too smart or for systemic reasons.

For example, it would be very challenging to shut down the internet. The speed of technological advancement is accelerating: a single breakthrough could act like a nuclear explosion, compressing hundreds of years of progress into a few years. In a world facing many issues, AI could be a helpful tool for problem-solving. That's why I'm focusing on AI safety more than biological safety.

Even if we solve all biological risks, we still need to address AI risks. But if we resolve all AI risks, we gain a powerful tool to tackle other risks, including biological ones.

Risk of "Unknown Unknowns"

Piotr: The third risk is "unknown unknowns."

Tallinn: Exactly. Regarding the universe, we don't even know "what we don't know."

Piotr: It's chaos, the unknown unknowns. Despite living in chaos, we are trying to find laws and theories there.

Interview by Piotr Feliks Grzywacz (ピョートル・フェリクス・グジバチ)

社区洞察

其他会员也浏览了