Existential risk
Willem Schramade
Professor of Finance & Sustainable Investing Advisory | Nyenrode Business University
Almost everyone knows them: the disaster films in which humanity escapes destruction by a hair. They often start idyllically, but soon people worldwide are ambushed by aliens or natural disasters. At the very end, a few heroes manage to avert the disaster, after many casualties and dramatic plot twists. What is typically missing from the plot is reflection on how things could have come this far and how the disaster could have been prevented. The explanation is simple: such reflection is anything but spectacular, does not trigger our dopamine and is therefore unsuitable for an action film. An exception is 'Don't Look Up', but even that film covers only the last year before the disaster and mainly illustrates our inability to deal with existential risks.
This needs to change, according to philosopher Toby Ord. In his book ‘The Precipice’, Ord defines existential risks as those that threaten humanity's long-term potential. By this he means any scenario in which humanity is either wiped out (premature extinction) or survives in a worthless way – for example, in a permanent authoritarian regime.
But what are these existential risks? Ord distinguishes two categories: natural and man-made. The first category includes impacts from comets and asteroids, extreme volcanic eruptions, and the explosion of stars. However, Ord attaches a joint probability of less than 0.01% to such events wiping out humanity in the next 100 years.
According to Ord, the most serious threats are man-made: nuclear war, climate change, pandemics and, above all, the derailment of artificial intelligence. He estimates the joint probability of such scenarios ending humanity in the coming century at 1 in 6. That is, in effect, Russian roulette with the future of humanity. It is an unsustainably high level of risk: if it stays at such levels in the coming centuries, things will probably go wrong this millennium. That is why Ord calls his book The Precipice: we are at a point in history where existential risk is far greater than ever before, and a drastic reduction is necessary.
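The step from "1 in 6 per century" to "probably this millennium" is a simple compounding argument. A back-of-the-envelope sketch (my illustration, not a calculation from the book; the per-century risk is assumed constant):

```python
# If existential risk stays at 1/6 per century, what is the chance of
# surviving ten centuries, i.e. one millennium?
per_century_risk = 1 / 6
centuries = 10

# Probability of surviving every one of the ten centuries
survival = (1 - per_century_risk) ** centuries
catastrophe = 1 - survival  # roughly an 84% chance of catastrophe

print(f"Survival probability over {centuries} centuries: {survival:.1%}")
print(f"Cumulative catastrophe probability: {catastrophe:.1%}")
```

Under that (admittedly crude) assumption of a constant risk level, survival over the millennium comes out at only about 16%, which is what makes the current level unsustainable.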
Such risk reduction is not easy, because while humanity has a lot of technology and power, it lacks the wisdom to handle them properly. Fortunately, it is not all doom and gloom in Ord's book. He also points to ways of closing the gap between power and wisdom. This requires, among other things, much more research into specific existential risks: how big are they, and how can they be reduced? How can we coordinate between countries and organisations? And how can the public debate be informed about this? There are organisations working on this, such as the Future of Humanity Institute at Oxford University, to which Ord is affiliated, and, in the Netherlands, Otto Barten's Existential Risk Observatory.
The challenge is mainly a mental one: how do we arrive at a new ethical perspective on the way we see the world and our role in it? How do we learn to think more long-term?
These are precisely the challenges we face if we want to be genuine long-term investors, and thinking in terms of existential risks is very useful here. The first and most obvious reason is that existential risk can also become financial risk, as we have already seen with climate change. The mere threat of existential risk can trigger action and investment-relevant changes, especially in the current era of transitions. People mistakenly think they need not consider long-term risks because those risks extend beyond their investment horizon. That is an expensive fallacy: long-term risks can materialise faster than you think, and the market's anticipation of those risks faster still.
Second, the perspective of existential risk helps us think more thoroughly and broadly about risks and value, including those that play out over shorter horizons. After all, these are risks that either have not yet occurred or happened too long ago to appear in financial data series. The risks are complex and unprecedented, which makes precise figures impossible. You have to learn to deal with that, and still dare to estimate and quantify those probabilities. It also means that backward-looking indicators such as tracking error are of no use. You are better off focusing on the underlying processes: how will they affect the structures of societies, economies and industries? What does that mean for the profitability of companies? And so on. That, in turn, can be fed back into models.
Another mistake is to think that you are safe with passive investments. On the contrary: passive investing means holding so many securities that you cannot meaningfully assess the risks involved. The diversification benefits beyond roughly 30 stocks are small, while the fragmentation of attention increases enormously. And don't forget that entire indices can fail: the Russian stock exchange closed this year and effectively went to zero, just as it did during the Russian Revolution a century ago. Such things happen, and you had better take them into account. Let's think more carefully about the disasters that could befall us, before we end up in a disaster movie ourselves.
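The claim that diversification benefits level off beyond roughly 30 stocks can be checked with a stylised textbook model (my sketch, not from the article): an equally weighted portfolio of n stocks, each with assumed volatility sigma and assumed pairwise correlation rho, has variance sigma²/n + (1 − 1/n)·rho·sigma².

```python
import math

# Assumed illustrative parameters (not from the article)
sigma = 0.30  # single-stock volatility, 30%
rho = 0.25    # pairwise correlation between stocks

def portfolio_vol(n: int) -> float:
    """Volatility of an equally weighted n-stock portfolio
    under equal variances and equal pairwise correlations."""
    variance = sigma**2 / n + (1 - 1 / n) * rho * sigma**2
    return math.sqrt(variance)

for n in (1, 10, 30, 100, 1000):
    print(f"{n:>5} stocks: volatility {portfolio_vol(n):.1%}")
```

Under these assumptions, volatility falls steeply from 1 to 30 stocks but barely changes from 30 to 1,000: the correlated (market-wide) risk cannot be diversified away, while the number of positions to monitor keeps growing.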
This article is based on a column I previously wrote in Dutch: Existentieel risico | Investment Officer (voorheen Fondsnieuws)