What do we think about extreme risks?
The End is Nigh
Among my retirement activities I haven’t yet taken up the opportunity to don sandwich boards, walk the streets and offer opinions of inevitable doom. To be clear, it’s absolutely not how I feel, but it’s also good to know it’s a fallback option, hehe. (For anyone interested, I have actually included a personal update at the end of this blog detailing one of my less well-known retirement activities!)
Anyway, who needs a doomsayer’s sandwich board contribution when we have mainstream media headlines like those appearing this week, in big bold letters, on the front pages of national newspapers catering to a range of public opinion?
“AI creators fear the extinction of humanity” – The i, Weds 31/5/23
“AI pioneers fear extinction” – The Times, Weds 31/5/23
“AI COULD WIPE OUT HUMANITY” – The Daily Mail, Weds 31/5/23 (where CAPSLOCK is essential in any written opinion)
Here was I, thinking optimistically about how useful AI applications can be in tackling some of our thorniest societal and global inequalities… but I guess if everyone is dead then it’s certainly equal, eh?!
Anyway, that’s only the mainstream national newspapers (they are still circulating, apparently), but I guess the far more widely read social media channels can serve up this kind of existential scaremongering all the time if the right (or wrong) feeds come your way…
Educating the Public on Extreme Risks
I think, as a society and probably even as a business community, we are really pretty hopeless at thinking about long-term risks (including the intergenerational kind) and their probabilities. Along with critical thinking, it’s not a particular focus of the skills taught in school, so I guess it’s not our fault. This skill shortfall is especially true when it comes to extreme risks, where even the professional “go to” tools of modelling and probabilistic distributions are as useful as underwater hairdryers (I know, I don’t even need land-based hairdryers).
Wouldn’t it be incredible if we had some kind of broadly accepted mental template to help us think through extreme risks and their impact, in some kind of ranking order that might actually inform, prioritise and justify our current actions, investments and votes? OK, I’m just too idealistic at times!
With the various media outlets as the primary routes of public and business information, what chance do we stand? But in the hands of scientists and academics… a little better, I suggest.
Now, certain niche professionals (I can’t say experts, it’s too toxic), like, say, actuaries, think about probability and risk ALL THE TIME (I just went all Daily Mail there, sorry).
To them thinking in those terms is as comforting, wholesome and familiar as tucking into a bowl of chicken or vegetable soup with a nice hunk of fresh bread.
Whereas in the hands of our media, certain examples of extreme risk are instead presented to the public as a bowl of Carolina Reaper chillies with a hunk of Icelandic fermented shark (Hákarl) to consume. Yet paradoxically, other extreme risks are presented to us as empty bowls – nothing to see here – despite what the boffins say. So how can we get a collective sense of proportion?
Before I move on, I think I had better briefly acknowledge that it’s not been too long since the niche subject of LDI in pensions became a systemic, and therefore arguably extreme, risk. I won’t be rehearsing my advocacy and arguments here, or referencing the buildings that remain standing, but I will say it’s another reason why our over-reliance on probabilistic distributions falls flat on its backside in extreme scenarios.
Mathematical nonsense
It’s the kind of overly mathematical comfort-blanket nonsense that leads to Local Authority Pension Funds publishing that a failed transition to a greener economy, leading to a 4-degree-warmer world, would see their annual investment returns reduce by between 0.07% and 1.21%.
I especially like the decimal places, eh? Please tell me that fails your sniff test? Yet it’s published as serious stuff that well-paid, smart folk have rubber-stamped.
I suggest that in a 4-degree-warmer world, no one will be thinking about pensions! (I take my lead from climate scientists on the extreme physical risks that such a world would crystallise.)
For more on this story, see Sandra Wolf’s article on Mallowstreet, 25/5/23: “How useful are climate scenario models?”
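To show why I distrust that decimal-place precision out in the tails, here’s a toy calculation (entirely my own illustration, with made-up modelling choices – nothing to do with any actual fund’s model). A thin-tailed normal distribution makes a “5-sigma” bad outcome look essentially impossible, while a modestly fat-tailed alternative rates the very same outcome tens of thousands of times more likely:

```python
# Toy comparison: probability of an outcome more than 5 standard
# deviations beyond expectation under two different model choices.
# (Illustrative only; the distributions and threshold are my own picks.)
from scipy import stats

p_thin = stats.norm.sf(5)      # thin-tailed: standard normal upper tail
p_fat = stats.t.sf(5, df=3)    # fat-tailed: Student-t, 3 degrees of freedom

print(f"Thin-tailed model: {p_thin:.1e}")   # roughly 3e-7
print(f"Fat-tailed model:  {p_fat:.1e}")    # roughly 8e-3
print(f"The fat-tailed model rates it {p_fat / p_thin:,.0f}x more likely")
```

Same data, same headline volatility, wildly different view of the extremes – which is exactly why a single neat range like “0.07% to 1.21%” tells you more about the model than about a 4-degree world.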
Scary Stats
Various high-level, data-based, science-backed, peer-reviewed stats do actually scare and unsettle me – not to the point of making sandwich boards, but to the point of arguing for better frameworks and better-informed public discourse.
For example, here are some hopefully familiar nature-based concerns that come somewhere towards the top of my mental ranking list:
Monitored vertebrate wildlife populations have declined by an average of 69% since 1970 (WWF Living Planet Report 2022). (I’m 60 this year, so 1970 doesn’t feel so long ago.)
In the last 30 years, nearly 80% of flying insect biomass has disappeared from parts of Europe. No bugs splattered on the windscreen… but understanding insects’ role in the overall food chain, and the knock-on consequences of a kind of insect apocalypse, is key here.
But there are actionable ways back, and for any of the stats that attract your attention, there are solutions. We just need to understand the consequences of inaction and be able to assimilate different risks in some kind of ranking order. Intuition can work pretty well if you just think about these things with helpful and reliable source data. But delegating the job to AI could work even better!
Be more like Tim
Thinking about “extreme risks” reminds me fondly of a personal favourite talk given by Tim Hodgson of the WTW Thinking Ahead Institute 10 years ago in 2013.
Here is a link to his entertaining 13 minute talk on extreme risks.
So anyway in this work (which illustrates why probabilistic distribution risk analysis is not a good guide to impact and behaviour) Tim offers us a ranking of extreme risks.
There are related detailed papers going back to 2009 I think which are periodically updated by WTW's specialists.
Funnily enough, AI wiping out humanity (this week’s topical headlines) appeared in 2013 under “technological singularity”, i.e. rise of the machines (although I don’t think it made the top 15).
But in 2009 'Global temperature change' did not feature at all yet it occupied the number one spot of extreme risks in 2019. Also 'killer pandemic' appeared at number 15 in 2009 only to have dropped off the list by 2013.
Alien invasion of Earth (the unfriendly kind) is a favourite go-to extreme risk but always ranks quite low… after all, it hasn’t happened yet… has it?
This week’s top 20 long term risks!
Intriguingly, these extreme risks shift around in ranking order (and in our consciousness) despite their very long-term nature and the comparatively short 14-year period of consideration. Perhaps this is because we can only act in response to what we have just experienced (e.g. future pandemic preparation has much improved since COVID, and WhatsApp lessons are being learned). However, this is perhaps why something like the boiling-frog nature of climate change will be so hard to halt or reverse. What lived experience would be needed for us to prioritise, at scale, tackling the contributions to the harmful trend?
So would we be better off if The Times and the Daily Mail et al adopted this awareness format? Or would it only serve to add to the confusion or, maybe worse, create indifference to a list of things that might destroy all that we hold dear?
I believe it would be helpful to have some kind of effective, publicly recognised ranking system for big, existential, longer-term risks, together with their remedies and likelihoods – one through which we can justify spend, and forgo things today, for a better tomorrow. However, these cannot be placed alongside the same pot of spend as immediate lived-experience issues (cost-of-living crisis, jobs, war, etc.), because the immediate always trumps the longer term, even though the longer-term problems will swamp the present ones once they become immediate and lived.
The COVID experience should have taught us that redirecting spend at scale is always possible and that planning ahead is better than reacting after the event.
But humans can’t ever agree… and our focus is on the now. So here’s where I’d look to AI to help us, rather than worry about AI being the next source of our demise!
Thought experiment question: where would AI rank itself in a list of existential risks to humanity, alongside nuclear wars, killer pandemics, climate catastrophes, biodiversity loss, alien invasions, etc.?
Which ones would AI suggest we get to work on sooner rather than later? And how much should we spend on each?
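Just to make the thought experiment concrete, the kind of ranking-and-spend framework I’m wishing for can be sketched in a few lines. Every number below is an invented placeholder, purely to show the mechanics (a crude expected-harm score, weighted by how tractable each risk is assumed to be, with a notional budget split in proportion) – this is absolutely not anyone’s actual estimate:

```python
# Hypothetical extreme risks with made-up inputs:
#   p            = assumed chance of occurring this century
#   impact       = relative harm if it does (0-10)
#   tractability = assumed effectiveness of spend at reducing the risk (0-1)
risks = {
    "climate catastrophe": {"p": 0.10,  "impact": 9,  "tractability": 0.8},
    "killer pandemic":     {"p": 0.25,  "impact": 6,  "tractability": 0.9},
    "nuclear war":         {"p": 0.05,  "impact": 10, "tractability": 0.3},
    "rogue AI":            {"p": 0.02,  "impact": 10, "tractability": 0.5},
    "alien invasion":      {"p": 0.001, "impact": 10, "tractability": 0.1},
}

def score(r):
    # Crude priority: expected harm, weighted by how fixable the risk is.
    return r["p"] * r["impact"] * r["tractability"]

ranking = sorted(risks, key=lambda name: score(risks[name]), reverse=True)

# Split a notional budget in proportion to each risk's score.
total_budget = 100.0
total_score = sum(score(r) for r in risks.values())
budget = {name: total_budget * score(risks[name]) / total_score
          for name in risks}

for name in ranking:
    print(f"{name:20s} score={score(risks[name]):.3f} "
          f"budget={budget[name]:5.1f}")
```

With these placeholder inputs, pandemics and climate top the list while alien invasion gets pennies. Change the inputs and the ranking changes – which is rather the point: at least the disagreement would then be about explicit numbers rather than about headlines.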
Human behavioural biases always show us the way
Left to the media (the reflected court of public opinion), I think suspicion always attaches to the new (e.g. AI) and trust always sits with the old (e.g. fossil fuels)… no matter how misguided that is at times… but you may disagree.
I can’t help preferring science, data and academia to inform us and to guide us… not the money, not the power, not the immediate issues, not the popular politics, but the science and the data. Is that merely another construct of my human bias, though?
I note that the letter from experts that caused all of this week’s AI headlines is really just informed opinion rather than evidence-based research… these are opinions from folk who are mostly financially linked to technology in a massive way (although they may see their franchises getting overtaken by AI upstarts).
I see this as quite unlike the typically low-income, lab-based climate scientists and biodiversity data gatherers, who characteristically and reluctantly resort to “code red for humanity” warnings only once the data is overwhelming – but I’m probably missing their conflicts.
If Science is to Save Us
I can go on and on, and I do, and I realise I mustn’t (bad habit). But I’ve a couple of other thoughts/strands/connections to share, although I won’t elaborate too much on them.
I’m a bit of a fan of astrophysicist Martin Rees’s book “If Science is to Save Us”, listed as a best science book of 2022 by the Financial Times (which should ensure it’s not a bestseller). It’s very much written in English, not in Science language; it’s far-reaching and wise. It’s human and it’s compelling. The title can be read as a warning or as a note of optimism, perhaps?
It is structured around a few big-picture, interlinked mega-challenges, which include enabling AI.
Taken from his introduction: “The case for effective action to address long term threats is compelling. But unless there is a clamour from voters, governments won’t properly prioritize measures crucial for future generations. So scientists must enhance their leverage… via blogging and journalism, and enlisting charismatic individuals and the media to amplify their voice and change the public mindset.”
I absolutely agree with that.
In 2019 I wrote a blog or two here in my LinkedIn articles section (e.g. “Demonstrating Vision” and “Do we just get what we deserve?”), hoping that business might lead the way where governments and people can’t or won’t. The business noise around various ESG factors has certainly changed since then, but it’s not obvious to me that the actions and motivations are yet materially different. It easily becomes another source of profit, and why might we expect it to be any different?
So instead I’m looking to science, and the communication method offered by Rees et al, to lead the way. Big business obviously gets too caught up in its own spin and gain-seeking imperatives, despite the good intentions of many of its people.
It’s a Cruel Public Arena (the public includes big business backed lobbyist influences)
However, news also simultaneously reaches me via Katharine Hayhoe (one of my favourite climate scientists and definitely one of the world’s most effective and resilient climate communicators) that there’s a mental health crisis in science. (I know, there’s a mental health crisis everywhere!)
As a female science communicator based in Texas, she often faces the most sickening online abuse and attacks for communicating science and data that don’t fit with the world views of big segments of the population. It takes so much guts and resilience to keep doing what she is doing, yet the issues are in all our interests. Less prominent scientists are much more likely to hide than expose themselves to this.
As I said, I’m not reaching for the sandwich boards. Rees cites powerful grounds for optimism, and I think science has to play a bigger role… in government, in business and in society, so long as it can be heard and supported. Human nature is a barrier to this; AI delegation could help… unless it wants to destroy us…
A personal note – something completely different!
Despite the above, I’m loving life! Many of you know I’m enjoying a lot of adventures and global travels in my retirement (responsibly, apparently, although I’m conveniently believing what I’m told about offsets). I was very keen to actually take up the freedom to pursue the privileged opportunities offered by retirement with financial comfort, and to try to forge a whole new phase of life. However, it is, and always was, hard to leave behind the familiar and the identity that we get from professional working life. Although it’s very tempting to stay involved in some capacity or other, my own naïve compromise is to occasionally chip in with a comment or blog on this platform and not be tied to any dates or obligations. So far, so good anyway!
When I do enjoy a social catch up in the City or even an occasional seminar event which I really appreciate getting invited to, I realise that these LinkedIn efforts are read and noticed and even appreciated by quite a few kind old friends. And so long as it’s only occasional, a lengthy written article is hopefully tolerated! (my last was December 2022)
So anyway I have a bit of personal news to share. It seems that’s OK on LinkedIn these days?
One of the retirement (and part-time pre-retirement) hobbies my wife and I have taken up, and do quite often, is metal detecting. That’s a very different crowd to City life, although the passion and knowledge among its many hobbyists is no less. We’ve actually found quite a lot of interesting things, which we’ve recorded on the national database (the Portable Antiquities Scheme), and it’s helped me to enjoy looking backwards in time as much as forwards! My historical knowledge of Ancient Britain through the Celtic, Roman, Saxon and Medieval periods is already considerably more developed than I ever could have expected it to be. And even though much of the time we find nothing of historic interest, it is always time well spent out with nature in rolling countryside.
Fame at last!
Anyway, as well as many really nice finds, I was incredibly fortunate to find something quite spectacular on a Dorset farm in February 2020 – something of national importance in archaeological terms. So my Bronze Age finds (over 3,000 years old) are currently prominently on display in Room 2 of the British Museum, in their own display case – you can go and see them for free if you are that way inclined! You might guess it’s an incredibly rare accolade and it’s something I feel very proud about. For anyone who watches the TV series “Detectorists”, it’s my “Lance moment” and something of a high five, or respectful nod, in my new community.
The key find is a beautiful, complete rapier (sword) from the mid Bronze Age (c. 1250 BC), and possibly unique to England/the UK – I won’t get technical, but there’s a lot of detailed info behind that. Such metalwork was the height of technology then and it’s incredibly impressive. There’s also a decorated arm ring and an axe head. It’s not a permanent display, and these artefacts may end up at the Dorset County Museum in time, but for now they’re at the British Museum, on display for all to see, and I’m very happy!
I wonder what a list of extreme risks might have looked like in the Bronze Age?
Ooh, I should make a plea: to any of my landowner friends and contacts who might be willing to let us detect on their land, we’d be very grateful! Let me know if that’s a possibility for you, or if you know someone else who might offer to help.
In the comments section I've posted a picture of me in the British Museum recently behind the display case in question.
Smallprint
In my last public ramble at Christmas, “Do I want more tech or more people influencing my life?”, I added a footnote that “the blog was not produced by ChatGPT-3, although there’s no guarantee that future ones won’t be” (it’s already GPT-4 now). For the avoidance of doubt, there’s been no use of AI in this blog either, for the same reason as before – i.e. if there had been, it would be more informative, more succinct and more amusing. I remain open to changing my approach…
John Belgrove 2/6/2023 xx