About The Coming Wave
Over the last weekend, I finished reading “The Coming Wave”, published earlier in 2023 [1]. If you have read it, I would be glad to hear what you think about it. If you have not read it yet: I believe it is an important book. It is specifically useful if you are excited about the potential of Artificial Intelligence (AI), Robotics, Quantum Computing and Biotechnology.
Now, I’m thinking about how best to summarise the book. Well, making good use of AI, I posed the following question to Microsoft Bing Chat: “Can you please summarise the book titled ‘The Coming Wave’ by Mustafa Suleyman and Michael Bhaskar in 200 words?”
This is what the AI behind Bing Chat gave me in return:
“The Coming Wave” by Mustafa Suleyman and Michael Bhaskar is a book that explores the impact of artificial intelligence (AI) and synthetic biology on humanity. The authors argue that these two technologies will create immense prosperity but also threaten the nation-state, the foundation of global order. Suleyman shows how these forces will create wealth and surplus unlike anything ever seen, but their rapid proliferation also threatens to empower a diverse array of bad actors to unleash disruption, instability, and even catastrophe on an unimaginable scale. The book establishes “the containment problem” - the task of maintaining control over powerful technologies - as the essential challenge of our age. Suleyman’s credentials for the task are good: he was co-founder of DeepMind, arguably the smartest AI company around. Although he hasn’t worked in molecular biology, his account of DNA sequencing, gene editing and the design and manufacture of new genetic products seems well-informed and supports his case that AI and computational biology are the twin challenges that will soon confront societies.
The above is what generative AI can seemingly pull together within seconds as of today. Still, I feel I must go a bit further.
Let’s start with the back cover. I see a good number of people, whose work, writing, and thoughts I very much respect, lending their support to this book: folks like Daniel Kahneman, Bill Gates, Eric Schmidt, Nouriel Roubini and others. That already makes it stand out.
The book appears to have been in the making for several years. It is very eloquently written, appears well researched, and offers lots of tangible, interesting and fitting historical examples, together with other pieces of evidence, to underpin the arguments laid out. I find the language sometimes a bit rich, and here and there the text includes a bit of repetition, but then I’m a fast reader, so no problem in the end.
Let’s turn to the content. The book is largely about the impact of a set of fast-advancing technologies (the coming wave), with Artificial Intelligence (AI) and Synthetic Biology seemingly the ones with the earliest substantial impact on the world (bang!), accompanied by advanced robotics steered with the help of AI, and quantum computing.
A key message is revealed early in the book: we are facing a rather big dilemma. These technologies together, without containment in place, would lead with near certainty to catastrophic or dystopian outcomes. Much of the book is the authors explaining why this is likely the case, and how bad it can get. They paint a not-so-distant future where, without checks and guardrails for these technologies, all alternative paths into the future lead to equally undesirable outcomes. While the chance of successful containment seems deplorably low, the authors argue that because the potential outcomes are expected to be so bad, containment of these technologies is a must-do.
To my discomfort, I find their arguments for what could go wrong, and why, rather logical.
The reason then, why I write this article: I agree with the authors that “this situation needs worldwide, popular attention” [2]. Awareness about the issues needs to permeate through many corners of companies and society.
Technologies come in waves, and they tend to proliferate. Once something has been invented, it tends to be difficult to control its use. Many technologies can be used for good and for bad. If the bad effects are very serious (like a global deadly pandemic caused by a synthesized virus, or a rogue AI model adjusting itself continuously through reinforcement learning and causing a sustained global cyber security disaster), then the technology needs to be contained in some form, for example by controlling its use, limiting the sharing of code via GitHub repos, constraining access to certain application programming interfaces, etc. Steps towards new regulation, as taken in the EU in June 2023, and the publication of new principles, as by the UK government in September 2023 in the case of AI, are of course part of the mix, but by far not a sufficient medicine according to the authors.
They argue that the issue gets exacerbated by certain characteristics of those key technologies, such as their asymmetric impact, their hyper-evolution, their omni-use nature and their growing autonomy [3].
I’m personally familiar with the evolution of machine learning, deep learning, and generative AI, and for that reason I tend to agree with Suleyman’s expectation of exponential improvement in AI even from today onwards. This is very exciting. Regarding large language models and foundation models, beyond learning about their opportunities and risks, you might want to look at the article and check out the predictions for 2023 produced by Bain Capital Ventures in December 2022, to see whether you recognise some of the progress they anticipated for this year.
In contrast to AI, I had no idea about synthetic biology. This is an interdisciplinary field that combines biology, engineering and of course AI to design and construct new biological systems. I know how to synthesize artificial datasets for data science purposes, and I have seen products produced by 3D printers, but I didn’t know that you can buy a DNA synthesizer to print the code for new types of life basically in your garage at home.
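As an aside, to illustrate what I mean by synthesizing artificial datasets for data science purposes, here is a minimal sketch (my own toy example, not from the book): it samples a small two-class dataset from two Gaussian clusters, a common way to fabricate data for testing a machine learning pipeline.

```python
import random

# Toy example: synthesize a two-class dataset by sampling from
# two Gaussian clusters (handy for testing an ML pipeline).
random.seed(42)

n_per_class = 100
# Class 0 clusters around (0, 0), class 1 around (3, 3).
class_0 = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(n_per_class)]
class_1 = [(random.gauss(3.0, 1.0), random.gauss(3.0, 1.0)) for _ in range(n_per_class)]

X = class_0 + class_1                       # features: 200 points in 2-D
y = [0] * n_per_class + [1] * n_per_class   # labels

print(len(X), len(y))  # 200 200
```

Synthesizing data, of course, is a far more innocent activity than synthesizing DNA.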
As is not difficult to imagine, AI (including future versions of complex foundation and large language models) may well supercharge and accelerate the further evolution of synthetic biology. As the authors suggest, it is quite possible that a completely new organism (for good or malicious purposes) could be produced by a person typing a couple of natural language prompts into a computer. Entering some sentences of text to prompt an AI model allows us today to generate realistic and completely new images and novels, and tomorrow to create a new virus or pathogen. It sends a shiver down my spine.
To get a glimpse, read about Cello, an example mentioned in the book, or Clotho. Things are indeed moving at accelerating speed. As a further example, see the announcement by Ginkgo Bioworks from August 2023 (with more information here). Ginkgo works with Google to teach AI to speak DNA, and to use and create AI foundation models and fine-tuned models for engineering biology and advancing biosecurity. The opportunities are mind-blowing. One analogy is a Google Maps for the bio space: “I want to engineer a protein with these and these characteristics; please, dear app, give me turn-by-turn navigation for how to get there.” “Ok, let’s get started: no left turn here, as this leads to a toxic outcome, so proceed with the following step in your protein engineering approach …”
Interesting questions pop up for example related to intellectual property: Can companies who create and own certain foundation models use them to predict a million new enzymes and just patent all these generative outcomes?
And it all moves in the direction of access for everybody at low cost. As Anna Marie Wagner explains in the above-referenced, insightful video clip [@56:52]: “Ginkgo’s goal is that we can reduce the cost of doing this work year over year over year at an exponential rate, to make this technology accessible to all of the amazing ideas that are out there, whether it’s in an academic lab, in a small start-up or in a large multinational corporation.” Hopefully not in a sinister garage lab, of course.
The list of potentially very bad outcomes continues pretty much throughout the book. The arguments seem to me surprisingly convincing, including the many incentives and dynamics that drive all involved players (from states to companies, scientists, and entrepreneurs) to march ahead, powering this technological advance in an unstoppable way. In short: mankind might not be extinguished by climate change, but by its own creation of the technologies that represent ‘the coming wave’.
In Part III of the book, the authors make a worrying observation: that the aforementioned technologies get introduced to societies which are already under stress or even dysfunctional. “This is not a world ready for the coming wave” [4]. The authors make the point that in the past, very impactful technology was largely only accessible to nation states due to enormous costs (think for instance aircraft carriers, or intercontinental ballistic missiles, or nuclear-powered submarines).
In contrast, AI, synthetic biology, AI-driven robotics etc. develop and proliferate today so fast, that these technologies end up in the hands of big tech companies, research labs and smaller players down to ‘garage tinkerers’ (as Suleyman calls them). With AI, just download code from GitHub, take your savings and train a new AI model on a bunch of GPUs in the cloud. As the authors argue: the grand bargain between state and citizens (that the state can guarantee order, security, successfully manage pandemics or deliver low unemployment) may well suffer or disintegrate, causing disappointment and disbelief on the side of citizens (e.g., when their jobs get automated away by AI and robotics, or when the military cannot guarantee national security anymore in the face of technologies with asymmetric impact).
So then, what can be done?
The authors first contemplate multiple options for going forward (from shutting down research on some high-risk technology to super-tightly controlling its use), but all end up in different flavours of serious risk or some form of catastrophe.
However, optimism prevails in the book’s final Part IV, providing some form of urgently needed relief to the reader. Ten steps are outlined that should help us successfully live through this next wave of technologies. They include suggested actions for various stakeholders, from the makers of the technologies to governments and people more generally (all of us). Suleyman envisages this to be a precarious balancing act, navigating a narrow path. “I imagine containment as a narrow and treacherous path, wreathed in fog, a plunging precipice on either side, catastrophe or dystopia just a small slip away.” [5]
I have to say this fills me with some anxiety, because I’m not good at walking a tightrope. To get an idea of how it feels to walk along a narrow path, one step away from deadly catastrophe, have a look at Walter Rossini’s YouTube video “Matterhorn - balancing on the summit ridge”. Mind, in that short video clip the sun is shining. Now imagine there is fog too, and that it won’t be a single person whose misstep carries them into the abyss.
Once I finished reading the book, I asked myself (I happen to work professionally in the areas of AI and Quantum Computing): Ok, so what does it mean for me? I’m still thinking through this. However, I picked up two messages from the book:
First, if the above wave of technologies poses such serious challenges, surely we can fix it (like the ‘we’ that can fix climate change. Trust that ‘we’?). The issue: the ‘we’ is ill-defined, a plethora of players acting in misalignment, lacking coordination, driven by different incentives. In the absence of a strong and clearly defined ‘we’, Suleyman suggests that we “build a we”, and that might start very modestly with raising more awareness about the challenges ahead.
Second, the book suggests that too many visions of the future start with what technology can do. Suleyman argues that technologists should instead focus on helping to imagine and create a richer, social, human future. For example, the next generation of foundation models should not be created just for the sake of showing that we can improve dramatically over what already exists today. “Technology is not the point of the future” [6]. I think that is true in many cases, and to give a different example, in the case of 6G technologies we shouldn’t fall into this trap either.
Midway through the book I thought the future looked quite scary: from quantum computing breaking large parts of the communication whose security we take for granted today, to super-complex AI models that keep learning and refining their strategies in autonomous ways, thereby outpacing human intelligence, to AI-powered, autonomous, weaponised swarms of robots on the loose, to genetically engineered killer hornets. It feels a bit like a horror movie.
I take some comfort from the fact that, at least for the risk quantum computing poses to asymmetric cryptography, risk mitigation technology is well under way (e.g., Post-Quantum Cryptography). Regarding the other areas, I’m not so sure yet.
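To make that quantum risk a bit more concrete, here is a toy sketch of my own (not from the book): RSA’s security rests on the hardness of factoring a modulus n = p × q. Naive trial division cracks a textbook-sized modulus instantly but is hopeless for real 2048-bit keys; Shor’s algorithm on a sufficiently large quantum computer would make the large case tractable too, which is exactly what Post-Quantum Cryptography is designed to head off.

```python
def factor(n: int) -> tuple[int, int]:
    """Factor n = p * q by naive trial division.

    Feasible only for tiny n; RSA relies on this being intractable
    classically for 2048-bit moduli. Shor's algorithm would remove
    that barrier on a large enough quantum computer.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

# The classic textbook RSA modulus: 3233 = 53 * 61
p, q = factor(3233)
print(p, q)  # 53 61
```

The point of the toy: the same mathematical task, scaled up, is the wall our public-key infrastructure leans on, and quantum computing threatens to tunnel straight through it.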
Maybe the best thing is denial: it’s all exaggerated in this book, and should the future come close to the predicted concerning scenarios, we will fix it (again, this ill-defined ‘we’ …). That just reminds me of how thoroughly our world deals with climate change. Will we fix that in time? Before or after we have fixed the issues outlined in The Coming Wave?
What a lovely set of challenges we face. All in all, I much recommend the book.
[1] Mustafa Suleyman, Michael Bhaskar: The Coming Wave, The Bodley Head, London, 2023
[2] Chapter 1, p19
[3] Chapter 7, p105
[4] Chapter 9, p156
[5] Chapter 14, p276
[6] Chapter 14, p285
Senior Manager R&D, Research Clusters AI and Quantum at Vodafone
Sort of timely in relation to this thread: The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 - https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
Synthetic biology is a long way off imo. Growing or printing organs might be feasible, and 3D models for the former are being curated today. The main fear is that it will be affordable to only a few. If it's successful, it might reduce illegal organ harvesting. The concept of a minimal genome has been around since 1998 but never really fructified. Craig Venter did finally make one in 2014 or so. The hope was that we could construct life from the ground up, but it looks like we haven't been able to do much. Computing using DNA is feasible, but the advantage is unclear to me. Regulations done in a knee-jerk fashion do nothing to help a field; rather they give consultants and bureaucrats a few years to revel in word-soup stewing... Light-touch regulations to help guide industry are fine. Too many governments are trying to regulate AI while doing nothing to stop weapon use...
Great analysis. I’d add some other factors to the fire. 75% of CEOs already believe AI is fundamental for future growth. Its adoption in business has already tipped. The tech companies like Microsoft, IBM, Amazon and Alphabet are competing to outdo each other vs. restrain. The share price will out. The companies spending the most lobbying politicians are tech. McKinsey predicts 800m jobs will be replaced by AI by 2030. That is the working population of China. We have populist governments in many parts of the world, including the UK and possibly again in the USA with Trump. I can’t see who will be the grown-ups.