Can We Change the Direction of The Coming Wave?
Image credits: Pixabay and Unsplash

I just finished reading an absorbing book on AI by one of its leading practitioners, and I was hoping it wouldn’t be one of those rapturous, rah-rah books on the subject. Thankfully, The Coming Wave by Mustafa Suleyman, co-founder of DeepMind (which Google acquired) and of Inflection AI, isn’t that kind of book; it takes a balanced view of both the immeasurable advantages that AI can bring and the irreparable damage it can wreak if not channelled and regulated properly.

The Coming Wave begins with a lengthy explanation of why technology comes in waves, with plenty of examples from history that illustrate how the real benefits of some of the major inventions of the previous three centuries continued to unfold long after the technological breakthroughs themselves. The author uses this analogy to help us realise that what little we have experienced of AI in the past, and what we see today, is but a brief glimpse of what the technology is capable of in the wave yet to come. Each big new invention or idea creates its own momentum, unleashing a cascading effect across many areas of our lives until it achieves a critical mass that then produces the next set of inventions and advancements. “We ain’t seen nothing yet” is his message to us, his general readers.

The book begins rather slowly, with perhaps too much explanation built on well-known examples from the past that seem unnecessary. Thankfully, it is well written, though I still wished many times that the author would get to the AI part a little sooner.

Suleyman presents the dilemmas of AI technology to us upfront. He writes that until now everything around us – with the exception of nature – has been shaped by human intelligence. But that could change with the coming wave, when two technologies, AI and biotech, coincide and converge, upending our entire existence on earth.

“The coming wave is defined by two core technologies: artificial intelligence (AI) and synthetic biology. Together they will usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen. And yet, their rapid proliferation also threatens to empower a diverse array of bad actors to unleash disruption, instability and even catastrophe on an unimaginable scale. This wave creates an immense challenge that will define the twenty-first century: our future both depends on these technologies and is imperiled by them.”

Why the AI wave is difficult to contain is what Suleyman answers in the book; Image: Pixabay

Suleyman expresses deep scepticism over whether we can actually “contain” the AI and biotech wave that is coming. He borrows the term “containment” from international relations and foreign policy, and even quotes George Kennan, its leading proponent. It seemed rather early in the book to be raising containment, I thought, when he had yet to tell us where the real concerns and dangers lie. I realised, however, that the author is only expressing his apprehension that we have never actually tried to contain any technology before.

Part II of the book, titled The Next Wave, and chapters 4 and 5 in particular, are really its core. In these chapters Suleyman writes about the technology of intelligence and the technology of life – about AI and biotech – in greater detail. The benefits these technologies promise, how they are being developed and how they will transform our lives are brought to life with several examples and facts. For instance, he writes that early AI was developed by teaching machines how to play games, in order to see what else they could learn. He spends a lot of time on the East Asian strategy board game of Go – popular in China, Korea and Japan – and how it played an important part in AI development, as did IBM’s Deep Blue beating Garry Kasparov at chess. He also writes about the development of biotech, which was initially used to develop better and hardier strains of food and is now exploring new frontiers with synthetic DNA – even human DNA. Some of what he describes can seem like something out of science fiction, but he warns us that it is real: synthetic DNA, gene editing, genome sequencing and, well, even DNA printing!

For those of us who have read Yuval Noah Harari’s books, especially Homo Deus and 21 Lessons for the 21st Century, a lot of what is written in this section might already seem familiar. Yet what we get here is an insider’s and practitioner’s first-hand view of the new technologies that are here or are coming. What is of concern to Suleyman, and ought to be of concern to all of us, is the sheer pace of development of both these core technologies, and therefore the risk of hyper-proliferation. Both the pace and the proliferation are only likely to grow, he argues, because the costs are continually coming down; it is the economics that seems to determine how quickly these new technologies spread. Besides, AI is at the core of the confluence of the two, and academic research as well as patents in these areas have grown exponentially. Suleyman writes:

“Industry research output and patents soared. In 1987, there were just ninety academic papers published at Neural Information Processing Systems, at what became the field’s leading conference. By the 2020s there were almost two thousand. In the last six years there was a six-fold increase in the number of papers published on deep learning alone, tenfold if you widen the view to machine learning as a whole. With the blossoming of deep learning, billions of dollars poured into AI research at academic institutions and private and public companies. Starting in the 2010s, the buzz, indeed the hype, around AI was back, stronger than ever, making headlines and pushing the frontiers of what’s possible. That AI will play a major part in the twenty-first century no longer seems like a fringe and absurd view; it seems assured.”

The sheer pace of development and proliferation, however, is not the only factor preventing any sensible containment and regulation. Suleyman dedicates an entire chapter to the well-entrenched incentives that operate in the techno-industrial landscape and writes about them in some detail: national pride and strategic importance, China’s AI programme, the arms race, and AI even being used as an instrument of foreign policy. I think national pride morphed into national security long ago, and I would add AI’s use in military defence equipment as one of the biggest incentives as well as dangers. Investors and their funding of the AI wave also have to be counted among the entrenched incentives.

Synthetic DNA and gene editing are the future; Image: Sangharsh Lohakare on Unsplash

Part II also has chapters on several other new technologies that promise to change our world – from robotics, quantum computing and advanced biotech to nanotechnology and nuclear fusion, the great big hope of a cleaner and safer energy future. On what comes after the coming wave, in the second half of the twenty-first century, Suleyman writes:

“All the elements of AI, advanced biotechnology, quantum computing, and robotics combine in new ways, prepare for breakthroughs like advanced nanotechnology, a concept that takes the ever-growing precision of technology to its logical conclusion. What if rather than being manipulated en masse, atoms could be manipulated individually? It would be the apotheosis of the bits/atoms relationship. The ultimate vision of nanotechnology is one where atoms become controllable building blocks, capable of automatically assembling almost anything.”

In a separate chapter, The Coming Wave also deals with the four differentiating features of AI and why these are unlike anything we have encountered before. The author writes about asymmetry and the transfer of power, acceleration, omni-use and autonomy, and why we should care about them. In my blog posts on AI, including the one on the 2024 Davos Summit of the WEF, I have raised concerns about the autonomy dimension of generative AI as being particularly dangerous and open to misuse. I am not a techie, but in my opinion it is autonomy that cedes power to machines, reversing the asymmetry between man and machine in a way that might prove irreversible. And by the time we realise this, it might be too late – the genie will be long out of the bottle.

Part III of the book, States of Failure, deals with the role of the state, the importance of democratic institutions and their precarious future under the coming wave. Here the book once again loses its grip on the main subject, and the author rambles somewhat about the various threats that AI and the twin-technology confluence pose to human beings and to countries. In the process Suleyman fails to make a clear and convincing argument for why these technologies need to be regulated with care. A real pity, because he writes throughout the book of the unprecedented nature of these technologies and the need for containment, yet just when he needs to clinch the argument, he sadly seems to lose the plot. This makes the reader wonder about the seriousness of the author’s opinions and intents, not least because there are several sections where the starry-eyed AI programmer and his dreamworld ideas slip through, despite his efforts at containing them.

One such instance is when Suleyman writes about LaMDA (Language Model for Dialogue Applications), which his team was developing at Google. Designed as an LLM (large language model) for conversation, it amazed him and his team with the kinds of conversations they could have with it for hours. Sadly, it also led to an engineer named Blake Lemoine losing his job after he went public with the claim that LaMDA was a sentient being that deserved the full rights and privileges of personhood.

While Suleyman writes about the dangers of AI technology falling into the wrong hands – those of state and non-state actors, and even terrorist groups, among many others – he does not write about the dangers of AI being misused by the state itself against its own people. He writes about China as a highly advanced AI power and about how its government funds and uses AI technology in 24×7 surveillance of its citizens, so surely he is aware that any authoritarian state – including a military junta – can misuse AI at will, with no fear of punishment, since there are as yet no checks and balances in the AI world.

When it comes to regulation, he writes about the EU’s AI regulations as well as the Chinese ones, which are quite advanced and sophisticated, even if, in effect, “Chinese AI policy has two tracks: a regulated civilian path and a freewheeling military-industrial one”, as Suleyman puts it. Regulation, according to Suleyman, is only a start, but a necessary first step towards containment. Yet even in the last chapter of the book, Ten Steps Toward Containment, the author doesn’t actually manage to detail some of the most important aspects of containment or regulation – for example, restricting general-purpose AI technology, disallowing autonomy in AI development, or the need to regulate the underlying technology rather than only its uses and applications.

As it turns out, governments have largely left it to their private sectors and academic and research institutions to develop AI as they see fit, choosing not to regulate it. And strangely enough, Suleyman seems to be of the view that governments should build their own AI capabilities and expertise and not be dependent on the private sector. After seeing what happened with Edward Snowden and the revolving door between government and private companies, I would tend to agree. Besides, as I keep saying, if governments themselves depend on big tech companies, how can they be expected to regulate them?

The Coming Wave is a book that helps the general reader understand both the bright future and the dangers of the coming wave of AI-led technologies, and policymakers and regulators would do well to read it. With more informed policymaking, might we all expect to be safely on the shore when the coming wave crashes in?



Note: At the time I wrote my blog post on the 2024 Davos Summit of the World Economic Forum, I hadn’t had a chance to watch a panel discussion on The Hard Power of AI featuring the author of the book reviewed here, Mustafa Suleyman, among others including Nick Clegg of Meta. It is worth watching, though I had expected the discussion to be about AI’s use in defence and military equipment and the future of warfare – yet another controversial area of AI technology – and it isn’t.

https://www.weforum.org/events/world-economic-forum-annual-meeting-2024/sessions/the-geopolitical-power-of-ai/

I must mention that the book is co-authored with Michael Bhaskar, though nowhere in the book is there any reference to him except in the acknowledgements section, where it says that Michael would like to thank his co-founders at Canelo and his family. Elsewhere, at the end of the book, we are told that Michael is a writer and publisher in the UK. I won’t be surprised if this is once again the mischief of the unprofessional idiot bosses at the PR agency, who I suspect interfered with this year’s WEF Davos Summit as well.


This article was originally published on my blog on May 23, 2024.
