Nexus (Harari 2024) — A Book Review
What if the greatest danger from artificial intelligence isn’t killer robots, but invisible forces shaping our choices? In Nexus, Yuval Noah Harari dives into the world of information networks, exploring how humans have always been storytellers — believing myths, spreading lies, and building societies around shared fictions. But here’s the twist: now machines are learning to manipulate these stories. From the printing press sparking witch hunts to Facebook algorithms fueling real-world violence, Harari shows how every leap in information technology changes us — for better or worse. AI is the latest revolution, but unlike earlier tools, it can make decisions and act on its own. Are we ready for a world where machines can write the script of our lives? Or have they already started? This book may make you question everything you think you know — before the machines decide for you.
What if the stories that hold our world together are also the very things that could undo it? In Nexus, Yuval Noah Harari probes this unsettling possibility, tracing the evolution of information networks from ancient civilizations to the age of artificial intelligence. But Harari’s work isn’t just about networks or technologies; it’s about how these tools have shaped who we are and how they could change what it means to be human. With his trademark storytelling, Harari weaves historical anecdotes, warnings, and insights together with humor and thought-provoking questions. And just when you expect a dystopian vision of killer robots, he drops a surprise — what we should really fear isn’t robots with guns but AI systems quietly manipulating our decisions without us even realizing it.
From the very beginning, Harari hooks readers by showing that not all information seeks to tell the truth. Lies, fantasies, errors, and even myths are just as much a part of information networks as facts. In fact, humans have historically relied on fictions — stories about gods, empires, and economies — to cooperate in large numbers. It’s as if our brains are hardwired to prefer a good story over a boring truth. Think about Santa Claus. Even though children eventually realize he isn’t real, the story brings people together in a collective celebration, binding families and communities. That’s the power of shared narratives: they can unify us, even if they’re not true. But here’s the catch. When these myths are used irresponsibly — as in propaganda or under authoritarian regimes — they can be just as dangerous as they are unifying.
Harari takes us on a whirlwind tour of history, illustrating how information technologies — from ancient manuscripts to modern-day algorithms — have driven human progress and chaos alike. One striking example is the Malleus Maleficarum, a 15th-century manual that fueled witch hunts across Europe. This book, filled with dangerous misinformation, spread rapidly thanks to the newly invented printing press, leading to countless deaths. Harari uses this example to show how each leap in information technology — whether it’s the printing press, radio, or the internet — comes with both immense power and unforeseen consequences. It’s like inventing the airplane; suddenly you can fly across the world, but you also have to deal with turbulence, crashes, and jet lag.
The arrival of AI, Harari argues, marks a new kind of turbulence — one that humanity isn’t fully prepared for. Unlike previous technologies that required human input, AI systems can make decisions independently. Imagine a self-driving car deciding which route to take without consulting you. Now extend that to an AI running financial systems or military operations. The unsettling part isn’t just that AI can act on its own; it’s that its decisions might be impossible for humans to understand. Harari compares it to giving a toddler a smartphone filled with complex apps — except this time, we’re the toddler, and the smartphone is controlling our infrastructure, politics, and personal lives.
One of the most chilling examples Harari offers is the role of AI-driven algorithms in the Rohingya crisis in Myanmar. In 2016–17, Facebook algorithms designed to maximize user engagement began amplifying hateful propaganda. This misinformation fueled ethnic violence, showing that AI doesn’t need to send robots to control us — it can simply manipulate the information we consume. It’s like the whisper game we played as kids, where a phrase gets distorted as it’s passed along, but with tragic, real-world consequences.
Harari draws a sharp comparison between democracies and dictatorships, showing how the two systems handle information differently. Dictatorships, he explains, prioritize control over truth, manipulating data to maintain power. Democracies, by contrast, thrive on openness and self-correction, allowing citizens to challenge misinformation. The twist is that AI can disrupt both. It can entrench authoritarian regimes by enabling mass surveillance, or it can destabilize democracies by spreading misinformation and polarizing public opinion. Imagine democracy as a tightrope walker trying to keep their balance. AI, in this scenario, is like a gust of wind — capable of either steadying the walker or toppling them, depending on how it blows.
But Harari doesn’t just leave us with doom and gloom. He offers a glimmer of hope, reminding us that technological problems are solvable when the right minds tackle them. Take email spam. A few decades ago, spam clogged inboxes and wasted countless hours, but better filtering algorithms eventually allowed companies like Google to block 99.9% of it. The example is a reminder that human ingenuity can tame even the most chaotic technologies. The challenge with AI, however, goes beyond technical fixes. It’s a moral challenge: how do we decide what kind of society we want when machines are part of the decision-making process?
Harari emphasizes that AI’s impact won’t be limited to algorithms running behind the scenes. It will reshape human relationships, governance, and even our sense of self. Think about GPS. It’s a helpful tool, but over time, many of us have become so dependent on it that we’ve forgotten how to navigate on our own. Now imagine that dependency on a much larger scale, where AI not only guides our routes but also influences our political views, spending habits, and even whom we date. At what point do we stop being independent thinkers and become puppets in a world run by algorithms?
Harari’s narrative is filled with humor, too. At one point, he pokes fun at the techno-optimists who believe that AI will solve all of humanity’s problems. It’s like someone bringing a pet parrot to a courtroom and insisting that it can argue cases better than lawyers. Sure, it’s a funny idea, but also a dangerous one if taken seriously. The risk, Harari warns, isn’t just that AI will take over our jobs — it’s that we’ll become complacent, assuming the technology will always act in our best interest.
Harari also points out that AI’s rise isn’t happening in isolation. It’s unfolding against a backdrop of global economic and political instability. Governments are scrambling to regulate AI, but progress is slow. As Harari notes, much of the talk around AI regulation has been “aspirational” at best — big on promises, short on action. Meanwhile, tech companies continue to develop AI systems at breakneck speed, driven by the lure of profits. It’s like racing down a highway with no brakes, hoping an exit appears before it’s too late.
Yet, Harari insists that the outcome of the AI revolution isn’t set in stone. Just as societies adapted to previous information revolutions, we have the ability to shape how AI integrates into our lives. The question is whether we’ll act fast enough. History shows that those who control information wield enormous power. If we leave the regulation of AI to tech billionaires and private corporations, we risk losing our ability to govern ourselves. It’s like letting a child play with matches in a room full of fireworks — something will explode sooner or later.
In the end, Harari’s Nexus isn’t just a book about technology; it’s a call to action. He urges us to take control of the narrative before AI systems begin writing it for us. And while his warnings are sobering, they are also empowering. The power to shape the future, he reminds us, still lies in human hands — for now. The real question is whether we’ll rise to the occasion or sit back and let the algorithms decide for us.
So, as we stand on the brink of this new era, the question we need to ask ourselves is not just what kind of world AI will create. It’s deeper than that. What kind of world do we want to live in — and will we have the courage to build it before it’s too late?
Reference
Harari, Y. N. (2024). Nexus: A brief history of information networks from the Stone Age to AI. Penguin Random House. https://www.ynharari.com/book/nexus/