"Nexus" by Yuval Noah Harari. A book review- part 2
Maciej Szczerba
Executive Search | Podcast host at "Past, Present & Future" on YT | Besides: "I'm Winston Wolf, I solve problems"
As promised a week ago, here is the second part of my review of Yuval Noah Harari's book ‘Nexus’. If you have not read the first part of the review, I invite you to do so here:
The book is so dense with ideas that I had to split the review into two parts.
Let's start with what the second half of the book (which is divided into four parts) is about. In part three, Harari describes the various pathologies of the system that Big Tech has built on its algorithms, and explains why Big Tech's business model is socially dangerous.
As an example, the author cites Facebook's promotion of radical content posted by Buddhist clerics in Burma (2016-17), encouraging the genocide of the Rohingya, a Muslim minority.
According to Harari, between 7,000 and 25,000 civilians were killed, between 18,000 and 60,000 people were raped, and some 730,000 Rohingya had to flee to Bangladesh.
The numbers make you think, don't they?
What is more, a UN mission found in 2018 that, through the distribution of hate speech, Facebook had played a ‘determining role in the campaign of ethnic cleansing’.
The business model of many social media platforms preys on controversy and controversial content. As Facebook essentially admitted in a media statement at the time: ‘It's not us who are to blame, it's our algorithms.’ Facebook's algorithms had been programmed to seek out controversial posts because they generated the most engagement. But, as Facebook argued in its own defence, the algorithms had become increasingly autonomous: they were finding the content to promote on their own, and that content included rants calling for ethnic cleansing.
This points to the greatest task of our time when it comes to AI: to keep humans in the loop.
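To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of what ‘optimising for engagement’ can mean in practice. It is not Facebook's actual code; the posts, fields and weights are invented for illustration. The point is only that such an objective knows nothing about truth or harm.

```python
# Illustrative only: a toy "engagement-maximising" feed ranker.
# The posts, fields and weights are invented; real recommender systems
# are far more complex, but they share the same objective: predicted engagement.

posts = [
    {"text": "Local charity feeds 200 families", "p_click": 0.02, "p_share": 0.01},
    {"text": "Inflammatory rumour about a minority", "p_click": 0.09, "p_share": 0.07},
    {"text": "Weather forecast for the weekend", "p_click": 0.03, "p_share": 0.00},
]

def engagement_score(post):
    # The objective rewards whatever users are predicted to react to,
    # regardless of whether the content is true or harmful.
    return 0.6 * post["p_click"] + 0.4 * post["p_share"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f'{engagement_score(post):.3f}  {post["text"]}')
```

Run it and the inflammatory item lands at the top, not because anyone chose it, but because the objective rewards whatever people react to most.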
In the book, Harari repeatedly uses the example of the fifteenth-century inquisitor Heinrich Kramer's ‘Hammer of Witches’ (the Malleus Maleficarum). That publication resulted in pogroms (if not outright genocide) against many thousands of people, mostly women, accused of ‘witchcraft and contact with Satan’.
Today, instead of ‘witches’, it is, for instance, the Rohingya. Who will it be tomorrow? Most likely, whoever AI chooses.
I certainly agree with Harari on one thing: ‘Democracy is a conversation, and the conversation depends on language’. Algorithms do not talk to us.
Harari's message here is unequivocal to me. We are at a point where Big Tech can no longer keep telling us: ‘We are just a platform; it's the users who do what they want when it comes to hateful or unwanted content, and our algorithms merely spread it’. There needs to be a shift towards Big Tech being more accountable, and most likely regulated.
Of course, as Harari reiterates, a kitchen knife is meant mostly for cutting vegetables or meat. But it is also sometimes used as a murder weapon (at least in crime novels). And in extreme circumstances, a kitchen knife can even be used for life-saving surgery. So it is with AI.
And another interesting and memorable statement by Harari: ‘Not every citizen needs a PhD in computer science, but everyone needs an understanding of how computers make policy today’.
AI may assist in the development of new drugs, but it may also assist the Iranian regime in tracking women who do not wear the hijab. This is the system of surveillance behind the death in custody of the young woman Mahsa Amini.
A huge threat to society and democracy is posed by so-called ‘social scoring’ programmes. In China, AI assesses your social behaviour and awards you positive and negative points. Did a camera catch you crossing the street at a red light? Were you a day late with your tax payment? Or perhaps it even counts how much alcohol and tobacco you buy in the shop?
Nobody knows exactly how the Chinese social score works.
In a dystopian scenario (in my opinion, not at all unlikely), social scoring spreads to Western countries.
How does that make you feel, if you imagine it happening in your own country?
According to Harari, we have reached a watershed moment in history, when non-human intelligence is taking over from human intelligence.
And here we return to Harari's main thesis, repeated throughout all his books: man differs from animals (even our dear chimpanzee brothers) in the ability to create shared myths, such as the social order, money and even religion. What if AI can produce the same kinds of myths? New power regimes, new types of money (like CDOs squared) and even new religions?
Computers are biased. Because the data we feed them are biased.
There is a well-known case from around 2017, when a recruitment algorithm at IBM favoured white men while black women were discriminated against. The reason: there were far fewer black women in the database of candidates it was trained on. Such examples could be multiplied.
But, as Harari says, algorithms ‘think’ they know humans better than humans know themselves. In reality, they are imposing a new order on our lives, an order that we do not understand.
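To illustrate how that bias gets in, here is a tiny, made-up sketch in Python. It is not the system from the case above, whose internals are not public; the groups, numbers and the ‘model’ are invented. It simply shows that a model trained on a historically skewed candidate database reproduces the skew.

```python
# Illustrative only: invented data showing how a skewed training set
# produces a skewed model. This is NOT a real recruitment system or dataset.
from collections import Counter

# Historical hiring decisions the model learns from: group_a is heavily
# over-represented among past hires, group_b barely appears at all.
training_data = (
    [("group_a", "hired")] * 80
    + [("group_a", "rejected")] * 40
    + [("group_b", "hired")] * 2
    + [("group_b", "rejected")] * 8
)

counts = Counter(training_data)

def hire_probability(group):
    # A naive "model": estimate P(hired | group) from historical frequencies.
    hired = counts[(group, "hired")]
    total = hired + counts[(group, "rejected")]
    return hired / total

for group in ("group_a", "group_b"):
    print(group, round(hire_probability(group), 2))
# Output: group_a 0.67, group_b 0.2 -- the model faithfully learns the
# historical imbalance and, used as-is, would keep recommending group_a.
```

Nothing in the code is malicious; the unfairness comes entirely from the data it was given.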
In the fourth part of the book, ‘Computer Politics’, Harari shows the impact of AI on democracy. He rightly, in my view, notes that civilisation is a marriage of culture and bureaucracy. AI is capable of mastering both culture and bureaucracy.
What to make of it all?
The entire book ‘Nexus’ is one big pamphlet in defence of democracy. And praise to the author for that.
For me, the key concept from the book is ‘mutualisation’: if Big Tech and the government know what they know about you, you should be able to know why they know it. Telling us that even they don't understand how the ‘black box’ of neural networks works is not enough.
Everyone's focus is on China, where judges hand down criminal sentences in collaboration with AI. Of course, the criminal code still applies, but at the same time the judge has access to an AI-assisted program that suggests what sentence to pass. Is it only in China?
Check out the Loomis v. Wisconsin (2016) case for yourself. There is no space to discuss it further here; suffice it to say that an American citizen was given a harsher sentence because the judge relied on the recommendations of an algorithm.
So how do we defend democracy in the age of AI? Or, to get off the high horse a bit, how do we defend our own human sovereignty in the age of AI?
1. Regulation. Many people think the European AI Act is too restrictive. After reading ‘Nexus’, I don't think so. At last there is a ‘human in the loop’.
2. The problem is, and will always be, bureaucracy, which needs to learn how to regulate AI.
3. Since civilisation = bureaucracy + culture, we need attractive stories, at the level of whole societies, about what AI really is. The majority of the world's population does not know.
4. Additionally, bots pretending to be people probably need to be banned.
One more piece of advice? There will be another short post on this topic, so stay tuned this week!