Yuval Noah Harari vs. AI
From The Economist: Yuval Noah Harari argues that AI has hacked the operating system of human civilisation

There’s really nothing good or bad in the world - it’s all just a matter of how we use things. Nuclear energy was also invented with good intentions, but the human ego took over and we ended up using it to destroy each other.

At the end of the day the tech won't really add anything to our lives unless we change ourselves - that’s the future of humanity.

Here is the rule:

The world is divided in two with regard to every aspect of our lives:

1) Those who see a benefit for their ego will always be for it.

2) Those who don't see a benefit for their ego will always be against it.

This is the real reason why Yuval Harari is against AI: he doesn't see a benefit for his ego. Go check every single issue in our world according to this rule and see who is for it and who is against it.

“Yuval Noah Harari argues that AI has hacked the operating system of human civilisation” (The Economist)

I wish our celebrity historian would stick to history and not try to analyze the future based on his understanding of the past.

As cool as it sounds, it's not AI that's "hacking the operating system of human civilization." To make a statement like that, you first need to understand what the operating system of humanity is, which he clearly has no clue about.

According to his article in The Economist, our operating system is what we use to communicate: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

My Commentary

It's not our ability to manipulate words we need to worry about, but rather our desire to manipulate each other. And when people look back at our era, they'll see that we began manipulating the things we use to communicate a few decades before AI came along. There's all the swearing and violence embedded in our culture and entertainment. Superheroes who kill and hurt other people are the height of coolness. It will be interesting to see historians in the future trying to figure out why we insisted on forging such a negative and warped environment for ourselves.

Our brain is a product of our desires, which come down to the desire to receive for ourselves or to do for others. Computers can't define desires. Once we define our desires as plus or minus, computers can calculate our actions, meaning what we do with our desires. But at the end of the day, computers will never be like us and we'll never be like computers, because they're limited to mathematical calculations. Even if they perform millions of calculations, they can never break free of that into the world of thoughts and imagination.

A human can do anything except make computers work with intention, whether the intention of receiving for our own sake or of bestowing for the sake of others. We also can't change this in ourselves on our own, but we do have the ability to attract the positive force in nature that will help us change our intentions behind the actions we take. We're still going to do the things we normally do; the only thing that will change is the intention behind everything we do.

If we could use AI or an algorithm to fix human relations, more specifically to get us to treat each other well, that would be great - but the challenge would be getting us humans to go along with it. No one would agree to it, so it wouldn't work. We know how to be good people, but no one is willing to create a plan for making us actually behave that way toward one another. That's because our true nature - our ego - controls everything.

So the bad news is that the more we advance with all this innovation while ignoring what's really going on with humans, the worse things will get in the world. Our only way out is to understand that using all this tech before we've done something about our nature will only harm us. We can already see how we're using all these advanced technologies to kill each other. So ideally we would invest in changing ourselves, mainly how we relate to our fellow man. Where are the "programs" for that?

We need a lot of knowledge for our advancement, but it all needs to be aimed at helping people understand that we need to change.

We're actually quite primitive, because even in this day and age we're not capable of looking inside ourselves. After all, who created AI and nuclear energy? Us humans. We need to understand that if we don't transition people from bad to good, we won't last here much longer. Nothing else will get us through this transition with the minimum amount of suffering.

What kind of attributes do we need to aspire to? Being good and giving. Actions are good, but this is really about our thoughts and intentions toward others.

We can use tech to demonstrate how much we want to harm each other. And it would be really useful if we could figure out a way to use tech to persuade people to change their inner nature, rather than place their future in the hands of computers.

Osher El-Netanany

Technology Consultant at Tikal - Fullstack as a Service

1 year

You're absolutely wrong. He is not against AI - he says very clearly that research on AI comes with many benefits and should continue, and he gave the example of researching medicines: that must carry on, but be released to the public only after it is made safe. He argues a few things:

1. The release of AI to the public domain should come only after AI is made safe.

2. Democracy is a conversation. By gaining mastery over language, AI becomes disruptive to our ability to conduct a meaningful discussion - if we don't regulate it in time, we will not be able to regulate it in a democratic way, if at all.

3. Regulation should start with prohibiting the release of new AI tools to the public domain before they are proven to be safe, and with requiring AI agents to disclose that they are AI.

To this I'll add: you may want to hold off on the discussion until you find out what "safe" means with AI - however, at the pace we converse and regulate, that term can only be defined by discussing it in order to regulate it, not the other way around.

Alex Gutman

Marketing Specialist @ InsideCRO | Content/Product Marketing | Growth Hacking | Marketing AI Prompt Engineering | PR

1 year

Additionally, here is his boss saying that we will eat bugs and be happy. We will own nothing and be happy, and this is why Tucker was fired for telling the truth. https://m.youtube.com/watch?v=h-df7v4yveI

Alex Gutman

Marketing Specialist @ InsideCRO | Content/Product Marketing | Growth Hacking | Marketing AI Prompt Engineering | PR

1 year

He’s associated with the greatest evils on the planet. HE is evil. Climate change is a scam and they’ve been prepping the world to comply with their BS.

回复
Dan Livshitz

Empower the Future with Cutting-Edge Technology

1 year

In addition to my previous comment, I would like to point out that your critique of Yuval Noah Harari seems to overlook some important aspects of his work. By accusing Harari of not seeing the benefit of AI for his ego and implying that his opposition to AI is self-serving, you may be presenting a misleading interpretation of his views. Harari's writings are known for their complexity and multi-layered analysis, and reducing his arguments to a matter of ego-driven motives does not do justice to the depth of his ideas. His concerns about AI should not be dismissed as mere self-interest, but rather considered as part of a broader discussion about the ethical implications of AI and its potential impact on human society. It is crucial to engage in meaningful and constructive dialogue about the role of AI and its potential consequences, rather than misrepresenting the views of those who contribute to the discussion. By focusing on the intentions and ego of individuals like Harari, we risk undermining the importance of addressing the significant challenges and opportunities presented by AI and its role in shaping our future.
