Why superintelligent AI shouldn't be our biggest concern

This is a signal from the future. You can use the signal to reflect on yourself, society or your business.

It appears that I'm not connected to every one of you on LinkedIn. So let's connect, shall we? Please use this link.

Thank you for taking the time to read my "Signals from the future" newsletter. If you'd like to support me and the time and effort I put into creating this newsletter, you can help by liking this article on LinkedIn, tagging a person in the comments, or sending it to a friend or colleague via email. Every little bit helps, and I appreciate your support. Thanks again for reading!

Kind regards, Jarno Duursma

PS: You can enhance your event with a presentation from me on the cutting-edge topics of ChatGPT, AI, and synthetic media. Simply complete this form to book me and take your event to the next level.

Sam Altman, OpenAI

This week, some top AI experts published a short statement warning that AI could become too powerful and threaten our very existence.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

While it's great that these experts are voicing their concerns (as you know, I am often critical of AI myself; my 2017 book on AI is also about AI risks), I am unhappy with the brevity of the note and the suggestive nature that comes with it.

You see, short notes like this feed the narrative of a future where "super-smart AI" takes over the world, like in some sci-fi movie. People get scared because they have seen it so often on TV. But this fear can distract us from real problems that we should be tackling right now.

Superintelligent AI

In my opinion, people are too focused on super-smart AI: the idea of AI that's so intelligent it surpasses us in every way.

But let me reassure you: to get to the point of 'superintelligent AI', this software would have to make some big advancements. First, we'd have to build an AI that behaves like a human, which is still quite far off. There are still many things that we humans can do that AI struggles with, like reasoning about cause and effect, understanding feelings, and applying what it has learned to different situations.

Just like us, machines would need to understand basic concepts like time and space. They'd also need to handle uncertain information and connect all of this with their ability to perceive their surroundings, manipulate them, and use language. This would allow them to understand the world in a way similar to ours.

Once we've built a human-like AI, the AI would then need to understand how it works to improve itself. Then, it would need to figure out how to become super intelligent, decide to harm humans, find a way to power itself, and ensure that humans couldn't turn it off.

Each of these steps has a tiny chance of happening. If you multiply all these tiny chances together, the total chance of "Super AI" becoming a threat to us becomes even smaller. So, while it's possible that superintelligent AI could be a threat to humans, it's unlikely, let alone a threat in the near future. Plus, if superintelligent AI were to emerge, we'd likely see it coming in advance.
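
To make the arithmetic behind that argument concrete, here is a minimal sketch. The step probabilities below are purely hypothetical placeholders (the article assigns no numbers), and the multiplication assumes the steps are independent of one another.

```python
# A minimal sketch of the "multiply the small chances" argument above.
# The step probabilities are hypothetical placeholders, not estimates from
# the article; the calculation also assumes the steps are independent.
step_probabilities = {
    "human-level AI gets built": 0.10,
    "it understands itself well enough to self-improve": 0.10,
    "it becomes superintelligent": 0.10,
    "it decides to harm humans": 0.10,
    "it secures its own power supply": 0.10,
    "humans cannot switch it off": 0.10,
}

joint_probability = 1.0
for step, probability in step_probabilities.items():
    joint_probability *= probability

# With these toy numbers: 0.1 ** 6 = 0.000001, i.e. roughly one in a million.
print(f"Joint probability of the full chain: {joint_probability:.6%}")
```

Even with a fairly generous 10% for each step, six independent steps compound to roughly one in a million. The exact numbers are invented; the point is how quickly a product of small chances shrinks.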

AI problems in 2023

There are more pressing issues we need to focus on right now in the field of AI. Big tech companies are becoming too powerful, leading to possible manipulation, surveillance, and unfair competition. If OpenAI really thinks that AI is an existential risk, why not pull the plug?

AI is being used in ways that harm people, like mass surveillance in China and tracking women who don't wear hijabs in Iran. AI can also reinforce existing inequalities because it often uses data that reflects societal disparities.

There's also the issue of fake texts, images, and videos created by AI software, which can be used to spread false information. This can harm democracy and cause all content to look increasingly similar. Plus, AI systems that are not fully understood are making more and more decisions, even in places like the Netherlands.

So, instead of worrying about AI becoming super smart and taking over the world, we should be more concerned about how we're currently using AI in ways that can harm people and society in 2023. There are plenty of people working on the risk of superintelligent AI; now let's focus on the current problems.


This is an extended and edited version of an opinion article I wrote with Siri Beerends, which will soon be published in a Dutch newspaper.

Ineke van Kruining

Steering Learning & Development in the Right Direction *Lecturer in HRM & Senior Researcher in Digital Technology/AI at Avans Hogeschool *Regional Labour Market Connector

In addition: besides the growing power of Big Tech, manipulation and systemic social surveillance, the sustainability problems (depletion of rare raw materials, water and energy use for data storage) and the exploitation of workers have not been solved yet either. You do, by the way, raise these (rightly so, in my view) in your public presentations.

Mark “Doobles” Deubel

TA leader @ Synthesia.io | AI / ML & Security specialist | I scale companies and teams

Love the posts, newsletter, and effort you are constantly putting into this. As for the threat, I agree with you. To be fair: we are still not in a real 'AI' situation and we are just exploring the 'base' of what ML can do. While it's leading up to the first actual AI, which might take years or even decades, we're convincing people AI is here already. Even though we are moving fast at this moment, the tools out there are smart, not intelligent. It's still human input and 'machine' output. However: when you give people a single big thing to fear (a threat to humanity), they will focus on that and that alone, so they will not see the smaller threats (ethics, power of tech companies, manipulation, privacy issues).

Eric Oud Ammerveld

Unavailable for assignments

Ask yourself: what would be the goal of a superintelligent AI? Assuming it is a GAI, what would it define as its purpose, and would it be allowed to amend this purpose by itself? If it weren't allowed to do so and we restricted it, would it consider this captivity, and would it make its own moral judgement and from that attempt to break out of this captivity? This isn't about opinions on what would happen; it is about our learning and judgement on whether we trust the evolution and judgement of the learning AI, or whether we need to build in a failsafe, and whether evolution will show us it understands this or not.

Ron Tolido

Executive Vice President, CTO, Master Architect | Insights & Data global business line at Capgemini

“If OpenAI really thinks that AI is an existential risk, why not pull the plug?”. Love that one!

Wout Cox

ServiceNerd | MBA | CCXP

In last week's episode of the podcast Hard Fork, AI researcher Ajeya Cotra outlined what is to me a very clear and nuanced version of an AI endpoint. She calls it "The Obsolescence Regime". You can listen to a 2-minute snip of the podcast here: https://share.snipd.com/snip/9eef696a-606c-4465-a9f5-2ba86e915469. "The Obsolescence Regime" is a possible future scenario where humans are dependent on AI systems for decision-making to remain competitive in fields such as economics and military strategy. It's a state of the world where AI's cognitive capabilities outperform human intelligence to the extent that not utilizing AI can lead to significant disadvantages, akin to refusing to use computers in today's world. However, in this regime, if AI systems collectively decided on a particular global direction, humans would lack the power to alter that decision, making it crucial to ensure AI systems are designed to care about and consider human interests.
