Super AI scares the pants off me
Source: Future of Life Institute

For a long time, the main threat people saw in AI was that it would make itself smarter than humans (because it is self-learning) and then conquer the world. Many pictured a dangerous, shooting robot that would kill and enslave people at random. Unrealistic in 2023, of course, when we are still confronted daily with malfunctioning voice control in our cars and the internet serves up deepfake images of people with six fingers. Why worry, many people seem to think. Yet the combination of different AI models with ever-increasing computing power into Super AI is now suddenly alarming insiders and scientists. There are even proposals to freeze Super AI development for six months, an eternity in the exponential world, such as the Future of Life Institute's open letter "Pause Giant AI Experiments" (https://futureoflife.org/open-letter/pause-giant-ai-experiments/) and its accompanying policy brief "Policymaking in the Pause". Why this call for reflection on Super AI now? Perhaps because AI development and capabilities are suddenly accelerating at an incredible pace. Below I explain what is going on and why I, too, am very worried about the development of Super AI.

Since the arrival of GPT-3 and GPT-4, people have become acquainted en masse with generative large language models. You ask a question and get a surprisingly good written reply that is strong not only linguistically but also in terms of subject matter; high school pupils and university students are having a blast. The combination of AI models and increasing computing power has produced surprising leaps, and with them surprising new AI skills. The main concern is that generative large language models suddenly turn out to be better (to have much higher accuracy) than specialized machine learning models: the latest models simply appear able to solve problems for which they were never developed. And, here it comes, the makers themselves have no conclusive explanation for this. In the race to apply Super AI commercially, we have ended up in a world with unknown risks. One possible explanation is that AI models could train themselves by generating the very training data they learn from: a kind of 'perpetuum mobile' of learning, with perhaps infinite knowledge and applications as a result.

Will AI eventually build that shooting robot by itself? I don't think it will get that far. The danger lies much closer to home, namely in the further growth of the social problems we already know from social media and have been unable to solve or manage for twenty years: information overload, screen addiction, the damned influencer culture, polarization, fake news and alternative facts, and ultimately (see the deployment of digital thugs in the USA, Poland, Hungary and Israel) the undermining of our democracies. Not to mention China, Russia and Saudi Arabia. What Super AI, with its extremely fast combined models, adds to this problem is that it will be used via apps and filters (TikTok) to influence people's behavior further and thereby train the AI further. This leads to ever-growing effectiveness in convincing people of something, usually something bad. With the exponential turbo power of these AI models, fake news develops into fake everything: the difference between real and fake becomes blurred in images, speech, data, and so on. Current cyber security is no match for AI-enhanced cyber weapons that, through the automated exploitation of computer code, will lead to an information security crisis and therefore (!) to draconian countermeasures.

What we seem to be heading for with Super AI is not the shooting robot, but a complete breakdown of interpersonal trust. And that affects everything and everyone and disrupts our society. That is why we must immediately stop developing Super AI and think together about the right way forward.

Nico Beenker

This is what OpenAI wrote in reply to the following question: Write a critical review on the possible misuse of generative large language models and other advanced AI systems, and name countries, and sources, that are known for using AI against their citizens. https://youtu.be/s77uRVs0Xlk
