ML, DL, Generative AI, LLMs, ChatGPT are Quasi-AI, Machine Pseudo-Intelligence

Artificial intelligence isn't synthetic intelligence: It's pseudo-intelligence.

This is not quite the case. The issue is not the stupidity of AI, as it is presented in the article of the same title: "Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous".

All commercial, applied ML applications, be they ANNs, DL, NLP, LLMs, GPT or ChatGPT, are simulated pseudo-intelligence, with some engineered machine rote learning instead of meaningful, associative or active learning, and they are as risky and unsafe as pseudointellectuals.

"ML by itself cannot be intelligent because lacks reasoning, planning, logic, and doesn’t interact with the environment. ML detects patterns and performs predictions based on statistical analysis of data using math based algorithms. These algorithms are not intelligent per se. Intelligence is much more than that."

"So, today’s Artificial intelligence, Machine Learning, and Data Science atmosphere is charged with false stories, inflated achievements. That’s bad for all of us. Because in the end what we receive is pseudo-science".


Introduction

The British government is holding its first AI Safety Summit, bringing together heads of state and Big Tech near London. The two-day summit begins on Wednesday amid growing concerns that the emerging technology may pose a danger to humanity.

The meeting will focus on strategising a global, coordinated effort to address the risks and misuse of AI tools. The summit is centred around ‘frontier AI’, which is defined as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety”.

The Summit brought together:

  • Academia and civil society
  • Industry and related organisations
  • Governments
  • Multilateral organisations:
    • Council of Europe
    • European Commission
    • Global Partnership on Artificial Intelligence (GPAI)
    • International Telecommunication Union (ITU)
    • Organisation for Economic Co-operation and Development (OECD)
    • UNESCO
    • United Nations

The whole event is much ado about nothing, for all of today's AI, be it ML, DL, NLP, GPT or ChatGPT, is not real or true AI but statistical predictive modelling, dubbed "machine learning" and "deep learning". It is simulated pseudo-intelligence, with some engineered machine rote learning instead of meaningful, associative or active learning, and it is as risky and unsafe as pseudointellectuals.

Among many other Quora questions, I was asked: What is going on with artificial intelligence? The answer might be just to the point.

All the same.

The dot-com-like fakery, scam, fraud and mystification, only at a multi-trillion-dollar scale. The tech industry today is full of snake oil, selling fake products and false promises. And the crown jewel of tech snake oil is Narrow/Weak/Fake AI, instead of Real AI, Trans-AI or RSI (Real Man-Machine Superintelligence).

It is what is fed to us as the "junk food" of "machine learning", "artificial neural networks" and "deep learning" algorithms.

Today's pseudo-AI is the snake oil of the 21st century, led by the Big Tech ML/DL big data analytics platforms, dubbed the G-MAFIA and the BAT triad, with their startup offshoots and acquisitions...

https://futurium.ec.europa.eu/en/european-ai-alliance/posts/causal-artificial-superintelligence-casi-human-machine-general-purpose-technology-best-investment

Big Tech, such as Apple, Amazon, Facebook, Google and Microsoft, have eliminated potential rivals and concentrated brainpower in the quasi-AI field, IGNORING that:

  • Predictive analytics is not really AI
  • Machine learning is not AI; it automates regression tasks to draw n-dimensional lines through data points (see the regression sketch after this list)
  • Deep learning is not true AI
  • Business process automation is not AI; it’s a rule-based system that uses conditional processing
  • REAL Artificial Intelligence does not need masses of training data to work
  • ML is a buzzword in technology
  • ML is NOT a branch of REAL artificial intelligence that lets machines learn new skills and solve new problems
  • Without real understanding/intelligence/intellect, machine learning is UNABLE "to teach machines how to solve problems, answer questions and draw conclusions from source material without human intervention" or to "teach computers to act like the human brain by learning autonomously over time"
  • Without real understanding/intelligence/intellect, machine learning can NOT enable computers to learn how to detect potential cases of fraud across many different fields, such as finance and banking; how to identify the sentiment behind the messages a customer sends; how best to reply to customer queries; how to provide earlier, more accurate medical diagnoses, etc.
  • Large neural networks trained for language understanding and generation, dubbed large language models (LLMs), such as GPT-3, GLaM, LaMDA, Gopher, Megatron-Turing NLG, PaLM and the like, are just parroting statistical NL data-analytics software, with nil intelligence and zero understanding.
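
To make the "drawing n-dimensional lines through data points" point concrete, here is a minimal sketch of my own (synthetic data, NumPy's least-squares solver, not any vendor's product): fitting a linear model is pure error minimisation over observed data, with no reasoning or understanding involved.

```python
# Minimal sketch: "machine learning" as drawing an n-dimensional line through data points.
# Ordinary least-squares regression fits weights w so that X @ w approximates y;
# nothing here reasons or understands, it only minimises error over the data it saw.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 observations, 3 features
true_w = np.array([2.0, -1.0, 0.5])                # hypothetical "ground truth"
y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy linear signal

X1 = np.hstack([X, np.ones((100, 1))])             # add an intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)         # fit the "line" by least squares

print("fitted weights:", w)                        # close to [2, -1, 0.5, 0]
print("prediction for a new point:", np.array([1.0, 0.0, 1.0, 1.0]) @ w)
```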

Big Tech Swallows Most of the Hot AI Startups

I have written more than enough about real AI vs. fake AI, and humans as machines and/or machines as humans, so let me refer to other smart articles, such as:

Artificial Intelligence, Really, Is Pseudo-Intelligence

We should be calling it Pseudo-Intelligence, not Artificial-Intelligence

What we currently label “AI” has proven to be useful in a wide range of difficult tasks such as image/object recognition, language translation, game playing, voice transcription and synthesis, etc. But these “AIs” are not intelligent; the models behind them rely purely on statistical relationships in observed data and have no ability to reason or understand what they are doing.

It’s not artificial intelligence if it’s not even intelligence

… all the impressive achievements of deep learning amount to just curve fitting.

There are numerous, very impressive, seemingly intelligent AI models. GPT-3 can produce convincing, grammatically correct text in a surprisingly wide range of domains. Amongst many cool applications, it can write stories, blog posts, poems, summarise articles, answer questions and provide rational medical diagnoses given a list of symptoms. It can even write functioning code and produce answers to simple logic puzzles.

However, despite how convincing, even reasoned, the model outputs are, there is no reasoning occurring. GPT-3 merely predicts the next word in the sequence given the previously observed words. It doesn’t reason mathematically about whether “2+2=4”; it merely recognises that, in the data it was trained on, “4” typically comes after “2+2=”.
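
To make that "predicts the next word" point concrete, here is a deliberately crude sketch (a toy bigram counter of my own, not GPT-3's actual transformer architecture): the "answer" to "2+2=" is simply whatever most often followed "2+2=" in the training text.

```python
# Toy illustration of next-token prediction as pure frequency statistics.
# This is NOT how GPT-3 is implemented (it uses a transformer over learned
# embeddings), but the principle is the same: the continuation is recalled
# from training statistics, not computed by reasoning.
from collections import Counter, defaultdict

training_text = "2+2= 4 . 2+2= 4 . 2+3= 5 . the cat sat on the mat ."
tokens = training_text.split()

# Count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the statistically most common continuation, with no arithmetic or reasoning."""
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("2+2="))   # "4" -- recalled from the data, not calculated
print(predict_next("the"))    # whichever word followed "the" most often
```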

If there is any suggestion of reasoning in GPT-3’s output, it’s because some collection of words resembling a pattern of reasoning was statistically relevant in the training data for similar prompts or contexts. GPT-3 (or newer language models) might even be able to convincingly pass the Turing test and give an impression of consciousness, but it is just a Chinese room that lacks understanding and intentionality and is thus not actually doing any ‘thinking’.

DALL·E is another impressive model that can generate realistic and convincing images given a text description.

On the surface, it might appear that DALL·E is being creative and using its understanding of what an avocado is, what an armchair is, or even what a shape is. But again, the model output is based purely on the statistical relationships observed in the large corpus of text-image pairs that the neural network was trained on (with a very clever selection of neural net architectures and loss functions).
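
For a sense of what "statistical relationships observed in text-image pairs" with "a very clever selection of loss functions" can look like, here is a highly simplified, hypothetical sketch of a CLIP-style contrastive objective (random vectors stand in for real encoder outputs; this is not DALL·E's actual training code): the whole thing reduces to minimising a number over paired data.

```python
# Highly simplified sketch of a contrastive objective over text-image pairs.
# Real systems use deep encoders over huge corpora; here the "embeddings" are
# random placeholders, just to show that training is statistical loss
# minimisation over paired data -- no understanding of avocados or armchairs.
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 4, 8
text_emb = rng.normal(size=(batch, dim))    # stand-in for text encoder outputs
image_emb = rng.normal(size=(batch, dim))   # stand-in for image encoder outputs

# Normalise and compute pairwise cosine similarities.
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)
logits = text_emb @ image_emb.T

# Contrastive loss: each text should be most similar to its own paired image.
labels = np.arange(batch)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_probs[labels, labels].mean()
print("contrastive loss over the batch:", loss)
```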

None of today's commercial AI agents have general models of the world ... which would be essential for any intelligent agent to properly imagine and reason about scenarios it hasn’t seen.

Furthermore, all of these AIs are examples of Weak or Narrow AI. They are only useful or effective in the domains they are specifically trained for.

Is there any reason to pretend it’s not Pseudo-Intelligence?

In the future, we may produce a conscious intelligence that can reason, imagine, plan, have goals & desires, continually learn in any domain, and also have whatever other properties deemed necessary to be a true artificial intelligence. Until then, we’re just making computers apply sophisticated functions to some data or set of inputs and pretending the program is "intelligent".

Conclusion

While AI has the potential to greatly benefit society, it's important to understand that not all AI is created equal.

Understanding the difference between real and fake AI is essential in order to make informed decisions and to ensure that we are not being deceived. By understanding the technology, assessing the results, and examining the source code, we can spot the difference between real and fake AI and ensure that we are using the right AI applications for our needs.

Resources

Trans-AI: How to Build True AI or Real Machine Intelligence and Learning

Don't be stupid to be intelligent, or why superintelligences are to rule the human world

REAL VS. FAKE AI - HOW TO SPOT THE DIFFERENCE

AI as the World Disruptor: Trans-AI as the Silver Bullet to the World's Major Problems

Supplement

A pseudointellectual is:

"A person who wants to be thought of as having a lot of intelligence and knowledge but who is not really intelligent or knowledgeable.

A person who claims proficiency in scholarly or artistic activities while lacking in-depth knowledge or critical understanding.

A person who pretends to be of greater intelligence than they actually are".

