ML, DL, Generative AI, LLMs, ChatGPT are Quasi-AI, Machine Pseudo-Intelligence
This is not the case. The issue is not the "stupidity" of AI, as presented in the similarly titled article: "Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous".
All commercial, applied ML applications, whether ANNs, DL, NLP, LLMs, GPT, or ChatGPT, are simulated pseudo-intelligence: engineered machine rote learning rather than meaningful, associative, or active learning. That makes them as risky and unsafe as pseudointellectuals.
"ML by itself cannot be intelligent because lacks reasoning, planning, logic, and doesn’t interact with the environment. ML detects patterns and performs predictions based on statistical analysis of data using math based algorithms. These algorithms are not intelligent per se. Intelligence is much more than that."
"So, today’s Artificial intelligence, Machine Learning, and Data Science atmosphere is charged with false stories, inflated achievements. That’s bad for all of us. Because in the end what we receive is pseudo-science".
Introduction
The British government held its first AI Safety Summit, bringing together heads of state and Big Tech leaders near London. The two-day summit began on a Wednesday amid growing concerns that the emerging technology may pose a danger to humanity.
The meeting focused on strategising a globally coordinated effort to address the risks and misuse of AI tools. The summit centred on 'frontier AI', defined as "highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety".
The Summit brought together:
Academia and civil society
Industry and related organisations
Governments
Multilateral organisations
Council of Europe
European Commission
Global Partnership on Artificial Intelligence (GPAI)
International Telecommunication Union (ITU)
Organisation for Economic Co-operation and Development (OECD)
UNESCO
United Nations
The whole event is Much Ado About Nothing, for all of today's AI, be it ML, DL, NLP, GPT, or ChatGPT, is not real or true AI but statistical predictive modeling dubbed "machine learning" and "deep learning". It is simulated pseudo-intelligence, engineered machine rote learning instead of meaningful, associative, or active learning, and it is as risky and unsafe as pseudointellectuals.
Among many other Quora questions, I was asked: What is going on with artificial intelligence? The answer might be just to the point.
All the same.
The same dot-com-style fakery, scams, fraud, and mystification, only at multi-trillion-dollar scale. The tech industry today is full of snake oil, selling fake products and false promises. And the crown jewel of tech snake oil is Narrow/Weak/Fake AI, instead of Real AI, Trans-AI, or RSI (Real Man-Machine Superintelligence).
It is fed to us as the "junk food" of "machine learning", "artificial neural networks", and "deep learning" algorithms.
Today's pseudo-AI is the snake oil of the 21st century, led by the Big Tech ML/DL big data analytics platforms, dubbed the G-MAFIA and the BAT triad, with their startup offshoots and acquisitions.
Big Tech companies such as Apple, Amazon, Facebook, Google, and Microsoft have eliminated potential rivals and concentrated brainpower in the quasi-AI field.
I have written more than enough about real AI vs. fake AI, and humans as machines and/or machines as humans, so let me refer to other smart articles, such as:
We should be calling it Pseudo-Intelligence, not Artificial-Intelligence
What we currently label "AI" has proven useful in a wide range of difficult tasks such as image/object recognition, language translation, game playing, voice transcription and synthesis, etc. But these "AIs" are not intelligent: the models behind them rely purely on statistical relationships in observed data and have no ability to reason about or understand what they are doing.
It’s not artificial intelligence if it’s not even intelligence
… all the impressive achievements of deep learning amount to just curve fitting.
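The "curve fitting" charge can be made literal with a toy sketch. This is illustrative only: the data are made up, and real deep learning fits vastly larger parametric functions by gradient descent, but the principle, minimising prediction error on training data, is the same.

```python
# Toy illustration: "learning" as nothing more than least-squares curve
# fitting, and "prediction" as evaluating the fitted curve at a new input.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Noisy made-up observations of an underlying pattern (roughly y = 2x + 1).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 9.0]

slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # close to 2.0 and 1.0
print(round(slope * 5.0 + intercept, 2))     # "prediction" at x = 5
```

The model "discovers" the pattern in exactly the sense a regression line does: it minimises error on the points it saw, with no notion of why the pattern holds.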
There are numerous, very impressive, seemingly intelligent AI models. GPT-3 can produce convincing, grammatically correct text in a surprisingly wide range of domains. Amongst many cool applications, it can write stories, blog posts, poems, summarise articles, answer questions and provide rational medical diagnoses given a list of symptoms. It can even write functioning code and produce answers to simple logic puzzles.
However, despite how convincing, even reasoned, the model outputs are, there is no reasoning occurring. GPT-3 merely predicts the next word in the sequence given the previously observed words. It doesn't reason mathematically about whether "2+2=4"; it merely recognises that, in the data it was trained on, "4" typically comes after "2+2=".
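A minimal sketch of that idea, assuming nothing about GPT-3's internals: even a plain frequency table over a tiny made-up corpus "knows" that "4" follows "=", with no arithmetic involved. Real LLMs replace the table with a neural network trained on vastly more text, but the objective is likewise next-token prediction.

```python
from collections import Counter, defaultdict

# Made-up miniature corpus; purely illustrative.
corpus = [
    "2 + 2 = 4",
    "2 + 2 = 4",
    "2 + 2 = 4",
    "2 + 3 = 5",
    "the cat sat on the mat",
]

# Count which token follows each token (a bigram table: context = 1 token).
follows = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often in training."""
    return follows[token].most_common(1)[0][0]

# Not arithmetic: "4" wins simply because it followed "=" most often.
print(predict_next("="))  # 4
```

Swap the training corpus so that "5" most often follows "=", and the "answer" changes accordingly; the table computes nothing, it only echoes frequencies.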
If there is any suggestion of reasoning in GPT-3’s output, it’s because some collection of words resembling a pattern of reasoning were statistically relevant in the training data for similar prompts or contexts. GPT-3 (or newer language models) might even be able to convincingly pass the Turing test and give an impression of consciousness, but it is just a Chinese room that lacks understanding and intentionality and is thus not actually doing any ‘thinking’.
DALL·E is another impressive model that can generate realistic and convincing images given a text description.
On the surface, it might appear that DALL·E is being creative and using its understanding of what an avocado is, what an armchair is, or even what a shape is. But again, the model output is based purely on the statistical relationships observed in the large corpus of text-image pairs that the neural network was trained on (with a very clever selection of neural net architectures and loss functions).
None of today's commercial AI agents has a general model of the world, which would be essential for any intelligent agent to properly imagine and reason about scenarios it hasn't seen.
Furthermore, all of these AIs are examples of Weak or Narrow AI. They are only useful or effective in the domains they were specifically trained for.
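That narrowness can be made concrete with a toy sketch (made-up numbers, not any deployed system): a model fitted inside a narrow training range can look competent there and fail badly the moment it leaves it.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# "Train" on y = x^2, but only over the narrow range x in [0, 3].
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]            # 0, 1, 4, 9
slope, intercept = fit_line(xs, ys)

inside = slope * 2.0 + intercept    # within the training range
outside = slope * 10.0 + intercept  # far outside it
print(inside)   # 5.0  (true value is 4  -- tolerable)
print(outside)  # 29.0 (true value is 100 -- wildly wrong)
```

The fitted line has no concept of "squaring"; it only captured the local shape of the data it saw, which is precisely what confines such models to their training domain.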
Is there any reason to pretend it’s not Pseudo-Intelligence?
In the future, we may produce a conscious intelligence that can reason, imagine, plan, have goals & desires, continually learn in any domain, and also have whatever other properties deemed necessary to be a true artificial intelligence. Until then, we’re just making computers apply sophisticated functions to some data or set of inputs and pretending the program is "intelligent".
Conclusion
While AI has the potential to greatly benefit society, it's important to understand that not all AI is created equal.
Understanding the difference between real and fake AI is essential in order to make informed decisions and to ensure that we are not being deceived. By understanding the technology, assessing the results, and examining the source code, we can spot the difference between real and fake AI and ensure that we are using the right AI applications for our needs.
Resources
Supplement
A pseudointellectual is:
"A person who wants to be thought of as having a lot of intelligence and knowledge but who is not really intelligent or knowledgeable.
A person who claims proficiency in scholarly or artistic activities while lacking in-depth knowledge or critical understanding.
A person who pretends to be of greater intelligence than they actually are".