Real AI vs. Faking AI: a crusade against massive ignorance, negligence and passivity, fraud and fakery

Fake it - "to pretend to be something that one is not or to have some knowledge or ability that one does not really have".

Disrupting the post-truth deepfake reality: truth is not truth, news is not news, AI is not-AI

Trump’s lawyer Rudy Giuliani has insisted that truth is not truth, but nonsense is still nonsense, implying that the post-truth age is just all about lies.

Now the news (factual information about current events) is not really news but deep fakery: lies, fabrications, misinformation or disinformation, propaganda and hoaxes, making money through ad revenue, while journalists and politicians have become ensnared in a symbiotic web of lies that misleads the public.

"The U.S. press, like the U.S. government, is a corrupt and troubled institution. Corrupt not so much in the sense that it accepts bribes but in a systemic sense. It fails to do what it claims to do, what it should do, and what society expects it to do.

The news media and the government are entwined in a vicious circle of mutual manipulation, mythmaking, and self-interest. Journalists need crises to dramatize news, and government officials need to appear to be responding to crises. Too often, the crises are not really crises but joint fabrications. The two institutions have become so ensnared in a symbiotic web of lies that the news media are unable to tell the public what is true and the government is unable to govern effectively. That is the thesis advanced by Paul H. Weaver, a former political scientist (at Harvard University) in News and the Culture of Lying: How Journalism Really Works " [Why the News Is Not the Truth by Peter Vanderwicken].

Following post-truth politics and fake, biased news, we now have the deepfake and biased technology of AI and ML: generative AI, large language foundation models, GPTs, chatbots, etc., where AI is "simulating" or "faking" human intelligence.

Intelligence Models of AI – Analysis of the 10 Cycles

In all, there are four different approaches to Human-like Artificial Intelligence in the context of techno-scientific R&D:

  • Intelligence in software code. Closed intelligence, in which the details are known only by the programmer.
  • Intelligence in rules and the "knowledge engine" logic. The operational logic of the system is open to the user.
  • Intelligence in the architecture. Intelligence transferred to the computer architecture, giving direct support for the efficiency of the applications.
  • Intelligence in the (learning) algorithms (and data). Human-like learning-based systems. The algorithms are not known by the end-users; the key aspect is the quality of the data.

[About the Essence of Intelligence – Will Artificial Intelligence (Ever) Cover Human Intelligence?]

The evolution of Human-Faking AI has passed through ten cycles, eras, periods or waves:

The Era of Ancient AI, since antiquity, in the form of myths, stories and rumors of artificial superbeings, gods, titans, angels, demons, mythical creatures, and synthetic beings endowed with intelligence or consciousness by master craftsmen.

The Era of Sci-Fi AI, Robots, Humanoids, Aliens, etc., in literature and cinema, from the Frankenstein novel to the Terminator, Matrix and Ex Machina movies.

The third wave – AI in program code, from the 1950s to the 1970s, when the idea of "thinking machines" was introduced by Alan Turing and the term "artificial intelligence" was coined by John McCarthy in 1955. The Dartmouth workshop proposed that the study "proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves".

Later in the 1950s, McCarthy introduced the Lisp language (LISt Processor), which became the first tool for developing "real" AI applications.

The fourth wave – Expert Systems, from the 1970s to the 1980s. An expert system (ES) is a computer application with "built-in" intelligence – knowledge in the form of a rule base. By definition, an expert system is a computer system emulating the decision-making ability of a human expert. Instead of a programming language, the end-user defines problems for the system using the structures of a problem-specific user interface.
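As an illustration of the rule-base idea (a minimal sketch only, not any particular ES shell; the facts, rules and domain below are hypothetical), a forward-chaining engine can be written in a few lines of Python:

# Minimal forward-chaining rule engine: repeatedly fire rules whose
# conditions are all satisfied until no new conclusions can be derived.
# The rules and facts are hypothetical, for illustration only.
RULES = [
    # (set of conditions that must all hold, conclusion to assert)
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}))
# -> a set that also contains 'flu_suspected' and 'see_doctor'

Classic ES shells add what this sketch omits: certainty factors, backward chaining and an explanation facility for the end-user.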

Hypertext (hypermedia), the WWW and the Semantic Web are technologies closely related to AI, computing and information/knowledge management in the form of linked content structures; in a way, this represents built-in structural intelligence in documents and document structures.

The fifth wave – AI in Architectures, from the 1980s to the 1990s. Knowledge engineering and AI systems were based on implementing reasoning and inference processing, instead of algorithmic data processing, directly in the computer architecture, to make such tasks more efficient and allow effective application-specific computing. The most famous activity in this area was the Japanese nationwide project called the "New (Fifth) Generation Computer System" (FGCS), coordinated by the Institute for New Generation Computer Technology (ICOT).

The sixth wave – Learning-based Narrow and Weak AI, from the 2000s and continuing. Intelligent systems are based on the system's ability to adapt (change its behavior, react to feedback) and to learn about the situation in which it is used. Learning might first be taught and then continue as self-learning during the use of the system.

The current wave of AI is based on the effective use of learning algorithms: Neural Networks, Self-Organizing Maps, Deep Learning. A neural network builds a model that loosely resembles the structure and processing of the human brain. It uses "what-if" based rules and is taught (supervised learning) by examples. The learning algorithms are based on the use of nonlinear statistics.

Currently we see a growing number of new AI applications based on the use of natural, human language in various industries, including banking, recruitment, health care, agriculture, transit, etc. Advances of AI in creating human-like communication and replicating natural language patterns used by humans are based on large language corpora – collections of human-produced texts in various encodings, first of all written text, but also spoken, signed, etc. Large corpora with billions of words are used to create text models, i.e. algorithms which can parse input text and 'understand' it, i.e. answer some simple questions concerning the input.
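To make "taught by examples" concrete, here is a minimal supervised-learning sketch (assuming scikit-learn and NumPy are installed; the data set is synthetic and purely illustrative, not a claim about any production system):

# Teach a small neural network by examples: inputs are 2-D points,
# labels say whether a point lies inside a circle of radius sqrt(0.5).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))                  # example inputs
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)    # example labels

clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
clf.fit(X, y)                                          # supervised learning from examples

print(clf.predict([[0.1, 0.1], [0.9, 0.9]]))           # expected: [1 0]
print(round(clf.score(X, y), 3))                       # accuracy on the training examples

The network never receives an explicit rule for "inside the circle"; it only fits a nonlinear statistical model to the labeled examples, which is exactly the point made above.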

The seventh wave – Deep Learning and NLP-based Generative AI: multimodal systems producing various types of content, including code, text, imagery, audio, video and synthetic data.

The eighth wave of human-like AI – Learning-based General AI (AGI), by means of Generative AI and Large Language Foundation Models, as a human-like, human-level Strong AI.

The ninth wave of AI – superintelligence, a human-like superhuman AI pursued via superintelligence alignment.

The tenth wave – the Era of Real-World AI and Hyperintelligent Hyperautomation, from 2020 and continuing. Real AI has the causative power to acquire, learn and apply knowledge to manipulate a broad range of environments. World model learning and inference are taught or encoded, manifesting a deep and broad understanding of reality, with self-learning and self-knowing during the use of the system.

It is plain and clear that human-mimicking AI will never cover Human Intelligence, given the biological complexity of its evolution, while complementing it as a more powerful technological intelligence working on its own principles and rules.

Automating Learning and Intelligence in terms of the anthropomorphic paradigm that Computers are like Humans is the reason for all the booming and busting waves, eras, periods or cycles.

In reality, intelligent systems are based on the system's ability to interact with the world: simulating and modeling, learning and self-knowing, inferencing and communicating, adjusting and adapting to its environments (changing behavior or settings, reacting to feedback, etc.), and learning about the situation in which it is acting.

Thus any human-mimicking AI, whether ESs, AI programming, AI architectures, ML statistical algorithms, ANNs, self-driving transportation, robots, large language models (LLMs), voice assistants, GPTs, or Artificial General Intelligence, is Non-Real AI, merely hardware/software/data/algorithmic automation.

To be really intelligent, a World Model Engine is required. Semantically and ontologically aligned with the World's Data/Information/Knowledge, Algorithms, Programs and Supercomputing, it forms the foundation of the hyperintelligent technology.

Or, there is no real machine intelligence and learning without the Integrated World Model Engine:

General-Purpose AI Technology = Real AI = 5Trans-AI = Integrated World Knowing, Inference, Interaction + AI & GenAI & ML & DL & ANNs & LLMs + ...

For example, Generative AI (GenAI) refers to the use of AI to create new content, like text, images, music, audio, code and videos. It is described as using an ML model to "learn" the patterns and relationships in a dataset of human-created content and then using the "learned patterns" to generate new content, while remaining useless without simple natural language prompts.
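As a toy illustration of this "learn the patterns, then generate new content" idea (a hypothetical word-level bigram model over a tiny made-up corpus, nothing like the deep networks and vast corpora real GenAI systems use):

# Toy generative model: learn which word tends to follow which in a corpus,
# then generate new text from a one-word prompt. Purely illustrative.
import random
from collections import defaultdict

corpus = "real ai needs a world model . fake ai only mimics a world model .".split()

follows = defaultdict(list)            # "learned patterns": word -> possible next words
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt_word, length=8, seed=0):
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:                # dead end: no observed continuation
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("real"))                # e.g. "real ai only mimics a world model . fake"

The sketch also shows the limitation named above: given a prompt word it has never seen, the model can produce nothing beyond the prompt itself.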

As a consequence, the rise of generative AI has led to a huge increase in the creation of deepfake content, which is digitally altered or generated content purporting to depict a real person or scenario, whether through video, image, or audio.

The End-to-End Platform for AI

There are no comprehensive ML and end-to-end AI platforms in practice, only commercial hype or corporate fraud, negligence or ignorance, which should be recognized, prevented or explained, respectively:

They are really software platforms for designing, deploying and managing data analytics applications and statistical data models, unable really to forecast future events or outcomes or to explain predictions or discovered patterns, and thus having nothing to do with real, true or genuine AI Technology.

Faking AI has negative effects on individuals and societies with unforeseen and unanticipated consequences.

This means that a Unified Platform for AI must be run by the World Model Knowing, Inference and Interaction Engine of Real-World AI.

What is Not-AI

"The simulation of human intelligence processes by machines, especially computer systems";

Britannica's AI: "the ability of a computer or computer-controlled robot to perform tasks commonly associated with intelligent beings".

Google's AI: "a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more".

"To qualify as AI, a system must exhibit some level of learning and adapting. For this reason, decision-making systems, automation, and statistics are not AI.

AI is broadly defined in two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.

The key challenge for creating a general AI is to adequately model the world with all the entirety of knowledge, in a consistent and useful manner. That’s a massive undertaking, to say the least.

Most of what we know as AI today has narrow intelligence – where a particular system addresses a particular problem. Unlike human intelligence, such narrow AI intelligence is effective only in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example".

[Not everything we call AI is actually ‘artificial intelligence’. Here’s what you need to know]

All of today's AI is Not-AI, including "predictive AI", "generative AI" and large language models such as Google's LaMDA, Bard and Gemini, Microsoft's Co-Pilot or OpenAI's ChatGPT; they are automation software technologies, like automatic human-programmed kitchen appliances.

Besides, to function properly, the NAI needs a lot of high-quality, unbiased data, which is practically impossible.

Co-Pilot, for augmenting human programmers, draws its data from billions of lines of code shared on GitHub. ChatGPT and other large language models use the billions of websites and text documents stored online.

Text-to-image tools, such as Stable Diffusion, DALLE-2, and Midjourney, use image-text pairs from data sets such as LAION-5B.

The NAI also needs powerful computational infrastructure for effective training.

The NAI needs advanced mathematical models and algorithms.

As the real world constantly changes, NAI systems need to be constantly retrained on new data, so as not to produce answers that are factually incorrect or that ignore new information that has emerged since they were trained.

Again, today's AI tools and applications are data-driven software systems with no 5T capacities and no intelligence or understanding, learning or inferencing.

Trans-AI: How to Build True AI or Real Machine Intelligence and Learning

Abstract

We are at the edge of colossal changes. This is a critical moment of historical choice and opportunity. The next 5 years could be the best we have ever had in human history, or among the worst, because we have all the power, technology and knowledge to create the most fundamental general-purpose technology (GPT), which could completely upend the whole of human history.

The most important GPTs were fire, the wheel, language, writing, the printing press, the steam engine, electric power, and information and telecommunications technology, all to be topped by real artificial intelligence technology.

Our study refers to Why and How the Real Machine Intelligence or True AI or Real Superintelligence (RSI) could be designed and developed, deployed and distributed in the next 5 years. The whole idea of RSI took about three decades to develop, in three phases. The first conceptual model of Trans-AI was published in 1989. It covered all possible physical phenomena, effects and processes. The more extended model of Real AI was developed in 1999. A complete theory of superintelligence, with its reality model, global knowledge base, NL programming language and master algorithm, was presented in 2008.

The RSI project was finally completed in 2020, with some key findings and discoveries published on the EU AI Alliance/Futurium site in 20+ articles. The RSI features a unifying World Metamodel (Global Ontology), with a General Intelligence Framework (Master Algorithm), Standard Data Type Hierarchy and NL Programming Language, to effectively interact with the world by intelligent processing of its data, from web data to real-world data.

The basic results with technical specifications, classifications, formulas, algorithms, designs and patterns, were kept as a trade secret and documented as the Corporate Confidential Report: How to Engineer Man-Machine Superintelligence 2025.

As a member of the EU AI Alliance, the author has proposed the Man-Machine RSI Platform as a key part of a Transnational EU-Russia Project. To shape a smart and sustainable future, the world should invest in RSI Science and Technology, for the Trans-AI paradigm is the way to an inclusive, instrumented, interconnected and intelligent world.

Resources

AI=5Trans-AI: Transcendental, Transdisciplinary, Transformative, Translational, Techno-Scientific Intelligence

Universal Technoscience (UTS): {Philosophy, Science, Technology, Engineering, Mathematics; AI} > Universal AI Platform

Trans-AI: a real and true AI (TruthAI)

Universal AI literacy: AI brainwashing: Artificial Human Intelligence (AHI) vs. Techno-Scientific Intelligence

Consciousness: Artificial Intelligence = Machine Consciousness = World Model Engine

SUPPLEMENT: NNs are Not-AI

A neural net is a multi-variable function from an input space to an output space. All proven statements about neural nets assume a precise formal description of the input-output spaces – otherwise we could not prove anything.

The Universal Approximation Theorems establish that neural nets can approximate arbitrary continuous functions between Euclidean spaces; there are also variations for non-Euclidean spaces, algorithmically generated function spaces, etc. In practice, some aspects of these theorems are often overlooked.
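For reference, a classical single-hidden-layer (Cybenko-style) form of the statement, written in standard LaTeX notation (a textbook formulation, not taken from the article above):

% Classical single-hidden-layer universal approximation statement.
Let $\sigma$ be a continuous sigmoidal activation and $K \subset \mathbb{R}^n$ be compact.
For every continuous $f : K \to \mathbb{R}$ and every $\varepsilon > 0$ there exist
$N \in \mathbb{N}$, weights $w_i \in \mathbb{R}^n$ and scalars $\alpha_i, b_i \in \mathbb{R}$
such that the one-hidden-layer network
\[
  F(x) = \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^{\top} x + b_i\right)
\]
satisfies $\sup_{x \in K} \lvert F(x) - f(x) \rvert < \varepsilon$.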

First, the theorems do not say how to organize the approximating neural net – how many layers, how many units in each layer, what kind of activation function to use; the best values have to be found by practical experiments.
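A small sketch of that experimental search, trying a few hidden-layer widths on the same toy regression task (assuming scikit-learn and NumPy; the task and candidate widths are illustrative, not recommendations):

# Approximate a continuous function (sin) with one-hidden-layer networks of
# different widths; the theorems guarantee an approximator exists, but the
# right width only shows up by experiment.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X).ravel()

for width in (2, 8, 32):
    net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=5000, random_state=1)
    net.fit(X, y)
    print(width, round(net.score(X, y), 3))   # R^2 of the fit; typically improves with width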

Second, many problems are not continuous functions, e.g. all classification problems, image recognition problems, etc. For non-continuous problems a neural net may not converge at all, and the researcher has to start experimenting.

There are no mathematical results of the type:

ML: Input_text > Output_text

Input_text and Output_text are not mathematical structures; every natural language model (e.g. a neural net) creates its own mathematical approximation of them in its own way.

Ambiguity and misunderstanding have created a lot of frustration among data scientists. This has both deeper causes and deeper consequences.
