AI = The World Modeling Machine [AI, ML, NNs, DL, GenAI, LLMs, GPT-x]

"Man should not create machines in his own image and likeness".

STOP FOOLING YOURSELF INTO BELIEVING THAT AI COULD MIMIC HUMAN INTELLIGENCE, OUR NATURAL PERCEPTION, THINKING AND ACTION.

MACHINE INTELLIGENCE IS ABOUT BUILDING THE WORLD MODELING AI MACHINES TRANSCENDING BUT COMPLETING THE HUMAN MIND.

WE DON'T NEED THE HUMAN-COMPETITIVE LARGE LANGUAGE AI/ML MODELS, BUT THE HUMAN-COMPLETE LARGE WORLD MODELING AUTONOMOUS INTELLIGENT SYSTEMS.

"There are real and true, scientific and objective AI or unreal and fake, imitating and subjective AI. To build true intelligent machines, teach them how to model. simulate and effectively interact with the world". [A New Man-Machine Real AI World]

AI as machine intelligence and learning is innovated as the World Modeling Machine, reifying and integrating predictive and generative AI, including statistical learning algorithms, machine learning, neural networks, deep learning, generative pre-trained transformer (GPT) foundation models, and large language models, such as ChatGPT.

The World Modeling Machines are disrupting the Human Intelligence Modeling AI/ML Systems as a quantum leap to Real AI Technology (RAIT).

We’ve all heard a lot of hype, hope and fear about artificial intelligence and machine learning, neural networks and deep learning, large language models and chatbots, intelligent automation and humanoid robotics.

Today's AI systems over-rely on statistics and anthropomorphism, big data and compute, such as GPUs, making them narrow and weak, unsustainable, inefficient or unintelligent.

It is demonstrated how to reify today's AI/ML as a real and true AI and ML by means of the World Modeling and Reality Simulating Engine:

AI = RAIT = The World Modeling and Reality Simulation Machine (WMM) + AI [ML, NNs and DL, GenAI, LLMs, Chatbots, ...]

[Trans-AI: How to Build True AI or Real Machine Intelligence and Learning

Trans-AI: a real and true AI (TruthAI)]

WMM is Real/True/Genuine AI, extending beyond data, text, audio and images to include the entire spectrum of physical, social, mental and digital realities.

WMM blends the digital and physical worlds, the conceptual and concrete worlds, the actual and virtual realities, all with numerous applications in all parts of human life and practice.

WMM will process the world's data from various sources, such as humans, science, the Internet/web, IoT devices, sensors, cameras and more, to comprehend and interact with the world, surpassing the traditional limitations of human perception and cognition.
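
As a purely illustrative sketch, a multi-source ingestion layer of this kind might be shaped as follows; the source names and the WorldModel.update interface are assumptions made for the example, not part of any published WMM specification:

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass
    class Observation:
        """A single piece of world data from some source (sensor, web, human input)."""
        source: str    # e.g. "camera", "IoT-sensor", "web", "human"
        modality: str  # e.g. "image", "time-series", "text"
        payload: Any   # the raw data itself

    @dataclass
    class WorldModel:
        """Hypothetical world model fusing observations from heterogeneous sources."""
        state: dict = field(default_factory=dict)

        def update(self, obs: Observation) -> None:
            # Fuse the new observation into the model's state, keyed by source.
            self.state.setdefault(obs.source, []).append(obs.payload)

    # Usage: the same model ingests data far beyond text alone.
    wm = WorldModel()
    wm.update(Observation("IoT-sensor", "time-series", [21.5, 21.7, 22.0]))
    wm.update(Observation("web", "text", "City traffic report ..."))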

The fact that humans, animals, and intelligent systems use world models is as old as metaphysics and psychology, engineering, control and robotics.

Without programming or modeling what reality and its content are and how the world changes and works, there are no effective and interactive intelligent systems, agents, machines or beings.

The transition from AI to WMM represents a disruptive paradigm shift in today's anthropomorphic and statistical AI, moving from understanding the world through data as text to understanding reality in all its complexity.

This disruption promises to open new capabilities and applications, fundamentally changing how technology interacts with the world and how humans perceive the environments around us.

The WMM framework and its prospective applications have been described in the author's book "Reality, Universal Ontology, and Knowledge Systems: Toward the Intelligent World":

https://www.igi-global.com/book/reality-universal-ontology-knowledge-systems/859

For smart investors, it might take about $5-10B to have a commercial WMM prototype within 1-2 years.

Reifying AI Models as a Real and True or Interactive AI

The AI Reification Rule refers to LLMs, GenAI and Chatbots, as listed below:

The transformer-based Deep Neural Networks have enabled the boom of LLMs and generative deepfake AI systems, such as

  • Nvidia AI applications, AI Foundations and Generative AI Services,
  • OpenAI's GPT series of models (e.g., GPT-3.5 and GPT-4, used in ChatGPT and Microsoft Copilot),
  • Google's PaLM and Gemini,
  • xAI's Grok,
  • Meta's LLaMA family of open-source models,
  • Anthropic's Claude models,
  • Mistral AI's open-source models,
  • text-to-image AI image generation systems such as Stable Diffusion, Midjourney and DALL-E,
  • text-to-video AI generators such as Sora.

OpenAI, Anthropic, Microsoft, Google, and Baidu, along with numerous smaller startups, have been involved in the research, development and production of generative AI models.

Only a few minds are aware that "the realm of artificial intelligence (AI) may be on the cusp of a new transformative leap, transitioning from Large Language Models (LLMs) to an innovative and expansive concept, which we may call 'Large World Models (LWMs)'".

Due to the lack of a Large World Modeling Engine, Generative AI has uses both in software development, healthcare, finance, entertainment, customer service, sales and marketing, art, writing, fashion, and product design, and in cybercrime, fake news or deepfakes to deceive or manipulate people, and the mass replacement of human jobs.

Our post is a follow-up to "AI Bible: why Generative AI bubble is to burst and Interactive AI is to rise".

Statistics AI IS NOT-AI, Statistics ML IS NOT-ML

Our fundamental/intuitive truths are:

  • Real or True or Interactive AI/ML is about simulating the world or modeling reality in terms of formal ontology, science, mathematics, engineering and technology.
  • AI/ML is not about modeling the human body/brain/brains/behavior/business/tasks or simulating human intelligence in hardware and software in terms of statistics and probabilities.

To summarize the major points:

Today's AI is Not-AI: Artificial Intelligence is NOT the simulation of human intelligence processes, or replicating and mimicking the human body/brain/brains/behavior/business. The anthropomorphization of AI, as narrow/weak AI, human-like, human-level, general AI, or artificial superintelligence, is "creating machines in the image of humans", with all the consequences.

Today's ML is Not-ML: Machine Learning can NOT "learn from data"; computer algorithms cannot be "trained" to find relationships and patterns in data, to solve problems, make predictions, classify information, cluster data points, or generate content. ML is NOT imitating the way that humans learn.

Today's NNs are Not-NNs: Neural Networks are mathematical structures which are NOT patterned after the human brain; neural networks are digital, static and symbolic, while the biological brain is dynamic, plastic and analog.

Today's DL is Not-DL: Deep Learning or Deep NNs are NOT modelled after the human brain, and DL models CANNOT "recognize" complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions.

Today's GenAI is Not-AI: Generative AI CANNOT "create" new content and ideas, including conversations, stories, images, videos, and music.

The Singularity of Human Learning

Learning is to learn to know about the world, how to interact with its environments, "acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences", consciously or unconsciously, intentionally or unintentionally, via practice or experience, education, training or instruction, reasoning or study.

Only humans and animals and some plants have the power to learn. Human learning continues from conception until death as a consequence of ongoing interactions between people and their environments.

Real AI is a Function of Reality Modeling, Data, and Computation, its hardware and software, programs and algorithms.

THERE IS NO REAL AND TRUE OR AUTONOMOUSLY INTERACTIVE AI (AIAI), YET. BUT THERE IS A MORE ADVANCED DATA-TRANSFORMING HARDWARE, SOFTWARE, AND CLOUD TECHNOLOGY MISBRANDED AS AI, ML, DL, or NN PLATFORMS, aka Fake and False AI, ML, DL, NN platforms.

Artificial AI, Real Human Intelligence and REAL and TRUE AI

There are a number of AI systems, but no generally accepted definition of AI.

It could be explained by a general prejudice that machine Intelligence and Human Intelligence are two essentially related forms of intelligence, as a form and its copies, substitutes or examples.

  • AI: AI is created by programming machines, using computer algorithms, data, mathematical models and specialized hardware, all designed to simulate human intelligence.
  • Human Intelligence: Natural Intelligence or intellect refers to the intellectual/cognitive abilities of humans, the sum of mental capacities such as knowing and self-knowledge, thinking, understanding, communication, reasoning, imagination and memory formation, action planning, problem solving, and decision-making, all to learn and effectively interact (perceive, navigate and adapt, change and adjust) with the world, which arise from the complex biological structure of the brain.

In fact, human and non-human intelligences are essentially different in many respects (nature, structures, functions and properties), as noted in Wikipedia:

"Artificial intelligence?(AI), in its broadest sense, is?intelligence?exhibited by?machines, particularly?computer systems, as opposed to the natural intelligence of living beings".

Many national AI policies & strategies, from the African Union to Viet Nam, are oriented toward the OECD AI Principles, focusing on how governments and other actors can shape a human-centric approach to trustworthy AI.

In November 2023, OECD member countries approved a revised version of the Organisation’s definition of an AI system, recommended as a simple input-output agent function for all the OECD membership:

"AI system is a machine-based system that, for explicit or implicit?objectives,?infers, from the input it receives, how to generate outputs such as predictions,?content,?recommendations, or decisions?that?can?influence?physical or virtual environments.?Different?AI systems vary in their?levels of autonomy?and adaptiveness after deployment".

https://www.oecd-ilibrary.org/docserver/623da898-en.pdf?expires=1711221182&id=id&accname=guest&checksum=3D06D1A04E200F77A976A84D7ECC2F7B

It is patterned after the concept of an intelligent agent: an entity taking actions autonomously in order to achieve goals, which may improve its performance by learning or acquiring knowledge, using sensors to perceive the environment, making decisions, and acting upon that information using actuators. Such an agent could be a robot, machine, human or animal, where sensors may be cameras and actuators are effectors that induce actions.
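
For illustration only, the agent function can be caricatured as a sense-decide-act loop; the function names below are placeholders assumed for the sketch, not terms from the OECD text:

    from typing import Any, Callable

    def agent_loop(sense: Callable[[], Any],
                   decide: Callable[[Any], Any],
                   act: Callable[[Any], None],
                   steps: int = 10) -> None:
        """Run a simple perceive -> decide -> act cycle, in the spirit of the intelligent-agent model."""
        for _ in range(steps):
            percept = sense()         # sensors: a camera frame, a sensor reading, a user prompt
            action = decide(percept)  # infer an output: prediction, recommendation, decision
            act(action)               # actuators/effectors influence the physical or virtual environment

    # Usage sketch: a counter "sensor", a trivial decision rule, and a print "actuator".
    readings = iter(range(5))
    agent_loop(sense=lambda: next(readings),
               decide=lambda x: "cool" if x > 2 else "idle",
               act=print,
               steps=5)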

RAIT vs. HAIT (Human AI/ML Technology)

What is common to all real intelligence, machine or human, is the power to compute (know and learn) and effectively interact (sense, navigate and adapt, change and adjust) with the world.

Real AI is about creating intelligence by artificial means, by modeling the world and simulating reality, instead of replicating human intelligence, thus transcending it.

True AI is not some human-like agent that thinks or acts humanly or rationally, as the mainstream assumes.

These are fundamental misassumptions compromising the whole enterprise of human-like and human-level AI.

True AI technologies are cyber-physical, man-machine systems autonomously interacting with the world of realities, physical or social, digital or virtual, in efficient, sustainable and intelligent ways, while optimizing their objective functions.

It is plain that real AI machines involve a material cause, efficient cause, and formal cause, as well as a final cause, to effectively navigate, adapt, adjust, manipulate or interact with all possible environments, physical or digital, social or virtual.

RAIT is capable of constantly learning to know the world, its content, interactions and behaviors, in all its complexity and generality, across all possible scopes and scales, levels and details.

All intelligent behavior is performed by modeling, simulating or programming reality, its categories and classes, systems and networks, individuals and instances, relationships and interactions, rules and regularities, as the world's data variables, structures, relationships, and values in powerful computing technology.
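
As a toy illustration only, such a fragment of reality can be programmed as plain data structures; every name below is invented for the example:

    from typing import Optional

    # Categories (classes), individuals (instances) and relationships,
    # encoded as plain data -- a toy fragment of a world model.
    categories = {"City", "Country"}
    individuals = {"Paris": "City", "France": "Country"}
    relations = {("Paris", "located_in", "France")}

    def infer_country(city: str) -> Optional[str]:
        """A trivial 'regularity': a city belongs to the country it is located_in."""
        for subject, predicate, obj in relations:
            if subject == city and predicate == "located_in" and individuals.get(obj) == "Country":
                return obj
        return None

    print(infer_country("Paris"))  # -> France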

Now we come to the key point of the most complex problem of real intelligence, namely:

The World Modeling as the Essence of Intelligence, Machine or Human

The core of any true intelligent system is mental/internal/cognitive/conceptual models or schemas and worldviews: a model of the world, a wide world perception, a framework of basic ideas and beliefs.

A mental model is an internal representation (model) of external reality: a way of representing reality within one's mind, playing a major role in cognition, reasoning and decision-making. So, the mind constructs "small-scale models" of reality to simulate possible scenarios or anticipate events.

In psychology, mental models could refer to mental representations or mental simulation generally.

Mental world models can occur in various forms; e.g., (Craik, 1943; Evans, 2006; Furlough and Gillan, 2018; Gentner and Stevens, 1983; Halford, 1993; Johnson-Laird, 1983; Treur and van Ments, 2022).

It is worthwhile to mention the hypothesis of artificial causation by Craik:

If the organism carries a 'small-scale model' of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilize the knowledge of past events in dealing with the present and future, and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it. (p. 61)

Such internal models work in a way similar to how the real world works.

K.J.W. Craik, The Nature of Explanation, Cambridge University Press, Cambridge (1943)
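
Craik's "trying out various alternatives" in the head can be caricatured in a few lines: an agent scores candidate actions against an internal forward model before acting in the real world. The forward model and utility below are invented placeholders, a minimal sketch rather than anyone's actual architecture:

    def choose_action(state, actions, forward_model, utility):
        """Pick the action whose *simulated* outcome scores best -- acting in the head first."""
        return max(actions, key=lambda a: utility(forward_model(state, a)))

    # Toy example: the internal model predicts a position after a move;
    # the utility prefers ending up near position 10.
    forward_model = lambda state, action: state + action
    utility = lambda state: -abs(10 - state)
    print(choose_action(0, [-1, 1, 2, 5], forward_model, utility))  # -> 5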

"all mental models use the same causal [brain-mind] mechanisms with the same brain structures and processes, neural circuits and pathways and networks, in different brain?modules, regions and areas".

The system scientist Jay Forrester described general mental models as follows:

The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system (Forrester, 1971).

Human mental models are deeply inherent and subjective assumptions, generalizations, knowledge and past experiences, differing between individuals, communities, nations or societies.

They influence how we see and understand the world and how we think and take action.

Now, worldviews can be expressed as "a comprehensive representation of the world and our place in it" or the "fundamental cognitive, affective, and evaluative presuppositions a group of people make about the nature of things, and which they use to order their lives."

In AI/ML, Meta AI’s Chief AI Scientist Yann LeCun proposes that the ability to learn “world models” — internal models of how the world works — may be the key to building human-level AI.

https://ai.meta.com/blog/yann-lecun-advances-in-ai-research/

"LeCun proposes that one of the most important challenges in AI today is devising learning paradigms and architectures that would allow machines to learn world models in a self-supervised fashion and then use those models to predict, reason, and plan".

To objectify human mental models and worldviews and human-like AI world modeling, we introduce the new construct of the world modeling and reality simulating framework, all to interpret the world and interact with it as a complex reality of various environments.

The depth and scope of intelligence is determined by the depth and scope of understanding the world, from the nearest environments to the cyberworld of the internet data, to the physical universe, "the totality of all space and time; all that is, has been, and will be", to the whole of reality, or everything that exists.

AI is a Function of three things:

  • Reality modeling/embeddings, its universal mathematical ontology, providing a unified view of all science as the codified world knowledge;
  • the Data Universe, the world's data, with data modeling and meaning, providing a unified view of all data resources;
  • the Machine's Computation, its software and hardware, programs and algorithms.

As such, it involves the universal formal ontology (UFO) with STEM, including statistics, computer science and engineering, as well as humanities, arts, and social sciences.

RAIT is informed by the World Model and Reality Simulation Engine, All Reality Simulator (ARS), featuring a Universal Classifier, Identifier and Generator (UCIG) that automatically orders or categorizes, identifies or generates all possible things in the world and their representations, as concepts or data.

It acts as a "meta-physical intelligence mechanism", which is to uniquely identify and classify every resource (from concrete objects to concepts to data and numbers), being a universal system for conveying and interpreting all possible meaning.

Now, every item and its class or every object and its concept, with all possible relationships and interactions among/between them, gets its own special ID tag/label/badge called a Universal Thing Identifier, UTI.

It is a string of characters uniquely identifying anything and everything, from an "item" to a "concept" (including living entities and human beings, artefacts and physical or virtual assets, digital twins, processes, persons or organizations), for the cyber and physical, mental and abstract worlds.

The UTI covers

  • personal IDs of all human beings;
  • a Legal Entity Identifier (LEI), a unique, verifiable 20-character code assigned to legal entities or companies on an individual basis;
  • a UUID (Universally Unique Identifier), a 128-bit value written as a sequence of hexadecimal digits (the numbers 0 through 9 and letters A through F), used to uniquely identify an object or entity on the internet;
  • a Uniform Resource Identifier (URI) that identifies an abstract or physical resource, such as resources on a webpage, a mail address, a phone number, books, real-world objects such as people and places, or concepts;
  • URLs for locating and retrieving information resources on a network, such as the Internet.

The UTI code contains a description (a record with information) about an entity, such as its identity and class membership.
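
The UTI is a proposed construct rather than an existing standard; as a minimal sketch, assuming invented field names, such a record might look like this:

    import uuid
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class UTIRecord:
        """Hypothetical Universal Thing Identifier record: an ID plus a minimal description."""
        uti: str          # the identifying string itself
        kind: str         # class membership, e.g. "Person", "Concept", "DigitalTwin"
        description: str  # a short record of what the identified thing is

    # A UUID can serve as the underlying unique string for a UTI.
    record = UTIRecord(uti=str(uuid.uuid4()), kind="Concept", description="The concept 'city'")
    print(record.uti, record.kind)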

In all, RAIT embraces conceptual models of reality, major worldviews, all valuable statistical learning classifiers, informing AI models and ML algorithms, "video generation models as world simulators", general purpose simulators of the physical world, etc.

Again, the RAIT's intelligence mechanism consists in the World Model and Reality Simulation Engine, including All Reality Simulator, Universal Entity Classifier and Generator, embracing AI/ML Universal Data Classifiers and AI/ML Universal Data Generators.

What Are Generative AI and Large Language Models?

The "secret sauce" of LLMs " is Numbers, Statistics, and Probabilities.

An LLM is a probabilistic model of a natural language, dealing with numbers/embeddings, statistical algorithms, correlations and probabilities, combining large datasets (web data scraped from the public internet) and neural networks, such as transformer-based DNNs with multi-head attention mechanisms.

It is neither a cognitive model nor a world knowledge model, but just statistical and probabilistic modeling of big data sets.
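
To make the "numbers, statistics and probabilities" point concrete, here is a deliberately tiny bigram model, the simplest possible probabilistic language model; real LLMs use transformer networks rather than raw counts, but the principle of predicting the next token from estimated probabilities is the same:

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()

    # Count bigram frequencies: how often each word follows each other word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def next_token_probs(prev: str) -> dict:
        """P(next | prev) estimated from counts -- pure statistics, no understanding."""
        counts = bigrams[prev]
        total = sum(counts.values())
        return {word: count / total for word, count in counts.items()}

    print(next_token_probs("the"))  # -> {'cat': 0.666..., 'mat': 0.333...}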

LLMs are used as Generative AI systems, generating similar data (text, images or other data) from prompts, using generative statistical models of the joint probability distribution P(A, B) of the given observable and target variables <A, B>.

It follows Bayes' theorem (Bayes' law or Bayes' rule): P(A, B) = P(B|A) P(A) = P(A|B) P(B), where A and B are the observable and target variables.
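
For completeness, rearranging the same identity gives Bayes' rule in its familiar form:

    P(A|B) = P(B|A) P(A) / P(B)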

What is it all really?

First, GenAI/LLM models DO NOT learn but compute the patterns and structure of their input training data, generating data with similar statistical characteristics.

Second, GenAI/LLMs "hallucinate" ("something that is believed to be true or real but that is actually false or unreal") due to their lack of ability to know the context/meaning behind the information they are processing.

Third, their transformer-based DNNs mix word contexts in a way that makes them really good at guessing the next word.

Fourth, LLMs are "trained" to "rote learn" on massive amounts of information, public or licensed materials, unlawfully scraped from the internet.

This could cover books, blogs, news sites, Wikipedia articles, reddit discussions, social media conversations, Quora's answers, Google's search results, billions of poems and music lyrics, billions of homework assignments and their solutions, billions of standardized test questions and their answers, billions of examples of code doing all sorts of things, billions of online questions and answers, including my 3.8K answers and 3.4K posts on Quora or 433 articles on the LinkedIn platform, and innumerable posts on the FB/AI/ML/DL.

In reality, we have predictive analytics, statistical computing, or computational statistics, mathematical programming, and probability theory, all impersonated as AI, ML, DL.

Again, NO statistical algorithms can "learn from data and generalize to unseen data, performing tasks without explicit instructions", without having the world modeling and reality simulation engines (ALL REALITY SIMULATOR).

What You Need to Know about LLMs

LLM chatbots are marked with the following key features:

Large Language Models are trained on the internet, having also been trained on all the biases and prejudices, illusions and delusions of humanity, thus regurgitating stereotypical assumptions, conspiracy theories, political misinformation, etc.

LLM models do not have knowledge of the world as "core beliefs", nor the world modeling and reality simulation engine. They are simply token guessers trying to predict what the next tokens would be if the same sentence were to appear on the internet.

LLMs do not have any sense of truth or right or wrong, any idea of factuality and reality.

LLMs "hallucinates" making nonsensical mistakes due to the training data having a lot of inconsistent material.

LLMs are auto-regressive, so errors accumulate through the "self-attention" mechanisms. Even if only one error is made, everything that comes after might be tied to that error, becoming a cascade of infinite errors (a schematic generation loop is sketched below).

You should always verify the outputs of an LLM, which are the result of mixing and matching bits and pieces of information to assemble a reasonable-sounding response.

The quality of the response is directly proportional to the quality of the input prompt, not to the statistically "smart" model.

The LLM remembers and knows nothing; it doesn't "remember" what has happened in the exchange. It is all a programming trick to make the guessing model look like it is "having a conversation", because the log of the conversation becomes a fresh new input.

LLMs don't do problem-solving or planning, having no goals, and the backward-looking Transformer self-attention can only be applied to the input words that have already appeared.
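
The auto-regressive error cascade can be seen in the generation loop itself: each sampled token is fed back in as context, so one bad guess conditions every later guess. A schematic sketch, with sample_next standing in for a real model:

    def generate(prompt_tokens, sample_next, max_new_tokens=20):
        """Auto-regressive generation: each new token is appended and re-consumed as context."""
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            nxt = sample_next(tokens)  # guess the next token from the context so far
            tokens.append(nxt)         # an early wrong guess now conditions all later guesses
        return tokens

    # Demo with a deliberately "stuck" guesser that always repeats the last token:
    print(generate(["the", "cat"], sample_next=lambda toks: toks[-1], max_new_tokens=3))
    # -> ['the', 'cat', 'cat', 'cat', 'cat']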

So, ChatGPT, as a Transformer-based LLM, with or without instruction tuning and reinforcement learning from human feedback (RLHF), has everything but intelligence, being as dumb and dull as a stochastic calculator.

As a result, training a massive fake AI model, the size of GPT-4, would currently take about 8,000 H100 chips, and 15 megawatts of power, enough to power about 30,000 typical British homes.

Conclusion

First, the mainstream Statistics AI (SAI) is Not Real, True or Interactive AI and ML.

Second, humans should not create AI in their own image, for anthropomorphic "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs".

Third, the SAI Technology needs to be transformed into the RAIT to have real prospects, such as "the 30 technologies and trends on the Gartner impact radar falling into one of four themes: smart world, productivity revolution, privacy and transparency, or critical enablers".

Resources

Trans-AI: How to Build True AI or Real Machine Intelligence and Learning

Trans-AI: a real and true AI (TruthAI)

Universal Standard Entity Classification System: the Catalogue of the World

The world models and patterns rules as the essence of intelligence, natural and artificial

Universal Formal Ontology Intelligent Technology: Real-World AGI

NextGen AI as Hyperintelligent Hyperautomation: Universal Formal Ontology (UFO): World Model Computing Engine

What is real intelligence? What is natural intelligence and artificial intelligence and how are they different from each other?

Artificial intelligence and illusions of understanding in scientific research

Does AI Understand?

AI’s Threat to Scientific Progress: Monoculture and the Illusion of Knowledge

AI Bible: why Generative AI bubble is to burst and Interactive AI is to rise.

Busting the Big Tech AI bubbles

SUPPLEMENT: Applications Of LWMs

The applications of LWMs are vast and varied, touching virtually every sector of society. From enhancing personal health to reshaping urban landscapes, these advanced AI systems hold the potential to significantly improve efficiency, sustainability and quality of life. Below are just a few examples of their potential application:

Healthcare: LWMs promise to revolutionize healthcare by integrating a vast array of data sources, including patient medical histories, real-time biometrics, genomic data and even broader environmental factors. This holistic approach could lead to more accurate diagnoses and personalized treatment plans. For instance, LWMs could predict health issues before they become critical by analyzing subtle patterns in a patient's data that might be overlooked by traditional methods. They can also assist in surgical procedures, offering real-time data analysis to surgeons.

Urban Planning And Smart Cities: In the field of urban development, LWMs could play a pivotal role in creating smarter, more efficient cities. By analyzing data from various sources such as traffic patterns, utility usage and environmental sensors, LWMs could help urban planners make more informed decisions. They could simulate the impact of urban projects on traffic flow, pollution levels and energy consumption, leading to more sustainable and livable city environments.

Education And Training: LWMs have the potential to transform the educational landscape by providing highly personalized learning experiences. These models could adapt to individual learning styles and paces, offering customized educational content that evolves based on student performance and engagement. In vocational training, LWMs could create realistic simulations for hands-on practice in fields like medicine, engineering and aviation, enhancing skill acquisition and proficiency.

Environmental Monitoring And Sustainability: LWMs could play a significant role in monitoring and managing environmental resources. By analyzing data from satellites, weather stations and environmental sensors, these models could provide insights into climate change patterns, help in disaster prediction and management, and guide sustainable resource utilization. For instance, they could optimize water usage in agriculture or predict the impact of deforestation on local ecosystems.

The Next Leap In AI: From Large Language Models To Large World Models?

