AI/AGI/ASI Bible: Contextual AI + LLMs + GenAI +...

“So God created man in his own image, in the image of God he created him; male and female he created them.” Genesis 1:27

Man shall not create machines in his own image; in the image of the world he must create them...

READING THE AI/AGI/ASI BIBLE IS READING YOUR FUTURE

As a historical metaphor, the AI/AGI/ASI Bible is a collection of posts, articles and books about a general-purpose, transformative and transdisciplinary technology: true artificial intelligence, artificial general intelligence and artificial superhuman intelligence, or Real AI/AGI/ASI.

Its major revelation: "Real AI machines know, understand and interact with the world in fundamentally non-human ways".

As Real/True AI developers and promoters, "prophets and evangelists", we are doing our best to guide researchers, developers, engineers and decision-makers in adopting the most effective strategies for creating real/true/causal/innovative AI/ML/AGI-driven applications.

AI is “the science and engineering of making intelligent machines, especially intelligent computer programs", with two polar interpretations: real and unreal.

Real, authentic, true, or actual AI identifies and understands anything and everything in the world, its REAL patterns, CAUSAL relationships and data representations, using ontology, science, computer science, mathematics, statistics, engineering, cybernetics, etc.

Unreal, fake, false, or fictitious AI identifies and reproduces correlative patterns and probabilistic relationships in data using statistical learning techniques such as predictive analytics, machine learning, deep learning, neural networks, generative AI, natural language processing, or computer vision.

Unreal AI is the science of automating human intelligence with machines and computer programs: a branch of computer science and engineering that develops intelligent machines mimicking human thinking and behavior to perform human tasks, including speech and image recognition, natural language processing, decision-making, and more.

Introducing Real Contextual AI

Today's post focuses on context and contextualization, which is everything here: it is what enables intelligent agents/systems/machines to know and understand anything and everything.

Data, information or knowledge alone has little value, whereas knowing something in context, with all its possible relationships, is real learning and understanding.

We introduce the Data Universe Construct, which provides the Universal Data Contextualization Framework (UDCF) for categorizing, naming, and indexing the world's data/information/knowledge and mapping the world's context, such as the physical, mental, socio-political, economic, cultural, or digital environments.

It is explained why a human-like AI/AGI promising to simulate the human brain, the most complex and advanced information-processing system in the world, is nothing but a wild goose chase.

The true subject of real AI is not humans, or human intelligence, or its material cause and substrate, the brain, or its manifestations, such as human behavior or business, intelligent tasks and jobs.

The real subject of true AI is modeling and simulating, knowing and understanding the world itself, or reality in all its forms and kinds, scopes and scales: its content, laws and patterns, mechanisms and algorithms, and how to effectively and sustainably interact with the world.

Again, Real, True and Trustworthy, Human-Complete AI is after knowing and understanding reality, the world, the universe: its digital modeling, computational simulation and causal learning, knowledge, understanding and effective interaction with any complex environment, physical or digital, mental or material, social or virtual.

Again, AI is the creation of intelligent machines and computer programs, with two polar interpretations, real and unreal.

Real, authentic, true AI identifies and understands REAL patterns and CAUSAL relationships in the world and in its data representations using ontology, science, computer science, mathematics, statistics, engineering, etc.

Unreal, fake or false AI identifies statistical patterns and probabilistic relationships in data using its statistical learning techniques, such as Predictive analytics, Machine learning, Deep Learning, Neural Networks, Generative AI, Natural language processing, or Computer vision.

Real Contextual AI: Contextualization as the "Dark Energy" of AI/AGI Accelerated Expansion

Real AI/LLMs/AGI thrive on real-world context and contextualization. To apply them to real-world problems or industrial environments, contextualization means everything.

Now, Real Contextual AI implies a World Contextualization Engine, the UDCF for categorizing, naming, and indexing the world's data/information/knowledge of the world's context, such as the physical, mental, socio-political, economic, cultural, or digital environments.

Unreal Contextual AI refers to "an AI system that can understand and respond to input in a way that considers the context in which the input is given". It uses NLP, ML, and other fake AI techniques to analyze and interpret the context of a conversation or task.

Its key pillars, examples, benefits and drawbacks are as follows:

https://www.walkme.com/glossary/contextual-ai/#:~:text=Contextual%20AI%2C%20or%20Contextual%20Artificial,which%20the%20input%20is%20given

While Gen AI is a game-changing technology, without data contextualization knowledge it is only superficially intelligent, and its answers are often wrong.

Data Contextualization, C, is identifying and representing all possible relationships in the universe of data domains D(W), so as to map/model/represent the relationships R that exist between data elements in the world W, be it physical, mental, social or digital:

C : R(E, O, S, C) ↔ D(R)   (1)

where:

W - the totality of all possible worlds, environments, or states of affairs;

E - the universal set of all entities or things in the world;

O - the universal set of objects or substances in the world;

S - the universal set of states, qualities and quantities in the world;

C - the universal set of changes, events, or processes in the world;

R - the universal set of relationships (causality, space-time, functions, etc.) in the world.

The universal set is the set that contains all related sets as subsets, including the members of all those sets. For example, the human population is a universal set, the set of all people in the world; the set of all people in each country is a subset of this universal set.
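
A tiny Python sketch of this example (toy names only): the world population is the universal set, and each country's population is one of its subsets.

```python
world_population = {"Alice", "Bob", "Chen", "Dana"}          # universal set (toy)
population_by_country = {
    "Cyprus":  {"Alice", "Bob"},
    "China":   {"Chen"},
    "Denmark": {"Dana"},
}

# Every country's population is a subset of the universal set.
assert all(people <= world_population for people in population_by_country.values())
```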

D(W) - all the world's data/information/knowledge: the universe of data categories and their data individuals, with all the data models, structures, types, elements and points, or observations or measurements on the unit of observation or analysis.

It is formalized as

D(W) = <D(E), D(O), D(S), D(C), D(R)>   (2)

where D(E), D(O), D(S), D(C) and D(R) are the Entity-, Object-, State-, Change- and Relation- classes of data sets and elements, respectively. They form the top onto-semantic abstractions of things in the world (Entity/things, Substance/objects, State/properties, Change/processes, Relation/associations) in any intelligent application.
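
Below is a minimal Python sketch of this categorization idea, assuming hypothetical tags and toy rules (the names DataClass and categorize are illustrative, not part of any published UDCF implementation): every data element is tagged with one of the five top onto-semantic classes of (2) before indexing.

```python
from enum import Enum

class DataClass(Enum):
    ENTITY = "D(E)"     # entities/things
    OBJECT = "D(O)"     # objects/substances
    STATE = "D(S)"      # states, qualities, quantities
    CHANGE = "D(C)"     # changes, events, processes
    RELATION = "D(R)"   # relationships: causality, space-time, functions

def categorize(element: dict) -> DataClass:
    """Toy rule-based tagger for a single data element (a field/value pair)."""
    kind = element.get("kind")
    return {
        "person": DataClass.ENTITY,
        "material": DataClass.OBJECT,
        "measurement": DataClass.STATE,
        "event": DataClass.CHANGE,
        "link": DataClass.RELATION,
    }.get(kind, DataClass.ENTITY)

print(categorize({"kind": "measurement", "name": "temperature", "value": 21.5}))
# DataClass.STATE
```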

Now, a data element is "a basic unit of information that has a unique meaning and subcategories (data items) of distinct value", like gender, race, or geographic location. It refers to the individual data units that make up a dataset, a database or an LLM's training corpus, representing specific attributes or fields within a record: names, phone numbers, credit card numbers, dates, numerical values, or other discrete information.

What is crucial, the Data Universe Construct provides the Universal Data Contextualization Framework (UDCF) for categorizing, naming, and indexing the world's data/information/knowledge, mapping the real-world context, including the physical, mental, socio-political, economic, business, cultural, or digital environments.

The UDCF then implies contextual intelligence: the ability to adapt and apply the world knowledge that has been modelled and learned across different scenarios, situations, settings and environments.

In computer science, context or contextual information is any information about an entity that can be used to reduce the amount of reasoning required (via filtering, aggregation, and inference) for decision-making within the scope of a specific application.
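
The sketch below illustrates that reduction with toy data (all names and rules are hypothetical): facts are filtered by the current context, aggregated, and only then passed to a simple decision step.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    entity: str
    attribute: str
    value: float
    context: str          # e.g. "physical", "economic", "digital"

FACTS = [
    Fact("warehouse-7", "temperature_c", 31.0, "physical"),
    Fact("warehouse-7", "energy_price", 0.42, "economic"),
    Fact("warehouse-7", "temperature_c", 33.5, "physical"),
]

def decide_cooling(facts, context="physical", threshold=30.0):
    relevant = [f for f in facts if f.context == context            # filtering
                and f.attribute == "temperature_c"]
    if not relevant:
        return "no-data"
    avg = sum(f.value for f in relevant) / len(relevant)            # aggregation
    return "turn-on-cooling" if avg > threshold else "idle"         # inference

print(decide_cooling(FACTS))  # -> turn-on-cooling
```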

The contextualization of AI could be in social, technological, business, or pedagogical contexts, such as policies and regulations, technologies, use cases, or textbooks.

The same generative AI could be plotted on a value-vs-feasibility contextual matrix to find the use cases most worth investing in.
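
As a toy illustration (the use cases and scores below are made up), such a matrix can be reduced to a simple quadrant classification that surfaces high-value, high-feasibility cases first:

```python
USE_CASES = {
    "customer-support summarization": (0.8, 0.9),   # (value, feasibility), both 0..1
    "marketing copy drafting":        (0.6, 0.8),
    "autonomous legal advice":        (0.9, 0.2),
    "internal meeting notes":         (0.3, 0.9),
}

def quadrant(value, feasibility, cut=0.5):
    v = "high-value" if value >= cut else "low-value"
    f = "high-feasibility" if feasibility >= cut else "low-feasibility"
    return f"{v} / {f}"

# Rank by the product of value and feasibility, then label each quadrant.
for name, (v, f) in sorted(USE_CASES.items(), key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(f"{name:32s} -> {quadrant(v, f)}")
```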

The Contextual Truth About Generative AI Models

Some research puts the value of generative AI at as much as $4.4 trillion in economic impact, while the market capitalization of some genAI tech companies exceeds $3 trillion.

In a 2024 McKinsey global survey on AI, 65% of respondents said that their organizations were regularly using generative AI in at least one business function, up from one-third the year before, yet only 15% of companies reported an earnings improvement from generative AI initiatives.

Why so?

Real AI is not about scaling or data or models or algorithms or programming languages or powerful GPUs, etc.

"Scaling generative AI is about more than models. Even the simplest uses require about 20 to 30 elements, including large language models, data, gateways, prompt engineering, security, and more. The focus should be on assembling — and, more importantly, integrating — the entire technology stack".

A realistic model of context/window/memory is a key factor in that stack for Generative AI/LLMs: the context window refers to the amount/length of text the model can receive as input when generating or understanding language.

Transformer-based LLMs are often constrained by predefined context-window sizes, which hinders their performance in tasks requiring extensive background information, long-term planning, conclusive inferences or informed decision-making.

Researchers are working to unlock the potential of LLMs by extending their context lengths, from 2,048 tokens to 2,048k and on to sequences of unlimited length, as with Google’s "Infini-attention" mechanism or Meta’s neural architecture Megalodon:

LLMs with limited context windows face challenges in comprehending lengthy documents, engaging in prolonged conversations, or grasping intricate details essential for decision-making.

Increasing the context length of LLMs is akin to expanding their memory, enabling them to process more extensive input sequences and produce more accurate and contextually relevant outputs.
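
When a model's window cannot be extended, a common workaround is to split long inputs into overlapping chunks that each fit the window, process them separately, and merge the results. Here is a minimal sketch, with whitespace splitting standing in for a real tokenizer and MAX_CONTEXT as an assumed model limit:

```python
MAX_CONTEXT = 2048   # assumed context-window size, in tokens
OVERLAP = 128        # tokens repeated between chunks to preserve local context

def chunk_for_context(text: str, max_tokens: int = MAX_CONTEXT, overlap: int = OVERLAP):
    tokens = text.split()                        # stand-in for a real tokenizer
    step = max_tokens - overlap
    chunks = []
    for start in range(0, max(len(tokens), 1), step):
        window = tokens[start:start + max_tokens]
        if not window:
            break
        chunks.append(" ".join(window))
    return chunks

# A 5,000-"token" document becomes three overlapping chunks of at most 2,048 tokens.
print(len(chunk_for_context("word " * 5000)))  # -> 3
```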

Due to the lack of a Data Contextualization Engine, current LLMs require massive computational resources to train and operate, making them challenging to develop and deploy across a wide range of real-world scenarios.

Extending the context/memory/understanding window of LLMs is a big challenge if you rely on existing positional encoding techniques.

There are a few techniques that address these challenges, such as Position Interpolation (sketched below) or Megalodon:

https://arxiv.org/abs/2404.08801
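
The core idea behind Position Interpolation is to rescale position indices so that a longer sequence is mapped back into the position range the model saw during pretraining, rather than extrapolating beyond it. A minimal numpy sketch of that rescaling, assuming RoPE-style rotary positional encoding (not tied to any specific model's implementation):

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    # Standard RoPE rotation angles: angle[m, i] = m * base**(-2i/dim)
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions, inv_freq)

def interpolated_positions(seq_len, train_ctx):
    # Position Interpolation: squeeze the positions of a long sequence back
    # into the [0, train_ctx) range seen during pretraining.
    scale = min(1.0, train_ctx / seq_len)
    return np.arange(seq_len) * scale

# Example: an 8,192-token sequence mapped into a 2,048-position training range.
pos = interpolated_positions(seq_len=8192, train_ctx=2048)
angles = rope_angles(pos, dim=64)
print(angles.shape, pos.max())  # (8192, 32) 2047.75
```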

The World Modelling Context Engine for Real, Non-Human AI

To really unlock the full potential of LLMs, they have to embed a causal world modeling framework as the master context of contexts: the circumstances, conditions, settings, surroundings, or states of affairs for an object, event, word, statement, idea, action, behavior, etc., from which meaning and understanding are constructed.

Again, context is not simply the parts of a written or spoken statement that precede or follow a specific word or passage and influence its meaning or effect.

To understand a text you need to know the whole world, or to have encoded a comprehensive world model, as a general theory of reality: the totality of all existences, things or entities, substances and objects and systems, states or conditions, qualities and quantities, changes or processes, relationships or interactions:

W = <E, O, S, C, R> (3)

It could be reified in forms as different as (see the sketch after this list):

a mental model/simulation (an internal representation of external reality) in the human mind,

a conceptual model or mathematical model of reality, as in science and philosophy, economics or politics, statistics and mathematics,

a world modeling engine in AI models, LLMs, or chatbots, for real learning, inference, decision-making or interactions.
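
A minimal Python sketch of the world-model tuple W = <E, O, S, C, R> as a queryable data structure (all names and sample facts are illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    entities: set = field(default_factory=set)     # E: entities/things
    objects: set = field(default_factory=set)      # O: objects/substances
    states: dict = field(default_factory=dict)     # S: entity -> states/qualities
    changes: list = field(default_factory=list)    # C: changes/events/processes
    relations: set = field(default_factory=set)    # R: (subject, relation, object)

    def related(self, entity):
        """Return every relation the given entity takes part in."""
        return {r for r in self.relations if entity in (r[0], r[2])}

w = WorldModel()
w.entities |= {"sun", "earth"}
w.states["earth"] = {"temperature_k": 288}
w.changes.append({"event": "sunrise", "at": "2024-06-01T05:00"})
w.relations.add(("sun", "causes", "earth-warming"))
print(w.related("sun"))  # {('sun', 'causes', 'earth-warming')}
```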

Real AGI vs. Human-like AGI

We have to see the existentially critical difference between brain-inspired AI/AGI and brain-mimicking AI/AGI.

This is the root cause of all problems, errors, biases, mistakes, threats and existential risks: mimicking humans, or "simulating human thinking" (A.M. Turing, Computing Machinery and Intelligence), copying the human mind. As a result, we have invented a whole imaginary human-mimicking AI universe:

https://www.facebook.com/photo/?fbid=454291514089614&set=gm.3899888196955325&idorvanity=2059467967664033

Real AI or AGI is about modeling and simulating, knowing and understanding reality, the world, the universe and its causality, patterns and laws, as the rational representations of mentality.

Creating AGI systems is not about having human-like, human-level or even higher intelligence capable of human-like reasoning, problem-solving, and creativity.

Such a real AI is not after mimicking, replicating, simulating, copying or faking human intelligence, or more generally:

human body

human brain

human brains/mind

human behavior

human business, tasks, jobs, works, occupations, etc.

Such a human-competing AI/AGI is not the centuries-old pursuit of humanity, but rather a wild goose chase; for the human brain is the most complex and advanced information-processing system in the world.

https://ars.els-cdn.com/content/image/1-s2.0-S295016282300005X-ga1_lrg.jpg

Again, human-mimicking AI and AGI, like all sorts and types of LLMs, is a wild goose chase (a foolish and hopeless search for or pursuit of something unattainable).

The subject of real AI is not humans, or human intelligence, or its material cause and substrate, the brain, or its manifestations, such as human behavior or business, intelligent tasks and jobs like NLU/NLG.

The subject of AI is rather modeling and simulating, knowing and understanding the world itself, or reality in all its forms and kinds, scopes and scales: its content, laws and patterns, mechanisms and algorithms, and how to effectively and sustainably interact with the world.

Real AI is after reality, the world, the universe, its digital modeling, computational simulation and causal learning, knowledge, understanding and effective interactions with any complex environments, physical or digital, mental or material, social or virtual.

Real and True AI is Non-Human Intelligence, unless you wish for a digital intelligent twin of yourself, which could be 4D-printed to effectively replace the human workforce for the emerging big-tech AI oligopolies.

https://www.dhirubhai.net/pulse/cease-giant-human-ai-experiments-real-vs-fake-best-idea-abdoullaev-cds8f/

Conclusion

The level and depth, scope and scale of an AI is determined by the level and depth, scope and scale of its contextualization:

Narrow AI/ML Models >

Gen AI Models >

LLMs >

World Models > Large World Models > General AI World Models >

AGI >

Man-Machine Hyperintelligence

Resources

AGI Bible: The World Modelling Machine + AI/ML Models + LLMs + GenAI +...

Trans-AI: How to Build True AI or Real Machine Intelligence and Learning

Extending Context Length in Large Language Models (LLMs)

When brain-inspired AI meets AGI

Which AI and ML Will Save the World: Technology Sovereignty and Superiority and Real AI Strategies

Engineering Real AI Superintelligence (RSI) by 2025: Meeting Musk's forecasting

Munkhdalai, T., Faruqui, M., & Gopal, S. (2024). Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention. arXiv preprint arXiv:2404.07143.

Ma, X., Yang, X., Xiong, W., Chen, B., Yu, L., Zhang, H., ... & Zhou, C. (2024). Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length. arXiv preprint arXiv:2404.08801.

AI-Generated Code Has A Staggeringly Stupid Flaw: It simply doesn’t work.

SUPPLEMENT 1: Has true AI been made yet?

[Real] AI is a great thing of great use, the best strategic general-purpose technology with general intelligence power.

But our reality is different. Nothing non-existent could be useful by its nature. There is no computer intelligence, machine intelligence, artificial intelligence or computational intelligence in existence. It has the honorable status of the greatest human dream and highest-ever goal, still residing in the imaginary world of researchers, developers, engineers, artists and sci-fi promoters.

To be discovered, computer intelligence, as the greatest-ever invention, should pass through all the emerging-technology stages: being conceived, modelled, designed, developed, deployed and widely distributed.

One might believe that AI is already around us, embedded in every part of our life, in every aspect of modern society.

It assesses, assists, manages, recommends, recognizes, decides, predicts, etc.: playing video games, Google-translating, serving ads online, Face ID on a phone, mapping out your destination, self-driving your car, trading your stocks, playing strategic games, composing music, painting art, creating stories, and so on.

All is good and well, but all this is just automated software with sophisticated statistical and mathematical algorithms, showing no sense or meaning, understanding or intelligence by any good definition, only small pieces of quasi-intelligence.

To back up my arguments with solid evidence, let's take the case of deepfake AI.

With [deepfake] AI, one is welcomed to the good, the bad and the ugly. It enables all sorts of high-tech fraud, being a narrow, dull, dumb, deepfake AI that blindly does what it is trained to do by some biased developers.

A paradigm shift is required to engineer a real, true MI (Machine Intelligence), while dumping the big-tech narrow/weak AI/ML/DL, which refers to “the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions”.

What is built is just Non-AI, or imitation AI, simulation AI, or simply false and fake AI (ffAI), having nothing in common with the true and real MI (AI).

It is dubbed narrow/weak AI: ML or DL or ANNs designed to perform a single task, where the specific learning gained from performing that task will not automatically be applied to other tasks.

Such a Non-AI is an [advanced data analysis] computing system that uses predictive analytics, machine learning and deep learning, NLP or cognitive-computing techniques, relying on mathematical/statistical models and algorithms to find probabilistic, correlative patterns in the input data and produce output data dubbed “insights, recommendations, decisions, predictions or prescriptions”.

In fact, it is simply [inductive-inference statistical] computing machinery which classifies or clusters training datasets of data points, or observations, exemplars or instances. A training set usually consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer is commonly denoted as the target (or label or class).

The model (an NN or any probabilistic classifier) is run with the training dataset and produces a result, which is then compared with the target for each input vector in the training dataset. All of this serves to adjust the hyperparameters (such as the number of hidden units in each layer of an ANN) and the parameters (e.g. the weights of connections between neurons in an ANN) of the model, via feature/variable/attribute selection and parameter estimation, so as to obtain some invented performance characteristics such as accuracy, sensitivity, specificity, F-measure, and so on.
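
A minimal scikit-learn sketch of this inductive, statistical workflow, with synthetic data standing in for a real training set: a probabilistic classifier is fit on (input vector, target) pairs and then scored with a conventional performance metric.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic (input vector, target) pairs.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)   # parameters = feature weights
model.fit(X_train, y_train)                 # parameter estimation on the training set

y_pred = model.predict(X_test)              # outputs compared with held-out targets
print("accuracy:", accuracy_score(y_test, y_pred))
```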

It is all software automation with no signs of real intelligence, intellect, understanding, mind or sapience, not to mention any self-learning or self-knowledge.

In general, AI modelling should include the following necessary features and functions if it is to be a Real AI (a minimal sketch of this three-part structure follows the list):

Basic Assumptions: prior knowledge, the basis of our knowing, understanding, or thinking about the whole world or a domain problem (primary causes, principles and elements).

World Model: the representation of our world views and key assumptions in a form we can reason with (i.e., conceptual, ontological/causal, logical, scientific, or mathematical/statistical models, such as an equation, a simulation, or a neural network model of pictures and words).

World Data: what we measure, calculate, observe or learn about the real world (facts and statistics; variables and values).
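
A minimal Python sketch of that three-part structure (all names and sample content are illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class RealAIModel:
    assumptions: list = field(default_factory=list)    # basic assumptions / prior knowledge
    world_model: dict = field(default_factory=dict)    # causal/ontological relations we reason with
    world_data: list = field(default_factory=list)     # observations and measurements

m = RealAIModel(
    assumptions=["every change has a cause"],
    world_model={"rain": {"causes": ["wet ground"]}},
    world_data=[{"observation": "wet ground", "at": "07:00"}],
)
print(m.world_model["rain"]["causes"])  # ['wet ground']
```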

SUPPLEMENT 2: The Rise of AI GOD...

[These are some of the highlights from the special poll conducted by Gallup International Association (GIA) in 61 countries covering over two thirds of the global population (and more than 90% of those countries which are free to conduct and publish opinion research). The poll celebrates GIA’s 75th anniversary].

More people believe that there is a God, "an omniscient, omnipresent and omnipotent superintelligent being", than identify as religious.

While 62% self-identify as religious, 72% say that there is a God. Roughly one in six (16%), however, do not believe that any God exists, and 10% are not sure.

Most respondents around the world (57%) think that there is a life after death. One in four (23%) do not believe that anything happens when we die; 15% cannot say.

Regions such as MENA, South Asia and Sub-Saharan Africa are among places where people are most prone to believe in the afterlife.

Religious beliefs are more influenced by education, age and personal income.

https://www.gallup-international.com/survey-results-and-news/survey-result/more-prone-to-believe-in-god-than-identify-as-religious-more-likely-to-believe-in-heaven-than-in-hell#:~:text=Most%20respondents%20around%20the%20world,to%20believe%20in%20the%20afterlife .

