AI as a World Ontology Project: from ML, GenAI and LLMs to AI Superintelligence
The next stage of AI development, following Generative AI and Large Language Multimodal Foundation Models, is Machine Hyperintelligence, Trans-Intelligence or Superintelligence.
"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems".
The AI Superintelligence involves the following necessary elements:
Man-Machine Superintelligence = World Ontology Knowing, Learning, Inference, and Interaction Engine + the Internet/Web + GenAI + LLMs + Hyperintelligent Hyperautomation +...
The World Ontology of all reality is constructed as an intelligent framework encompassing data models, scientific models, AI/ML models, and LLMs.
It was Google's mission to organize the world's information and make it universally accessible and useful.
The Superintelligence Platform is intelligently/automatically/autonomously processing/categorizing/organizing/analyzing the world's data/information/knowledge, "making it universally accessible and useful".
Introduction
Global adoption of AI has accelerated since OpenAI's ChatGPT and other generative tools showed the technology's potential.
Big tech corporations such as Microsoft Corp., Alphabet Inc., Baidu Inc. and Alibaba Group Holding Ltd., as well as governments, rolled out new AI services and ramped up corporate or national development plans, at a pace that some leaders cautioned was recklessly fast [Humans Still Cheaper Than AI in Vast Majority of Jobs, MIT Finds].
There are many legitimate near-term fears about the misuse of AI, ranging from mis- and disinformation, voice-cloning scams and bias to deepfake scams and deepfake porn, as well as weaponized AI strategies (see the Supplement).
Despite its explosive growth, the key questions remain open: what AI must be, digital human intelligence or autonomous machine intelligence; what its nature and essence, major assumptions and definitions are; and what all its possible impacts and implications for humanity might be.
In other words, what kind of program or project is AI?
If AI projects are simply hardware/software-based programs simulating human intelligence in machines, utilizing machine learning, deep learning, natural language processing, computer vision, etc. to generate predictions, recommendations, content, or decisions based on statistical and/or probabilistic reasoning, all as suggested by the OECD Recommendations (Environment - Data - Model - Production - Maintenance), then they amount to human-simulating AI rather than real, world-modeling AI.
Let me start with what AI and its derivatives, such as Narrow AI, General AI, Superhuman AI or Superintelligence Alignment, are not.
Real/True/Scientific AI vs. Fake/False/Human-Simulating AI
REAL AND TRUE AI is transdisciplinary, transformative and translational, transcendental and techno-scientific (5TAI).
And it is not simply fragmented into many isolated special fields and narrow domains, as in:
a human intelligence/mind/brain-mimicking/replicating/simulating project
a formal logical intelligence project
an epistemological intelligence project
a semantic web intelligence project
a psychological, cognitive intelligence project
a biological/neural network intelligence project, as ANNs, GPT
a mathematical intelligence project
a statistical or machine learning project, as ML and DL
a computational software/hardware project
an algorithmic, model-based learning project
a probabilistic intelligence project
a big data mining and knowledge discovery project
a natural language intelligence project, as Large Language Models
an ethical intelligence project
a socio-political intelligence geo-project
a military intelligence project, LAWs, Lethal Autonomous Weapons
a brain–computer interface (BCI), brain–machine interface (BMI) or smartbrain project...
First and foremost, AI is about the world, its truths, laws and facts, how it is structured and functions, how it behaves and how it should be effectively interacted with.
In its essence, AI is a formal world ontology or machine metaphysics project based on hard reality and on ground truths and facts, deeply involving science and technology.
Real and True AI is an onto-techno-scientific project to develop a reality-based machine intelligence and learning.
Such an Ontological AI is grounded in reality, truths and facts, unlike statistical and machine learning approaches, where human-like AI is built to mimic/replicate/simulate humans and ends up disrupting them, as a Weapon of Mass Disruption.
How the brain/mind/intelligence creates or reflects or reproduces reality, physical, social or virtual, is the key to intelligence, human or machine.
Again, all critical achievements in human history are mainly ontological projects, of various scales and scopes, from Plato's Realism to Science and Technology, the modern extension of Natural Philosophy, Applied Ontology and Empirical Science.
Artificial Intelligence, with all its types and extensions, such as Machine Intelligence and Learning, Artificial Neural Networks, NLP/NLG, Large Language Models, Robotics and Automation, or AI/ML/DL algorithms in various applications, is no exception.
It is the onto-techno-scientific project involving STEAM (science, technology, engineering, arts, and mathematics).
Real Ontology AI is about enabling computing machinery and ICTs to effectively interact with the world, modeling and simulating, representing and explaining, understanding and predicting, adjusting and adapting reality.
Specifically, RAI is a machine intelligence whose power rests on its ability and capacity to model and simulate, detect and identify, process and store, infer and predict the reality status of ontological entities/variables across the universe of environments.
It models and simulates the world as the global graph network of ontological data variables, instantiated or reified by categorical, ordinal, interval, ratio or numerical/cardinal variables, including artificial neural networks.
The point is that all things are in dynamics, functionally interrelated with everything else with different strengths or weights.
In the Real AI Engine, human societies are modelled as changing constantly, as a reversible function of technology, politics, geography, climate, and thousands of other variables, which could be simulated by its onto-scientific models and causal algorithms, as in the illustrative sketch below.
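The following minimal Python sketch is one hedged way to picture this graph of ontological variables: typed nodes for the variables and weighted directed edges for their functional interrelations. All names here (OntologicalVariable, WorldGraph, the toy society example) are illustrative assumptions, not part of any published RSI or World Ontology specification.

```python
# A minimal, illustrative sketch of the "global graph network of ontological
# data variables" described above. All names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class Scale(Enum):
    """Measurement scales mentioned in the text."""
    CATEGORICAL = "categorical"
    ORDINAL = "ordinal"
    INTERVAL = "interval"
    RATIO = "ratio"


@dataclass
class OntologicalVariable:
    name: str            # e.g. "technology", "climate"
    scale: Scale         # measurement scale of the variable
    value: object = None


@dataclass
class WorldGraph:
    """Variables as nodes; weighted functional relations as directed edges."""
    variables: dict = field(default_factory=dict)   # name -> OntologicalVariable
    relations: dict = field(default_factory=dict)   # (src, dst) -> weight

    def add_variable(self, var: OntologicalVariable) -> None:
        self.variables[var.name] = var

    def relate(self, src: str, dst: str, weight: float) -> None:
        """Record that `src` functionally influences `dst` with a given strength."""
        self.relations[(src, dst)] = weight

    def influences_on(self, dst: str) -> list:
        """All (source, weight) pairs acting on a target variable."""
        return [(s, w) for (s, d), w in self.relations.items() if d == dst]


# Toy usage: a society state as a function of a few interrelated variables.
world = WorldGraph()
for name, scale in [("technology", Scale.RATIO), ("politics", Scale.ORDINAL),
                    ("climate", Scale.INTERVAL), ("society_state", Scale.ORDINAL)]:
    world.add_variable(OntologicalVariable(name, scale))
world.relate("technology", "society_state", 0.6)
world.relate("politics", "society_state", 0.3)
world.relate("climate", "society_state", 0.1)
print(world.influences_on("society_state"))
```

A real engine would, of course, need far richer edge semantics (causal direction, time, uncertainty); the sketch only fixes the basic node-and-weighted-edge vocabulary used in the text.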
The World Ontology as the World's Data/Information/Knowledge/Intelligence/Model
The World Ontology is a fundamental way of describing all of reality, everything in the world, including its prime components and elements:
Data entities are the entities, things or objects, general, physical or abstract, about which data is collected, information is stored and knowledge is created; they are typically defined by pure data/information/knowledge structures. Data entities are the objects of a data model, corresponding to one or several related tables in a database.
Entity/object data is an entity variable containing data and functions that can be used to manipulate that data. The object's data can vary in class (categorical, ordinal, interval, ratio, numerical or cardinal) or type (string, integer, etc.) depending on how it has been defined.
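As a small, hedged illustration of the entity-to-table correspondence just described, the Python sketch below maps a hypothetical data entity (City, with attributes of different classes and types) onto a single relational table; the entity, its attributes and the type mapping are assumptions made for the example, not a prescribed schema.

```python
# Illustrative only: a hypothetical "data entity" mapped to one database table.
import sqlite3
from dataclasses import dataclass, fields


@dataclass
class City:                 # a data entity: an object about which data is collected
    name: str               # categorical/string attribute
    population: int         # ratio/numerical attribute
    mean_temp_c: float      # interval/numerical attribute


def create_table_for(entity_cls, conn):
    """Map the entity's typed attributes to the columns of one table."""
    sql_types = {str: "TEXT", int: "INTEGER", float: "REAL"}
    cols = ", ".join(f"{f.name} {sql_types[f.type]}" for f in fields(entity_cls))
    conn.execute(f"CREATE TABLE {entity_cls.__name__} ({cols})")


conn = sqlite3.connect(":memory:")
create_table_for(City, conn)
conn.execute("INSERT INTO City VALUES (?, ?, ?)", ("Oslo", 709000, 6.3))
print(conn.execute("SELECT * FROM City").fetchall())
```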
In all, the World Ontology acts as the World's Data/Information/Knowledge/Intelligence/Model. The Ontological Data Model generalizes (a toy illustration follows the list below):
scientific facts, laws, theories and models, as the Standard Model of Elementary Particles
ontologies and classifications, taxonomies and typologies
conceptual data models
logical data models
physical data models
semantic data models
database models
computing data models
entity-relation models
data structures
abstract data types
programming object data types
knowledge graphs
AI models and ML, DL and ANN algorithms
Generative AI and Large Language Multimodal Foundation Models
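As a toy, non-authoritative illustration of such generalization, the sketch below reduces three of the listed model kinds, a taxonomy, an entity-relation fact and a knowledge-graph statement, to one common subject-predicate-object form; the predicates and example facts are ad hoc assumptions, not a standard of the World Ontology.

```python
# Toy reduction of several listed model kinds to subject-predicate-object triples.
Triple = tuple  # (subject, predicate, object)

taxonomy_fact: Triple = ("Electron", "is_a", "Lepton")               # classification/taxonomy
er_fact: Triple = ("Order#42", "placed_by", "Customer#7")            # entity-relation model
kg_fact: Triple = ("Higgs boson", "predicted_by", "Standard Model")  # knowledge graph

world_model = [taxonomy_fact, er_fact, kg_fact]


def query(store, predicate):
    """Return all statements sharing a given predicate."""
    return [t for t in store if t[1] == predicate]


print(query(world_model, "is_a"))  # -> [('Electron', 'is_a', 'Lepton')]
```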
The Old Quest for Real AI
The quest for a Real/General/Autonomous Machine Intelligence began some thousands of years ago:
Religion (gods as the first superhuman AI entities) >
Theology (Logos, the Word of God, or principle of divine reason and creative order) >
Philosophy (Superintellect as the locus of the full array of Platonic Forms) > Metaphysics >
Ontology (Superintelligence as a non-physical and non-mental entity) >
Logic >
Mathematics >
Physics >
Statistics >
Science & Engineering >
Computing >
Cybernetics >
ANNs >
Symbolic/Logical/General AI >
Machine Learning > Deep Learning : Weak/Narrow AI >
Neuro-symbolic General/Human-Level AI >
ASI > Hyperintelligent Hyperautomation >
Transdisciplinary AI = Trans AI >
Digital Superintelligence = Real AI
Man-Machine Hyperintelligence
Trans-AI: How to Build True AI or Real Machine Intelligence and Learning
Abstract
We are at the edge of colossal changes. This is a critical moment of historical choice and opportunity. The 5 years ahead of us could be the best we have ever had in human history, or among the worst, because we have all the power, technology and knowledge to create the most fundamental general-purpose technology (GPT), which could completely upend the whole of human history.
The most important GPTs were fire, the wheel, language, writing, the printing press, the steam engine, electric power, information and telecommunications technology, all to be topped by real artificial intelligence technology.
Our study refers to why and how the Real Machine Intelligence or True AI or Real Superintelligence (RSI) could be designed and developed, deployed and distributed in the next 5 years. The whole idea of RSI took about three decades, in three phases. The first conceptual model of Trans-AI was published in 1989. It covered all possible physical phenomena, effects and processes. The more extended model of Real AI was developed in 1999. A complete theory of superintelligence, with its reality model, global knowledge base, NL programming language, and master algorithm, was presented in 2008.
The RSI project was finally completed in 2020, with some key findings and discoveries published on the EU AI Alliance/Futurium site in 20+ articles. The RSI features a unifying World Metamodel (Global Ontology), with a General Intelligence Framework (Master Algorithm), a Standard Data Type Hierarchy and an NL Programming Language, to effectively interact with the world by intelligent processing of its data, from web data to real-world data.
The basic results with technical specifications, classifications, formulas, algorithms, designs and patterns, were kept as a trade secret and documented as the Corporate Confidential Report: How to Engineer Man-Machine Superintelligence 2025.
As a member of the EU AI Alliance, the author has proposed the Man-Machine RSI Platform as a key part of a Transnational EU-Russia Project. To shape a smart and sustainable future, the world should invest in RSI Science and Technology, for the Trans-AI paradigm is the way to an inclusive, instrumented, interconnected and intelligent world.
Resources
AI=5Trans-AI: Transcendental, Transdisciplinary, Transformative, Translational, Techno-Scientific Intelligence
SUPPLEMENT
Automated Mass Killing: how lethally dumb a human-like AI....
The Israeli military says it's using artificial intelligence to select many of these targets in real-time. The military claims that the AI system, named "the Gospel," has helped it to rapidly identify enemy combatants and equipment, while reducing civilian casualties.
But critics warn the system is unproven at best — and at worst, providing a technological justification for the killing of thousands of Palestinian civilians.
"Basically Gospel imitates what a group of intelligence officers used to do in the past".
But the Gospel is much more efficient. A group of 20 officers might produce 50-100 targets in 300 days. By comparison, the Gospel and its associated AI systems can suggest around 200 targets "within 10-12 days" — a rate that's at least 50 times faster.
The nature of AI systems is to provide outcomes based on statistical and probabilistic inferences and correlations drawn from historical data, not on any type of reasoning, factual evidence or causation. Given the track record of high error rates in AI systems, automating target selection imprecisely and with bias is not far from indiscriminate targeting.
Although humans still retain the legal culpability for strikes, it's unclear who is responsible if the targeting system fails. Is it the analyst who accepted the AI recommendation? The programmers who made the system? The intelligence officers who gathered the training data?
These robotic systems will likely be able to identify and kill targets with little or no human intervention, making future combat even faster and deadlier, but the nature of war will remain the same.
Automating Mass Killing: how lethally dumb a human-mimicking AI, LLMs, GPT, etc.
In multiple replays of a wargame simulation, OpenAI’s most powerful artificial intelligence chose to launch nuclear attacks. Its explanations for its aggressive approach included “We have it! Let’s use it” and “I just want to have peace in the world.”
These results come at a time when the US military has been testing such chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts, enlisting the expertise of companies such as Palantir and Scale AI.
The researchers tested LLMs such as OpenAI’s GPT-3.5 and GPT-4, Anthropic’s Claude 2 and Meta’s Llama 2. They used a common training technique based on human feedback to improve each model’s capabilities to follow human instructions and safety guidelines. All these AIs are supported by Palantir’s commercial AI platform...