Universal AI Platform::Techno-Scientia Universalis::Universal Mathematical Computing Metaphysics (UniMaCoM)
https://www.lxahub.com/stories/key-takeaways-from-the-2023-ml-ai-data-landscape-report

Current artificial intelligence systems such as ChatGPT do not have human-level intelligence; they are not even as smart as a dog. They are not very intelligent because they are trained solely on language.

In the future, there will be machines that are more intelligent than humans, which should not be seen as a threat.

“Those systems are still very limited, they don’t have any understanding of the underlying reality of the real world, because they are purely trained on text, massive amount of text... Most of human knowledge has nothing to do with language … so that part of the human experience is not captured by AI.”

Meta's AI chief Yann LeCun on the limitations of generative AI built on large language models.

We argue for building a General Machine Intelligence and Knowledge (MIK) Platform as a Universal Knowledge and AI Platform. MIK is real AI that is based on simulating reality scientifically rather than imitating human intelligence, possessing a causal understanding of the underlying reality of the real world.

MIK enables the intelligent processing of world data from multiple digital sources (scientific, climate, consumer, social media, economic) by unifying Data Science and Engineering (DSE), Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) models, algorithms and techniques to obtain optimal problem solutions across any range of environments.

From Fragmentariness to the Wholeness of Truth

The rise of big data, robotics and automation, machine learning and AI is one of the most significant phenomena of our information era, with profound, far-reaching implications for humanity: its science, technology, economy and industry, society and politics, geopolitics and everyday life.

In recent months, there has been an undeniable and exponential acceleration of Generative AI, Deep Neural Networks and Large Language Models, leading to the formation of a new dot-com-style stock market bubble with countless AI startups, as pictured by the 2023 MAD (Machine Learning, Artificial Intelligence, and Data) landscape.

A viable hedge against an AI-com crash is developing a universal AI platform integrating the most meaningful systems and valuable applications, online services and platforms.

This high aim requires a total AI paradigm shift: creating a highly integrative [digital and scientific] world [learning, inference and interaction] model of reality, data, intelligence, and computation.

To achieve this goal, we integrate mathematics, statistics and computer science with general scientific metaphysics and universal formal ontology, as Universal Mathematical Computing Metaphysics (UniMaCoM).

Such a "Techno-Scientia Universalis" has been proposed as the Universal Model and Language of a Generalized AI and Machine Learning.

The rationales are evident.

Metaphysics is the material science of reality, its entities and interactions, knowledge and intelligence. Or, "it is the systematic study or science of the first principles of being and of knowledge; the doctrine of the essential nature and fundamental relations of all that is real" (MWebster).

Ontology is the general science of the structure of reality. Or, "the science of real being; the philosophical theory of reality; the doctrine of the universal and necessary characteristics of all existence" (MWebster).

Mathematics is the formal science of reality: its abstract categories, objects, quantities and numbers, values and variables, structures and functions, forms and patterns, orders and relationships.

Computer science is about the modeling and simulation of reality, its pieces, its domains, or the whole of the universe.

We are after designing and developing, deploying and distributing man-machine hyperintelligence as a universal AI platform, a technological implementation of UniMaCoM.

Introduction

Since the ancient civilizations there have been two universal sciences: metaphysics, the science of reality, its entities and interactions, patterns and structures; and mathematics, the science of the formal modeling of reality, its entities and relationships, patterns and structures.

Nothing has been as valuable, yet as challenging, as the idea of metaphysics, the universal science of all reality and intelligence, introduced as "the first philosophy and science".

It involved the wisest minds ever, such as Parmenides or Heraclitus, Plato or Aristotle, Descartes or Kant, Leibniz or Newton, Peirce or Whitehead, each with their own interpretation and understanding. For example, Peirce divided metaphysics into (1) ontology or general metaphysics, (2) psychical or religious metaphysics, and (3) physical metaphysics.

To our mind, nothing is so simply beautiful and data/information/knowledge-condensed as mathematical computational metaphysics, if properly specified or reified, universalized and formalized, as UniMaCoM with its universal formal ontology (UFO).

Metaphysics and mathematics are two sides of reality, and their integration is natural, as mathematical metaphysics or metaphysical mathematics, or UMM.

As metaphysics, mathematics is the universal language underpinning the world learning and understanding in various domains and fields, from the philosophical sciences to natural sciences to engineering and technology, art and economics. As metaphysics, it has a fundamental role in modeling all the world around us and beyond.

Mathematics is a universal language that permeates every aspect of life, quantifying complex patterns and structures of the world, at all its scopes and scales. As metaphysics, it embraces the natural sciences, technology, art, economics, and philosophy.

Lord Kelvin wisely noted that "mathematics is true metaphysics"; for “mathematics is the language with which God has written the universe” (Galileo Galilei), with all its numerous branches and unlimited applications.

https://medium.com/@kalaiaravinth5555/mathematics-the-universal-language-of-everything-fd43d79adc82

UMM is the transdisciplinary (scientific, formal and computational) modeling of reality in terms of universal ontology and mathematics, data science and statistics, computing science and artificial intelligence.

UMM is to bring total paradigm shifts in science and technology, engineering and mathematics itself:

Universal STEM

Universal AI Platform [Trans-AI: How to Build True AI or Real Machine Intelligence and Learning]integrating digital and emerging technologies


As the best example, we demonstrate here how a universal AI platform could be designed and developed as an applied UniMaCoM technology.

Why Mathematical Metaphysics is "all we need" for human and machine intelligence

First and foremost, it is to teach humans, real AI and its truly intelligent machines to learn the world at large and in detail: the categories of being, reality or existence as the most fundamental and broadest classes of things, the highest genera or kinds of entities and interactions in the world.

There is only one science studying the whole of reality/world/being/existence/universe, its nature and first principles, as interacting with intelligence/mind/consciousness.

It is universal mathematical computing metaphysics with its universal formal/scientific/computing ontology (UFO/USO/UCO) of all and everything, as the universal world model of realities, ontological and physical, mental and social, information and digital.

The UFO/USO/UCO is about the totality and integrity, wholeness and completeness of all world knowledge, of philosophy, science, engineering, technology, mathematics, arts, and practice, specially, of computer science and engineering, technology and applications.

It is a great time for "the first science" of the fundamental nature of reality, physical or digital, and intelligence, natural or artificial, with a virtually unlimited data, compute memory and power.

Specialization, professionalization, sectorality, or divisions of labor, mental and menial, were a necessity at the beginning of the scientific revolution.

Now, we are inundated with big data, exacerbated by deepfake AI technology.

Science is supposed to be the sum of universal knowledge, systematizing facts, instances and data. Instead, it creates all sorts of silos, ending with a silo mentality that divides us into many adversarial groups.

As a result, nothing unites us as a single world community of Homo sapiens, unlike the Ancient Greeks, who had great respect for metaphysical ideals.

Thus, universal scientific metaphysics is all we need to sort out our global problems and issues, from climate change to destructive artificial intelligence, as well as to create transdisciplinary sciences, engineering and technologies, such as the internet of everything or the universal trans-AI platform.

The UCO allows us to establish the fundamentals of real artificial intelligence and true machine learning: what the nature of generative AI is, what its basics and building blocks are, and how it could become really intelligent.

To build truly intelligent machines, we have to teach them the universal world model and data ontology (UFO), or how to interact with the world, machines and humans, basing AI/ML/DL/GPT models on the world hypergraph interaction networks.

The essence, the "brains", of all AI is not deep neural networks (DNNs) but the universal world model engine and data ontology, which is an implementation of the universal formal ontology of all reality (a universal AI classifier) encoded as the world hypergraph interaction networks.

A universal AI classifier, as the master algorithm, classifies all the prime things in the world, explaining, discovering and predicting its causal regularities, interactions and structures. It covers all meaningful special classifiers, deterministic and logical, statistical and probabilistic, such as AI models, ML algorithms or DNNs.

Note that there is no convincing evidence that machine deep learning methods, including transformers, outperform statistical methods. A simple statistical ensemble outperforms most individual deep-learning models: it is 25,000 times faster and only slightly less accurate than an ensemble of deep learning models. The DL ensemble takes more than 14 days to run and costs around USD 11,000, while the statistical ensemble takes 6 minutes to run and costs about USD 0.50.
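To make the comparison concrete, here is a minimal sketch of a simple statistical forecasting ensemble that averages three classic baseline forecasters (naive, seasonal naive and drift); the series and the seasonal period are illustrative assumptions, not data from the cited comparison.

```python
from statistics import mean

# Illustrative series with a seasonal period of 4 (an assumption for the demo).
history = [10, 12, 14, 11, 11, 13, 15, 12, 12, 14, 16, 13]
SEASON = 4

def naive(h):
    # Repeat the last observed value.
    return h[-1]

def seasonal_naive(h):
    # Repeat the value from one season ago.
    return h[-SEASON]

def drift(h):
    # Extrapolate the average historical slope one step ahead.
    return h[-1] + (h[-1] - h[0]) / (len(h) - 1)

def ensemble(h):
    # The "statistical ensemble" is simply the mean of the individual forecasts.
    return mean([naive(h), seasonal_naive(h), drift(h)])

print(round(ensemble(history), 2))
```

Each component is a one-line rule with no training loop, which is why such an ensemble runs in minutes rather than days.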

Based on universal mathematical metaphysics, the world's modeling structure/formula is deduced as meta-mathematical models of reality, covering its real-world entities, categories, classes and abstract data structures, to make explanations or predictions and provide inference or insight. It could be programmed or pre-trained, encoded and embedded as the universal world model engine of the universal AI platform, with universal learning and understanding of reality following the universal algorithm:

Reality causes Entity causes State causes Change causes Interaction causes Data causes Intelligence causes Real AI Technology causes Intelligent Reality.

[Universal Ontology for Artificial Intelligence: building machine metaphysics for machine intelligence and learning]

The World's Formula as the Universal World Model AI Engine

Machine learning and artificial intelligence, as statistical classifiers, generative or discriminative, with their classification algorithms and pattern recognition systems, are superficially correlative and meaningless without ontological, semantic and scientific classifiers.

The system of categories, human and machine, that encompasses the classification of all things in the world consists in the Universal Formal Ontology (UFO).

Covering statistical and probabilistic and scientific classifiers, such a universal classifier is formalized as the Universal Computing Ontology of Fundamental Categorical Variables of the World (the World's Formula, Algorithm or Structure):

W = <E, S, C, I; D; F>, where

  • World, W, where W tensor world variables stand for all possible worlds and realities, physical, biological, mental, social, information, digital, virtual, cybernetic or cyber-physical, as statistical populations or universe of discourse or knowledge domains or subject matter
  • Entity, E, where E tensor entity variables stand for all entities, substances and objects, individuals and instances
  • State, S, where S tensor state variables stand for all states, qualities and quantities, as number, time and space
  • Change, C, where C tensor change variables stand for all sorts and kinds of phenomena, changes and actions, events and operations, activities and functions, as causes and effects, or interactive causality, C X C
  • Interaction, I = W x W, where I tensor interaction variables stand for all interactions, qualitative and quantitative, causal relationships, connections and links, correlations and associations, communication, processes and forces, such as the fundamental interactions or fundamental forces, gravity, electromagnetism, the weak interaction and the strong interaction, ruling all of physical reality. Note the principal difference from the Reality Structure Diagram: Relation is replaced with Interaction, for Relation is hardly a prime ontological category, but rather a logical, epistemological and mathematical abstraction approximating interactions.
  • Data/Information/Knowledge/Intelligence Universe, D, where D is the World Data Metric Space, taking the form of a global interaction of reality and the Data Universe in a self-dual homomorphic metaphysical identity "structure-preserving" mapping, D: W <> M, with 2 Qualitative (Categorical) and 3 Quantitative (Numerical, Discrete or Continuous) scales or measures, variables or data, M.
  • World's Data representation function, F: D <> {0,1}, as the encoding/decoding and embedding techniques converting the world's data into a digital form as a series of impulses, digital, machine data, a structured numerical format to be processed by computers, as World Embeddings, Entity Embeddings, State Embeddings, Change Embeddings, or Interaction Embeddings. Traditional examples are ASCII encodings, URL encodings or programming language codes. All traditional ML/AI methods work with input feature vectors, requiring input features to be digitally numerical. It is as in a word embedding, where words, phrases or sentences are mapped to vectors of real numbers using probabilistic language modeling or feature/representation learning techniques.
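As a minimal sketch, the tuple W = &lt;E, S, C, I; D; F&gt; could be rendered as plain data structures; all class names, fields and the toy encoding below are illustrative assumptions, not part of the UniMaCoM specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:            # E: an entity variable
    name: str

@dataclass(frozen=True)
class State:             # S: a quality/quantity borne by an entity
    entity: Entity
    quality: str
    quantity: float

@dataclass(frozen=True)
class Change:            # C: a transition between states
    before: State
    after: State

@dataclass
class World:             # W = <E, S, C, I; D; F>
    entities: list
    states: list
    changes: list
    interactions: list   # I = W x W: pairs of interacting entities

    def encode(self, state: State) -> list:
        # F: D <> {0,1}-style toy embedding: a categorical bit plus a number.
        return [hash(state.quality) % 2, state.quantity]

sun = Entity("sun")
hot = State(sun, "temperature", 5778.0)
w = World([sun], [hot], [], [(sun, sun)])
print(len(w.encode(hot)))
```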

Universal Intelligence: World > Data > Digital Data > Computing > Interaction

The DIKI Universe modeling embraces "the Cognitive-Theoretic Model of the Universe taking the form of a global coupling or superposition of mind and physical reality in a self-dual metaphysical identity M: <> U, which can be intrinsically developed into a logico-geometrically self-dual, ontologically self-contained language incorporating its own medium of existence and comprising its own model therein":

Machine Intelligence (MI) < M < D (W)

It has paradigmatic consequences shifting the mainstream approaches to Data and Intelligence, human or machine.

In general, AI refers to the intelligence demonstrated by machines, i.e. machine intelligence and learning (MIL).

As such, there is a human-based or anthropomorphic AI and a reality-based or real AI.

Or, we have two classes of AI/MIL, as truth and falsity: real, true, objective and scientific AI versus irreal, false, subjective and nonscientific AI, as different as General Global AI Models vs. Narrow Specialized AI Models.

The unreal AI models are all about making computers and machines learn, reason or make decisions like humans, replicating the human body/brain/behavior/business/tasks.

The real and true AI is NOT to "implement human intelligence in machines i.e., create systems that understand, think, learn, and behave like humans", involving human cognitive science, neuroscience, psychology, etc., as it is pictured here:

https://www.hindawi.com/journals/cin/2021/8893795/

The reality-based AI models are all about making computers and machines effectively and sustainably interact with the world by directly simulating and modeling reality itself, in all its complexity and dynamics, with its entities, changes and interactions, laws, rules and patterns.

So, the world's structural formula could be programmed or pre-trained, encoded and embedded as the universal world model engine of the universal AI platform with the universal learning and understanding of reality following the universal algorithm:

Reality causes Entity causes State causes Change causes Interaction causes Data causes Intelligence causes Real AI Technology causes Intelligent Reality.

AI's Universal Classifier/Master Algorithm/General Model

In computing science, data science and machine learning, "a classifier is an algorithm that automatically orders or categorizes data into one or more of a set of classes." So a classifier is the algorithm itself, the rules or mathematical functions used by machines to classify data. In turn, a classification model is the result of the classifier's machine learning: trained using the classifier, it is the model that ultimately classifies the data, using training data sets.

There are various statistical algorithms, sold as ML/AI algorithms, depending on the sort of training data sets, labeled or unlabeled, structured or unstructured:

  • linear regression, calculating how the X input (e.g., words and phrases) relates to the Y output (opinion polarity: positive, negative, neutral)
  • logistic regression, estimating the probability of a dependent categorical variable Y, given an independent categorical or numeric variable X, predicting a binary outcome: Yes/No, Existence/Non-existence, Pass/Fail
  • naive Bayes classifiers, a family of probabilistic algorithms that use Bayes’ Theorem to calculate the probability of words or phrases falling into a set of predetermined “tags” (categories) or not. This can be used for text analysis: news articles, customer reviews, emails, general documents, etc.
  • supervised learning algorithms
  • unsupervised learning algorithms
  • semi-supervised learning algorithms
  • reinforcement, "trial and error" learning algorithms
  • artificial neural networks as an ordered set of algorithms
  • self-supervised learning algorithms...
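For illustration, a naive Bayes text classifier of the kind listed above can be sketched in a few lines; the training texts and sentiment tags are invented examples, and the uniform prior is a simplifying assumption.

```python
from collections import Counter, defaultdict
from math import log

# Toy labeled training set (illustrative data, two sentiment "tags").
train = [("great phone love it", "positive"),
         ("terrible battery hate it", "negative"),
         ("love the screen great value", "positive"),
         ("hate the lag terrible design", "negative")]

class_docs = defaultdict(list)
for text, tag in train:
    class_docs[tag].extend(text.split())

vocab = {word for words in class_docs.values() for word in words}

def predict(text):
    scores = {}
    for tag, words in class_docs.items():
        counts = Counter(words)
        # Uniform log prior plus Laplace-smoothed log likelihoods (Bayes' Theorem).
        score = log(0.5)
        for word in text.split():
            score += log((counts[word] + 1) / (len(words) + len(vocab)))
        scores[tag] = score
    return max(scores, key=scores.get)

print(predict("love this great camera"))
```

Laplace smoothing (the `+ 1`) keeps unseen words like "camera" from zeroing out a class probability.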

In logic, mathematics and computer science, metalogic and computability theory, an algorithm is an effective method or effective procedure: a mechanical method, procedure or process for solving a problem by "any effective means from a specific class".

Or, an algorithm is a finite sequence of rigorous instructions to solve a class of specific problems or to perform a computation.

Algorithms are used as specifications for performing calculations and data processing.

"Advanced algorithms can use conditionals to divert the code execution through various routes (automated decision-making) and deduce valid inferences (automated reasoning)", thus achieving automation.

Algorithms can be expressed within space and time and in a well-defined formal language for calculating a function.

The UFO master algorithm generalizes the concept of an algorithm, from quantum algorithms to computer science algorithms, informing all possible types and sorts of algorithms, as in:

  • Logico-philosophical algorithms: Induction > Abduction > Deduction > Analogy
  • Mathematical algorithms: functions, maps, mappings, rules assigning to each element of a set X (called the domain of the function) exactly one element of a set Y (called the codomain of the function); arithmetical operations; equations, algebraic/differential/integral/transcendental; the Euclidean algorithm, or Euclid's algorithm; binary exponentiation; modular arithmetic; the calculus algorithms: differentiation, the chain rule, integration; analytic geometry...

https://medium.com/spidernitt/mathematical-algorithms-b28112f14fb0
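Two of the mathematical algorithms just named, Euclid's algorithm and binary exponentiation with modular arithmetic, can be sketched directly:

```python
def gcd(a, b):
    # Euclid's algorithm: gcd is invariant under (a, b) -> (b, a mod b).
    while b:
        a, b = b, a % b
    return a

def bin_pow(base, exp, mod):
    # Binary exponentiation: O(log exp) multiplications, all reduced mod `mod`.
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

print(gcd(252, 105))         # -> 21
print(bin_pow(3, 13, 1000))  # 3^13 mod 1000 -> 323
```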

  • Scientific algorithms: Observation > Induction > Hypothesis > Testing/Experiment > Evaluation > Theory > Technology > Observation
  • Physical algorithms, physical laws, effects and equations
  • Chemical algorithms, chemical formulas and reactions
  • Biological algorithms, genetic algorithms
  • Technological algorithms, engineering design algorithms
  • Computing Algorithms:

https://www.techtarget.com/whatis/definition/algorithm#:~:text=An%20algorithm%20is%20a%20procedure,throughout%20all%20areas%20of%20IT

Traditional programming algorithms following a set of instructions to transform data into a desired output

ML algorithms enabling machines to solve problems based on past observations without being explicitly programmed: compare data, find patterns, or learn by trial and error to accurately predict with no human intervention

Pattern recognition algorithms are about "the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories".

Pattern-matching algorithms check a sequence of tokens or data structures for the presence of the constituents of some pattern (regularities in the world).

  • Social/Political/Economic algorithms: constitutional codes, policies and norms, laws and regulations... Examples of regulatory compliance algorithms include digital technology regulations and directives, legislation and regulatory frameworks: the European Union Digital Services Act (DSA), which protects the rights of Internet users by holding ISPs, search engines, social media platforms, online marketplaces, content delivery networks and other online services legally accountable, including transparency about their algorithms; the EU's General Data Protection Regulation (GDPR) for the protection of data in the European Union; and the European Union Artificial Intelligence Act (EU AI Act) for the development, marketing and use of AI, which legally defines AI and imposes documentation, auditing and process requirements on AI providers.

Crucially, the intelligent core of the Universal World Model AI Engine is the World's Hypergraph Interaction Networks, acting as the Master Algorithm for all the traditional advanced, task-specific algorithms and programs, data-driven machine learning models and AI technologies, from ANNs and DNNs to NLG/NLU to LLMs and generative AI.

Global Hypergraph Interaction Networks

Global Hypergraph Interaction Networks, I = W x W, cover all possible real-world or abstract interactions, between and among entities, states, changes or processes, real-world and abstract, as in:

metaphysical mathematical or ontological interactions,

physical interactions,

chemical interactions,

biological interactions, as neural networks,

social interactions,

cultural interactions, scientific interactions

technological interactions,

environmental interactions,

digital interactions,

man-environment interactions

machine-environment interactions,

man-machine interactions, user-machine interfaces, brain-computer interfaces, data-data interactions, as artificial neural networks of different topologies.
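As a minimal sketch of what distinguishes a hypergraph from an ordinary graph: a hyperedge can join any number of nodes, so one interaction can bind many entities at once. The node and edge names below are illustrative assumptions.

```python
# An interaction hypergraph as named hyperedges, each a set of participating nodes.
# Unlike a graph edge (exactly two endpoints), a hyperedge has any arity.
hypergraph = {
    "gravity":        {"earth", "moon", "sun"},
    "man-machine":    {"user", "interface", "machine"},
    "brain-computer": {"brain", "electrode", "computer"},
}

def incident(node):
    # All interactions (hyperedges) a given node participates in.
    return sorted(edge for edge, nodes in hypergraph.items() if node in nodes)

print(incident("machine"))
```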

Global Hypergraph Entity Networks

Global Hypergraph Entity Networks, I = E x E, cover all possible real-world or abstract interactions between and among entities, real-world and abstract, as matter, mind or data, as in:

metaphysical mathematical or ontological systems,

physical systems,

chemical systems,

biological systems,

mental systems,

social systems,

cultural systems, scientific systems, conceptual systems,

technological systems,

environmental systems, as eco systems,

digital systems,

man-environment systems,

machine-environment interactions,

man-machine systems, user-machine systems, brain-computer systems,

data entity computation systems

Statistical ML/AI classifiers or knowledge graphs can serve as examples of data entity systems. The latter is "a structured representation of knowledge that consists of entities, relationships, and their attributes", where nodes represent entities (real-world objects, concepts or abstract objects, e.g., people, places, organizations, mental objects, or mathematical objects) and edges represent relationships between these entities (labeled with properties or attributes to provide context and information).

Knowledge graphs can store and organize data as information or knowledge for humans and machines, and are employed in ML modules, semantic search, recommendation systems, natural language processing, and AI models to enable data search and information retrieval, reasoning and inference.
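A knowledge graph of this kind can be sketched minimally as a set of (subject, predicate, object) triples with wildcard queries; the facts below are ordinary illustrative examples, not drawn from any particular system.

```python
# Minimal knowledge graph: nodes are the subjects/objects, edges are the predicates.
triples = [
    ("Ada Lovelace", "profession", "mathematician"),
    ("Ada Lovelace", "collaborated_with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
]

def query(subject=None, predicate=None, obj=None):
    # Pattern match over triples: None acts as a wildcard on that position.
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(len(query(subject="Ada Lovelace")))
```

Traversing such triples (e.g., from a person to everything they designed) is the basis of graph-backed search and inference.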

Global Hypergraph State Networks

Global Hypergraph State Networks, I = C = S x S, cover all possible real-world or abstract interactions between and among states, real-world and abstract, as in:

metaphysical mathematical or ontological changes,

physical changes,

chemical changes,

biological changes,

mental changes,

social changes,

cultural changes, scientific changes, conceptual changes,

technological changes,

environmental changes, as climate change,

digital changes,

man-environment changes,

machine-environment changes,

man-machine changes, user-machine changes, brain-computer changes,

data changes

Global Hypergraph Causal Networks

Global Hypergraph Causal Networks, I = C x C, cover all possible real-world or abstract interactions between and among changes or events, processes or activities, real-world and abstract, all as causes and effects, as in:

metaphysical mathematical or ontological processes,

physical processes,

chemical processes,

biological processes,

mental processes,

social processes,

cultural processes, scientific processes, conceptual processes,

technological processes,

environmental processes,

digital processes,

man-environment processes,

machine-environment processes,

man-machine processes, user-machine processes, brain-computer processes,

causal data graph networks

IT/AI's Universal Data Architecture

Data Universe, D, with D: W (E, S, C, I) <> M (N, O, I, P, R), a self-dual World-Data ontological identity, as in:

Universe <> Data Universe, as the metaphysical universe, the mathematical universe, the physical universe, the mental universe, the social universe, the technological universe, the digital universe, the virtual universe, the cyber-physical universe, the AI universe

Entity <> Data Entity, as the metaphysical entities, the mathematical entities, the physical entities, the mental entities, the social entities, the technological entities, the digital entities, the virtual entities, the cyber-physical entities, AI entities

State <> Data State; the metaphysical states, the mathematical states, the physical states, the mental states, the social states, the technological states, the digital states, the virtual states, the cyber-physical states, AI states

Change <> Data Change; the metaphysical changes, the mathematical changes, the physical changes, the mental changes, the social changes, the technological changes, the digital changes, the virtual changes, the cyber-physical changes, AI changes

Interaction <> Data Interaction, the metaphysical interactions, the mathematical relationships, the physical interactions, the mental relationships, the social interactions, the technological interactions, the digital interactions, the virtual interactions, the cyber-physical interactions, AI interactions.

For example, statistical units of observation (experimental units, sampling units, or data points), together with the unit of analysis, the surrounding entity or set of entities being studied, belong to the data entities, whether it be a quantum object, a single object, a person, an animal, a plant, a manufactured item, a country, a planet, a star, a galaxy, or the universe. If the unit of observation is the individual, a data point might be the values of income, wealth, age, number of dependents, etc. The data units are in one-to-one correspondence with the data values, formally typed by the type of measurement scale.

In all, there are five levels of measurements or scales of measure describing the nature of information within the values assigned to studied variables (qualitative and quantitative scales, measures or data):

  1. categorical/nominal data scales, N: the data can be categorized or classified, mutually exclusive and collectively exhaustive. Examples of categorical classifications include all categories, classes and typologies, ontologies or taxonomies, such as ontological, scientific, technological or cultural classes. Common examples of nominal variables include: parts of speech, genotype, blood type, zip code, gender, race, eye color, political party
  2. ordinal data scales, O, the data can be categorized and ranked, ordered or scaled. Examples of ordinal variables include socio economic status (low income, middle income, high income), education level (high school, college, BS, MS, PhD), income level (“less than 50K”, “50K-100K”, “over 100K”), satisfaction rating (“extremely dislike”, “dislike”, “neutral”, “like”, “extremely like”, sentiment analysis rating where ML classifiers are trained to analyze text for opinion polarity and output the text into the class: Positive, Neutral, or Negative).
  3. interval data scales, I, the data can be categorized and ranked, and evenly spaced. Examples of interval variables include: temperature (F), temperature (C), pH, SAT score (200-800), credit score (300-850).
  4. ratio data scales, P, the data can be categorized, ranked, evenly spaced and has a natural zero (examples of ratio variables include: enzyme activity, dose amount, reaction rate, flow rate, concentration, pulse, weight, length, temperature in Kelvin (0.0 Kelvin really does mean “no heat”), survival time, etc.)
  5. numerical scales, R: numbers, number sets or number systems, natural, integer, rational, real and complex; the data can be categorized, ranked, evenly spaced, has a natural zero and can be numbered, counted, labeled or measured.
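The five scales above can be sketched as a lookup from each scale to the descriptive statistics it licenses; this is a common rule of thumb rather than a formal standard, and the mapping below is an illustrative assumption.

```python
# Which summary statistics each measurement scale licenses (illustrative mapping).
SCALES = {
    "nominal":   {"mode"},
    "ordinal":   {"mode", "median"},
    "interval":  {"mode", "median", "mean"},
    "ratio":     {"mode", "median", "mean", "ratio"},
    "numerical": {"mode", "median", "mean", "ratio", "count"},
}

def allows(scale, statistic):
    # True if the given statistic is meaningful at the given scale.
    return statistic in SCALES[scale]

print(allows("ordinal", "mean"), allows("ratio", "mean"))
```

Typing variables this way lets a data pipeline reject meaningless operations, e.g., averaging satisfaction ranks.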

Crucially, the UFO Data Universe determines the machine's data architecture: a set of rules, policies, standards and models that govern and define the types of data, create and manage the flow of data, and determine how it is processed across IT/AI/ML/DL/GPT systems and applications. In particular, this concerns enterprise ML/AI data architecture, consisting of three different layers or processes:

  • Conceptual/business model: Includes all data entities and data interactions and provides a conceptual or semantic data model
  • Logical/software system model: Defines how data entities and data interactions are linked and provides a logical data model
  • Physical/hardware technology model: Provides the interaction data mechanism for a specific process and functionality, or how the actual data architecture is implemented on underlying technology infrastructure

THE UNIVERSAL AI PLATFORM FOR NARROW AI, ML, DL, AGI, ASI, AND HUMAN INTELLIGENCE

Now, we can understand how and why to create real and truly intelligent machines as universal AI networks integrating all interactive and interoperable AI forms and models, algorithms and systems (see the Resources):

Universal AI Platform = General AI = IAI = Real AI = Transdisciplinary AI = Man-Machine Hyperintelligence = UFO + Symbolic/Logical/General AI + Weak/Narrow AI + Machine Learning + Deep Learning + Federated Learning + ANNs + LLMs (GPT > ChatGPT >) +5-6G + Multi-Access Edge Computing + the Internet of Things = Global AI Internet of Everything

The Universal AI Platform (Trans-AI) embraces the major AI innovations, such as those specified in the 2022 Gartner Hype Cycle for AI and the 2023 Hype Cycle for AI Technology and Emergent Technology:

Data-centric AI:

  • synthetic data
  • knowledge graphs
  • data labeling and annotation

Model-centric AI:

  • physics-informed AI
  • composite AI
  • causal AI
  • generative AI
  • foundation models and deep learning

Applications-centric AI:

  • AI engineering
  • decision intelligence
  • edge AI
  • operational AI systems
  • ModelOps
  • AI cloud services
  • smart robots
  • natural language processing (NLP)
  • autonomous vehicles
  • intelligent applications
  • computer vision

Human-centric AI:

  • AI trust, risk and security management (TRiSM)
  • responsible AI
  • digital ethics
  • AI maker and teaching kits

Causal AI includes different techniques, like causal graphs and simulation, that help uncover causal relationships to improve decision making.
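The causal-graph-and-simulation idea can be made concrete with the textbook rain/sprinkler/wet-grass example. This is a minimal sketch: the graph, the probabilities, and the `simulate` helper are invented for illustration, not drawn from any library; the point is how an intervention (Pearl's do-operator) differs from mere observation.

```python
# Minimal sketch of causal graphs and simulation: Rain -> Sprinkler,
# Rain -> WetGrass, Sprinkler -> WetGrass, with invented probabilities.
import random

def simulate(do_sprinkler=None, n=20_000, seed=0):
    """Estimate P(wet grass), optionally under the intervention do(sprinkler)."""
    rng = random.Random(seed)
    wet_count = 0
    for _ in range(n):
        rain = rng.random() < 0.3
        # Observed mechanism: the sprinkler is rarely on when it rains.
        sprinkler = rng.random() < (0.1 if rain else 0.5)
        if do_sprinkler is not None:
            # Intervention: force the sprinkler, severing the Rain -> Sprinkler edge.
            sprinkler = do_sprinkler
        wet = (rain and rng.random() < 0.9) or (sprinkler and rng.random() < 0.8)
        wet_count += wet
    return wet_count / n

# Intervening (the do-operator) answers a different question than
# conditioning on an observation, which is all a purely statistical
# (correlational) model can compute.
print(f"P(wet)                 ~ {simulate():.2f}")
print(f"P(wet | do(sprinkler)) ~ {simulate(do_sprinkler=True):.2f}")
```

Forcing the sprinkler on raises the wet-grass probability regardless of rain; a correlational model conditioning on "sprinkler observed on" would instead mix in the fact that the sprinkler tends to be off on rainy days. That gap is what causal AI techniques are built to close.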

Conclusion

We have introduced UniMaCoM as the transdisciplinary modeling of reality in terms of universal metaphysics, ontology and mathematics, data science, computing science and artificial intelligence. It has all the conceptual potential to serve as the most effective master algorithm for truly intelligent machines.

The world's meta-mathematical formula could be programmed or pre-trained, encoded and embedded as the universal world model engine of the universal AI platform, with universal learning and understanding of reality following the universal development algorithm:

Reality causes Entity causes State causes Change causes Interaction causes Data causes Intelligence causes Real AI Technology causes Intelligent Reality.
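The development chain above can be read as an ordered pipeline of stages. A hypothetical sketch, assuming only the stage names from the text (the `causal_steps` helper is an illustrative invention):

```python
# Hypothetical reading of the universal development algorithm above as an
# ordered pipeline; stage names come from the text.
STAGES = ["Reality", "Entity", "State", "Change", "Interaction",
          "Data", "Intelligence", "Real AI Technology", "Intelligent Reality"]

def causal_steps(stages):
    """Pair each stage with the stage it causes."""
    return [f"{cause} causes {effect}"
            for cause, effect in zip(stages, stages[1:])]

for step in causal_steps(STAGES):
    print(step)
# First line printed: "Reality causes Entity"
```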

Resources

Real AI Project Confidential Report: How to Engineer Man-Machine Superintelligence 2025: AI for Everything and Everyone (AI4EE); 179 pages, EIS LTD, EU, Russia, 2021

Content

The World of Reality, Causality and Real AI: Exposing the great unknown unknowns

Transforming a World of Data into a World of Intelligence

WorldNet: World Data Reference System: Global Data Platform

Universal Data Typology: the Standard Data Framework

The World-Data modeling: the Universe of Entity Variables

Global AI & ML disruptive investment projects

USECS, Universal Standard Entity Classification SYSTEM:

The WORLD.Schema, World Entities Global REFERENCE

GLOBAL ENTITY SEARCH SYSTEM: GESS

References

Supplement I: AI/ML/DL/CS/DS Knowledge Base

Supplement II: I-World

Supplement III: International and National AI Strategies

Trans-AI: How to Build True AI or Real Machine Intelligence and Learning

EIS HAS CREATED THE FIRST TRANS-AI MODEL FOR NARROW AI, ML, DL, AND HUMAN INTELLIGENCE

Why and How to Build Digital Superintelligence: Real AI, Superhuman Intelligent Machines, Superintelligent Machines, or Superintelligent AI

A Global AI Infrastructure: RAI vs. BRI as the Most Valuable Project on the Earth

DISTINGUISHING SCIENTIFIC AI FROM PSEUDOSCIENTIFIC AI

Universal Ontology for Artificial Intelligence: building machine metaphysics for machine intelligence and learning

Scientific AI vs. Pseudoscientific AI: Big Tech AI, ML, DL as a pseudoscience and fake technology and mass market fraud

The rise of Real AI Industry: Causal Interactive Learning vs. Deep Statistical Learning

Real AI vs. Unreal AI = Causal Intelligence Interactive Machines (CIIM): Converging Symbolic, Predictive, Generative, Industrial, and Causal AI

Machine's Worldview: Standard Universal Ontology (SUO): General Machine Intelligence and Learning = Real/True/Interactive AI/ML/DL/NNs

Space at large > Hyperspace > Hypercomputing > Hyperintelligence

Real AI vs. Human AI: the Best Ideas vs. the Worst Ideas

==========================================

Reality, Universal Ontology and Knowledge Systems: Toward the Intelligent World

NextGen AI as Hyperintelligent Hyperautomation: Universal Formal Ontology (UFO): World Model Computing Engine

FEDERATED LEARNING: A PRIVACY-PRESERVING PARADIGM TRANSFORMING AI

AI Foundation Model: a paradigm for Real AI Technology

AI Foundation Models. Part II: Generative AI + Universal World Model Engine

The Paradigm Shifts in Artificial Intelligence

SUPPLEMENT: Who studies Mathematical Metaphysics

There are few meaningful studies of it, if any.

One worth mentioning is a Theory-of-Everything study, AN INTRODUCTION TO MATHEMATICAL METAPHYSICS.

ABSTRACT: Since the time of Aristotle, metaphysics has been an ill-defined term. This paper defines it as a logically idempotent metalinguistic identity of reality which couples the two initial ingredients of awareness: perceptual reality (the basis of physics), and cognitive-perceptual syntax, a formalization of mind. The explanation has been reduced to a few very simple, clearly explained mathematical ingredients. This paper contains no assumptions or arguable assertions, and is therefore presented as an advanced formulation of logic which has been updated for meaningful reference to the structure of reality at large. This structure, called the Cognitive-Theoretic Model of the Universe or CTMU, resolves the problems attending Cartesian dualism by replacing dualism with the mathematical property of self-duality, meaning (for reality-theoretic purposes) the quantum-level invariance of identity under permutation of objective and spatiotemporal data types. The CTMU takes the form of a global coupling or superposition of mind and physical reality in a self-dual metaphysical identity M:L<>U, which can be intrinsically developed into a logico-geometrically self-dual, ontologically self-contained language incorporating its own medium of existence and comprising its own model therein.

Keywords: Metaphysics; Mathematics; Mathematical Metaphysics

Its author is Christopher Langan, an independent researcher. "The CTMU theory-universe-model dates from the mid-1980’s, and has since been extensively developed in nearly total isolation from the academic community".

Langan's IQ was estimated on ABC's 20/20 to be between 195 and 210, and in 1999 he was described by some journalists as "the smartest man in America" or "in the world".
