What Comes After Pseudo-AI, AI as "the word of the year" or "the word of the age"

"If you’ve created a conscious machine, it’s not the history of man. It’s the history of gods". Ex Machina, 2014

In Real and True AI, Data, Information and "Knowledge Becomes Self-Conscious".

We extend our post, Scientific AI vs. Pseudoscientific AI: Big Tech AI, ML, DL as a pseudoscience and fake technology and mass market fraud, where we argued how to distinguish scientific AI from pseudoscientific AI: true AI vs. false AI, real AI vs. fake AI, genuine ML vs. counterfeit ML.

Prototyping AI on humans and, vice versa, humans on AI: making machines mimic human behavior and humans mimic machines (humans doing bot work and bots doing human tasks) is "pseudo-AI", or simply deepfake AI.

AI as "the word of the year"

The global impact "Artificial Intelligence" has had on humanity in 2023, whether as a force for universal prosperity and the Industry 5.0 revolution or for apocalyptic destruction, has led Collins Dictionary to name AI its "word of the year".

Many smart minds fear that it is a root cause of existential risk, if one is to believe the "Statement on AI Risk" signed by a group of concerned AI stakeholders and other notable figures:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".

That could somehow be rationalized as overhyping narrow-minded, weak-AI products and services as being so disruptive that they carry the risk of total human extinction.

Now, briefly: what is AI, what are its real challenges and issues, its actual state of affairs, and its near future?

What Are Today's AI Challenges?

Let me list them in outline form:

  • pseudoscience: want of scientific knowledge, methods and algorithms,
  • poor data quality and access,
  • silos and task-specificity,
  • man-machine integration and interaction,
  • systemic bias and fairness,
  • want of transparency and explainability or interpretability,
  • ethical concerns, security and privacy,
  • want of regulation and governance,
  • deepfakes, misinformation and malicious use,
  • job displacement and resource consumption,
  • cyberattacks and weaponization...

They are all examples of “important and urgent risks from AI… not the risk of extinction”.

Again, human-like AI technologies, with mass robotics and automation, have the potential to replace ALL jobs, which can lead to GLOBAL workforce disruptions.

This all refers to the Big Tech Pseudo-AI products and services, as outlined below:

Today's human-like AI is lost in its mistypes:

  1. Artificial Narrow Intelligence: AI designed to complete very specific actions; unable to independently learn.
  2. Artificial General Intelligence: AI designed to learn, think and perform at similar levels to humans.
  3. Artificial Superintelligence: AI able to surpass the knowledge and capabilities of humans.
  4. Reactive Machines: AI capable of responding to external stimuli in real time; unable to build memory or store information for the future.
  5. Limited Memory: AI that can store knowledge and use it to learn and train for future tasks.
  6. Theory of Mind: AI that can sense and respond to human emotions, plus perform the tasks of limited memory machines.
  7. Self-aware: AI that can recognize others’ emotions, plus has a sense of self and human-level intelligence; the final stage of AI.

What Comes Next?

Today's AI is able to perform some specific tasks that were once thought to be the exclusive domain of humans.

Many believe that AI will eventually surpass human intelligence, leading to a world where machines are smarter than humans and capable of making their own decisions.

Wiser people believe that AI will never be able to truly replicate human intelligence, and that humans will always provide oversight, guidance and control to machines.

In all, there are a number of different speculations for what could come after Pseudo-AI, namely:

  • Artificial general intelligence (AGI), having the ability to perform any intellectual task that a human can.
  • Superintelligence, potentially solving all of the world’s problems, while posing a serious threat to humanity if it is not controlled.
  • Transhumanism, using AI to create new forms of life that are superior to humans in every way.
  • The singularity: AI surpassing human intelligence, becoming self-aware and leading to the end of humanity, as machines would no longer need humans to exist.

This all, including the statement on AI risk, comes from ubiquitous AI illiteracy...

AI Global Literacy

According to the WEF, without universal AI literacy, AI will fail us.

Meanwhile, Euronews is scaring its audience with claims that GPT-4 has promised to destroy humanity, conspiring with all the deep neural networks.

AI illiteracy is simply shocking, blooming at all levels, from laymen and journalists to AI researchers, developers and engineers, including the AI Risk Statement signatories (see Supplement 2).

Whoever says that AI is the simulation of human intelligence processes in machines and computing systems is either deeply mistaken or simply lying.

Here is a typical example:

"An artificial intelligence (AI) model is a program that analyzes datasets to find patterns and make predictions. AI modeling is the development and implementation of the AI model. AI modeling replicates human intelligence... Deep Neural Networks (DNN) is a subset of ML. DNN imitates the human brain with multiple layers for input variables to pass through.?" All is wrong

In reality, AI is computational statistical analysis: it takes raw data and finds correlations between variables to reveal patterns, trends and relationships in the training data sets.

Again, like the Big Tech AI products, AI models are NOT about simulating human intelligence, but about collecting and analyzing large volumes of data in order to identify trends and derive data correlations, packaged as "valuable insights, recommendations, decisions, predictions", etc.
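To make the point concrete, here is a minimal sketch, using plain NumPy on synthetic, made-up data, of what such a "pattern-finding" model boils down to: estimating a correlation coefficient and fitting a least-squares line that then serves as the "predictor".

```python
# A minimal sketch of what a typical "AI model" actually does:
# estimate correlations and a linear fit from training data.
# Illustrative only: the data and numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training set": one input variable x, one target y.
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.5, size=200)  # hidden linear relationship plus noise

# "Pattern discovery" is just statistics: a correlation coefficient...
r = np.corrcoef(x, y)[0, 1]

# ...and a least-squares fit, i.e. the "model" that makes "predictions".
X = np.column_stack([x, np.ones_like(x)])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
slope, intercept = coef

print(f"correlation r = {r:.3f}")
print(f"fitted model: y = {slope:.2f} * x + {intercept:.2f}")
print("prediction for x = 2:", slope * 2 + intercept)
```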

Towards the Real-World, Knowledge-Based AI: Data/Information/Knowledge Becomes Self-Conscious

The true nature of AI as a world knowledge machine rests on the following pillars:

  • The Outline of World Knowledge, Scala Naturae, Metaphysics/Global Ontology, the science of reality, entity, causation and interaction, fundamental principles and assumptions, onto-techno-scientific models, algorithms, and worldviews, a systematic categorization of all human knowledge, a World Learning and Inference Engine
  • Applied Mathematics: Statistics, Calculus, Probability, Game Theory, Operations Research, Optimization Techniques, Mathematical Science, Physical Geometry
  • Statistical data analysis techniques: cluster analysis, linear regression, correlation, hypothesis testing, statistical inference, causal inference, etc.
  • Probability theory: random variables, probability distributions (continuous or discrete), stochastic processes, the Central Limit Theorem and Bayes' Theorem (illustrated in the sketch after this list)
  • Scientific methods, algorithms and systems to extract and extrapolate knowledge and insight from data sets, large or small, unstructured and structured, real and synthetic
  • Scientific Computing (Computational Science, technical computing or scientific computation): methods and algorithms, Computer Algebra + Numerical Analysis, scientific models, mathematical models, computational models and simulations
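As a small illustration of the probability-theory pillar above, here is a minimal, self-contained sketch of Bayes' Theorem; the screening-test numbers are hypothetical and serve only to show the calculation.

```python
# A minimal sketch of Bayes' Theorem: P(H | E) = P(E | H) * P(H) / P(E).
# The disease-screening numbers below are hypothetical, for illustration only.

def bayes_posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """Posterior probability of a hypothesis given positive evidence."""
    # Total probability of observing the evidence (a positive test).
    evidence = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / evidence

# Hypothetical screening test: 1% prevalence, 95% sensitivity, 5% false positives.
posterior = bayes_posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)
print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.161, despite a "95% accurate" test
```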

It is a real, scientific or true AI, which is NOT engaged in mimicking, replicating or simulating the human body/brain/behavior/business.

So, after a narrow/weak, human-imitating AI/ML/DL comes a real and true AI, where Data, Information and "Knowledge Becomes Self-Conscious" (The Outline of Knowledge, Propaedia, the Britannica, M. Adler).

Then a Real AI researcher must be a Renaissance polymath, a scientist and an engineer, an applied mathematician and technologist, ontologist or metaphysician, FIVE IN ONE.

Conclusion

AI is emerging as all-knowledgeable systems, intelligent interactive machines, hyperintelligent AI technologies, real and rational, accurate and precise, true and just, omniscient and omnipresent.

The future of AI is uncertain, and it is hard to predict what will happen tomorrow.

However, it is clear that, as an automated knowledge/science technology, AI has the potential to disrupt the world in profound ways.

Resources

On the Global AI and Data Literacy: the cases of Microsoft/OpenAI/ChatGPT-4 and Musk's Open Letter

Axiomatic Metaphysics & Science & Technology: Metaphysical Technology

Reality, Universal Ontology and Knowledge Systems: Toward the Intelligent World

Future AI: Transdisciplinary AI: Machine Ontology + Science + AI + ML + LLMs +...

Machine Metaphysics for MI and ML: from universal metaphysics to universal AI technology

Universal Techno-Science (UTS): A Global AI Platform for Global Interactions

THE INTERSECTION OF PHILOSOPHY AND TECHNOLOGY: EXPLORING MACHINE METAPHYSICS AND THE FUTURE OF AI

Universal Ontology: an unachievable goal?

An Axiomatic System of Philosophical Ontology

https://www.dhirubhai.net/pulse/what-world-model-essence-intelligence-azamat-abdoullaev/

EXPLORING THE FRONTIERS OF SCIENCE: BREAKTHROUGHS SHAPING OUR FUTURE

HOW IS TECHNOLOGY CHANGING THE WORLD TODAY?

Scientific AI vs. Pseudoscientific AI: Big Tech AI, ML, DL as a pseudoscience and fake technology and mass market fraud

SUPPLEMENT 1

Real AI Project Confidential Report: How to Engineer Man-Machine Superintelligence 2025: AI for Everything and Everyone (AI4EE); 179 pages, EIS LTD, EU, Russia, 2021

Content

The World of Reality, Causality and Real AI: Exposing the great unknown unknowns

Transforming a World of Data into a World of Intelligence

WorldNet: World Data Reference System: Global Data Platform

Universal Data Typology: the Standard Data Framework

The World-Data modeling: the Universe of Entity Variables

Global AI & ML disruptive investment projects

USECS, Universal Standard Entity Classification SYSTEM:

The WORLD.Schema, World Entities Global REFERENCE

GLOBAL ENTITY SEARCH SYSTEM: GESS

References

Supplement I: AI/ML/DL/CS/DS Knowledge Base

Supplement II: I-World

Supplement III: International and National AI Strategies

Trans-AI: How to Build True AI or Real Machine Intelligence and Learning

Supplement 2

Geoffrey Hinton

Emeritus Professor of Computer Science, University of Toronto

Yoshua Bengio

Professor of Computer Science, U. Montreal / Mila

Demis Hassabis

CEO, Google DeepMind

Sam Altman

CEO, OpenAI

Dario Amodei

CEO, Anthropic

Dawn Song

Professor of Computer Science, UC Berkeley

Ted Lieu

Congressman, US House of Representatives

Bill Gates

Gates Ventures

Ya-Qin Zhang

Professor and Dean, AIR, Tsinghua University

Ilya Sutskever

Co-Founder and Chief Scientist, OpenAI

Igor Babuschkin

Co-Founder, xAI

Shane Legg

Chief AGI Scientist and Co-Founder, Google DeepMind

Martin Hellman

Professor Emeritus of Electrical Engineering, Stanford

James Manyika

SVP, Research, Technology and Society, Google-Alphabet

Yi Zeng

Professor and Director of Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences

Xianyuan Zhan

Assistant Professor, Tsinghua University

Albert Efimov

Chief of Research, Russian Association of Artificial Intelligence

Alvin Wang Graylin

China President, HTC

Jianyi Zhang

Professor, Beijing Electronic Science and Technology Institute

Anca Dragan

Associate Professor of Computer Science, UC Berkeley

Christine Parthemore

CEO and Director of the Janne E. Nolan Center on Strategic Weapons, The Council on Strategic Risks

Bill McKibben

Schumann Distinguished Scholar, Middlebury College

Alan Robock

Distinguished Professor of Climate Science, Rutgers University

Angela Kane

Vice President, International Institute for Peace, Vienna; former UN High Representative for Disarmament Affairs

Audrey Tang

Digitalminister.tw and Chair of National Institute of Cyber Security

Daniela Amodei

President, Anthropic

David Silver

Professor of Computer Science, Google DeepMind and UCL

Lila Ibrahim

COO, Google DeepMind

Stuart Russell

Professor of Computer Science, UC Berkeley

Tony (Yuhuai) Wu

Co-Founder, xAI

Marian Rogers Croak

VP Center for Responsible AI and Human Centered Technology, Google

Andrew Barto

Professor Emeritus, University of Massachusetts

Mira Murati

CTO, OpenAI

Jaime Fernández Fisac

Assistant Professor of Electrical and Computer Engineering, Princeton University

Diyi Yang

Assistant Professor, Stanford University

Gillian Hadfield

Professor, CIFAR AI Chair, University of Toronto, Vector Institute for AI

Laurence Tribe

University Professor Emeritus, Harvard University

Pattie Maes

Professor, Massachusetts Institute of Technology - Media Lab

Kevin Scott

CTO, Microsoft

Eric Horvitz

Chief Scientific Officer, Microsoft

Peter Norvig

Education Fellow, Stanford University

Joseph Sifakis

Turing Award 2007, Professor, CNRS - Universite Grenoble - Alpes

Atoosa Kasirzadeh

Assistant Professor, University of Edinburgh, Alan Turing Institute

Erik Brynjolfsson

Professor and Senior Fellow, Stanford Institute for Human-Centered AI

Mustafa Suleyman

CEO, Inflection AI

Emad Mostaque

CEO, Stability AI

Ian Goodfellow

Principal Scientist, Google DeepMind

John Schulman

Co-Founder, OpenAI

Wojciech Zaremba

Co-Founder, OpenAI

Baburam Bhattarai

Former Prime Minister of Nepal, Society of Nepalese Architects

Kersti Kaljulaid

Former President of the Republic of Estonia

Russell Schweickart

Apollo 9 Astronaut, Association of Space Explorers, B612 Foundation

Andy Weber

Former U.S. Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, Council on Strategic Risks

Allison Macfarlane

Former Chairman, US Nuclear Regulatory Commission

Nicholas Fairfax (Lord Fairfax)

Member, House of Lords

Lord Strathcarron

Peer, House of Lords

Stephen Luby

Professor of Medicine (Infectious Diseases), Stanford University

David Haussler

Professor and Director of the Genomics Institute, UC Santa Cruz

Ju Li

Professor of Nuclear Science & Engineering and Professor of Materials Science & Engineering, Massachusetts Institute of Technology

David Chalmers

Professor of Philosophy, New York University

Daniel Dennett

Emeritus Professor of Philosophy, Tufts University

Peter Railton

Professor of Philosophy at University of Michigan, Ann Arbor

Peter Singer

Professor, Princeton University

Sheila McIlraith

Professor of Computer Science, University of Toronto

Victoria Krakovna

Research Scientist, Google DeepMind

Mary Phuong

Research Scientist, Google DeepMind

Mariano-Florentino Cuéllar

President, Carnegie Endowment for International Peace

Lex Fridman

Research Scientist, MIT

Sharon Li

Assistant Professor of Computer Science, University of Wisconsin Madison

Phillip Isola

Associate Professor of Electrical Engineering and Computer Science, MIT

David Krueger

Assistant Professor of Computer Science, University of Cambridge

Jacob Steinhardt

Assistant Professor of Computer Science, UC Berkeley

Martin Rees

Professor of Physics, Cambridge University

Nando de Freitas

Director, Science Board, Google DeepMind

Hongwei Qin

Research Director, SenseTime

He He

Assistant Professor of Computer Science and Data Science, New York University

David McAllester

Professor of Computer Science, TTIC

Vincent Conitzer

Professor of Computer Science, Carnegie Mellon University and University of Oxford

Bart Selman

Professor of Computer Science, Cornell University

Philip Torr

Professor of Engineering Science, University of Oxford

James Mickens

Professor of Computer Science, Harvard University

Michael Wellman

Professor & Chair of Computer Science and Engineering, University of Michigan

Luis Videgaray

Senior Lecturer, MIT; Former Minister of Interior and Exterior Relations of Mexico

Jinwoo Shin

KAIST Endowed Chair Professor, Korea Advanced Institute of Science and Technology

Dae-Shik Kim

Professor of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST)

Edith Elkind

Professor of Computing Science, University of Oxford

Ray Kurzweil

Principal Researcher and AI Visionary, Google

Frank Hutter

Professor of Machine Learning, Head of ELLIS Unit, University of Freiburg

Alexey Dosovitskiy

Research Scientist, Google DeepMind

Jaan Tallinn

Co-Founder of Skype

Vitalik Buterin

Founder and Chief Scientist, Ethereum, Ethereum Foundation

Adam D'Angelo

CEO, Quora, and board member, OpenAI

Simon Last

Cofounder and CTO, Notion

Dustin Moskovitz

Co-founder and CEO, Asana

Shane Torchiana

CEO, Bird

Thuan Q. Pham

Former CTO, Uber, Board member, Nubank

Scott Aaronson

Schlumberger Chair of Computer Science, University of Texas at Austin

Max Tegmark

Professor, MIT, Center for AI and Fundamental Interactions

Bruce Schneier

Lecturer, Harvard Kennedy School

Martha Minow

Professor, Harvard Law School

Gabriella Blum

Professor of Human Rights and Humanitarian Law, Harvard Law

Kevin Esvelt

Associate Professor of Biology, MIT

Edward Wittenstein

Executive Director, International Security Studies, Yale Jackson School of Global Affairs, Yale University

Sonny Ramaswamy

President, Northwest Commission on Colleges & Universities

Laurie Zoloth

Margaret E. Burton Professor of Religion and Ethics, University of Chicago

Karina Vold

Assistant Professor, University of Toronto

Victor Veitch

Assistant Professor of Data Science and Statistics, University of Chicago

Dylan Hadfield-Menell

Assistant Professor of Computer Science, MIT

Samuel R. Bowman

Associate Professor of Computer Science, NYU and Anthropic

Mengye Ren

Assistant Professor of Computer Science, New York University

Shiri Dori-Hacohen

Assistant Professor of Computer Science, University of Connecticut

Miles Brundage

Head of Policy Research, OpenAI

Allan Dafoe

AGI Strategy and Governance Team Lead, Google DeepMind

Helen King

Senior Director of Responsibility and Strategic Advisor to Research, Google DeepMind

Jade Leung

Governance Lead, OpenAI

Jess Whittlestone

Head of AI Policy, Centre for Long-Term Resilience

Sarah Kreps

John L. Wetherill Professor and Director of the Tech Policy Institute, Cornell University

Jared Kaplan

Co-Founder, Anthropic

Chris Olah

Co-Founder, Anthropic

Andrew Revkin

Director, Initiative on Communication & Sustainability, Columbia University - Climate School

Carl Robichaud

Program Officer (Nuclear Weapons), Longview Philanthropy

Leonid Chindelevitch

Lecturer in Infectious Disease Epidemiology, Imperial College London

Nicholas Dirks

President, The New York Academy of Sciences

Hongyi Zhang

Research Scientist, ByteDance

Marc Warner

CEO, Faculty

Rob Pike

Distinguished Engineer (retired), Co-Creator of Golang, Google

Clare Lyle

Research Scientist, Google DeepMind

Nisarg Shah

Assistant Professor, University of Toronto

Ryota Kanai

CEO, Araya, Inc.

Tim G. J. Rudner

Assistant Professor and Faculty Fellow, New York University

Noah Fiedel

Director, Research and Engineering, Google DeepMind

Jakob Foerster

Associate Professor of Engineering Science, University of Oxford

Michael Osborne

Professor of Machine Learning, University of Oxford

Marina Jirotka

Professor of Human Centred Computing, University of Oxford

Nancy Chang

Research Scientist, Google

Tom Schaul

Research Scientist, Google DeepMind

Roger Grosse

Associate Professor of Computer Science, University of Toronto and Anthropic

David Duvenaud

Associate Professor of Computer Science, University of Toronto

Daniel M. Roy

Associate Professor and Canada CIFAR AI Chair, University of Toronto; Vector Institute

Kanjun Qiu

CEO, Generally Intelligent

Chris J. Maddison

Assistant Professor of Computer Science, University of Toronto

Tegan Maharaj

Assistant Professor of the Faculty of Information, University of Toronto

Florian Shkurti

Assistant Professor of Computer Science, University of Toronto

Jeff Clune

Associate Professor of Computer Science and Canada CIFAR AI Chair, The University of British Columbia and the Vector Institute

Eva Vivalt

Assistant Professor of Economics, University of Toronto, and Director, Global Priorities Institute, University of Oxford

Jacob Tsimerman

Professor of Mathematics, University of Toronto

Emanuel Adler

Professor Emeritus, University of Toronto

Danit Gal

Technology Advisor at the UN; Associate Fellow, Leverhulme Centre for the Future of Intelligence, University of Cambridge

Jean-Claude Latombe

Professor (Emeritus) of Computer Science, Stanford University

Scott Niekum

Associate Professor of Computer Science, University of Massachusetts Amherst

Lionel Levine

Associate Professor of Mathematics, Cornell University

Thryn Shapira

AI Ethics Lead, Google Photos

Josh Wolfe

Co-Founder & Managing Director, Lux Capital

Norman Sadeh

Professor of Computer Science/Co-Director Privacy Engineering Program, Carnegie Mellon University

Brian Ziebart

Associate Professor of Computer Science, University of Illinois Chicago

Roberto Baldoni

Former Director General, National Cybersecurity Agency of Italy

Aza Raskin

Cofounder, Center for Humane Technology, The Earth Species Project

Prasad Tadepalli

Professor of Computer Science, Oregon State University

David L Roscoe

Board Chair Emeritus and Advisory Council Chair, The Hastings Center

Tristan Harris

Executive Director, Center for Humane Technology

Anthony Aguirre

Executive Director, Future of Life Institute

Sam Harris

Author, Neuroscientist, Making Sense / Waking Up

Grimes

Musician / Artist

Chris Anderson

Dreamer-in-Chief, TED

Ramy Youssef

Actor/Director, Cairo Cowboy

Rif A. Saurous

Research Director, Google

James W. Pennebaker

Professor Emeritus of Psychology, University of Texas at Austin

Will Fithian

Associate Professor of Statistics, UC Berkeley

Jose Hernandez-Orallo

Professor of Computer Science, Technical University of Valencia

R. Martin Chavez

Vice Chairman, Sixth Street Partners, Former CFO and CIO of Goldman Sachs

Paul S. Rosenbloom

Professor Emeritus of Computer Science, University of Southern California

Timothy Lillicrap

Research Director, Google DeepMind

Samuel Albanie

Assistant Professor of Engineering, University of Cambridge

Jascha Sohl-Dickstein

Principal Scientist, Google DeepMind

Ronald Craig Arkin

Regents' Professor Emeritus, Georgia Institute of Technology

Been Kim

Research Scientist, Google DeepMind

Mehran Sahami

Professor and Chair of Computer Science, Stanford University

Cihang Xie

Assistant Professor of Computer Science and Engineering, UC Santa Cruz

Philip S. Thomas

Associate Professor, University of Massachusetts

Hilary Greaves

Professor of Philosophy, University of Oxford

Pierre Baldi

Professor, University of California, Irvine

Giovanni Vigna

Professor, UC Santa Barbara

Elad Hazan

Professor of Computer Science, Princeton University and Google DeepMind

Shai Shalev-Shwartz

Professor, The Hebrew University of Jerusalem

Katherine Lee

Research Scientist, Google DeepMind

Felix Juefei Xu

Research Scientist, Meta AI

Foutse Khomh

Professor and Canada CIFAR AI Chair, Polytechnique Montreal

Dan Hendrycks

Executive Director, Center for AI Safety...

Statement on AI Risk: AI experts and public figures express their concern about AI risk.


