#RealAITechnologyStack: why #bigtechai is a big hoax
https://utkarsh.com/current-affairs/ai-supercomputer-stargate-100-billion-plan-of-microsoft-and-chatgpt

"The truth, the whole truth and nothing but the truth."

First things first.

Let me share what we need to know and learn about the world to create the most disruptive general technology ever: artificial intelligence, or machine intelligence and learning:

Philosophical Sciences: Metaphysics or Ontology, Epistemology, Logic, Ethics...

Natural Sciences: Physics, Chemistry, Biology...

Formal Sciences: Mathematics, Statistics, Information science...

Cognitive Sciences: Psychology, Neuroscience, Linguistics...

Social Sciences: Economics, Politics, Linguistics, Anthropology...

Engineering Sciences: Computing Science and Engineering...

If we miss something, then start again; for your revolutionary invention, without the complete world knowledge network hypergraphed above, will hardly be intelligent, interactive, autonomous, or real and true.

We proceed with our argument for why Big Tech AI is the giant hoax of all time, one that should be pursued legally as global AI Big Tech class actions, alongside the less effective antitrust measures, congressional hearings, and the Statement on AI Risk:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Hoax means "relatively complex and large-scale fabrications" and includes deceptions that go beyond the merely playful and "cause material loss or harm to the victim."

There are many types of hoax: academic, art-world, documentary, historical, religious, computer-virus, UFO, urban-legend, April Fools' Day, Internet/social media, paleoanthropological, or fake news for propaganda and disinformation.

But the #bigtechfakeaihoax is not simply a multi-trillion-dollar hoax or fake news for propaganda and disinformation. It poses an existential risk threatening future human existence, intentionally or unintentionally.

As warned in Pause Giant AI Experiments: An Open Letter, which has 33,708 signatures:

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs".

"Whereas the promoters of frauds, fakes, and scams devise them so that they will withstand the highest degree of scrutiny customary in the affair, hoaxers are confident, justifiably or not, that their representations will receive no scrutiny at all".

In technical language, we try to make intelligible why the "AI chips" fall under the hoax definition: "a widely publicized falsehood so fashioned as to invite reflexive, unthinking acceptance by the greatest number of people of the most varied social identities and of the highest possible social pretensions to gull its victims into putting up the highest possible social currency in support of the hoax".

A new example of the #bigtechfakeaihoax is Microsoft and OpenAI's plot for the $100 billion Stargate AI supercomputer.

The FTC has launched an inquiry into generative AI investments and partnerships, issuing 6(b) orders to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.

Introduction

AI technology is a global agenda today. All the talk is about its impact and effects, policies and regulations, challenges and opportunities.

Meanwhile, we have no notion of what it is all about: its real nature and mechanisms, and how machine intelligence and learning and its AI robotics must be designed and developed, deployed and distributed.

Still, we don't have a Real AI Technology Stack (RAITS): the collection of technologies, frameworks, libraries, tools, techniques, data and models needed to build and deploy true AI systems and applications.

There is only a little eclectic research on the AI stack as a conceptual model representing the different layers of technologies and components that make up an AI system.

But there are many suggestions on the building blocks of generative AI:

https://shriftman.medium.com/the-building-blocks-of-generative-ai-a75350466a2f

https://www.orioninc.com/blog/understanding-generative-ai-a-tech-stack-breakdown/

Real AI Technology Stack (RAITS): from hardware to worldware

We propose the RAITS as a five-level generalizing framework for generative and predictive AI stacks:

Hardware (Organism, Body, Genetics; AI Compute, Computing Infrastructure, AI Chips, Processors, Cloud, Networks, Platforms)

Software (Heredity; Coding, AI Algorithms, Programs/API, Databases, Libraries)

Brainware (Brain; Data, Neural Networks, foundational models, LLMs, Statistics AI/ML/DL)

Mindware (Mentality; the mental knowledge and procedures to solve problems or make decisions; AI models, logical rules, expert systems, symbolic, rules-based AI)

Worldware (Reality, the World, Environments; the World's data/information/knowledge; the Internet/web data, big data sets, scientific knowledge, the world modeling and reality simulation platform, the world processors for knowledge and learning and inference and interaction; Real-World, Interactive AI, RAI Cloud Platforms, Intelligent Robotics, RAI Supercomputers, Quantum/Physical AI, etc.)
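The five levels above can be rendered as a simple ordered structure. A minimal sketch in Python (the level names follow the list above; the component lists are abbreviated, illustrative selections, not an exhaustive catalogue):

```python
# The proposed five-level Real AI Technology Stack, lowest level first.
# Level names follow the article; component lists are abbreviated examples.
RAITS = [
    ("Hardware",  ["AI compute", "AI chips", "cloud", "networks", "platforms"]),
    ("Software",  ["coding", "AI algorithms", "APIs", "databases", "libraries"]),
    ("Brainware", ["data", "neural networks", "foundation models", "LLMs"]),
    ("Mindware",  ["AI models", "logical rules", "expert systems", "symbolic AI"]),
    ("Worldware", ["world knowledge", "world modeling", "reality simulation"]),
]

def stack_summary(stack):
    """Render one numbered line per level, from hardware up to worldware."""
    return [f"{i}. {name}: {', '.join(parts)}"
            for i, (name, parts) in enumerate(stack, start=1)]

for line in stack_summary(RAITS):
    print(line)
```

The ordering matters: each level is meant to depend on the ones below it, with worldware, the world model itself, at the top.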

The Real AI technology stack is founded on the AI infrastructure stack: its hardware and processors, such as CPUs, GPUs, TPUs, etc.

"AI chips" are sold as specialized computing hardware to develop and deploy specialized AI systems, with the leading fake AI chipmakers being:

Nvidia, renowned for its powerful data-center GPUs like the A100 and H100 and the launch of its most powerful processor design, named Blackwell, which is promised to revolutionize the AI landscape.

https://www.nvidia.com/en-us/ai-data-science/

Google (Alphabet), which focuses on purpose-built AI accelerators.

Advanced Micro Devices (AMD)

Amazon (AWS)

Intel.

From the Fake AI Processors to the Real AI World Processors

A central main processor is commonly defined as a digital circuit which performs operations on some external data source, usually memory or some other data stream, taking the form of a microprocessor implemented on a single metal–oxide–semiconductor (MOS) integrated circuit chip.

It could be supplemented with a coprocessor, performing floating point arithmetic, graphics, signal processing, string processing, cryptography, or I/O interfacing with peripheral devices. Some application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors.

A central processing unit (CPU), also called a central processor, main processor or just processor, is the electronic circuitry that executes instructions comprising a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program.
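The definition above, fetching instructions and performing arithmetic, logic, control and I/O operations, can be illustrated with a toy fetch-decode-execute loop (a didactic sketch with an invented three-opcode instruction set, not any real CPU):

```python
# A toy von Neumann machine: the "CPU" fetches each instruction from
# program memory, decodes its opcode, and executes it against a single
# accumulator register, advancing the program counter as control flow.
def run(program):
    """Execute a list of (opcode, operand) pairs; return the accumulator."""
    acc = 0          # accumulator register
    pc = 0           # program counter
    while pc < len(program):
        op, arg = program[pc]          # fetch + decode
        if op == "LOAD":
            acc = arg                  # load an immediate value
        elif op == "ADD":
            acc += arg                 # arithmetic
        elif op == "MUL":
            acc *= arg
        else:
            raise ValueError(f"unknown opcode {op!r}")
        pc += 1                        # control: step to next instruction
    return acc

# Compute (2 + 3) * 4
result = run([("LOAD", 2), ("ADD", 3), ("MUL", 4)])   # -> 20
```

Everything a CPU does, however fast, reduces to cycles of this shape: fetch, decode, execute, repeat.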

Microprocessor chips with multiple CPUs are multi-core processors.

Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. Virtual CPUs are an abstraction of dynamically aggregated computational resources.

We know of many processing units, as listed below.

Processors Taxonomy: from CPU to WPU

Central Processing Unit (CPU): if designed according to the von Neumann architecture, it contains at least a control unit (CU), an arithmetic logic unit (ALU) and processor registers.

Graphics Processing Unit (GPU)

Sound chips and sound cards

Vision Processing Unit (VPU)

Tensor Processing Unit (TPU)

Neural Processing Unit (NPU)

Physics Processing Unit (PPU)

Digital Signal Processor (DSP)

Image Signal Processor (ISP)

Synergistic Processing Element or Unit (SPE or SPU) in the cell microprocessor

Field-Programmable Gate Array (FPGA)

Quantum Processing Unit (QPU)

World Processing Unit (WPU)

A Graphics Processing Unit (GPU) enables you to run high-definition graphics on your computer. A GPU has hundreds of cores aligned in a particular way, forming a single hardware unit. It has thousands of concurrent hardware threads, utilized for the data-parallel and computationally intensive portions of an algorithm. Data-parallel algorithms are well suited to such devices because the hardware can be classified as SIMT (Single Instruction, Multiple Threads). GPUs outperform CPUs in terms of GFLOPS.
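The SIMT idea, one shared instruction applied concurrently across many data elements, can be mimicked in plain Python (a sketch only: a real GPU runs thousands of hardware threads in lockstep, while here a small thread pool stands in):

```python
from concurrent.futures import ThreadPoolExecutor

def saxpy(a, x, y):
    """Apply the single instruction a*x[i] + y[i] to many data elements.
    Each pool worker plays the role of a GPU hardware thread running
    the same operation on its own element (the SIMT model)."""
    same_instruction = lambda pair: a * pair[0] + pair[1]
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(same_instruction, zip(x, y)))

z = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])   # -> [12.0, 24.0, 36.0]
```

SAXPY (scaled vector addition) is the classic data-parallel kernel: there is no dependence between elements, so throughput scales with the number of threads.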

From Fake AI Accelerators ASIC to Real AI Accelerators

The TPU and NPU fall under the Narrow/Weak AI/ML/DL accelerator class: specialized hardware accelerators or computer systems designed to accelerate special AI/ML applications, including artificial neural networks and machine vision.

Big-Tech companies such as Google, Amazon, Apple, Facebook, AMD and Samsung are all designing their own Fake AI ASICs.

Typical applications include algorithms for training and inference in computing devices such as self-driving cars, machine vision, NLP, robotics, the Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability, with a typical narrow-AI integrated circuit chip containing billions of MOSFET transistors.
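The low-precision arithmetic these accelerators favor can be illustrated with a minimal symmetric int8 quantization routine (a simplified sketch of the general technique, not the scheme of any particular chip):

```python
def quantize_int8(values):
    """Symmetric linear quantization of floats to signed 8-bit integers.
    AI accelerators trade precision for throughput by storing weights and
    activations in low-precision formats like int8 instead of float32."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0   # guard all-zero input
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map the 8-bit integers back to approximate floats."""
    return [v * scale for v in q]

q, scale = quantize_int8([-1.0, 0.0, 0.5, 1.0])
# q holds small integers in [-128, 127]; dequantize(q, scale)
# recovers the inputs to within one quantization step.
```

Eight-bit values need a quarter of the memory bandwidth of float32 and far simpler multiplier circuits, which is exactly why these chips quote such large operations-per-second figures.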

Focused on training and inference of deep neural networks, TensorFlow is a symbolic math library based on dataflow and differentiable programming.

The latter uses automatic differentiation (AD), also called algorithmic differentiation, computational differentiation, or auto-diff, with gradient-based optimization, working by constructing a graph containing the control flow and data structures of the program.
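Automatic differentiation itself can be shown in a few lines using forward-mode "dual numbers" (a minimal sketch of the technique; frameworks like TensorFlow actually use reverse-mode AD over a full computation graph):

```python
class Dual:
    """A dual number (value, derivative): arithmetic on Dual objects
    propagates exact derivatives automatically via the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x          # analytically, f'(x) = 2x + 3

y = f(Dual(2.0, 1.0))             # seed dx/dx = 1
# y.val == 10.0 (the function value), y.dot == 7.0 (the derivative at x = 2)
```

No symbolic algebra and no finite differences are involved: the derivative falls out of ordinary arithmetic on the augmented numbers.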

Again, datastream/dataflow programming is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture.
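Dataflow programming, in which a program is a directed graph of data flowing between operations, can be sketched with a tiny graph evaluator (illustrative only; real dataflow engines add scheduling, device placement and differentiation):

```python
import operator

def evaluate(graph, node, feeds):
    """Recursively evaluate `node` in a dataflow graph.
    graph maps a node name to (op_function, [input_node_names]);
    feeds maps source node names to concrete input values."""
    if node in feeds:                 # a data source: a value flows in
        return feeds[node]
    op, inputs = graph[node]          # an operation node
    return op(*(evaluate(graph, i, feeds) for i in inputs))

# The directed graph for (a + b) * c: data flows from a, b, c
# through "sum" into "prod".
graph = {
    "sum":  (operator.add, ["a", "b"]),
    "prod": (operator.mul, ["sum", "c"]),
}
out = evaluate(graph, "prod", {"a": 1, "b": 2, "c": 4})   # (1 + 2) * 4 -> 12
```

The graph is data, not code, which is what lets a framework inspect it, optimize it, and map it onto accelerator hardware before any value flows through.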

Everything revolves around static or dynamic graphs, requiring the proper programming languages, such as C++, Python, R, or Julia, and ML libraries, such as TensorFlow or PyTorch.

What AI computing is still missing is a Causal Processing Unit, involving symmetrical causal data graphs, with Causal Engine software simulating real-world phenomena in digital reality.

It is highly likely embedded in the human brain, and it is likewise to be embedded in Real-World AI.

https://futurium.ec.europa.eu/en/european-ai-alliance/posts/trans-ai-meet-scientific-discovery-innovation-and-technology-all-time

The World Processor: the General Intelligence Kernel OSs

Real AI systems run on world processing units (WPUs), gaining their intelligence from a real or general intelligence kernel (GIK/RIK).

The GIK is an intelligent computer program at the core of an intelligent operating system, with complete control over everything in the now-intelligent system, controlling interactions between software and hardware components, such as the CPU, GPU, memory or devices.

It is like the relationship between the Linux kernel and the operating systems built on top of it, along with the application software, embedded systems, digital devices and platforms.

And Linux is deployed on a wide variety of computing systems, such as embedded devices, mobile devices (the Android operating system), personal computers, servers, mainframes, and supercomputers.

There must be a GIK, first as a GIK data model and then as GIK code, if you wish your computing machinery (CPUs, GPUs, APIs, computing/digital/software/cloud platforms, supercomputers or quantum computers) to be really intelligent.

Such an AI is real, as it models the world and simulates reality instead of human intelligence, and human-complete, as it completes, instead of competing with, human minds.

Real AI is to run on the GIK code of the WPUs, embedding the world knowledge and causal hypergraph networks, together with learning and inference algorithms, all to be capable of interacting with the world, humans, machines and its environments in the most effective and sustainable ways.

Conclusion

MILLIONS AND MILLIONS OF PEOPLE, INCLUDING ACADEMICIANS AND POLITICIANS, ARE SELF-DELUDING THAT ARTIFICIAL INTELLIGENCE MIMICS HUMAN INTELLIGENCE, OUR NATURAL PERCEPTION, LEARNING, THINKING AND ACTION.

MACHINE INTELLIGENCE IS ABOUT BUILDING THE WORLD MODELING AI MACHINES TRANSCENDING BUT COMPLETING THE HUMAN MIND.

THE FUTURE AI IS NOT ABOUT THE HUMAN-COMPETING LARGE LANGUAGE AI/ML MODELS, BUT THE HUMAN-COMPLETE WORLD MODELING INTELLIGENT SYSTEMS.

"There is real and true, scientific and objective AI vs. unreal and fake, imitating and subjective AI.

To build true intelligent machines, teach them how to model, simulate and effectively interact with the world, its realities and environments, entities and systems."

Again, "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs".

Resources

Global AI Big Tech class actions

[Generative] AI as a marketing gimmick, big bubble and mass delusion

The Big Tech Seven, Mass Delusion, and Catastrophic AI Risks

AI Bible: why Generative AI bubble is to burst and Interactive AI is to rise...

Computing Alchemic Intelligence: AI Alchemy and Digital Dark Ages

https://www.dhirubhai.net/pulse/ai-genai-ml-dl-llms-gpt-deepfake-technology-big-lie-azamat-abdoullaev-0pi2f/

https://www.dhirubhai.net/pulse/ai-alien-intelligence-computing-machinery-reality-vs-abdoullaev-rrrmf/

https://www.dhirubhai.net/pulse/big-tech-aimlllms-smart-ponzi-pyramid-schemes-bubble-three-azamat-julef/

https://www.dhirubhai.net/pulse/real-ai-critical-technologiescritical-systems-azamat-abdoullaev-dauaf/

Today’s AI is ‘alchemy,’ not science — what that means and why that matters | The AI Beat

Does AI stand for Alchemical Intelligence?

The Age of AI is a Marketing Gimmick — Hype for the Mediocre

SUPPLEMENT 1

How the Fake AI Works: a standard version from SAP

Artificial intelligence (AI) makes it possible for machines "to learn from experience", adjust to new inputs and perform human-like tasks.

Computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data.

MACHINES ARE UNABLE TO LEARN FROM EXPERIENCE, AS WELL AS TO BE TRAINED, HAVING ZERO INTELLIGENCE AND UNDERSTANDING WITHOUT ENCODED CAUSAL WORLD MODELS.

AI is a broad field of study that includes many theories, methods and technologies, as well as the following major subfields:

Machine Learning, Neural Networks, and Deep Learning

AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data.

AI/ML automates analytical model building, using methods from neural networks, statistics, operations research and physics to find hidden insights in data. It uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data.
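What "learning automatically from patterns in the data" boils down to is iterative parameter fitting; a single perceptron trained on the logical AND function is about the smallest worked example (a didactic sketch of the generic technique, not any vendor's production method):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Fit weights w and bias b so that step(w.x + b) matches the labels.
    Each misclassified sample nudges the parameters toward the target."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - out                      # the learning signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Training data: the truth table of logical AND.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
```

There is no understanding anywhere in the loop: only error-driven adjustment of numbers until the outputs match the labels, which is the author's point about "learning" in quotation marks.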

Among the technologies enabling and supporting AI are:

Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings.

Natural language processing (NLP) is the ability of computers to analyze, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.

Graphics processing units are key to AI because they provide the heavy compute power that’s required for iterative processing. Training neural networks requires big data plus compute power.

The Internet of Things generates massive amounts of data from connected devices, most of it unanalyzed. Automating models with AI will allow us to use more of it.

Advanced algorithms are being developed and combined in new ways to analyze more data faster and at multiple levels. This intelligent processing is key to identifying and predicting rare events, understanding complex systems and optimizing unique scenarios.

APIs, or application programming interfaces, are portable packages of code that make it possible to add AI functionality to existing products and software packages. They can add image recognition capabilities to home security systems and Q&A capabilities that describe data, create captions and headlines, or call out interesting patterns and insights in data.

In summary, the goal of AI is to provide software that can reason on input and explain on output. AI will provide human-like interactions with software and offer decision support for specific tasks, but it’s not a replacement for humans – and won’t be anytime soon.

SUPPLEMENT 2

Are the highly marketed Tensor cores from Nvidia, Tensor Processing Units (TPUs) from Google, and other deep learning and machine learning processors just simple matrix-multiplication accelerators?

Let’s unravel the mystery and complexity of processors and so-called AI accelerators, as from Google, Nvidia, etc.

They are just simple matrix-multiplication accelerators. All the rest is commercial propaganda.

Unlike other computational devices that treat scalars or vectors as primitives, Google’s Tensor Processing Unit (TPU) ASIC treats matrices as primitives. The TPU is designed to perform matrix multiplication at a massive scale.

At the core of Google’s TPU, you find something inspired by the heart, not the brain. It’s called a “systolic array,” described in 1982 in “Why Systolic Architectures?”: https://www.eecs.harvard.edu/~htk/publication/1982-kung-why-systolic-architecture.pdf

And this computational device contains 256 x 256 8-bit multiply-add computational units. A grand total of 65,536 processors is capable of 92 trillion operations per second.
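The 92-trillion-operations figure is straightforward arithmetic over the array: 65,536 multiply-add units, each counting as 2 operations (one multiply, one add) per cycle, at a 700 MHz clock (the clock rate is the figure Google published for TPU v1; treat this as a back-of-the-envelope check, not a spec sheet):

```python
# Back-of-the-envelope peak throughput of a 256x256 systolic array
# of 8-bit multiply-add units, assuming the published 700 MHz clock.
units = 256 * 256            # 65,536 MAC units in the array
ops_per_cycle = 2            # each MAC counts as a multiply plus an add
clock_hz = 700e6             # TPU v1 clock rate (assumption, per Google's paper)

peak_ops = units * ops_per_cycle * clock_hz
# peak_ops is about 9.2e13, i.e. roughly 92 trillion operations per second
```

Which is exactly the author's point: the headline number is a count of simultaneous 8-bit multiply-adds, nothing more exotic.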

It uses DDR3 memory with only 30 GB/s of bandwidth. Contrast that with an Nvidia Titan X with GDDR5X hitting transfer speeds of 480 GB/s.

Whatever, it has nothing to do with real AI hardware.

