The world's smartest supercomputer vs. the world's fastest supercomputer


Introduction: The Global Weak/Narrow AI Supercomputer Race

In the age of Big Data, Information and Knowledge, we need sustainable supersmart computing machinery rather than resource-hungry superfast exa-brain-scale supercomputers, such as Frontier at the Oak Ridge Leadership Computing Facility


or the Chinese Sunway supercomputer running a brain-scale AI model.

The next level of computing performance is not exascale systems but computing superintelligence. Solving calculations five times faster than today's top supercomputers, exceeding a quintillion (10^18) calculations per second, will hardly enable scientists to develop new technologies for energy, medicine, and materials.

The newest-generation Chinese Sunway machine reportedly exceeds the US Frontier, which had been named the world's most powerful supercomputer just weeks earlier. Its speed allowed the Chinese team to train an AI model, called "bagualu" (meaning "alchemist's pot"), with 174 trillion parameters, rivalling the number of synapses in the human brain for the first time.

Potential uses include autonomous vehicles and facial recognition, as well as natural language processing, computer vision, life sciences and chemistry.

The results were presented at a virtual meeting of Principles and Practice of Parallel Programming 2022, an international conference hosted by the US-based Association for Computing Machinery (ACM) in April.

Scientists at Bauman Moscow State Technical University have built the Teragraph supercomputer, based on Leonhard multicore microprocessors, which could become the world's smartest supercomputer.

It is capable of processing ultra-large graphs with up to one trillion (10^12) vertices. Technologies for representing and processing knowledge in the form of graphs have already become a breakthrough for industrial solutions in which other methods have shown low efficiency.

Integrated with the Trans-AI model, its world hypergraph networks and algorithms, Teragraph could become the world's first Real AI supercomputer.

Superintelligent AI supercomputer = Teragraph + Meta-disciplinary AI (Meta-AI) or Transdisciplinary AI (Trans-AI) + the World Hypergraph + Data Ontology + AI Models + ML/Deep Neural Networks + Human Intelligence

It can be used to model, simulate and understand any real-world or virtual-reality system: physical, biological, social, economic, industrial, military, informational and digital.

Its various intelligent applications process the variables of computerized models of real-world systems, whether military, governmental, commercial or industrial, to test and evaluate the effects of possible programs before they are implemented.

Bringing Intelligent Superpowers to Supercomputers

The world's fastest and most powerful supercomputers, as key parts of the world of high performance computing (HPC), are fast integrating narrow and weak artificial intelligence (AI) running on artificial neural networks (ANNs).

Cerebras Systems in California, for example, is now making a system that could allow up to 120 trillion parameters in a single computer.

UK chipmaker Graphcore has just announced a project that aims to support a massive 500 trillion parameters, nearly 3,000 times more than GPT-3 and more than quadruple the number of synapses in a human brain.

It claims its machine will be an "ultra-intelligence AI computer", named The Good Computer in honour of Irving John Good, who originated the concept now known as the "intelligence explosion" or technological singularity. In 1965, he wrote:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

If successful, by 2024, The Good Computer will feature 10 exaFLOPS of AI compute power (10 billion billion floating point operations per second), 4 petabytes (PB) of memory and 10 PB/s of bandwidth.


At the International Supercomputing Conference (ISC) 2022 in Germany, new hardware and software systems for the world's fastest supercomputers were announced. ISC High Performance focuses on the most critical developments and trends in HPC, machine learning, and high performance big data analytics.

Big data analytics involves examining large datasets ("big data") generated by heterogeneous sources, from ecommerce websites and mobile devices to social media platforms and the internet of things.

High performance compute-as-a-service (HPC-as-a-service or HPCaaS) is a relatively new category of cloud service that provides the hardware, software, and expertise to process workloads including big data analytics. Cloud providers like IBM, Amazon Web Services, Microsoft Azure, Hewlett Packard Enterprise (HPE), Penguin Computing on Demand, and Google Cloud Platform run and provide access to supercomputing infrastructure in their own datacenters, usually in the form of a collection of interconnected servers that work together in parallel to solve problems.

The intersection of HPC and artificial intelligence/machine learning (AI/ML) is an area the ISC conference is likely to continue to highlight for years to come, since applying AI/ML will take the computational power of supercomputers to a whole new level.

For example, "the Frontier system, using the HPL-AI benchmark, has demonstrated capabilities to perform over six times more AI-focused calculations per second than traditional floating point calculations, significantly expanding computational capabilities of that system."

Exascale supercomputers are used in computational science, scientific computing or scientific computation, involving the development of models and simulations to understand natural systems.

Supercomputing physical modeling simulations involve

simulations of the early moments of the universe,

airplane and spacecraft flight,

the detonation of nuclear weapons,

nuclear fusion,

cryptanalysis, etc.

There are new types of supercomputers for resource-hungry narrow AI, such as deep neural network models.

Meta has introduced the AI Research SuperCluster (RSC), said to be among the world's fastest AI supercomputers, to accelerate AI research and help build the next major computing platform, the metaverse.

It is designed to learn from trillions of examples, work across hundreds of different languages, analyze text, images and video together, and develop new augmented reality tools.

Meet Teragraph: A fundamentally new Russian supercomputer

The Ministry of Science and Higher Education of the Russian Federation reports that Bauman Moscow State Technical University (BMSTU) has created the world's first microprocessor and supercomputer in which a discrete mathematics instruction set (DISC, Discrete Mathematics Instruction Set) is implemented at the hardware level.

The computing complex was named "Teragraph": it is designed to store and process super-large graphs. The supercomputer is planned for use in modeling biological systems, real-time analysis of financial flows, storing knowledge in artificial intelligence systems, and other applied tasks.

Most important computational tasks require storing and processing huge arrays of discrete information. For efficient and parallel processing of sets, Bauman Moscow State Technical University has developed a unique Leonhard Euler microprocessor, which contains 24 specialized heterogeneous DISC Lnh64 cores. Leonhard takes over the part of the computational load that universal arithmetic microprocessors (for example, Intel or ARM) or graphics accelerators do not cope with well. The results of executing commands for processing sets or graphs on the Leonhard Euler microprocessor are sent to the host system for further use during the computational process.

The Leonhard Euler microprocessor takes up 200 times less die area than a single Intel Xeon microprocessor, while consuming 10 times less energy. Despite a relatively low clock frequency of about 200 MHz, the performance of the Leonhard Euler microprocessor significantly exceeds that of Intel Xeon family microprocessors (3 GHz). This is achieved through parallelism in processing complex data models, which allows it to process up to 120 million graph vertices per second.
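
To make that figure concrete, here is a toy Python sketch of the kind of graph-traversal workload being described: a level-synchronous BFS that reports how many vertices it visits per second. It runs on an ordinary CPU and says nothing about the DISC/Leonhard hardware itself; the random graph, its size and the source vertex are invented for the illustration.

```python
# Toy illustration of a graph-traversal workload: a level-synchronous BFS
# that counts how many vertices it visits per second on a random graph.
# Graph size, edge count and source vertex are arbitrary example values.
import time
from collections import defaultdict

import numpy as np

rng = np.random.default_rng(1)
num_vertices, num_edges = 200_000, 1_000_000
edges = rng.integers(0, num_vertices, size=(num_edges, 2))

adjacency = defaultdict(list)
for u, v in edges:
    adjacency[int(u)].append(int(v))
    adjacency[int(v)].append(int(u))

def bfs_vertices_per_second(source=0):
    visited = np.zeros(num_vertices, dtype=bool)
    frontier = [source]
    visited[source] = True
    processed = 0
    start = time.perf_counter()
    while frontier:
        next_frontier = []
        for u in frontier:                      # each frontier vertex is "processed"
            processed += 1
            for v in adjacency[u]:
                if not visited[v]:
                    visited[v] = True
                    next_frontier.append(v)
        frontier = next_frontier
    elapsed = time.perf_counter() - start
    return processed / elapsed

print(f"~{bfs_vertices_per_second():,.0f} vertices/second on this toy graph")
```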



The revolutionary innovation is led by Aleksey Popov of the Department of Computer Systems and Networks at Bauman Moscow State Technical University.

Real AI Supercomputers vs. Fake AI Supercomputers

When discussing HPC in the context of AI/ML/DL, one needs to draw a critical distinction between Real AI Supercomputers, which have real intelligence, and Fake AI Supercomputers, which have false and fake intelligence.

Fake Artificial Intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

Here are some examples of FAI supercomputers.


Meta Platforms (formerly Facebook) has introduced the AI Research SuperCluster (RSC) as one of the world's fastest AI supercomputers, built to accelerate AI research and help build for the metaverse.

RSC is supposed to help build new and better AI models that can learn from trillions of examples; work across hundreds of different languages; seamlessly analyze text, images and video together; develop new augmented reality tools and more. All to pave the way toward building technologies for the next major computing platform, the metaverse, where AI-driven applications and products will play an important role.

Before the end of 2022, though, phase two of RSC will be complete.


At that point, it will contain some 16,000 total GPUs and will be able to train AI systems "with more than a trillion parameters on data sets as large as an exabyte." (For comparison, Microsoft's AI supercomputer built with research lab OpenAI uses 10,000 GPUs.)

The Real AI Supercomputer, by contrast, is about the modeling and simulation of reality and mentality by its computer systems. Specific applications of the RAIS include NLP/NLU machines and Causal, Contextual and Composite Machine Intelligence and Learning Technology.

Real AI, as Machine Intelligence and Learning, is to define the future development of all computing machinery: personal computers, computing and mobile devices, data processors, electronic computers, computer nodes, networks, clusters, supercomputers, quantum computers, information processing systems, and digital machines that can store and process data/information/knowledge automatically, autonomously, and intelligently, effectively interacting with the world.

https://www.dhirubhai.net/pulse/real-ai-computing-machinery-superhumanai-azamat-abdoullaev/

In the case of the fake AI supercomputer, capitalized on by big tech, its designers and developers write all sorts of classifiers (image, object, speech, text, etc.), reanimating old statistical algorithms mixed with optimization, predictive data analytics, data mining and big data, and passing them off as "machine learning" or "deep learning".

They sit all day training and testing special models, applying libraries of benchmark data sets and classifier algorithms, such as linear or logistic regression, tree models, or deep neural networks, for images, text, speech, etc., training, testing, tuning and validating algorithm performance on endless sets of labeled (and often biased) data.
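
For concreteness, here is a minimal sketch of the train/test routine being described, using scikit-learn's toy digits dataset and a logistic-regression classifier; the dataset, model and settings are chosen only for illustration.

```python
# A minimal sketch of the standard workflow described above: load a labeled
# benchmark dataset, split it, fit a classifier, and score it.
# The dataset and model choices here are illustrative, not prescriptive.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                        # labeled benchmark data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)                  # train/test split

model = LogisticRegression(max_iter=5000)                  # one classic classifier
model.fit(X_train, y_train)                                # "training"
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))  # "testing"
```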

Real AI Supercomputer (RAIS): The Good Computer + Teragraph + Trans-AI + World Hypergraph

In the case of real AI supercomputers, you study intelligence per se, and how it relates to the world and all its content and domains. Such a machine models reality in terms of digital mentality, namely:

what is in the world,

how the world works via causal algorithms

and how to truly operationalize intelligent representation, reasoning and learning.

As a result, its real classifier algorithms not only meaningfully assign a class label or category to a data input, but also provide causal predictions for different types of classification problems.

Such a superintelligent supercomputer is to effectively interact with its environment, thus exhibiting real intelligence without any biological intelligence limitations and boundaries.

It could serve as a general intelligence platform for research institutions, centers, commissions and committees for AI, with core competences in supercomputing, machine learning, NNs, computer vision, NLP, data analysis, optimization, operations research, multi-agent systems, etc., to research and develop specific AI/ML/DL technologies and applications in healthcare, the energy industry, geoinformation systems, manufacturing, and other industries.

Real AI is NOT about mimicking/replicating/simulating the human body/brains/mind/intelligence/cognition/behavior; ML and DL are NOT parts of Artificial Human Intelligence.


Due to its wrong assumptions, AI has no fundamental theory for its key elements:

Reality, with its modeling, mapping, representation and simulation

Data, with its modeling, mapping, representation and simulation

Intelligence, with its modeling, mapping, representation and simulation

As a result, machine learning, with its data, models, and algorithms, such as NNs/DL, has no theory, being a black-box system dependent on biased training data sets, trial and error, massive compute, and application-specific hardware such as GPUs, TPUs, or NPUs.

Accordingly, all large-scale language model NNs (LLMs), such as LaMDA, GPT-3/4, OPT-175B, Megatron-Turing (MT-NLG), PaLM and Gato, the transformer machine learning models, are false positives. A false positive is when a test result incorrectly indicates the presence of a state or condition (such as a disease, a pregnancy or intelligence) when it is not present, while a false negative is the opposite error, where the test result incorrectly indicates the absence of a condition when it is actually present; in other words, wrongly rejecting, or failing to reject, the null hypothesis.

Transformers, which take advantage of parallel computing hardware and graphics processing units (GPUs) in training and inference, are, like other neural networks, ONLY statistical models that capture regularities in data in clever and complicated ways, having no language understanding or any intelligence.
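
To illustrate the point, here is a minimal sketch of scaled dot-product self-attention, the core transformer operation, applied to random toy matrices: it is nothing but matrix arithmetic over the data. The sizes and inputs are made up for the example.

```python
# Scaled dot-product self-attention on random data: pure matrix arithmetic.
# Sizes and inputs are arbitrary toy values for illustration only.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8                        # toy sequence length and width
X = rng.standard_normal((seq_len, d_model))    # stand-in token embeddings

# Learned projections would normally come from training; random here.
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

scores = Q @ K.T / np.sqrt(d_model)            # similarity of every token pair
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True) # softmax over each row
output = weights @ V                           # weighted mixture of value vectors

print(output.shape)  # (6, 8): one re-mixed vector per input position
```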

False and Fake AI/ML (FFAI/ML) is the mainstream AI, going under the names of Big Data Analytics, Narrow/Weak AI, ML, DL or ANNs, LLMs, NLP, Computer Vision, Machine Perception, and AI Supercomputers.

As such, the FFAI/ML supercomputers are about faking/mimicking/replicating/simulating the human body/brains/mind/intelligence/cognition/behavior.

[Narrow/Weak] AI is everywhere now, led by big tech and promoted by the big media. It is in sci-fi, movies, TV series, video games, metaverse platforms, and in your smart devices: iPhones, wearables, cars, cameras, smart TVs, etc.

You encounter it almost on a daily basis, without being aware of it.

An increasing number of digital technological developments are based on the narrow/weak AI:

The internet of things, AI-integrated sensors

Chatbots

Voice search

Facial recognition

Deepfakes, where the face and voice of one person, for example, is transposed onto a video of another

Writing music, compositions, books or software

Doing arts

AI tutoring systems that track student behavior, predict their performance and deliver content and strategies

AI-inventors discovering new materials, drugs

Self-driving transportation

Date matching…

Soon realistic, AI-generated avatars will have AI-generated conversations and sing AI-generated songs, and even teach your children.

And this is only the fake AI of ML and DL, or ANNs, which requires massive training data to imitate learning patterns and make decisions and recommendations.

AI is now omnipresent, and it will become omniscient as the Trans-AI emerges.

Conclusion

The world's first Real AI supercomputers will be driven by Transdisciplinary AI (Trans-AI) or Meta-AI, or the Man-Machine Hyperintelligence, integrating Symbolic AI and Neural ML, be it Artificial Narrow Intelligence, Artificial General Intelligence or Artificial Superintelligence, with Collective Human Intelligence.

Resources

A fundamentally new Russian supercomputer — Teragraph has been created: Ministry of Education and Science of Russia

The World Hypergraph: Global Causal Graph Network: Meta-AI WWW

Real AI + Computing Machinery: Superhuman AI Supercomputers

https://www.dhirubhai.net/pulse/real-ai-computing-machinery-superhumanai-azamat-abdoullaev/

Causal, Composite and Contextual AI

https://www.dhirubhai.net/pulse/trans-ai-true-real-ai-machine-intelligence-learning-azamat-abdoullaev/

https://www.dhirubhai.net/pulse/gato-pathways-brivl-worlds-first-agi-azamat-abdoullaev/

https://cont.ws/@ashacontws/2255922

SUPPLEMENT 1

HPL-AI MIXED-PRECISION BENCHMARK

The HPL-AI benchmark seeks to highlight the emerging convergence of high-performance computing (HPC) and artificial intelligence (AI) workloads. While traditional HPC focused on simulation runs for modeling phenomena in physics, chemistry, biology, and so on, the mathematical models that drive these computations require, for the most part, 64-bit accuracy. On the other hand, the machine learning methods that fuel advances in AI achieve desired results at 32-bit and even lower floating-point precision formats. This lesser demand for accuracy fueled a resurgence of interest in new hardware platforms that deliver a mix of unprecedented performance levels and energy savings to achieve the classification and recognition fidelity afforded by higher-accuracy formats.

HPL-AI strives to unite these two realms by delivering a blend of modern algorithms and contemporary hardware while simultaneously connecting to the solver formulation of the decades-old HPL framework of benchmarking the largest supercomputing installations in the world. The solver method of choice is a combination of LU factorization and iterative refinement performed afterwards to bring the solution back to 64-bit accuracy. The innovation of HPL-AI lies in dropping the requirement of 64-bit computation throughout the entire solution process and instead opting for low-precision (likely 16-bit) accuracy for LU, and a sophisticated iteration to recover the accuracy lost in factorization. The iterative method guaranteed to be numerically stable is the generalized minimal residual method (GMRES), which uses application of the L and U factors to serve as a preconditioner. The combination of these algorithms is demonstrably sufficient for high accuracy and may be implemented in a way that takes advantage of the current and upcoming devices for accelerating AI workloads.
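
As a rough sketch of the idea, and not the benchmark implementation itself (which factorizes in FP16 and refines with a preconditioned GMRES), the following NumPy/SciPy snippet factorizes in float32 and recovers close to 64-bit accuracy with plain iterative refinement; the matrix, sizes and tolerance are invented for the example.

```python
# Minimal sketch of mixed-precision LU plus iterative refinement, the idea
# behind HPL-AI. The real benchmark uses FP16 LU and preconditioned GMRES;
# here float32 stands in for the low precision and the refinement is plain
# residual correction, purely as an illustration.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned test matrix
b = rng.standard_normal(n)

# 1) Low-precision factorization (float32 stands in for FP16 here)
lu, piv = lu_factor(A.astype(np.float32))

# 2) Initial low-precision solve, promoted back to float64
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

# 3) Iterative refinement: residual in float64, correction via the cheap factors
for _ in range(10):
    r = b - A @ x                                   # high-precision residual
    d = lu_solve((lu, piv), r.astype(np.float32))   # low-precision correction
    x += d.astype(np.float64)
    if np.linalg.norm(r) / np.linalg.norm(b) < 1e-12:
        break

print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```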

SUPPLEMENT 2

DOI: 10.18287/2223-9537-2021-11-4-402-421

Trans-AI: How to Build True AI or Real Machine Intelligence and Learning

Azamat Abdoullaev, EIS Encyclopedic Intelligent Systems Ltd, EU, Cyprus-Russia

Abstract: We are at the edge of colossal changes. This is a critical moment of historical choice and opportunity. It could be the best 5 years ahead of us that we have ever had in human history or one of the worst, because we have all the power, technology and knowledge to create the most fundamental general-purpose technology (GPT), which could completely upend the whole human history. The most important GPTs were fire, the wheel, language, writing, the printing press, the steam engine, electric power, information and telecommunications technology, all to be topped by real artificial intelligence technology. Our study refers to Why and How the Real Machine Intelligence or True AI or Real Superintelligence (RSI) could be designed and developed, deployed and distributed in the next 5 years.

The whole idea of RSI took about three decades in three phases. The first conceptual model of Trans-AI was published in 1989. It covered all possible physical phenomena, effects and processes. The more extended model of Real AI was developed in 1999. A complete theory of superintelligence, with its reality model, global knowledge base, NL programming language, and master algorithm, was presented in 2008.

The RSI project has been finally completed in 2020, with some key findings and discoveries being published on the EU AI Alliance/Futurium site in 20+ articles. The RSI features a unifying World Metamodel (Global Ontology), with a General Intelligence Framework (Master Algorithm), Standard Data Type Hierarchy, NL Programming Language, to effectively interact with the world by intelligent processing of its data, from the web data to the real-world data. The basic results with technical specifications, classifications, formulas, algorithms, designs and patterns, were kept as a trade secret and documented as the Corporate Confidential Report: How to Engineer Man-Machine Superintelligence 2025. As a member of EU AI Alliance, the author has proposed the Man-Machine RSI Platform as a key part of Transnational EU-Russia Project. To shape a smart and sustainable future, the world should invest into the RSI Science and Technology, for the Trans-AI paradigm is the way to an inclusive, instrumented, interconnected and intelligent world.
