Generative AI Equality – Equilibrium Through Deterrence

The original transformer architecture introduced in “Attention Is All You Need” enabled sequence-to-sequence tasks with neural networks based solely on attention mechanisms and laid the groundwork for today’s high-performing generative AI and large language models (LLMs). Their capability of predicting the next element in a sequence can be seen as a lossy compression of Internet knowledge. By combining unsupervised pre-training with supervised fine-tuning (Radford et al., 2022), the quality of these “predictions” improved dramatically, leading to the most prominent LLMs such as GPT-4, Llama 2, Claude 2, Orca, and Cohere.
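
To make the “next element prediction” idea tangible, here is a minimal toy sketch in Python: a bigram model that counts which token tends to follow which and then generates text one token at a time. This is nothing like a transformer internally, and the tiny corpus and names are made up purely for illustration; only the autoregressive generation loop is conceptually the same.

```python
from collections import Counter, defaultdict

# Toy illustration of autoregressive next-token prediction: a bigram model
# counts which token tends to follow which, then generates text one token
# at a time. Real LLMs replace the counting table with a transformer, but
# the generation loop is conceptually the same.
corpus = "attention is all you need attention is what transformers use".split()

# "Training": count successor frequencies for every token.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(prompt: str, steps: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(steps):
        candidates = successors.get(tokens[-1])
        if not candidates:
            break  # last token never seen during "training"
        # Greedy decoding: always pick the most frequent successor.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("attention"))  # -> "attention is all you need attention"
```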

In the beginning, application scenarios were mainly centered around translation and other language processing tasks; by now, LLM scenarios span almost all disciplines and industries. Recent advancements in generative AI are jaw-dropping:

Solving math problems – applying LLMs to quantitative reasoning for solving mathematical problems has (among others) been introduced in “Solving Quantitative Reasoning Problems with Language Models” (Lewkowycz et al., 2022). A major breakthrough, however, was recently achieved by Google’s DeepMind. The company developed FunSearch (function search), an LLM paired with an evaluator component that searches for functions that solve specific mathematical problems. It does not search for the solution itself but fills the gap in algorithms that approach the problem. FunSearch found code that produced a validated solution to the famous “cap set problem” (the problem can be illustrated as the task of placing as many dots as possible without any three of them forming a straight line) with a set larger than any previously known. The number of dots determines the size of the set. Alhussein Fawzi, a research scientist at Google DeepMind, commented: “To be very honest with you, we have hypotheses, but we don’t know exactly why this works.”
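
Conceptually, FunSearch runs an evolve-and-evaluate loop: the LLM proposes candidate programs, a deterministic evaluator scores each candidate on the target problem (e.g. the size of the cap set a heuristic constructs), and only the best-scoring candidates survive and are fed back as prompts. The sketch below is my own simplified illustration of that loop, not DeepMind’s code: the LLM call is mocked by a random mutation and the evaluator by a toy objective.

```python
import random

# Simplified FunSearch-style loop: keep a small population of candidate
# "programs", ask the (mocked) LLM for variations of the best one, score
# every candidate with a deterministic evaluator, and retain the top few.
# In the real system the candidates are Python functions sampled from an
# LLM and the evaluator measures e.g. the size of the cap set they build.

def evaluate(candidate: dict) -> float:
    """Stand-in for the problem-specific evaluator (toy objective to maximise)."""
    x, y = candidate["x"], candidate["y"]
    return -(x - 3) ** 2 - (y + 1) ** 2  # maximum at x = 3, y = -1

def propose(parent: dict) -> dict:
    """Stand-in for 'ask the LLM to modify the best program found so far'."""
    return {k: v + random.gauss(0, 0.5) for k, v in parent.items()}

population = [{"x": 0.0, "y": 0.0}]
for _ in range(200):
    parent = max(population, key=evaluate)               # best candidate so far
    population.append(propose(parent))                   # LLM-style mutation
    population = sorted(population, key=evaluate)[-5:]   # keep the top 5

best = max(population, key=evaluate)
print(best, evaluate(best))  # converges towards x ≈ 3, y ≈ -1
```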

Material discovery in materials science – the search for new stable inorganic crystals is important for innovations in e.g. solar panels, computer chips, and batteries. So far, this has been an effort-intensive trial-and-error process. Recently, Google DeepMind introduced a generative AI (GNoME) that speeds up this process and has discovered 2.2 million new crystals, of which 380,000 are promising candidates for synthesis. A subset of 736 materials has already been created in labs, proving the stability of the predicted structures (a schematic sketch of this generate-and-filter workflow follows the figure below).

Exponential growth of stable materials discovered by GNoME
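
The underlying pattern is generate-then-filter: propose a huge number of candidate structures, score each with a learned stability predictor (GNoME uses graph neural networks predicting stability-related energies), and keep only those predicted to be stable for expensive validation. The following sketch only illustrates that data flow; the candidate generator and the “predictor” are random stand-ins, not DeepMind’s models.

```python
import random

# Generate-and-filter discovery pipeline in the spirit of GNoME: propose many
# candidate crystal compositions, score each with a learned stability
# predictor, and keep only candidates predicted to be (meta)stable. The
# generator and the "predictor" below are random stand-ins that only show
# the data flow, not actual chemistry or trained models.

ELEMENTS = ["Li", "Fe", "O", "Si", "Na", "Mn"]

def propose_candidate() -> dict:
    """Random two-element composition as a placeholder candidate."""
    a, b = random.sample(ELEMENTS, 2)
    return {"formula": f"{a}{random.randint(1, 3)}{b}{random.randint(1, 3)}"}

def predicted_energy_above_hull(candidate: dict) -> float:
    """Placeholder for a trained predictor; returns eV/atom, lower means more stable."""
    return random.uniform(-0.05, 0.5)

candidates = [propose_candidate() for _ in range(10_000)]
stable = [c for c in candidates if predicted_energy_above_hull(c) <= 0.0]

print(f"{len(candidates)} proposed, {len(stable)} predicted stable")
# The surviving shortlist would then go on to further validation and, for a
# small subset, to actual lab synthesis (as with the 736 materials above).
```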

Generative design in product development – generative design is an iterative design approach that uses generative AI to produce multiple design iterations based on input parameters such as mass, structural load, cost, and integration with other components, in order to find an optimal design. Recently, NASA published the article “Generative Design and Digital Manufacturing: Using AI and robots to build lightweight instruments”, which showed that generative AI was able to outperform expert designers in almost all categories when designing instrument structures: the stiffness-to-mass ratio was 3x better and the maximum stress was reduced by 7x, and the AI needed 1.5 hours whereas two human experts spent two days on average on the same task. A simplified sketch of such a design loop follows the figure below.

Generative AI generated designs outperform human expert designs in all dimensions
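
Stripped down to its core, generative design is a propose-evaluate-select loop over design parameters under constraints. The sketch below illustrates that loop with invented surrogate formulas for mass, stiffness, and stress; real tools, such as the CAD-integrated solvers NASA used, replace these with finite-element simulation.

```python
import random

# Minimal generative-design loop: sample candidate design parameters,
# compute rough surrogate metrics (mass, stiffness, peak stress), discard
# candidates that violate the stress constraint, and keep the design with
# the best stiffness-to-mass ratio. The formulas are invented surrogates;
# production tools evaluate candidates with finite-element simulation.

def evaluate(thickness_mm: float, rib_count: int) -> dict:
    mass = 0.2 * thickness_mm + 0.05 * rib_count               # toy surrogate
    stiffness = thickness_mm ** 1.5 * (1 + 0.1 * rib_count)    # toy surrogate
    max_stress = 100.0 / (thickness_mm * (1 + 0.05 * rib_count))
    return {"mass": mass, "stiffness": stiffness, "max_stress": max_stress}

best, best_ratio = None, float("-inf")
for _ in range(5_000):
    t = random.uniform(1.0, 10.0)        # wall thickness in mm
    ribs = random.randint(0, 12)         # number of stiffening ribs
    metrics = evaluate(t, ribs)
    if metrics["max_stress"] > 40.0:     # constraint: allowable stress
        continue
    ratio = metrics["stiffness"] / metrics["mass"]  # objective to maximise
    if ratio > best_ratio:
        best, best_ratio = {"thickness_mm": t, "ribs": ribs, **metrics}, ratio

print("best design:", best)
```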

While there is still some skepticism about whether generative AI and LLMs are truly evolving into an artificial general intelligence (AGI), these examples undoubtedly prove that something big is emerging. Just imagine your 2-year-old child beating you at e.g. chess, language capabilities, and major university admission tests (cf. “Sparks of Artificial General Intelligence: Early experiments with GPT-4”). What do you think will happen in the years to come? The pace of advancements in powerful GPUs and in LLM effectiveness and efficiency, as well as the sheer number of application scenarios, shows that this “child” is growing unpredictably fast. Even without changing the learning algorithm itself, purely by increasing computational power, these models will advance in a non-linear fashion.

All breakthrough innovations are equally a gift and a curse. While AI undoubtedly opens the door to previously unthought-of innovations and productivity for the greater good of humanity, it can be devastating for mankind if misused. There are numerous instruments to potentially prevent misuse, such as policies, regulations, governance, laws, and incentives. The latter is an important element in game theory, and there is a considerable body of research on it, especially in the area of deterrence theory.

Deterrence theory is based on the idea of preventing an unwanted action by making the potential “cost” of that action prohibitively high for the other party. A credible threat, backed by the capabilities to carry it out and a believable, prohibitively high cost that is clearly communicated to the other party, establishes a Nash equilibrium in which neither party has an incentive to unilaterally deviate from the steady state of taking no harmful action. While the origins of deterrence theory lie in military research, especially in the context of nuclear deterrence, the basic idea and objective can be applied to technology-related scenarios where misuse poses a significant threat to mankind.
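
A stylized two-player game makes the equilibrium argument concrete: each party chooses between “restrain” and “attack”, and as long as retaliation is credible and costly enough, attacking never pays off for either side. The payoff numbers below are purely illustrative; only their ordering matters.

```python
# Stylized deterrence game: both parties choosing "restrain" is a Nash
# equilibrium because a credible, costly retaliation makes any unilateral
# attack strictly worse for the attacker. Payoff numbers are illustrative.

ACTIONS = ["restrain", "attack"]

# payoffs[(action_A, action_B)] = (payoff_A, payoff_B)
payoffs = {
    ("restrain", "restrain"): (0, 0),    # stable status quo
    ("attack", "restrain"): (-10, -8),   # attacker suffers credible retaliation
    ("restrain", "attack"): (-8, -10),
    ("attack", "attack"): (-20, -20),    # mutual escalation
}

def is_nash(a: str, b: str) -> bool:
    """True if neither player can gain by deviating alone."""
    pa, pb = payoffs[(a, b)]
    a_gains = any(payoffs[(alt, b)][0] > pa for alt in ACTIONS)
    b_gains = any(payoffs[(a, alt)][1] > pb for alt in ACTIONS)
    return not (a_gains or b_gains)

print(is_nash("restrain", "restrain"))  # True: nobody benefits from deviating
print(is_nash("attack", "restrain"))    # False: the attacker would rather restrain
```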

A central component of deterrence is the capability to carry out the threat. Transferring this idea to AI technology raises the question of equality of capabilities among powerful parties and countries. An imbalance could invite harmful actions, while sharing AI technology (e.g. through open-source approaches such as Meta’s Llama LLM) might be an opportunity to ensure equality of capabilities and retain an equilibrium of non-harmful behavior. To institutionalize such a sharing approach, alliances and consortia of AI leaders can additionally help establish equality and standardization of capabilities, which are important elements for focusing the evolution of AI technology on the greater good of humanity.
