What is special about Generative AI?


And what do … nuclear fission and… post-structuralism have to do with it?

The most common types of non-Generative AI are deterministic (i.e. they produce the same output when given the same input each and every time). On the other hand, the output from Generative AI (GAI) models is not necessarily consistent when fed the same input.

“Classical” versus Generative AI paradigm

In the more “classical”, non-Generative AI case, training aims at making the model better and better at achieving a certain objective; this is called maximizing or minimizing an objective function. For example, if we are developing a machine learning (AI) model to forecast the price of a stock, the objective could be to minimize the difference between what the model predicts and what the price turns out to be in the market. During training, the model adjusts its parameters so that it proceeds to lower and lower values of the objective function, as shown in Figure 1.

Figure 1: Classical AI – minimizing the objective function.


Even though it is not guaranteed that the model will avoid getting “trapped” in one of the local minima of the objective function, some minimum – global or local – is the final, “steady” state of the model.

In Generative AI, the approach is all about not being steady – about being dynamic and “flexible”. Instead of rolling down the objective function and stabilizing somewhere at or near the bottom, the model is allowed to actively explore various outcomes and possibilities. In our example, these other possibilities could be ones where the model does not exactly minimize the difference between the predicted and realized stock price, but gains the flexibility it might need to anticipate never-before-seen extreme events (say, black swans).
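The contrast between the two paradigms can be sketched in a few lines of code. This is a toy illustration (the objective function, learning rate, and number of starting points are all made up for the example, not taken from any real model): plain gradient descent settles deterministically into the nearest minimum, while an exploration-based search samples many possibilities and can land in a better basin.

```python
import random

# Toy objective with two basins: a shallow local minimum near x = +1
# and a deeper global minimum near x = -1 (hypothetical landscape).
def f(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=500):
    """Plain gradient descent: deterministic, settles in the nearest basin."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# "Classical" behavior: starting at x = 2 we roll into the nearby
# local minimum (around x = +0.96) and stay there.
x_classical = descend(2.0)

# Exploration-style behavior: try many random starting points and keep
# the best outcome, instead of committing to a single deterministic descent.
random.seed(0)
candidates = [descend(random.uniform(-2, 2)) for _ in range(20)]
x_explored = min(candidates, key=f)

print(f"single descent:   x = {x_classical:.2f}, f = {f(x_classical):.3f}")
print(f"with exploration: x = {x_explored:.2f}, f = {f(x_explored):.3f}")
```

The single descent ends up trapped on the right side of the landscape, while the exploring search also discovers the deeper basin on the left – a crude stand-in for the “flexibility” described above.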

Generative Science & Emergence

Basically, the term “Generative” in Generative AI has the same meaning as in Generative Science: the science that studies how the interactions of the many parts of complex systems lead to dynamic and creative behaviors, and to the emergence of system properties that could not have been anticipated by looking at the parts of the system in isolation.

Here is one simple example of emergence:

Say we write a computer program in which we instruct the computer that we have protons and neutrons in a nucleus that interact with each other: protons repel other protons (Coulomb’s law, due to their positive charge), whereas neutrons and protons – both among themselves and with each other – attract due to the strong nuclear force.

Figure 2: An atom - Generated with the MS365 (Generative AI) PowerPoint Copilot


Let’s assume that we change the numbers of nucleons – protons and neutrons – to see what their interactions look like and what this change may mean for the nucleus. It turns out that as we increase their numbers, at some point the Coulomb repulsion among protons exceeds the strong-force attraction among the nucleons: the force pushing outward becomes stronger than the force holding the nucleus together. Our computer finds that this nucleus is not “stable” and will split into pieces: we have just discovered fission! You have probably heard about this in the context of heavy nuclei such as uranium or plutonium.
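A minimal version of this experiment can be sketched numerically. The toy below encodes only the competition between Coulomb repulsion and nuclear attraction, using the standard textbook semi-empirical mass formula (pairing term omitted; the coefficients are common textbook values in MeV, and the symmetric-split assumption is a simplification for illustration). Nothing about “fission” is programmed in – we only ask whether splitting a nucleus in half releases energy:

```python
# Textbook semi-empirical mass formula coefficients (MeV); pairing omitted.
A_V, A_S, A_C, A_A = 15.75, 17.8, 0.711, 23.7

def binding_energy(Z, A):
    """Approximate binding energy (MeV) of a nucleus with Z protons, A nucleons."""
    volume  = A_V * A                            # bulk attraction (strong force)
    surface = A_S * A ** (2 / 3)                 # surface nucleons bind less
    coulomb = A_C * Z * (Z - 1) / A ** (1 / 3)   # proton-proton repulsion
    asym    = A_A * (A - 2 * Z) ** 2 / A         # neutron-proton imbalance penalty
    return volume - surface - coulomb - asym

def fission_energy_release(Z, A):
    """Energy released (MeV) by splitting symmetrically into two halves."""
    return 2 * binding_energy(Z // 2, A // 2) - binding_energy(Z, A)

# Scan light vs heavy nuclei: splitting "emerges" as favorable for heavy ones.
for name, Z, A in [("iron-56", 26, 56), ("uranium-238", 92, 238)]:
    q = fission_energy_release(Z, A)
    verb = "releases" if q > 0 else "costs"
    print(f"{name}: splitting {verb} about {abs(q):.0f} MeV")
```

For iron the split costs energy, while for uranium it releases on the order of 200 MeV – the Coulomb term has overtaken the attraction, just as described above. (Real fission also involves a barrier; this toy only checks the energy balance.)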

We did not introduce the concept of fission into our program, yet the interactions of the parts of the system led to this emergent behavior. That is where the beauty (and risk) of Generative AI originates. And it is conceivable that this is where real intelligence may emerge from (and/or that life itself may be a result of something similar).

Post-structuralist theories, neo-cybernetic mechanisms & … simpler things

For individuals more philosophically and scientifically inclined:

The closed systems of contemporary Artificial Intelligence do not seem to lead to intelligent machines in the near future. What is needed are open-ended systems with non-linear properties in order to create interesting properties for the scaffolding of an artificial mind. Using post-structuralist theories of possibility spaces combined with neo-cybernetic mechanisms such as feedback allows to actively manipulate the phase space of possibilities. This is the field of Generative Artificial Intelligence and it is implementing mechanisms and setting up experiments with the goal of the creation of open-ended systems. It sidesteps the traditional argumentation of top-down versus bottom-up by using both mechanisms. Bottom-up procedures are used to generate possibility spaces and top-down methods sort out the structures that are functioning the worst. Top-down mechanisms can be the environment, but also humans who steer the development processes.

[Excerpt from: van der Zant, T., Kouw, M., Schomaker, L. (2013). Generative Artificial Intelligence. In: Müller, V. (eds) Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 5. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31674-6_8]

The above excerpt touches on some central concepts for Generative AI, such as closed vs. open systems, non-linear properties, and the phase space of possibilities. But if these – or post-structuralism and neo-cybernetics – sound too exotic for you (cannot blame you), here is a simpler example to leave you with:

Let’s take the recently very popular Large Language Models (LLMs), which have made Generative AI THE topic of conversation everywhere! By design, these models will not always return the same output for a given input, even though they “know” which token is most likely to follow a given series of tokens (phrase/sentence).

More specifically, it may be that the majority of the time the word “book” should follow the sequence “The little child opened her …”. Always choosing the most likely word is closer to the more deterministic, “classical”/non-GAI case. GAI instead allows for permutations and deviations from this most likely scenario, and that is the source of the creativity LLMs exhibit. It is not random that LLMs are better at poetry & fiction than non-fiction...
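The next-token choice just described can be sketched with softmax sampling and a “temperature” knob. This is a toy illustration only – the vocabulary and scores below are made up, not taken from any real model:

```python
import math
import random

# Hypothetical next-token scores for "The little child opened her ..."
logits = {"book": 3.0, "eyes": 2.2, "umbrella": 1.0, "spaceship": 0.2}

def sample_next(logits, temperature, rng):
    """Sample a token from softmax(logits / temperature)."""
    if temperature == 0:                      # greedy: deterministic argmax
        return max(logits, key=logits.get)
    scaled = {t: v / temperature for t, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    r, cum = rng.random(), 0.0
    for token, p in probs.items():
        cum += p
        if r < cum:
            return token
    return token  # guard against floating-point round-off

rng = random.Random(42)
# Temperature 0: the "classical", deterministic behavior.
greedy = {sample_next(logits, 0, rng) for _ in range(20)}
# Higher temperature: the GAI-style behavior, varied and occasionally surprising.
creative = {sample_next(logits, 1.0, rng) for _ in range(20)}
print("greedy tokens:  ", greedy)
print("creative tokens:", creative)
```

At temperature 0 the model always picks “book”; at higher temperatures it deviates from the most likely word – which is exactly the room for creativity described above.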


David DeLima Luria

20-year executive, global investment markets. Vice president.

1y

Thank you, Argyro (Iro) Tasitsiomi, PhD. Your posts always get me thinking. At the end of the day, whether we are talking about non-Gen or Gen AI, do you see us beginning to bump against Heisenberg (scientist, not Breaking Bad) who said something to the effect of "you disturb it when you watch it"? (My gross oversimplification).

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

1y

The intersections of nuclear fission, post-structuralism, and neo-cybernetics with Generative AI unveil a fascinating web of influences. In the realm of Generative AI, understanding the parallelisms might shed light on unique perspectives and inspire innovative approaches. How do you see these diverse elements converging, and do you think exploring unconventional connections could lead to breakthroughs in the evolution of Generative AI models?
