Reality: brought to you by AI
Monkeys on xerox machines, credit: the author / OpenAI

Recently, researchers exploring the learning capabilities of large language models like GPT acknowledged that these systems have

“already changed the way humans retrieve and process information. Whereas previously typing a prompt into Google only retrieved information and we humans were responsible for choosing what information served that query best, now GPT can retrieve the information from the web but also process it for you”. (Source: Vice/Ekin Akyürek)

This technologically driven behaviour change may indeed mark the beginning of a new era, and not just in the obvious sense that AI might become the next internet, with generative AI establishing itself as the next paradigm-defining platform technology. Beyond this admittedly big technological paradigm shift, there might be an even bigger, and certainly more daunting, epistemological shift happening: a critical step-change in the way we access, process and even conceive information as we set off into this new era of the “AI-powered information age”.

At first, it might not seem to make much of a difference whether we actively search for (aka “google”) information or increasingly prompt (aka “chatgpt”) our way around the HTTP-structured knowledge graph we call “the internet”. But it does, specifically when looking at the level of information degradation and reality reduction that comes with the added convenience of simply being able to ask our friendly AI any question, however complex, just to promptly receive an authoritative-sounding answer that a naive user may already find satisfying.

For starters, what we might unwittingly become part of is one of the greatest social experiments yet: one in which, for reasons of convenience, we have agreed to access a “blurry jpeg of all the text on the Web” instead of searching the Web itself, and in consequence to put up with all sorts of “hallucinations or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone”. The truly critical aspect of using these LLMs, however, is that

“the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself”. (Source: The New Yorker)

No matter how large the models we use become, the underlying principle of this approximative confabulation of information might hence lead us to collectively and increasingly degrade the very data source we all rely on to fuel our ever-growing thirst for knowledge. Not completely unlike how we are progressively destroying this planet in our desperate search for happiness on it…
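
To make this feedback loop tangible, here is a deliberately toy sketch in Python (the signal, the smoothing kernel and all numbers are invented for illustration): a random signal stands in for the original Web, each generation republishes a lossily smoothed copy of the previous copy, and the deviation from the original keeps growing, photocopy-of-a-photocopy style.

    # Toy model of "the Web becoming a blurrier version of itself".
    # Purely illustrative; the smoothing kernel stands in for lossy,
    # model-generated rewriting of existing text.
    import numpy as np

    rng = np.random.default_rng(0)
    web = rng.normal(size=256)        # the "original Web": a detailed signal
    kernel = np.ones(5) / 5           # one lossy "republish" step

    copy = web
    for generation in range(1, 6):
        copy = np.convolve(copy, kernel, mode="same")  # blur the previous copy
        loss = np.abs(copy - web).mean()
        print(f"generation {generation}: mean deviation from original = {loss:.3f}")

Each generation is derived from the previous generation’s output, never from the original, which is exactly the dynamic the quote above warns about.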

However delicate this “data degradation dilemma” might be, the real dilemma starts when we look at what we do with this data from an epistemological point of view. As if in a temporally frozen state, we prompt our way through variations of a reality representation that can only ever be perceived through a static lens, a lens that myopically ends in 2021. Add the habit-forming nature of the suddenly immensely popular “prompt guide” format, and there is an imminent danger of a large part of the digital population simply replicating existing perceptual and conceptual patterns. This in turn creates a significant risk that we not only degrade the data upon which our much-beloved AI-powered reality modelling is done, but also degrade the very thinking patterns we need to actually understand this reality.

Putin in the style of Andy Warhol, credit: the author, OpenAI

One might consequently ask: “How can we escape this AI-powered thought prison, which we ourselves are co-creating?”

Image from the movie “Enter the Dragon” (1973), Bruce Lee, credit: Alan Muscat

To answer this question, we need to go one step further on our epistemological journey into the use of modern information technology systems of the LLM type, such as ChatGPT. Let’s assume that, in reality, this world is complex. In fact, let’s model it for the time being as an interconnected system of various subsystems, each represented by multidimensional data arrays. Let’s further assume that these data arrays are constantly updated with new data points, and that each update may have numerous reciprocal effects on other connected data points, so that there is a constant flow of information and, consequently, a constant cause-and-effect-like exchange of updates between all of these data elements. Now what happens when you not only suddenly “freeze” all of these information flows in time, but also ask a very smart linguistic algorithm to give you the one perfect answer to a specific question you might have about this “reality model” of highly complex, multidimensional and interconnected data arrays?
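
To make this mental model concrete, here is a minimal, purely illustrative Python sketch (the subsystem names, couplings and all numbers are invented assumptions, not a real-world model): a handful of interconnected arrays exchange updates every tick, until we “freeze” the whole system into a static snapshot, the way a training cutoff does.

    # A toy "reality model": nodes hold multidimensional data arrays and
    # propagate a share of every update to connected nodes. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(42)

    # Three hypothetical subsystems, each a small multidimensional array.
    state = {name: rng.normal(size=(3, 3)) for name in ("climate", "economy", "society")}

    # Directed couplings: how strongly one subsystem's changes ripple onward.
    edges = {
        "climate": [("economy", 0.3), ("society", 0.2)],
        "economy": [("society", 0.4)],
        "society": [("climate", 0.1)],
    }

    def step(state):
        """One tick: every node receives fresh data, then pushes a weighted
        share of its change to its neighbours (cause-and-effect exchange)."""
        fresh = {k: rng.normal(scale=0.05, size=v.shape) for k, v in state.items()}
        updated = {k: v + fresh[k] for k, v in state.items()}
        for src, targets in edges.items():
            for dst, weight in targets:
                updated[dst] = updated[dst] + weight * fresh[src]
        return updated

    for _ in range(10):                    # the living system keeps evolving
        state = step(state)

    frozen = {k: v.copy() for k, v in state.items()}   # the "training cutoff"

    for _ in range(10):                    # ...while reality moves on
        state = step(state)

    drift = sum(np.abs(state[k] - frozen[k]).mean() for k in state)
    print(f"mean drift between frozen snapshot and living reality: {drift:.3f}")

Every answer derived from the frozen snapshot is, by construction, an answer about a world that no longer exists.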

Bruce Lee, as imagined by the author via Midjourney

To avoid further confusion, this algorithm will simply “cut through all the clutter” and complexity, and present you with one plausible view of this complex reality, one that it has predictively chosen based on a lot of other people’s previous questions about that very same reality.
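
As a rough analogy for this collapse (the candidate answers and probabilities below are invented), consider how greedy decoding surfaces only the single most likely continuation while silently discarding the rest of the distribution:

    # Invented toy distribution over plausible "answers".
    candidates = {
        "the economy will recover": 0.34,
        "the economy will stagnate": 0.33,
        "the economy will transform": 0.33,
    }

    best = max(candidates, key=candidates.get)
    print(f"the one answer you see: '{best}' (p = {candidates[best]:.2f})")
    print(f"discarded probability mass: {1 - candidates[best]:.2f}")

In this toy example, two-thirds of the model’s own uncertainty never reaches the user.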

From an epistemological point of view, such an algorithmic approach, however technically elegant, might not be without serious issues, as it could be seen as a critical oversimplification and blatant reduction of reality. In fact, by collapsing a multidimensional, future-open reality into a temporally fixed, unidimensional “flat version of reality” based on limited historical data, we might collectively hinder or even lose our ability to challenge already outdated concepts and models of our reality.

By getting accustomed to this flattened informational time warp, we might even endanger the building of new future memories, meaning new models about our collective future on this planet, because such new models would existentially contradict the models we have used to guide our behaviour in the past. This matters enormously given the urgency of finding new pathways into a more sustainable future, and the existential unsuitability of the outdated schemas that got human civilisation to the point where it is now.

So we end up realising that LLMs like ChatGPT could be a much more dangerous “technological narcotic” for our intellect than we might initially have guessed, especially in the way these systems could actually impede our imagination by entrapping it in a reality-like “hall of conceptual mirrors”. Luckily, the very same faculties that have given us LLMs can also help us overcome this conceptual entrapment: creativity and imagination, because these are what is needed to break through existing conceptual barriers, be they technical or social in nature.

Two potential guiding principles might be especially helpful when trying to overcome the danger of conceptual entrapment inherent in using LLMs:

  1. Thomas Kuhn’s model of scientific paradigm shifts can inspire us not only to be the eyes and ears of a collective knowledge graph held together by AI. Kuhn’s thinking can also encourage us to spot the conceptual shortcomings of the models we construct to explain our “reality”: to identify and point out in detail where and how they become inconsistent, incongruent or logically flawed, so that we can improve and ultimately move beyond those models, deepening our understanding of the laws and principles governing our universe.
  2. Another concrete method to escape conceptual LLM-entrapment is inspired by Gödel’s incompleteness theorems, which state, roughly, that no formal system expressive enough to encode basic arithmetic can be both complete and consistent (a compact formal statement follows below). In this spirit, by summoning sufficient imagination and applying enough creativity to break through old thinking moulds and outdated conceptual paradigms, we might use LLMs like ChatGPT to specifically look for inconsistency and incompleteness in their confabulations, to actively develop new ways of thinking, and to identify new areas of research needed to prepare our next real paradigm shift in thinking. And for the area of A(G)I, this might actually entail moving towards a completely new way of thinking about intelligence altogether…
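
For reference, here is the standard textbook form of Gödel’s first incompleteness theorem, in conventional notation (nothing here is specific to this article; F ranges over formal systems, and Q denotes Robinson arithmetic):

    \[
    F \text{ consistent, effectively axiomatized, and } F \supseteq \mathsf{Q}
    \;\Longrightarrow\;
    \exists\, G_F : \; F \nvdash G_F \ \text{ and } \ F \nvdash \neg G_F
    \]

In words: any such system F leaves some sentence G_F that it can neither prove nor refute, which is exactly the kind of gap the second principle above asks us to go looking for.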

A bee, as seen / co-imagined by the author and OpenAI

In closing, let’s ask ourselves why LLMs like ChatGPT can’t provide output that reflects any real understanding and critical thinking. The answer might actually be hidden in plain sight, specifically in the way we interact with these models: by prompting them and requesting an immediate (zero-shot) answer.
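
Here is a minimal sketch of that difference (the generate function below is a hypothetical stand-in for any LLM completion call, not a real API):

    from typing import Dict, List

    def generate(messages: List[Dict[str, str]]) -> str:
        # Hypothetical model call; here it only reports how much context it saw.
        return f"(answer conditioned on {len(messages)} message(s) of context)"

    # Zero-shot: one prompt in, one answer out; nothing about the asker is known.
    print(generate([{"role": "user", "content": "What should I do next?"}]))

    # Interactive: the history grows, so each answer can draw on everything
    # the human counterpart has revealed about their intentions so far.
    history: List[Dict[str, str]] = []
    for turn in ["I'm planning a career change.",
                 "I care more about meaning than salary.",
                 "What should I do next?"]:
        history.append({"role": "user", "content": turn})
        reply = generate(history)
        history.append({"role": "assistant", "content": reply})

    print(reply)

The same final question arrives once without, and once with, a relationship behind it.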

An AGI capable of output that reflects actual reasoning and understanding would at least have to try to understand the context and intentions behind any question it is asked, meaning it would require more interaction, and specifically more time, to build up a unique awareness of its human counterpart’s actual interests and needs. And with the awareness required for such a capability, this AGI might actually also start to become, at least partly, self-aware, potentially beginning to ask itself questions about its own “state of mind”.

In the far-away future, this could mean that, just as with other people, we would first have to develop an actual relationship with an AGI before we could expect any real understanding of who we are and what we want to achieve. This brings us right back to the evolutionary roots of our very own intelligence, which were likely social in nature: in order to achieve real understanding, any agent, human or digital, first needs a structural representation of relationships and a cognitive means to reflect on the resulting social complexities through self-reflective awareness. Lacking that, all we can expect from AI is recognition and prediction of abstract patterns, without any understanding of their actual importance or relevance.

Psychedelic monkey, co-hallucinated by the author & OpenAI
