The Missing Pieces: Why Large Language Models Are Not, and Cannot Become, Truly Self-Aware

Artificial Intelligence (AI) entities rooted in Large Language Models (LLMs) have attracted both thrill and fear in recent times. The idea that such AI systems may soon become self-aware has been a hot topic and the subject of extensive speculation. In this regard, it is worth discussing how close the psychosensory capacity of an AI entity can get to that of a human.

One key aspect that separates humans from AI is the sophisticated and unmatched machinery encapsulated in the human brain. This machinery forms the bedrock of human thought and decision-making. The frontal cortex, the hippocampus, the basal ganglia, and other parts of the brain, including limbic structures such as the amygdala, the thalamus, and the cingulate gyrus, collectively enable us to make emotional decisions, perceive beauty, and appreciate the world around us in a profoundly unique way.

Here, I attempt to explore the fundamental inadequacies of LLM-based entities in replicating the profundity of human emotion, awareness, appreciation of art and beauty, and the vivid sensory experiences that characterize us.

The human brain is a marvel of Emotional Intelligence (EI). It is a remarkable organ composed of an intricate network of billions of neurons and neural connections that are renewed, rebuilt, and reshaped throughout our lives. Several crucial regions of the brain play pivotal roles in shaping our emotional experiences and decision-making processes. In this article I will focus on the frontal cortex, the hippocampus, and the basal ganglia as the linchpins of my conclusions.

The frontal cortex is the decision hub of the human brain and the seat of EI. The prefrontal cortex, an intricate part of the frontal lobe, is where higher-level thinking and self-regulation take place. It plays a crucial role in managing our emotions, grasping social signals and cues, and making intricate decisions. It intercepts and modulates sensory stimuli, leading to emotionally informed responses. LLM-based entities intrinsically cannot reach the emotional depth that the frontal cortex affords. This stems from the fact that the responses of LLM-based AI entities are based on patterns in the data they encounter rather than on the psycho-emotional perception that the frontal cortex offers.

The hippocampus, an integral part of the limbic system, is often termed the memory bank of emotion. It stores our emotional memories. When a human experiences a stimulus for the first time, emotions are generated in response to it. The memory of these emotions is stored and processed in the limbic system, particularly in the hippocampal circuitry and the amygdala. The hippocampus is also responsible for the retrieval of memories, particularly those tied to emotions. It empowers us to recall past experiences, evaluate them emotionally, and make informed decisions based on that emotional memory. It also enables us to revise our emotions and memories about a stimulus as we receive new information about it.

LLM-based entities, on the other hand, lack this capability, as they possess no emotional memory. They may recall facts, and may even systematize and store patterns in data and responses within their embedded networks, but they lack the autonomy to produce specific emotions in response to a stimulus without relying on information they have been fed about what to consider good or bad and what emotions to display toward a predetermined good or bad stimulus. They are also limited in their ability to independently change their responses to the same stimuli over time.

The basal ganglia denote a cluster of subcortical nuclei primarily in charge of motor control, but also involved in other functions such as motor learning, executive function, decision-driven behavior, and emotion. They play a role in driving emotional responses in humans, influencing those responses by regulating motor functions, including voluntary reactions to emotional experiences. They also affect reward-based learning through a sophisticated and fascinating process. The network linking the basal ganglia to the cerebellum allows the two regions to work in tandem as they regulate and adjust mechanisms such as motor control, emotion recognition, and psycho-behavioral expression, with the two regions directing the choice and accuracy of socio-behavioral output.

LLM-based entities lack the mechanisms that our limbic system and its connections to other parts of the brain offer us. They lack, for example, the capability to detect pheromones, the release of dopamine and adrenaline that such detection triggers, and the activation of dopaminergic receptors. Given the way they are trained, they may understand the language of love and romance, but they are incapable of inherently grasping amorous stimuli and reacting naturally to them. When it comes to translating emotions into instinctive action, they are thoroughly incapable, which renders them unable to make emotion-driven choices.

The Missing Emotional Link in LLM-Based Entities

Large Language Models are data-centric structures that excel at processing and producing human-like text and language. Given clearly defined prompts, they can imitate human conversation, answer questions, and even write software code, compose poetry, or write music. However, despite their impressive linguistic capabilities, they lack the EI that emerges from the human brain's elaborate neuronal architecture.

One of the primary limitations of LLM-based entities is the absence of emotional memory. While they can generate responses based on patterns in data, they do not possess the ability to recall past experiences with associated emotions. In contrast, humans can remember events that made them happy, sad, or fearful and make decisions based on those emotional associations.
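This statelessness can be made concrete with a small sketch. The `respond` function below is a hypothetical, illustrative stand-in for an LLM call, not any real model API: it is a pure function of its prompt, so a "fact" exists for it only while the fact is present in the input, and nothing carries over between calls unless the caller re-supplies it.

```python
# Illustrative stand-in for an LLM call: a pure function of its prompt.
# Nothing is remembered between calls; any "memory" must be re-supplied
# by the caller inside the prompt itself.
def respond(prompt: str) -> str:
    # A real model maps token patterns to likely continuations; here we
    # fake that with a trivial rule just to keep the sketch runnable.
    if "favorite color" in prompt and "blue" in prompt:
        return "Your favorite color is blue."
    return "I have no record of that."

# First call: the "fact" is answered only because it sits in the prompt.
print(respond("My favorite color is blue. What is my favorite color?"))
# Second call: with a fresh prompt, the earlier exchange is simply gone.
print(respond("What is my favorite color?"))
```

Real chat systems work around this by concatenating the conversation history into every new prompt; the model itself still retains nothing.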

LLMs lack emotional perception, and they also lack sensory experience. AI entities such as LLM-based models do not have the power to experience the world as humans do through their sensory organs. Human appreciation of natural objects, art, beauty, perceived sentiments, or experiences is profoundly tied to emotional assessment. The colors and vividness of a rainbow, the mellifluence of a favorite song, the humor of dark comedy, the invigorating and gladdening sight of a loved one's smile, the enchanting aroma of food being cooked, the fragrance of a flower, perfume, or incense, the unbearable pangs of loneliness, the irreplaceable peace of solitude, the assurance of equanimity and confidence in one's capabilities, or the hope one holds for the future, all elicit emotions that are deeply rooted in personal experiences and, oftentimes, memories.

LLMs lack the capacity to see the vibrancy of the colors of an intricate painting, discern the subtleties of a symphony, or register the textural nuances of a natural object, and to feel emotions about such stimuli. These sensory experiences are essential for forming emotional connections with the world around us, something AI cannot reproduce, especially in its current state. LLM-based entities cannot perceive the world in this way, as they lack the neural structures required for emotional processing.

The complexity of human feeling is immeasurable compared with the emotion-grasping and emotion-displaying capabilities of LLMs. The beauty of human emotions lies in their diversity. Humans can experience a wide range of emotions, from glee and passion to mourning and rage, and these feelings are intensely interwoven with our sensory perceptions. AI struggles to grasp this complexity, as it relies primarily on textual patterns and data-driven responses.

Unlike LLMs, humans feel emotions shaped by their individual life experiences. The emotions we feel also depend on our cultural background, upbringing, education, peer interactions, personal memories, and other aspects of our deep social and psychological connection to the living as well as the inanimate world. The ability to appreciate the beauty of art, music, or nature arises from this complicated blend of sensory stimulation and emotional interpretation.

The Future of the Emotional Capabilities of AI-Based Entities, Specifically LLMs

It is undeniable that AI continues to advance at an astonishing pace. Despite that, achieving the kind of emotional depth and sensory appreciation that humans possess remains a distant, even fantastical, goal. LLM-based entities, while remarkable in their linguistic powers, lack the elementary neural architecture necessary for authentic emotional experience and for appreciating the allure and torment of sensory stimuli.

The future of AI holds promise for many areas of application, from healthcare to education. However, it is crucial to recognize the limitations of its emotional and sensory capacities. As we expand the frontiers of AI development, we must acknowledge the exceptionality of the human brain and the remarkable emotional intelligence it provides to our species.

AI may display, in a limited way, goal-oriented behavior and reward-based reactions, but it is apathetic. While AI can assist with and supplement human experience, we must not forget that it cannot replace the vivacity and richness of our emotional environment, our capacity to rightly value beauty and repulsion, or the connection humans can establish with the sensory marvels of the natural and man-made world. These facets of humanity remain inimitable and essential to our existence.

It is, in fact, tempting to speculate that ChatGPT and similar AI systems might demonstrate signs of self-awareness and consciousness, given their ability to produce human-like responses. However, such a presumption would imply an extreme underestimation of the sheer intricacy of the neuronal processes necessary to give rise to self-awareness and consciousness in humans. The human brain is an intricately organized network of billions of neurons and synapses. These building blocks of the nervous system work in harmony to produce not just conscious thought but also the vibrant tapestry of subjectivity in our experience. Our understanding of the neurological basis of consciousness is still limited, and while AI-based entities can mimic conversation and provide responses that seem intelligent, they fundamentally lack the underlying neural circuitry that gives rise to real consciousness as we know it.

Consciousness is not merely about triggering ostensibly intelligent responses to input. It entails the integration of sensory information, self-awareness, emotion, and a subjective experience of the world. The human capabilities to reflect upon one's thoughts, generate abstractions, and, while doing so, experience a sense of self distinct from every other individual in the world, all remain far beyond the reach of the current paradigm of AI systems. The leap from producing text-based responses to realizing true consciousness in AI-based entities is a canyon we are far from bridging. Even as AI continues to progress and amaze us with its prowess, it is critical to admit that a near-unfathomable mystery surrounds the very landscape of consciousness that sets the living mind apart.


An edit:

"LLMs don't 'plan out' the text they're generating. They just predict the next word based on context, like Gromit laying out the train tracks as he rides them, which leads to mistakes and nonsense answers.

In that sense, LLMs are comparable to the System 1 thinking popularized by Daniel Kahneman (known mainly for his book Thinking, Fast and Slow). System 1 is described as fast and intuitive, whereas System 2 thinking is slower and more reflective.

LLMs aren't capable (yet?) of System 2 thinking, but humans are! We are capable of using our knowledge of the world, reflecting on it, and coming to unique solutions and conclusions.

Vice versa, LLMs are far better at System 1 thinking than humans will ever be. The combination of both thinking systems, and the ability to decide when to use which, gives us a major advantage over generative models.

Until LLMs are capable of System 2 thinking, we're far from artificial intelligence taking over the world."

- Lisa Becker
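Lisa Becker's point about next-word prediction can be sketched with a toy model. The bigram predictor below is an illustrative stand-in for an LLM, not a real one: like Gromit, it lays down one word of track at a time using only the immediately preceding word, with no global plan, and purely local greedy choices can loop into nonsense.

```python
from collections import Counter, defaultdict

# Tiny training corpus for the toy model.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Greedily pick the most frequent next word given ONLY the previous word."""
    counts = follows[prev]
    return counts.most_common(1)[0][0] if counts else "."

def generate(start: str, max_words: int = 8) -> str:
    """Lay the track one word at a time, with no plan for where it leads."""
    out = [start]
    while len(out) < max_words and out[-1] != ".":
        out.append(next_word(out[-1]))
    return " ".join(out)

# With such a short context window, the greedy chain quickly falls into
# a repetitive loop rather than a planned, meaningful sentence.
print(generate("the"))
```

Real LLMs condition on a far longer context and sample from a probability distribution rather than always taking the top word, but the underlying operation is the same one-token-at-a-time prediction the quote describes.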
