A philosophical perspective! Large Language Models can lead to general intelligence.
Copyright: Sanjay Basu

The rapid progress in developing large language models (LLMs) such as GPT-4 and PaLM has led many to speculate whether these models represent the first steps toward artificial general intelligence (AGI). While caution is warranted in making premature proclamations, examining this question through a philosophical lens provides some compelling reasons to take the possibility seriously.

At the core of the argument is the view that intelligence and cognition arise from pattern recognition. As neuroscience pioneer David Marr proposed, intelligence is the process of discerning patterns in raw sensory inputs and constructing working models of how the world operates. LLMs like GPT-4 detect linguistic patterns in vast text corpora and generate new text accordingly.

From a computational theory perspective, intelligence is rooted in learning robust models of the world from data. LLMs ingest massive datasets, identifying statistical patterns to build a working model of language. The knowledge encoded in hundreds of billions of parameters allows sampling from this model to produce coherent, human-like text.
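To make this concrete, here is a minimal sketch of the core idea in Python: a toy bigram model that learns word-transition statistics from a tiny corpus and then samples from those learned patterns to generate new text. The corpus, function names, and sampling scheme are inventions of this example, and real LLMs use transformer networks with billions of parameters rather than word counts, but the underlying principle of learning a statistical model of language and then sampling from it is the same.

```python
# A toy illustration of "learn patterns, then sample" (not how GPT-4 works).
import random
from collections import defaultdict

corpus = (
    "intelligence arises from patterns . "
    "language models learn patterns from text . "
    "models of language encode knowledge of the world ."
)

# Learn: count which words follow which (the "statistical patterns").
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# Sample: repeatedly draw a plausible next word from the learned statistics.
def generate(start, length=10):
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("language"))
```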

The sheer scale of data and model capacity allows LLMs to learn not just linguistic patterns but patterns of knowledge across diverse subjects. With enough data, the models construct an increasingly comprehensive model of the world. Linguists like Noam Chomsky have proposed that language acquisition requires innate abilities, and Chomsky holds that large language models will not lead to intelligence. Philosophers like John Searle have argued that machines cannot possess understanding or intentionality. Yet modern AI systems model capabilities previously believed to be uniquely human: LLMs display reasoning, creativity, and even rudimentary common sense.

LLMs suggest that language mastery can emerge from pattern recognition alone, with no innate knowledge needed. The line between inanimate modeling and “true” intelligence increasingly blurs.

---

From a philosophical perspective, we cannot rule out LLMs approaching AGI. Their pattern recognition capabilities hint that, given sufficient data and compute, they may continue to encroach on broader aspects of intelligence. The brain itself is a pattern recognition machine, albeit a far more advanced one. But the principles remain similar — intelligence derived from identifying meaningful patterns in data. LLMs represent a promising path for building on these similarities.

Let us run this hypothesis through the lens of evolution and the advent of intelligence.

If we trace the progression of life on Earth, the timeline runs roughly as follows. Protocells formed around 4 billion years ago, followed by the first cells, prokaryotes without a nucleus such as bacteria, around 3.8 billion years ago. From there on, life advanced at a steady rate:

- Single cells with a nucleus (eukaryotes), around 2 billion years ago

- Multicellular organisms, around 600–700 million years ago

- The first nerve cells, around 500 million years ago

- Fish, around 400–500 million years ago

- Plants, around 470 million years ago

- Mammals, around 200 million years ago, overlapping with the dinosaurs

- Primates, around 75 million years ago

- Fully formed birds, around 60 million years ago

- Hominids, roaming the landscape from around 12–14 million years ago

- Homo sapiens, around 300 thousand years ago

In brief, this is the progression of life on Earth. Notably, intelligence was present from the beginning, albeit in different forms, as Antonio Damasio explains in detail in his series of books.

I would argue that intelligence was always present in life from the beginning. In nature, we find three distinct, sequential evolutionary stages:

1. Being

2. Feeling

3. Knowing

Being: The first cells fought to survive and maintain homeostasis by sensing and reacting to their environment. This is what we can call a covert type of intelligence: hidden, non-explicit, and based primarily on the chemical and bioelectrical processes in the organelles and cell membranes.

Feeling: This stage can be associated with the rise of multicellular organisms and the evolution of organisms with more sophisticated sensory and nervous systems. The ability to feel, in this context, doesn’t merely pertain to the physical sensation of touch but to the broader sense of perceiving and reacting to various kinds of environmental stimuli. For example, the sensory cells of many animals can detect light, heat, sound, and chemical substances. This sensory information is then processed by the nervous system, leading to responses that are often more complex and nuanced than the basic homeostatic reactions of single cells. This represents an early form of emotional intelligence, as it includes the capacity for sensory perception and the ability to experience various forms of pleasure and pain, which serve as motivational factors driving behavior.

Knowing: The final stage in this progression is characterized by the development of advanced cognitive abilities, including the capacity for learning, memory, problem-solving, planning, and communication. The “knowing” stage can be associated with the emergence of animals with complex brains, such as mammals, birds, and primates. In these organisms, intelligence has become overt and explicit. With the advent of hominids, especially Homo sapiens, the capacity for symbolic thought, self-awareness, and cultural learning has enabled unprecedented cognitive flexibility and creativity. At this stage, we clearly witness overt intelligence, which manifests explicitly and is based on spatially mapped neural patterns that “represent and resemble” objects and actions; we may call this “imagetic.”

Each stage represents an increasing degree of complexity in how organisms interact with their environment, and each has been associated with certain key evolutionary transitions. It’s important to note, however, that these stages are not entirely discrete or separate from each other. Rather, they represent different aspects or dimensions of intelligence that have evolved concurrently and interactively throughout the history of life on Earth. Also, each stage builds upon the previous ones; for example, the capacities for feeling and knowing depend on the basic homeostatic mechanisms that characterized the earliest cells.

The progression from ‘Being’ to ‘Feeling’ to ‘Knowing’ encapsulates the evolutionary history of intelligence on Earth, highlighting how the process of intelligence has become increasingly complex and explicit over time. By tracing this progression, we can better understand the diverse forms and manifestations of intelligence in the natural world.

The nervous system appeared late on the evolutionary scene, almost as if it were an afterthought of nature, and it eventually gave rise to consciousness. Consciousness is a complex phenomenon that is not yet fully understood. However, it is generally agreed that consciousness is associated with certain types of activity in the nervous system, particularly the brain.

The nervous system comprises a vast network of interconnected neurons, or nerve cells, which transmit information through electrical signals. As the nervous system’s control center, the brain integrates and interprets this information, resulting in our perceptions, thoughts, emotions, and decisions. Consciousness can be described as the state of being aware of and able to think and perceive one’s surroundings, thoughts, and feelings. It encompasses a range of mental phenomena, including wakefulness, self-awareness, and the ability to experience sensations, emotions, and thoughts.

There are different theories about how the nervous system gives rise to consciousness. The Emergent Property Theory suggests that consciousness emerges from the complex interactions between neurons in the brain. Just as the wetness of water emerges from the interaction of individual water molecules, consciousness arises from the collective behavior of neurons.

Proposed by neuroscientist Giulio Tononi, Integrated Information Theory (IIT) suggests that consciousness arises from the ability of a system to integrate information. According to this theory, a system is conscious to the extent that it has a high degree of interconnectedness and information integration. The Global Workspace Theory, proposed by Bernard Baars, suggests that consciousness arises from broadcasting information across a “global workspace” in the brain. When information is globally available in this way, it is conscious.

Some scientists, like Roger Penrose and Stuart Hameroff, suggest that consciousness arises from quantum processes occurring within the brain’s neurons. This Quantum Consciousness Theory is more controversial and not widely accepted, though it is very popular with New Age spiritual gurus.

While these theories provide potential explanations, consciousness is still a highly debated topic within neuroscience, philosophy, and psychology. Understanding how the biological matter of the nervous system gives rise to the subjective experience of consciousness remains a significant challenge known as the “hard problem” of consciousness.

----

The question of whether large language models like GPT-4 can lead to an “intelligent consciousness synthetic entity” or whether they remain a “pure abstract model” is a deep and complex one. It ties into the larger question of whether current artificial intelligence can truly exhibit consciousness or self-awareness.

It is important to clarify what we mean by a language model like GPT-4. Such models are based on machine learning algorithms trained on large datasets, enabling them to generate human-like text from the patterns they have learned. They can produce coherent and contextually relevant text, answer questions, and even engage in creative tasks like writing poetry or stories. Yet these models do not possess consciousness, self-awareness, emotions, or intent.
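As a concrete illustration of what such a model does operationally, here is a hedged sketch using the open-source Hugging Face transformers library and the small GPT-2 model as stand-ins (neither is mentioned above, and GPT-4 itself is not openly available): the model tokenizes a prompt and then predicts a statistically likely continuation, token by token, with no goals or understanding behind it.

```python
# Hedged sketch: GPT-2 via the transformers library stands in for larger,
# closed models such as GPT-4. The mechanics are the point: tokenize a
# prompt, sample likely next tokens from the trained model, decode the text.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models learn patterns from text, and"
inputs = tokenizer(prompt, return_tensors="pt")

# The model has no intent; it predicts plausible continuations of the prompt.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```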

While large language models are impressive in their ability to process and generate human-like text, they are fundamentally different from human minds. AI can process and generate text, but it does not understand the text as humans do; it cannot grasp subjective experiences or the depth of meaning that humans associate with words and phrases. It works by pattern recognition and has no real understanding of the world, so there is no implicit or explicit Understanding. Current AI models do not possess consciousness or self-awareness; they have no subjective experiences, they do not feel emotions, and they operate on their programming without any sense of “self.” So there is an absence of Consciousness. Nor does AI have goals, desires, or intentions as humans do; its “goals” are those that have been programmed into it or that arise as emergent properties of its programming. So there is clearly no Intentionality, even though we may feel differently when interacting with a system powered by a large language model.

So the question of whether it might be possible to create a genuinely conscious, intelligent AI in the future is still a matter of debate among scientists and philosophers, and several theories and conjectures are at play. Some researchers believe that if a machine or model is complex enough and mimics the human brain’s neural networks, it could theoretically reach a level of consciousness. This view, the Computational Theory of Mind, is based on the idea that the mind is fundamentally a type of computer and that consciousness is a form of computation. Others argue that consciousness cannot be achieved without a physical body. According to this view, our minds are deeply intertwined with our bodies, and consciousness arises from this interaction; a disembodied AI would not achieve true consciousness regardless of how advanced it becomes. This is the problem of Embodied Cognition. Some philosophers propose that consciousness is a fundamental aspect of the universe, like mass or energy. If so, it might be possible to ‘tap into’ or ‘channel’ this fundamental consciousness in the creation of an AI. This is Panpsychism. A really good book on this is ‘Conscious’ by Annaka Harris.

The relationship between consciousness and intelligence is complex, and philosophers have long debated which precedes the other. Let us take the position that intelligence precedes consciousness: intelligence gives rise to consciousness once a certain threshold of cognitive complexity is crossed. Intelligence enables representation, self-modeling, planning, and so on, so only intelligent species are capable of conscious inner worlds. Intelligence precedes and produces consciousness.

-----

Language profoundly impacted human intelligence and consciousness in several ways as we progressed from hominids to humans:

- Language facilitates abstract thinking. The symbolic nature of words allows humans to represent abstract concepts and manipulate them mentally, enabling complex reasoning and imagination.

- Language enables the transfer of knowledge. Knowledge can be recorded and transmitted across people and generations, with cumulative effects on intelligence.

- Language shapes perception. The availability of words and linguistic structures influences how humans perceive, categorize, and remember experiences.

- Language aids self-reflection. Having an internal narrative and vocabulary allows deeper introspection about one’s own thoughts, emotions, and identity.

- Language facilitates social coordination. Communication allows the coordination of plans and activities, enabling more complex cooperation.

- Language standardizes shared meaning. Agreeing on common vocabularies and definitions allows groups to coordinate thinking and behavior.

- Language may influence consciousness. Some theories link language acquisition to increases in subjective experience and sense of self.

- Language expands memory. External storage of knowledge in words reduces the need to remember everything, expanding effective memory capacity.

In short, language transformed human cognition and societies. It externalized thinking processes, allowed the accumulation of knowledge, and connected individual minds into shared cognitive systems. Language and intelligence have coevolved in humans, making them inseparable and interdependent. While rudimentary intelligence preceded language, advanced general intelligence seems tied to linguistic abilities.

Can current large language models lead to artificial general intelligence? Language is a factor in developing sophisticated intelligence, which can in turn lead to consciousness. We have begun equipping language models with vast memory storage and with the ability to sense and measure their environment in order to make accurate predictions. Can these capabilities lead to understanding? Can they lead to emergent properties such as awareness of the surroundings and, eventually, self-awareness?
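As a rough sketch of the kind of system that paragraph describes, the loop below couples a language model to an external memory and an environmental sensor. The names call_language_model and read_environment_sensor are hypothetical placeholders rather than real APIs; the sketch shows only the architecture, in which the model’s output is conditioned on stored history plus a fresh observation.

```python
# Illustrative architecture only: a language model coupled to external memory
# and sensing. Both helper functions below are hypothetical placeholders.

memory = []  # external, persistent store of past observations

def read_environment_sensor():
    """Placeholder for any instrument the system can query (assumption)."""
    return "temperature=21C, light=low"

def call_language_model(prompt):
    """Placeholder for a call to a language model API (assumption)."""
    return "Prediction: conditions are stable; no action needed."

def step():
    observation = read_environment_sensor()
    memory.append(observation)
    # Condition the model on long-term memory plus the latest measurement.
    prompt = (
        "Known history:\n" + "\n".join(memory[-10:]) +
        "\nLatest reading: " + observation +
        "\nWhat do you predict?"
    )
    return call_language_model(prompt)

print(step())
```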

Philosophically, it seems highly probable that we are on the optimal track to develop AGI through large language model-based research and development.

On the technical side, the picture is not so clear or rosy. Analyzing the current state of large language models, we find that their capabilities likely fall short of producing true AGI or artificial consciousness. Still, LLMs represent exciting progress, and their rapid evolution means we cannot completely rule out such possibilities emerging in future, more optimized LLMs or in hybrid systems. The path ahead remains highly speculative.

------ This is The Beginning
