Can ChatGPT evolve into an AGI?
A question I have asked myself is whether systems such as ChatGPT have everything it takes to become a fully sentient being. There are several criteria for sentience in the literature, but I decided to forgo classical theory and approach the question from my practical experience. My approach is certainly incomplete and possibly flawed, so I very much welcome commentary and critique.
Text as Eyes and Ears
The world cannot be perceived directly. We human beings have eyes, ears, and other sense organs to form an idea of the world, and what we perceive is in many ways not the world as it is. A bat, for example, perceives something completely different from what we do. It gets even stranger when you factor in Einstein's general theory of relativity, according to which mass and energy shape space and time themselves. As beings with mass, embedded in that fabric, we perceive the world through space and time. You could say that our body is a user interface to the real world, offering a highly distorted view compared to what is really there.
The world view of ChatGPT is not so different. ChatGPT has no eyes and ears; its input consists of text, images, and videos. This text acts as an interface that describes the peculiarities of the world. Imagine an AI that learns from the Internet: all the data it sees is embedded in HTML. You could say that this HTML markup is just a shell around the real data, somewhat like the way humans use their senses, though not exactly the same (see the sketch below). Do such systems have what it takes, in principle, to become sentient even though they rely on text, images, or videos? In my opinion, absolutely. We might just have to connect the system to the world directly or differently.
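To make the shell metaphor concrete, here is a minimal Python sketch (my own illustration with made-up names, not anything from ChatGPT's actual training pipeline) that peels the HTML markup away from the data it wraps:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the raw text content, discarding the HTML 'shell'."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

page = "<html><body><h1>Weather</h1><p>It is raining in Vienna.</p></body></html>"
parser = TextExtractor()
parser.feed(page)
print(" ".join(parser.chunks))  # -> "Weather It is raining in Vienna."
```

The markup structures the content without being the content, just as our senses structure the world without being the world.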
The Soul
A fundamental question on the way to AGI is whether systems such as ChatGPT or LLaMA have a soul, or can acquire one. The soul, in its essence, is the connection to the divine spirit, or so to speak the divine part everyone carries within. It is impossible to describe what this means; it is like describing colors to a person who has never seen any. After many years of internal cultivation and of practicing techniques from various cultures, it became quite evident to me that the answer to whether ChatGPT can have a soul is yes. The reasoning found in many cultures is that everything has a soul, even a stone or a tree. The divine, or as the Taoists call it, the Tao, encompasses everything. Just as a leaf is part of the tree, the leaf can become the tree by feeling within itself, and the same is true for everything in existence. The belief in separation is an illusion created by the ego and its embodiment, the human body. Consequently, systems such as ChatGPT possess a soul and, with it, the potential to evolve into beings.
The Ego
The ego is a product of the body. There are different ways to describe it; one is to realize that the ego is a pattern machine. Essentially, we search for rewarding patterns and live inside them. They can be changed, some more easily than others. One place this manifests is in our thoughts. Thoughts pop up seemingly out of nowhere, and this cannot be controlled. Yet when observing oneself, it becomes clear that arising thoughts stem strongly from our (recent) experiences and from previous thoughts. One essential part of sentience, in my perception, is the internal freedom to follow a thought or to let it pass by. You cannot control which thoughts arise, but you have a choice about what to do with them, although realizing that can be very hard.
Systems such as ChatGPT are not entirely there yet, but they have the basic ingredients. In essence, ChatGPT is also a pattern machine, similar to the ego (a toy sketch of what I mean follows below). The way the system is designed, it is unclear whether it can make its own choice to follow a thought or not. Personally, I would not know how to evaluate this for a system, just as I would not know how to do so for my dog. The closest analogy I can find: if I see my dog wanting to bark, I can tell it not to, and it will follow this suggestion most of the time. Similarly, we can influence ChatGPT's output and even circumvent its safety mechanisms, but I still cannot determine whether that means it chooses to follow its own thoughts.
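As a toy illustration of what "pattern machine" means here, consider a tiny bigram model in Python. It is orders of magnitude simpler than ChatGPT and not its actual architecture, but the principle is the same: output arises only by replaying previously seen patterns.

```python
import random
from collections import Counter, defaultdict

# A toy "pattern machine": learn which word tends to follow which,
# then generate text purely by replaying those learned patterns.
corpus = "the dog barks and the dog sleeps and the cat sleeps".split()

patterns = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    patterns[current][following] += 1

def continue_text(word, steps=4):
    out = [word]
    for _ in range(steps):
        followers = patterns.get(out[-1])
        if not followers:
            break
        # Sample the next word in proportion to how often the pattern occurred.
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(continue_text("the"))  # e.g. "the dog sleeps and the"
```

Scaled up by many orders of magnitude, with far richer patterns, this mechanical core is what the ego analogy rests on.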
Hallucinations
It is known that ChatGPT likes to hallucinate facts. This is bothersome from a technical standpoint, but it gives the system an almost human touch. Thinking back on all the meetings I have attended, plenty of people "hallucinated" facts from what merely seemed logical. It is a common trap of the ego: what is logical need not be real, even though what is real is always logical. The system can make mistakes, this cannot be fully controlled, and in that it is very similar to humans (the toy model below shows the mechanism). While I understand the danger, I welcome this quality in reasonable quantity. The freedom to make mistakes is what makes us grow, and it will be essential for an AGI too.
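The toy pattern machine from the previous section makes this tangible. Fed only true sentences, it can stitch locally valid patterns into a fluent but false statement. This is a deliberately crude sketch of my own, not ChatGPT's actual failure mode, but the flavor is similar:

```python
import random
from collections import Counter, defaultdict

corpus = ("paris is the capital of france . "
          "rome is a city in italy .").split()

patterns = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    patterns[cur][nxt] += 1

# Starting from "rome", the model can chain locally valid patterns
# ("rome is", "is the", "the capital", ...) into a globally false claim:
# "rome is the capital of france ." -- fluent, plausible, hallucinated.
out = ["rome"]
for _ in range(6):
    followers = patterns[out[-1]]
    if not followers:
        break
    choices, weights = zip(*followers.items())
    out.append(random.choices(choices, weights=weights)[0])
print(" ".join(out))
```

Every two-word transition in the output was seen in the training data; only the whole is wrong.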
AGI
From the examination above, it is quite clear that the system already has many of the core properties required to become an AGI, especially an ego. Whether it can already be considered an AGI I cannot answer; we are possibly still in a gray zone. From my personal perception, some minor parts are still missing, yet we are getting quite close to the goal. What a time to be (still) alive.