The parrot within us
Around five years ago, self-driving car mania promised a revolution in transport. Today I am still the one in the driver’s seat. Marketing is filled with hyperbole, desperately trying to catch attention. ChatGPT and similar language models are a remarkable accomplishment of training artificial neural networks. Their grandfather, the recurrent neural network, was state-of-the-art in the 1990s. The biggest change occurred not in technique but in the scale of training data and in efficient computation with graphics processing units (GPUs). Nevertheless, these models cannot reason and do not understand logic. They are based on training observations. They are rear-view mirror personalities. I still expect a long wait until we see real efficiency gains in the form of faster GDP growth, for example.
This is not to say that ChatGPT & Co are useless. They certainly represent one of the many facets of being “intelligent”. Speaking and articulating ideas is done intuitively – like Kahneman’s heuristic “System 1”. Hence it is no surprise that speaking can be automated with tools like ChatGPT. Indeed, I suspect that its statistical procedure of finding one word after another closely resembles the process our mind performs when we speak. But it is speaking without understanding. I have no doubt that “understanding” (whatever this means) will be automated at some point, too.
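To make the word-after-word idea concrete, here is a toy sketch of statistical next-word prediction. It uses a simple bigram frequency model, not the transformer neural networks behind ChatGPT, but it illustrates the same principle: each word is chosen purely from observed word sequences, with no notion of truth or meaning. The corpus and all names are illustrative.

```python
from collections import defaultdict, Counter

# A tiny toy corpus; real models train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically most frequent next word, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate text one word after another, with no check for truth.
word, sentence = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

The generator happily produces fluent-looking sequences because they are statistically plausible, which is exactly why such a procedure can also produce confident-sounding fabrications.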
The threat of automation is a symbol of our own shortcomings: the time we spend creating something new is usually very small compared to all the repetitiveness of daily routine. Ultimately, ChatGPT & Co will force us to focus on what has brought us here in the first place: the desire and ingenuity to build better and better machines.
Educational institutions that are afraid of ChatGPT do not understand what they are supposed to do. Many just focus on memorization and on teaching well-known techniques. In that case graduates are rear-view mirror personalities, just as ChatGPT is a parrot on steroids. Both can articulate sentences very well without any understanding of how the knowledge they are speaking about was created, or how to develop and improve it in the future.
A short experiment demonstrates the shortcomings of ChatGPT. I asked it about some publications by Nouriel Roubini:
The language sounds very convincing. The problem: I cannot find the last publication anywhere. So I asked for its source:
I searched the journal in question and could not find the cited Roubini publication there either. I confronted ChatGPT with my problem:
So, ChatGPT is a great tool for fabulation and hallucination: “the fusion of the everyday, fantastic, mythical, and nightmarish that blur traditional distinctions between what is serious or trivial, horrible or ludicrous, tragic or comic” (Abrams 2014). This is partly because ChatGPT has been trained on all kinds of sources, and it can be improved. But the experiment also shows its main defect: it still lacks a mechanism to check whether something is true or pure imagination.