ChatGPT, second thoughts
One of the major sources of criticism of ChatGPT is that it makes factual mistakes, gets calculations wrong, and seems to struggle with simple logical reasoning. We could perform a thorough syntactic/semantic analysis of the tool's performance, but let's keep it simple: there are things it does right and things it does wrong. But the things it does right are truly amazing, and impossible to do without true "understanding".
If I am not mistaken, the co-reference problem (knowing which entity "this" or "that" refers to) is also not a problem for ChatGPT. According to some linguists, solving this problem would be really hard and would amount to solving intelligence altogether. Apparently the attention mechanism of the transformer was all we needed to overcome this hurdle. This hints that the things ChatGPT currently does poorly could also fall within its reach in a matter of years, if not months.
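As a rough illustration of that mechanism, here is a minimal sketch of the scaled dot-product attention introduced in the transformer paper, written in plain NumPy. The shapes, the random vectors, and the choice to feed the same matrix as queries, keys and values (self-attention) are illustrative assumptions, not ChatGPT's actual implementation; the point is simply that every token can weigh information from every other token, which is what makes resolving references like "this" or "that" tractable.

```python
# Minimal, illustrative sketch of scaled dot-product attention (Vaswani et al., 2017).
# Values and shapes are toy stand-ins, not a real model.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key; the softmax weights decide how much of
    each value flows into the output. This is how a token such as "it" can draw
    information from the entity it refers to."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

# Toy example: 4 tokens, 8-dimensional embeddings (random stand-ins).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
output, attn = scaled_dot_product_attention(x, x, x)          # self-attention
print(attn.round(2))  # each row sums to 1: how much each token attends to the others
```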
Part of the criticism is also due to a common misconception about AI, which leads critics to highlight what AI cannot do (yet). We should remind ourselves that AGI means Artificial General Intelligence, not Artificial Genius-level Intelligence. And there are many tasks that ordinary people are not good at. I remember being at a pizzeria in Milano and, wanting a smaller slice of pizza, saying: "Can I have 3/4 of that?" The lady stared at me, speechless, until I added "a bit less, please". Many people are not familiar with mathematical concepts and operations, which doesn't prevent them from being functional members of society.
ChatGPT struggles with such tasks too, probably because they haven't been "explained" with enough data. ChatGPT was trained on a huge text corpus taken from the internet, with the objective of predicting the next word in the data stream. If we applied this method to human beings, we would take a newborn baby and have them read the entire internet, with no supervision. The fact that this learning strategy succeeds at all is already incredible.
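To make that objective concrete, here is a minimal sketch of next-word prediction as a training signal. The tiny vocabulary and the toy_model function (which simply returns a uniform distribution) are hypothetical stand-ins for illustration; a real system computes essentially the same cross-entropy loss with a large transformer over billions of tokens, and training pushes the loss down by making the true next word more probable.

```python
# Minimal, illustrative sketch of the next-word-prediction objective.
# The vocabulary and toy_model are hypothetical placeholders, not a real LM.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "<unk>"]
token_to_id = {w: i for i, w in enumerate(vocab)}

def toy_model(context_ids):
    """Stand-in for a language model: returns a uniform distribution over the
    vocabulary. A trained model would concentrate probability on likely next words."""
    return np.full(len(vocab), 1.0 / len(vocab))

def next_word_loss(text):
    ids = [token_to_id.get(w, token_to_id["<unk>"]) for w in text.split()]
    losses = []
    for t in range(1, len(ids)):
        probs = toy_model(ids[:t])              # predict token t from tokens before it
        losses.append(-np.log(probs[ids[t]]))   # cross-entropy for the true next word
    return float(np.mean(losses))               # training minimizes this average

print(next_word_loss("the cat sat on the mat"))  # ~1.79 nats for the uniform toy model
```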
Some say that ChatGPT is just another software tool in our repertoire ("ChatGPT bot is causing panic now – but it'll soon be as mundane a tool as Excel", argues John Naughton in the Guardian). I disagree. A tool is an augmentation of human skills that needs an intelligent agent to operate it. ChatGPT has the potential to become that agent.
There are of course many technical hurdles that need to be addressed along the way, but ChatGPT may represent for AI what the Flyer was for artificial flight. The first airplane built by the Wright brothers ushered humanity into the era of mechanical flight. There is of course a big difference between the Flyer and the Airbus A380 or the B-2 bomber, but the key ingredients of an airplane were assembled for the first time on December 17th, 1903, on the dunes of Kitty Hawk, North Carolina.
Finally, let's give Caesar what belongs to Caesar. When talking about AI, the so-called "Godfathers" (Hinton, LeCun, Bengio and Hassabis) are always cited, but I think it is right to also give credit to the next-generation heroes. Ashish Vaswani is the first author of the paper that proposed the "transformer" neural network, the model that revolutionized the field of Natural Language Processing. Ilya Sutskever, OpenAI's co-founder and chief scientist, is (in the words of Pieter Abbeel, a top AI researcher) "the man who made AI work". Their names, along with those of many others (modern science is of course a joint effort), will probably land in history books. The question is how long history will last.