Demystifying ChatGPT, or the reason why I'm not a Math Prof
Recently I wrote about my attempts to assess ChatGPT's capabilities in logic (see How I taught an AI logic | LinkedIn). All in all I was quite impressed by ChatGPT's performance. My concluding question was whether ChatGPT really creates new insights based on a conversation, or whether it just re-phrases words. The former would mean that it really shows a kind of understanding, the latter that it is merely talking.
Now it seems that the latter is true and, more importantly, that I approached the task wrongly: I phrased my questions in a way that was too close to existing literature. Edmund Weitz, a math professor in Hamburg, did it much better (see ChatGPT und die Logik - YouTube; the narration of the video is in German, but the whole conversation with ChatGPT is written in English).
Among other things, he wanted to find out whether ChatGPT understands the mathematical concept of "almost all". A property holds for almost all elements of an infinite set if only finitely many elements lack it; for example, almost all natural numbers are larger than one billion. Prof. Weitz renamed the property "almost all" to the made-up word "snirky" and asked whether the numbers not divisible by three form a snirky set. ChatGPT didn't grasp this properly and also failed at a number of other similar exercises.
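To pin the concept down, here is one standard way to formalize it (my own sketch, not a quote from Prof. Weitz's video):

```latex
% "Snirky" as a shorthand for "contains almost all natural numbers":
% a set S is snirky iff its complement in the naturals is finite.
\[
  S \subseteq \mathbb{N} \text{ is snirky} \iff \mathbb{N} \setminus S \text{ is finite}
\]
% Example from the article: S = \{\, n \in \mathbb{N} : n > 10^{9} \,\} is snirky,
% since only the finitely many numbers 0, 1, \dots, 10^{9} are excluded.
% Prof. Weitz's test case: S = \{\, n \in \mathbb{N} : 3 \nmid n \,\} is NOT snirky,
% because the complement (the multiples of three) is infinite.
```

So the expected answer to Prof. Weitz's question is "no", and it is exactly this small inference step that ChatGPT stumbled over.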
So what is our takeaway? ChatGPT performs quite impressively when analyzing existing text and when phrasing new text consistent with what it has seen so far. While this mimics a human counterpart in a conversation quite well, it does not mean that ChatGPT develops an understanding of the concepts behind those words, let alone creates new ideas from such concepts.
That would be the next layer of abstraction. I guess there is still a long way to go for AI developers.
Comment from a reader (a pricing analyst): You say that you wouldn't want to be a math professor, but imagine being the first to train the ability to prove a theorem in field theory with no prior proof. NN engineers are teachers in disguise. But I suppose there is no reasonable way to train abstraction (or none that I am aware of). One can easily point to the missing ability to abstract as evidence against the theory that complexity breeds consciousness, but considering the explosive growth of neural networks I would not be surprised to see this point become less pointed.