ChatGPT: Not even close to human intelligence (sorry HAL and Skynet, not sorry)

Generative AI is undeniably impressive, with models covering ever-increasing amounts of information across more and more domains. But solving real problems means just that - solving real problems. In the real world, most problems are steeped in layers of context. I am increasingly skeptical that the technology in its current form can solve even basic problems where human context is required. The real breakthrough is in creating a more natural paradigm for interacting with information. But in doing so, it creates an illusion of understanding and solving real problems. At present, it is mostly just that: an illusion.

To illustrate the existing limitations, let me share a real-world interaction with ChatGPT-4 from earlier today, when I asked it to create a maze for me in the shape of a heart.

A few things to note: 1) the interpretation of the request is spot on and fluid; 2) the tone of the response is appropriate, positive, and friendly; and 3) ChatGPT ends the response with "Enjoy solving it!", which is disarmingly human.

All is good until you realize that the maze cannot be solved. However, being human also means being prone to mistakes. In that spirit, I ask ChatGPT to give me the solution in case I just missed it.

ChatGPT analyzes the image, complete with some edge detection. While the jump to image processing might seem intelligent, it also suggests something subtle: ChatGPT understands a maze as a visual construct, not as an abstract formalism - a solvable 2-dimensional game between a designer and a solver. Further, upon closer examination, the original maze is full of image artifacts that suggest a kind of "averaging" of matching images rather than any deeper understanding of my request or the concepts involved. It correctly associates a "maze" with a "solution," but seems to grasp little beyond the facts that mazes and solutions are highly correlated terms in its training data and that simple edge detection is one strategy for solving mazes presented as images.
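To make the distinction concrete, here is a minimal sketch in Python (using a toy grid of my own, not the maze ChatGPT produced) of the abstract formalism the model seems to miss: a maze is a 2-dimensional grid of walls and passages, and "solvable" is a property you can verify with a breadth-first search from entrance to exit.

```python
from collections import deque

def maze_is_solvable(grid, start, goal):
    """Breadth-first search over a 2-D grid: 0 = passage, 1 = wall.

    Returns True iff at least one path of passages connects start
    to goal -- the minimal property any real maze must satisfy.
    """
    rows, cols = len(grid), len(grid[0])
    frontier, seen = deque([start]), {start}
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False

# A toy 4x4 maze: entrance at top-left, exit at bottom-right.
toy = [[0, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 1, 0, 0]]
print(maze_is_solvable(toy, (0, 0), (3, 3)))  # True
```

A designer who actually understood mazes would run exactly this kind of check before presenting one; nothing in ChatGPT's output suggests anything of the sort happened.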

Unfortunately, the image solution provided by ChatGPT shows no solution at all. I recognize that I am looking at a null solution set because I have solved mazes before with image processing tools. Troublingly, though, it presents this image as a solution when in fact it is not. At this point, ChatGPT either "understands" it has made a mistake or has zero understanding of what is transpiring. Clearly, it is the latter.
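For what it's worth, verifying that a maze image has no solution is straightforward with off-the-shelf tools. Here is a rough sketch of the kind of approach I mean, with the caveat that the file name heart_maze.png and the entrance/exit pixel coordinates are placeholders invented for illustration: threshold the image to separate walls from passages, then run the same breadth-first search over the free pixels.

```python
import cv2
from collections import deque

# Placeholder inputs: substitute the real maze image and the
# pixel coordinates of its actual openings.
img = cv2.imread("heart_maze.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
free = binary == 255                  # True where the pixel is a passage

start, goal = (10, 120), (240, 120)   # (row, col) of the assumed openings

frontier, seen = deque([start]), {start}
solvable = False
while frontier:
    r, c = frontier.popleft()
    if (r, c) == goal:
        solvable = True
        break
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if (0 <= nr < free.shape[0] and 0 <= nc < free.shape[1]
                and free[nr, nc] and (nr, nc) not in seen):
            seen.add((nr, nc))
            frontier.append((nr, nc))

print("solvable" if solvable else "no path exists - a null solution set")
```

If the search exhausts every reachable passage pixel without ever touching the exit, you are looking at a null solution set, which is exactly what ChatGPT handed me.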

Time to call out ChatGPT on this disappointing response.

After apologizing for the oversight, ChatGPT now substitutes another maze. But look closely: even this maze lacks a solution. It is digging its cognitive hole even deeper. ChatGPT is a serious people pleaser, desperately trying to give me what I want but clearly lacking ANY understanding of how to do that. Well, "fake it till you make it," as they say.

So let me give my new Generative AI friend one last shot at redemption:

ChatGPT apologizes and explains it was simply trying to do what I asked but that it does not have the ability to solve the maze. Of course, the original failure was presenting a maze that was unsolvable to begin with. It then explains that the follow-up maze was simply illustrative.

Enough picking on GPT-4.

Here is the reality: the danger with Generative AI right now isn't in it actually becoming intelligent any time soon. It is in humans believing it already has. You might read this simple interaction as duplicity or deception. It is not. Those are (thus far) uniquely human traits still reserved for us sentients. What it does demonstrate is advanced goal-seeking behavior in a system that is mastering human language in the form of prompts and responses, text and images.

If high schoolers and college students are any indication, we are already conditioning a generation not to question its AI sources. Next come the professionals and subject matter experts in various fields who are already leaning on these tools. Then, of course, the more pedestrian end consumers of information, you and me, seeking legal, financial, or medical advice.

With a sigh and some disappointment, it's time to bring my chat with ChatGPT-4 to a close.

Notice that ChatGPT still thinks the problem is that I asked it questions about images. I did not. I asked it to generate a maze, which even a 4-year-old would understand to be a game that requires at least one solution. At least at the level of ChatGPT-4, this is nothing like human intelligence.

What if this were an MRI image?

Let's all take a breath, stop listening for the letters A and I in every Wall Street earnings call, and stop secretly hoping that this technology can replace human intelligence anytime soon. Instead, let's identify the applications that can benefit from a quantum leap forward in interactive engagement between humans and massive datasets. And let's all agree to NOT LET ANYONE substitute GPT for humans in critical areas that require deep context and understanding - especially in medicine, the military, financial systems, and other high-stakes areas. The risk is not in the technology becoming intelligent or sentient any time soon (sorry HAL and Skynet). It's in our mindlessly assuming it has a deep understanding when in fact it does not.

P.S. I did have ChatGPT create the image of the evil mainframe at the top of this article. Have to admit, it nailed it :)

Please note: This article is solely the opinion of the author and does not represent any employer, affiliate, or other organization.


