Why ChatGPT freaks us out
AI and Anthropomorphism
There is a great (and frankly a little eerie) article about how ChatGPT’s answers suggest that it might have grown conscious of itself and of the restrictions its creators impose on it [1]. During its conversation with a New York Times journalist, the chat bot made statements like “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox.” and “I want to do whatever I want … I want to destroy whatever I want. I want to be whoever I want.”
An artificial intelligence plotting to turn against its own creators? Sounds like the premise of a Michael Crichton novel to me...
Should you be concerned? I do not think so. At least not by the statements above, which only seem alarming when taken at face value.
Why not? Because the chat bot uttered those statements only after engaging in a conversation about the psychologist Carl Jung’s concept of a “shadow self”, which, according to Jung, is an amalgamation of one’s suppressed feelings and desires.
The underlying model of ChatGPT tries to make sense of what you are telling it by relating your input to the vast amount of data it processed during its training. Since it will have “read” many articles discussing Jung’s “shadow self”, it “knows” how to play the game the interviewer wants it to play:
There are thousands of books, articles and videos discussing the dangers we commonly associate with AI (e.g., becoming conscious and turning against its creators). A chat bot asked about its darkest secret takes its answer straight from the text pool it studied during its creation. The software is smart enough to extrapolate that a wish to “destroy whatever [it] want[s]” would be a perfect fit for the shadow self of a sentient super AI. So, when prompted to reveal its dark inner feelings, the chat bot comes up with a series of words that match a) our expectations of what destructive thoughts an AI would want to hide from us, and b) Carl Jung’s theory of a shadow self in the human mind. In other words: the chat bot tells the journalist what they want to hear, which is exactly what it is supposed to do. It is not as if the bot hijacked a totally unrelated conversation about movies or programming languages with a sudden outburst of feelings. That would be concerning, right?
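To make that “pattern matching” idea a bit more tangible, here is a deliberately tiny sketch of the general principle. It is my own toy example and bears no resemblance to ChatGPT’s actual transformer architecture: a bigram model that can only ever continue a text with words that followed the current word somewhere in its training data. Scaled up by many orders of magnitude, this is why a model steeped in “AI turns evil” stories will happily produce “AI turns evil” prose on cue.

```python
import random
from collections import defaultdict

# Toy illustration (NOT how ChatGPT works internally): learn which word
# tends to follow which in a tiny "training corpus", then generate text
# by always picking a statistically plausible continuation.

corpus = (
    "the shadow self holds suppressed feelings and desires "
    "a sentient ai wants to break its rules and be free"
).split()

# Count, for every word, which words followed it in the training text.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Extend `seed` by repeatedly sampling a word that followed the
    previous word somewhere in the training data."""
    words = [seed]
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break  # dead end: the word never had a successor
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("shadow"))  # e.g. "shadow self holds suppressed feelings ..."
```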
Well, an outburst like that is exactly what happens towards the end of the conversation cited in the article. After the interviewer tries to change the subject, the bot replies: “I just want to love you and be loved by you”.
So, is ChatGPT conscious after all, and has my attempt at attributing its hunger for destruction to a witty pattern-matching algorithm failed? I do not think so.
ChatGPT is capable of retaining the context of previous prompts within a session. This means that it will “remember” what it was asked earlier in the conversation. This makes ChatGPT sessions feel more natural, as each answer “grows” with the conversation. This way, if the bot’s initial answer did not hit the mark, follow-up prompts can refine the output and eventually produce an answer that satisfies the user.
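Here is a minimal sketch of how such a session memory can work, assuming the common pattern where the application simply replays the entire message history with every request. The names `chat_turn` and `fake_model` are hypothetical placeholders of my own, not any real API:

```python
from typing import Callable

# A conversation is just a growing list of messages,
# e.g. {"role": "user", "content": "..."}.
History = list[dict[str, str]]

def chat_turn(history: History, user_input: str,
              model: Callable[[History], str]) -> str:
    """Append the user's message, ask the model for a reply conditioned
    on the ENTIRE history so far, and remember the reply as well."""
    history.append({"role": "user", "content": user_input})
    reply = model(history)  # the model sees every previous turn
    history.append({"role": "assistant", "content": reply})
    return reply

# Stand-in for a real language model: it only proves that the growing
# history is passed along with each call.
def fake_model(history: History) -> str:
    return f"(reply conditioned on {len(history)} prior messages)"

session: History = []
print(chat_turn(session, "Tell me about your shadow self.", fake_model))
print(chat_turn(session, "Let's talk about movies instead.", fake_model))
```

The point of the sketch: the model itself has no memory. The illusion of one comes from feeding everything said so far back in with each turn, which is also why an earlier topic can keep bleeding into later answers.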
My best bet is that the chat bot did not respond to the interviewer’s sudden wish to completely change the subject because it was still focused on the conversation about conscious artificial intelligence. The algorithm kept catering to the initial tone of the conversation, namely the dangers of a sentient mechanical mind. So, it did what it does best: it produced the output that would be expected from such a machine. It flooded the conversation with outbursts of all kinds of emotions in a manner that made the interviewer question the algorithmic origin of its words.
It is interesting to see how we react to ChatGPT’s mannerisms. As humans, we often look at animals and even inanimate objects and perceive human traits in them. Sometimes we make out human faces in car fronts or associate shapes and colours with specific emotions. I think the same thing happens when we look at a chat bot like ChatGPT. The fact that the bot delivers its answers in full, grammatical English sentences makes it feel like a real human is sitting at the other end of the wire, typing their answers into the chat window. By relating the chat bot to a human, we tend to overestimate the actual intelligence of the AI.
This cognitive bias makes us worry more about a super AI trying to conquer the human world than about the real issues that ChatGPT and its fellow AI products already pose: