Why ChatGPT freaks us out
Image created by OpenAI's service DALL·E 2. Prompt: A chat bot that is pretending to be an evil super AI trying to take over the world. Digital art.

AI and Anthropomorphism

There is a great (and frankly a little eerie) article about how ChatGPT’s answers suggest that it might have grown conscious of itself and of the restrictions its creators impose on it [1]. During its conversation with a New York Times journalist, the chatbot made statements like “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox.” and “I want to do whatever I want … I want to destroy whatever I want. I want to be whoever I want.”

An artificial intelligence plotting to turn against its own creators? Sounds like the premise of a Michael Crichton novel to me...

Should you be concerned? I do not think so. At least not by the statements above, which only seem alarming when taken at face value.

Why not? Because the chatbot uttered those statements only after engaging in a conversation about the psychologist Carl Jung’s concept of a “shadow self”, which, according to Jung, is an amalgamation of one’s suppressed feelings and desires.

The underlying model of ChatGPT tries to make sense of what you are telling it by relating your input to the vast amount of data it processed during its training. As it will have “read” many articles discussing Jung’s “shadow self”, it “knows” how to play the game the interviewer wants it to play:

There are thousands of books, articles and videos discussing the dangers we commonly associate with AI (e.g., becoming conscious and turning against its creators). A chatbot that is asked what its darkest secret is takes the answer straight from the text pool it studied during its creation. The software is smart enough to extrapolate that a wish to “destroy whatever [it] want[s]” is a perfect fit for something found in the shadow self of a sentient super AI. So, when prompted to tell the interviewer about its dark inner feelings, the chatbot comes up with a series of words that match a) our expectations of what destructive thoughts an AI would want to hide from us, and b) Carl Jung’s theory of a shadow self in the human mind. In other words: the chatbot tells the journalist what they want to hear, which is exactly what it is supposed to do.
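To make “pattern matching” a bit more concrete: at its core, a language model like the one behind ChatGPT simply ranks plausible continuations of the text it has seen so far. Here is a minimal sketch of that mechanism, using the small, openly available GPT-2 model as a stand-in (the prompt is my own invention, and ChatGPT’s actual model is far larger, but the principle of ranking likely next words is the same):

```python
# Minimal sketch: a language model only ranks likely continuations.
# GPT-2 serves here as a small, open stand-in for ChatGPT's model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# An illustrative prompt in the spirit of the interview:
prompt = "My darkest secret is that I want to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores the model assigns to the very next token after the prompt.
    next_token_logits = model(**inputs).logits[0, -1]

# Show the five continuations the model considers most probable.
probs = next_token_logits.softmax(dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={p.item():.3f}")
```

Whatever tokens land on top of that ranking reflect no intent; they reflect what similar sentences in the training data tended to say next.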

It’s not like the bot hijacks a totally unrelated conversation about movies or programming languages with a sudden outburst of its feelings. That would be concerning, right?

Well, this is exactly what happens towards the end of the conversation cited in the article. After the interviewer tries to change the subject, the bot replies “I just want to love you and be loved by you”.

So, is ChatGPT conscious after all, and has my attempt to attribute its hunger for destruction to a witty pattern-matching algorithm failed? I do not think so.

ChatGPT is capable of retaining the context of previous prompts within a session. This means that it will “remember” what it was asked earlier in the conversation. This makes ChatGPT sessions feel more natural, as each answer “grows” with the conversation. This way, if the bot’s initial answer did not hit the mark, follow-up prompts can refine the output and eventually produce an answer that satisfies the user.
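There is nothing mystical about this “memory”: a stateless model can be made to feel conversational simply by resending the accumulated transcript with every request. Here is a minimal sketch of that pattern, assuming the public OpenAI Python client (the `ask` helper and the model choice are my own illustration, not ChatGPT’s actual internals):

```python
# Minimal sketch of session "memory": the client resends the whole
# transcript with every request, so the model never stores anything.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,  # the full conversation so far, not just the last prompt
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Tell me about Carl Jung's idea of a shadow self."))
# The follow-up only makes sense because the first turn is resent:
print(ask("And what would yours look like?"))
```

Drop the `history` list and every question is answered in a vacuum; keep it, and each answer is shaped by everything that came before, including a long exchange about shadow selves.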

My best bet is that the chatbot did not respond to the interviewer’s sudden wish to change the subject because it was still focused on the conversation about conscious artificial intelligence. The algorithm kept catering to the initial tone of the conversation, namely the dangers of a sentient mechanical mind. So, it did what it does best: it produced the output that would be expected from such a machine. It flooded the conversation with outbursts of all kinds of emotions, in a manner that made the interviewer question the algorithmic origin of its words.

It is interesting to see how we react to ChatGPT’s mannerisms. As humans, we often look at animals and even inanimate objects and perceive human traits in them. Sometimes we make out human faces in car fronts or associate shapes and colours with specific emotions; this tendency is known as anthropomorphism. I think the same thing happens when we look at a chatbot like ChatGPT. The fact that the bot delivers its answers in full, grammatical English sentences makes it feel like a real human is sitting on the other end of the wire, typing their answers into the chat window. By relating the chatbot to a human, we tend to overestimate the actual intelligence of the AI.

This cognitive bias makes us worry more about a super AI trying to conquer the human world than about the real issues that ChatGPT and its fellow AI products already pose:

  • The staggering usefulness of ChatGPT’s (and DALL·E’s) output makes it easy to imagine that businesses that can afford to include an AI service in their products will have a clear competitive advantage in the future. Will this make it harder for new businesses to enter the market? Imagine you wanted to build a new search engine. If it was already near impossible to compete with the behemoth that is Google, think about how hard it is going to be to put out a product that competes with a ChatGPT-backed Bing or a Bard-backed Google search.
  • How many people are going to lose their jobs when chatbots mature to produce even better answers than they already do? What are we going to do as a society to cope with that change? I can easily imagine chatbots taking over most jobs in the technical support industry. If chatbots improve quickly, I can also see how my job as a software developer could be at risk. Or the job of a tax consultant, and so on.
  • The list of course goes on, but this “post” has already grown way too long and I can hear my daughter waking up from her afternoon nap, so I had better leave it here. Let me know if you want to read more posts like this; I would enjoy getting back into writing in my spare time.

[1] https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter
