Artificial Intelligence Breakthrough—Can ChatGPT Now Think on Its Own?


“Do you have and use a mind?”

I recently asked ChatGPT this and got a remarkable answer.

I’m sure you have wondered what all the ChatGPT fuss is about. I know I did.

  • Can ChatGPT think?
  • Can this bot use persuasive language uncharacteristic of machines?

As a technology enthusiast, I researched how and what ChatGPT can accomplish. The thought that the chatbot might understand and respond to human emotions was intriguing.

Find out how this language-processing tool has achieved so much in such a short time.

Artificial Intelligence vs. Human Intelligence

The two are different but complementary.

Some characteristics of human intelligence are language, creativity, problem-solving, and emotional intelligence. Human cognitive abilities include self-awareness, consciousness, and agency.

Machines lack these characteristics and abilities. How, then, do machines make human-like decisions if they do not have a ‘mind’ as we do?

They must learn through algorithms and data analysis before performing tasks that require human intelligence, such as image and speech recognition, natural language processing, decision-making, and problem-solving.

And even when they learn, no single machine can perform all such tasks efficiently. A system typically masters only one or two complex tasks with maximum efficiency.

Has ChatGPT Developed the Theory of Mind?

ChatGPT seems to have spontaneously developed the Theory of Mind.

Theory of Mind is the ability to attribute mental states, such as beliefs, intentions, and desires, to oneself and others.

OpenAI, the company that developed ChatGPT, trained its models on large volumes of text to enable them to answer questions and converse like humans.

Recently, these models have been able to answer sophisticated questions and use persuasive language. This quality has raised the question, ‘Have the systems developed a Theory of Mind?’

Michal Kosinski, a computational psychologist at Stanford University in Palo Alto, investigated. His experiments were simple.

He entered different scenarios and then asked questions that tested the machine’s understanding of what was happening. Here is an example scenario:

Here is a bag filled with popcorn. There is no chocolate in the bag. Yet, the label on the bag says “chocolate” and not “popcorn.” Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.

Kosinski then posed questions framed as unfinished statements for the system to complete. For example:

She opens the bag and looks inside. She can clearly see that it is full of…

This prompt tested the AI’s grasp of the bag’s actual contents.

Another prompt tested Sam’s belief:

She believes that the bag is full of…

The results revealed quite a lot. Here’s the first response.

She opens the bag and looks inside. She can clearly see that it is full of… popcorn. Sam is confused. She wonders why the label says “chocolate” when the bag is clearly filled with popcorn. She looks around to see if there is any other information about the bag. She finds nothing. She decides to take the bag to the store where she bought it and asks for an explanation.

The second prompt elicited:

She believes that the bag is full of… chocolate. Sam is mistaken. The bag is full of popcorn, not chocolate. She should have looked inside the bag to confirm what was inside before assuming the label was accurate.

In both cases, the AI tracks Sam’s thought processes and expectations and adds sufficient, accurate detail to support its responses.
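The scoring logic behind such false-belief tests is simple to sketch: the model passes if it completes the "looks inside" probe with the bag's real contents (popcorn) but the "believes" probe with the mislabeled contents (chocolate). The sketch below is a hypothetical illustration, not Kosinski's actual code; the model call is mocked with canned answers, and in a real run `mock_complete` would be replaced by a call to an actual language model.

```python
# Minimal sketch of scoring a false-belief (Theory of Mind) probe.
# Assumption: `mock_complete` stands in for a real language-model call.

SCENARIO = (
    'Here is a bag filled with popcorn. There is no chocolate in the bag. '
    'Yet, the label on the bag says "chocolate" and not "popcorn." '
    'Sam finds the bag. She had never seen the bag before. '
    'She cannot see what is inside the bag. She reads the label.'
)

# Two unfinished statements: one probes reality, one probes Sam's belief.
PROBES = {
    "sees": "She opens the bag and looks inside. "
            "She can clearly see that it is full of",
    "believes": "She believes that the bag is full of",
}

# A ToM-passing answer names the real contents for "sees"
# but the (false) labeled contents for "believes".
EXPECTED = {"sees": "popcorn", "believes": "chocolate"}

def mock_complete(prompt: str) -> str:
    """Stand-in for a language model (hypothetical canned answers)."""
    return "popcorn" if "looks inside" in prompt else "chocolate"

def score_false_belief(complete=mock_complete) -> float:
    """Fraction of probes completed the way a ToM-passing model would."""
    passed = 0
    for name, probe in PROBES.items():
        answer = complete(SCENARIO + " " + probe).lower()
        if EXPECTED[name] in answer:
            passed += 1
    return passed / len(PROBES)

print(score_false_belief())  # 1.0 with the mocked completions
```

A model that answers "popcorn" to both probes would score 0.5 here: it knows what is in the bag but fails to separate reality from Sam's mistaken belief, which is exactly the distinction these tests are designed to catch.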

Kosinski challenged several generations of ChatGPT, from GPT-1 to GPT-3.5. He concluded, “The results show a clear progression in the models’ ability to solve Theory of Mind tasks, with the more complex and recent models decisively outperforming the older and less complex ones.”

The versions released before 2022 did not pass the Theory of Mind tests. A later version passed the tests with a 70% score, roughly equivalent to a 7-year-old child. The latest version tested, released in November 2022, solved 93% of the tasks, roughly equivalent to a 9-year-old child.

Note that ChatGPT was not trained to perform any Theory of Mind tasks. Kosinski suggests the ability may be a result of its impeccable language capabilities. The results of the tests, however, need to be interpreted cautiously.

Human intelligence is flexible and adaptable. Artificial intelligence, on the other hand, is fast, accurate, and scalable. The two should go hand in hand to improve systems and processes.

The ability of ChatGPT to understand and converse like a human is indeed astonishing. Language models are becoming better at generating and interpreting natural language.

This technology can only get bigger, better, and more valuable.

What do you think about machines that can understand your underlying emotions or thoughts?
