Methinks We Think Machines Think Too Much
John Willis
As an accomplished author and innovative entrepreneur, I am deeply passionate about exploring and advancing the synergy between Generative AI technologies and the transformative principles of Dr. W. Edwards Deming.
Earlier today, there was a thought-provoking discussion on LinkedIn about a podcast featuring Lex Fridman and Edward Gibson, in which they discussed how well LLMs understand the Monty Hall problem. There is an ongoing debate about LLMs and their thinking abilities: some believe they are merely pattern matching, while others question their cognitive processes and how they arrive at answers. However, using the Monty Hall problem as an example is a flawed argument. The Wikipedia page on the subject provides an accurate description of the problem. In this example, the LLM does use pattern matching, but pattern matching with "attention."
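For readers who want to check the famously counterintuitive answer themselves, here is a minimal Monte Carlo sketch of the Monty Hall problem (the function names are mine, not from the podcast or the Wikipedia page). Switching should win about 2/3 of the time, staying about 1/3:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and is not the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch: bool, trials: int = 100_000) -> float:
    """Estimate the probability of winning under a fixed strategy."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

print(f"switch: {win_rate(True):.3f}")   # ~0.667
print(f"stay:   {win_rate(False):.3f}")  # ~0.333
```

The simulation makes the point of the debate concrete: the correct answer is a matter of conditional probability, not intuition, which is exactly why it is such a tempting test question for LLMs.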
I attempted to deceive the LLM (ChatGPT) by asking the question with different variables (see Figure 1).
ChatGPT wasn't tricked and still associated my question with the Monty Hall problem. Anyone who knows me well knows I am persistent about these things, so I decided to take a different approach: I modified the classic short story, The Lady, or the Tiger?. This time, ChatGPT had no existing pattern to match against.
However, when I prompted it with my probability question, ChatGPT still responded correctly.
I may not know enough to answer questions regarding GenAI, AGI, and ASI, but I'm confident that HAL won't be refusing to open our pod bay doors anytime soon.