The Google Super Bowl Cheese Mishap Got Me Thinking: Why AI—and Education—Needs Critical Thinking, Not Just Fact-Checking
TL;DR:
Google’s AI-powered tool, Gemini, falsely claimed that Gouda makes up 50-60% of global cheese consumption—a made-up fact confidently presented as truth. But the real problem isn’t just AI making mistakes; it’s how AI (and humans) process and accept information. AI mirrors the way we’ve been taught to think—focusing on "right" answers rather than questioning sources, assumptions, and perspectives. This isn’t just a tech issue; it’s an education issue. If we want AI to be truly intelligent, we need models that reason transparently—so we can critically examine their conclusions, just like we should with everything we’re taught.
The Internet is Not Always Factual—And AI Doesn’t Know That
We like to think of AI as an all-knowing, truth-detecting machine. But here’s the catch: AI doesn’t actually "know" anything—it just predicts likely answers based on patterns in data. And when that data comes from the internet, things get messy.
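To make this concrete, here is a deliberately tiny sketch, assuming a toy three-line corpus and a frequency-only "model" (nothing like Gemini's real architecture): a system that returns the most common continuation will repeat a popular claim with full confidence, whether or not it is true.

```python
# A minimal sketch (not Gemini's actual implementation) of why a language
# model can state a false "fact" with total confidence: it simply returns
# the most probable continuation seen in its training data.

from collections import Counter

# Hypothetical toy corpus: if enough web pages repeat a claim,
# the pattern dominates, whether or not it is true.
corpus = [
    "gouda makes up 50-60% of global cheese consumption",
    "gouda makes up 50-60% of global cheese consumption",
    "cheddar is the most consumed cheese worldwide",
]

def predict_continuation(prompt: str) -> str:
    """Return the most frequent continuation of the prompt in the corpus.

    There is no notion of truth here, only of frequency: the "model"
    knows nothing, it ranks patterns.
    """
    continuations = Counter(
        text[len(prompt):].strip()
        for text in corpus
        if text.startswith(prompt)
    )
    if not continuations:
        return "(no pattern found)"
    best, _count = continuations.most_common(1)[0]
    return best

print(predict_continuation("gouda makes up"))
# -> "50-60% of global cheese consumption", stated as confidently
#    as any correct answer would be.
```

The point of the toy: popularity and truth are different signals, and a pure pattern-matcher only measures the first.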
The Google cheese mishap wasn’t just a funny mistake. It was a perfect example of how knowledge is formed and spread—not through perfect accuracy, but through confidence, consensus, and convenience.
Education’s Role: Teaching Right vs. Wrong Instead of Critical Thinking
But here’s the bigger issue—AI doesn’t critically evaluate information because we don’t teach children to do it either.
From an early age, education trains us to think in binaries:
✅ Right vs. ❌ Wrong
✅ Correct answer vs. ❌ Incorrect answer
✅ Pass vs. Fail
Schools prioritize memorization of "the right answer" rather than teaching students how to analyze, question, and challenge narratives. The result?
If AI is reflecting our way of thinking, then it's no surprise that it confidently spreads misinformation in the same way many humans do.
Beyond Misinformation: The Real Problem
Even when information isn’t outright false, it is often biased, incomplete, or framed to serve a particular agenda.
Why AI Needs Critical Thinking, Not Just Fact-Checking
We’ve trained AI to retrieve facts, but we haven’t trained it to think critically about how those facts are shaped, distorted, and sometimes fabricated. The challenge isn’t just detecting misinformation—it’s understanding why different perspectives exist, what agendas they serve, and how knowledge evolves.
To make AI truly useful, we need it to go beyond "true or false" and develop nuance navigation skills: weighing sources, recognizing why different perspectives exist, and flagging what agendas they may serve.
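As a thought experiment, here is a minimal sketch of what that could look like, assuming a toy list of claims (all source names and answers below are hypothetical): instead of collapsing conflicting sources into one confident answer, the system surfaces the disagreement.

```python
# A hedged sketch of "nuance navigation": instead of returning one answer
# as fact, surface where sources agree, where they conflict, and who says what.
# All source names and claims here are hypothetical illustrations.

from collections import defaultdict

# Hypothetical claims gathered from different kinds of sources.
claims = [
    {"source": "industry-blog.example", "kind": "commercial",
     "answer": "Gouda dominates global consumption"},
    {"source": "trade-statistics.example", "kind": "statistical",
     "answer": "Cheddar and mozzarella lead by volume"},
    {"source": "forum-post.example", "kind": "anecdotal",
     "answer": "Gouda dominates global consumption"},
]

def summarize_with_nuance(claims: list[dict]) -> str:
    """Group answers by who asserts them, instead of picking a single 'truth'."""
    by_answer = defaultdict(list)
    for claim in claims:
        by_answer[claim["answer"]].append(f'{claim["source"]} ({claim["kind"]})')

    if len(by_answer) == 1:
        answer = next(iter(by_answer))
        return f"Sources agree: {answer}"

    lines = ["Sources disagree:"]
    for answer, sources in by_answer.items():
        lines.append(f'- "{answer}" according to: {", ".join(sources)}')
    return "\n".join(lines)

print(summarize_with_nuance(claims))
```

Even this crude grouping changes the user's relationship with the answer: a disagreement between a trade-statistics source and an anecdotal forum post invites exactly the questioning this article argues for.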
But if we want AI to think critically, we first need to teach people to think critically. And that starts with reforming education so that questioning, challenging, and debating are seen as strengths—not as disruptions to the system.
Final Thought: AI That Reasons is Great—But Only If We Can See Its Reasoning
The next wave of AI models is moving beyond simple pattern-matching toward reasoning-based systems. This sounds like progress—AI that doesn’t just retrieve facts but actually thinks through a problem before answering.
But here’s the catch: if AI is reasoning, we need to be able to see how it reaches its conclusions.
A reasoning model that operates like a "black box"—where the logic is hidden—is just as dangerous as one that blindly pulls from the internet. If we can’t interrogate its reasoning process, we’re left in the same position we are today: trusting an answer simply because it sounds authoritative.
Instead, AI reasoning should be layered, allowing us to inspect each step, question the assumptions behind it, and trace how it reached its conclusion, as the sketch below illustrates.
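As an illustration, here is a minimal sketch of such a layered trace, assuming a made-up step structure (this is not any real model's API): once each step carries its basis and confidence, the weak link, the unstated assumption, becomes visible and challengeable.

```python
# A minimal sketch of "layered" reasoning that can be interrogated.
# The ReasoningStep structure is an assumption for illustration,
# not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class ReasoningStep:
    claim: str         # what the model asserts at this step
    basis: str         # the evidence or rule it relied on
    confidence: float  # how sure it is, 0.0-1.0

def explain(steps: list[ReasoningStep]) -> None:
    """Print each step so a human can challenge any link in the chain."""
    for i, step in enumerate(steps, start=1):
        flag = "  <- worth questioning" if step.confidence < 0.6 else ""
        print(f"{i}. {step.claim}")
        print(f"   basis: {step.basis} (confidence {step.confidence:.0%}){flag}")

explain([
    ReasoningStep("Gouda is a widely exported cheese", "trade data", 0.9),
    ReasoningStep("Widely exported implies most consumed", "unstated assumption", 0.3),
    ReasoningStep("Gouda makes up 50-60% of global consumption", "follows from step 2", 0.3),
])
```

Run against the Gouda mishap, a trace like this would have exposed the error before it aired: the leap from "widely exported" to "most consumed" is the low-confidence step a critical reader could reject.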
In short, AI should reason like a great thinker—but also show its working like a great teacher.
Because AI that reasons in the dark is no better than AI that makes things up. If we want critical thinking AI, we need AI that can be critically examined—just like we should critically examine everything we’re taught, from textbooks to search results.
I’m #MadeByDyslexia – expect creative thinking & creative spelling.