What is intelligence, and what is artificial intelligence?
Gautam Borgohain
Machine Learning Engineering | Data Science | Artificial Intelligence
This week’s update is a little different from my usual deep dives into AI frameworks, tools, and methods. I’m currently at my parents’ place for vacation, and my mom, ever the curious literary soul, decided it was the perfect time to grill me about how ChatGPT works! She’s a writer, and lately, she’s been testing ChatGPT’s capabilities, exploring just how much of her editor’s role can be automated. For my tech-only audience, think of the writer-editor dynamic as somewhat similar to that of a software engineer and a PM.
In her experiments, she’s noticed that ChatGPT has become impressively good at answering her questions, especially those that are more complex and require a lot of thought and reasoning: for example, scoring poems and essays on a 10-point scale across multiple aspects. Needless to say, she’s quite impressed but can’t help but wonder: How is ChatGPT able to understand a poem and evaluate literary work so impressively? It’s almost as if ChatGPT can think and evaluate just like humans do—sometimes even better.
After a healthy three-hour discussion where I did my best impersonation of someone who “totally knows how LLMs work”—you know, explaining how they’re just next-word predictors, mentioning knowledge cutoff dates, and throwing in some jargon for good measure—I left with a nagging feeling. Maybe there’s some validity to her point of view. Perhaps there’s more to ‘artificial’ intelligence than meets the eye.
The AI Practitioner’s Take
Most of us in the AI field have a pretty mechanistic view of things. We often describe Large Language Models (LLMs) like ChatGPT as sophisticated parrots—they’re excellent at mimicking language but don’t truly “understand” it. They’re essentially predicting the next word in a sentence based on vast amounts of data they’ve been trained on.
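To make the “next-word predictor” idea concrete, here’s a minimal sketch of the sampling loop at the heart of text generation. The vocabulary and the scoring function are invented for illustration; a real model computes these scores with a huge neural network trained on vast amounts of text.

```python
import numpy as np

# Toy vocabulary -- purely illustrative, not a real model's.
vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context):
    # A real LLM computes these scores with a neural network conditioned
    # on the whole context; here we fake them deterministically from the
    # context just so the example runs end to end.
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    return rng.normal(size=len(vocab))

def sample_next_token(context, temperature=1.0):
    logits = next_token_logits(context) / temperature
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(vocab, p=probs)  # draw one word from the distribution

# Generation is just this loop, one token at a time.
context = ["the"]
for _ in range(5):
    context.append(sample_next_token(context))
print(" ".join(context))
```

Mechanically, that loop is the whole story: score, sample, append, repeat.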
Yann LeCun, one of the godfathers of deep learning, recently pointed out that LLMs take the same amount of time to answer a complex question as they do a simple one, and argued that this is evidence they don’t think or reason like humans. They’re not pondering the mysteries of the universe; they’re crunching probabilities.
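A quick way to see what he means: a model of fixed size does the same arithmetic for every generated token, whatever the prompt asks. This toy sketch (the matrix is an arbitrary stand-in for a model’s weights, not a real transformer) times a fixed “forward pass” per token for an easy and a hard question:

```python
import time
import numpy as np

# Stand-in for a model's weights: one fixed matrix multiply per token.
W = np.random.default_rng(0).normal(size=(1024, 1024))

def forward_pass(hidden):
    # The same arithmetic runs every step, regardless of the question.
    return np.tanh(W @ hidden)

for prompt in ["What is 2 + 2?", "Explain the meaning of life."]:
    hidden = np.random.default_rng(len(prompt)).normal(size=1024)
    start = time.perf_counter()
    for _ in range(50):  # fifty tokens' worth of compute
        hidden = forward_pass(hidden)
    elapsed = time.perf_counter() - start
    print(f"{prompt!r}: {elapsed:.3f}s for 50 steps")
```

Both prompts cost essentially the same, because per-token compute depends only on the model’s size, not on how hard the question is.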
Similarly, Andrej Karpathy, another heavy hitter in our field and former director of AI at Tesla, likened LLMs to “average labelers” in a recent tweet. His point is that these models generate responses based on the statistical average of the data they’ve been trained on—not through some high-level reasoning. “Average” here refers to the consensus of opinions they’ve absorbed, not the quality of their answers.
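One way to read “average labeler”: when many annotators have scored similar items in the training data, the model’s answer gravitates toward their consensus. A toy illustration with invented scores for a single poem:

```python
from collections import Counter

# Hypothetical annotator scores for one poem, on a 10-point scale.
annotator_scores = [7, 8, 7, 6, 7, 9, 7]

consensus = Counter(annotator_scores).most_common(1)[0][0]  # the modal label
mean_score = sum(annotator_scores) / len(annotator_scores)
print(f"consensus: {consensus}, mean: {mean_score:.1f}")  # consensus: 7, mean: 7.3
```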
Despite this simplicity, LLMs are incredibly useful. Social media is flooded with content showcasing cool new AI tools that can automate tasks we once thought required human-level intelligence. Though, let’s be honest, we sometimes get a little too excited and underestimate the complexity of the tasks we want to automate away.
The Other Side of the Coin
But here’s the thing: if you’ve been using ChatGPT, Claude, or any of the latest frontier models, you’ve probably had moments where you thought, “Wow, that was really insightful! How did it do that? It would have taken me a lot of time to arrive at the same conclusion.” I know I have. And so has my mom. And so have countless others who don’t spend their days tweaking hyperparameters or debating transformer architectures on Reddit. These AI models have become so good and so useful that it’s easy to anthropomorphise their work and forget the mechanical simplicity of their underlying algorithms. We start to wonder if maybe, just maybe, there’s something more going on under the hood.
Making Sense of It All
This got me thinking: What even is human intelligence? Are we all just “labelers” in a sense?
From the moment we’re born, we’re absorbing information, recognizing patterns, and making associations—all based on our experiences and the data (or education) we’ve been exposed to. When we face new problems, we often rely on past knowledge to find solutions. Sound familiar?
We are also learning and ‘labelling’ continuously. Of course, we learn from far more data, in more modalities, than AI does. Humans also have consciousness, emotions, and subjective experiences—things AI doesn’t seem to possess. But perhaps the line between human and artificial intelligence isn’t as clear-cut as we like to think.
Wrapping Up
I’ve come to appreciate both perspectives. Yes, from a technical standpoint, AI models are statistical machines churning out probabilities. But from a user’s perspective—especially those not steeped in the nitty-gritty details—they can appear remarkably intelligent.
Maybe the real insight here is that intelligence, whether human or artificial, is more nuanced than simple definitions allow. And perhaps, just perhaps, recognizing this can help bridge the gap between how AI practitioners and everyday users perceive these technologies.
At the end of the day, whether we’re engineers, writers, or somewhere in between, we’re all trying to make sense of the world using the tools we have. And if an AI can help my mom evaluate poetry—or help me write this article—maybe that’s a sign that we’re onto something interesting.
What are your thoughts? Have you had moments where an AI surprised you with its insights? I’d love to hear your experiences!
P.S. If you made it this far, thanks for joining me on this little philosophical detour. And mom, if you’re reading this, let’s schedule another chat soon. Maybe over some coffee next time.
Senior data engineer @ Quantexa
How do we know we're not biological machines churning out probabilities at a level we're not conscious of? A lot of our knowledge is intuitive/subconscious. Perhaps what distinguishes us from current AI are the layers on top (emotions, consciousness), but underlying these things is a system that functions somewhat similarly? :)
AI Art | AI Fashion | AI Design | AI Storyteller | SEO Writer | Career Mentor
Insightful
Gautam Borgohain, that convo sounds deep. It’s wild how AI makes us question our own minds. What do you think? Are we losing the plot here?