Will you ever get smarter?

A shorter version of this article was published as the intro to the July 4th edition of my Discomfort Zone newsletter.


It wouldn’t surprise me if you, dear reader, feel a tad redundant these days. A couple of years ago, you were at the peak of evolution, potentially one of the smartest creatures on the planet, if not the entire galaxy.

And now, suddenly, your brain ain’t so special.

The AIs are here and all we can do is hope that they’ll smile with benign patience as they shuffle us around like pawns on their 5-dimensional chessboards.

Who can blame you if you feel that way? Our gray matter is trapped inside skin and bones while AI runs on servers that can multiply indefinitely, powered by faster chips and limitless data.

But should we be so defeatist? Have we been approaching this from the wrong angle? What if human intelligence is more than the sum of one person’s neurons?


Those smartypants LLMs are becoming uber-geeks

Generative artificial intelligence spans a range of programs and products. The one that you’re probably most familiar with is an LLM (large language model) in the form of a chatbot.

The general vibe among tech bros and mere mortals is that chatbots are getting better all the time. The first, and still most well-known of the species, ChatGPT, was released to the general public in its GPT-3.5 iteration in November 2022.

We’ve since upscaled to GPT-4 and then to the GPT-4o version, an impressively “multimodal” LLM that can handle text, images, and audio.

GPT-5 (which I expect to be launched with a much snazzier branding such as ChatGPT-5ohcrapmyjobhasgone) should be appearing on a screen near you in late 2025 or early 2026, according to Mira Murati, Chief Technology Officer at OpenAI, the company that developed ChatGPT.

Murati sincerely believes that ChatGPT-5 will exhibit Ph.D. levels of intelligence:

“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati said last month in an interview with Dartmouth Engineering. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”

It’s interesting to note that Murati’s boss, OpenAI CEO Sam Altman, often talks about achieving the holy grail of AGI (artificial general intelligence).

Yet a person with a Ph.D. has, by definition, a highly specialized area of expertise, which is the opposite of “general intelligence”. Murati admitted as much when referring to performance for specific tasks.

Those guys should get their story straight.


Chatbots are about to bump their botty heads on the performance ceiling

Either way, it’s clear that OpenAI is improving its products. No argument there. The question is, how much better are they getting and how quickly are they reaching these new levels?

OpenAI has a vested interest in making impressive claims about future performance, such as Murati’s statement, in order to secure investment and boost sales as it burns through cash.

Should we believe these claims? Well, there are signs that LLMs are approaching a performance ceiling.

Why would there be a limit on LLM performance? In a nutshell, the quantity of training data available for the models to ingest is shrinking fast.

Once you’ve scooped up (in other words, stolen) the entire internet, all you can do to improve performance is tweak how the data is processed. In addition, the compute power and the energy required to run AI platforms are becoming prohibitively expensive.

These diminishing returns mean that the performance curves of the various LLMs are tapering off.

Sure, give ChatGPT a narrowly defined task that would require a huge chunk of time for a person to complete, and it might beat the best available human in a given organization.

So if your goals are to save time and produce an above-average output, an LLM might do the job just fine. It’s like having a tirelessly polite and helpful sidekick who performs at 80–90% of expert proficiency.

But if you need an output in the 90–100% range… if “good enough” isn’t good enough… you’ll need a genuine expert.

This all makes sense, right? Well, the reality is that OpenAI needs to maintain the hype cycle if it wants businesses to invest millions in its AI products.

The Gartner AI hype cycle from https://www.gartner.com/en/documents/5505695. Image under fair use provision

At the moment, this approach appears to be working.

However, OpenAI is also carrying out in-house studies that are presented as pseudo-academic (i.e. non-peer-reviewed) papers to give the illusion that the hype is based on science. These studies provide a sheen of objectivity while concealing an extravagant, unsupported claim.


OpenAI’s not-so-intelligent assumption

Here’s a great example: in the conclusion of a paper published last month, LLM Critics Help Catch LLM Bugs, OpenAI boldly stated:

From this point on the intelligence of LLMs … will only continue to improve. Human intelligence will not.

The vast, vast majority of people who read that paper will, like me, only skim it. They will probably also read the opening abstract and the conclusion.

The latter is where the outrageous hype is neatly slotted in.

The trouble is, the two ideas presented in the claim above — that LLMs will get smarter but people won’t — are based on… nothing.

First of all, the law of diminishing returns outlined above, together with the natural limits on processing power and energy, precludes gen AI models from improving indefinitely.

But secondly — and more importantly — there’s no reason whatsoever to believe that human intelligence will not improve. That claim is based on a fundamental misconception of our cognitive abilities: that they are centralized in a single brain.

In truth, our intelligence is distributed.


Your brain is not in a vat

From James Surowiecki’s 2004 book The Wisdom of Crowds to a variety of scientific studies, the idea that the human race’s superpower is its collective, socially organized intelligence has been established for some time.

We are a social species that evolved incredibly complex brains and languages to collaborate more successfully than other creatures. This collective intelligence is the reason we now — for better or worse — rule the world.

No single person’s super smart spaceship skills propelled us to the moon. Even a unique genius like Isaac Newton knew he stood “on the shoulders of giants”.

Intelligence is a team sport.

Our intelligence lies in our ability to share, compare, and process information, not in an IQ score or the ability to win at chess. The simple fact is that diverse groups of people perform better than individual experts (or even groups of experts).

Collaboration is the human superpower.

A recent article in The Guardian by neuropsychologist Huw Green makes an additional claim: that our thoughts and other mental processes are inextricably fueled by those around us.

We simply cannot think in a relationship vacuum.

The shifting social contexts we find ourselves in as we navigate the world provide a framework for our cognition. We were once babies who learned how to make sense of the world from the people around us, and that process keeps happening even when we are proud, independent adults who believe that we can “think for ourselves”.

Ironically, the very existence of LLMs provides proof that human intelligence is still improving.

They are tools that leverage the collective intelligence of the past to fuel the collective intelligence of the future. No technology appears like magic. Aliens didn’t show us how to make computers. Gods didn’t teach us how to tame fire.

Everything in human culture, including artificial intelligence, is an expression, a crystallization, and a commodification of human intelligence.

This is why I’m a pragmatist about AI, neither an optimist nor a pessimist.

We will develop myriad ways to use this technology: some good, some bad. But I am a pessimist about the influence of giant corporations and venture capitalists on the development of AI.

Beware of the hype. And believe in the human.


John B. Dutton’s new novel, a deadly AI satire called 2084, was published May 1 and is available at Barnes & Noble, Amazon, and select indie bookstores. For signed or personalized copies, visit johnbdutton.com. His Discomfort Zone newsletter comes out on Substack every second Thursday.

