No YOU'RE Just Fancy Autocomplete!
Illustration by @YvetteGilbert

It's been five months since ChatGPT was unleashed onto the world. Since then, many have focused on how good it is, and on all the bad things that might stem from its widespread adoption. On the flip side, some have emphasized how bad it is, characterizing it as "just a parlor trick" or "glorified autocomplete".

It's easy enough to understand this perspective. We know how ChatGPT works, at least at a high level: it takes a bunch of text as input and spits out the word that it feels (based on its training data) is the most likely to follow. Run this over and over again, feeding in the latest text each time, and it will write what seems like very coherent and correct prose (or even poetry) in the language of your choice (just not Icelandic).
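To make this loop concrete, here is a toy sketch in Python. To be clear, this is not OpenAI's code: the tiny vocabulary and the probabilities are invented for illustration, and a real LLM computes the distribution with a trained neural network over a vocabulary of tens of thousands of tokens.

    import random

    def next_word_distribution(context):
        # Toy stand-in for the model: map the text so far to
        # probabilities over candidate next words. A real LLM
        # computes this with a trained neural network.
        if context.endswith("sat on the"):
            return {"mat": 0.7, "sofa": 0.2, "moon": 0.1}
        return {"cat": 0.5, "dog": 0.3, "mat": 0.2}

    def generate(prompt, num_words):
        # The autoregressive loop: sample a word, append it,
        # feed the extended text back in, and repeat.
        text = prompt
        for _ in range(num_words):
            dist = next_word_distribution(text)
            words = list(dist)
            weights = list(dist.values())
            text += " " + random.choices(words, weights=weights)[0]
        return text

    print(generate("The cat sat on the", 1))  # usually "The cat sat on the mat"

Run it a few times and the output varies, because each word is sampled from a probability distribution rather than chosen deterministically. That, incidentally, is also why ChatGPT can give different answers to the same prompt.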

Obviously this is totally different from how humans generate language, so it is natural to be skeptical that ChatGPT is intelligent, let alone conscious.

But wait! Is this approach really so different from how our own brains operate?

To a large degree, the assumption that systems like ChatGPT work differently from our brains, and therefore cannot be truly intelligent, is a self-fulfilling prophecy. It has been reinforced (perhaps even stoked) by movies like Blade Runner and TV shows like Westworld. Essentially, these portrayals tell us that, no matter how human an AI's behavior might appear, if there isn't a pulsating, intricate network of neurons bathed in a complex soup of neurotransmitters, cradled within the soft, convoluted folds of moist, gray matter, then the AI can never truly be considered intelligent or conscious.

This strikes me as just so much AI-phobic nonsense. The truth is that we have very little idea of how humans actually generate language, the relationship between language and thought, or what makes us feel conscious (the latter being widely recognized as a particularly hard problem).

If we want to get to the bottom of what's happening inside our brains, one promising scientific approach would be to build a system that functions like our brain, as judged by its external behavior. Rather than condemning it as "fancy autocomplete" by definition, it would make more sense to presume that a system whose behavior is indistinguishable from a human's is intelligent and conscious. And this is precisely what is happening with ChatGPT and other large language models (LLMs).

Perceptive readers might recognize this as exactly the Turing Test, as proposed by computer science demigod Alan Turing way back in 1950. The premise of the Test was that you couldn't achieve human-like behavior without something analogous to human intelligence. So if a computer could fool a skilled interrogator into believing it is human, we should accept that it is intelligent and conscious in some meaningful way. Ironically, as silicon-based systems have begun to look like plausible contenders, resistance to the very premise of the Test has mounted, triggering reactions like the following:

Sure, ChatGPT-4 is capable of incredibly human-like interactions, but we know how it works under the hood: it's just glorified autocomplete. So the Turing Test must be wrong!

Personally, I do not accept the premise that a system functioning very differently from our brains, regardless of how smart it seems, cannot be considered intelligent or conscious. But even if we take this as a given, is it even certain that the way our brains work is all that different from the way ChatGPT does?

Now I can hear the protests:

But wait, when I say something, the words I utter are not chosen probabilistically based on which word is most likely to follow in a specific context!

Okay then, how exactly are they chosen? Certainly we don't consciously consider each word, weighing all the options before finally settling on the best one. We just start talking, and the words come pouring out.

And indeed, convincing research suggests that we tend to say and do stuff, then explain to ourselves why we did it in a kind of post-hoc rationalization. Seen in this light, the idea that our speech is generated using a probabilistic model similar to ChatGPT is actually very plausible. After all, our brains have an analogous structure (i.e. a huge, highly interconnected neural network), which is good at exactly that: taking in a large amount of unstructured information and figuring out which output best fits it.

This leads us to the even trickier question of consciousness. As far as I know, we haven't got a clue what leads to our subjective sense that we are experiencing the world from a unique, personal perspective (these subjective experiences are often referred to as qualia, and explaining them is the aforementioned hard problem of consciousness). But unless you believe there's some magic happening in the brain (which I don't), our perception of consciousness must arise somehow from the complexity of the underlying neural network itself.

Now consider that the GPT-4 model underlying the latest version of ChatGPT apparently has about a trillion parameters. That is still a couple of orders of magnitude fewer than the approximately 100 trillion synaptic connections in the brain, but roughly in the same ballpark. Is it really that implausible that something like consciousness might arise from within that unimaginable complexity? And if not now, how about in the future, when the complexity of the models inevitably reaches and ultimately exceeds that of the brain?
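For what it's worth, here is that back-of-the-envelope comparison in Python (the figures are the widely cited public estimates quoted above, not confirmed numbers):

    gpt4_parameters = 1e12   # ~1 trillion parameters (public estimate, not confirmed by OpenAI)
    brain_synapses = 1e14    # ~100 trillion synaptic connections
    print(brain_synapses / gpt4_parameters)  # 100.0, i.e. two orders of magnitude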

Knowing that ChatGPT just chooses the next most probable word over and over again to generate text, it's easy to see why jokesters on Twitter claim that it "doesn't understand shit about what it learns." What they fail to account for is the huge size of the model. I've heard numerous anecdotes indicating that even the researchers at OpenAI were astonished by the bot's human-like behavior when they started feeding text into the fully trained model, around the time of GPT-3. The staggering complexity of GPT models defies human intuition, and their uncanny ability to mimic human behavior consistently surprises even experts in the field.

Naturally, there are numerous open questions. GPT is just a language model, and clearly, our brains are far more than that. Our cortex, responsible for higher-order cognitive functions, represents just one component of a vast and intricate system. The role of emotions, for instance, remains unclear. Perhaps a ChatGPT-like entity would need some kind of "emotion module" to achieve intelligence and consciousness. Or maybe it could be intelligent in the same way as Star Trek's Mr. Spock, who famously lacked emotions.

Moreover, we have yet to fully comprehend the relationship between language and thought, as well as how these processes intertwine with other aspects of cognition. The human brain is multifaceted in ways that go far beyond the scope of a mere language model, and fully understanding it remains a formidable challenge.

Nonetheless, it is time to move beyond the human-centric assumption that nothing other than a human brain can truly be intelligent or conscious. A model as complex as GPT's latest incarnations is impossible for us to fully understand, so there is no real basis for labeling it a "stochastic parrot" or "fancy autocomplete". Ultimately, Turing had it right: if it walks like a brain and quacks like a brain, it's probably a brain.

Frank Diamond

Global Category Manager - Polymer Additives

1y

Does ChatGPT have a view on the types of jobs which will be in highest demand in 2050?

Daniel Backhaus

Digital Transformation, OmniChannel Commerce, Enterprise Solution Sales

1y

A lot to unpack here, for sure. And doing so in detail exceeds the time I have available for this right now (maybe I should get ChatGPT to help?), so I'll keep it brief. Relatively. The point at which an AI can be considered "human" or conscious is a higher bar than the 70-year-old Turing test. And whether we form sentences in a similar manner to ChatGPT is immaterial to this. Humans often speak before they think (guilty here), say stupid shit (guilty), and occasionally say one thing while "meaning" another, sometimes inadvertently, sometimes unintentionally (the two are subtly different in my mind, though perhaps not in yours), and at other times deceptively, ironically, or in other nuanced ways. I view the sentence-forming part as a "mechanical" manifestation of deeper thought, emotions, chemical imbalances, past trauma, innate desires, and the many other things that make us human, imperfect in so many ways but thereby perfectly human. Whether an AI can ever achieve this level of sentience I don't know, but it seems plausible. I think what most people are scared of is the "in-between": the state where AI is very human-like, perhaps nearly indistinguishable, but still lacking in critical aspects like empathy. Even Spock knew this.

Malik Lakhani

Founder | Transforming Garage Startups into Multi-Million Dollar Businesses By Delivering Cutting-edge IT Solutions

1y

I agree with your point that "nothing other than a human brain can truly be intelligent or conscious," but at the same time we can achieve a lot of smart automation in our day-to-day lives using AI. We can train it to perform some intelligent and complex tasks. Just like how I found you using AI.
