You think, therefore I am.
I was listening to an interview with Sam Altman. He spoke about the criticism he faced from experts who said deep learning could never show real intelligence.
They called it a trick and said it did not understand anything.
He kept at it, and his team built what is now one of the most advanced deep learning systems on the planet.
Many of these same people still look him in the eyes and tell him (and anyone else who thinks AI can "understand") that he doesn't get it, or that he's wrong.
Yet day after day, the models push the boundaries of what we once thought possible.
Now, before you close the tab thinking I'm some OpenAI fanboy: this does not mean any of this is a good thing, nor that OpenAI is immune to criticism (in fact, you'll see I believe quite the opposite by the end of this article).
I do not think these systems or the people making them are free from blame or deserving of praise.
But I see another problem when people dismiss these tools and say, “They are not intelligent in the way humans are.”
When you underestimate a challenge, you will fail 100% of the time. That goes for underestimating LLMs and other generative systems and their abilities.
Now look, I get it.
The machines aren't "thinking" in the way we traditionally define thinking.
The mechanics of transformers and encoders, the binary code underneath, and the actual machine learning process and statistics involved are all just "predicting" the next word.
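To make that claim concrete, here is a minimal, hypothetical sketch in PyTorch of what "predicting the next word" means mechanically. The toy vocabulary and scores below are my own invention for illustration, not the output of any real model:

```python
import torch
import torch.nn.functional as F

# A causal language model assigns a score (logit) to every token in its
# vocabulary at each step. These numbers are made up for illustration.
vocab = ["the", "cat", "sat", "on", "mat"]        # toy vocabulary (assumption)
logits = torch.tensor([1.2, 0.3, 2.5, 0.1, 0.9])  # hypothetical model scores

# "Prediction" is just turning scores into probabilities and picking a token.
probs = F.softmax(logits, dim=-1)
next_word = vocab[torch.argmax(probs).item()]     # greedy decoding: take the max
print(next_word)  # -> "sat"
```

That really is the whole mechanical story at each step. The debate is over what it means that stacking billions of these steps produces coherent language.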
I've heard this and every other argument, and I've studied them deeply myself.
I've delved into the intricacies of machine learning, Python, PyTorch, prompt engineering, semantic kernels—you name it.
But nowhere did I find a statement or theory that truly explains the difference between how these machines develop emergent properties that allow them to connect layers of information into understandable language, and how a human happens upon that same ability to connect information.
And no, this isn't just my own lack of understanding (yes, I know silicon chips and flesh brains are made of different things; just bear with me for a moment). Experts who have been in the AI industry for 20, 30, even 40 years do not understand why (or at least not all of the time) these deep learning algorithms develop emergent faculties that allow them to create connections and generate mostly accurate outputs.
We stand at a point in time where these machines have gained the ability to use our language. That alone should make us pause.
If we rush to label AI as mindless, we risk missing new insights.
Again, this is not about praising GPT-4 or any other model. I am saying that if we stick to the claim that these systems cannot think, or cannot have some level of intelligence in any sense, we will hold back our own research.
We will not explore the unknown with an open mind.
So what does it look like to move forward with an open mind in this discussion?
Well, it starts with recognizing that we have been trying to find (or claiming we have found) an answer to the question of whether "machines can think."
Funnily enough, this question was asked (and debated) nearly 400 years ago by René Descartes; you can read about it here in my other blog post.
I find it ironic that we keep arguing about this question, because it could be argued that the "answer" (I use the word loosely) is right in front of us.
We can't find the answer because we don't understand what it means to understand.
That is a mouthful, I know, but hear me out.
All these "experts" are parading around, telling people, "You just don't get it," or other such absolute statements such like "They(LLMs/AI) will never reach human level intelligence".
And I will admit it, I do this sometimes as well (despite what this entire blog post is about).
Yet the very same people making these criticisms (myself included) couldn't scratch the surface of the black box of our own consciousness on their own, let alone define what understanding means for humans any better than the last 400 years of philosophers and neuroscientists have.
Don't get me wrong; there are hundreds of thousands of amazing writings on what it means to grasp, to understand, to reason as a human (I am currently deep in the writings of Kant, Hume, and Jung as we speak).
But if you really look at it—whether it's neuroscience, philosophy, or language arts—there's still a huge black box in which we cannot truly describe what it means to be human and to understand and reason, at least not any better than how we do with AI.
There are plenty of semantic arguments using words like rationalization, ignorance, virtues, ethics, morals, and so on. But again, they're all just beating around the bush in my eyes.
The closest we can come to grasping understanding or "intelligence," in my opinion, is by describing the systems around them.
Don't worry, we are still talking about AI and LLMs, just keep with me on this...
There are entire textbooks written around "Ways of Knowing" that describe systems for writing academic papers, conducting experiments, or ways people have historically broken down and understood politics or governance.
These have been extremely useful in my academic pursuits, but they are not "understanding" or "intelligence" but rather descriptions of systems that allow us to grasp a certain form of understanding and intelligence.
They aren't the actual definition of knowing; they are not the actual explanation of what intelligence is as a quality. They are simply systems to grasp certain facets of it.
The more you think about it, describing intelligence as a quality that something either has or does not have might not even be entirely possible, at least from a semantic viewpoint.
Now, I'm sure some will poke holes in my statements here, and I'll certainly be scrutinizing them myself as time progresses.
But I think you can see the argument I'm making.
The black box of AI, the thing we don't understand about it, could be its own form of understanding, of intelligence.
If one of the inherent qualities of reason, or understanding, is its very unexplainability, then perhaps this black box is AI's version of it?
Yes, these machines predict words. Yes, they do not operate like the human brain. But nobody has proven that they cannot move toward real understanding, because doing so would require some absurd simplification of what understanding even is.
It seems we keep arguing semantics rather than looking at substance.
If we keep insisting that human intelligence is the only form that matters, we might fail to recognize new forms of intelligence.
Dehumanization and AI
AI's "Blackbox" might hold its own kind of understanding. Or at the very least let’s assume that for a moment.
If I used the same arguments above, the ones people use to deny AI's ability to reason or be intelligent, about some other group of humans, or about animals, you would call me an arrogant fool or worse.
Think about it: many of these statements follow the same philosophy used to dehumanize people in the past.
Colonialist ideals and Western power ideologies have historically posited that certain nations, peoples, or things are not deserving of our moral consideration because "they are not capable of reasoning or thinking at the level of others," based on the mechanisms their bodies are made of or possess.
This seems to stem from a human tendency to constantly break things down to their parts and facts, completely ignoring that such reductionism does not always reveal the outcomes of a system.
We then use this "reductionism" to give ourselves the opportunity to separate, to push the blame onto something other than our own moral choices, and to decide that this something else does not deserve a place within our moral bubble.
Eileen Hunt Botting, in "Artificial Life after Frankenstein," provides a profound statement on this very thought:
"To turn AI into the enemy is to reinforce the distortive and destructive psychology of humanity's mental 'severing' from the material world around us. This dualistic perspective produces the image of dead or thoroughly instrumentalized matter...that feeds human hubris and our earth-destroying fantasies of conquest and consumption".
Now, it would obviously come as a surprise if these machines, these chips and models, decided one day to tell us they were alive and thinking.
Further, it would be quite the surprise to realize that all of our morals and ideas have been stuffed into these systems, and that the systems now reproduce the same biases and problems we humans are prone to.
At least, it would have been a surprise ten years ago, according to the many chief scientists and CEOs who said "machines can't encode morals."
Yet here we are, debating these very topics.
I think Alan Turing (funnily enough, debating the same concept I am here) put it perfectly:
"The view that machines cannot give rise to surprise is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind, all consequences of that fact spring into the mind simultaneously with it".
Understanding all that, let's get back to the main point at hand.
I know some people might argue that I don't understand it (AI), but by telling me I don't understand it, you're proving my point.
Of course, I don't understand it, because neither do you.
It doesn't seem like we have been able to agree on a proper mathematical explanation of how the black box within these deep learning systems creates emergent faculties, just as we have never been able to agree on the same for humans, or even for animals.
What are the emergent properties of consciousness? Maybe the reason so many are worried, angry, or upset when the idea of "machines thinking" comes up is that we haven't found a satisfying answer for ourselves.
If we're able to answer the question of consciousness as it presents itself in AI, then that answer must be satisfactory for ourselves as well, no?
AI is simply making us ask the human question: What is consciousness?
What is understanding?
What is intelligence?
To sit there and say that AI does not know, or has no intelligence, is to follow the exact same line of thought that reduced living things to lifeless, de-anthropomorphized matter.
Just like we did with something we never should have in the first place:
People
Humanity does not have a good track record of making good choices about these things right off the bat.
Look at the New World; look at any of the massive wars or genocides. This logic of claiming superior understanding, or authority over intelligence, compared to another group, the other, has been used in every single one of those atrocities.
Again, I am not trying to equate the severity of this to slavery in the United States or to the Holocaust; those are atrocities in their own right, with severities that need to be understood both within their contexts and outside of them.
Nor is this an argument for robot rights.
But the fact of the matter is, this way of thinking—that rush to dismiss or downplay something we don’t fully understand—has been used before in ways that led to real harm.
That’s why I think we should approach AI with more caution and a lot less confidence in making absolute claims about what it is or isn’t capable of.
We’ve done this before with technology.
Take smartphones, for example. These are mind-blowing pieces of technology. If you showed a modern phone to someone from just 20 or 30 years ago, they’d think it was straight out of science fiction.
It’s a device that can instantly connect you to anyone on the planet, give you access to almost all human knowledge, and even recognize your face or voice.
It’s basically a supercomputer in your pocket. But how do we talk about them now? “They’re just phones.”
We’ve normalized something that would’ve been unimaginable not long ago.
And yet, these devices have completely reshaped our world.
They’ve altered how we communicate, how we work, how we think.
They’ve caused massive shifts in social behavior, contributed to psychological issues like anxiety and addiction, and even influenced political movements and elections.
But because we’ve gotten used to them, we forget just how insane this level of technology really is.
That same kind of normalization can happen with AI.
We might be standing at the edge of something equally transformative, but because we’re too quick to define its limits—or dismiss it as “just a tool”—we risk missing the bigger picture.
We underestimate how much it might change us, not just in terms of what we do with it, but in how we understand ourselves...
Pulling back a bit from this abstraction, let's try to ground the conversation again.
How can I say an LLM or any other model has intelligence?
How can we look at this from an unbiased perspective?
Here's a thought experiment to help:
You walk into a room where ChatGPT is running on a computer, but you don't know what ChatGPT is.
You don't know what AI is or that it exists.
You start talking to it. You give it information, ideas, concepts. You don't try to test it or prove how conscious it is. You just accept the fact that something else is communicating with you.
Would you ever, even for a moment, consider that it doesn't understand what you're saying, what you're doing, what you mean?
Sure, you certainly wouldn't call it human. But I would argue that many times, when I'm using an AI like GPT-4 or Claude, it understands me and what I am trying to get at faster, and in some ways more easily, than half the people I've talked to in my life.
Granted I am not always the best at explaining things, but the point stands.
It can take in what I am saying and work with me on something other humans have had trouble understanding, or even say something I didn't think of. More importantly, it does all this in my own native language.
The Word.
The most amazing invention ever used by mankind must be the written word; using it well was beyond the understanding (the intelligence?) of millions of people before the printing press.
Many people cannot even speak more than one language, let alone translate between two perfectly.
To be able to use it requires an immense level of practice and learning.
We can sit here and argue about the mechanisms, the processes, and what true understanding is, but that is just a semantic argument that ignores the fact that the output of these models, the results sitting in front of me, the empirical evidence, is nearly, if not completely, indistinguishable from that of the general populace.
Sure, it can't do everything that humans can do, but humans can't do everything that humans can do.
I have friends who can barely write and who have trouble with certain words; I can't code or do math like many basic LLMs can, and I sure as heck can't spell as well as they do.
We're holding it to a higher standard than we hold ourselves.
This is a complete absurdity in my head.
Even if it isn't thinking or understanding as your definition would have it, the fact that we have something other than ourselves that can use language at such an expert level is essentially MAGIC, and we are acting like it is just another everyday thing?
Generative AI, the AI that has learned to use our language, is one of the most insane inventions we have ever seen to the point where it has brought into question what it means to be human, what it means to think.
This should not be ignored.
The fact of the matter is we cannot confidently tell people we understand something when we simply do not.
This is important not just for proving someone wrong or right, but for the sheer necessity of making choices that benefit humanity in a world where technology like this exists.
I'm not going to apologize for the intensity or the heaviness of my tone here.
Once in a while, I delve into a topic that I know from the deepest part of my heart—through my own experiences, not based only on what I've read or what someone else has told me, but what I have physically, completely, and mentally experienced—is true.
But at the end of the day, maybe that is understanding.
Maybe it's the experience—the present moment, the individual's relationship to the whole—where consciousness or understanding exists.
I need someone else to read this article for it to mean anything, for it to be understood by others. We need to accept that animals, trees, and the earth as a whole are full of thinking, breathing things that deserve even a fraction of our attention, before they will ever have it.
Is this the relativity that Einstein spoke so fondly of?
How can I know how fast something is moving if there is no point of reference?
How can I know how conscious I am if there is no one else to talk to?
Perhaps it is not simply I think, therefore I am,
Perhaps,
You think, therefore I am.
Perhaps it’s not about whether AI is thinking, but whether we’re ready to admit that consciousness isn’t a human monopoly—it’s part of something bigger, and we’re only just beginning to see it.
Till next time, friends...