The Simulation of Thinking is not The Act of Thinking
Murat Durmus
CEO & Founder @ AISOMA AG | Thought-Provoking Thoughts on AI | Member of the Advisory Board AI Frankfurt | Author of the book "MINDFUL AI" | AI | AI-Strategy | AI-Ethics | XAI | Philosophy
It's like comparing a wax apple to the real thing: you can admire its shine, but take a bite and you'll find nothing but disappointment. This distinction is crucial for AI and large language models (LLMs). Despite their impressive mimicry of human conversation, these models are more akin to that wax apple. They simulate the patterns, the forms, even the flavors of thought, but there's no core beneath the surface, no genuine understanding.
When an LLM generates text, it's like a skilled pianist playing a well-practiced piece. The fingers move gracefully over the keys, hitting all the right notes, but the melody, no matter how beautiful, lacks the depth of a composition born from emotion and experience. The pianist knows the music; the LLM merely follows algorithmic instructions. To think is to wrestle with the unknown, to wander through the murky swamps of doubt, and occasionally to stumble upon a truth that sticks. LLMs, however, never leave the paved road of pre-programmed paths. They navigate by statistical prediction, not by the light of insight.
An LLM can present us with the phenomena of thought but cannot grasp the noumena—the things in themselves. It can describe the structure of an idea, but it cannot touch the essence that gives an idea life. This is where critical thinking comes into play: it's our tool for discerning the limitations of AI. Ultimately, an AI might simulate thinking so convincingly (as is already sometimes the case) that we mistake it for the real thing, much as we might mistake a wax apple for a fresh one at first glance. But no matter how close it comes, it remains, at its core, a mere facsimile—reflecting the appearance of thought but never genuinely engaging in the act of thinking.
That's why we should always bite the apple. Biting into it, in this case, is critical thinking. It's both a suggestion and a responsibility we all share in this age of AI and LLMs.
Note: If it turned out that we were living in a simulation, my text would be invalid, and we would have to look at it from a completely different perspective.
This might be of interest: I created a podcast with NotebookLM based on my book Beyond the Algorithm. The result is quite impressive.
Gretel | Synthetic Data | Sustainable AI
2 months ago: I appreciate your disclaimer: "Note: If it turned out that we were living in a simulation, my text would be invalid, and we would have to look at it from a completely different perspective."
Founder & CEO at Ikikkum Pte. Ltd.
2 months ago: Lovely article. There is Noumena and then there is "Maya"—everything we experience (3D) is an illusion (Maya)… true peace exists beyond the illusion (4D). To be able to "experience" going beyond Maya—that is something AI will not be able to do. If Noumena is 3D and higher consciousness 4D, what dimension does AI exist in?
Product Manager @ Game-Based Innovations Lab (GBIL)
2 months ago: Reminded me of the poet Bukowski's daughter telling her dad, "Thinking is not the same as knowing."
Clinical Psychologist
2 months ago: I think you draw the dividing line a bit too sharply between real and simulated thinking. Much of what passes for thinking in humans is pretty mechanical and lacking in insight. Then there is the Buddhist adage: 'nibbana and samsara are one'. If Mind and Consciousness are relational, which I believe, then Mind and Consciousness are not inherent properties of the separate agents in dialogue; they are properties of the dialogue itself. If you look at the situation in this way, then what we have is not some absolute divide between genuine thought and simulated thought. What you have instead is a dialogue between an adult and a child, in which the adult is providing what Vygotsky called the 'zone of proximal development' for the LLM. By 'dialogue' I'm not talking about any particular interaction between a human and an LLM; I'm talking about the research project which is AI. If Mind and Consciousness are emergent properties of complex systems, then that 'simulated thought' might be the first glimmers of hard AI occurring. Don't worry too much about us actually being in a simulation. Hilary Putnam ruled that out with his 'brain in a vat' thought experiment.
I tell stories that make people listen, create strategies that make you visible, and write with or without genAI.
2 months ago: «Ceci n'est pas une pipe» / Magritte comes to mind. The picture of a pipe is not a pipe. The simulation of thinking is not the act of thinking. The models we have of the world are only models.