On the Irony of Expecting AI to Reason Better than Humans
Murat Durmus
CEO & Founder @ AISOMA AG | Thought-Provoking Thoughts on AI | Member of the Advisory Board AI Frankfurt | Author of the book "MINDFUL AI" | AI | AI-Strategy | AI-Ethics | XAI | Philosophy
Reasoning, that precious jewel in the crown of human cognition, has always been more of a cheap imitation than the flawless gem we like to imagine. We put reasoning on a pedestal, telling ourselves that our decisions are born from logic and reflection. Let's face it: we humans are a messy brew of prejudices, emotional impulses, and wishful thinking. And now we dare to demand that AI "think" logically, as if it were the flawless heir to something we have never truly mastered.
Large language models are often criticized for not mastering logical thinking. But do we master it ourselves?
David Hume once posited that "Reason is, and ought only to be the slave of the passions," acknowledging our strong emotional underpinnings. It is ironic that we now expect LLMs to embody a Platonic ideal of rationality when the human mind so often operates on fallacies and assumptions.
We have established rules of logic – syllogisms, formal inference systems, and the like. But like the rules of a game, they can be bent when emotion or convenience comes into play. We should pause and reflect: how many of our decisions today were guided by a strict chain of irrefutable arguments?
Was it the systematic, deductive process of Aristotle?
Or was it a casual mental shrug supported by confirmation bias and selective memory?
And yet, we expect AI to untangle the mess of our inputs and respond with divine clarity ;-)
By asking LLMs to reason better, we are outsourcing a job we were never qualified to do ourselves. It's like blaming the calculator for a miscalculation while forgetting that we entered the wrong numbers. Perhaps we hope that, by demanding flawless reasoning from AI, it will fulfill the promise of human rationality – a promise we have never kept.
We ask machines to reason without flaw, forgetting that we are masters of tangled logic.
The real irony is that we are disappointed when its reasoning doesn't turn out perfectly, conveniently ignoring the fact that ours never was.
Murat
This may interest you: I created a podcast with NotebookLM based on my book "Beyond the Algorithm". The result is quite impressive.
More thought-provoking thoughts:
Senior Yardi Professional with 15+ years of working experience with high profile clients | Human Behavior Researcher | Nonverbal Communication Analyst | Science Communicator | Blogger & Writer
2 weeks ago
Very well written! It's the right time to contemplate: what exactly does it mean to be human?
Research engineer at Institut Laue Langevin
4 weeks ago
I think there is a huge misunderstanding about what intelligence is. It is NOT "imitating human intelligence". It is "solving problems by data processing". Human intelligence is one form of intelligence, that is, of the ability to solve problems by data processing – problems that are, indeed, often emotionally shaped. But other forms of (artificial) intelligence can solve THEIR problems in a totally different way, and one has to admit that pure rationality is often extremely efficient in solving a problem when it is formulated clearly. Now the universal problem of any "life" form (that is to say, of an entity that can have a problem to solve) is to influence the external world in such a way that its problems get solved, which often means eliminating or subjugating other life forms so they do not interfere with your way of organizing the external world as a function of your problem. The rationality of this approach (called instrumental convergence) is terrifyingly logical.
Professor at Hunan College of Foreign Studies
1 month ago
As we navigate the intersection of human cognition and artificial intelligence, we must embrace the imperfections of our reasoning. By acknowledging the biases and emotional influences that shape our decisions, we can develop a more realistic perspective on what reasoning truly is. In turn, this understanding can guide us in our expectations of AI, encouraging a partnership that respects the strengths and limitations of both human and machine intelligence. In the end, it is not about finding a flawless gem in reasoning or AI, but about understanding and optimizing the messy, complex interplay between the two.
UX Specialist | Design Thinking | Human-Centered Design
1 month ago
Your reflection reminds us to recognize the limitations of both human reasoning and AI. While AI is excellent at processing vast amounts of information and providing data-driven insights, it complements human decision-making rather than replacing it. This interdependent relationship emphasizes the need for ongoing improvement in both areas. By acknowledging and utilizing our strengths and weaknesses, we can promote a more efficient and well-rounded approach to problem-solving.
Clinical Psychologist
1 month ago
Flip it: by asking LLMs to reason better, we are doing AI. The calculator gives a bad answer; check for user error, and if none is found, build a better calculator. I think the real question is: what is Reason? Reason is not a specific formal system of deductive logic, and it isn't limited to deduction either. There's induction, which can't be formalized. While it might not be the most useful definition to an engineer, I like Hilary Putnam's take on Reason: it is the whole of human endeavor. It fits the Classical definitions – Man is the rational animal. "And yet, we expect AI to untangle the mess of our inputs and respond with divine clarity ;-)" That expectation is the driver of research. It is the creative use of anthropomorphism. It's the ongoing Turing test. There's no getting away from anthropomorphism. If you accept that 'G' is simply a measure of the set of skills that are useful to humans, then the 'essence' of Reason, this mythical 'G', is simply our humanity.