#62 Beyond Reason: Is AI Finally Thinking?

In 2019, the conversation around AI was dominated by both excitement and skepticism. The notion that AI could one day rival human thinking was intriguing but seemed far off. At that time, despite the rapid advancements in AI, I questioned the very use of the term "Intelligence" when AI lacked something so fundamentally human — the ability to "think." I likened AI's abilities more to advanced automation or "Intelligent Automation," lacking the depth, randomness, and emotional underpinnings that shape human cognition.

Fast forward to 2024, and the landscape has transformed dramatically. OpenAI’s o1 model, with an IQ estimated around 120, demonstrates reasoning and problem-solving skills that eclipse most humans on standard tests like the Norway Mensa IQ exam. This represents a profound leap in AI's capacity to perform highly sophisticated cognitive tasks, narrowing the gap between what we call "intelligent" for humans and machines. It validates the concerns and questions raised back in 2019 about AI’s future trajectory — AI is now, in many ways, getting “smarter,” at least within the narrow confines of reasoning and logic.

Yet, what I mused in 2019 still holds. While o1 might excel at recognizing patterns and solving complex puzzles, can it truly "think" as humans do? Can it experience random thoughts, have moods, or engage in creative flights of fancy driven by feelings rather than data? The nature of human thought is far more chaotic, shaped by random stimuli, emotions, and abstract reasoning that are difficult to quantify. In contrast, Przemek Chojecki's 2019 comment still resonates:

AI's random "thoughts" are noise, modeled mathematically, bound by the architecture and data it was trained on.


This leads us to the ongoing debate on AGI (Artificial General Intelligence) — can AI ever truly rival human cognition? With o1's success in IQ tests, it's tempting to assume that AGI is around the corner, but that’s a narrow view.

Human intelligence isn’t just about logic or problem-solving. It’s the unpredictability, the creativity that emerges from what seems like randomness.

AI, while showing vast progress in certain domains, hasn't yet demonstrated these qualities. Its "mood" can be thought of as a product of recent data and its architecture, unlike the complex interplay of human emotion and experience.

So, where do we go from here? The advancements seen with models like o1 suggest that AI is on an accelerating path toward higher-order reasoning. But as we move closer to AGI, the question isn’t just about intelligence in the form of IQ or problem-solving abilities. The real frontier is creativity, spontaneity, and emotional intelligence.

Will an AI ever wake up with random thoughts about what to eat, where to go, or what to wear, influenced by nothing more than its mood?
Or will AI remain, as it always has been, deeply task-oriented, lacking the unpredictable chaos that makes human intelligence so uniquely creative?

The arrival of AGI could be years or decades away, but one thing is clear: AI is closing the gap on human cognitive abilities faster than we imagined in 2019. The new o1 model’s IQ milestone is a harbinger of things to come — more sophisticated systems, greater capabilities, and perhaps, eventually, a form of thinking that blurs the lines between human and machine. Whether that makes the world more exciting or more terrifying is a question worth pondering, at least for humans.

For AI, that question remains—unanswered, at least for now.

(I will attempt to answer these and some related questions in collaboration with Frank Buytendijk and Philip Walsh, PhD, in a forthcoming Gartner publication. Stay tuned!)



To Automate or Not To Automate?

A Shakespearean Conundrum Reframed in the Age of AI!

Here's groundbreaking research from my esteemed colleagues, Gareth Herschel and David Pidsley, on decision augmentation. This work, led by Gareth, Gartner's Data and Analytics Summit keynote speaker and VP of Research, introduces the new seven-level Human-AI Delegation Framework (HumAID). It was a privilege to peer-review and provide input for this research.


In our AI-driven world, balancing human and machine decision-making is crucial. Our research reveals that while many employees welcome AI assistance, the right level of AI involvement varies by task. HumAID helps organizations navigate this by providing clear guidelines for effective human-AI collaboration.

This also ties in nicely with my own published research (Gartner log-in needed) on how to enhance your predictive and prescriptive analytics with generative AI. We discussed it in some detail in the last issue of this newsletter.

Discover how HumAID can enhance your decision-making processes and mitigate risks. For Gartner members, the full research is available here (client login required).



Global AI Regulation in Focus: Meta's EU Concerns and California’s New Laws


As AI technology advances rapidly, regulatory frameworks around the world are being tested. In two major moves, Meta has led an open letter expressing concerns about EU AI regulations, while California has enacted new laws aimed at curbing AI misuse, particularly around deepfakes.

Meta’s open letter, signed by over 50 companies, warns that Europe’s slow and inconsistent approach to AI regulation is hindering the region’s competitiveness. The letter calls for “consistent, quick, and clear” regulations, especially around data usage in AI training, as Europe risks falling behind in the global AI race. The letter highlights the need for regulatory clarity around open and multimodal AI models, which could drive the next big leap in AI advancements.

Meanwhile, in California, Governor Gavin Newsom has signed several AI-related bills into law, targeting the misuse of AI to create harmful deepfakes. One of the key laws, SB 926, criminalizes the creation and distribution of sexually explicit deepfakes that cause emotional distress. Social media platforms are also required to act swiftly in removing such content, while another law mandates that AI-generated content carry a disclosure to inform users.

The regulatory efforts on both sides of the Atlantic signal a growing urgency to balance innovation with responsible AI governance. In California, further regulation is pending, with Governor Newsom yet to sign the major Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. This legislation could shape how AI is governed in Silicon Valley, one of the global centers for AI development.

As governments, tech companies, and researchers navigate this complex landscape, the future of AI regulation will have far-reaching impacts on both innovation and consumer protection.



Signing Off

Why did the AI start meditating?


Because it wanted to process its "thoughts" before running any new algorithms!

Keep an eye on our upcoming editions for in-depth discussions on specific AI trends, expert insights, and answers to your most pressing AI questions!

Stay connected for more updates and insights in the dynamic world of AI.

For any feedback or topics you'd like us to cover, feel free to contact me via LinkedIn.

DEEPakAI: Demystifying AI, one newsletter at a time!

P.S. This newsletter includes smart-prompt-based, LLM-generated content. The views and opinions expressed in the newsletter are my personal views and opinions.

Jayshree Seth

Corporate Scientist and Chief Science Advocate at 3M

2 months ago

Thanks Deepak… a lot to think about indeed! Our State of Science Insights survey shows that, currently, an equal percentage of respondents believe AI will change the world as we know it and believe AI needs to be heavily regulated. Also, the global public largely views AI as a tool for problem solving. These results underscore the importance of directing AI toward the challenges the public would like science- and technology-based innovations to address. https://arxiv.org/abs/2407.15998

Can AI really match human thinking? Fascinating regulation insights too.

Brian Buffington

Business Development | Technology, Media & Telecommunications | Product Management | Marketing

2 months ago

Very informative

David Pidsley

Decision Intelligence & Agentic Analytics | Gartner

2 months ago

Thank you for mentioning my research. HumAID is a Gartner framework that expands on the established three Decision Intelligence (DI) styles and helps decision leaders and decision-makers avoid an inappropriate imbalance of human-machine codependency when adopting AI for decisions.
