AI is Dead! Long Live AI
Stephen Salaka, PhD, MBA, PMP, CSM
Director of Software Engineering | Digital Transformation, Enterprise Architecture, and AI Integrations
Computerphile put out a great video yesterday about the current limits of how we train and run large language models (LLMs). The consensus is starting to look like we might be quickly approaching a plateau in what this paradigm can provide.
Artificial Intelligence (AI) has been a whirlwind of activity over the last decade. The rise of large language models (LLMs) like GPT-3 seemed to usher in a whole new era of possibility – and even fear – surrounding what these neural networks could achieve. But as we reach a critical juncture in LLM development, it appears AI's revolutionary promise might be stalling. Will the excitement fade into a plateau, or will entirely new strategies for machine learning propel us into the age of true Artificial General Intelligence (AGI)?
For software development leaders, this means closely evaluating the limitations of AI as they stand now. Where are the potential benefits, but more importantly, where should expectations be tempered to avoid disappointment and wasted resources?
A Brief, Tumultuous History of AI
Though we often think of AI as a modern phenomenon, its roots stretch back to the 1960s, when early neural networks brought new sophistication to classification tasks (and don't forget ELIZA, the first program to come close to passing the Turing test). The 1990s saw the breakthrough of recurrent neural networks, which allowed for some language generation but suffered from severe memory limitations. The transformer architecture, which can attend to a full sentence at once, only arrived in 2017, paving the way for the GPT-style LLMs we're familiar with now.
With each leap – GPT-1, GPT-2, and then the astounding GPT-3 – the potential for AI's applications seemed to grow exponentially. However, the jump from GPT-3 to the current GPT-4 shows a less dramatic improvement than before, raising concerns that simply scaling up these models will yield diminishing returns. We need exponentially more data to produce smaller and smaller bumps in accuracy. On top of that, training and inference costs climb with model size, to the point where we run into hard computational limits (enormous energy and resource costs).
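The diminishing-returns intuition can be sketched with a toy power-law scaling curve. The constants below are made up purely for illustration (not fitted to any real model family); the shape, loss falling as a small negative power of parameter count, is the point:

```python
# Toy illustration of power-law scaling: loss ~ a * N^(-b).
# The constants a and b are hypothetical, chosen only to show the shape
# of the curve -- each 10x jump in parameters buys a smaller improvement.

def loss(params_billions: float, a: float = 10.0, b: float = 0.07) -> float:
    """Hypothetical validation loss as a function of parameter count (in billions)."""
    return a * params_billions ** -b

for n in [0.1, 1, 10, 100, 1000]:
    print(f"{n:>7.1f}B params -> loss {loss(n):.3f}")
```

Under any curve of this shape, the improvement from 1B to 10B parameters exceeds the improvement from 10B to 100B, even though the second jump costs roughly ten times as much compute – which is the crux of the scaling concern.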
Key Limitations of LLMs as They Exist Now
The Future of AI: Where Does This Leave Us?
So, does all this point to AI hitting a dead end? To answer this, we need to separate the hype from the reality:
The Hope Burns Eternal
The arrival of true AGI might still be many years (or even decades) away. Instead, leaders should focus on concrete, near-term applications where AI offers provable value.
It's a fascinating time within the field, and whether LLMs themselves will bring about the next revolution remains to be seen. One thing's for sure: AI's development won't stop, but it's vital to set realistic expectations to avoid costly missteps.
Don't get me wrong, the advent of AI is huge and will shift how many things are done going forward. It may lead to new and better architectures and algorithms that can propel AI even further, and I'll be interested to revisit this article in five years' time to see whether any of these predictions hold up. However, looking at the data coming out, it doesn't appear AGI is on the near horizon via the techniques we are leveraging at the moment.
Interesting perspective on the limitations of current AI development. You bring up a crucial point about the quality of data being a major factor in AI's effectiveness. How do you think we can improve data cleanliness and collection processes to take AI to the next level?
CEO at Cognitive.Ai | Building Next-Generation AI Services | Available for Podcast Interviews | Partnering with Top-Tier Brands to Shape the Future
5mo · Insightful perspective. AI's limitations highlight opportunities for innovative computational frameworks. Bad data remains a bottleneck. What's your take on distributional models complementing scaling? Stephen Salaka
★ Global Director at QA Mentor ★ Empowering Businesses with Scalable, Future-Proof Software Testing QA Solutions ★ Thought Leader in Emerging Tech & Quality Assurance Innovation ★ LinkedIn Top IT Voice 2024 ★
5mo · Stephen Salaka - it's a fabulous take on AI and I agree with your thoughts and am pro AI - but yes, totally in sync with the tone of this post. Pairing Large Language Models with other classifiers is like crafting a gourmet dish: it requires clean ingredients (data) and skilled chefs (governance) to achieve a truly delectable outcome.
GEN AI Evangelist | #TechSherpa | #LiftOthersUp
5mo · AI applications are evolving rapidly, though immediate breakthroughs face challenges. Open discussions foster realistic expectations and identify opportunities. Stephen Salaka
Interesting perspective! AI faces challenges, but innovation can always surprise us.