AI is Dead! Long Live AI

Computerphile put out a great video yesterday about the current limits of large language models (LLMs). The consensus is starting to look like we might be quickly approaching a plateau in what this paradigm can provide.

https://www.youtube.com/watch?v=dDUC-LqVrPU

Artificial Intelligence (AI) has been a whirlwind of activity over the last decade. The rise of large language models (LLMs) like GPT-3 seemed to usher in a whole new era of possibility – and even fear – about what these neural networks could achieve. But as we reach a critical juncture in LLM development, AI's revolutionary promise appears to be stalling. Will the excitement fade into a plateau, or will entirely new strategies for machine learning propel us into the age of true Artificial General Intelligence (AGI)?

For software development leaders, this means closely evaluating the limitations of AI as they stand now. Where are the potential benefits, and more importantly, where should expectations be tempered to avoid disappointment and wasted resources?

A Brief, Tumultuous History of AI

Though we often think of AI as a modern phenomenon, its roots stretch back to the 1960s, when early neural networks made classification tasks more sophisticated (and don't forget ELIZA, the first program to come close to passing the Turing test). The 1990s saw the breakthrough of recurrent neural networks, which allowed for some language generation but with severe memory limitations. The ability to analyze full sentences with transformers only arrived about a decade ago, paving the way for the GPT-style LLMs we're familiar with now.

With each leap – GPT-1, GPT-2, and then the astounding GPT-3 – the potential for AI's applications seemed to grow exponentially. However, the jump from GPT-3 to the current GPT-4 shows a less dramatic improvement than before, raising concerns that scaling these models up will offer diminishing returns going forward. We need exponentially larger amounts of data to produce smaller and smaller bumps in accuracy. On top of that, the training time and the processing time to use the models both increase, to the point where we run into computational limits and huge energy and resource costs.
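The diminishing-returns argument above can be sketched with a toy power-law scaling curve. The functional form (loss falling as a power of training data) matches the shape of empirical LLM scaling results, but the constants here are invented purely for illustration:

```python
# Illustrative scaling-law arithmetic (hypothetical constants, not a real fit):
# loss(N) = a * N**(-alpha), a common empirical form for LLM scaling curves.
a, alpha = 10.0, 0.1

def loss(n_tokens: float) -> float:
    """Modeled loss after training on n_tokens of data."""
    return a * n_tokens ** -alpha

# Each 10x increase in data buys a smaller absolute improvement in loss.
gains = []
for exp in range(9, 13):  # 1e9 .. 1e12 training tokens
    improvement = loss(10 ** exp) - loss(10 ** (exp + 1))
    gains.append(round(improvement, 4))

print(gains)  # each entry is smaller than the last
```

Each step multiplies the data (and cost) by ten, yet the loss improvement per step keeps shrinking, which is exactly the "exponentially more data for smaller bumps" trade-off described above.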


Key Limitations of LLMs as They Exist Now

  • Facebook Cats…Everywhere The saturation of specific data types has led to some bizarre AI behaviors. Because cats are popular internet content, generative models are exceedingly good at producing cat images, but their grasp of less common concepts remains weak. This highlights a critical problem: AI only knows what we teach it. Bias within the enormous training datasets becomes embedded in the resulting models, and generating anything that wasn't emphasized in the training corpus becomes unreliable. Sure, we could train a model on 1,000,000 newly created pictures of purple dinosaurs, but what would be the return on that?
  • Infinite Data, Yet Finite Understanding Would feeding these models ever more information improve them indefinitely? Probably not. Current LLM techniques seem to have inherent limits. More training data will certainly provide marginal improvements, but we might be fast approaching the point where it's simply not worth the cost and computation required.
  • AI's Successes Haven't Translated to Widespread Business Value Though they excel at content generation and image creation, LLMs haven't produced the "magic bullet" solutions many companies anticipated. Sure, they threaten some creative professions, but true business transformation requires more than spitting out text or drawing pictures. Most organizations haven't seen a strong return on investment from AI projects, often discovering hidden issues with their data quality that no model can magically fix.

The Future of AI: Where Does This Leave Us?

So, does all this point to AI hitting a dead end? To answer this, we need to separate the hype from the reality:

  • AI Will Automate Some Things, Not Everything Routine tasks and certain basic content creation are prime candidates for LLM-driven automation. But expecting AI to understand business strategy, write complex software systems, or achieve "general intelligence" is a profound overestimation of current abilities.
  • AI Requires Good Data to Flourish Those "AI doesn't work" stories you hear? They often boil down to bad data. AI isn't a silver bullet; it's a tool. If your data is a mess, the results will be too. Data cleansing and restructuring are likely to become core costs of any successful AI project. Most companies are sitting on decades of bad data that would cost more than many small countries' GDP to clean, so until they start seriously tackling those issues, the value AI can extract from it will be minimal.
  • Distribution is Key, Not Just Algorithms Breakthroughs in how we distribute computation (like we've seen in material sciences) are just as likely to push AI forward as simply scaling models ever bigger. The question isn't whether we'll find more clever algorithms, but how to efficiently execute them for real-world problem-solving. Case in point: we've been making discoveries in chemistry and material sciences since the 1990s using distributed computing. While applying AI techniques to these spaces has shown some promise (see some of the recent news on AI-derived designs), it hasn't created the technological leap forward that would justify its burgeoning cost.
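The data-quality point above is concrete enough to sketch. Below is a minimal example of the kind of automated data profiling a cleansing effort starts with: counting missing values and duplicate rows. The records and field names are invented for illustration:

```python
from collections import Counter

# Hypothetical customer records exhibiting typical quality problems:
# missing fields and exact-duplicate rows.
records = [
    {"id": 1, "email": "a@example.com", "country": "US"},
    {"id": 2, "email": None, "country": "us"},
    {"id": 3, "email": "c@example.com", "country": None},
    {"id": 1, "email": "a@example.com", "country": "US"},  # duplicate
]

def profile(rows):
    """Report missing-value counts per field and the number of duplicate rows."""
    missing = Counter()
    for row in rows:
        for field, value in row.items():
            if value is None:
                missing[field] += 1
    seen, dupes = set(), 0
    for row in rows:
        key = tuple(sorted(row.items()))
        dupes += key in seen
        seen.add(key)
    return dict(missing), dupes

missing, dupes = profile(records)
print(missing, dupes)  # {'email': 1, 'country': 1} 1
```

A real cleansing pipeline would go much further (normalizing `"US"` vs `"us"`, validating formats, reconciling conflicting duplicates), which is why the article frames this as a core project cost rather than a quick fix.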

The Hope Burns Eternal

The arrival of true AGI might still be many years (or even decades) away. In the meantime, leaders should focus on concrete, near-term applications where AI offers provable value.

It's a fascinating time in the field, and whether LLMs themselves will bring about the next revolution remains to be seen. One thing's for sure: AI's development won't end, but it's vital to set realistic expectations to avoid costly missteps.

Don't get me wrong, the advent of AI is huge and will shift how many things are done going forward. It may lead to new and better processing and algorithms that can propel AI even further. I'll be interested to revisit this article in five years' time to see if any of the predictions are correct. However, looking at the data coming out, it doesn't appear AGI is on the near horizon through the techniques we are leveraging at the moment.

Interesting perspective on the limitations of current AI development. You bring up a crucial point about the quality of data being a major factor in AI's effectiveness. How do you think we can improve data cleanliness and collection processes to take AI to the next level?

Reply
Vincent Valentine

CEO at Cognitive.Ai | Building Next-Generation AI Services | Available for Podcast Interviews | Partnering with Top-Tier Brands to Shape the Future

5 months

Insightful perspective. AI's limitations highlight opportunities for innovative computational frameworks. Bad data remains a bottleneck. What's your take on distributional models complementing scaling? Stephen Salaka

Reply
Prashant SK Shriyan

★ Global Director at QA Mentor ★ Empowering Businesses with Scalable, Future-Proof Software Testing QA Solutions ★ Thought Leader in Emerging Tech & Quality Assurance Innovation ★ LinkedIn Top IT Voice 2024 ★

5 months

Stephen Salaka - it's a fabulous take on AI and I agree with your thoughts and am pro-AI - but yes, totally in sync with the tone of this post - Pairing Large Language Models with other classifiers is like crafting a gourmet dish – it requires clean ingredients (data) and skilled chefs (governance) to achieve a truly delectable outcome.

Pete Grett

GEN AI Evangelist | #TechSherpa | #LiftOthersUp

5 months

AI applications are evolving rapidly, though immediate breakthroughs face challenges. Open discussions foster realistic expectations and identify opportunities. Stephen Salaka

Interesting perspective! AI faces challenges, but innovation can always surprise us.
