The AGI hype = Bored Apes
Marco van Hurne
I build Agentic AI companies | Data Science Strategist @ Beyond the Cloud | Data Governance | AI Compliance Officer Certified
I speak to a lot of businesses about AI, and particularly GenAI, and I’m sensing a hype fatigue. Part of this is due to the challenge of bridging the gap from PoC to production, but an even larger challenge is the “bro-hype” coming from Silicon Valley. A lot of that hype comes from exactly the same people who brought us Bored Apes and blockchain as “the future”, so you can excuse business folks and IT executives if they roll their eyes when those same people start making “society-changing” predictions.
Before we start!
If you like this topic and you want to support me:
This isn’t AI people hyping, it’s money people
Back in 2022 I was concerned that hype would cause this issue, and I’d argue it sort of did: one of the reasons that GenAI exploded was that previous hype hadn’t really delivered. But there is a key difference between then and now. What I was worried about then was real AI experts getting carried away with the potential and over-selling things that they had built.
When Yann LeCun and Gary Marcus got into an argument back then, it was over the technical details of what had been built and the interpretations of implementations that actually existed.
That worried me because it felt like such arguments and over-selling would lead to an expectation gap. But man, was I wrong. If I thought that was hype, I really wasn’t ready for the nonsense being spouted right now.
Money men hype like it’s a Bored Ape
NFTs and cryptocurrencies have been one of the biggest scams, and sadly it was a scam that exploded while people were at home, disconnected and worried about money. And boy, did some folks exploit that, selling the idea that slapping a blockchain pointer at a JSON document that pointed to a record was somehow a worthwhile thing.
I could list out a legion of articles from people who are today declaring AI to be the future who two years ago were proclaiming “Web 3.0” (or Metaverse) to be the future, and on many occasions convincing business people to make investments in those things as a result.
One of the jokes I had at the start of last year when people commented that everyone was talking about AI was:
How to judge your AI expert, just check on LinkedIn if last year they were an expert selling Metaverse or Blockchain
I was never selling Blockchain or Metaverse
Hyping the imagined
My problems in 2022 with the hype were that people who built things were over selling what they built. This isn’t the problem today.
People are making up what they think could be done in future and using that as a justification
Whether it is Elon Musk saying that AGI will arrive “next year” (this from the man who, seven years ago, said FSD would arrive),
or that AGI is the pursuit of God.
There is a common theme running through this hype: that this is going to solve everything. Climate change, cancer, and so on.
Now AI can help, but the hype here is almost that solving these problems is an emergent property of some future mega-AI, rather than something we can start directly addressing today. This is a problem.
This isn’t arguing about whether or not an AI solution can do X; it’s arguing that an imagined AI solution can do X. This effectively pushes a lot of things into the “solved in the future” category, and critically it claims that these large-scale models are “the way” to achieve those aims. What we now have, pretty consistently, is AI researchers arguing with money folks over what is and isn’t possible.
Subscribe to the TechTonic Shifts newsletter
Is jam tomorrow to avoid regulation today?
One of my concerns with this futures-only mentality is that it is being used to push regulation towards a mythical existential threat, rather than the very real threats that current AIs bring. Only a year ago there was a big noise around the idea of “stopping” the growth of AIs, again with the notion that the current stuff was fine, but what comes next was the problem.
It is a very smart move to call for regulation, but always regulation of what is yet to be rather than what already is.
(that threat was reversed, but remains interesting when put alongside other statements).
Slapping GenAI on everything
Another trend I’d say is damaging GenAI is the half-baked “sidecar” GenAI approach, which at best often boils down to “we know what RAG is, but we aren’t sure what a chunking strategy is”. Some vendors out there are doing good things, but sometimes it really is just a ChatGPT bot slapped onto an application, with, if you are lucky, a poor-quality RAG implementation. These things remind me a lot of data virtualization: people are super excited in the demo, super impressed with the PoC, and then after a month or so of using it they are cursing everything that it doesn’t do.
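To make the “we aren’t sure what a chunking strategy is” point concrete, here is a minimal sketch (not any vendor’s actual implementation; the document text and sizes are invented for illustration) of why chunking matters in RAG. A naive fixed-size chunker splits facts across chunk boundaries, so no single retrieved chunk contains the full answer; even a simple sentence-aware chunker keeps each fact intact.

```python
def naive_chunks(text: str, size: int = 40) -> list[str]:
    """Fixed-size character chunks: cheap, but splits mid-sentence."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def sentence_chunks(text: str, max_size: int = 80) -> list[str]:
    """Greedy sentence-aware chunks: packs whole sentences up to max_size."""
    chunks, current = [], ""
    # Crude sentence split on ". " / "? " boundaries, enough for a sketch.
    for sentence in text.replace("? ", "?|").replace(". ", ".|").split("|"):
        if current and len(current) + len(sentence) > max_size:
            chunks.append(current.strip())
            current = ""
        current += sentence + " "
    if current.strip():
        chunks.append(current.strip())
    return chunks


# Hypothetical policy document for a RAG knowledge base.
doc = ("Invoices are approved by finance. Refunds over 500 euros need a manager. "
       "Contracts renew annually.")

# The naive split cuts the refunds rule across two chunks, so no single
# chunk can answer "who approves large refunds?". The sentence-aware
# split keeps that rule in one retrievable piece.
print(naive_chunks(doc))
print(sentence_chunks(doc))
```

In a real implementation the chunks would then be embedded and indexed; the point here is only that the chunking step, not the chatbot on top, decides what the retriever can ever find.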
Yet these software companies know it is essential for them to hype their GenAI credentials, which leads to further disappointment as business users find out the promises were extremely hollow.
Why this hype damages AI today
There are two ways that this hype is damaging the view of AI within businesses today.
The first is the “blockchain” or “metaverse” style of hype: the idea that if only everyone would adopt this mythical technology, then at some point in the future it will be great.
The second is that it can be heard as saying “you can’t help with climate change using current AI approaches”, or that current approaches just aren’t good enough for real problems. This is, for me, more dangerous, as it leads to unhealthy skepticism and potentially also to over-confidence that nothing can go too badly wrong with these “low-powered” current AIs.
I’ve had conversations with multiple folks in the last month or so who are absolutely getting jaded by this “AGI” and visionary hype, and the huge amounts of capital being thrown at general purpose AI with only a distant promise of relevance.
AI works, but the noise is going to drive bad decisions
AI can solve lots of things today, and will solve more in future. Will that be via mega-model general-purpose AIs? Or via single- or limited-purpose AIs working within a goal-oriented agentic framework? Personally I’ll bet on the latter, but the hype is firmly behind the former. It pushes the idea of “single” solutions and mega-implementations as the way forward, and that corporate intelligence is somehow an emergent behaviour of knowledge-management repositories. That is about as likely as general intelligence being an emergent property of Reddit posts.
AI will have power in businesses and it will solve lots of problems, but that will not be done by deploying a single model that magically understands your business and magically makes the best decisions for your business and its strategy. It will require much more rigour, much more focus and much more control. Even if the mythical AGI does appear, I’m willing to bet that it will still require every one of those things, and possibly a lot more.
Don’t believe the hype, AI today can be boringly successful, not “despite” AGI, but because it absolutely isn’t.
Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn rewards your likes by showing my articles to more readers.
Signing off - Marco
Top-rated articles:
Researcher, Consultant, Professor - Design and Artificial Intelligence
I think you are right on point, Marco. I've been saying for a while now that predictions of AGI are poorly thought through. I call AI "dumb-smart". It's a lot like the classic savant syndrome, where humans have a very high skill in one specific area (like identifying the day of the week of arbitrary calendar dates), but this seemingly miraculous ability often comes with significant social or intellectual impairments, like most AI ;~) My skepticism of AGI comes from seeing that building useful AI still requires a huge amount of human labor to draw out and codify expertise and foresight from actual humans. This is surprisingly hard and often produces brittle systems. I'm reminded of the failures of the '80s "expert systems". I agree that current AI can be very useful and there is high potential for the future. But like you, I'm worried that the false hype and hype fatigue will bring on yet another AI winter and cause us to ignore some very real problems with the AI we have now.