Is Strawberry Already Rotten? The GenAI Subprime Crisis, & Altman's Circus

In the past two months it has become clear that the markets know the bloom is already off the AI rose. With the launch hype around OpenAI's extremely overhyped Strawberry, we are once again asking: is Strawberry already fundamentally rotten? Are GenAI business models unsustainable? What damage could, or will, that do? Regardless of what AI profits Eric Schmidt promises at Stanford.

How deep is the trough of disillusionment, and how pervasive is the magical thinking?

I have now transitioned from GenAI explorer to GenAI skeptic because of the core issues and irrational thinking I am seeing. The numbers and risks here make the 2000s' irrational exuberance look like a kids' party. Other branches of AI, like machine learning, remain highly useful, but the handwriting is on the wall for the unprecedented GenAI hype.

Most of this post draws on Ed Zitron's latest, The Subprime (Gen) AI Crisis, and a few other sources. In it he asks what damage this new modality, the broader overhype, and the eventual, inevitable backpedaling could cause. Ed's warning: it's coming sooner than you think.

OpenAI will have to raise more money than any startup ever has just to survive. It is converting to a for-profit corporate structure to attract investors. A warning sign is that it is having to raise from the UAE or the Saudis, because no one does that because they want to.

Strawberry, the OpenAI model o1, due to be released as a preview in the next two weeks, is the first of their new reasoning models that can answer more complex questions. It's a standalone offering. It thinks before responding, uses a new training approach, can explain its reasoning, and is much better at math. Still, it only hallucinates less in some cases; they have not solved that problem. It's a reset, yet it does worse on factual knowledge than GPT-4o. It's not human thinking, but it "feels" more human than prior models. A sampling of developer feedback is decidedly mixed.

Right now, LLMs are not that smart; they cannot reason. They predict sequences of words based on patterns learned from vast amounts of training data. As OpenAI reportedly looks to raise more funding, its momentum depends on further research breakthroughs. The company says it is bringing reasoning capabilities to LLMs because it sees a future of autonomous systems, or what many see as the holy grail for productivity: agents that can make decisions and take actions on your behalf.
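To make "predicting words from patterns" concrete, here is a toy bigram counter of my own, offered purely as an illustrative stand-in: real LLMs are neural networks trained over tokens at enormous scale, but the underlying objective, pick the likeliest next token given what came before, is the same.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "generate"
# by always picking the most frequent follower. This is the crudest
# possible version of next-word prediction; no reasoning is involved.
corpus = "the model predicts the next word and the next word follows the model".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("next"))  # the word the "model" predicts after "next"
```

However sophisticated the architecture, the output is still pattern continuation, which is the point the paragraph above makes.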

Will Strawberry Deliver?

Unfortunately, they did not demonstrate solving problems where the solution was not known in advance. The model is more prone to hallucinations and less inclined to admit when it does not have the answer. That means users may see the displayed logic and trust it more even when it is completely wrong. The reinforcement training process is described as thinking and reasoning when in fact the model is guessing at the correctness of each step. At least humans make guesses based on real-world experience rather than an inelegant mathematical flail. It has failed many simplistic tests.
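To illustrate what "guessing at the correctness of each step" means, here is a minimal sketch of step-scored chain selection. Everything here is invented for illustration, the chains, the per-step scores, and the selection rule; OpenAI has not published its actual training procedure:

```python
import math

def chain_score(step_scores):
    """Combine per-step correctness guesses; log-space avoids underflow
    when multiplying many probabilities together."""
    return sum(math.log(s) for s in step_scores)

# Hypothetical candidate chains of thought, each step tagged with a
# made-up "probability this step is correct" from a scoring model.
candidates = {
    "chain_a": [0.9, 0.8, 0.95],
    "chain_b": [0.99, 0.4, 0.99],
}

# Pick the chain whose steps the scorer guesses are most likely correct.
best = max(candidates, key=lambda name: chain_score(candidates[name]))
print(best)
```

Note that nothing in this selection verifies the steps against reality; the "reasoning" is only as good as the scorer's guesses, which is the article's complaint.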

Realistically, o1's reasoning capabilities are relatively slow, not agent-like, and expensive for developers to use. Is that worth calling a breakthrough? Nope; it's clunky, and smart investors won't be fooled.

And not only does it suck, it's also expensive. Put simply, the generative AI business model does not work because it does not solve enough problems to justify its costs. Training data is running out. Hallucinations appear impossible to resolve, making the technology ultimately unreliable. Microsoft is barely selling Copilot: fewer than 1% of its 440 million seats are paying for it. Microsoft is also discounting its GPU-per-hour rate to OpenAI to $1.30, compared with the $3.40 to $4.00 most customers pay, substantially understating what it really costs to run OpenAI. Are the hyperscalers really making any money from this? If they were, they would be talking about it more.
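A quick back-of-the-envelope calculation shows how much that discount can understate costs. The per-hour rates are the figures cited above; the GPU-hour count is an arbitrary example, not a real consumption number:

```python
# Rates cited in the article ($/GPU-hour).
discounted_rate = 1.30                       # reportedly charged to OpenAI
market_low, market_high = 3.40, 4.00         # typical customer rates

hours = 1_000_000                            # hypothetical GPU-hours, for scale only

# How much cheaper the discounted bill looks versus market pricing.
understated_low = (market_low - discounted_rate) * hours
understated_high = (market_high - discounted_rate) * hours
print(f"Costs understated by ${understated_low:,.0f} to ${understated_high:,.0f} "
      f"per {hours:,} GPU-hours")
```

At roughly a third of market rate, every million GPU-hours billed hides over two million dollars of compute cost somewhere on someone's books.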

Will Strawberry save them? Does it live up to a glimpse of the future? While the model now checks the steps it takes to produce an answer rather than just spitting one out, it's not a breakthrough. The tell is that the math and science problems used in the demos had known answers, allowing the model to guide the chain of thought through each step. So the next-generation model we have been hearing about since last November is DOA. Smell the desperation?

In addition to the cost and energy issues, lawsuits are progressing against Stability AI, DeviantArt, and Midjourney. If any of these prevail, it will be a disaster for all the top GenAI players, because every model would need to be retrained from scratch to exclude the copyrighted content, reducing their efficacy even further. To quote Ed Zitron: "The LLMs at scale are unsustainable, these companies have effectively stolen from millions of artists and writers and hoped they'd get away with it."

Where are the revolutionary ideas? This is not real artificial intelligence. It's useful, yes, but hardly revolutionary.

Are LLMs a technical dead end? Can they be combined with other approaches to reach the agent grail? This may be a new modality that makes questionable progress toward human-level intelligence, but you can count on the overhyped media cycle to keep exaggerating the real value, and it is clearly far too early to make any investment decisions based on it. But that is why they kept Altman, the new P.T. Barnum.

Longer-term indicators are also negative, as more leaders flee OpenAI. At least Mr. Long-Standing-Pattern-of-Being-Opaque-and-Lying had the sense to step away from leading the Safety Committee yesterday. Maybe we will avoid an AGI disaster... Nah... If anyone will lie about this, he will, all while selling the next shiny new product to suckers. He made former employees sign NDAs preventing them from saying anything negative about the company. Good luck with that. Maybe he can start selling perpetual motion machines next.

What Could the Financial Collapse of GenAI Cause?

The NVIDIA correction was the single biggest rout in US market history, an indicator of AI's distorting influence on the markets.

Microsoft, Amazon, and Google will keep taking a beating from the markets if they can't show increased revenue from their investments in GPUs and new data centers. If GenAI doesn't deliver on its overhyped promise, what will they do with those assets?

Are they out of new ideas? Did they try GenAI as their next big thing and find out it's not useful enough? It seems customers are already telling them exactly that.

What happens next? Do they pull back capital expenditures related to GenAI? Or, far worse, desperate for new growth, do they keep cutting costs to fund this folly through layoffs? Does it become another Meta "year of efficiency"? The viable use cases do not support CEOs' current plans to bring in AI consultants. Perhaps CEOs have started to realize that, since PwC just laid off 1,800. But that won't console all the other people laid off prematurely by CEOs anticipating big productivity gains from AI.

When will Wall Street start punishing Big Tech for its sins of lust? Soon. How bad will it be? Very bad. Will some of them remain committed to the idea that AI is the future and commit suicide by it? Perhaps. Big Tech has been infiltrated by management consultants and is no longer run by people who used to build things. Microsoft appears deeply at risk: Copilot usage is far from expectations, and leadership only funds projects if they have AI taped to them. Employees lament that griping about a lack of bonuses or upward mobility in an organization fixated on chasing an AI boom is useless, because Satya doesn't care.

Tech CEOs' tone-deafness to customers who don't want to pay for all these new AI features is the biggest indicator of how unmoored from reality this has become. There is no real differentiation between generative AI integrations; they all work the same way, generating stuff based on other stuff rather than intelligence. So many company web pages are word salads and other nonsense. Red flag.

Gartner now predicts that 30% of GenAI projects will be abandoned after proof of concept by 2025. Price pressure will increase on already-negative margins that only the biggest players can afford to sustain, making startups the first casualties.

But, as with crypto and the metaverse, many have decided to join the AI party that will supposedly automate everything, without it ever being proven to have a path to do so.

Why is this happening? It's not a new strategy; it's about increasing share of the customer wallet. Create your own platform or ecosystem and make it hard for customers to leave. Build an ecosystem dependent on a few hyperscalers, positioned as a magical tool that can plug into almost anything and make it something different. Sell new stuff at all costs.

More people have woken up to Big Tech's charlatan behavior. Are they really out of new ideas for creating things of real value? What they are selling now is so heavily discounted that it will catch up with them. Will the price increases hold? Not unless real problem-solving value is created, and that gap seems far too wide to bridge.

Ed closes with: "The tech industry is building toward a grotesque reckoning with a lack of creativity enabled by an economy that rewards growth over innovation, monopolization over loyalty, and (clueless) management over those who actually build things."

There's that clueless management over those who actually build (useful) things again.

Give me engineers over MBAs any day.

Sources:

Ed Zitron - The Subprime AI Crisis, www.wheresyouredat.at/subprimeai

Adam Conover and Ed Zitron - The AI Bubble Is Bursting with Ed Zitron (YouTube)

The Verge - The rumored 'Strawberry' model is here, and the company says it can handle more complex queries — for a steep price

Bloomberg - OpenAI Fundraising Set to Vault Startup's Valuation to $150 Billion

Mashable - Former OpenAI execs call out the company's lack of transparency

The Information - A trio of OpenAI leaders depart

CNBC - Recent data shows AI job losses are rising, but the numbers don't tell the full story

Rich Heckelmann

Effectively Bridging Technology Development, Marketing and Sales as Product Portfolio Leader, Pragmatic Marketing Expert, AI Product Management, Product Owner, Scrum Master, Operations, QA and Marketing AI Strategist.

6mo

A second red flag is the safety and ethical implications, not least of which is reasoning transparency. Its problem-solving capabilities could be used maliciously if not regulated. Monitoring for unethical use will require ongoing checking; how will that be managed? What are the standards for identifying exploitation? OpenAI's management history has already shown us that fast and loose will not work here, and with the urgent push to profits, expect shortcuts to be taken.

Rich Heckelmann


6mo

The media coverage of Strawberry has been generally positive; however, a few red flags have surfaced. OpenAI has threatened to ban users who probe its reasoning models. It hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model. What are they hiding, and why? I am sure they have filed a patent for this, so what's the problem? All that will do is entice hackers to find out. Using the term "reasoning trace" in a conversation will get you a warning email. In a post titled "Learning to Reason With LLMs" on OpenAI's blog, the company says that hidden chains of thought in AI models offer a unique monitoring opportunity, allowing them to "read the mind" of the model and understand its so-called thought process. Those traces are most useful to the company if left raw and uncensored, but that might not align with the company's commercial interests for several reasons. "For example, in the future we may wish to monitor the chain of thought for signs of manipulating the user," the company writes.
