The AI Revolution Is A Boiling Frog

Reports of the failure of AI to transform the world are many. They’re also wrong.

Workers aren’t scared anymore that it’ll take their jobs, according to this story. Wired argued early this year that we should get ready for “the great AI disappointment.” Goldman Sachs issued a report in late June entitled “Gen AI: Too Much Spend, Too Little Benefit?” The latest issue of The Economist asked “what happened to the artificial intelligence revolution,” and went on to say:

“Beyond America’s west coast, there is little sign AI is having much of an effect on anything.”

People like to poke holes in widely-held beliefs or expectations. I get it, having been a lifelong Chicago Cubs fan and thereby deriving some sick satisfaction when I say “they’ll blow it” every time they win a few games.

But, and with apologies to Mark Twain, I say that reports of AI’s death are greatly exaggerated, for at least three reasons:

First, shifting tasks to AI is not as hard as we think or would like to believe.

Granted, ChatGPT and other LLMs are not aware of themselves, and finishing a sentence is not the same thing as curing cancer or deciding to annihilate us. Their “choices” of incorrect or invented data are the result of imperfect coding, not bad judgment.

They have much more in common with your word processor’s spellchecker than with HAL 9000.

But here’s the rub: It turns out that the vast majority of what we human beings think, and certainly much if not most of what we do during the day, is no more complicated than completing sentences. It might sound absurdly reductionist, but complex ideas are built upon simple statements.

No tech breakthrough is necessary to give AIs the ability to deliver those statements; it’s just a question of time (and available datasets) for training them to be more accurate and reliable.

Also, the threshold for being useful isn’t one of perfection but rather achieving the same level of imperfection that we humans achieve. In many ways, LLMs have already passed that benchmark.

Second, AI isn’t just a thing that companies and people use, but rather a capability built into everything.

And we’re already using it all the time. Internet search. Song suggestions. Smartphone apps. Customer service. Robots in factories and process management on screens. You can’t find a new job these days without passing muster first with an AI screener.

Then there are all the ways we’ve been using AI for years, considering that any “smart” device has some level of artificial intelligence built into it. They just don’t need ginormous Nvidia brains to function.

Every time you use cruise control in your car or your thermostat adjusts the temperature in your house to keep it constant, there’s AI working behind the scenes.

It just doesn’t get any respect.

These uses have flourished and will continue to do so, only we might not hear about them, since their benefits are marketed to us not as products of AI so much as outcomes of using so-and-so tool, device, or service.

Such uses will make things run faster, more often, and more reliably, which will have impacts both apparent and subtle on every aspect of our lives (as they already have).

It will take longer for companies to switch over command of their most sensitive operations to AI — you don’t want your utility’s AI hallucinating where it should route electricity — but that transformational shift is also only a question of when, not if.

Third, we’re being distracted by the enthusiasm of AI’s promoters.

The Economist article reported that the five biggest tech firms have invested $400 billion in AI infrastructure this year alone. Google reports that its energy-related emissions increased almost 50% over the past five years, and one estimate sees AI accounting for almost a tenth of all electricity used worldwide by 2030.

Hopes for reducing emissions are going down as demand for fossil fuels is going up. The latest gains in the stock market have been attributed to enthusiasm about AI.

Sure sounds like a mania wrapped in a fad inside a bubble to me.

There’s lots of money getting made promoting AI as the answer to anything...or the destruction of everything. Only there’s no legitimate way that AI today could do everything it’s been promised it will do tomorrow.

But tomorrow will come.

We’re seeing that transformation happen more slowly than its investors hope, but that’s been the case with every major technological shift: I can remember the frothy blather about the Internet reinventing everything in the ’90s, even though it was hard to see the proof in BBSs or file-sharing apps, or hear it in the screechy tone of a dialup connection.

And then we woke up one day and it was kinda everywhere, and we stopped talking about “it” and instead focused on the things we could do with it.

I think we’ll experience the rollout of AI as a similar set of implementations at work and in our homes. The media and VC-fueled startups might still talk about step changes, like AGI, but that won’t change the substance, speed, or inevitability of the underlying process.

The die has been cast. The fix is in.

The water is slowly heating up.

[This essay originally appeared at Spiritual Telegraph]

Nir Kossovsky

Chief Executive Officer at Steel City Re


Equally worrisome, Jonathan Salem Baskin, is that our behaviors are converging on AI's recommendations, which only reinforces the mean around which AI is anchored. When AI suggests phrases that I accept, we are both contributing to the next trove of documents that will educate the next-gen LLMs. Call it the Dolby® effect? Efficient, but dull?
