Overcoming the AI plateau
VentureBeat
It was a huge week for the burgeoning generative AI sector, and yet, in many ways, it felt minor compared to some of the hype we've encountered in the year and a half since ChatGPT burst onto the scene in November 2022.
In case you missed this week's news: on Monday, OpenAI, the company behind ChatGPT, announced a new macOS application version of the chatbot for desktop users, and a new underlying AI model to power the free and paid versions of that hit chatbot on the web, mobile and in the new Mac app.
GPT-4o is here
Called GPT-4o, the model offers increased speed, reduced cost, and an entirely new architecture for ingesting and responding to text, audio, and imagery, with video to come at some point.
It immediately sparked a divisive reaction among observers and AI industry workers and entrepreneurs. Some thought it was evidence of OpenAI reaching a plateau in its quest to make ever-more intelligent AI models, while others said it was "essentially AGI," or artificial general intelligence, the company's stated mission — which is an AI that outperforms humans at most economically valuable work.
To be clear, the release of GPT-4o is being staggered: the text and image analysis features are available now, while audio, image generation, and video are coming later.
So at present, nobody except those at OpenAI — or their trusted friends and allies, maybe some at Microsoft? — have had a chance to try out the full capabilities of GPT-4o shown off by the company in its presentation on Monday.
But even once we get the opportunity, OpenAI is clearly not positioning it as more intelligent than the older versions of GPT-4.
As Ethan Mollick, associate professor at the University of Pennsylvania's Wharton School and AI influencer, wrote on X:
"For what its worth, I guess that GPT-4o is not designed to be smarter than GPT-4, it is designed to be as good, but cheaper & faster which enables new modes, like voice interactions & free access for all.
We don’t know a lot about GPT-4o yet but it represents a different approach."
Still, some people believed OpenAI's failure to release a GPT-5, or even a GPT-4.5, was evidence that it doesn't "know how [to] produce the kind of capability advance that the 'exponential improvement' would have predicted," as AI entrepreneur Gary Marcus put it on X.
"Each day in which there is no GPT-5 level model–from OpenAI or any of their well-financed, well-motivated competitors—is evidence that we may have reached a phase of diminishing returns."
AI as airplanes
Concern that large language model (LLM) performance is plateauing has been raised in recent weeks and months, and it gained even more currency following the release of GPT-4o.
For example, Arvind Narayanan, director of Princeton University's Center for Information Technology Policy and a computer science professor, wrote on X that:
"In the late 1960s top airplane speeds were increasing dramatically. People assumed the trend would continue. Pan Am was pre-booking flights to the moon. But it turned out the trend was about to fall off a cliff.
I think it's the same thing with AI scaling — it's going to run out; the question is when. I think more likely than not, it already has."
OpenAI's imploding superalignment team disagrees
On the flip side comes the idea that AI models are getting too powerful, too fast, without appropriate safeguards.
That's roughly the view espoused today by Jan Leike, the former co-lead of OpenAI's superalignment team, which was tasked with developing guardrails for and "aligning" yet-to-be-unveiled superintelligences: AI models that exceed human intelligence across a broad swath of tasks, a step beyond the AGI that OpenAI has made its stated mission.
Leike, who resigned from OpenAI after about three and a half years there, on the heels of fellow superalignment team co-leader and OpenAI co-founder Ilya Sutskever, went on a tear on X.
Leike posted about how he had been "disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point," and that "we urgently need to figure out how to steer and control AI systems much smarter than us," but that "over the past years, safety culture and processes have taken a backseat to shiny products."
As it turned out, apparently the entire superalignment team has either resigned or been moved to other efforts at OpenAI, according to Wired (where my wife works as editor-in-chief).
This jibes with reporting from various outlets on Sutskever's move, along with OpenAI's former board of directors, to briefly oust Sam Altman as CEO last year, before an employee revolt and pressure from Microsoft and other investors brought him back and led the board to step down instead.
Apparently, a big part of that conflict was over the fact that OpenAI was seeking to commercialize its tech rather than spend resources on developing more safety measures. But this does not necessarily imply those safety concerns were valid.
Google's flurry of AI announcements greeted with a shrug
Separate from OpenAI's drama, but just across the metaphorical road in the AI neighborhood, Google hosted its annual I/O ("input/output") developer conference this week and released a flurry of AI updates and new models across its existing product line.
Google Search in particular is being upgraded for everyone with AI-generated results, to mixed and sometimes hostile reactions.
Even among the AI influencers, entrepreneurs and workers who post regularly on social media and X, Google's AI updates seemed to garner a largely tepid response. Many focused instead on its rambunctious opening act, the DJ and performance artist Marc Rebillet, known for his almost comedic antics. At I/O, he emerged from a giant coffee cup and bounded around the stage in a bathrobe.
Part of the reason for the dismissive attitude I saw toward Google's I/O announcements was that many were not available day-of, and it was unclear when, if ever, the public would get their hands on them.
Google has cultivated a reputation among tech observers and aficionados as showcasing many interesting technologies but ultimately failing to deliver, nurture, or even maintain them, such that there is even a website called the Google graveyard dedicated to all its abandoned efforts over the years.
The lack of interest in Google's announcements was less a reflection of AI or its limitations — though I wouldn't be surprised if many people are simply overwhelmed by the mere mention of the letters now, as there have been so many AI announcements across the board in the last 1.5 years — and more a reflection of a general souring toward Google among techies.
It doesn't help that its efforts in Gen AI have backfired publicly, such as when its Gemini image generation capabilities produced ahistorical and racially inaccurate imagery and text responses, inflaming Silicon Valley's contrarians, conservatives, and libertarians.
Unfamiliar terrain
Where does this leave the AI industry?
With lawsuits and legal actions rising against AI companies , and many people speaking out against AI data scraping policies , the idea that AI might not be performing up to the lofty expectations set by its own makers is of course an appealing one.
I'm willing to entertain the idea that AI may have reached something of a technological plateau with current training methods and computational resources, but ultimately, I doubt it.
More important, I think, is the suggestion that it may be falling short of users' performance expectations.
In the weeks and months following the launch of GPT-4 in early 2023, it was fashionable among tech leaders such as Bill Gates and Reid Hoffman (both AI investors) to suggest that Gen AI was as revolutionary as the internet itself.
Today, those pronouncements seem overblown. How has AI changed your day-to-day? How has it changed most people's? Compared with October 2022, the month before ChatGPT launched, the answer is probably "not much."
But with the advent of the ChatGPT Mac desktop app — which can ingest screenshots and watch you take actions live on your computer screen using screen recording permissions — more and more people may be able to start finding a use for these models in their workflows.
The biggest challenge facing AI adoption (imo)
The truth of the matter is, despite tech leaders' pronouncements that Gen AI is a revolution, it's a difficult one to describe, and hence to sell, compared with the technologies that preceded it. The PC let you create and work like a pro right at home.
The internet delivered information on demand to your eyeballs, right in your house, giving you the power of a library and then some without ever leaving your desk. The mobile phone put that power in your pocket and gave a direct, instant line to your friends and family members around the world, with you wherever you went.
What does Gen AI give you — except a sassy, sometimes flirtatious, hallucinating voice that gets things wrong frequently enough that it can't be fully trusted with anything of critical importance?
Explaining to people how Gen AI can make their lives better in the day-to-day — beyond streamlining homework and making illicit deepfake pornography — will be the next challenge for model providers and developers.
But I believe the potential is actually there for Gen AI chatbots like ChatGPT and even Google's new implementations to make people's lives easier, better, and more creatively fulfilling.
The problem is that AI models are so flexible and can do so many things that it's hard to sell them as solving one key problem or offering one clear advantage. Trained properly, they can help analyze medical imagery, streamline procurement for government agencies, and, of course, create new digital content faster and more easily than ever before. AI can do many things really well, or help humans do them faster, even if it sometimes gets things wrong or hallucinates.
The familiarization
Now that OpenAI is making GPT-4o — its newest, fastest, most capable model — available for free to ChatGPT users, more cheaply than older models to developers, and in a dedicated app on the Mac desktop, I expect AI to be integrated into many more apps, giving people a much better idea of what they can do with it and how it can help them.
They'll actually get their hands on it and find out for themselves what it's good for. Figuring out how to make actual, productive use of Gen AI is, I think, a highly personalized and individualized journey, much like setting up your own social feeds of friends.
You, the user, must be willing to invest some time and energy in it. It's not as immediate or obvious a technological asset as mobile, the internet, or the PC was. You'll need to experiment with Gen AI, try it and see how it fails at certain things and surprises you by doing well at others, before you can fully buy into it. You need to familiarize yourself with what AI is capable of and where it falls short. While many techies have already started this process, those who aren't embedded in the industry ("normies"), or aren't young and trying to cut corners on their homework, are still just getting started.
But once you do familiarize yourself with AI chatbots and tools, I do think they will make a series of small positive differences, and ultimately add up to a major benefit in your life — and certainly in your work. You just gotta dive in and get started. Don't be afraid!
For the AI industry to overcome its perceived plateau of progress, users must overcome theirs — their unfamiliarity with the underlying technology, their questions about how it will help them, and why they should even pony up a few bucks a month for it (ok, at least $20 in the case of ChatGPT Plus). But I think they can and will get there, and making the AI tools more broadly accessible in more formats — as OpenAI did and Google tried to this week — is a good start.
That's all for this week. Thanks for reading, subscribing, sharing, and just being you (as long as you're not an asshole). Peace out for now.