What if AGI happens and nobody notices?
VentureBeat
AGI: artificial general intelligence.
It's seen by those in the field as the "holy grail" of machine intelligence: a model or program that can outperform humans at most economically valuable tasks, in the words of OpenAI.
That's the end goal of the company's model development: producing AI that outperforms us. Why? As OpenAI stated on its website back in February 2023:
"If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility. AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity."
Obviously, we still have a long way to go to realize that vision.
Or do we? This week, OpenAI revealed a long-rumored new model. It was first thought to be codenamed Q* (or "Q star") within the company, according to a Reuters report from November 2023. At the time, here's how it was described:
"Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said."
Then, in July of this year, the outlet reported that OpenAI had actually renamed the project internally as "Strawberry."
Since then, there have been near-daily rumors and rumblings on social media, and from reliable outlets such as The Information, that OpenAI was working on Strawberry and close to releasing it. OpenAI co-founder and CEO Sam Altman even hinted at it last month.
And yet, when the model finally debuted this week under the name "o1," the overall response from the tech press and the wider public was muted; some might say it was greeted with a shrug, maybe even a yawn.
In part, that's because of OpenAI's own warnings about the model, which is available in two sizes: the larger, more powerful o1-preview and the smaller o1-mini.
Both are slower and more expensive than OpenAI's GPT series, and less capable in many areas: they aren't connected to the web, so they can't search for new information; they can't be used to make custom GPTs; and they can't analyze attached files or images, nor generate images as their predecessors can.
In fact, the o1 model "family," as OpenAI refers to the duo of new models, is among the rarest of tech releases I've seen in my 15+ years in media.
Most tech products promise to be faster, cheaper, and more powerful than their predecessors. The o1 series falls short on two of those three attributes, coming in both slower and more expensive, by roughly a factor of four, than the preceding models.
But think of it more like a format jump: the first iPod at $399 was much more expensive than your average portable CD player of the early 2000s. The first VCRs were more expensive than film projectors of the time. Etcetera, etcetera.
OpenAI bills the o1 family as a new class of models it calls "reasoning" models. As such, they are slower to respond because they spend more time thinking, trying to correct their answers in real time, and working through complex problems step by step, using what OpenAI and its supporters call "chain of thought."
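To make the idea concrete, here is a toy sketch, in plain Python rather than anything resembling OpenAI's actual implementation, of why working through a problem step by step can beat a single-shot guess: each intermediate result is written down and can be sanity-checked before moving on. The function and its numbers are hypothetical, chosen to echo the grade-school math the Reuters report mentioned.

```python
# Toy illustration of "chain of thought" style problem solving:
# decompose a word problem into explicit intermediate steps,
# record each one, and sanity-check the result before answering.

def solve_step_by_step(apples_per_bag: int, bags: int, eaten: int) -> int:
    """Grade-school word problem: given some bags of apples and a number
    eaten, how many apples are left?"""
    steps = []

    # Step 1: compute the total number of apples.
    total = apples_per_bag * bags
    steps.append(f"{bags} bags x {apples_per_bag} apples = {total} apples")

    # Step 2: subtract the apples that were eaten.
    remaining = total - eaten
    steps.append(f"{total} - {eaten} eaten = {remaining} remaining")

    # Step 3: sanity-check the intermediate chain before answering.
    assert 0 <= remaining <= total, "reasoning chain produced an impossible answer"

    for step in steps:
        print(step)
    return remaining

print(solve_step_by_step(apples_per_bag=6, bags=4, eaten=5))  # prints 19
```

The point of the sketch is the structure, not the arithmetic: a one-shot answer has no intermediate steps to inspect, while a stepwise chain exposes each sub-result, which is where reasoning models aim to catch and correct their own mistakes.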
Many AI critics have pointed out that the o1 series models still get many things wrong. And since they lack the capabilities of their predecessors, they don't seem likely to handle most economically valuable tasks anytime soon.
But those who have tried the new models over the last month — a select group of alpha testers handpicked by OpenAI — do report gains in coding and planning performance, and are bullish on the models' ability to get many more answers correct on third-party benchmarks. The o1 models even approach, and in some cases exceed, the performance of trained human PhD students in answering some questions correctly, according to OpenAI.
That's led some users to embrace the opposite view of the skeptics: that o1, formerly known as Strawberry or Q*, is actually bringing us much closer to AGI than we were before.
Some are even aghast that more people are not paying attention to the new o1 series — noting that for much of the world, the news was buried or not headlined at all.
Right now, there is ample evidence for all these perspectives. But as developers get their hands on the new OpenAI models and build new apps and services atop them, and users get to try them more, we'll certainly have a clearer picture of whether or not o1 is meaningfully better for many use cases.
I, along with I suspect many others, initially believed that if we got AGI, it would be like a light-switch flipping on — suddenly the world would change in a blink.
But that may not be the case after all. We may be moving toward it at a slower, more incremental pace (if we reach it at all). And that may be by design: OpenAI made a point of stating that this new o1 model family was previewed for the U.S. and U.K. governments and rigorously tested for "alignment" with its content guidelines.
As with getting a response from the o1 models now, we'll have to be patient, and wait maybe even longer than we thought or would like, to see for certain if o1 is the next step toward AGI — or a dead end.