AGI has arrived
OpenAI released the GPT-4 model on Tuesday, an AI model that “can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities,” according to an announcement on OpenAI’s website.
Coincidentally, Tuesday was also Pi Day, since 3.14 represents the ratio of a circle’s circumference to its diameter. Starting today, March 14, 2023, Pi Day will also be remembered as the day that artificial intelligence effectively reached AGI.
AGI stands for "artificial general intelligence," which refers to a hypothetical AI system that can perform any intellectual task that a human can.
While five of my Silicon Valley CEO friends agreed with me in recent chats (yes, in a way, I believe AGI had arrived even before the GPT-4 release), I first want to share the critique from my friend Dax Mickelson, since I know it reflects what is on many people’s minds:
“AGI is a big statement. It needs to be shown with rigor.
Please provide more justification for your claim.
I’m pushing you on this because crossing over the AGI barrier
is no small thing. We really need to prove it’s happened.
Am I too rough/critical?”
I don’t disagree with Dax’s assertion that “AGI is a big statement.” I have tried GPT-4 and observed that it still made mistakes and was less capable than humans in a few areas.
But here are three reasons why I am convinced that AGI has arrived and humans have not admitted it yet.
First off, GPT-4 is seriously impressive. It is a multimodal AI model that can accept image and text inputs and produce intelligent text outputs.
Here is an example from the GPT-4 technical paper: amazingly, GPT-4 can look at a picture and answer “What is unusual about this image?” just as we humans do.
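For readers curious what such a multimodal call looks like in practice, here is a minimal sketch of how an image-plus-text request might be assembled in the style of OpenAI’s chat API. The helper function, model name, and image URL are illustrative assumptions of mine, not OpenAI’s actual demo code, and the sketch only builds the request payload rather than sending it.

```python
# Sketch: assembling a multimodal (image + text) chat request payload
# in the style of OpenAI's chat API. Names and URLs are illustrative.
def build_vision_request(question: str, image_url: str) -> dict:
    """Pair a text question with an image in a single user message."""
    return {
        "model": "gpt-4",  # hypothetical vision-capable model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_vision_request(
    "What is unusual about this image?",
    "https://example.com/unusual-photo.jpg",  # placeholder URL
)
print(request["messages"][0]["content"][0]["text"])
# → What is unusual about this image?
```

The key design point is that the image travels inside the same user message as the question, so the model reasons over both together rather than treating the picture as a separate attachment.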
I also watched a live demo by OpenAI president and cofounder Greg Brockman and I have to say, GPT-4 indeed performed intelligent tasks.
In one of the demos, Greg took a photo on his phone of a hand-drawn mockup for a simple website he had scribbled on paper. GPT-4 was able to convert that paper drawing into a website using HTML and JavaScript code.
In another demo, Greg asked GPT-4 a complex tax-related question. After Greg pasted in 16 pages of the U.S. tax code, GPT-4 was able to parse the dense passages of text and explain an answer. Greg said it took him 30 minutes to read and understand the relevant tax code, but it took GPT-4 only a few seconds to give a perfect answer.
Secondly, let’s stick with OpenAI’s CEO Sam Altman’s definition of AGI.
Sam’s definition of AGI is pretty clear: if the AI model has “the meta-skill of learning to figure things out and that it can go decide to get good at whatever you need,” and its ability is the “equivalent of a median human,” then we have AGI.
This is precisely what the GPT-4 model has achieved. See the chart below, where GPT-4 exhibits human-level performance on many professional and academic exams.
I even asked whether SVB should have bought large amounts of 10-year Treasury bills with its clients' cash deposits in 2020 and 2021, and GPT-4 said “the value of treasury bills can fluctuate based on changes in interest rates and inflation,” which is exactly the root cause of the recent SVB bank failure.
Hey, if GPT-4 can pass a Uniform Bar Examination with a score in the top 10% of test takers and help avoid the second-largest bank failure in history, I don’t know what else would qualify.
However, the most important reason I want to declare we are already in the “AGI era” is because of my final point.
Third, AGI does not imply flawlessness, as the median human is not flawless at all.
I completely agree that GPT-4 has not mastered all the skillsets of a human; there is no argument about that. At the same time, a median human has more flaws in their day-to-day actions and decision-making, even though a human has more skillsets in theory.
So here I am comparing the judgment humans actually demonstrate in practice (NOT their best or theoretical judgment ability) against GPT-4’s judgment.
Humans have a tendency to overestimate their own capabilities, especially when it comes to decision-making. We make many decisions every day, but many of these decisions are biased by factors such as emotion, self-interest, and inertia.
Consider that 90% of drivers believe they are better than half of all drivers, or that 90% of MBA students believe their scores are above the median. These beliefs are not based on reality, but rather on our own biases and self-perceptions.
Even large corporations like Meta, Google, and Amazon have been known to make decisions based on biases and incomplete information. For example, the recent layoffs at these companies raise questions about why they over-hired in the first place, and whether their leaders truly failed to foresee the slowdown.
This is exactly where GPT-4's AGI argument comes in. While the model may not be "smarter" than a human being, it is more objective and (sometimes harshly) fact-based. It is not subject to the same biases and emotional influences that humans are, and it can analyze data and make decisions based purely on the facts at hand.
In fact, in recent discussions with five CEOs in Silicon Valley, all five agreed that AI models are more "capable" than they are in many ways, precisely because they have all made mistake after mistake due to a lack of objectivity and fact-based decision-making. It is frankly very hard for CEOs to be “intellectually honest” enough, and they often make suboptimal critical decisions. One common scenario is that CEOs appease the “short-term,” rather than the “long-term,” needs of their investors, customers, or employees. If even Silicon Valley high-tech CEOs are this vulnerable, what about the rest of the humans on this planet?
AI keeps learning, even faster than humans
My friend Dax sent me the following tonight to defend his skepticism:
“A human makes a mistake, say they misread a quote or they trip.
A human can learn from the mistake, whereas the AI needs to be
rebuilt from training data by humans to learn.”
"Being 'right' doesn’t show intelligence, it only shows knowledge.
Intelligence, to me, is the ability to learn from mistakes.
It's a process, not an endpoint.”
I appreciate Dax’s critical and conservative view above despite his enthusiasm for AI.
But I still have a different perspective. Based on my years of professional experience, humans learn a lot but do not learn as fast as the GPT-4 model. GPT-4 has decent judgment as well as decent knowledge.
For instance, today Mark Zuckerberg announced there will be an organizational change at Meta in the coming months; I think Mark may want to hire GPT-4 as his AI assistant to help plan the upcoming re-org.
While there is still a significant skills gap and some degree of hype around GPT-4, it is clear that the model represents a significant milestone in the quest for AGI. Humans need to acknowledge that the generative AI model has reached AGI. And as we continue to develop and refine these models, we may find that they become increasingly valuable AI assistants to us for decision-making and problem-solving in a wide range of industries.
Finally, the “AGI assessment” from OpenAI CEO
OpenAI CEO Sam Altman has NOT declared AGI along with the release of GPT-4. Sam won't do it for a long time to come, because people can easily argue that GPT-4 lacks this or that skill today. Such a debate is fruitless.
Nevertheless, based on what I know about Sam Altman, he agrees that humans are vulnerable, biased, and often do not operate “at full capacity." While I have the conviction that the “practical effectiveness of GPT-4” is as good as, if not better than, the “practical effectiveness of humans," it is a subjective comparison. Even if Sam Altman agreed with me fully, there is no benchmark to defend it anyway.
While my friend Dax is still unsure about the arrival of the AGI era, he is actually very much looking forward to embracing this amazing AI technology after seeing the personalized ChatGPT (ZChat) video demo that I published two weeks ago.
“I’m VERY excited! As I started when first talking with you
I’m hoping your ZChat will put me out of the job…
BUT it will hopefully open up even better opportunities.”
Wow, what a comment: "put me out of the job... but it will hopefully open up even better opportunities."
AGI officially arrived on Pi Day in 2023, and it will open up more and better opportunities for everyone who embraces it. In fact, Sam Altman tweeted in 2022 that “AGI is probably necessary for humanity to survive.”