Super AGI will arrive sooner than we think and we are not ready.
Image credit to https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html


For me, 2022 has been a year like no other, and a clear inflection point in human history.

What makes me say this?

Arguably, AI has matched the intelligence of an average human being.

And in some narrow domains, we've achieved superhuman performance.

Combine the two, and you have super artificial general intelligence (Super AGI).

Here are some highlights:

AI has scored in the 52nd percentile on the SAT


AI has won 1st place at art competitions


AI has rivalled us at competitive coding, destroyed previous protein-modelling attempts, and beaten us at some of the most complex games (Diplomacy, StarCraft and Go).

The big finale of the year, and one that's still getting people into a frenzy: ChatGPT and its ability to hold coherent conversations and generate information across a broad range of topics (and many other use cases).

This technology is so powerful that it's challenging Google Search; a technology that has dominated its sector for roughly 25 years.

Why worry? Your examples are only 'super' in narrow domains, but none of them are 'super general' yet.

There are two potential scenarios for the future development of super AGI:

  1. "A hard takeoff", in which AI rapidly achieves superhuman intelligence and takes over the universe;
  2. "A slow takeoff", in which AI intelligence increases incrementally and it therefore takes years for an AGI to achieve superhuman intelligence.

As I elaborate further below, we can argue that GPT-3 is 'roughly' comparable to 29/1000 of the human brain.

Memory and processing power follow Moore's Law. Based on our current trajectory, we can extrapolate that it will take roughly another 13.8 years for a future GPT model to be comparable to the human brain - this would technically be 'a slow takeoff'.
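
For transparency, here's a minimal sketch of the arithmetic behind that extrapolation. The doubling periods are my own assumptions; the 13.8-year figure above corresponds to roughly a 2.7-year doubling cadence:

```python
import math

# A minimal sketch of the extrapolation above, not a rigorous model.
# Assumption: effective AI "capacity" doubles at a fixed cadence (Moore's-Law
# style), starting from the article's estimate of GPT-3 at ~29/1000 of a brain.
current_fraction = 29 / 1000                         # GPT-3 vs. one human brain
doublings_needed = math.log2(1 / current_fraction)   # ~5.1 doublings to reach parity

for years_per_doubling in (1.5, 2.0, 2.7):           # assumed doubling periods
    years = doublings_needed * years_per_doubling
    print(f"{years_per_doubling:.1f} yrs/doubling -> ~{years:.1f} years to brain parity")

# Prints ~7.7, ~10.2 and ~13.8 years respectively; the 13.8-year figure
# corresponds to a doubling period of roughly 2.7 years.
```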

Here’s what’s different about innovation this time. It’s a question of scale.

At the moment, the use of generative AI primarily looks like this:

"the world’s leading engineers can apply AI that does a below average job to any problem"

ChatGPT is an example of taking this technology and making it accessible to the masses, though it still produces below-average results.

What happens when GPT4 arrives? (estimated this year - 2023)

If GPT-2 to GPT-3 was a 5x increase in performance, will GPT-4 provide the same boost?

What does ChatGPT at 5x look like?

And what happens when we change our previous statement to:

"anyone who can write a few lines of python can apply above average human intelligent AI to any problem"?

or even

"GPT-4 can apply AI to any problem".?

This is the exponential catalyst that could drive a 'hard takeoff'.

You mentioned that GPT-3 is at 29/1000 of the human brain. Isn't that a bit of a poor comparison?

There is some sense in comparing the computation power of the human brain to hardware, but it is believed that we need at least 10^16 flops to achieve human-level AI.

Nick Bostrom has suggested that a functional simulation of neurons could require up to 10^18 flops.

However, the current limitations in AI seem to be more related to memory bandwidth and capacity rather than computation power.

For example, a GPU with less computational power but more VRAM may run better models than one with more computational power but less VRAM.

The flops performance of an RTX 4090's tensor cores is around 1.3×10^15, which is significantly lower than the estimated flops needed for human-level AI.

Additionally, using tensor cores only leads to roughly a 25% increase in speed for Stable Diffusion, indicating that the limiting factor is not computation power but memory speed and capacity.

This means that an ML model running on a 4090 GPU is unlikely to have the equivalent power of even 1/1000 of the human brain, and could potentially be even lower.
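
To illustrate why memory rather than raw compute tends to be the bottleneck, here is a rough roofline-style sketch. The 1.3e15 flops figure is the one quoted above; the memory bandwidth value is my own approximation:

```python
# Rough roofline-style check that single-stream LLM inference is memory-bound.
peak_flops = 1.3e15            # flops/s (tensor-core figure quoted above)
mem_bandwidth = 1.0e12         # bytes/s (approximate 4090-class bandwidth, my assumption)

# For batch-1 autoregressive decoding, every generated token reads all model
# weights once: ~2 bytes/param (fp16) and ~2 flops/param, so the workload's
# arithmetic intensity is on the order of 1 flop per byte moved.
workload_intensity = 1.0       # flops per byte (order of magnitude)

ridge_point = peak_flops / mem_bandwidth   # flops/byte needed to be compute-bound
print(f"ridge point ~{ridge_point:.0f} flops/byte vs workload ~{workload_intensity:.0f} flop/byte")
print("=> the GPU mostly waits on memory, so extra flops barely help")
```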

When you take into consideration ChatGPT's performance vs. the VRAM and flops used:

The largest version of GPT-3, GPT-3 175B, has a VRAM requirement of roughly 700 GB.

This would need roughly 29 RTX 4090s, i.e. ~29/1000 of the human brain.
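
Making the back-of-envelope arithmetic explicit (the 24 GB of VRAM per RTX 4090 is my assumption; the other numbers come from the argument above):

```python
# Back-of-envelope version of the estimate above. 700 GB is the figure for
# GPT-3 175B; treating one 4090 as at most 1/1000 of a human brain follows
# the (generous) argument above.
gpt3_vram_gb = 700
vram_per_4090_gb = 24          # assumption: one RTX 4090 has 24 GB of VRAM
brain_fraction_per_gpu = 1 / 1000

gpus_needed = gpt3_vram_gb / vram_per_4090_gb          # ~29 cards
brain_fraction = gpus_needed * brain_fraction_per_gpu  # ~0.029
print(f"~{gpus_needed:.0f} x RTX 4090 -> ~{brain_fraction * 1000:.0f}/1000 of a human brain")
```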

But yes, this is likely an over-optimistic estimate, and there's another argument to be made.

Large language models are stochastic parrots that don’t understand language like we do.

Here's what's really interesting however:

When you combine these methods with multimodal training, as Gato did (mentioned below), then you could argue these models come much closer to understanding things the way we do.

Lack of compute and memory isn't the only problem, Chris. These large language models have already consumed most of the internet - we are running out of high-quality data.

While there is a limit to the current text datasets, and expanding them with high-quality human-generated text would be expensive, I'm afraid that's not going to be a blocker.

Multimodal training already completely bypasses text-only limitations. Beyond just extracting text tokens from YouTube, the video and audio itself could be used as training data.

The informational richness relative to text seems to be very high.

Further, as Gato demonstrates, there's nothing stopping one model from spanning hundreds of distinct tasks, and many of those tasks can come from effectively infinite data fountains, like simulations.

Not to mention other methods, such as supervised learning with human feedback (we have a lot of people who can provide it) and reinforcement learning that pits the model against itself.

Ok Chris. You're scaring me now. How soon do you think Super AGI will arrive?


To quote Benjamin Todd in his 80,000 Hours article about existential risks:

In 2017, 350 researchers who had published peer-reviewed research into artificial intelligence at top conferences were polled about when they believe that we will develop computers with human-level intelligence: that is, a machine that is capable of carrying out all work tasks better than humans.
The median estimate was that there is a 50% chance we will develop high-level machine intelligence in 45 years, and 75% by the end of the century.

This was taken back in 2017. Yes. 6 years ago. (Time flies).

If this survey was taken today, I would place a heavy bet that the median has moved forward to the 2030-2040 range.

My bet? 2035 if not sooner.

Why do you feel not enough is being done?

Aside from the fact there's a chance of having a super AGI in 12 years' time?

There are several reasons why humanity is not doing enough to ensure that super AGI does more good than harm. For one, many people are simply unaware of the potential risks associated with super AGI.

This is because the topic is still relatively new and not well understood by the general public. As a result, there is a lack of public awareness and understanding about the potential risks of super AGI, which makes it difficult for people (mostly the government and the folks in power) to take action to mitigate those risks.

This is evident when you see many people quote something like: "Innovation creates more, and more exciting, jobs than it replaces - and they often pay more".

Which, while historically correct, is outright wrong for this wave of technology.

Secondly, the development of super AGI is being driven primarily by economic and technological considerations. Many companies and research institutions are focusing on developing AI technologies that can improve efficiency, reduce costs, and generate profits.

While these goals are certainly important, they will take priority over concerns about the potential risks. As a result, there is a lack of emphasis on developing ethical frameworks and safeguards.

These reasons, alongside our history of doing things because we can even though we shouldn't, fill me with dread.

What can we do about it?

There are numerous measures humanity can take to ensure that super AGI does more good than harm:

  1. Develop a robust governance framework: Establishing clear guidelines and regulations for the development and deployment of super AGI can help ensure that it is used ethically and responsibly.
  2. Foster transparency and accountability: Ensuring that the development of super AGI is open and transparent can help build trust and accountability, and allow for greater oversight of its use.
  3. Encourage diverse perspectives: Ensuring that a diverse group of people are involved in the development and deployment of super AGI can help prevent biases and ensure that a wide range of perspectives are considered.
  4. Invest in research: Supporting research on the ethical and social implications of super AGI can help inform decision-making and ensure that super AGI is developed in a way that is beneficial to society.
  5. Engage in dialogue: Promoting open and honest dialogue between experts, policymakers, and the general public can help ensure that the development and deployment of super AGI is guided by the values and priorities of society.

But my voice is barely an atom in an ocean. I write this hoping to grab the attention of those far more influential than myself, to help make change where it matters.

Conclusion

We're about to enter a situation like Brexit: doing something terribly bad for ourselves, but on a global scale.

We've decided we're doing something, but we have only a very weak plan before we execute - and we will suffer the consequences with no way back.

While I'm confident in the long term and believe AI is likely to make an immeasurable positive impact on mankind, there is still a minor risk it could go very wrong.

Regardless of the long term, in the short term there is going to be a lot of anxiety and pain from obvious impacts such as mass job losses.

Universal Basic Income or "tax the corps" is not a robust plan.

AI is a risk on a par with climate change. What's different, however, is that it can also be our saviour.

In my opinion, tackling super AGI should be humanity's priority and focus, even above climate change.

At the risk of applying a logical fallacy similar to Pascal's wager: I could definitely be wrong about everything, and writing this makes me look like that crazy homeless guy holding a "the end is nigh" sign.

But the risk is far too damaging not to plan for the worst.

The opportunity is so great that we must try to make the best of it.

Image generated by Midjourney using prompt 'futurisitic uptopia architecture'
Eric Anderson

Helping ambitious engineers and tech pros to elevate their career | Upgrade your Linkedin profile, CV/resume, job search skills, and interview performance | GET YOUR DREAM JOB | Message me now

1y

The hype and acceptance will fuel massive investment (that will be looking for ROI)

François Piednoël de Normandie

Chief Chip Architect, ground up design for Automotive safe computing. UCIe representative. Level 2-4 Autonomous drive hardware. Jet Fighter Pilot, Oscar Bravo One.

1y

Are you sure? While we have scratched the edge of recognition, there is actually no implementation of common sense, and that is the hard part of bringing "Intelligence" into A.I... This is why people working on it, like me, call it "Machine Learning" and not "Artificial Intelligence". Frankly, we are very far from common sense: if you did not train today's AI with a solution set, there is no answer for common sense.

Adejare Emmanuel Oreoluwa Oluga

Art Tech for Radiant Titans in Media —> Fashion, Fine Arts, and Furnishings

1y

Super AGI is the Super App of the future; it is simply inevitable. Although I'm not the biggest fan of 'Ideologues' and 'Thought Leaders', I appreciate and value their impact on society as a whole. Right now, we probably need them more than ever, as things are changing too rapidly for many to get up to speed on all these developments. This is where I feel ideologues and thought leaders can fill the gap in the meantime and be a bridge as we transition to this emerging era of Co-Cognition. As for Super AGI, voices like Cal Newport and James Clear come to mind for how these tools can be leveraged for net positive effects in the near future. The far future is, however, too unpredictable. Great post and insightful articles. It's also great connecting with you. Kudos

Oliver Cronk

Sustainable Architecture & Responsible Innovation | #ArchitectTomorrow & Consultants Saying Things Podcasts | R&D / Technology Director | Speaker & Facilitator | MBCS CITP | ex Chief Architect, ex Big 4

1y

Excellent article Chris and one I highly recommend #architecttomorrow followers have a read of. I'm looking forward to our Podcast recording even more now!
