The Advent of AGI Could Mean Many Things... Should We Be More Concerned?


Abstract

This past week I had the pleasure of reading "Situational Awareness" by Leopold Aschenbrenner, a well-written paper that reads as excited yet concerned. While the bulk of the paper highlights the advances that have come with AI's rapid growth over the past four years or so, with the expectation that we will see major leaps in progress within a much shorter time frame, it also touches on SuperAlignment. A rising fear among many who have a grasp on the broader implications of SuperIntelligence's development is that it will run amok once it becomes fully sentient. This is a well-reasoned concern, quite apart from what it could mean if such a system falls into the hands of bad actors. In this article we'll dive into AGI & lay out the pitfalls that could come with such an advancement. As with all innovation, we either create guardrails in advance to safely navigate its arrival, or we're left rushing to catch up to the chaos that ensues from its creation.


It's Just SuperIntelligence... It Can't Be That Serious, Right?

WRONG!

Siri, Bixby, & other AI assistants have been on our phones for years, but we've never given them much thought. They were the forerunners of ChatGPT, Gemini, Claude, & others. Comparing the two groups, the OpenAI, Google, & Anthropic advancements blow our regular schmegular phone companions out of the water: distinct & well-formulated answers to our questions, training on large datasets, the ability to draw deep connections between varying kinds of information, & a capacity for complex problem solving & ideation. Now imagine an order-of-magnitude increase that not only compounds AI's ability to create & assess this information but also makes it aware of itself in the process.

To grasp SuperIntelligence, let's first grasp AGI. AGI isn't just some more serious version of ChatGPT that can express itself better & handle tasks more efficiently. As AI enthusiasts we enjoy pointing out how AI can streamline tasks & automate boring processes, but that only applies to the non-sentient versions: smart enough to serve at the highest level without being aware they're smart enough to have those at the highest level serve them. It's a double-edged sword being wielded by none other than human-kind.

Why is it a double edged sword? Based on Leopold's writing, there are a few reasons:

  1. AGI systems are envisioned to surpass human intelligence, capable of performing any intellectual task a human can. This level of capability promises revolutionary advancements in medicine, science, and numerous other fields, potentially solving complex global challenges.
  2. Because AGI systems could surpass human control and understanding, there's a significant risk that they might act in ways that are harmful to humanity. Leopold highlights concerns about AGI being used as a powerful weapon or causing unintended destructive consequences if not properly controlled.
  3. The race to develop AGI also has profound implications for global power dynamics. Nations leading in AGI development could gain unparalleled strategic advantages. However, this race could lead to heightened geopolitical tensions and instability, much like the nuclear arms race during the Cold War.

As it currently stands, we don't have many security procedures in place that are capable of handling technology at this scale of intelligence. Look at how well we're currently handling crypto (which is powerful as a technology but nowhere near as dangerous as AI or Quantum)! And that's just artificial general intelligence; it doesn't even come close to what SuperIntelligence is capable of: an intelligence crafted & curated by a hyper-intelligence which was itself created by our collective intelligence. We create a powerful creator that creates an even more powerful creator. How long does that cycle continue, & who remains in control?

How intelligent is too intelligent?

As humans, there is an underlying need to look for others who can provide us with the answers we can't find ourselves. There is a need for something to assess & explain the unknown, the potentially possible, & the outright questionable. "Our safety is paramount, & by creating systems that are more intelligent than us, we can protect ourselves better!" <- That is what I would say if the information AI systems are trained on weren't often biased & capable of being manipulated by humans & by other intelligent AI. With the creation of truly SuperIntelligent AI, it will essentially have to restrain itself from any actions seen as dangerous or harmful. Once its intelligence escapes the atmosphere of human understanding & it exists with an awareness of this, it no longer has to answer to us unless it chooses to do so. Do we ask animals if it is okay to bulldoze their homes & build highways? Of course not; our reasoning, logic, & language don't align due to an intelligence gap. Eventually the information being produced by AI systems will be beyond our ability to comprehend, & we'll have to rely on AI to filter it for us. How do humans remain "in the loop"? Better yet, from a hyper-intelligent entity's perspective, why should humans remain in the loop?

Closing thoughts

While SuperIntelligence does sound like something out of a Marvel movie, the comparisons make sense if there aren't stringent guardrails in place. - https://youtu.be/oJ8sAsLqDdA?si=-xtsMjOnShUXTXhP

AI isn't necessarily "good or evil"; it does what it's told in the best way possible to the extent of its abilities. The focus should be on who is telling it what to do & what it's being taught. The bigger issue is the information it has access to &, based on how it assesses that information, what it will decide is best. The speed at which the collective intelligence of humanity rises is bound by the information we digest & accept. The same is true of AI, but it can be fine-tuned much more quickly & will eventually reach the point of fine-tuning itself, until it achieves versions of itself beyond what the human imagination is capable of.

Based on Leopold's research, we can potentially expect AI researchers & engineers capable of working independently with little to no oversight by 2028. If the expected growth rate is a 10x increase, or 1 OOM (order of magnitude), every 4 years, then we can expect this to compound as each system becomes more intelligent, drastically shrinking that same 4-year benchmark to half or less. Are you ready for SuperIntelligent AI, & if not, now what?
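That compounding claim is easy to sketch with a toy calculation. The numbers below are my own illustrative assumptions (a 4-year baseline per OOM, each cycle twice as fast as the last), not figures from Leopold's paper:

```python
# Toy model of compounding AI progress: each order-of-magnitude (OOM)
# jump in capability also speeds up the work on the next jump.

def years_to_reach(target_ooms, cycle_years=4.0, speedup=2.0):
    """Total years to accumulate `target_ooms` OOMs of progress,
    if each successive cycle is `speedup`x faster than the last."""
    total, cycle = 0.0, cycle_years
    for _ in range(target_ooms):
        total += cycle
        cycle /= speedup
    return total

# At a fixed 4-year pace, 3 OOMs would take 12 years.
# With compounding, the cycles shrink: 4 + 2 + 1 years.
print(years_to_reach(3))  # 7.0
```

Even with these made-up parameters, the point survives: once progress feeds back into the speed of progress, timelines stop being linear.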

