The Advent of AGI Could Mean Many Things... Should We Be More Concerned?
Abstract
This past week I had the pleasure of reading "Situational Awareness" by Leopold Aschenbrenner, a well-written paper that reads as both excited & concerned. While the bulk of the paper highlights the advances that have come with AI's rapid growth over roughly the past four years, with the expectation that we will see major leaps in progress within an even shorter time frame, it also touches on SuperAlignment. A rising fear among many who grasp the broader implications of SuperIntelligence's development is that it will run amok once it becomes fully sentient. This is a well-reasoned concern, even before considering what it could mean if it falls into the hands of bad actors. In this article we'll do some diving into AGI & lay out the pitfalls that could come with such an advancement. As with all innovation, we either create guardrails in advance to safely navigate its arrival, or we rush to catch up to the chaos that ensues upon its creation.
It's Just SuperIntelligence... It Can't Be That Serious, Right?
WRONG!
Siri, Bixby, & other AI assistants have been on our phones for years, but we've never given them much thought. They were the forerunners of ChatGPT, Gemini, Claude, & others. Comparing the two groups, the OpenAI, Google, & Anthropic advancements blow our regular schmegular phone companions out of the water: distinct & well-formulated answers to our questions, training on large datasets, the ability to draw deep connections between varying kinds of information, & a capacity for complex problem solving & ideation. Now imagine an order-of-magnitude increase that not only compounds AI's ability to create & assess this information but also makes it aware of itself in the process.
To grasp SuperIntelligence, let's first grasp AGI. AGI isn't just a more serious version of ChatGPT that expresses itself better & handles tasks more efficiently. As AI enthusiasts we enjoy describing how AI can streamline tasks & automate boring processes, but that only refers to the non-sentient versions: smart enough to serve at the highest level without being aware they're smart enough to have those at the highest level serve them. It's a double-edged sword being wielded by none other than humankind.
Why is it a double edged sword? Based on Leopold's writing, there are a few reasons:
As it currently stands, we don't have many security procedures in place capable of handling technology at this scale of intelligence. Look at how well we're currently handling crypto ( which is powerful as a technology but nowhere near as dangerous as AI or Quantum )! And that's just artificial general intelligence; it doesn't even come close to what SuperIntelligence is capable of: an intelligence crafted & curated by a hyper-intelligence, which was itself created by our collective intelligence. We create a powerful creator that creates an even more powerful creator. How long does that cycle continue, & who remains in control?
How intelligent is too intelligent?
As humans, there is an underlying need to look for others who can provide us with the answers we can't find ourselves. There is a need for something to assess & explain the unknown, the potentially possible, & the outright questionable. "Our safety is paramount, & by creating systems that are more intelligent than us, we can protect ourselves better!" <- That is what I would say if the information AI systems are trained on weren't often biased & capable of being manipulated by humans & other intelligent AI. With the creation of truly SuperIntelligent AI, it will essentially have to police itself against any actions seen as dangerous or harmful. Once its intelligence escapes the atmosphere of human understanding & it exists with an awareness of this, it no longer has to answer to us unless it chooses to do so. Do we ask animals if it is okay to bulldoze their homes & build highways? Of course not; our reasoning, logic, & language don't align due to an intelligence gap. Eventually the information being produced by AI systems will be beyond our ability to comprehend, & we'll have to rely on AI to filter it for us. How do humans remain "in the loop"? Better yet, from a hyper-intelligent entity's perspective, why should humans remain in the loop?
Closing thoughts
While SuperIntelligence does sound like something out of a Marvel movie, the comparisons make sense if there aren't stringent guardrails in place. - https://youtu.be/oJ8sAsLqDdA?si=-xtsMjOnShUXTXhP
AI isn't necessarily "good or evil"; it does what it's told in the best way possible to the extent of its abilities. The focus should be on who is telling it what to do & what it's being taught. The bigger issue is the information it has access to &, based on how it assesses that information, what it will decide is best. The speed at which the collective intelligence of humanity rises is bound by the information we digest & accept. The same is true of AI, but it can be fine-tuned much more quickly & will eventually reach the point of fine-tuning itself, producing versions that surpass what the human imagination is capable of.
Based on Leopold's research, we can potentially expect AI researchers & engineers capable of working independently with little to no oversight by 2028. If the expected growth is a 10x increase, or 1 OOM ( order of magnitude ), every 4 years, then we can expect this to compound as each system becomes more intelligent, drastically shrinking that same 4-year benchmark to half or less. Are you ready for SuperIntelligent AI, & if not, now what?
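To make the compounding intuition above concrete, here's a toy sketch of how the timeline shrinks. All numbers are illustrative assumptions for the sake of the example ( a 4-year baseline per OOM & a 2x research speedup per OOM ), not figures from Leopold's paper:

```python
def years_per_oom(baseline_years=4.0, speedup_per_oom=2.0, ooms=4):
    """Toy model: each completed OOM of capability multiplies research
    speed by speedup_per_oom, so every successive OOM takes less time."""
    times = []
    t = baseline_years
    for _ in range(ooms):
        times.append(t)
        t /= speedup_per_oom  # research gets faster after each OOM
    return times

if __name__ == "__main__":
    schedule = years_per_oom()
    print(schedule)       # [4.0, 2.0, 1.0, 0.5]
    print(sum(schedule))  # 7.5 years for 4 OOMs, vs 16 at a flat rate
```

Under these made-up parameters, four OOMs arrive in about 7.5 years instead of 16, & each subsequent benchmark takes half the time of the last, which is the "half or less" dynamic described above.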