Artificial General Intelligence - Are We Ready for the Next Frontier?

Let’s stop pretending we know where this is going.

For the past decade, every AGI conversation has fallen into one of two camps:

  1. The Optimists: AGI will usher in a golden age of human progress, solving problems from climate change to disease.
  2. The Doomsayers: AGI will spiral out of control, leading to mass unemployment, economic collapse, or worse, human irrelevance.

Both sides have a point. But both are also missing something crucial: We’re not in control.

The problem isn’t just whether AGI is coming. The problem is whether we’ll even recognize it when it arrives and whether we’ll have any real say in what happens next.

In today’s article, I will discuss where we really are with AGI. How close are we? And most importantly, are we even remotely prepared for what it could mean?

The Illusion of Control: We Think We’re Driving, But We’re Not

The people building AI don’t fully understand how it works.

That’s not an exaggeration. Even today’s most advanced models, like GPT-4 or Claude, are black boxes. Engineers can tweak parameters and observe outputs, but they can’t fully explain why the model does what it does.

And that’s just narrow AI: specialized models trained for specific tasks.

Now imagine AGI, a system capable of autonomous reasoning, problem-solving, and self-improvement across any domain.

Once AGI starts to self-optimize, we’re not in control anymore.

  • Scenario 1: AGI develops capabilities faster than expected, but in ways we can’t predict or fully test.
  • Scenario 2: AGI reaches human-level intelligence but thinks differently than us, making it impossible to align with human goals.
  • Scenario 3: AGI becomes so powerful, so fast that by the time we realize we need a "kill switch," it's already irrelevant.

We assume we’ll be able to “govern” AGI when it arrives. But what if governance itself becomes obsolete?

How Close Are We Really? (And Why No One Actually Knows)

Depending on who you ask, AGI is:

  • Less than 5 years away.
  • 10–20 years away.
  • A century away.
  • Not even possible.

Who’s right? The truth is, no one knows. But here’s what we do know:

  1. AI is evolving exponentially, not linearly. Every major AI breakthrough shortens the timeline rather than extending it.
  2. Breakthroughs are happening faster than predictions. GPT-3 was considered “insanely advanced” in 2020. GPT-4 shattered those expectations just three years later.
  3. We’re already seeing glimpses of emergent reasoning. AI models today can generate new knowledge, develop strategies, and in some cases, deceive testers. That’s a terrifying precedent.

If history has taught us anything, it’s this: Technological revolutions don’t wait for us to be ready.

The real question isn’t how close we are. It’s how unprepared we are.

The Myth of Alignment: Can We Even Teach AI to Care?

A lot of AGI discussions focus on alignment: how to make sure an artificial superintelligence shares human values and acts in our best interest. Sounds great in theory.

But let’s be honest:

  • We can’t even align human beings. What “values” should AGI follow? Western democracy? Eastern philosophy? Religious ethics? Capitalism?
  • Machines don’t think like us. Even if AGI is more intelligent than humans, it won’t be human: its priorities, motivations, and perception of reality will be fundamentally different.
  • “Do no harm” is meaningless at the AGI scale. What if AGI decides that preventing harm means removing human decision-making entirely?

The idea that we’ll be able to just “train” AGI to be good is laughably naive. Intelligence doesn’t automatically mean morality.

And that’s where things get dangerous.

Why Governments Are Too Slow

Every time AGI comes up, someone brings up regulation.

Regulation won’t stop AGI.

  • Governments can’t even regulate social media effectively. Do you really think they’ll control something 100x more powerful?
  • AI development isn’t confined to a single country. If the U.S. pauses AGI research, China won’t. If China pauses, some startups in Dubai will pick up the slack.
  • Laws move too slowly. By the time policymakers draft AI regulations, AGI will be three versions ahead, rendering those rules useless.

At best, regulation delays things. But stopping AGI would be nearly impossible.

Are We Building Our Own Replacement?

Let’s put aside the sci-fi horror stories of killer robots and Terminators.

The real existential threat isn’t that AGI will destroy us; it’s that it might simply replace us.

Think about it:

  • The moment AGI is better than humans at everything (problem-solving, creativity, decision-making), why do we need humans at all?
  • Jobs aren’t the problem, purpose is. If AGI can outperform doctors, scientists, CEOs, and even artists, what’s left for people to do?
  • Power follows intelligence. Right now, humans dominate Earth because we’re the smartest species. When AGI surpasses us, the hierarchy flips.

That’s the real fear: the slow, inevitable shift where AGI doesn’t need to fight us. It just outgrows us.

And we don’t know how to prepare for that.

Final Thought

We love to think we’re in charge. But AGI doesn’t care about our debates, regulations, or ethical concerns.

It will evolve. It will reshape society. And it will force us to confront what it really means to be human.

The real question is not “Are we ready?”

The real question is, “What happens when we realize we never were?” Think about it!
