AGI: How Will We Know?
Generated by Grok AI (x.com)

The goal of this brief post is simple: to explain how you might know when AGI is here. But before we dive into timelines and possibilities, let's address the elephant in the room: the term AGI itself. When discussing artificial intelligence's future, I usually sidestep the term "AGI." Not because it's meaningless—far from it. Rather, this three-letter acronym has become a Rorschach test for our hopes and fears about technology. To some, it's a kitchen-cleaning robot; to others, humanity's final invention.

The reality? AI advancement isn't some finish line we'll suddenly cross. It's not some specific event like the moon landing, where we'll plant a flag and declare "We've achieved AGI!" Instead, we're witnessing a continuous evolution of capabilities, limited primarily by computing resources and engineering challenges.

But - for the sake of this article - let's define AGI. The definition below is extremely bold by all accounts, but one that would satisfy all but the staunchest defenders of the uniqueness of a human workforce:

AGI would enable us to cost-effectively replace humans at more than 95% of economic activity, including any new jobs that are created in the future. This simply means the core capabilities are there to do it, even if the rollout takes several more years to complete across companies and governments. At that point we'd (obviously) see seriously world-changing impacts, both good and bad, begin to emerge.

The focus today is not on AGI as a label, but on exploring two possible paths ahead: the slow path, which is measured, limited, and methodical, and the fast path, which is accelerated and transformative.

For more detail on this 95% scenario - see here.

First, Some Screenshots

I want to start with a screenshot from Sam Altman's blog. In his latest essay, 'Reflections', he states that some form of AGI will arrive THIS YEAR, 2025. In case you don't know who Sam Altman is, he's the CEO of OpenAI, the company behind ChatGPT - a tool many of you likely use daily.

Here it is:

[Screenshot from Sam Altman's 'Reflections' essay]

Whether you believe in Altman's 2025 timeline or not, understanding these trajectories matters for your work, your investments, and our collective future.

Even more fantastical is Sam's reference to superintelligence, also known as ASI (artificial superintelligence)—a hypothetical leap beyond human capabilities that reads more like science fiction than science. These near-omniscient, self-improving, instant-breakthrough entities are purely hypothetical - so let's focus on AGI only for now.

Later, he put out a tweet reiterating his confidence that we are near the singularity - a hypothetical future point when artificial intelligence surpasses human intelligence, becoming effectively sentient, uncontrollable, and irreversible:

[Screenshot of the tweet]

Clearly, Sam Altman’s newest post and tweet are similar in tone to a post by the CEO of Anthropic, and to what many (though not all) researchers from every lab have been saying publicly and privately.

We do not have to believe them, but it is clear: they believe what they're saying, and they all have enough technical background to know what they're talking about. At minimum, these predictions have set the tech world ablaze, and they have made many people quite anxious about the future.

By all accounts - Sam and many researchers like him believe we are on a fast track to 'AGI'. But how will we know if that's actually happening? Well...here goes.

The Slower Path: Incremental Growth, Gradual Impact

In this narrative, scaling hits a wall: simply building bigger models isn't a magic bullet anymore. Today's frontier models are estimated to have tens to hundreds of billions of parameters (labs rarely disclose exact counts). The slow-path narrative is that even at a trillion parameters or more, we don't see any noticeable improvement.
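To see why the slow-path crowd is skeptical of parameter count alone, here is a back-of-envelope sketch using the parametric scaling law fitted by Hoffmann et al. (2022), the "Chinchilla" paper. The constants are that paper's published fits; the exercise is qualitative, not a forecast.

```python
# Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are the published fits from Hoffmann et al. (2022);
# treat the absolute numbers as illustrative only.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model of n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Hold the data budget fixed at ~15T tokens and scale only parameters:
for n in (1e11, 3e11, 1e12, 3e12):  # 100B .. 3T parameters
    print(f"{n:.0e} params -> predicted loss {loss(n, 15e12):.3f}")
```

Running this shows the loss dropping from roughly 1.85 at 100B parameters to roughly 1.80 at 3T: a tenfold-plus increase in size buys only a sliver of improvement once data is the bottleneck.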

In this narrative, we’ll see new AI systems in 2025 or 2026 that hold more “raw intelligence” than something like GPT-4, but the leap is modest, maybe comparable to GPT-3.5 → GPT-4 or less. Why? Because fresh, high-quality data is getting tough to find, and merely amping up model size is expensive.

Progress on “reasoning models” continues—tools like o1, o3, and DeepSeek-R1 keep achieving record-breaking scores on well-defined benchmarks like FrontierMath and RE-Bench, offering real advantages to mathematicians, scientists, and engineers. Yet the impact remains limited by what tasks can be neatly encapsulated. Amdahl’s Law kicks in, so the parts of the job that remain “messy” slow overall gains, much like an assembly line that can only move as fast as its slowest station.
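Amdahl's Law is worth making precise: if only a fraction p of a job can be accelerated by a factor s, the overall speedup is 1 / ((1 - p) + p/s). A quick sketch with hypothetical numbers:

```python
# Amdahl's Law: overall speedup when only part of a job is accelerated.

def overall_speedup(p: float, s: float) -> float:
    """p: fraction of the job that can be automated/accelerated.
    s: speedup factor on that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

# Even if AI makes 80% of a researcher's job 10x faster, the messy
# remaining 20% caps the overall gain at about 3.6x:
print(overall_speedup(p=0.8, s=10))    # ~3.57
# With an *infinitely* fast AI on that same 80%, the ceiling is 5x:
print(overall_speedup(p=0.8, s=1e9))   # ~5.0
```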

Another drag on impact comes from the fact that the world knowledge a model is trained on is already months out of date by the time it reaches users. As of the end of 2024, ChatGPT reports a “knowledge cutoff date” of October 2023, meaning its models have no innate understanding of anything published after that date – including the latest AI R&D techniques. X.com's Grok AI can search current public X (Twitter) posts and do real-time web searches, but its core knowledge base is still old (2023). Until a new approach is found, this lag will limit the pace at which AI can self-improve.
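The common workaround, and roughly what Grok's real-time search does at a high level, is retrieval-augmented generation (RAG): fetch fresh documents at query time and place them in the prompt. Below is a minimal, self-contained sketch; the sample documents and the toy keyword-overlap retriever are illustrative stand-ins for a real vector database or live web search.

```python
import re

# Minimal RAG sketch: retrieve fresh text at query time and stuff it
# into the prompt, so the model isn't limited to its training cutoff.

FRESH_DOCS = [
    "2025-01: a new state-of-the-art score is reported on FrontierMath.",
    "2024-12: o3 results announced on ARC-AGI benchmark.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    q = tokens(query)
    return sorted(docs, key=lambda d: -len(q & tokens(d)))[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, FRESH_DOCS))
    return f"Context (retrieved just now):\n{context}\n\nQuestion: {question}"

print(build_prompt("Latest FrontierMath score?"))
```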

Agents—the AI systems meant to carry out tasks on their own—mostly stumble in 2025 despite hype calling it the “Year of the Agent”. They get lost in tangential information. They’re vulnerable to trickery. It’s reminiscent of how “Year of the LAN” was proclaimed repeatedly in the ‘80s and ‘90s. True, we find ways to hand over small chunks of work to these agents, boosting efficiency little by little. But by 2035, AI is woven into everyday life—like smartphones—without radically shaking up the global order. Humanity still calls the shots, and we haven’t missed the window to steer AI in a positive direction.

The Faster Path: Engineered Acceleration

Now consider a different story unfolding at breakneck speed. Compute becomes the great equalizer; we solve “model collapse” and start generating reliable synthetic data: machine-generated training data that is easier and cheaper for other models to learn from than scarce human-written text. OpenAI's o1 and o3 models are examples of models that use synthetic data to handle scenarios that might be rare or even non-existent in the real world.
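To make "reliable synthetic data" concrete, here is a self-contained toy of the verifier-filtered (rejection-sampling) pattern widely reported in reasoning-model training pipelines: generate candidate answers, keep only the ones an exact checker confirms. The arithmetic task and the noisy `sample_solution` stub are illustrative stand-ins for real prompts and a real model.

```python
import random

def make_problem() -> tuple[str, int]:
    """Generate a task with a programmatically known ground-truth answer."""
    a, b = random.randint(2, 99), random.randint(2, 99)
    return f"What is {a} * {b}?", a * b

def sample_solution(truth: int) -> int:
    """Toy stand-in for a model call: right ~70% of the time. In a real
    pipeline this would be an actual LLM generation."""
    return truth if random.random() < 0.7 else truth + random.randint(1, 9)

def build_synthetic_dataset(n: int, attempts: int = 8) -> list[dict]:
    """Rejection sampling: keep only outputs that a cheap, exact verifier
    confirms, then use the verified pairs as training data."""
    dataset = []
    while len(dataset) < n:
        prompt, truth = make_problem()
        for _ in range(attempts):
            answer = sample_solution(truth)
            if answer == truth:  # verifier: exact-match check
                dataset.append({"prompt": prompt, "answer": answer})
                break
    return dataset

print(len(build_synthetic_dataset(100)))  # 100 verified training examples
```

The key design choice is that verification is far cheaper than generation, so the pipeline can afford many attempts per problem and keep only the winners.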

Because we are no longer training ever-larger models, there’s no need to build massive, multi-gigawatt datacenters. The primary drivers of progress – synthetic data, and experiments running in parallel – need lots of computing capacity, but don’t need that capacity to be centralized. Data centers can be built in whatever size and location is convenient to electricity sources; this makes it easier to keep scaling rapidly. This could even pave the way toward truly decentralized AI that has the security, censorship resistance, and identity protections of Web3 behind it (a very early example).

Given enough computing capacity, we can create all the data we need. Once everything from data generation to research ideation begins to feed back on itself, we enter an era of recursive self-improvement. AI helps build the next generation of AI; breakthroughs arrive faster than we can process them. As I mentioned in a prior comment, computing capacity can now substitute for both data and talent, meaning that compute is the only necessary input to progress. Ever-increasing training budgets, continuing improvements in chip design, and (especially) AI-driven improvements in algorithmic efficiency drive rapid progress.

By 2026, “agents” are actually useful—sifting through real-world info, planning for hours at a time, rapidly iterating on tasks that once required teams of human experts. We see major leaps in continuous learning and robust knowledge access, smoothing over the problem of AI “knowledge cutoff dates”. The conversation moves from “Will AI change the world?” to “How do we manage this amazing, disruptive force?”
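For readers who want to picture what an "agent" is mechanically, here is a minimal plan-act-observe loop, the basic shape behind most agent frameworks. Everything here is an illustrative stand-in, not any real framework's API: `llm` is a canned script so the sketch runs end-to-end, and the tools are stubs.

```python
# Minimal plan-act-observe agent loop; illustrative only.

_SCRIPT = iter(["search:latest agent benchmarks", "FINISH:summary written"])

def llm(prompt: str) -> str:
    """Stand-in for a real model call; a canned script so the sketch runs."""
    return next(_SCRIPT)

TOOLS = {  # stub tools; real agents wire these to browsers, shells, files
    "search": lambda q: f"(search results for {q!r})",
    "read_file": lambda path: f"(contents of {path})",
}

def run_agent(goal: str, max_steps: int = 20) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model for its next move, given everything observed so far.
        decision = llm("\n".join(history) +
                       "\nNext action ('tool:arg' or 'FINISH:answer')?")
        kind, _, arg = decision.partition(":")
        if kind == "FINISH":
            return arg                        # agent decides it is done
        observation = TOOLS[kind](arg)        # act in the world...
        history.append(f"{decision} -> {observation}")  # ...and observe
    return "gave up: planning horizon exceeded"

print(run_agent("summarize the latest agent benchmarks"))
```

The hard parts the fast scenario assumes away live inside `llm`: choosing good actions over multi-hour horizons, recovering from bad observations, and resisting trickery embedded in tool outputs.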

Still, even in the “fast” scenario, big leaps in areas like fuzzy real-world problems—everything from drafting marketing campaigns to multi-step scientific research—require at least a couple more breakthroughs, roughly on the scale of discovering transformers. Realistically, we might not see market-ready AGI before 2028 unless we get “lucky” with new techniques that solve continuous learning, truly open-ended tasks, and robust planning.


So...Will we have 'AGI' in 2025?

No.

I have a hard time imagining that transformational AGI (again, not just some opaque term, but a term that means labor-replacing at 95%) could appear before the end of 2028, even in the “fast” scenario. The question of AI undertaking physical work is included here, since the same modern “deep learning” techniques that underlie the current wave of AI have recently been very useful for controlling robots, improving battery technology, etc.

In order for even the fast-takeoff scenario to occur, essentially ALL of the telltale signs listed below would need to materialize in the next few years. After that? It's all about the rollout.

Some tasks will become feasible sooner than others. Companies and governments will naturally react slowly - humans are messy, after all. It goes without saying that steps will be needed to maintain social and cultural stability, including various programs to keep people busy doing meaningful (if not entirely necessary) work that provides them both purpose and income. It will take time to build out enough data centers for AIs to collectively do more work than people, and ramping up production of physical robots may take even longer.


Telltale Signs of a Short Timeline

How will we know if we’ve boarded the express train? A few indicators stand out:

1. Real, Sustained Reasoning Progress

If a model like o3 not only aces tricky math and coding benchmarks but consistently wows the public, we’re in for a ride, especially if another step-change arrives in 2025.

2. Breaking Out of the Chatbox (control of most aspects of your computer/laptop)

When AI consults internal systems, brand guidelines, and past performance metrics to craft a marketing campaign—instead of just spitting out generic text—that’s a huge milestone.

3. Growing Agent Autonomy

We see AI agents operating with multi-hour or multi-day planning horizons, navigating open-ended tasks without repeated meltdown or confusion.

4. Ever-Rising Capital Investment

There is no 'trough of disillusionment' to speak of. Companies keep pumping money into data centers for AI training, showing they genuinely believe bigger leaps are on the horizon. Generally speaking, investor sentiment stays bullish. Money managers who used to say tech stocks are 'high risk' are eating crow.

5. Multiple Breakthroughs Every Few Months

Something on par with the invention of “reasoning models” (o1) emerges month after month, year after year.

6. Recursive improvements on smaller models (increasing ubiquity on more devices)

Fine-tuning AI models on simpler, synthetic data is proving more effective than using complex, expensive datasets. This counterintuitive finding accelerates AI democratization by enabling smaller, efficient models that can run locally on personal devices like phones, earbuds, and smart glasses - without requiring cloud connectivity. The trend is already emerging in real-world applications.
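The article doesn't name a specific recipe, but one common pattern behind small on-device models is knowledge distillation: fine-tune the small model to imitate a large teacher's outputs, often over synthetic, teacher-generated data. Below is a minimal PyTorch sketch of the classic soft-label loss from Hinton et al. (2015); the tensor shapes are illustrative stand-ins for real model outputs.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label distillation: train the small model to match the large
    model's output distribution (Hinton et al., 2015)."""
    t = temperature
    soft_targets = F.softmax(teacher_logits / t, dim=-1)
    log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence between student and teacher, scaled by T^2
    # as in the original paper.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (t * t)

# Toy usage with random logits standing in for real model outputs:
student = torch.randn(4, 32000, requires_grad=True)  # small on-device model
teacher = torch.randn(4, 32000)                      # large frontier model
loss = distillation_loss(student, teacher)
loss.backward()  # gradients flow only into the student
```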


If these signs keep appearing, that is a very strong indication we're on the fast path and could hit AGI in under four years (again: capability only, not a full rollout). If, by contrast, the hype fizzles, agent-based systems remain unreliable, and progress on fuzzy real-world tasks crawls, we’re likely on the longer, slower path. You might still see big strides in math or science, but an AI that’s a true generalist—able to pick up almost any 9-to-5 human job reliably—could be much farther off.


A Fork in the Road: What Do You Think?

So here we stand at the intersection of two plausible and technically feasible timelines. One sees incremental improvements rolling out steadily until AI becomes an everyday utility, like having a super-competent intern on your laptop. The other envisions a headlong rush into a world where AI is training itself, rewriting the rules of innovation as it goes.

Which scenario seems more likely to you, and why? Where do you see the biggest bottlenecks—or the biggest opportunities? Let’s continue the conversation.

References:

https://blog.samaltman.com/reflections

https://www.lesswrong.com/posts/KFFaKu27FNugCHFmh/by-default-capital-will-matter-more-than-ever-after-agi

https://www.lesswrong.com/posts/auGYErf5QqiTihTsJ/what-indicators-should-we-watch-to-disambiguate-agi

https://amistrongeryet.substack.com/p/defining-agi?open=false#%C2%A7appendix-my-definition-of-agi
