Human-Level AI: Are We There Yet? Insights from Yann LeCun

Reflections on Yann LeCun's keynote, “Human-Level AI”:

https://youtu.be/4DsCtgtQlZU?si=0H26Fuvn-lfV2xMJ

Oct 13, 2024

As we navigate the evolution of AI, it's easy to get lost in the spectacle of present-day technologies—chatbots, image generators, recommendation algorithms. But where are we headed next? Yann LeCun, Chief AI Scientist at Meta, offers a thought-provoking perspective that invites us to not only appreciate what we've achieved but also to look critically at what lies ahead.

One of the most intriguing premises put forth by Yann is that the path toward human-level AI is fundamentally different from what we might assume. Contrary to some of the optimism swirling in the tech sphere, LeCun reminds us, “Current AI systems are not capable of reasoning, planning, or intuitive common sense to the level humans are.” This raises the question: What kind of journey do we need to undertake to reach human-level intelligence? And, importantly, what are the limits of our current models in bridging that gap?

LeCun suggests that our future could very well involve everyone becoming a boss, not over human employees, but over a “staff” of AI assistants: each of us commanding a multitude of virtual agents that assist, guide, and amplify our abilities. But for this vision to materialize, we need more than language models regurgitating what they have learned. We need AI that can “reason, plan, and learn” the way a child learns to clear the dinner table after observing it done once, a feat that remains far beyond our most advanced AI.


The Core of Human Intelligence: What Are We Missing?

Imagine a satellite navigating space or a deep-sea explorer gliding beneath the ocean's surface. Both must understand their surroundings, whether the gravitational pull of celestial bodies or the shifting currents of the ocean, each dynamic and ever-changing. In the same way, AI needs a grasp of persistence, causality, and relationships to navigate the physical world. Yet these seemingly straightforward capabilities remain a significant challenge for AI systems. While a diver can intuitively adjust to changing currents and an astronaut can maneuver in zero gravity, our most sophisticated robots still struggle with tasks as simple as loading a dishwasher.

What is this “Moravec's Paradox” LeCun invokes, where the simplest of human skills seem the hardest for machines, while complex feats like playing chess or generating fluent language are, relatively speaking, easier? It is a humbling reminder that intelligence is not linear. It invites us to consider: Are we measuring AI progress against the right benchmarks? Should “reasoning” and “intuitive understanding” become our new frontiers for evaluation?


Shifting Paradigms: From Generative to Predictive Models

LeCun is critical of our current dependence on purely generative models for pushing AI capabilities. He argues that these models—even the best large language models (LLMs) we have—are not enough to reach the next step of AI evolution. The reason? They lack the ability to truly understand the world. “We are never going to reach anything close to human-level intelligence by just training on text,” LeCun asserts. Instead, the road ahead might require systems to observe, understand, and predict physical interactions, akin to how humans learn through trial and interaction with the environment.
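To make that contrast concrete, here is a minimal sketch, in Python with NumPy, of learning by prediction rather than generation: a tiny model is fit to predict the next physical state from the current state and an action, then judged purely on how well it anticipates fresh interactions. The synthetic dynamics and the linear model are illustrative assumptions of mine, not anything specified in the talk.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical environment (unknown to the learner): next_state = state + 0.1 * action
states = rng.normal(size=(1000, 2))
actions = rng.normal(size=(1000, 2))
next_states = states + 0.1 * actions

# "World model": predict next_state from [state, action] by least squares
X = np.hstack([states, actions])                     # shape (1000, 4)
W, *_ = np.linalg.lstsq(X, next_states, rcond=None)  # weights, shape (4, 2)

# The test is prediction, not generation: does it anticipate unseen interactions?
test_s = rng.normal(size=(100, 2))
test_a = rng.normal(size=(100, 2))
predicted = np.hstack([test_s, test_a]) @ W
actual = test_s + 0.1 * test_a
print("mean squared prediction error:", np.mean((predicted - actual) ** 2))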

In LeCun's view, the answer lies in what he calls “objective-driven AI”—a kind of model predictive control approach, where the AI is actively trying to optimize outcomes based on goals we set, just as we do in everyday tasks. Imagine an AI planning a complex journey, not unlike our own decisions when moving through cities—from New York to Paris, with hierarchical decisions that narrow down until they manifest in basic muscle movements. “How do we achieve hierarchical planning in AI systems?” LeCun asks, reflecting on a fundamental challenge yet to be solved.
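What might “objective-driven” look like in code? The sketch below, again a toy of my own construction rather than LeCun's architecture, implements the simplest form of model predictive control: sample candidate action sequences, roll each through a world model, and execute the sequence whose predicted outcome best satisfies the objective. The dynamics, cost function, and parameters are all assumptions for illustration.

import numpy as np

def predict_next_state(state, action):
    # Stand-in for a learned world model; here, trivially additive dynamics.
    return state + action

def cost(state, goal):
    # The objective: squared distance between a predicted state and the goal.
    return float(np.sum((state - goal) ** 2))

def plan(state, goal, horizon=5, candidates=256, seed=0):
    # Random-shooting MPC: try many action sequences in imagination and
    # keep the one whose simulated final state scores best on the objective.
    rng = np.random.default_rng(seed)
    best_seq, best_cost = None, float("inf")
    for _ in range(candidates):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
        s = state
        for a in seq:
            s = predict_next_state(s, a)
        c = cost(s, goal)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq

state, goal = np.zeros(2), np.array([3.0, -2.0])
actions = plan(state, goal)
print("first planned action:", actions[0])  # execute it, observe, then replan

Hierarchical planning, the open problem LeCun points to, would stack such loops: a high-level planner choosing subgoals (get to the airport) that lower-level loops like this one pursue in ever finer detail.
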

The Vision for an Open AI Infrastructure

Perhaps one of LeCun's most thought-provoking predictions is about how this powerful AI—if we manage to build it—should be governed and shared. He paints a future where AI assistants mediate our interactions with the digital world, always by our side, always learning and assisting. But, for such an omnipresent technology to remain ethical and relevant globally, LeCun strongly emphasizes that it must be “open source” and collaboratively built by entities around the world. He says, “We need these AI assistants to understand all languages, all cultures, all value systems.” In his view, AI built by a single company or culture cannot cater to the diversity of human needs.

This brings us to a crucial question: should AI be viewed as a fundamental utility, much like electricity or the internet, rather than being siloed into proprietary products owned by a few entities? If our goal is to foster diverse perspectives, ensure accessibility, and support innovation across the globe, then we must recognize the importance of building AI as a shared, collaborative effort. Open source is one component, but beyond that, it is about creating a structure where knowledge, resources, and advancements are not restricted but flow freely, allowing everyone to contribute and benefit. Only then, it seems to me, can AI truly serve humanity in its entirety.


Optimism with Realism: How Long Before We Get There?

To conclude, LeCun takes a realistic view of the timeline to reach these next levels of AI. While optimistic about the trajectory, he does not mince words about the hurdles. “It could take years to decades,” he admits. Unlike the dramatic “AI superintelligence takeoff” scenarios, he sees progress as incremental—“not a single day when machines suddenly become superintelligent, but a gradual evolution.”

And perhaps that’s a comforting thought. AI that advances slowly allows us to adapt, understand, and influence its trajectory in ways that align with our values. LeCun makes it clear—“There is risk, but we can manage it.” He encourages us to think deeply about the “objective-driven” nature of our AI systems: Are we setting the right goals? Are these goals aligned with human well-being, creativity, and progress?

As we stand at this crossroads, it’s clear that the questions we ask now will shape the kind of AI future we get. What kind of intelligence do we wish to build, and what will be its purpose? LeCun’s insights are a clarion call to think bigger, deeper, and more inclusively—not just about technology, but about the world we are building alongside it.


Image: Moravec's Paradox on a chessboard, inspired by Fauvism, Pointillism & Cubism.

