Limits of LLMs, AGI & the Future of AI
This article contains my notes on Lex Fridman's podcast with Yann LeCun, Chief AI Scientist at Meta.
The podcast is quite insightful and touches on the real limitations of LLMs and how AGI might evolve.
Limitations of LLMs
There are four important characteristics of intelligent behavior: understanding the physical world, persistent memory, reasoning, and planning.
LLMs can't fully achieve these; they can only do so in a very primitive way.
Most of our knowledge is derived from interactions with the physical world, not just language. LLMs can pass the bar exam, but they can't learn to drive a car in 20 hours of practice, nor can they learn to clear the dinner table and fill the dishwasher. Essentially, they can't perform the daily physical actions that we do.
Most of our thinking is not in terms of language, but current LLMs are trained to predict the next word, a single choice from a finite vocabulary. We plan our answers before we produce them; LLMs do not, they simply produce one word after another.
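The "one word after another" point can be made concrete with a minimal sketch of autoregressive generation. Here a toy bigram table stands in for a trained LLM (an assumption for illustration; real models compute a probability distribution over tens of thousands of tokens at each step), and greedy decoding picks the single most likely next word with no global plan for the answer.

```python
# Toy next-word probability table standing in for a trained LLM (illustrative only).
probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"<end>": 1.0},
}

def generate(token, max_len=10):
    """Greedy autoregressive decoding: each step commits to one word,
    conditioned only on what has been produced so far."""
    out = [token]
    for _ in range(max_len):
        options = probs.get(token, {"<end>": 1.0})
        nxt = max(options.items(), key=lambda kv: kv[1])[0]
        if nxt == "<end>":
            break
        out.append(nxt)
        token = nxt
    return out

print(generate("the"))  # ['the', 'cat', 'sat']
```

At no point does the loop look ahead to where the sentence is going, which is exactly the contrast LeCun draws with human planning.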
Can You Build a World Model That Has a Deep Understanding of the Real World?
This would involve understanding the world and why it is evolving the way it is. A world model would need to account for how the state of the world changes over time and in response to actions: given the current state and an imagined action, it should predict the next state.
Such a model can't be achieved with current generative models. In terms of video, it would mean predicting the next frame, which we don't yet know how to do well. The world is incredibly more complicated and richer in information than text: text is discrete, while video is high-dimensional and continuous.
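The gap between discrete text and continuous video can be sketched with back-of-the-envelope numbers (the vocabulary size and frame dimensions below are illustrative assumptions, not figures from the podcast):

```python
import math

# Predicting one text token: a single choice from a finite vocabulary.
bits_per_token = math.log2(50_000)   # ~15.6 bits of information per choice

# Predicting one video frame: even a tiny 64x64 RGB frame with 8-bit channels
# spans vastly more raw information than one token.
bits_per_frame = 3 * 64 * 64 * 8     # 98,304 bits per frame

print(f"next token:  ~{bits_per_token:.1f} bits")
print(f"next frame:  {bits_per_frame} bits")
```

The raw-bit comparison understates the difficulty, since the real issue is that frame prediction is continuous and mostly unpredictable in its details, but it gives a sense of why "predict the next frame" is a far harder target than "predict the next word".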
Joint Embedding Predictive Architecture & How It Differs from Generative AI Architectures Like LLMs
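The core contrast can be sketched in a few lines: a JEPA predicts the target's *representation* in an abstract latent space, while a generative architecture tries to reconstruct the target itself in input (pixel/token) space. The fixed random linear maps below are placeholders for learned networks (an assumption for illustration; real JEPAs are trained end to end and need extra machinery to avoid representation collapse).

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LAT = 16, 4  # input and latent dimensions (illustrative)

# Placeholder "encoders" and "predictor": fixed random linear maps
# standing in for learned networks.
enc_x = rng.standard_normal((D_IN, D_LAT))       # encodes the context x
enc_y = rng.standard_normal((D_IN, D_LAT))       # encodes the target y
predictor = rng.standard_normal((D_LAT, D_LAT))  # predicts y's embedding from x's

def jepa_loss(x, y):
    """JEPA: the prediction error is measured in latent space, so the model
    can discard unpredictable low-level detail of y."""
    sy_hat = (x @ enc_x) @ predictor
    sy = y @ enc_y
    return float(np.mean((sy_hat - sy) ** 2))

def generative_loss(x, y, decoder):
    """Generative baseline: decode back to input space and reconstruct
    every detail of y itself."""
    y_hat = (x @ enc_x) @ decoder
    return float(np.mean((y_hat - y) ** 2))

x = rng.standard_normal(D_IN)
y = rng.standard_normal(D_IN)
decoder = rng.standard_normal((D_LAT, D_IN))
print(jepa_loss(x, y), generative_loss(x, y, decoder))
```

The design difference is where the loss lives: in latent space, the model is free to represent only the predictable, abstract structure of the world, which is LeCun's argument for JEPA over pixel-level generation.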
Why Hallucinations Happen in LLMs and Why It Is a Fundamental Flaw of LLMs
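One way to sketch the compounding-error intuition behind this claim (an illustrative simplification: it assumes each token independently has a small chance of drifting off the set of correct answers, which real models do not strictly satisfy):

```python
# If each generated token has probability e of leaving the "correct answer"
# set, and errors are never recovered, the chance the whole answer stays
# correct decays exponentially with its length.
e = 0.01  # assumed per-token error rate (illustrative)

for n in (10, 100, 1000):
    p_correct = (1 - e) ** n
    print(f"{n:>4} tokens: P(still correct) = {p_correct:.4f}")
```

Under this toy model, even a 1% per-token error rate makes long answers almost certain to drift, which is the exponential-divergence argument LeCun makes against purely autoregressive generation.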
Future of AI