The Future of AI: Will Machines Learn like People Do?
Yann LeCun’s paper has been released for open commentary.


A Path Towards Autonomous Machine Intelligence

If you enjoy articles about A.I. at the intersection of breaking news, join AiSupremacy here (follow the link below). I cannot continue to write without community support. For the price of a cup of coffee, join 79 other paying subscribers.

https://aisupremacy.substack.com/subscribe


Hey Guys,

Will Meta AI help Yann LeCun fulfill his manifest destiny? It’s one of the more intriguing subplots in the fate of machine learning.

So yesterday I wrote an off-the-cuff post about the work and legacy of Yann LeCun that was not well received. Imagine the timing: the following day he announces a major paper that distills much of his thinking from the last 5 or 10 years about promising directions in AI.

When it comes to A.I. at the intersection of news, society, technology and business, I’m an opportunist, so without further ado let’s get into it!

I like the transparency of the release: the paper is available on OpenReview.net (not arXiv, for now) so that people can post reviews, comments, and critiques:

Download the PDF

Topics addressed:


  • An integrated, DL-based, modular, cognitive architecture.
  • Using a world model and intrinsic cost for planning.
  • Joint-Embedding Predictive Architecture (JEPA) as an architecture for world models that can handle uncertainty.
  • Training JEPAs using non-contrastive Self-Supervised Learning.
  • Hierarchical JEPA for prediction at multiple time scales.
  • H-JEPAs can be used for hierarchical planning in which higher levels set objectives for lower levels.
  • A configurable world model that can be tailored to the task at hand.

His LinkedIn post about it is going a bit viral, and you can read the comments here.

While Yann LeCun is being very open about his ideas here, we have to remember who he is and his position in Big Tech now. He is none other than the VP & Chief AI Scientist at Meta, formerly known as Facebook.

Furthermore, he has been doing PR and talking about the content of this paper over the last few months, including:

- Blog post: https://lnkd.in/dHhb3ZSH

- Talk hosted by Baidu: https://lnkd.in/db_eSSyA

- MIT Tech Review article by Melissa Heikkilä: https://lnkd.in/gBJx8SHy

- Fireside chat with Melissa Heikkilä at VivaTech: https://lnkd.in/g8S9PhsV

- A short post with the basic points of the paper: https://lnkd.in/gHBf7m-h

I am embedding the Baidu video from YouTube here:

Researchers in A.I. are some of the most open and collaborative in the world, and this openness is important for the democratization of the future of A.I. The paper is more of a summary of the field than a showcase of his original work or Meta’s influence on it.

https://twitter.com/ylecun/status/1541491973290446850

The Twitter commentary around his paper is also very illuminating and I encourage you to explore it.

Recommended reading: https://ai.facebook.com/blog/yann-lecun-advances-in-ai-research/

To summarize (TL;DR):


The Future is About Asking the Right Questions


  1. How could machines learn as efficiently as humans and animals?
  2. How could machines learn to reason and plan?
  3. How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons?

LeCun does not believe in AGI per se, but he does see human-level AI (HLAI) as a distinct possibility.

The future of A.I. also involves an increasing integration of perspectives from neuroscience and cognitive science:

Keywords: Artificial Intelligence, Machine Common Sense, Cognitive Architecture, Deep Learning, Self-Supervised Learning, Energy-Based Model, World Models, Joint Embedding Architecture, Intrinsic Motivation.


While the pursuit of AGI is great for hype and headlines, something A.I. firms crave, the reality for researchers in the field is actually quite different. Full disclosure: for business reasons I must sometimes write headlines with said pseudo-science embedded. This, of course, is not meant to compromise my usual sense of clarity or objectivity.

Julian Togelius

Basically, I don't think the expression "artificial general intelligence" means anything, so discussions about when it will arrive or what risks or promises it might have are also meaningless. The same goes for every attempt I've seen at replacing the term with something better.

How is it possible for an adolescent to learn to drive a car in about 20 hours of practice, and for children to learn language with what amounts to a small amount of exposure? … Still, our best ML systems are very far from matching human reliability in real-world tasks such as driving, even after being fed enormous amounts of supervisory data from human experts, after going through millions of reinforcement learning trials in virtual environments, and after engineers have hardwired hundreds of behaviors into them.

Clearly, we are very far away from achieving even HLAI (human-level A.I.).

The present piece (his paper) proposes an architecture for intelligent agents, with possible solutions to all three challenges. The main contributions of the paper are the following:

  1. An overall cognitive architecture in which all modules are differentiable and many of them are trainable (Section 3, Figure 2).
  2. JEPA and Hierarchical JEPA: a non-generative architecture for predictive world models that learn a hierarchy of representations (Sections 4.4 and 4.6, Figures 12 and 15).
  3. A non-contrastive self-supervised learning paradigm that produces representations that are simultaneously informative and predictable (Section 4.5, Figure 13); a rough sketch of this idea follows the list.
  4. A way to use H-JEPA as the basis of predictive world models for hierarchical planning under uncertainty (Section 4.7, Figures 16 and 17).
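The third contribution is the part closest to work LeCun’s group has already published, such as VICReg. To make “representations that are simultaneously informative and predictable” a bit more concrete, here is a minimal VICReg-style non-contrastive loss sketched in PyTorch. It is only an illustration under my own assumptions: the function name, coefficients, and margin are mine, not the paper’s exact formulation.

```python
# Minimal VICReg-style non-contrastive SSL loss (illustrative sketch only;
# coefficients and the margin are assumptions, not the paper's exact recipe).
import torch
import torch.nn.functional as F

def vicreg_style_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0):
    """z_a, z_b: (batch, dim) embeddings of two views of the same inputs."""
    # Invariance: the two views should map to similar embeddings.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance: keep every embedding dimension "alive" (std above a margin),
    # which is what prevents collapse without contrastive negatives.
    def variance_term(z):
        std = torch.sqrt(z.var(dim=0) + 1e-4)
        return torch.mean(F.relu(1.0 - std))
    var_loss = variance_term(z_a) + variance_term(z_b)

    # Covariance: decorrelate dimensions so the representation stays informative.
    def covariance_term(z):
        z = z - z.mean(dim=0)
        n, d = z.shape
        cov = (z.T @ z) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum() / d
    cov_loss = covariance_term(z_a) + covariance_term(z_b)

    return sim_w * sim_loss + var_w * var_loss + cov_w * cov_loss
```

The design point to notice is that nothing here compares a sample against negatives; collapse is avoided purely by the variance and covariance terms, which is what “non-contrastive” means in this context.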

Foundational Architectures and World Models


TL;DR (from Yann LeCun’s Twitter thread, @ylecun):

  • Autonomous AI requires predictive world models.
  • World models must be able to perform multimodal predictions.
  • Solution: the Joint Embedding Predictive Architecture (JEPA).
  • JEPAs can be stacked to make long-term/long-range predictions in more abstract representation spaces.
  • Hierarchical JEPAs can be used for hierarchical planning.

Yann LeCun does many Tweet threads so he summarizes things pretty well.
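To make the JEPA idea slightly more concrete, here is a toy skeleton in PyTorch. Everything about it, the module names, the sizes, the stop-gradient on the target branch, and the way the latent variable is fed in, is my own simplification for illustration, not the paper’s specification. The point it tries to show is that the predictor operates in representation space, taking the context embedding plus a latent variable that captures uncertainty, rather than reconstructing raw pixels.

```python
# Toy Joint-Embedding Predictive Architecture (JEPA) skeleton.
# Dimensions, latent handling, and the stop-gradient are illustrative assumptions.
import torch
import torch.nn as nn

class ToyJEPA(nn.Module):
    def __init__(self, obs_dim=128, embed_dim=64, latent_dim=8):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(obs_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim))
        self.target_encoder = nn.Sequential(
            nn.Linear(obs_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim))
        # The predictor works in embedding space and takes a latent variable z
        # standing in for whatever the context alone cannot determine.
        self.predictor = nn.Sequential(
            nn.Linear(embed_dim + latent_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))

    def forward(self, x_context, x_target, z):
        s_x = self.context_encoder(x_context)
        s_y = self.target_encoder(x_target).detach()   # stop-gradient, a BYOL-style assumption
        s_y_pred = self.predictor(torch.cat([s_x, z], dim=-1))
        # Prediction error is measured in representation space, not pixel space.
        return ((s_y_pred - s_y) ** 2).mean()

model = ToyJEPA()
x_t   = torch.randn(32, 128)   # e.g. current observation
x_tp1 = torch.randn(32, 128)   # e.g. next observation
z     = torch.randn(32, 8)     # latent accounting for multiple possible futures
loss  = model(x_t, x_tp1, z)
loss.backward()
```

Stacking several of these, each predicting at a coarser time scale over the representations of the level below, is roughly what the hierarchical (H-JEPA) variant refers to.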

Please refer to the slides of the Baidu talk here. They are a good companion while you listen to the YouTube video.

Meta AI is also getting better at documenting this paper’s journey and Yann LeCun’s thought leadership here. If you still have a Facebook account, following his account is also a good idea.

Related Topics:

  • AI that can model how the world works
  • Proposing an architecture for autonomous intelligence

Meta AI states: LeCun proposes an architecture composed of six separate modules. Each is assumed to be differentiable, in that it can easily compute gradient estimates of some objective function with respect to its own input and propagate the gradient information to upstream modules.
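To see why the differentiability of each module matters, here is a toy sketch of planning by gradient descent, assuming PyTorch. All the module definitions, sizes, and the optimization loop are invented for illustration; this is not LeCun’s proposed implementation. Because the world model and the intrinsic-cost module can both propagate gradients back to their inputs, an action sequence can be optimized directly against the predicted cost.

```python
# Toy illustration of gradient-based planning through a differentiable module chain.
# All modules and shapes are invented for illustration.
import torch
import torch.nn as nn

state_dim, action_dim, horizon = 16, 4, 5

# World model: predicts the next latent state from (state, action).
world_model = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                            nn.Linear(64, state_dim))
# Intrinsic cost: a differentiable scalar objective on latent states.
cost_module = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(),
                            nn.Linear(32, 1))

# Pretend these modules are already trained; freeze them so only the actions change.
for p in list(world_model.parameters()) + list(cost_module.parameters()):
    p.requires_grad_(False)

s0 = torch.randn(1, state_dim)                        # current (encoded) state
actions = torch.zeros(horizon, 1, action_dim, requires_grad=True)
optimizer = torch.optim.Adam([actions], lr=0.1)

for step in range(50):                                # inner-loop "planning by optimization"
    optimizer.zero_grad()
    s, total_cost = s0, 0.0
    for t in range(horizon):
        s = world_model(torch.cat([s, actions[t]], dim=-1))  # roll the model forward
        total_cost = total_cost + cost_module(s).sum()       # accumulate predicted cost
    total_cost.backward()    # gradients flow: cost -> world model -> actions
    optimizer.step()         # improve the action sequence, not the model weights

print("planned first action:", actions[0].detach())
```

The same mechanism is what makes the modules trainable end to end: the gradient of an objective computed downstream can be pushed back through every block upstream of it.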

You can visualize the full architecture with the diagram from the Baidu slides.


His emphasis on the interaction of neuroscience and AI here is quite thrilling.

I hope this whets your appetite to do additional research on your own; obviously we’ve barely even begun, and I’m out of space.



  • The centerpiece of the architecture is the predictive world model.
  • Reward is not enough. Learning world models by observation-based SSL and the use of (differentiable) intrinsic objectives are required for sample-efficient skill learning.
  • Humans and animals learn hierarchies of models: humans and non-human animals learn basic knowledge about how the world works in the first days, weeks, and months of life. Now, how do we replicate that for HLAI?
  • Yesterday my title, about how AI will integrate the way infants learn, was a bit misleading, but his paper is literally all about that.
  • I invite you to explore his work further and I will write more summaries of his paper hopefully in the near future.


If you found this useful, please let me know. I’m always trying to create content with the most ROI for A.I. enthusiasts at all levels of the spectrum. Thanks to the 77 paid subscribers who are supporting me. For the price of a coffee, the funds go directly to my rent, food expenses and the basic living requirements of my family.

If you refer a friend, they can get this time-limited 15% discount for the summer of 2022. Just click on this button and share the link with them.

Get 15% off for 1 year

Otherwise thanks for the support! I’m very grateful for it and don’t take it lightly.

Subscribe now


A trip down memory lane: https://en.wikipedia.org/wiki/Cyc vs https://en.wikipedia.org/wiki/Cog_(project)

Christopher Slabchuck

Fellow at the Academy of Political Sciences (disabled: now an advocate for Israel at AIPAC, WJC-AS, AJC for MI elected representatives and fan of JBS.

2y

Why is this even a topic of consideration? AI's advantage is due to the fact that it doesn't learn inefficiently like humans. It leverages information directly. The question seems rooted in a lack of understanding exactly what AI is. AI is a machine not an organism. It doesn't require expending processing power on self maintenance functions.

POOJA JAIN

Storyteller | Linkedin Top Voice 2024 | Senior Data Engineer@ Globant | Linkedin Learning Instructor | 2xGCP & AWS Certified | LICAP'2022

2y

Insightful share! Michael Spencer

Bala Subramanian

Chairman, President & CEO at Synergism, Inc. and Owner, Synergism, Inc.

2y

I question the premise, "humans and children learn faster" and develop "general intelligence" etc. Every child and individual may learn to speak and express thoughts independent of one another without there being any correlation among what they know, understand or have any common perceptions. They dialog, debate, and argue all their lives to correlate their thoughts, ideas, experiences but never ever get to be "intelligent". So-called Nobel Laureates might all be mere conjectures and not necessarily true in any long-lasting way. If this were not so, we wouldn't have histories of extinct civilizations. This civilization in all likelihood might be extinct eventually.
