Objective-Driven AI
"Switched on robot" by OpenAI's DALL-E 2


Where we are going (years?)

About a month ago Yann LeCun, the chief AI scientist at Meta, gave a talk at MIT titled "Objective-Driven AI: Towards AI systems that can learn, remember, reason, plan, have common sense, yet are steerable and safe." But let me start with the optimistic statement that LeCun made at the end of the talk:

AI will bring a new era of enlightenment, a renaissance to humanity.

LeCun doesn't think we will get there with the current large foundation model approach, built on auto-regressive generative architectures. What he is highlighting is the difficulty these architectures have in producing reliable outcomes. The current workaround for this limitation is a human in the loop who steers, corrects, and moderates the output of these models. As LeCun points out, while the performance of current models is amazing, "...they make stupid mistakes: factual errors, logical errors, inconsistency, limited reasoning, toxicity..." LeCun envisions a new set of fundamental breakthroughs that will eliminate this requirement for human moderation and thus make AI much more valuable.

What LeCun envisions is a multi-component architecture for the next generation of AI systems. From his presentation --

  • Perception: Computes an abstract representation of the state of the world (possibly combined with previously-acquired information in memory)
  • World Model: Predicts the state resulting from an imagined action sequence
  • Task Objective: Measures divergence to goal
  • Guardrail Objective: Immutable objective terms that ensure safety
  • Operation: Finds an action sequence that minimizes the objectives
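To make the interplay of these components concrete, here is a toy sketch of the loop: an agent searches over imagined action sequences with a world model, scores each rollout with a task objective plus an immutable guardrail term, and executes the plan with the lowest total cost. Everything here (the grid world, the function names, the penalty value) is an illustrative assumption of mine, not something from LeCun's slides.

```python
import itertools

ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]  # right, left, up, down, stay
GOAL = (2, 0)
FORBIDDEN = {(1, 0)}   # guardrail: an unsafe cell sitting on the straight-line path

def world_model(state, action):
    """Predict the next state for an imagined action."""
    return (state[0] + action[0], state[1] + action[1])

def task_objective(state):
    """Divergence to goal: Manhattan distance to the goal cell."""
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def guardrail_objective(state):
    """Immutable safety term: a large penalty for entering an unsafe state."""
    return 1000 if state in FORBIDDEN else 0

def plan(state, horizon=4):
    """Exhaustively imagine every action sequence; keep the cheapest one."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(ACTIONS, repeat=horizon):
        s, cost = state, 0
        for a in seq:
            s = world_model(s, a)  # imagine, don't act
            cost += task_objective(s) + guardrail_objective(s)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

state = (0, 0)
visited = []
for a in plan(state):
    state = world_model(state, a)
    visited.append(state)
print(state, FORBIDDEN & set(visited))  # → (2, 0) set()
```

Because the guardrail penalty dominates the task cost, the planner detours around the forbidden cell rather than cutting straight across, which is the point LeCun makes about guardrail objectives: safety is enforced at planning time, not patched in after generation. Real systems would of course replace the exhaustive search with gradient-based or learned optimization.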

There are a number of advantages that LeCun proposes would come from this approach. First, it could eliminate one of the downsides of the current models: the need for reinforcement learning from human feedback (RLHF). This work can be quite harrowing for the humans who do it, since they must review potentially toxic output with the objective of restraining the model from generating that content. It is also imperfect and can still let toxic and incorrect generative output through. LeCun envisions a system that plans toward objectives, which he proposes would be inherently safer.

LeCun's full presentation slides explore this topic in detail, although without the voiceover some slides will be impenetrable, especially if you aren't closely following the field. There are two important takeaways to think about even if you don't have an interest in the specific technical details:

First, yes, the critics of LLMs are right: there are severe limitations to the way LLMs operate. While I believe that with the right human supervision these tools can create enormous value, we will need to continue to develop new approaches to improve on and correct for these limitations.

Second, this research is happening -- LeCun's paper is just one example of research work focused on developing new architectures that will continue to improve the capabilities of these tools. We should expect advances over the coming months and years that rapidly increase the capabilities and reliability of AI systems.

I, for one, hope that LeCun's optimism is correct, and I already feel that we are in a new period of renaissance for humanity.



Do you believe he meant there would be improvement over generative text/voice AI, graphical AI models, or both? There are options using generative AI that have safeguards and don't require much human interaction for specific uses. I believe his concept is great, but I still feel that whether you look at a pixel or a picture, the computational error in the algorithms can pose the same challenges, and we will always need some form of human interaction, or an AI that can coach and train an AI -- which we already have. He obviously has a better grasp; I just believe using different forms of AI, generative and not, for specific use cases can yield pretty reliable results. I guess it boils down to what the AI is solving for.

Ryan Hourigan

Project Manager | Director of Lead Generation | Business Development Specialist | Director of Marketing | PPC, SEM, & SEO Expert | Social Media Advertising Specialist & Senior Sales Analyst

1 yr

I have benefited greatly from the idea of Ray Kurzweil's "singularity." Ray Kurzweil is the author of the New York Times bestseller The Singularity Is Near and the national bestseller The Age of Spiritual Machines, among others. One of the leading inventors of our time, he was inducted into the National Inventors Hall of Fame in 2002. He is the recipient of many honors, including the National Medal of Technology and Innovation, the nation's highest honor in technology. He lives in Boston. Praise for Ray Kurzweil: "Ray Kurzweil is the best person I know at predicting the future of artificial intelligence." —Bill Gates. The Singularity Is Nearer by Ray Kurzweil -- looking forward to the 2025 audio release!

Andrew Crawford

Strategy | Deep Tech | Finance | Foreign Policy

1 yr

Does 100M+ of Insta followers for Kim K / an influencer not "amplify" the reach of their intelligence? Curious to see that you (and Reid Hoffman recently) dropped the term "augment" and now are using "amplify"... Is it that the current LLMs (for which the parlance not too long ago was "AIs", not "LLMs") can only iterate / broadcast text patterns from a source, but not improve / augment / add intelligence to them... So that we now are stuck with "amplify", [which I understand is the Meta vision, ie Kim K tweeting back to YOU], and just hope that at some industry future state we can regain the hyped-for "augment"? If every 7 years we have to call software something else for marketing purposes (this being a leap year for "AI", since Watson was 2015), I fully respect that. And a candidate can be re-run: "Augment in 2029!" But am I wrong that Web2 ca. 2008 was/is amplification? I share your optimism and the Professor's: just not sure if that hoped-for renaissance will be, as it is not now, dependent on its "artificial"-ity. There are as many humans on Instagram as parameters in some of these models... are their perspectives worth as much, or more, than a parameter?

Naveeta Sehgal

AI Advisor | Created $2+ Billion value | ex-Accenture, TCS | Follow me for AI strategic insights, news and career growth.

1 yr

Current LLM architecture is not scalable or flexible enough to support that level of functionality. I feel like we will get there in 12-18 months.
