Towards the positronic robot
Image: Exponential View via Midjourney


Thanks to our sponsor Try It on AI, studio-quality AI portraits from the comfort of your home.


In a leap that brings us closer to the robotic futures of sci-fi, Google DeepMind has unveiled RT-2, a vision-language-action (VLA) model trained on Internet-scale data that can be integrated into end-to-end robotic control. Previous robots have had to be programmed for specific tasks. DeepMind’s robotic agents learn how to do things from the Internet, generalise what they’ve learned to new tasks, accomplish those tasks, and even engage in rudimentary reasoning. Admittedly, they cannot do any of these things particularly well yet, but they are the first robots that can learn and “understand” (or at least, infer).


DeepMind has done it by combining robotic trajectory data with Internet-scale vision-language tasks, such as visual question answering. The technical brilliance lies in transforming both natural language responses and robotic actions into text tokens and incorporating them into the training set.
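To make that token trick concrete, here is a minimal, illustrative Python sketch of the idea, assuming a simple per-dimension binning scheme (the RT-2 paper describes discretising each action dimension into 256 bins). The action layout and ranges below are hypothetical, not DeepMind’s actual code:

```python
import numpy as np

N_BINS = 256  # RT-2 discretises each action dimension into 256 bins

def action_to_tokens(action, low, high):
    """Encode a continuous action vector as a string of integer tokens."""
    action = np.clip(action, low, high)
    normalised = (action - low) / (high - low)             # scale to [0, 1]
    bins = np.round(normalised * (N_BINS - 1)).astype(int)  # bucket ids
    return " ".join(str(b) for b in bins)                   # ordinary text

def tokens_to_action(token_str, low, high):
    """Decode the model's text output back into a motor command."""
    bins = np.array([float(t) for t in token_str.split()])
    return low + (bins / (N_BINS - 1)) * (high - low)

# Hypothetical 7-dof arm action: xyz translation, xyz rotation, gripper.
low  = np.array([-0.1, -0.1, -0.1, -0.1, -0.1, -0.1, 0.0])
high = np.array([ 0.1,  0.1,  0.1,  0.1,  0.1,  0.1, 1.0])
action = np.array([0.02, -0.05, 0.0, 0.01, 0.0, -0.03, 1.0])

tokens = action_to_tokens(action, low, high)
print(tokens)                               # "153 64 128 140 128 89 255"
print(tokens_to_action(tokens, low, high))  # approximately the original action
```

Because the action is now just a short string, it can sit in the same training batches, and come out of the same decoder, as ordinary language.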

It’s significant because this is one of the first times that large language models (LLMs) have been directly linked to the physical world with minimal human intervention. Previously, the way to make such a connection was through an API that linked the LLM to other applications, which could then act on the physical world. RT-2 creates the possibility of robots becoming more adaptable and independent, and of task-by-task programming becoming a relic of the past.

These developments come as a refreshing break from the generally slow progress in robotics. Natural language-based AIs (especially LLMs) give robotics a new tool for progress that may jolt the whole field forward. As I argue in my book, new technologies intersect and enable fresh discoveries in other fields, especially if they are general purpose. Perhaps new AI regulation should incorporate Asimov’s three laws of robotics.


Today’s edition is supported by Try It on AI, studio-quality AI portraits from the comfort of your home.


Last year a consumer portrait photoshoot cost an average of $750, while commercial shoots ranged from $800 to $5,000, not to mention the weeks of planning, hair, makeup and styling. Today you can generate professional studio-quality images for just $17, in 1-2 hours, from the comfort of your home. If you still haven’t experienced the power of generative AI, join the over 150,000 individuals and 500+ companies who are using Try it on AI to create professional AI portraits.

Exponential View readers get 15% off any package using the code EVTRY15 at checkout. The code is valid for the next 48 hours only.

Visit Try it on AI


Latest posts

If you’re not a subscriber, here’s what you missed recently:

  1. Chartpack: The global heatwave
  2. Promptpack: How to build a second-brain (featuring AI)
  3. Promptpack: Getting started with Code Interpreter


Comments
Sierra Rohan

Making Campus Marketing Better

1 year ago

Absolutely love Try it on AI. As a business student with a low budget, it allowed me to get new and amazingly updated headshots for a fraction of the cost!

Adriana Lica

Entrepreneur | Early Stage Ventures

1 year ago

Nathan Landman and I are thankful for the opportunity to go from readers to sponsors.

Paul Meersman

Igniting Growth and Shaping Change | Storyteller | Writer | Analyst | Marketer | AI Engineer | Photographer | Filmmaker

1 year ago

You may want to watch this review before handing over your cash to Try it on AI. https://youtu.be/MEqLwwTADSs
