The Robots Are Coming!
From a product perspective there are plenty of exciting possibilities once these systems mature, but from an AI perspective there's an additional angle worth exploring:
From day one, ML researchers have drawn inspiration from humans when tackling hard real-world problems. One recurring idea is that there must be a way to train systems with less annotated data, because human babies learn amazing feats of cognition with very little explicit supervision.
With recent advances in self-supervised learning, first in text and now in vision, we are starting to unlock this human-like ability to learn complex tasks by training on huge amounts of unlabelled data (OpenAI Codex, BLOOM, DALL-E 2, to name a few). However, we can already see that, for the size of these models and the compute we put into training them, we are getting only a fraction of the "intelligence" we observe in humans.
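The core trick behind all of these models is that the training labels come from the data itself, so no human annotation is needed. A minimal sketch of that idea, using a toy bigram model and a tiny hand-written corpus as hypothetical stand-ins for a transformer and web-scale data:

```python
from collections import Counter, defaultdict

# Self-supervision in miniature: hide a word and train a model to
# predict it from the word before it. The "labels" are just the text.
# (Corpus and model here are toy stand-ins, not how Codex/BLOOM work
# internally, but the supervision signal is derived the same way.)
corpus = "the cat sat on the mat the cat ate the fish".split()

# The training signal comes from the unlabeled data itself:
# count which word follows which.
next_word = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_word[prev][cur] += 1

def predict_masked(prev: str) -> str:
    """Predict the hidden word given the word before it."""
    return next_word[prev].most_common(1)[0][0]

# "the [MASK]" -> the most frequent continuation in the corpus
print(predict_masked("the"))  # -> "cat"
```

Scale this principle up from bigram counts to billions of transformer parameters and from eleven words to most of the public internet, and you get the models above.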
So what’s missing? Arguably, it’s the interaction with the environment. Sure, a baby has never been explicitly told what a cup is, but neither did it merely observe images of cups: it observed and interacted with an environment in which cups were often objects of interaction or interest. For a baby to learn that fire is dangerous, a single observation of someone getting burned is enough; the look on that person’s face and the pained cry say it all. Compare that to our current models, trained on millions of images just to learn what fire looks like, and the difference in learning efficiency is clear. A third example is this crow going far beyond its physical capabilities to crack a nut, a great demonstration of intelligence emerging through observation of and interaction with the environment.
Which brings us to why robots are a crucial piece of the further advancement of AI: they are the vehicle that lets these powerful models interact with the environment and learn beyond what is possible through offline training. Models mounted onto these (at first very clumsy and awkward) robot bodies will gain access not only to vision and sound, but also to touch, smell, and possibly other signals we are not yet aware of. If they become useful enough for us humans to keep them around, they will get their chance to learn, as humans do, in the environment. This idea is already backed in theory by many notable AI experts (e.g. Yann LeCun in his most recent paper), but there's a long way to go in practice.