Tapping into a US$15 Billion Opportunity - Smart Robotics Empowered by Human-Robot Interaction

Robots are increasingly replacing humans in activities that were once performed exclusively by people. According to the World Economic Forum (WEF), robots could take over 52 percent of the current workload in less than a decade, nearly double the 29 percent of tasks machines handle today. The same report warns that intelligent machines could force 75 million people out of jobs as early as 2022.

Robots are thus becoming more and more deployable. Some of the most immediate replacements of humans by robots will be in manufacturing, transportation, services, disaster relief, defense, and logistics. So don't be surprised if, in the near future, you see a robot serving you in a restaurant, performing complex jobs in a manufacturing plant, or acting as a police officer.

However, how advanced robots can become depends heavily on human-robot interaction technologies in general, and on human-assisted error detection and error mitigation in particular.

The Holy Grail of Robotics

How robots can recognize that an error has occurred is the most critical and widely debated topic among robotics front runners. Even more complex is the second most debated topic in robotics: how human-robot interaction can be used to make robots intelligent enough not to repeat errors. Errors are not just a deterrent to the growth of the robotics industry across different domains and activities; they are also expensive and complex to resolve. Error recognition is extremely difficult because errors are unexpected and, thus, hard to detect and even harder to predict. One of the simplest ways for a robot to recognize that an error has occurred is to observe the behavior of a human. For that, the robot needs to know how humans behave in the event of an error, which is easier said than done. A robot needs extensive data sets (videos, simulations, etc.) to understand human behavior, and training robots for an effectively infinite number of scenarios is always a challenge.

Errors in robotics fall into two categories: errors caused by robots and errors caused by humans. The first is of primary importance here, since the second can be tackled only once robots are more widely deployed and people are sensitized to robots working alongside us to enhance our quality of life. Errors caused by robots can be further subdivided into errors where the robot violates a predefined process programmed by a human, and technical failures. The latter is manageable, as we have the capability to tackle problems in systems we have created ourselves. The bigger challenge is the former, where the robot breaches a protocol created by humans.

This usually happens because the robot does not know what is needed to achieve a specific goal, or has not been trained for the scenario it has just encountered, and ends up doing something that is not expected of it. Think of a robot stuck in a lift full of people, with no idea what to do next.

Humans: The Key to Making Robots Better Robots

Reinforcement learning (RL) is key to the success of robotics. RL enables a robot to learn its optimal behavioral strategy in dynamic environments based on the feedback it receives from humans. Explicit human feedback during robot RL is crucial: an explicit reward function can be adapted so that the robot learns to perform adequately when the same error, or a similar scenario, arises again.
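
To make this concrete, here is a minimal sketch, under simplifying assumptions, of how explicit human feedback can serve as the reward in a basic Q-learning update. The action set and the ask_human_for_feedback() helper are hypothetical placeholders for illustration, not the method of any specific system mentioned in this article.

```python
# Sketch: explicit human feedback drives a robot's Q-learning update.
# The action set and ask_human_for_feedback() are illustrative stand-ins.
import random
from collections import defaultdict

ACTIONS = ["move_forward", "turn_left", "turn_right", "stop"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def ask_human_for_feedback() -> float:
    """Placeholder: the human trainer rates the last action (+1 good, -1 error)."""
    return float(input("Rate the robot's last action (+1 / -1): "))

def choose_action(state):
    """Mostly exploit the current estimates, explore occasionally."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action, reward, next_state):
    """Standard Q-learning update, with the reward supplied by a human."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

# Typical loop (the robot/environment objects are assumptions):
#   action = choose_action(state)
#   next_state = robot.execute(action)
#   update(state, action, ask_human_for_feedback(), next_state)
```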

However, it is demanding and tiresome for a human to continuously and explicitly generate feedback to improve a robot. RL in real-world robotic applications is challenging for several reasons: first, the high-dimensional, continuous state and action spaces in which robots operate; second, the high cost of generating real-world data and real-world experience, which cannot simply be replaced by learning in simulation; and, last but not least, the lack of a straightforward way to specify appropriate reward functions, including reward shaping, to encode goals. These challenges grow rapidly with the complexity of the task and the many pitfalls of the real world, which often makes it impossible to decide whether an RL episode succeeded or failed.
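
The difficulty of reward specification is easy to illustrate. The toy function below, for a hypothetical reaching task with made-up weights, shows why hand-crafting a shaped reward is not straightforward: every constant is a design decision, and a poor choice can quietly teach the robot the wrong behavior.

```python
# Toy example of a hand-shaped reward; the task and all constants are hypothetical.
import math

def shaped_reward(gripper_xy, goal_xy, collided: bool, reached: bool) -> float:
    distance = math.dist(gripper_xy, goal_xy)
    reward = -0.1 * distance        # dense term: closer is better (weight is a guess)
    if collided:
        reward -= 5.0               # collision penalty, magnitude chosen by hand
    if reached:
        reward += 10.0              # sparse success bonus, also hand-tuned
    return reward
```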

Hence, the development of implicit approaches to break this logjam and make the solution scalable is highly relevant. Various experiments are already under way to improve human-robot interaction along these lines. Recent work on the subject includes the use of the error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as an intrinsically generated, implicit feedback signal (reward) for RL. Initial results using intrinsically generated EEG-based human feedback in RL to improve gesture-based robot control during human-robot interaction are promising, to say the least. Such systems use EEG to monitor a human's brain signals as they watch a robot work. When the human spots an error, brain signals called error potentials are generated, and these signals can be used to correct the robot's action right away. In experiments so far, an accuracy of 70 percent has been achieved. This approach gives human collaborators a fast, natural way to interact with robots. Communicating through brain activity is so intuitive that the human does not need to consciously formulate an intention; humans can become a passive go-between in the interaction, which makes this one of the most promising ways to make robots better than humans at certain activities, empowered by humans.
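
As a rough sketch of the idea, the snippet below shows how the output of an ErrP classifier could be mapped to an implicit reward that both updates the learner and triggers an immediate correction. The detect_errp() classifier, the threshold value, and the robot interface are assumptions made for illustration, not an existing API from the work described above.

```python
# Sketch: implicit EEG-based feedback as an RL reward.
# detect_errp() and the robot interface are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ErrPResult:
    error_detected: bool   # did the classifier flag an error-related potential?
    confidence: float      # classifier confidence in [0, 1]

def detect_errp(eeg_window) -> ErrPResult:
    """Placeholder for a trained ErrP classifier applied to a short EEG window."""
    raise NotImplementedError("Stand-in for a real ErrP decoder")

def implicit_reward(result: ErrPResult, threshold: float = 0.7) -> float:
    """Map the classifier output to an RL reward: penalize detected errors."""
    if result.error_detected and result.confidence >= threshold:
        return -1.0        # the human's brain response flagged the action as wrong
    return 0.1             # weak positive signal when no error is detected

def supervise_step(robot, eeg_window, state, action, q_update):
    """After the robot acts, read the human's EEG, reward the learner,
    and revert the action if an error potential was detected."""
    result = detect_errp(eeg_window)
    reward = implicit_reward(result)
    q_update(state, action, reward)          # feed the implicit reward into RL
    if reward < 0:
        robot.undo_last_action()             # hypothetical immediate correction
```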

Various companies are trying to improve the efficacy and efficiency of robots; one such company, Singapore-based Cognicept, is doing some exciting work. Cognicept provides human-in-the-loop (HITL) error handling through its telerobotic networking technology and remote robot trainers. Cognicept's supervised autonomy products make unpredictable robotics applications reliable and enable use cases that were previously impossible.

About the Author

Deepak is a well-decorated serial technopreneur who created one of the most respected AgTech companies, MyCrop, and previously founded one of the most innovative social data analytics platforms, HnyB, as well as HnyB Tech-Incubations, an angel incubator for enterprising engineering students. He is currently advising Cognicept.
