Keeping Robots and Humans Separate
At the beginning of this month, the Financial Times published an article about hybrid systems (setups in which automated processes and humans work together) and the different approaches to building them.
The story opens with the death of a pedestrian who was struck by one of Uber's self-driving cars. The accident occurred when the pedestrian was crossing the road and the system, unable to react properly, defaulted to driver control. That driver, local police concluded, was distracted, potentially by a TV show playing on their smartphone. The accident may have been unavoidable even with the driver's full attention on the road: according to research from Stanford University, it takes a human driver around six seconds to recover awareness and take back control of the vehicle.
The article outlines three approaches taken by those designing automation. The first, in use in the Uber case, treats the human as a 'backup' to the automated technology; the second always leaves sensitive decisions to the judgement of a flesh-and-blood person; and the third uses AI purely as an aid to a person, for tasks it cannot handle on its own.
In previous posts on this blog, I have written extensively about how robots and humans can work together, starting from the position that robots are not there to replace humans but to help them by taking on routine, mundane tasks that require little creativity and depend heavily on data. That is the third approach outlined above. I have not tackled the other two, since they do not fall within the operating parameters of current Retresco technology.
So, having spoken in the past about how automated technology and humans can come together, it is also important to talk about how robots and humans should be kept separate.
It is important to keep in mind that separation is not always feasible, because some tasks carry potential consequences serious enough that human oversight is a necessity. How serious? Look back at the opening of this post. That was not the first fatality involving self-driving cars, nor is it likely to be the last. Beyond this article, there are interesting questions about where fault lies when such incidents occur. But, as Fortune points out in another article, the biggest risk with self-driving cars still comes from humans, not the cars themselves.
There will be a lot of debate over which jobs can be farmed out to automated technology, whether they should be handed over entirely or only in part, and if in part, to what degree. But there are a few basic principles we should follow if we want to go down the path of separation.
Firstly, any task not overseen by a human should carry no risk of serious harm. If a machine can do something on its own, that is great; but if the consequences of its actions could be serious and adverse, a rethink is necessary. Likewise, an automated system should never be in a position where it could commit libel. Again, this is a judgement call, and one that needs careful and considered thought.
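To make the principle concrete, here is a minimal sketch of such a gate in Python. The risk categories and the routing rule are hypothetical illustrations, not a description of Retresco's or anyone else's actual system; the point is simply that only tasks whose worst-case consequences are negligible run unsupervised.

```python
from enum import Enum

class Risk(Enum):
    # Hypothetical risk categories for illustration only.
    NEGLIGIBLE = 0    # e.g. a routine weather or sports summary
    REPUTATIONAL = 1  # e.g. text making claims about a real person (libel risk)
    PHYSICAL = 2      # e.g. anything steering a vehicle

def route_task(task_risk: Risk) -> str:
    """Decide whether a task may run without human oversight.

    Only tasks whose worst-case consequences are negligible are
    fully automated; everything else is queued for human review.
    """
    if task_risk is Risk.NEGLIGIBLE:
        return "automate"
    return "human_review"

print(route_task(Risk.NEGLIGIBLE))  # -> automate
print(route_task(Risk.PHYSICAL))    # -> human_review
```

The design choice worth noting is that the gate keys on worst-case consequences, not on how often the system is right: a task that is handled correctly 99% of the time still goes to a human if the remaining 1% could cause serious harm.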
The key to getting this right is a clear and robust development process, planned properly and with a definite objective. Blind spots should be accounted for at the planning stage, and the interpretation of the input data should be pinned down so that it leaves no room for ambiguity. This requires conceptual work up front, but it prepares the development of such systems for contingencies.
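In the context of automated content, one practical expression of that conceptual work is validating every input record against a schema fixed at the planning stage. The sketch below uses hypothetical field names to show the idea: anything missing or ambiguous is held back for human review rather than interpreted creatively.

```python
def validate_record(record: dict, required_fields: dict) -> list:
    """Return a list of problems; an empty list means the record is
    unambiguous enough to feed into automated text generation.

    `required_fields` maps each expected field name to its type,
    fixed at the planning stage so the system never has to guess.
    """
    problems = []
    for field, expected_type in required_fields.items():
        if field not in record or record[field] is None:
            problems.append(f"missing value for '{field}'")
        elif not isinstance(record[field], expected_type):
            problems.append(f"'{field}' has unexpected type "
                            f"{type(record[field]).__name__}")
    return problems

# A record with a missing field is rejected up front, which is where
# blind spots would otherwise turn into published errors.
issues = validate_record({"team": "FC Example", "score": "2:1"},
                         {"team": str, "score": str, "date": str})
if issues:
    print("held back for review:", issues)  # missing value for 'date'
```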
The data so far, however, shows that self-driving vehicles and automated content are still much, much less prone to error than their human counterparts. But it will pay to be conscientious and realistic about the limitations of what we offer.
This article was originally published on Retresco's blog.