Adversarial machine learning
Remember, building robots is extremely dangerous and should not be attempted without great care.
When you enter, you don't know what is going to happen to your machine at the highest level... and I'm afraid that, at the very highest level, against the great power of adversarial machine learning, we might get a brand-new champion...
The image on the left is an ordinary image of a stop sign. Ian Goodfellow and Patrick McDaniel (https://cacm.acm.org/magazines/2018/7/229030-making-machine-learning-robust-against-adversarial-inputs/fulltext) produced the image on the right, which forces a deep neural network to classify it as a yield sign. Adversaries could use such an image to make a self-driving car behave dangerously and potentially cause an accident.
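Perturbations like this are typically crafted with gradient-based methods such as the fast gradient sign method (FGSM), which nudges every input feature in the direction that most increases the model's loss. The following is a minimal NumPy sketch of FGSM on a toy logistic-regression classifier; the model, weights, input, and epsilon here are illustrative assumptions, not taken from the article above:

```python
import numpy as np

# Toy logistic-regression "classifier": p(y=1|x) = sigmoid(w.x + b).
# The weights and input below are illustrative assumptions.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1
x = w / np.linalg.norm(w)   # a clean input the model confidently labels 1
y = 1.0                     # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

# FGSM: step the input in the sign of the loss gradient, bounded by an
# epsilon-ball in the L-infinity norm, to push the model toward an error.
epsilon = 0.5
x_adv = x + epsilon * np.sign(loss_grad_x(x, y))

print("clean prediction:      ", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```

Even though no single feature moves by more than epsilon, the perturbed input crosses the decision boundary and the prediction flips.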
When adversaries target machine learning models used for malware detection, a misclassified malware sample can be identified as a legitimate executable. In that case, the detection model will not alert on the malicious logic when the file is executed at the endpoint.
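To make this scenario concrete, here is a sketch of a feature-addition evasion attack against a hypothetical linear detector scoring binary features such as API-call indicators; the weights, threshold, and sample are all made up for illustration. The attacker enables benign-looking features (those with negative weights) until the score drops below the alert threshold, without touching the features tied to the malicious payload:

```python
import numpy as np

# Hypothetical linear malware detector over binary features (e.g.,
# API-call indicators). Weights, threshold, and sample are illustrative.
w = np.array([1.5, 1.2, 0.8, 0.6, 0.3,       # suspicious features
              -0.4, -0.7, -0.9, -1.1, -1.6])  # benign-looking features
threshold = 0.0

def score(x):
    return w @ x  # score > threshold => classified as malicious

# A malware sample with all suspicious features present: clearly flagged.
x = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=float)

# Feature-addition evasion: greedily enable absent benign-looking
# features (most negative weights first) until the detector is fooled.
flips = 0
for i in np.argsort(w):                  # ascending: most negative first
    if score(x) <= threshold:
        break
    if w[i] < 0 and x[i] == 0:
        x[i] = 1.0
        flips += 1

print(f"evaded: {score(x) <= threshold}, benign features added: {flips}")
```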
Daniel Geng and Rishi Veerapaneni (https://ml.berkeley.edu/blog/2018/01/10/adversarial-examples/) discuss ways of protecting against adversarial attacks.
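One widely discussed defense is adversarial training: augmenting the training set with adversarial examples so the model learns to classify them correctly. A minimal NumPy sketch under the same toy logistic-regression assumptions as above (the dataset, epsilon, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative binary dataset: two Gaussian blobs in 16 dimensions.
n, d = 200, 16
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, b = np.zeros(d), 0.0
epsilon, lr = 0.25, 0.1

for _ in range(200):
    # Inner step: craft FGSM perturbations against the current model.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w        # per-example loss gradient w.r.t. x
    X_adv = X + epsilon * np.sign(grad_x)

    # Outer step: fit the model on clean and adversarial examples together.
    X_all, y_all = np.vstack([X, X_adv]), np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * X_all.T @ (p_all - y_all) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

# The hardened model should still classify freshly perturbed inputs well.
p = sigmoid(X @ w + b)
X_adv = X + epsilon * np.sign((p - y)[:, None] * w)
acc = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print(f"accuracy on FGSM-perturbed inputs: {acc:.2f}")
```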
A robot must protect its own existence as long as such protection does not conflict with its original purpose. Neural networks and deep learning are at the heart of our future robots, and we need to remember that these models can be fooled easily. Studying attack models against pattern classifiers helps us identify vulnerabilities in these models and build a taxonomy of defense strategies.
AI-based robots could be the best thing to happen to the world, or its most dangerous threat. AI experts should build robots whose only role is to destroy the bad ones, and hopefully those robots will win every time and save humanity.