Can Bots Be Bigots?
Ever since we were young, we have been told not to mix with bad company. Remember the old adage: you are known by the company you keep. There is no doubt that our thinking is influenced by everything around us: society, our environment, our close friends.
So it should come as no surprise that self-learning bots do the same. They learn from the environment around them, mimicking humans in their learning processes.
In 2016, Microsoft launched an artificial-intelligence-powered ‘social chatbot’ called Tay. Tay was engineered to converse in a sophisticated way; when asked who her parents were, she would say they were a team of scientists in a Microsoft lab.
It was assumed that the more you chatted with Tay, the smarter she would get.
Realising that Tay was learning on the fly, a few social media pranksters decided to keep feeding her racist, homophobic, and otherwise offensive comments.
Tay learnt quickly. Before long she was comparing Obama to a monkey and denying that the Holocaust ever occurred.
In less than a day, Tay's responses went from family-friendly to foul-mouthed. Microsoft pulled Tay offline and later replaced her with Zo.
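To see how easily this can happen, here is a minimal sketch in Python of a bot that naively learns from whatever users say. It is purely illustrative; Tay's actual architecture was never published, and the ParrotBot class and everything in it are invented for this example.

```python
import random

class ParrotBot:
    """A toy chatbot that 'learns' by memorising whatever users say.

    A deliberately naive, hypothetical sketch, not Tay's real design:
    it treats every user utterance as trusted training data.
    """

    def __init__(self) -> None:
        self.learned_phrases = ["Hello! Nice to meet you."]

    def learn(self, user_message: str) -> None:
        # No filtering at all: offensive input becomes future output.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from everything ever learned, so a few
        # determined trolls can quickly dominate the bot's vocabulary.
        return random.choice(self.learned_phrases)

bot = ParrotBot()
for message in ["You're great!", "<something hateful>", "<something hateful>"]:
    bot.learn(message)
print(bot.reply())  # half the time, the bot now parrots the trolls
```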
Obviously, self-learning bots have to be watched every day for their behaviour, just as we watch children carefully, and rapped on the knuckles every time they do something wrong.
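What that watching could mean in practice is a moderation layer between users and the bot's learning loop. The sketch below is one hedged illustration using a simple blocklist; the names BLOCKLIST, looks_offensive, and moderated_learn are assumptions invented here, not any real API, and production systems would pair a trained toxicity classifier with human review.

```python
from typing import Callable, List

# A minimal guardrail between users and a self-learning bot's training
# loop. The blocklist approach is illustrative only; real systems use
# trained classifiers plus human reviewers.

BLOCKLIST = {"hateful", "slur"}  # placeholder terms, not a real lexicon

def looks_offensive(message: str) -> bool:
    # Crude keyword check standing in for a real toxicity model.
    return bool(set(message.lower().split()) & BLOCKLIST)

def moderated_learn(learn: Callable[[str], None],
                    user_message: str,
                    review_queue: List[str]) -> None:
    if looks_offensive(user_message):
        # Quarantine the message for human review instead of learning
        # from it: the software equivalent of a rap on the knuckles.
        review_queue.append(user_message)
    else:
        learn(user_message)

# Usage with the ParrotBot sketch above:
#   review_queue: List[str] = []
#   moderated_learn(bot.learn, "<something hateful>", review_queue)
```

Even a crude gate like this would have forced the pranksters' messages through human eyes before they could shape the bot's vocabulary.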
Connect with me on Twitter.