Human failure in AI is good
David Taylor
Finding pragmatic, privacy-respectful global solutions to IT, IT Sec and AI challenges for multi-nationals.
What Digital Ethics in AI can learn from a recent privacy breach involving Alexa.
“If AI is so great then why was there a privacy breach with Alexa?”
I was recently battered with this question. Sometimes smug questions can be thought-provoking. The answer, according to Amazon, is that it wasn’t Alexa that caused the breach; it was human error.
The story begins when a curious German took the GDPR privacy law for a test drive. He asked Amazon for all the personal data it held on him. Amazon sent him various files, including some Alexa voice files. He was surprised, because he didn’t use Alexa or own any Alexa-enabled devices: Amazon had sent him someone else’s Alexa voice files. Such recordings are inherently personal, and often private and rather intimate.
The efforts of c’t magazine took the matter further.
By analysing the files and through some other clever investigation, its journalists tracked down the person who had created them. c’t magazine also claims to have exposed other GDPR failings by Amazon in its handling of this incident.
Amazon says it was human error. Human error in the midst of AI? It is almost Zen.
This event is a touch point between AI and humans, one that is more immediate and complex than the abstract question of whether AI will overcome humanity. Yet it may offer some insights into the complexities of the human-versus-AI scenario and the role that law plays in it.
What privacy protection measures are intrinsic to Alexa? The uses to which Alexa is being put are ever expanding and sometimes surprising. The pace is rapid and divergent, so keeping a grip on privacy is an enormous challenge. That does not mean we should not try: internal privacy design in new products and innovation is important.
If Alexa itself was not able to respond in this situation, why doesn’t someone else build AI to accurately find, fetch and send the personal data that a company holds on a person? Internal privacy design is not always enough; external (third-party) privacy technology has a complementary place. AI products can enable privacy, and their development is ongoing.
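To make the idea concrete, here is a minimal sketch in Python of the kind of safeguard such a tool could apply. All the names (Record, build_dsar_export, the user IDs and file paths) are hypothetical and say nothing about how Amazon’s actual systems work; the point is simply that before a data-access export goes out, every file can be checked against the verified identity of the requester, and a mixed batch blocked rather than shipped.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    subject_id: str  # the person the data is about
    path: str        # location of the stored file (e.g. a voice clip)

def build_dsar_export(requester_id: str, records: list[Record]) -> list[Record]:
    """Return only records that verifiably belong to the requester.

    Raises instead of silently exporting if any record is mismatched,
    so a mixed batch is caught before anything is sent.
    """
    mismatched = [r for r in records if r.subject_id != requester_id]
    if mismatched:
        raise ValueError(
            f"{len(mismatched)} record(s) belong to another data subject; "
            "export blocked pending human review"
        )
    return list(records)

# Usage: a batch accidentally containing someone else's voice file is blocked.
batch = [
    Record("user-123", "/exports/user-123/profile.json"),
    Record("user-456", "/exports/user-456/alexa-clip-0017.wav"),  # wrong person
]
try:
    build_dsar_export("user-123", batch)
except ValueError as err:
    print(err)
```

A check like this would not stop Alexa from recording anything; it targets precisely the human-error step in this story, where the wrong person’s files were attached to an export.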
Despite the potential of AI and the possibilities of internal privacy design and third-party offerings, something still went wrong. Amazon said the Alexa files were sent in human error. Does this show exactly why humans are redundant? Or does it mean there is still a place for humans in the AI universe?
AI sells, but appealing to consumers is not enough; products must also comply with the law. Hazardous products and their imperfect manufacturers have led society to insist on legal safeguards.
Laws such as the GDPR allow for human involvement in the interaction between humans and AI. How have humans put this interaction to use in this Alexa example? Amazon appears, by its own admission, to have put humans to poor use at the place where AI and privacy law meet. The failures were, we were told, human. The alternative, that the AI sent the wrong information, may be too frightening to reveal or contemplate. How did the humans on the other side fare? They fared well: they tested the integrity of both the technology and the organisation.
In this Alexa example, the interaction between the humans requesting the data and the AI was mediated through Amazon, and it appears the organisation failed. There is no indication that what Alexa recorded was unlawful; it was the sending of that data to the wrong person that constituted the privacy breach. Neither side pointed fingers at Alexa. The humans at Amazon looked bad. The person whose Alexa files were sent out looked bad too: he was surprised at what had been recorded and how it could be used. Really? It seems the only humans who failed in this scenario were those not asking enough questions. On the downside, the files should not have been sent to the wrong person, and Amazon made some other bungles besides.
But this event also offers something positive.
AI did its job and privacy law did its job.
The person making the request of Amazon put the law to good use. When an error occurred it was discovered, and c’t magazine was able to exploit the error and expose Amazon. This is good: it shows the law is working.
In this scenario, and in the context of AI, the failings that occurred are perhaps a reminder that as long as we can ask questions and interrogate the answers, humans are still relevant, even in the age of AI.
As long as humans are part of the problem, at least we know we are still part of the solution – for now.