Interrogating an algorithm is a human right
"Being able to interrogate an AI system about how it reached its conclusions is a human right" - #Townsend2020 for President
What if we demand that companies give users an explanation for decisions that automated systems reach?
What if the algorithms that calculate all those decisions have programmed themselves, and have done so in ways we cannot understand?
Early iterations of AI were built on the idea that machines ought to reason according to rules and logic, making their inner workings transparent...
but now, machines are built more like biological systems, and learn by observing and experiencing. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm from example data and a desired output: the machine programs itself.
This creates a big problem: you can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers.
The neurons in the first layer each receive an input and then perform a calculation before outputting a new signal...
These outputs are fed, in a complex web, to the neurons in the next layer...
...until an overall output is produced.
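The layer-by-layer hand-off described above can be sketched with a toy fully connected network. This is a minimal illustration, not any particular system: the layer sizes and the sigmoid activation are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 3 inputs -> 4 first-layer neurons -> 1 output neuron.
# (Toy sizes; real deep nets have dozens or hundreds of layers.)
W1 = rng.normal(size=(3, 4))  # weights feeding the first layer
W2 = rng.normal(size=(4, 1))  # weights feeding the output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Each neuron in the first layer receives the inputs, performs its
    # calculation (a weighted sum passed through a nonlinearity), and
    # outputs a new signal...
    h = sigmoid(x @ W1)
    # ...those signals are fed, in a web of connections, to the next
    # layer, until an overall output is produced.
    return sigmoid(h @ W2)

print(forward(np.array([0.5, -1.0, 2.0])))
```

Even in this four-neuron toy, the "reasoning" is already just 16 numbers in `W1` and `W2`; nothing in them says *why* the output came out the way it did.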
Within all that, a process known as back-propagation tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
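Here is how that tweaking looks in miniature: a hand-rolled back-propagation loop training the same kind of tiny network on a toy task (XOR as stand-in "example data and a desired output"; the architecture, learning rate, and step count are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))  # 2 inputs -> 3 hidden neurons
W2 = rng.normal(size=(3, 1))  # 3 hidden neurons -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Example data and the desired outputs (XOR) the network should learn.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def loss():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

initial = loss()
for _ in range(5000):
    h = sigmoid(X @ W1)                  # forward pass, layer by layer
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)  # error signal at the output...
    d_h = (d_out @ W2.T) * h * (1 - h)   # ...propagated back to the hidden layer
    W2 -= 1.0 * h.T @ d_out              # tweak each neuron's weights a little
    W1 -= 1.0 * X.T @ d_h

print(initial, loss())
```

The loop never writes a rule like "output 1 when exactly one input is 1"; it only nudges weights until the error shrinks, which is exactly why the finished network offers no explanation of itself.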
The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog.
Just as many aspects of human behavior are impossible to explain in detail, maybe it isn't possible for AI to explain everything it does to our inquiring feeble minds.
I think if we’re going to use these things and rely on them, then let’s get a grip on how and why they’re giving us 'answers'. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other's.