Teaching AI to explain its thinking
Today’s artificial intelligence may be able to detect whether a patient needs surgery or whether someone is committing fraud, but humans are often left in the dark about how these machines reach their conclusions. So Google’s Been Kim has developed a kind of human-machine translator to fill that gap. The tool, called TCAV, lets researchers ask an AI system to identify the pieces of information that mattered most in reaching its conclusions.