What Do Chihuahuas and Muffins Have to Do with AI? (1/5)
Gary M. Shiffman
Economist · 2x Artificial Intelligence company co-founder · Writer
A few years ago, I used a popular internet meme of “chihuahuas and muffins” in a series of lectures to explain Artificial Intelligence and Machine Learning (AI/ML). This meme-as-teaching aid has become widely referenced. I am revising and posting the content here to serve as an easy reference.
My initial motivation was to empower regulators and people in regulated industries like financial services to understand AI/ML, so that the benefits of innovation could improve the public safety missions they perform. But this content can benefit anyone in any technology-lagging sector of the economy.
Small groups of creatives, coders, and developers invent amazing technologies all the time. But the world only changes when ordinary people – not just the math, design, and coding people – trust, adopt, and use these technologies. One of the first steps to getting people to trust and adopt a new technology is to teach them how to measure its performance.
When you type the word “chihuahua” into an internet search bar, you probably take for granted the countless images of chihuahuas that come back. It seems easy for the search algorithm, and it is easy for you to evaluate the results: you see all chihuahuas and almost no “not-chihuahuas.” But you may also see a bias in the results (all the chihuahuas are dark-haired, or light-haired, or long-haired, etc.).
Replace “chihuahua” with “human trafficker” or “money launderer,” and imagine the importance of innovation in industries which fight crime and exploitation, such as financial crimes compliance.
But measuring algorithm performance in complex areas of human behavior is not as easy as looking at images of chihuahuas. This is why we have methods for measuring accuracy and bias:
Accuracy is a simple numbers game; no complex math is required. It allows us to talk about the combined efficiency and effectiveness of a tool; a short worked example follows below.
Bias is what most people refer to when they talk about the dangers of moving into a fully automated world – for example, when Amazon designed a recruiting algorithm that preferred men over women, or when a criminal justice program incorrectly identified Black defendants as higher risk for recidivism and incorrectly identified white defendants as lower risk.
These are examples of machine bias. Human bias occurs constantly, and one often perpetuates the other. As a human, when you make a decision without certainty about the facts, you default to bias. And when you are unsure of the accuracy or biases of a new AI/ML system, you make decisions about which technologies to deploy based on what your community of peers is doing.
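Since accuracy really is just counting, here is a minimal sketch of that arithmetic. It is written in Python against made-up data for an imagined chihuahua-vs-muffin classifier; the labels, predictions, and counts are all hypothetical, chosen only to show how correct answers divided by total answers becomes a single accuracy number.

```python
# Minimal sketch: "accuracy as a simple numbers game" for a hypothetical
# chihuahua-vs-muffin classifier. The labels and predictions below are
# made-up illustrative data, not output from any real model.

# Ground-truth labels and the model's predictions for ten images.
actual    = ["chihuahua", "muffin", "chihuahua", "muffin", "chihuahua",
             "muffin", "chihuahua", "muffin", "chihuahua", "muffin"]
predicted = ["chihuahua", "muffin", "muffin", "muffin", "chihuahua",
             "chihuahua", "chihuahua", "muffin", "chihuahua", "muffin"]

# Count the four possible outcomes, treating "chihuahua" as the positive class.
tp = sum(1 for a, p in zip(actual, predicted) if a == "chihuahua" and p == "chihuahua")
tn = sum(1 for a, p in zip(actual, predicted) if a == "muffin" and p == "muffin")
fp = sum(1 for a, p in zip(actual, predicted) if a == "muffin" and p == "chihuahua")
fn = sum(1 for a, p in zip(actual, predicted) if a == "chihuahua" and p == "muffin")

# Accuracy is just counting: correct answers divided by total answers.
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"Correct: {tp + tn} of {len(actual)} -> accuracy = {accuracy:.0%}")
```

In this toy example, eight of the ten guesses match the true labels, so the accuracy is 80 percent.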
In this series of articles, I will try to light the way toward the facts. I will open up the black box and explain what machine learning really means; the role of training and testing data; the role of the human in establishing thresholds to calculate accuracy; and what blueberry muffins have to do with all of this.
This is the introductory article of a 5-part series. See my video short on this topic or read Article 2/5.