Why Artificial Intelligence is Inherently Biased, and How to Minimize the Damage
Don Peppers
Customer experience expert, keynote speaker, business author, Founder of Peppers & Rogers Group
You might think that artificial intelligence, because it issues from a machine, should be objective and unbiased, but nothing could be further from the truth. AI is inherently biased, and one of your goals, if your company or organization relies on AI, machine learning, and the algorithms developed by these technologies, should be to minimize this bias to the extent possible.
The problem is that machines can only “learn” by absorbing massive data sets and real-world feedback, but all such data sets and feedback originate with humans, and ALL humans are subject to biases and prejudices.
Sorry, but that’s simply how the human brain works – as humans, we cannot avoid making generalizations about people, situations, and the world around us. These are just mental shortcuts designed to provide us with the earliest possible indications of what to expect in our environment. And, while most of these shortcuts are helpful (expect higher productivity with an experienced worker), many can also perpetuate harmful or at least unhelpful stereotyping (expect the CEO to be an older white man).
Virtually all of us have implicit biases and predilections like these, and try as we might to discipline them, our biases can easily be revealed by analyzing the word patterns, images, and language we use in our ordinary lives – the same images and word patterns that make up the data sets that are used to train AI engines.
The point is, no matter how pristinely designed any data set is, it starts with human beings and their own mental shortcuts. Machines ingest whatever human generalizations are embedded in a data set, and then these assumptions are reinforced by repeated learning, to the extent that even a subtle distinction or bias can be magnified in an AI’s output.
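To make this concrete, here is a minimal Python sketch, using entirely synthetic data and hypothetical variable names (none of this comes from any real system): the two groups below are generated with identical underlying ability, but because the human-supplied labels favor one group, any model fit to those labels will reproduce that preference.

```python
# A minimal sketch, with invented data, of how a bias already present in
# human-generated labels is absorbed by anything trained on them.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # two groups, 0 and 1
ability = rng.normal(0, 1, n)        # genuinely identical distributions

# Hypothetically, human reviewers rated group 1 slightly lower for the same ability.
human_label = (ability - 0.4 * group + rng.normal(0, 0.5, n)) > 0

# The training data itself already contains the gap; a learner fit to these
# labels sees nothing else and will recommend group 1 less often.
for g in (0, 1):
    approval_rate = human_label[group == g].mean()
    print(f"group {g}: approval rate in the human-supplied labels = {approval_rate:.1%}")
```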
Because AI comes from a machine, it gives us the illusion of objectivity, but there’s a “black box” element to machine learning that makes it difficult to understand how decisions are reached, which means the biases are easy to miss.
The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, for instance, is a computerized scoring algorithm designed to help judges and courts assess the likelihood that offenders will re-offend, so as to identify those who should be eligible for parole or release, versus those who ought to be incarcerated for society’s good. The system is based on more than a hundred questions about a defendant.
One of the major goals of having such a computerized system in the first place was to make the process more objective by reducing the bias that the judgments of individual judges, sheriffs, and other officials might introduce. Unfortunately, COMPAS correctly predicts recidivism just 61% of the time, which is only marginally better than a coin toss, and, as it turns out, it is markedly biased against black defendants.
A chart in a 2016 ProPublica study of cases in Broward County, Florida, maps the actual incidence of recidivism by COMPAS risk decile (with lower-numbered deciles indicating higher risk) for both white and black defendants, and it brings into stark, visual relief the fact that COMPAS is much better at predicting recidivism among white defendants than among black defendants.
In addition to this inaccuracy, the study found that black defendants who did not go on to re-offend were almost twice as likely as white defendants to have been labeled higher-risk (44.9% vs. 23.5%), while white defendants who did re-offend were far more likely than black re-offenders to have been mistakenly labeled lower-risk (47.7% vs. 28%).
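For readers who want to see how this kind of error-rate comparison is computed, here is a short Python sketch using synthetic numbers (not the ProPublica data): group by group, it asks how often people who did not re-offend were nonetheless labeled higher-risk, and how often people who did re-offend were labeled lower-risk.

```python
# Synthetic illustration of the ProPublica-style error-rate comparison.
# All numbers below are invented purely to show the calculation.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.choice(["white", "black"], n)
reoffended = rng.random(n) < 0.45
# Risk labels deliberately skewed against one group, for illustration only.
higher_risk = rng.random(n) < np.where(group == "black", 0.60, 0.35)

for g in ("white", "black"):
    no_reoffense = (group == g) & ~reoffended
    did_reoffend = (group == g) & reoffended
    fpr = higher_risk[no_reoffense].mean()      # labeled higher-risk, but didn't re-offend
    fnr = (~higher_risk[did_reoffend]).mean()   # labeled lower-risk, yet did re-offend
    print(f"{g}: higher-risk but no re-offense = {fpr:.1%}, "
          f"lower-risk yet re-offended = {fnr:.1%}")
```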
Even though none of COMPAS’s questions involve race or ethnicity, the racial bias in the data probably stems from how closely correlated to race and ethnicity some of the other questions are, such as whether your father, mother, or other relatives have ever been arrested or sent to prison, or whether your friends or family have been victims of crime in the past.
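Here is one more short Python sketch, again with invented numbers rather than any real COMPAS data, of how such a proxy question can carry racial information into a model even when race itself is never an input: a feature that is strongly correlated with group membership will push the two groups’ scores apart all on its own.

```python
# A minimal sketch, with hypothetical features, of proxy-variable leakage:
# the model never sees group membership, only a question correlated with it.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
race = rng.integers(0, 2, n)                 # never shown to the scoring rule

# A question like "has a relative ever been arrested?" can be strongly
# correlated with race because of historical policing patterns (invented rates).
relative_arrested = rng.random(n) < np.where(race == 1, 0.6, 0.2)

# A scoring rule that weights this proxy question scores the groups differently
# despite never using race directly.
score = 2.0 * relative_arrested + rng.normal(0, 0.5, n)
for r in (0, 1):
    print(f"group {r}: mean risk score = {score[race == r].mean():.2f}")

print("proxy/group correlation:", round(float(np.corrcoef(relative_arrested, race)[0, 1]), 2))
```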
In the end, it may be virtually impossible to develop artificially “intelligent” systems that aren’t at least somewhat contaminated and biased, in the same way that it is impossible for a human mind to be entirely unaffected by bias. But if you want your own machine learning tools to be as objective as possible, then you should do your best to minimize the bias by:
- Hiring diverse teams (i.e., diverse in race, culture, and gender) to scan for implicit biases;
- Taking time to poke holes in your assumptions, or perhaps even getting someone from outside to do it for you; and
- Bringing in a few experts from disciplines other than engineering or computer science, and giving them authority to make decisions.
Leadership and Keynote Speaker and member of the Data Science Research Centre at University of Derby
6y · It is well worth reading the full ProPublica report that Don identifies about the COMPAS system. Important reading for the UK justice system, which is assessing a similar tool, with similarly biased results.
Leadership and Keynote Speaker and member of the Data Science Research Centre at University of Derby
6y · Thanks Don Peppers for this timely reminder of the hype surrounding AI and ML. Many organisations are using Machine Learning to speed up and attempt to objectivise decision making based on profiling the data that they already have. This data is all created by human decision making and, therefore, fully incorporates human subjectivism, biases and variability. The ML / AI will, therefore, learn how to behave as human decision makers, no more and no less. If organisations really want to embed a fully objective decision making process, the answer is simple: build the decision tree based on the objective criteria and implement this. It is far more explainable to the humans who are affected by the decisions. It is also fully #GDPR compliant, which ML is definitely not. AI / ML in such applications (loan applications, CV vetting, etc.) is essentially being lazy and only results in an ML system which behaves with the same biases as humans.
keyboard activist
6y · Had to learn programming at school, and the first thing we learned was GIGO. So with AI it's Bias In Bias Out?
Engineering Manager. The most personable, helpful and open one you’ll ever meet.
6y · Tough subject, simple thoughts. As we progress down this inevitable path, let's make certain it serves mankind’s best interests better than we do.