AI Bias - An Ethical Challenge for Technologists
M. Ahmad S.
CTO | Chief Architect | CDO | Data and AI Strategist | Technology Leader | D.Eng Candidate in AI/ML
{With the renewed interest in AI driven by ChatGPT and similar capabilities, I thought I would rewrite my previous article and update it with some of the current challenges and dilemmas.}
When it comes to Artificial Intelligence (AI) and its impact on our daily lives, most of us have a mythical understanding of it, because we have let science fiction define it for us. Thanks to pop-culture feeds of clunky robots and humanoid androids, talking or flying cars, and bodiless omnipresent software like HAL or Skynet, a large segment of the population discusses AI in the same fashion as it discusses ALIENS or ZOMBIES. This figment of our collective imagination has overlaid our consciousness in a way that makes AI interesting, aspirational, dangerous, freaky, and supernatural all at once. A recent survey by the Pew Research Center makes it quite evident that more Americans are concerned about AI adoption than are excited about the prospect of having Artificial Intelligence more intertwined in their lives. Digging deeper into these concerns, it becomes clear that fear of a negative impact on human beings is the main source of this trepidation.
AI does pose several ethical dilemmas to practitioners and advocates. At its core, we are giving a piece of code - an algorithm - the power to recommend and/or make decisions. Most of these algorithms are probabilistic in nature, implying there is some probability that any recommendation generated by an algorithm is incorrect. The impact of an inaccurate or faulty recommendation is directly tied to the business situation we are trying to handle through AI. A wrong recommendation in my Netflix "next to watch" list and a wrongly identified perpetrator in a video feed are both outputs of an AI engine, yet they have vastly different consequences for individuals and society.
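To make this concrete, here is a minimal sketch - with purely hypothetical numbers, not drawn from any real system - of how the same error rate translates into very different expected harm depending on the application:

```python
# Hypothetical illustration: identical error probabilities,
# very different consequences depending on the decision's stakes.

def expected_errors(n_decisions: int, error_rate: float) -> float:
    """Expected number of wrong outputs out of n_decisions."""
    return n_decisions * error_rate

# Assume (hypothetically) both systems are wrong 5% of the time.
ERROR_RATE = 0.05

# A movie recommender serving 1,000,000 suggestions a day:
movie_errors = expected_errors(1_000_000, ERROR_RATE)

# A facial-recognition system matching 10,000 video frames:
match_errors = expected_errors(10_000, ERROR_RATE)

# The error *rate* is the same; the human cost of each error is not.
# A bad movie night is an annoyance; a false match can cost someone
# their liberty. Acceptable thresholds must be set per use case.
print(f"Bad movie suggestions/day: {movie_errors:.0f}")
print(f"Possible misidentifications: {match_errors:.0f}")
```

The point of the sketch is that an acceptable error rate cannot be chosen in the abstract; it must be weighed against the cost of each individual error in the specific business situation.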
We should also have a directional understanding of where AI stands in its ability to augment or replace human activities. Consider AI an augmenter or expediter of activities that are heavy on large-scale mathematics, pattern recognition, and statistical reasoning. AI does not operate very well for activities that are based on self-directed goals, common sense, or value judgement.
AI is becoming pervasive and ubiquitous. Within a few years it has gone from the academic world to our homes and workplaces. Whether we realize it or not, we are interacting with AI on a daily basis, and some of the examples may surprise us.
In several cases, AI functions have already passed the Turing Test. For example, it is becoming very hard to know whether a social media feed is generated by a human or a bot; people interact with Alexa as if it were a person; and we may be speaking with a machine-generated voice agent on the other end of the phone that responds interactively to our words. This by itself raises numerous ethical questions regarding transparency in the use of AI.
A bigger dilemma related to AI use is that it is prone to bias. Numerous examples of AI bias have been well documented. For instance, the accuracy of facial recognition programs deteriorates for people with darker skin tones. We also see stereotyping in language processing and translation bots, which associate certain roles with a gender - as demonstrated in the diagram below. Interestingly, ChatGPT exhibited similar stereotyping when translating from one language to another.
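One common way such bias is quantified in audits is by disaggregating a model's accuracy by demographic group and reporting the gap. The sketch below uses entirely fabricated records - the group names, predictions, and ground-truth labels are made up purely to show the computation:

```python
# Hypothetical illustration: auditing a classifier for per-group
# accuracy gaps, the kind of disparity reported in facial-recognition
# bias studies. All data below is fabricated for demonstration.

def group_accuracy(records):
    """Compute per-group accuracy from (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Fabricated audit records: (demographic group, model output, ground truth)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

acc = group_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc)                                   # per-group accuracy
print(f"Accuracy gap between groups: {gap:.2f}")
```

A single headline accuracy number can hide exactly this kind of disparity, which is why disaggregated evaluation is a recurring recommendation in the AI-fairness literature.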
It is also vital to align machines with human values as they become smarter. Autonomous AI systems need to be aligned to the goals and behaviors of companies and of society at large. Each company has a social code of conduct and a value statement; the question to ask is whether data scientists are aligning their AI code to these. In most cases they may not even be aware of this requirement, as there is no mandate to do so. Another, even larger, challenge is that there is no such thing as a global set of values; values are very culture specific. As an example, consider the following diagram of a runaway car. When asked who should die, responses vary between Western and Eastern cultures. So, when programming an autonomous vehicle, what should the engineer do? And whose values should it be aligned with?
As most of us are practitioners and professionals, we should look at this space with a deliberate approach. As with any technological advancement, we need to decide what to do with these capabilities and how to implement them in our professional and daily lives. There are some broad guiding principles that can help in tackling the ethical dilemmas linked to AI.