Biases – Et tu AI?

Cognitive biases distort thinking, influence beliefs, cloud judgement and, in turn, the actions that follow. Some of them have dangerous ramifications, and we are all guilty of them in varying measure. It is part of what makes us human.

The various cognitive biases include:

  • Stereotyping (assuming that all people or objects belonging to a set X always exhibit the same set of characteristics),
  • Confirmation bias (listening more readily to information that confirms our existing beliefs),
  • Self-serving bias (taking credit for successes while laying the blame for failures on outside causes)

Alfred Korzybski, the Polish-American philosopher and engineer, coined a phrase in 1931: "The map is not the territory." He used it to convey that people often confuse models of reality with reality itself. Biases are essentially mental models tuned to output a particular set of values, which may not reflect reality.

So it may appear that, as emotional beings, humans are inevitably susceptible to biases, and that a non-emotional, data-driven approach could be the panacea for eliminating bias in decision making. The advent of AI could be the answer to our problems. But is it?

The answer is no. AI is as susceptible to bias as its makers. An AI system is a combination of models whose outputs are tuned by the input data, so a biased set of inputs can produce a biased set of outputs. And, as in the human world, bias in AI can lead to undesired and unfavourable outcomes. The following caricature from the Instagram channel smbcomics sums it up perfectly.

Source: smbcomics
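
To make the "biased inputs, biased outputs" point concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. It is my own illustration, not taken from any system discussed in this article: a toy "hiring" model is trained on historical decisions in which one group was held to a stricter standard, and the model then reproduces that disparity even for equally qualified applicants.

```python
# Minimal sketch: a model trained on biased historical decisions reproduces the bias.
# All groups, scores and thresholds here are hypothetical, chosen only for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)     # 0 = group A, 1 = group B
score = rng.uniform(0, 1, n)      # a neutral "qualification" score

# Biased historical labels: group B needed a much higher score to be hired.
threshold = np.where(group == 0, 0.5, 0.8)
hired = (score > threshold).astype(int)

model = LogisticRegression().fit(np.column_stack([group, score]), hired)

# Evaluate on equally qualified applicants drawn from the same score distribution.
test_score = rng.uniform(0, 1, 2000)
for g in (0, 1):
    X_test = np.column_stack([np.full(2000, g), test_score])
    print(f"group {g}: predicted hiring rate = {model.predict(X_test).mean():.2f}")
# Group B is recommended far less often, purely because the training data said so.
```

The disparity comes entirely from the labels, not from anything exotic in the model; any learner fitted to skewed data will happily encode the skew.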

An interesting case study on the risks of biased AI is the 2019 Harvard Business Review article "The Risks of Using AI to Interpret Human Emotions" by Mark Purdy, John Zealley, and Omaro Maseli. The article illustrates the impact of bias in AI technology used to decode emotional reactions in real time, and how AI's inability to differentiate between cultural nuances can lead to incorrect outcomes and encourage stereotyping.

Some highlights of the article include the following.

  1. According to a study by Lauren Rhue of the University of Maryland, emotion-analysis technology assigns more negative emotions to people of certain ethnicities than to others. This has ramifications in the workplace: an algorithm that consistently identifies an individual as exhibiting negative emotions might affect that person's career progression.
  2. With emotional AI, products and services can become adaptive experiences. Consider a product that recognises emotions such as joy, anger and fatigue, and adapts a vehicle's in-cabin environment accordingly. A biased adaptive in-cabin system could mean that some passengers are misunderstood. For example, elderly people might be wrongly identified as having driver fatigue (the older the face, the less accurately expressions tend to be decoded). Insurance companies dependent on such data could then charge higher premiums for older drivers, because the data would suggest that, despite many prompts to rest, the driver pressed on.
  3. Another example describes a product for measuring customer satisfaction, used to identify compassion fatigue in customer service agents and to guide agents, via an app, on how to respond to callers. An algorithm biased by an accent or a deeper voice might result in some customers being treated better than others, pushing those on the receiving end of poorer treatment away from the brand.

So, in its current form, bias can easily creep into AI algorithms. It is the responsibility of the businesses using the technology to guard against this: they must thoroughly understand the limitations of their tools and train their algorithms on a wide variety of data to minimise biased outcomes.
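
One simple, illustrative check (a sketch of an idea, not a complete fairness audit) is to compare a model's positive-outcome rates across groups before deployment. The function names and numbers below are hypothetical.

```python
# Illustrative bias check: compare positive-prediction rates across groups.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive predictions per group."""
    return {int(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact_ratio(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Lowest group selection rate divided by the highest; 1.0 means parity."""
    rates = list(selection_rates(predictions, groups).values())
    return min(rates) / max(rates)

# Hypothetical model outputs for applicants from two groups.
preds  = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

print(selection_rates(preds, groups))         # approximately {0: 0.67, 1: 0.33}
print(disparate_impact_ratio(preds, groups))  # 0.5 -> far from parity, investigate
```

A low ratio does not prove unfairness on its own, but it is a cheap early warning that the training data or the model deserves a closer look.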

AI is not the panacea we expected it to be. Not yet!


The inspiration for this article comes from an observation on biased AI development by my friend Anurag Bartare, and from a conversation on Ethical Intelligence: Navigating Ethics in the Digital Age by Valter Adão, Dr. Mark Nasila and Kris Østergaard, presented by The Academy of Business Futures, Cadena Growth Partners.

I have also started a Substack so that you can find my thoughts and opinions easily and be alerted as soon as I publish a new article. This article is also available at https://tejvohra.substack.com/p/biases-et-tu-ai


References

  1. Cognitive Bias List: Common Types of Bias (verywellmind.com)
  2. A Philosophical Approach to Models | The Possible (the-possible.com)
  3. smbcomics on Instagram
  4. The Risks of Using AI to Interpret Human Emotions (hbr.org)
  5. Racial Influence on Automated Perceptions of Emotions, Lauren Rhue (SSRN)
  6. Ethical Intelligence: Navigating Ethics in the Digital Age
