Fairness Compass in AI

An interesting experiment on fairness and inequality was performed around two decades ago on two capuchin monkeys that rejected unequal pay.

The two capuchin monkeys were housed side by side, and they knew each other because they lived in the same group. Each was asked to perform a very simple task and was rewarded with food in return.



In the first test, both monkeys were rewarded with slices of cucumber after performing the task, and they happily repeated the same task again and again without any conflict.

In the second test, one monkey was offered grapes while the other was still offered cucumber. The monkey receiving cucumber reacted negatively and rejected the reward.


The experiment demonstrated that a non-human primate, the capuchin monkey, responds negatively to unequal reward distribution. The monkeys refused to participate if they witnessed a conspecific receive a more attractive reward for equal effort, an effect amplified if the partner received the more attractive reward without any effort at all.


Two pillars of morality: fairness and compassion.


Need for Fairness in AI

As AI techniques based on big data and algorithmic processing are increasingly used to guide decisions in our personal, social, and professional lives, including hiring, university admissions, loan granting, medical treatment, and crime prediction, there is a growing need for fairness in AI and for the epistemic and normative quality of AI predictions.

There is strong evidence of algorithms amplifying rather than eliminating existing bias and discrimination, which in turn has negative effects on social cohesion and on democratic institutions. A further growing concern is that algorithmic predictions may be misaligned with the designer's intent or with individual and social expectations, for example by discriminating against specific groups or individuals.

Machine learning is increasingly used to make critical decisions about our lives. ML algorithms typically operate by learning models from existing data and generalizing to unseen data. As a result, problems can arise during the data collection, model development, and model deployment processes that lead to harmful downstream consequences.

Recent Examples of Bias in AI Systems:

  1. Microsoft's Tay chatbot was taught racist language by its social media users. In this case, the AI's designers had not anticipated the type of data that the application's actual users would provide.
  2. Amazon's AI recruitment tool taught itself that male candidates were preferable and penalized women candidates.
  3. Google's photo classification software classified photos of people of African and Haitian descent under the heading "Gorillas".
  4. In one facial recognition benchmark, the error rate for darker-skinned women was a whopping 34.7%, while it was just 0.8% for light-skinned men.

Fairness is a desired behavior dictated by an underlying standard, which can be statistical, social, moral, and so on.


Let us consider an AI system used to recommend loan approvals in a bank, based on a set of attributes about each applicant (e.g., annual income, FICO score, outstanding debts, assets owned). The bank decides to approve or reject each loan application based on these attributes.

In this case, one possible fairness criterion would be that the percentage of loans approved by the bank should be the same for male and female applicants. If only 40% of women applicants were approved while 70% of male applicants were approved, we potentially have a bias and fairness issue. A simple check of this criterion is sketched below.
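The sketch below shows how such a demographic-parity check might look in practice. It is a minimal illustration only: the toy data, the column names ("gender", "approved"), and the use of pandas are assumptions for the example, not part of any particular bank's system.

```python
# A minimal sketch of the demographic-parity check described above.
import pandas as pd

# Toy application data: in practice this would be the bank's decision log.
applications = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [1,    0,   0,   1,   0,   1,   1,   1,   0,   1],
})

# Approval rate per group: P(approved = 1 | gender)
rates = applications.groupby("gender")["approved"].mean()
print(rates)  # here: F = 0.40, M = 0.80

# Demographic parity asks these rates to be (approximately) equal.
gap = abs(rates["F"] - rates["M"])
print(f"Demographic parity gap: {gap:.2f}")
```

In practice, the threshold for an acceptable gap is a policy decision about the application, not a purely technical one.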



The 7 Potential Sources of Bias in an AI System:

1. Historical Bias: Arises even if the data is perfectly measured and sampled, when the world as it is (or was) leads a model to produce outcomes that are not wanted.

2. Representation Bias: Arises when:

a) the defined target population does not reflect the use population;

b) the defined target population contains under-represented groups;

c) the sample population is not representative of the entire population; or

d) the past population is not representative of the current population.

3. Measurement Bias: Arises when:

a) the proxy is an oversimplification of a more complex construct;

b) the method of measurement varies across groups; or

c) the accuracy of measurement varies across groups.

4. Aggregation Bias: Arises when a one-size-fits-all model is used for data in which some groups or specific examples should be considered differently.

5. Learning Bias: Arises when modeling choices amplify performance disparities across different examples in the data.

6. Evaluation Bias: Arises when the benchmark data used for evaluation does not represent the use population.

7. Deployment Bias: Arises when there is a mismatch between the problem a model is intended to solve and the way in which it is actually used.

By framing the potential sources of downstream harm across the data generation, model building, evaluation, and deployment processes, we can focus on application-appropriate solutions rather than relying on broad notions of what is fair.
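As a small illustration of how one of these sources can be surfaced in code, the hedged sketch below compares the demographic make-up of a training sample against the intended use population, which speaks to representation bias (points b and c above). All group names and percentages are invented for the example.

```python
# A hedged sketch of a representation-bias check: compare the demographic
# make-up of the training sample against the population the model will
# actually serve. All numbers here are made up.
census_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}  # use population
sample_share = {"group_a": 0.70, "group_b": 0.25, "group_c": 0.05}  # training data

for group, expected in census_share.items():
    observed = sample_share[group]
    ratio = observed / expected
    flag = "UNDER-REPRESENTED" if ratio < 0.8 else "ok"
    print(f"{group}: expected {expected:.0%}, observed {observed:.0%} -> {flag}")
```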


Fairness is not one-size-fits-all.


Knowledge of an application and engagement with its stakeholders should inform the identification of these sources.


The Problem of Bias:

Most often, we analyze data as one population and tend to overlook the possible existence of sensitive subgroups in the data. Since the decisions made by machine learning algorithms often have a profound impact on human lives, we need to keep an eagle eye on data sets containing sensitive subgroups, defined for example by gender, race, or religion.

To analyze potential bias in a machine learning classifier, splitting the results by sensitive attribute into subgroups and investigating possible discrepancies among them is a concrete step toward mitigating bias. Any such deviation can be an indicator of discrimination against one sensitive group, as the sketch below illustrates.
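A minimal sketch of this subgroup analysis follows, assuming scikit-learn is available; the labels, predictions, and group assignments are invented. It compares true positive and false positive rates per group, the quantities behind the "equalized odds" criterion.

```python
# Split a classifier's results by a sensitive attribute and compare
# per-subgroup error rates. Data and group labels are hypothetical.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask]).ravel()
    tpr = tp / (tp + fn)  # true positive rate for this subgroup
    fpr = fp / (fp + tn)  # false positive rate for this subgroup
    print(f"group {g}: TPR={tpr:.2f}, FPR={fpr:.2f}")

# Large TPR/FPR gaps between groups violate "equalized odds" and can
# indicate discrimination against one sensitive group.
```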

Group fairness: pursuing fairness on the basis of membership in one or more sensitive groups, typically by requiring some statistic to be equal across groups.

Individual fairness: achieved by aiming for similar treatment of similar individuals, where the similarity measure may take any attribute into account.
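These two notions can be stated compactly. Below is a hedged sketch in standard notation: demographic parity as one common instance of group fairness, and the Lipschitz condition from Dwork et al.'s "Fairness Through Awareness" (2011) for individual fairness, where M maps individuals to distributions over outcomes, D is a distance between such distributions, and d is a task-specific similarity metric.

```latex
% Group fairness (demographic parity form): equal positive-prediction
% rates across sensitive groups a and b
P(\hat{Y} = 1 \mid A = a) \;=\; P(\hat{Y} = 1 \mid A = b)

% Individual fairness (Dwork et al., 2011): similar individuals x, y
% receive similar distributions over outcomes
D\big(M(x), M(y)\big) \;\le\; d(x, y)
```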

Explainable AI for Fairness:

Four major areas where XAI tools could assist in identifying issues of biased data (a sketch of points 2 and 3 follows the list):

  1. XAI tools could identify imbalances within the data related to over- or under-sampling.
  2. XAI tools could identify the attributes most influential in both local and global decisions.
  3. XAI tools can assess the impact of user-labeled sensitive attributes on model performance.
  4. XAI tools can identify processing issues that had a distinct impact on the final model.
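As a hedged sketch of points 2 and 3, the example below estimates how much a model leans on a hypothetical sensitive attribute using scikit-learn's permutation importance. Dedicated XAI tools such as SHAP or LIME provide finer-grained, per-decision attributions; this only conveys the general idea, and all data here is synthetic.

```python
# Estimate how much a trained model relies on a (hypothetical) sensitive
# attribute via permutation importance. Assumes scikit-learn is installed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 15, n)    # legitimate feature
gender = rng.integers(0, 2, n)    # sensitive attribute
# Synthetic biased label: the outcome partly depends on gender directly.
y = ((income + 10 * gender + rng.normal(0, 5, n)) > 55).astype(int)

X = np.column_stack([income, gender])
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["income", "gender"], result.importances_mean):
    print(f"{name}: importance {imp:.3f}")

# A large importance for "gender" signals that the model is using the
# sensitive attribute (or a proxy for it) to make decisions.
```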





Fair AI seeks to ensure that applications of AI technology lead to fair results. This means that AI technology should not result in discriminatory impacts on people with respect to race, ethnic origin, religion, gender, sexual orientation, nationality, disability, or any other personal condition.

When optimizing a machine learning algorithm, we must take into account not only performance in terms of error optimization, but also the impact of the algorithm in its specific domain.

Thank you for reading this article. I hope you found it informative.

Happy Learning!!

Rupa Singh

Founder and CEO (AI-Beehive)

www.ai-beehive.com
