Fairness Compass in AI
Rupa Singh
Founder and CEO at 'The AI Bodhi' and 'AI-Beehive' | Author of "AI ETHICS with BUDDHIST PERSPECTIVE" | Top 20 Global AI Ethics Leader | Thought Leader | Expert Member at Global AI Ethics Institute
An interesting experiment on fairness and inequality was performed around two decades ago with two capuchin monkeys that rejected unequal pay.
The two capuchin monkeys were placed side by side; they knew each other because they lived in the same group. Each was asked to perform a very simple task and was rewarded with food in return.
In one test: both monkeys were rewarded with slices of cucumber after performing the task, and they happily repeated the same task again and again without any conflict.
In another test: one monkey was offered grapes while the other was offered cucumber; the monkey given cucumber reacted negatively and rejected the reward.
The experiment demonstrates that a non-human primate, the capuchin monkey, responds negatively to unequal reward distribution. The monkeys refused to participate if they witnessed a conspecific receive a more attractive reward for equal effort, an effect amplified if the partner received the more attractive reward without any effort at all.
Two pillars of morality: Fairness and Compassion.
Need for Fairness in AI
As AI techniques based on big data and algorithmic processing are increasingly used to guide decisions in our personal, social, and professional lives, including hiring, university admissions, loan granting, medical treatment, and crime prediction, there is a growing need for fairness in AI and for the epistemic and normative quality of AI predictions.
There is strong evidence of algorithms amplifying rather than eliminating existing bias and discrimination, with negative effects on social cohesion and on democratic institutions. A related and growing concern is that algorithmic predictions may be misaligned with the designer's intent or with individual and social expectations, for example by discriminating against specific groups or individuals.
Machine learning is increasingly used to make decisions critical to our lives. ML algorithms typically operate by learning models from existing data and generalizing them to unseen data. As a result, problems can arise during data collection, model development, and model deployment that lead to harmful downstream consequences.
Recent Examples of Biases in AI Systems:
Fairness is a desired behavior dictated by an underlying standard, which can be statistical, social, moral, and so on.
Consider an AI system used to recommend loan approvals in a bank, based on a set of attributes about each applicant (e.g., annual income, FICO score, outstanding debts, assets owned). The bank decides to approve or reject each loan application based on these attributes.
In this case, one fairness criterion could be that the percentage of loans approved should be the same for male and female applicants. If only 40% of women applicants were approved while 70% of male applicants were approved, we potentially have a bias and fairness issue.
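As an illustration, here is a minimal sketch of how such an approval-rate check could be written. The applicant data, column names, and the 10% tolerance are hypothetical and chosen only for this example:

```python
import pandas as pd

# Hypothetical loan decisions: 1 = approved, 0 = rejected.
applicants = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [1, 0, 1, 0, 0, 1, 1, 1, 0, 1],
})

# Approval rate per group (the quantity compared in the example above).
approval_rates = applicants.groupby("gender")["approved"].mean()
print(approval_rates)  # F: 0.40, M: 0.80

# Demographic-parity gap: difference between the highest and lowest rate.
gap = approval_rates.max() - approval_rates.min()
print(f"Approval-rate gap: {gap:.2f}")
if gap > 0.10:  # hypothetical tolerance chosen only for this sketch
    print("Potential fairness issue: approval rates differ across groups.")
```

In practice such a check would be run on the bank's real decision data and repeated for every sensitive attribute of interest, not only gender.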
The 7 Potential Sources of Bias in an AI System:
1. Historical Bias:
Arises when the data reflects biases that already exist in the world, even with perfect sampling and feature selection.
2. Representation Bias:
Arises when the development sample under-represents parts of the population the model will serve (a simple check for this is sketched after this list). It can occur:
a) When defining the target population, if it does not reflect the use population.
b) When the target population contains under-represented groups.
c) If the sample population is not representative of the entire population.
d) If the past population is not representative of the current population.
3. Measurement Bias:
Arises when a proxy is measured or used in place of the true construct of interest, for example when:
a) The proxy is an oversimplification of a more complex construct.
b) The method of measurement varies across groups.
c) The accuracy of measurement varies across groups.
4. Aggregation Bias:
Arises when a one-size-fits-all model is used for data in which some groups or specific examples should be considered differently.
5. Learning Bias:
Arises when modeling choices amplify performance disparities across different examples in the data.
6. Evaluation Bias:
Evaluation bias occurs when the benchmark data used for evaluation does not represent the use population.
7. Deployment Bias:
Deployment bias arises when there is a mismatch between the problem a model is intended to solve and the way in which it is actually used.
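As one concrete illustration of the data-generation checks above, here is a minimal sketch of a representation-bias check that compares group shares in a training sample against an assumed use population. The group names, counts, reference shares, and 5% tolerance are all invented for illustration:

```python
import pandas as pd

# Hypothetical training sample and an assumed reference distribution
# describing the population the deployed model will actually serve.
train = pd.DataFrame({"group": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})
use_population_share = {"A": 0.50, "B": 0.35, "C": 0.15}  # assumed shares

train_share = train["group"].value_counts(normalize=True)

# Flag groups whose share in the training data deviates strongly from
# their share in the use population (a symptom of representation bias).
for group, expected in use_population_share.items():
    observed = float(train_share.get(group, 0.0))
    if abs(observed - expected) > 0.05:  # illustrative tolerance
        print(f"Group {group}: {observed:.2f} of training data vs "
              f"{expected:.2f} of use population -> check representation")
```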
By framing the potential sources of downstream harm through the data generation, model building, evaluation, and deployment processes, we can focus on application-appropriate solutions rather than relying on broad notions of what is fair.
Fairness is not one-size-fits-all.
Knowledge of an application and engagement with its stakeholders should inform the identification of these sources.
The Problem of Bias:
Most often we analyze data as one population and tend to overlook the possible existence of sensitive subgroups in the data. Since the decisions made by machine learning algorithms often have a profound impact on human lives, we need to keep an eagle eye on data sets containing sensitive subgroups defined, for example, by gender, race, or religion.
To analyze potential bias in a machine learning classifier, splitting the results by sensitive attributes into subgroups and investigating possible discrepancies among them is a step forward towards mitigating bias. Any such deviation could be an indicator of discrimination against one sensitive group.
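A minimal sketch of this subgroup analysis, assuming a binary classifier's predictions and true labels are already available; the data, the sensitive attribute, and the chosen metrics are made up for illustration:

```python
import pandas as pd

# Hypothetical classifier outputs, split by a sensitive attribute.
results = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "label":  [1, 1, 0, 0, 1, 1, 0, 0],
    "pred":   [1, 0, 0, 0, 1, 1, 0, 1],
})

# Compare simple per-group metrics; large gaps are a signal to investigate.
for gender, grp in results.groupby("gender"):
    accuracy = (grp["label"] == grp["pred"]).mean()
    positives = grp[grp["label"] == 1]
    tpr = (positives["pred"] == 1).mean()  # true-positive rate per group
    print(f"{gender}: accuracy={accuracy:.2f}, TPR={tpr:.2f}")
```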
Group fairness: pursuing fairness on the basis of membership in one or more sensitive groups, for example by requiring comparable outcomes or error rates across those groups.
Individual fairness: aiming for similar treatment of similar individuals, judged on their task-relevant attributes rather than on group membership (a simple probe is sketched below).
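One simple probe in the spirit of individual fairness is to flip only the sensitive attribute of an applicant and compare the model's scores. The feature layout and toy scoring function below are assumptions made for illustration, not a full formal treatment of individual fairness:

```python
import numpy as np

def sensitive_flip_gap(predict, features, sensitive_index, values):
    """Change only the sensitive attribute and compare the model's scores.

    A large gap suggests that two otherwise identical individuals would be
    treated differently, against the spirit of individual fairness.
    """
    scores = []
    for value in values:
        variant = np.array(features, dtype=float)
        variant[sensitive_index] = value
        scores.append(predict(variant))
    return max(scores) - min(scores)

# Toy scoring function standing in for a trained model (an assumption).
toy_model = lambda x: 1.0 / (1.0 + np.exp(-(0.8 * x[0] + 0.1 * x[1])))

gap = sensitive_flip_gap(toy_model, [0.6, 1.0], sensitive_index=1, values=[0.0, 1.0])
print(f"Score gap across sensitive-attribute values: {gap:.3f}")
```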
Explainable AI for Fairness:
Four major areas where XAI tools could assist in identifying issues of biased data:
Fair AI seeks to ensure that applications of AI technology lead to fair outcomes. This means that AI technology should not result in discriminatory impacts on people with respect to race, ethnic origin, religion, gender, sexual orientation, nationality, disability, or any other personal condition. When optimizing a machine learning algorithm, we must take into account not only performance in terms of error minimization, but also the impact of the algorithm in its specific domain.
Thank you for reading this article. I hope you find it informative.
Happy Learning!!
Rupa Singh
Founder and CEO (AI-Beehive)
www.ai-beehive.com