How to Raise a Gender-Bias Free AI Model
Wadhwani AI
An independent nonprofit building AI-based solutions for underserved communities in the developing world.
Authored by: Gopika Gopan K
It is no secret that women are still light years away from having the same levels of participation in the workforce as men. Despite progressive policy changes and advancements in education and healthcare, women's participation in all walks of life outside the domestic realm is dismal.
According to data from NASSCOM, there has been only a marginal increase in the number of women employees in the tech workforce in India - it has gone up from 30% in 2012-2013 to 36% in 2022-2023. This number dwindles further with every upward step on the corporate ladder - only five percent of India's 500 listed companies have a woman as a Chief Executive Officer or a Managing Director. Of these 500 companies, 319 don't have women as Key Managerial Personnel.
In an age when AI solutions are deployed to solve real-world problems, this disparity can have far-reaching consequences. In workplaces, for instance, the trickle-down effects of biases in AI models can be seen right from the hiring stage. Human resource systems that incorporate AI tools trained on historical data mirror existing biases against women and further hinder their selection and hiring. This creates a feedback loop that can widen the gender gap.
In 2018, technology giant Amazon made headlines when its automated hiring algorithm was found to be biased against women. Its computer models were reportedly trained to vet applicants by observing patterns in resumes submitted to the company over ten years, most of which came from men, reflecting the male dominance of the tech industry. The fiasco at the multinational tech giant is a classic example of gender inequality getting baked into an AI solution, with the risk of the solution amplifying it. Eliminating the risk of discrimination is a pressing concern, especially when AI models are used to create social impact and the goal is to uplift those at the bottom rungs of the socio-economic pyramid.
Gender Bias in the AI Lifecycle
Business Problem Formulation
Biases can seep into any part of the AI model lifecycle, starting with the formulation of a business problem. To minimise this, teams that identify business problems to be solved with AI must include women, so that their perspectives, insights, and experiences are not discounted.
For instance, a team building an AI solution that generates advisories for farmers on effective ways to increase yield should take into account the limitations and lack of agency of women farmers. Women may not have the final say in how much money is spent on farming essentials, and an AI-drafted advisory that is ultimately rejected by the male members of the family, who are the decision-makers, will be of no use. The model therefore needs to be capable of generating alternative advisories aligned with the situational needs of different categories of users, in this case women farmers. Having an adequate number of women in the teams that create AI solutions ensures that women's lived experiences do not fall by the wayside and that the solutions have the contextual backing to be relevant for a diverse set of people.
Data Collection
The most important aspect of any AI model development is the availability of validated, diverse, and reliable data that captures the contextual nuances of the problem being addressed. Biases in model predictions largely stem from biases in training data due to imbalanced representation of genders, people with disabilities, and other marginalised groups - a reflection of the prevalent inequalities in our society.
The onus to ensure that the data has adequate representation of different genders and population groups falls on AI developers, unless of course the AI solution targets a particular group.
Maintaining adequate diversity in the training data can mitigate this bias to a large extent. For instance, an AI solution that helps venture capital firms assess the eligibility of start-ups for funding can be prejudiced against women. Because the rise in women entrepreneurs is recent, historical funding data is dominated by men, and a model trained on it will learn that only men can successfully helm entrepreneurial ventures. Where collecting diverse data sets is a challenge, AI developers can try to balance the data by under-sampling the majority group, oversampling the minority group, or augmenting the data sets with synthetic data, as sketched below.
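To make that last point concrete, here is a minimal sketch of rebalancing a training set by gender using scikit-learn's resample utility. The toy data frame, column names, and group labels are illustrative assumptions, not part of this article; a real project would apply the same idea to its own data, or use a dedicated library for synthetic oversampling.

```python
import pandas as pd
from sklearn.utils import resample

# Toy stand-in for an imbalanced historical dataset (hypothetical columns).
df = pd.DataFrame({
    "gender": ["male"] * 90 + ["female"] * 10,
    "funded": [1] * 70 + [0] * 20 + [1] * 3 + [0] * 7,
})

majority = df[df["gender"] == "male"]
minority = df[df["gender"] == "female"]

# Option 1: under-sample the majority group down to the minority's size.
majority_down = resample(majority, replace=False,
                         n_samples=len(minority), random_state=42)
balanced_under = pd.concat([majority_down, minority])

# Option 2: over-sample the minority group up to the majority's size.
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)
balanced_over = pd.concat([majority, minority_up])

print(balanced_under["gender"].value_counts())
print(balanced_over["gender"].value_counts())
```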
Model Design and Development
When developing a model, AI developers need to ensure that discriminatory beliefs prevalent within our social ecosystems are not fed into the AI model. Have you ever wondered how targeted advertising that relies on AI models bombards female internet users with ads for beauty products, apparel, and domestic and kitchen goods while men are shown ads for vehicles, real estate, and financial products?
One explanation is that gender was used as an input attribute to the model. In such cases, developers should ensure gender is not included as an attribute. They should also assess whether any of the remaining attributes end up serving as a proxy because of their correlation with gender. Bias assessment and mitigation can be done using available fairness tools such as IBM's AI Fairness 360 toolkit and Microsoft's Fairlearn, or with simple checks like the one sketched below.
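As a rough illustration of such checks, this sketch flags numeric features that correlate strongly with gender (potential proxies) and computes a disparate impact ratio, i.e. the positive-outcome rate of one group divided by that of the other, where values far below 1.0 suggest bias. The file name and column names ("gender", "selected") are hypothetical; the dedicated toolkits mentioned above compute these and many other fairness metrics out of the box.

```python
import pandas as pd

# Hypothetical table of applicants with the model's decisions attached.
df = pd.read_csv("candidates_scored.csv")  # assumed columns: gender, selected, features...
is_female = (df["gender"] == "female").astype(int)

# 1. Proxy check: how strongly does each numeric feature correlate with gender?
features = df.drop(columns=["gender", "selected"]).select_dtypes("number")
proxy_corr = features.corrwith(is_female).abs().sort_values(ascending=False)
print("Possible gender proxies:\n", proxy_corr.head())

# 2. Disparate impact: ratio of positive-outcome rates across groups.
rate_female = df.loc[df["gender"] == "female", "selected"].mean()
rate_male = df.loc[df["gender"] == "male", "selected"].mean()
print("Disparate impact ratio (female/male):", rate_female / rate_male)
```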
Model Deployment and Monitoring
Despite running multiple checks in the stages mentioned above to ensure no bias, problems can emerge during deployment and maintenance. The usability of AI solutions for a given group of users depends on their digital literacy levels. A solution that is not aligned with the technical prowess of its target user group can create little impact.
Many social impact AI solutions require users to own a smartphone and have some technical awareness to use different parts of an application. Given the gaping disparity between men and women in smartphone ownership and digital literacy, this can lead to fewer women than men using those AI solutions. Developers should also watch for biases emerging when an AI solution is deployed in new geographies and social settings. This should be complemented by an assessment of how different cohorts, especially gender cohorts, actually use the application, so that corrective measures can be taken if a bias is observed; a simple monitoring sketch follows.
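A minimal sketch of such post-deployment monitoring, assuming the team can export usage logs with a gender field; the file and column names ("usage_logs.csv", "user_gender", "sessions") and the 50% threshold are illustrative assumptions rather than anything prescribed in this article.

```python
import pandas as pd

# Hypothetical export of post-deployment usage logs.
logs = pd.read_csv("usage_logs.csv")  # assumed columns: user_gender, sessions

# Compare cohort sizes and average usage per user.
usage = logs.groupby("user_gender")["sessions"].agg(["count", "mean"])
print(usage)

# Flag a large adoption gap so the team can investigate (digital literacy,
# smartphone access, content relevance) and take corrective measures.
rates = usage["mean"]
if rates.min() < 0.5 * rates.max():
    print("Warning: one cohort uses the app less than half as much as another.")
```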
Conclusion
When using AI for social impact, collaborative efforts between the government, the people working on the ground, the AI developers, and the impact partners are paramount for creating an ecosystem where AI solutions foster trust and usher in tangible, equitable impact in the lives of the target population groups. Greater involvement of women in such projects at all stages can go a long way in tackling gender disparity at the grassroots level. It will also reduce the margin for future anomalies that may arise during deployment or when new AI models are built on top of existing ones.
Disclaimer: This blog is made possible by the support of the American People through the United States Agency for International Development (USAID). The contents of this blog are the sole responsibility of Wadhwani AI and do not necessarily reflect the views of USAID or the United States Government.