Introduction
What if an AI system accused you of a crime you didn’t commit because of your skin color or gender? This is not a hypothetical scenario, but a real possibility if we don’t address bias in AI systems.
In this article, we explore the relationship between DE&I and Generative AI and the problems that arise where the two meet. DE&I champions fairness, while Generative AI uses computers to craft new content. Yet Generative AI can sometimes produce offensive or biased content. We'll explore why this occurs and discuss potential solutions.
What is DE&I?
DE&I stands for Diversity, Equity, and Inclusion. It means treating everyone fairly, especially those who have faced unfair treatment because of who they are or if they have a disability. It's about leveling the playing field and offering respectful treatment to everyone. This cultivates a society where equal opportunities prevail, nurturing stronger, kinder communities.
Companies can better understand their customers by using effective DE&I practices. These practices help them understand the different experiences their customers may have, which improves their ability to serve those customers and results in increased creativity, efficiency, and customer satisfaction.
Embracing DE&I helps shape a better world for us all. The focus of DE&I is on ensuring fair and respectful treatment of PEOPLE.
What is Generative AI?
Generative AI ("GAI" in this article) is a type of artificial intelligence (AI) that crafts new content like text, images, music, and stories. GAI can be amazing, but it can also be harmful. Here are some key differences between GAI and other types of AI:
- Purpose: GAI creates new content, while other types of AI perform specific tasks or make predictions based on data.
- Output: GAI makes new things, while other types of AI give answers or guesses based on what they already know.
- Learning: GAI learns to make new things, while other types of AI learn to find patterns and make guesses.
One way to relate to GAI is to think about predictive text on modern mobile phones. Predictive text uses AI to suggest words or phrases based on what you’ve typed so far. It learns from your typing habits and the words you use most often to make better suggestions. This is similar to how generative AI works, as it also learns from data to create new content. Predictive text helps you type more quickly. GAI makes new things based on what it learned before. The focus of GAI is on creating new things with DATA.
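To make the analogy concrete, here is a minimal sketch of next-word prediction in Python. It simply counts which word tends to follow which in a tiny sample of text and then suggests the most common follower. The sample sentence and the `suggest` function are invented for illustration; real generative AI uses vastly larger models, but the underlying idea of learning patterns from data is the same.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training set" of text.
training_text = "the cat sat on the mat the cat ran on the grass"

# Count how often each word follows another in the training data.
followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def suggest(word):
    """Suggest the word seen most often after `word` in the training data."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(suggest("the"))  # -> "cat" (seen twice after "the" in the sample)
print(suggest("on"))   # -> "the"
```

Notice that the suggestions can only reflect whatever text the system was trained on; if the training data is narrow or skewed, so are the outputs.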
Yet just as predictive text can guess the wrong next word, there's a risk GAI might create offensive, biased, or hurtful content. For example, if GAI learns only from images of one group, it might struggle to depict different skin tones. This exclusion can make some people feel marginalized, or worse.
A Georgia Institute of Technology study found that self-driving cars may be more likely to hit pedestrians with darker skin tones. The study tested how well eight image-recognition systems could detect pedestrians, using a large pool of images divided into two groups: lighter and darker skin tones. The research showed that it was harder for the systems to recognize people with darker skin tones in pictures. This bias resulted from the lack of diverse data and the low weight given to learning from darker skin tones. In this case, AI bias could lead to significant consequences because of the obvious risk to human safety.
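To illustrate the kind of subgroup comparison behind findings like this, here is a small sketch that computes detection rates per skin-tone group. The records and numbers are invented for illustration only; the actual study evaluated eight detection systems on thousands of labeled images.

```python
# Invented example records: (skin_tone_group, pedestrian_was_detected)
records = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

# Tally detections and totals for each group.
groups = {}
for group, detected in records:
    hits, total = groups.get(group, (0, 0))
    groups[group] = (hits + int(detected), total + 1)

for group, (hits, total) in groups.items():
    # A persistent gap between these rates is the kind of disparity the study reported.
    print(f"{group}: detection rate {hits / total:.0%}")
```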
How does GAI Work?
Generative AI is a potent tool crafting fresh content like text, music, and art. Yet, it's vital to recognize that bias can infiltrate these creations. Bias-free output requires training AI on comprehensive, diverse data. Instances of bias in contemporary GAI systems include:
- Natural Language Processing (NLP): Programs crafting offensive text due to biased data.
- Music Composition: Programs favoring some musical genres over others, yielding unoriginal compositions.
- Art Creation: Programs producing art that might offend or exclude.
A prime example of NLP is ChatGPT by OpenAI (https://openai.com); you can find out more in OpenAI's announcement, "Introducing ChatGPT." NLP is a branch of AI that focuses on understanding and generating human language, such as speech or text. ChatGPT is a type of GAI that can produce realistic text conversations by learning from many examples of how people talk.
A data set is a collection of information. ChatGPT learns from a data set made up of many different types of text, like books, articles, and conversations. Through these examples, ChatGPT gains an understanding of language and its nuances.
The quality of data sets can vary. If a data set contains unfairness or stereotypes, the AI trained on it can become biased. That bias can reinforce stereotypes, hurt people, and even falsely implicate innocent people.
Human reviewers guide ChatGPT's training, ensuring it grasps language nuances. They supplement its learning with more examples. But reviewers may inadvertently introduce their biases.
ChatGPT learns from a data set of text examples and needs regular updates to perform at its best. The information in the data set that ChatGPT uses can become outdated. This can affect how well ChatGPT understands current events and changes in cultural norms.
It is important to keep updating the data set used by AI to make sure it produces unbiased and relevant content. But refreshing the training data and retraining the model can be prohibitively expensive.
AI Bias's Impact on Trust and Belonging
One of the possible consequences of AI bias is eroding trust in the technology and organizations that use it. Unfair or hurtful AI content fuels doubts about its reliability. This skepticism can extend to organizations using AI, as seen in these real-world examples:
- Amazon's Biased Hiring Algorithm: An experimental hiring tool Amazon began building in 2014 was found to downgrade résumés associated with women, eroding trust before it was eventually scrapped.
- Microsoft's AI chatbot: In 2016, Microsoft launched an AI chatbot called Tay on Twitter. Tay learned from users and generated human-like responses. Within 24 hours, Tay began making offensive comments, including racist and sexist remarks. This caused widespread criticism of Microsoft and raised concerns about the ability of AI to create harmful content.
- San Francisco's Facial Recognition Ban: In 2019, San Francisco banned government use of facial recognition technology, driven in part by concerns about bias and a loss of public trust in the technology.
These instances highlight AI bias's potential to erode trust and foster exclusion.
More Examples of AI Bias and Social Media
There have been several recent cases of AI bias in facial recognition, health care, and criminal justice.
- Robert Williams, a Black man, was wrongfully arrested by Detroit police due to a faulty facial recognition match; a lawsuit was later filed on his behalf.
- In health care, AI systems can exhibit bias if they don’t have diverse data. This can make inequality in medicine worse. For example, if an AI system learns from the medical data of white patients, it may not work as well for patients of other races. This can lead to unequal treatment and outcomes.
- Risk assessment tools in the criminal justice system can make bias and discrimination worse. These tools use data and algorithms to predict the likelihood of someone committing a crime or reoffending. But if the AI learns from biased data, the predictions can also exhibit bias.
- Social media platforms and their algorithms can also contribute to the spread of misinformation and fake news. These platforms use recommendation algorithms to decide what content people see. This can create “filter bubbles” and “echo chambers” where people only see things that they already agree with, making it harder for them to encounter different points of view and reinforcing their existing beliefs and biases.
It is important to be aware of these issues and work towards creating more fair and unbiased AI systems.
Identifying AI Bias Causes
- Biased Data Sets: Incomplete or faulty data can hinder AI's grasp of diversity.
- Biased Algorithms: An algorithm is a set of instructions that a computer follows to solve a problem or complete a task. In the context of AI, an algorithm is a recipe that tells the AI what steps to take and in what order to solve a problem or create something new. An algorithm might use unfair or inaccurate rules or criteria to make decisions or predictions. This can affect the output of the AI by favoring some groups over others, or by ignoring some important factors. Algorithms can perpetuate historical biases, like flawed recipes. If the data used to create an algorithm contains bias, the algorithm itself can also exhibit bias.
- Human Biases: Creators and users can introduce their own biases without realizing it, which can affect how AI operates.
- Biased feedback loops: A biased feedback loop happens when an AI system learns from its own decisions and strengthens existing biases. This creates a cycle where the AI system keeps making biased decisions, which can have negative effects. For example, a hiring algorithm can learn from past hiring decisions that favor certain groups over others, then use those decisions to make future hiring decisions. This can result in a pattern of discrimination and exclusion that repeats itself, as the toy simulation after this list illustrates.
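The sketch below is a toy model of such a feedback loop, assuming a hypothetical hiring tool that is periodically retrained on its own past decisions. The group labels, starting rates, and amplification factor are invented for illustration; real systems are far more complex, but the self-reinforcing dynamic is the same.

```python
# Hypothetical hiring tool retrained on its own past decisions:
# an initial tilt toward one group gets treated as ground truth and amplified.
hire_rate = {"group A": 0.60, "group B": 0.40}  # slightly skewed history (invented)

def retrain(rates, amplification=0.2):
    """Each retraining pushes rates further from parity (0.5),
    because the model learns from decisions it made itself."""
    return {group: rate + amplification * (rate - 0.5) for group, rate in rates.items()}

for round_number in range(1, 6):
    hire_rate = retrain(hire_rate)
    print(f"after retraining round {round_number}: {hire_rate}")

# The initial 20-point gap widens every round until someone audits
# the data and breaks the cycle.
```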
Confronting AI Bias and Its Challenges
Tackling bias is essential to foster trust in AI systems and their ethical use. Ignoring bias can perpetuate stereotypes and harm individuals. Imagine a situation where AI accuses an innocent person of a crime due to bias. This underscores the urgency of confronting AI bias to ensure impartiality.
Confronting AI bias is a complex and challenging task, and fixing it can be difficult and expensive. There are also tough choices to make. For example, how do we make sure AI is fair but also accurate? How do we make sure AI respects people’s privacy and rights? These are hard questions that need careful thought and discussion. Fixing AI bias is a team effort that requires the cooperation of researchers, developers, policymakers, and users. We all need to work together to make sure AI is fair and unbiased.
Strategies to Confront AI Bias
Strategies to combat AI bias include:
- Diversify Training Data: Train AI models on inclusive, diverse data to prevent bias perpetuation.
- Design and Test Fair Algorithms: Assess algorithms for fairness and transparency before deployment.
- Monitor and Correct Bias: Continuously track AI systems for bias and correct it when it appears (a minimal sketch of such a check follows this list).
- Self-Correcting Algorithms: Improve algorithms to identify and rectify bias, ensuring ongoing mitigation.
- Educate About Bias: Educate individuals on AI bias and preventive measures for responsible AI use.
- Regular Data Set Updates: Keep training data current to reflect evolving laws, ethics, and societal norms.
- Equal Access: Make sure that AI is accessible to everyone, no matter how much money they have or their social status. This will help prevent harm to vulnerable populations.
- Inclusive Testing: Incorporate diverse experts and stakeholders to check and correct AI outputs.
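As a concrete illustration of the "monitor and correct" strategy above, here is a minimal sketch that compares an AI system's positive-outcome rate across groups and raises a flag when the gap exceeds a chosen threshold. The records, group names, and the 10-point threshold are assumptions for illustration; real audits use richer fairness metrics and far more data.

```python
def selection_rates(decisions):
    """decisions: list of (group, got_positive_outcome) pairs."""
    totals, positives = {}, {}
    for group, positive in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(positive)
    return {group: positives[group] / totals[group] for group in totals}

# Invented decisions from a hypothetical AI system.
decisions = [
    ("group A", True), ("group A", True), ("group A", False), ("group A", True),
    ("group B", True), ("group B", False), ("group B", False), ("group B", False),
]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap = {gap:.0%}")
if gap > 0.10:  # threshold chosen for illustration only
    print("Bias alert: review the data and the algorithm before relying on this system.")
```

A check like this does not prove a system is fair, but it gives teams an early, measurable warning sign that something in the data or the algorithm needs attention.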
Measures to Assure Diverse Training Data
One of the most important ways to help reduce bias is to assure training data diversity. Several steps can be taken to train GAI on diverse and complete data sets:
- Select training data sets that represent a diverse range of perspectives and experiences.
- Include data from different places, cultures, and groups of people.
- Regularly check and update the data sets to ensure they remain diverse and complete over time. This helps prevent bias from creeping into the AI system and ensures that it continues to make fair and accurate predictions. A small sketch of such a routine check appears below.
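The sketch below shows one simple form this routine check could take: measure how each group is represented in a training set and flag under-representation. The group tags and the 20% floor are invented for illustration; real data sets need far more nuanced auditing.

```python
from collections import Counter

# Invented mini data set; each training example is tagged with the group it represents.
training_examples = [
    {"text": "...", "group": "group A"}, {"text": "...", "group": "group A"},
    {"text": "...", "group": "group A"}, {"text": "...", "group": "group A"},
    {"text": "...", "group": "group B"}, {"text": "...", "group": "group B"},
    {"text": "...", "group": "group C"},
]

counts = Counter(example["group"] for example in training_examples)
total = sum(counts.values())
for group, count in counts.items():
    share = count / total
    status = "OK" if share >= 0.20 else "UNDER-REPRESENTED -- add more examples"
    print(f"{group}: {share:.0%} of training data ({status})")
```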
Conclusion: Pioneering Fair AI
In this exploration, we've uncovered AI bias's origins, be it from skewed data or algorithms. Bias in AI can intensify stereotypes and unfairly target innocent people. It is like machines amplifying human mistakes on a larger scale. Our mission is clear: we must confront bias in AI to ensure fairness for everyone. By doing this, trust in AI systems can grow, allowing for their ethical and responsible use. A fair AI future is within our reach.
Next Steps
Don’t miss our next article! We’ll talk about how people like HR leaders can help reduce AI bias. We’ll also share specific things one can do to make sure AI is fair for everyone.
References
- McKinsey & Company. (2021, June 9). What is diversity, equity, and inclusion (DE&I)? https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-diversity-equity-and-inclusion
- McKinsey & Company. (2020, May 19). Diversity wins: How inclusion matters. https://www.mckinsey.com/featured-insights/diversity-and-inclusion/diversity-wins-how-inclusion-matters
- Harvard University. (2021, January 25). Understanding gender and racial bias in AI. https://www.sir.advancedleadership.harvard.edu/articles/understanding-gender-and-racial-bias-in-ai
- MIT Technology Review. (2019, March 1). Self-driving cars may be more likely to hit you if you have dark skin. https://www.technologyreview.com/2019/03/01/136808/self-driving-cars-are-coming-but-accidents-may-not-be-evenly-distributed/
- Wilson, B., Hoffman, J., Morgenstern, J., & Zisserman, A. (2019). Predictive inequity in object detection. arXiv preprint arXiv:1902.11097. https://arxiv.org/abs/1902.11097
- Tech Republic. (2020, December 8). Generative AI defined: How it works, benefits and dangers. https://www.techrepublic.com/article/what-is-generative-ai/
- Forbes Human Resources Council. (2021, October 14). Understanding bias in AI-enabled hiring. https://www.forbes.com/sites/forbeshumanresourcescouncil/2021/10/14/understanding-bias-in-ai-enabled-hiring/?sh=33b1fa307b96
- ACLU. (2018, October 23). Why Amazon’s automated hiring tool discriminated against women. https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against
- The Verge. (2016, March 24). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
- Microsoft. (2020, July 15). When bias begets bias: A source of negative feedback loops in AI systems. https://www.microsoft.com/en-us/research/blog/when-bias-begets-bias-a-source-of-negative-feedback-loops-in-ai-systems/
- CNBC. (2019, May 15). San Francisco bans police use of face recognition technology. https://www.cnbc.com/2019/05/15/san-francisco-bans-police-use-of-face-recognition-technology.html
- MIT Technology Review. (2021, April 14). The new lawsuit that shows facial recognition is officially a civil rights issue. https://www.technologyreview.com/2021/04/14/1022676/robert-williams-facial-recognition-lawsuit-aclu-detroit-police/
- Scientific American. (2019, February 11). Health care AI systems are biased. https://www.scientificamerican.com/article/health-care-ai-systems-are-biased/
- The Brookings Institution. (2018, June 26). Understanding risk assessment instruments in criminal justice. https://www.brookings.edu/articles/understanding-risk-assessment-instruments-in-criminal-justice/
- The Brookings Institution. (2020, October 2). How misinformation spreads on social media, and what to do about it. https://www.brookings.edu/articles/how-misinformation-spreads-on-social-media-and-what-to-do-about-it/
- Investopedia. (2021, October 14). Generative AI: How it works, history, and pros and cons. https://www.investopedia.com/generative-ai-7497939
- Harvard Business Review. (2019, November 18). 4 ways to address gender bias in AI. https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai
- AI.NL. (2020, November 30). Bias in AI: What it is, how to mitigate it, and the need for ethical AI. https://www.ai.nl/knowledge-base/bias-in-ai/
- California Management Review. (2020, November 2). Algorithmic bias: Why bother? https://cmr.berkeley.edu/2020/11/algorithmic-bias/
- World Economic Forum. (2022, October 13). Open source data science: How to reduce bias in AI. https://www.weforum.org/agenda/2022/10/open-source-data-science-bias-more-ethical-ai-technology/
- TechGoing. (2022, January 10). How much does ChatGPT cost? $2-12 million per training for large models. https://www.techgoing.com/how-much-does-chatgpt-cost-2-12-million-per-training-for-large-models/
Disclaimer: Joe Blaty (he/him/his) is an innovation leader with a passion for driving disruptive change, a storyteller, a trusted advisor, a futurist, and a Diversity, Equity, Inclusion, and Belonging advocate. The views and opinions expressed in this article are solely those of Mr. Blaty and are not representative or reflective of any individual employer or corporation.