Reducing AI Bias: A Guide for HR Leaders
Header image created with Dezgo AI and Affinity Photo.


Introduction

Imagine you are applying for your dream job. You have the skills, the experience, and the passion. You submit your resume online and wait for a response. But you never hear back from the company. Why? Because an artificial intelligence (AI) system scanned your resume and rejected you based on your name, gender, or race. This is not a fictional scenario, but a real risk of AI bias.

In this article, we will explain what AI bias in HR is, why it matters, and how HR leaders can reduce it. We will also share some specific things you can do to prevent and address AI bias in your work.

Why Talk About AI in HR?

For starters, this topic is relevant and timely. According to a recent survey conducted in December 2022 by Pew Research Center:

  • 71% of Americans oppose the use of AI in making final hiring decisions, while only 7% favor it.
  • 66% of Americans say they would not want to apply for a job with an employer that uses AI to help make hiring decisions, while 32% say they would want to apply.
  • 47% of Americans think AI would do better than humans at treating all applicants similarly, while 15% think AI would do worse.
  • 79% of Americans say bias and unfair treatment based on an applicant’s race or ethnicity is a problem in hiring, and among them, 53% think AI would improve this issue, while 13% think AI would worsen it.
  • 61% of Americans say they have heard nothing about the ways AI systems can be used in the hiring process.

Regardless, AI is transforming the way we work, learn, and communicate. But AI can also have negative impacts if it is not designed and used ethically. AI bias is one of the most common and serious challenges that HR leaders face when implementing AI solutions in their organizations.

We explored AI bias in detail in our previous article, here. AI bias can have serious consequences for individuals, organizations, and society. It can harm people’s reputation, career, health, and safety. It can also erode trust in AI and its ethical use.

But there is hope. We can reduce AI bias by taking action. And one of the key players in this effort is HR leaders.

HR leaders work to ensure that employees receive fair treatment. They also help create and use AI systems that are fair for everyone in their organizations.

Benefits of AI in HR

AI in HR offers several advantages. It streamlines recruitment by identifying ideal candidates from a pool of resumes. By taking over repetitive tasks, AI frees up time for strategic work, enhancing efficiency. AI boosts productivity, providing insights that drive smarter decision-making. It elevates quality, reducing human error and ensuring consistency. AI also fosters innovation by predicting trends and spotting growth opportunities.

AI can also track how well your hiring process is performing by comparing key hiring metrics against your targets. This allows for real-time adjustments and strategic decision-making.
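To make this concrete, here is a minimal Python sketch of comparing hiring-funnel metrics against targets. The metric names and values are illustrative assumptions, not figures from any particular HR system.

# A minimal, hypothetical sketch: compare hiring-funnel metrics against targets.
# Metric names and values below are illustrative, not from any real system.

metrics = {
    "time_to_fill_days": 41,
    "offer_acceptance_rate": 0.78,
    "qualified_candidates_per_opening": 6.2,
}

targets = {
    "time_to_fill_days": 35,                  # lower is better
    "offer_acceptance_rate": 0.85,            # higher is better
    "qualified_candidates_per_opening": 5.0,  # higher is better
}

lower_is_better = {"time_to_fill_days"}

for name, actual in metrics.items():
    target = targets[name]
    on_track = actual <= target if name in lower_is_better else actual >= target
    status = "on track" if on_track else "needs attention"
    print(f"{name}: actual={actual}, target={target} -> {status}")

In practice these numbers would come from your applicant-tracking or HRIS data rather than being typed in by hand.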

Furthermore, AI enhances employee engagement and talent development. By analyzing data trends, AI can determine employee needs and wants, aiding in the creation of effective engagement strategies. It can also develop personalized training programs and career paths, supporting employee growth.

Challenges and Risks of AI in HR

The use of AI in HR, while offering benefits, also presents several challenges and risks. These include legal, ethical, social, and technical concerns.

Legal Risks: Lawsuits about AI use in HR are on the horizon. HR managers are using AI to make recruiting decisions, rank employee performance, and decide on promotions and firings. These cases often focus on bias in HR software, background checks, and hiring practices, and on unfair treatment of applicants and employees. Many states have already taken action to limit how HR can use AI.

Ethical Risks: AI’s potential for problems in areas such as data privacy and bias raises ethical concerns. For instance, if the AI system learns from biased data or uses biased rules to make decisions, it could lead to unfair outcomes. Ethical risks could affect the reputation and trust of the organization as well as the morale and engagement of employees.

Social Risks: The use of AI in HR also poses reputational risks to organizations. When a company uses unfair AI systems, it risks damaging its good name and losing the trust of its employees and the public.

Technical Risks: AI is only as good as the data that goes into it, so the validity and accuracy of data used to train the AI are crucial. If the data is inaccurate or not representative, the AI system could make incorrect or biased decisions.

These risks show why it’s so important and urgent to reduce unfairness in AI and make sure AI is accountable and transparent.

What Exactly Is AI Bias?

AI bias occurs when an AI system makes unfair or inaccurate decisions or predictions based on flawed data or algorithms. AI bias can affect anyone, but especially those who belong to marginalized or underrepresented groups.


Some common examples of AI bias in HR are:

  1. Gender Bias: An AI system can show favoritism towards one gender. For instance, an AI system trained mostly on men’s resumes may lean towards male candidates.
  2. Bias in Data Sets: The data given to AI systems shapes their learning. Biased data results in a biased AI. For example, an AI trained on data implying men excel in technical jobs may lean towards men for these roles.
  3. Job Description Bias: The language in job descriptions can sway who applies. Job descriptions that use language more appealing to one gender may deter the other gender from applying.
  4. Sample Bias: This happens when the AI learns from data that doesn’t match the real world, for example when one group of people is over- or underrepresented in the data.
  5. Algorithmic Bias: This type of bias comes from the algorithm itself, not the data it learns from. An algorithm is the set of instructions, a recipe, that tells the AI what steps to take and in what order to solve a problem or create something new. If those steps systematically favor certain outcomes, the results can be biased even when the data is sound.
  6. Representation Bias: Like sample bias, this bias happens during data collection: the data used to train or test an AI system does not reflect the diversity of the real world. For example, if the data includes resumes only from white men, the AI system may not evaluate resumes from women or people of color fairly.
  7. Language Bias: This occurs when an AI system, or the recruiters using it, favors candidates who use a particular dialect or style of language.
  8. Resume Scoring Bias: HR professionals often use AI to score resumes and screen candidates. Because these tools rely on pre-trained data sets to find candidates that best fit the job requirements, they can reproduce whatever biases those data sets contain.
  9. Preprocessing Bias: This occurs when the methods used to prepare and process the data for the AI system introduce bias.
  10. Confirmation Bias: This happens when an AI system prefers data that agrees with what it already believes or assumes.
  11. Exclusion Bias: This is when data leaves out certain groups, so these groups don’t show up as much in the output.
  12. Societal Bias: This is when an AI system’s outputs reflect societal prejudices, such as racial or gender biases.

For a deeper look at the examples and causes of AI bias, see our previous article, here.

Why Does AI Bias Matter?

AI bias matters because it can affect people’s lives in many ways. It can lead to unfair or harmful decisions, such as denying someone a job, a loan, or medical treatment. It can also lead to unfair treatment of certain groups, such as women, minorities, LGBTQ+ people, older workers, people with different religious or political views, or people with disabilities, resulting in discrimination or prejudice against them. Finally, AI bias can damage the trust and confidence that people have in AI systems and the organizations that use them.

Some examples of AI bias in HR and its impacts are:

  • Amazon’s AI Recruiting Tool: In 2018, Amazon scrapped its secret AI recruiting tool after it was found to be biased against women: the system penalized female candidates in the recruitment process. This is a clear example of how AI, if not properly trained and monitored, can perpetuate existing biases and lead to unfair outcomes. You can read more about it here.

  • Bias in Job Descriptions: Jordan Birnbaum, VP and Chief Behavioral Economist at ADP, explains that the words in job ads can influence who applies. Words like ‘competitive,’ ‘analytical,’ and ‘independent’ can attract different people than ‘collaborative,’ ‘conscientious,’ and ‘sociable.’ For example, if a job ad uses the word ‘ninja,’ it might attract more men. This shows how AI, if trained on biased data, can perpetuate these biases.
  • Facial Recognition Bias: Joy Buolamwini, a researcher at the MIT Media Lab, found that some facial recognition technologies show bias. The software failed to detect her face until she put on a white mask. This shows that if AI learns from biased data, it can make unfair decisions. You can read more about it here.

AI bias is a complex and pervasive problem that affects many aspects of our society and economy. HR leaders have a crucial role in ensuring that AI is fair, ethical, and beneficial for everyone.

Can HR Leaders and Other Professionals Help?

Here are some ways HR leaders and other professionals can reduce AI bias in their work:

1. Humans must regularly check AI systems, for example by performing regular audits. This means looking at the data used to train the AI, checking the rules that help it make decisions, and watching how it performs in real life to make sure it is fair.

For example, an HR leader might regularly check how well the AI system is reviewing resumes. They could examine the data set used to train the AI, making sure it includes a wide range of candidates; check the rules the AI uses to sort candidates, looking for any unintended bias; and compare the AI’s decisions in real situations with those made by people to see whether they differ.

HR leaders can test the fairness of the AI system. This could involve running simulations to see how the AI system ranks different types of candidates and checking for any signs of bias.

By doing these regular audits, HR leaders can find and mitigate biases in their AI systems, helping to ensure a hiring process that is fair and consistent.
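As one concrete illustration of such an audit, here is a minimal Python sketch that compares the AI’s selection rates across candidate groups and applies the widely used “four-fifths” rule of thumb. The group labels and decisions in it are invented for illustration.

# A minimal, hypothetical audit sketch: compare the AI's selection rates across
# candidate groups and flag gaps using the common "four-fifths" rule of thumb.
# The group labels and decisions below are invented for illustration.

from collections import defaultdict

# Each record: (candidate group, whether the AI advanced the candidate)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
advanced = defaultdict(int)
for group, was_advanced in decisions:
    totals[group] += 1
    if was_advanced:
        advanced[group] += 1

rates = {group: advanced[group] / totals[group] for group in totals}
best_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best_rate
    flag = "OK" if ratio >= 0.8 else "review for possible disparate impact"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")

A real audit would use the organization’s own decision logs and legally appropriate group definitions, and a flagged ratio is a prompt for human review rather than a verdict.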


2. Make sure that AI teams and data sets are diverse and inclusive. Involve people from different backgrounds in creating and using AI systems. Use data sets that are representative of the population.

HR leaders can ensure diversity and inclusivity in AI teams. They can do this by hiring people from a variety of backgrounds, experiences, and viewpoints. The goal might be to have a mix of genders, ethnicities, ages, and educational backgrounds in the AI team.

HR leaders can also ensure that people from various backgrounds contribute to the creation and management of AI systems. This means involving employees from across the company, in different roles and at different levels, in building, developing, and reviewing AI systems.

Finally, HR leaders can use data sets that reflect the population. This means collecting data from many different people so that it captures the population’s diversity. For example, an HR leader training an AI system to review resumes would use a data set that includes resumes from people of different ages, genders, ethnicities, and backgrounds.
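Here is a minimal Python sketch of one way to check representativeness, comparing the makeup of a training set against an assumed benchmark for the relevant labor pool. The group labels, counts, and benchmark shares are illustrative assumptions only.

# A minimal, hypothetical sketch: compare the makeup of a resume training set
# against a benchmark for the relevant labor pool. Group labels, counts, and
# benchmark shares are illustrative assumptions only.

from collections import Counter

training_resume_groups = ["group_a"] * 700 + ["group_b"] * 200 + ["group_c"] * 100

# Assumed benchmark shares for the labor pool (illustrative).
benchmark_shares = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

counts = Counter(training_resume_groups)
total = sum(counts.values())

for group, expected in benchmark_shares.items():
    actual = counts.get(group, 0) / total
    gap = actual - expected
    if gap < -0.05:
        note = "under-represented"
    elif gap > 0.05:
        note = "over-represented"
    else:
        note = "roughly representative"
    print(f"{group}: {actual:.0%} of training data vs {expected:.0%} benchmark -> {note}")

The 5-percentage-point threshold here is arbitrary; the point is simply to surface gaps for the team to investigate.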

By taking these steps, HR leaders can ensure that their use of AI is diverse, inclusive, and representative.


3. Learn about AI bias and its ethical implications and share this knowledge with others. Provide training on how to identify and mitigate bias in AI systems. Promote a culture of fairness and inclusion within your organization.

HR leaders and professionals can play a pivotal role in reducing AI bias in their work through continuous learning and training.

HR leaders can educate themselves about AI bias and its ethical implications. They can attend workshops, webinars, or online courses that focus on AI ethics. For instance, the World Economic Forum has created an HR toolkit for the responsible use of AI.

After they learn about AI bias, they can teach others in their organization. They can do this by leading training sessions, speaking at events, or starting conversations about AI and bias.

HR leaders can also provide targeted training on how to identify and reduce bias in AI systems. This might include hands-on exercises where workers use AI systems and learn how to spot potential biases.


4. Work with other stakeholders and experts to address AI bias challenges. Talk with AI teams, data scientists, HR leaders, ethicists, regulators, and other professionals to find ways to lessen AI bias.

HR leaders can convene AI teams, data scientists, other HR professionals, ethicists, regulators, and outside experts to find ways to lessen AI bias. This might include regular meetings or workshops where these people share their knowledge and experience. Making decisions as a group can be a key part of these discussions, encouraging diverse viewpoints.

For example, an HR leader might work with a data scientist to understand the technical aspects of AI bias. They could also talk with ethicists to understand its ethical dimensions, while regulators could offer insight into the legal aspects, helping the organization stay in line with relevant laws and rules.

Also, HR leaders can get many people involved in picking and using AI-based HR tools. This could include HR professionals and the workers whom these tools will affect.


5. Speak up for fair and transparent policies for the use of AI in your organization. Push for clear rules for designing, developing, using, and auditing AI systems. Make sure workers can give their input in the process.

HR leaders can push for clear rules governing how AI systems are designed, developed, used, and audited. These rules should spell out how to verify that the systems are fair and transparent.

For example, an HR leader could work with their team to create a simple checklist. This checklist could cover important things like data quality, whether people can make sense of how the AI system works, and how the system impacts different groups of workers. One possible form for such a checklist is sketched below.
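A minimal Python sketch of such a checklist follows. The items and the pass/fail answers are illustrative placeholders, not a standard or a complete list.

# A minimal, hypothetical sketch of an AI-review checklist. The items and
# answers below are illustrative placeholders only.

checklist = [
    ("Training data is recent, accurate, and representative", True),
    ("The system's decisions can be explained to a non-technical reviewer", True),
    ("Outcomes have been compared across different groups of workers", False),
    ("Workers have a channel to give feedback on AI-driven decisions", True),
]

passed = sum(1 for _, ok in checklist if ok)
print(f"{passed} of {len(checklist)} checks passed")
for item, ok in checklist:
    print(f"[{'x' if ok else ' '}] {item}")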

They could also favor explainable AI. Explainable AI, or XAI for short, refers to AI whose workings people can understand and trust: it makes clear how an AI model works, what it is likely to do, and what biases it might have. Want to know more? Check it out here.
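Here is a minimal Python sketch of the idea behind explainability, assuming scikit-learn is available: a simple, interpretable screening model whose feature weights can be inspected and questioned. The feature names and candidate data are invented for illustration, and real XAI tooling goes well beyond this.

# A minimal, hypothetical explainability sketch: fit a simple, interpretable
# screening model on toy data and report which features drive its scores.
# Feature names and candidate data are invented; assumes scikit-learn is installed.

import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "relevant_certifications", "referral_flag"]

# Toy candidate data: one row per candidate, columns follow feature_names.
X = np.array([
    [2, 0, 0],
    [5, 1, 0],
    [7, 2, 1],
    [1, 0, 1],
    [9, 3, 0],
    [4, 1, 1],
])
y = np.array([0, 1, 1, 0, 1, 1])  # 1 = advanced to interview in past decisions

model = LogisticRegression().fit(X, y)

# The coefficients show each feature's direction and relative weight, which is
# one simple way to make an automated screen inspectable by HR reviewers.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")

A transparent model like this lets reviewers ask why a feature carries the weight it does, which is much harder with an opaque system.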

Also, HR leaders can make sure that workers can give their input in the process. This might mean making ways for workers to give feedback on AI systems, or even getting workers involved in picking and using AI-based HR tools.

Finally, HR leaders can communicate these rules across the organization and provide training on AI ethics. This ensures that all workers understand how AI is used in the organization and how to find and reduce bias in AI systems.

Conclusion

AI bias is a complex and pervasive problem that affects many aspects of our society and economy. HR leaders have a crucial role in ensuring that AI is fair, ethical, and beneficial for everyone. In this article, we shared some steps to make AI fairer, like using data from different groups, working with diverse people, training yourself and others, creating a fair culture, talking to stakeholders, and hiring without bias. These actions can help create a more welcoming and inclusive workplace for everyone.

But we also recognize that AI is not a perfect solution and that humans still have an important role to play in HR. We cannot take the ‘human’ out of human resources. We need to balance the use of AI with human judgment, empathy, and values. We need to check and test the impact of AI on our employees and our organizations. And we need to keep learning and improving our AI systems to ensure they align with our goals and principles. By doing so, we can harness the power of AI for good and avoid the pitfalls of AI bias.


References

  1. ITRex Group. (2021, March 9). What is AI bias really, and how can you combat it? ITRex. https://itrexgroup.com/blog/what-is-ai-bias-really-and-how-can-you-combat-it
  2. Forbes. (2021, April 14). How AI is changing the way we communicate. Forbes. https://www.forbes.com/sites/bernardmarr/2021/04/14/how-ai-is-changing-the-way-we-communicate/
  3. Brookings Institution. (2018, April 24). How artificial intelligence is transforming the world. Brookings. https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/
  4. Devdiscourse. (2023, March 15). The power of AI: Transforming the way we live and work. Devdiscourse. https://www.devdiscourse.com/article/technology/2379851-the-power-of-ai-transforming-the-way-we-live-and-work
  5. Spiceworks. (n.d.). AI bias challenges in HR and 6 ways companies can address them. Retrieved April 7, 2023, from https://www.spiceworks.com/hr/future-work/guest-article/ai-bias-challenges-hr/
  6. Harvard Business Review. (2019, October 3). What do we do about the biases in AI? Retrieved April 7, 2023, from https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
  7. Microsoft. (2020, October 29). When bias begets bias: A source of negative feedback loops in AI systems. Retrieved April 7, 2023, from https://www.microsoft.com/en-us/research/blog/when-bias-begets-bias-a-source-of-negative-feedback-loops-in-ai-systems/
  8. SAP Insights. (2020, October 14). AI for HR: Practical solutions for a modern workforce. Retrieved April 7, 2023, from https://www.sap.com/insights/ai-for-hr.html
  9. Forbes Technology Council. (2019, February 6). The emerging impact of AI on HR. Retrieved April 7, 2023, from https://www.forbes.com/sites/forbestechcouncil/2019/02/06/the-emerging-impact-of-ai-on-hr/
  10. Harvard Business Review. (2019, October 25). Using AI to eliminate bias from hiring. Retrieved April 7, 2023, from https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring
  11. Pew Research Center. (2023, April 20). AI in hiring and evaluating workers: What Americans think. Retrieved April 7, 2023, from https://www.pewresearch.org/internet/2023/04/20/ai-in-hiring-and-evaluating-workers-what-americans-think/
  12. National Institute of Standards and Technology. (2019, December 19). NIST study evaluates effects of race, age, sex on face recognition software. Retrieved April 7, 2023, from https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software
  13. Caprino, K. (2021, January 7). How AI can remove bias from the hiring process and promote diversity and inclusion. Retrieved April 7, 2023, from https://www.forbes.com/sites/kathycaprino/2021/01/07/how-ai-can-remove-bias-from-the-hiring-process-and-promote-diversity-and-inclusion/
  14. Forbes Human Resources Council. (2021, October 14). Understanding bias in AI-enabled hiring. Retrieved November 7, 2023, from https://www.forbes.com/sites/forbeshumanresourcescouncil/2021/10/14/understanding-bias-in-ai-enabled-hiring/
  15. National Institute of Standards and Technology. (2022, March 9). There’s more to AI bias than biased data, NIST report highlights. Retrieved November 7, 2023, from https://www.nist.gov/news-events/news/2022/03/theres-more-ai-bias-biased-data-nist-report-highlights
  16. The Conversation. (2021, October 28). Artificial intelligence can discriminate on the basis of race and gender, and also age. Retrieved November 7, 2023, from https://theconversation.com/artificial-intelligence-can-discriminate-on-the-basis-of-race-and-gender-and-also-age-173617
  17. Codecademy. (2021, June 28). Why AI needs queer technologists & how to get involved. Retrieved November 7, 2023, from https://www.codecademy.com/resources/blog/lgbtq-identity-in-ai-systems-bias/ (see also Calligo, “AI bias is frequently failing the LGBTQ+ community”)
  18. World Economic Forum. (2021, December 8). How to keep the ‘human’ in human resources with AI-based tools. Retrieved November 7, 2023, from https://www.weforum.org/agenda/2021/12/how-to-keep-human-in-human-resources-with-ai-based-tools
  19. SHRM. (2020, December 1). AI-related lawsuits are coming. Retrieved November 7, 2023, from https://www.shrm.org/resourcesandtools/hr-topics/technology/pages/ai-lawsuits-are-coming.aspx
  20. Dastin, J. (2018, October 9). Amazon scraps secret AI recruiting tool that showed bias against women. Retrieved November 7, 2023, from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
  21. Spiceworks. (2021, October 21). The bias is real: 5 experts on AI bias in HR and how it can be addressed. Retrieved November 7, 2023, from https://www.spiceworks.com/hr/hr-strategy/articles/experts-on-ai-bias-in-hr/

Disclaimer: Joe Blaty (he/him/his) is an innovation leader with a passion for driving disruptive change, a storyteller, a trusted advisor, a futurist, and a Diversity, Equity, Inclusion, and Belonging advocate. The views and opinions expressed in this article are solely those of Mr. Blaty and are not representative or reflective of any individual employer or corporation.


Dr. Suzanne LaGrande

Executive Director of the Blosser Center | Big Picture Thinker | Community Builder | Storyteller

1 year ago

A lot of the solutions seem to have to do with creating better programming. But I wonder if we need less programming and more direct communication? Technology is presented as the efficient solution but often the costs in human terms are less connection and real interactions.

Joe Blaty

Principal AI Innovator: Empowering Organizations with Holistic, Ethical, Human-Centric Tech Solutions

1 year ago

After writing several articles on the topic, I believe that HR has a key leadership role in controlling an organization's AI transformation. I believe that the best way an organization can adopt AI and ethically transform the business is to lead with talented and knowledgeable HR folks. HR leaders don't have to be technical; they need to ensure that people are trained and ready for AI. Most importantly, HR leaders need to ensure that people's concerns are addressed transparently, and that employees have a say about AI transformation all along the journey. A unilateral, top-down rollout strategy is doomed to fail. DE&I policies modified to include transparent, explainable, and ethical AI are the future of HR and business.

Ben Stein

Career Coach | I help mid to senior level professionals get unstuck, gain clarity, and land their ideal role with more balance, pay, and impact in less than 90 days | Free Career Clarity Call in About

1 year ago

Absolutely crucial topic, Joe Blaty! The influence of AI in HR is undeniable, and addressing bias is paramount. These practical tips provide a clear path to ensuring fairness and equity in the hiring process. Collaborating with stakeholders and promoting transparency are key steps.
