A Comparative Analysis of AI and Minority Job Search
Akin Ayemobola, CPA, MBA, CTBME
Finance, Accounting, and Business Management Executive
Introduction
What is ethics, and what ethical principles should govern machine behavior in the advent of Artificial Intelligence (AI)? These are the two fundamental questions on my mind as I consider the impact of machine morals on minority job search. Ethics is defined as well-founded standards of right and wrong that prescribe what humans should do, usually regarding rights, obligations, societal benefits, fairness, or specific virtues (Velasquez et al., 2010). By this definition, standards of right and wrong should apply to both human and machine behavior. The rapid development of AI has changed many aspects of our lives, including the job search process. Nevertheless, AI-driven recruiting tools raise ethical concerns for minority job seekers. This paper compares several ethical concerns surrounding AI and minority job search.
Bias and Discrimination
Jean N. (2024) explains how training data is produced: “Data for AI training may be naturally generated by human activity and collected for use in an AI training dataset, or it may be manufactured to create synthetic data that mimics real-world training data. Synthetic training data is beneficial when real-world data is limited or sensitive.” This observation matters because an AI tool fed biased data will reproduce that bias in what it learns and in the content it generates, whether text or images. That said, AI-driven recruiting tools can perpetuate bias and discrimination, consciously or unconsciously, if trained on biased data, which may lead to biased outcomes for minority job seekers. Minority job searches can likewise be hindered by bias and discrimination in the recruiting process. “Both areas require careful consideration of potential biases and implementation of measures to ensure fairness and equity.” For example, suppose a company has historically hired a large percentage of people with European-centric names and then decides to launch an AI-driven screening tool. The tool's training dataset will most likely reflect what already exists in that environment, predominantly European-centric names. Depending on how the algorithm is built, the tool may develop an inherent bias against applicants whose names are not European-centric. Hence, the tendency for such algorithms to favor candidates whose résumés reflect the majority demographic is very high, leading to the underrepresentation of minority job seekers. The Elephant in AI report, produced by Prof. Rangita de Silva de Alwis, examined employment platforms through the perceptions of 87 Black students and professionals, coupled with an analysis of 360 online professional profiles, to understand how AI-powered platforms “reflect, recreate, and reinforce anti-Black bias.” Forty percent of respondents noted that they had received recommendations based on their identities rather than their qualifications, and 30 percent noted that the job alerts they received were below their current skill level. One of the core responsibilities of companies is to ensure that their recruitment processes promote diversity, because diversity and inclusion are strengths. A biased AI system undermines that responsibility.
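To make this mechanism concrete, the following minimal Python sketch (all names and data are entirely hypothetical, invented for illustration) shows how a screening model trained only on historical hiring outcomes learns to reward name tokens that merely correlate with past hires:

```python
# Toy illustration with hypothetical data: a "model" trained on historical
# hires learns to favor tokens that merely correlate with past decisions,
# such as certain names, rather than anything about qualifications.
from collections import Counter

# Historical outcomes skew toward European-centric names (illustrative).
historical_hires = ["Emma Smith", "Liam Jones", "Olivia Brown", "Noah Wilson"]
historical_rejects = ["Lakisha Washington", "Jamal Booker", "Aisha Khan", "Ngozi Okafor"]

def train_token_scores(hires, rejects):
    """Score each token by how often it appears among hires vs. rejects."""
    hire_counts = Counter(tok for name in hires for tok in name.lower().split())
    reject_counts = Counter(tok for name in rejects for tok in name.lower().split())
    tokens = set(hire_counts) | set(reject_counts)
    # Positive score -> token associated with past hires; negative -> rejects.
    return {t: hire_counts[t] - reject_counts[t] for t in tokens}

def screen(candidate, scores):
    """Sum learned token scores; the model simply inherits the historical skew."""
    return sum(scores.get(tok, 0) for tok in candidate.lower().split())

scores = train_token_scores(historical_hires, historical_rejects)
print(screen("Liam Smith", scores))       # scores higher purely on name overlap
print(screen("Jamal Okafor", scores))     # scores lower purely on name overlap
print(screen("Emma Washington", scores))  # name tokens pull in both directions
```

Nothing in the sketch measures skill, yet the two equally unqualified "candidates" receive different scores, which is the essence of the bias described above.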
Transparency and Accountability
The role of transparency and accountability in AI-driven recruiting tools should not be minimized. Most job seekers, especially minorities, want to know why their applications were rejected or how to improve their chances. However, AI systems are often not transparent enough to produce this level of feedback, leading to a sense of unfairness among job seekers. Zapata D. (2021) noted, “Silicon Valley is still prominently populated by white people, with men comprising the majority of leadership positions. It asks how the technology industry can create fair and balanced AI for the masses if there are still diversity challenges within the teams designing and implementing the algorithms upon which AI relies.” I wholly align with this statement: an AI system is only as intelligent as its dataset and the people feeding in the data. Moreover, many AI-driven recruiting tools lack transparency, making it difficult to understand their decision-making process. Minority job seekers want assurance that they are treated fairly and equitably. Ensuring transparency and accountability in AI and recruiting practices is critical for addressing these concerns and for building trust, an essential value of thriving organizations.
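One practical remedy is to design scoring systems that expose their reasoning. The following Python sketch is a hypothetical illustration (the criteria, weights, and threshold are assumptions, not any vendor's actual method) of a screening score that returns a per-criterion breakdown a rejected candidate could act on:

```python
# Hypothetical sketch: a transparent scoring function returns not just a
# decision but a per-criterion breakdown, making the outcome explainable.
# Criteria, weights, and threshold are illustrative assumptions.
WEIGHTS = {"years_experience": 2.0, "required_skills": 3.0, "certifications": 1.0}
THRESHOLD = 10.0

def score_with_explanation(candidate):
    contributions = {k: WEIGHTS[k] * candidate.get(k, 0) for k in WEIGHTS}
    total = sum(contributions.values())
    decision = "advance" if total >= THRESHOLD else "reject"
    # The breakdown makes the decision auditable and gives the candidate
    # concrete feedback on which criteria fell short.
    return {"decision": decision, "total": total, "breakdown": contributions}

result = score_with_explanation(
    {"years_experience": 2, "required_skills": 1, "certifications": 1}
)
print(result["decision"])   # the candidate is rejected...
print(result["breakdown"])  # ...but can see exactly why
```

A breakdown like this is the kind of feedback the paragraph above says minority job seekers currently lack.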
Inequality
Employment discrimination is a critical process through which organizations shape the extent and nature of economic inequality in society (Bielby & Baron, 1986; Pager et al., 2009; Rivera, 2012). Despite the proliferation of equal opportunity and diversity initiatives in organizations (Kalev et al., 2006; Kaiser et al., 2013), race-based discrimination remains pervasive in North American labor markets. Résumés containing minority racial cues, such as a distinctively African American or Asian name, receive 30–50 percent fewer callbacks from employers than otherwise equivalent résumés without such cues (Bertrand & Mullainathan, 2004; Oreopoulos, 2011; Gaddis, 2015). Given the crucial role of recruiting in occupational attainment, this form of discrimination contributes substantially to labor market inequality by blocking racial minorities’ access to career opportunities (Pager, 2007). AI-driven recruiting tools may worsen these existing inequalities if access to technology and digital literacy varies among job seekers, and minority job seekers may face similar barriers, with limited access to resources and networks. Addressing access and inequality in AI and recruiting practices is essential for promoting fairness and equal opportunity. Hence, a strong argument exists for a robust regulatory framework to ensure AI is used ethically.
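The callback disparities described above can be quantified with the four-fifths rule used in U.S. adverse-impact analysis: if one group's selection rate falls below 80 percent of the highest group's rate, potential adverse impact is flagged. A short Python sketch with illustrative numbers chosen to mirror the reported 30–50 percent callback gap (the counts themselves are invented for the example):

```python
# Sketch of an adverse-impact check based on the four-fifths (80%) rule:
# a selection rate below 80% of the highest group's rate is a red flag.
# Callback counts are illustrative, mirroring a roughly 40% callback gap.
def selection_rate(callbacks, applicants):
    return callbacks / applicants

def adverse_impact_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

rate_majority = selection_rate(100, 1000)  # 10% callback rate
rate_minority = selection_rate(60, 1000)   # 6% callback rate (40% fewer)

ratio = adverse_impact_ratio(rate_majority, rate_minority)
print(round(ratio, 2))  # 0.6
print(ratio < 0.8)      # True -> flags potential adverse impact
```

Running this kind of check on an AI screening tool's outputs is one concrete way a regulatory framework could operationalize the fairness concerns raised in this section.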
Human Judgment
Human judgment and context must be emphasized in recruiting, especially when a company intentionally wants to hire candidates with diverse opinions and backgrounds. Leaving the recruiting process solely to an AI system is undoubtedly flawed, yet advocating for eliminating AI-driven tools from the process is not practical. Instead, humans should remain involved to minimize the potential for unfairness and inequality. According to Rodney Brooks, current artificial intelligence technologies cannot lead us to artificial general intelligence (AGI); he argues that no current model will reach the AGI stage because none has a model for representing the real world. Similarly, AI-driven recruiting tools may miss important contextual factors and human judgment, oversimplifying complex recruiting decisions. Minority job seekers may face similar issues, with their unique experiences and perspectives overlooked. Balancing AI-driven efficiency with human judgment and contextual understanding is vital in both areas.
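One common way to keep humans involved is the human-in-the-loop pattern mentioned in the references: the system may advance clear cases automatically, but adverse or borderline outcomes are routed to a person. A hypothetical Python sketch (the threshold and routing policy are assumptions for illustration):

```python
# Hypothetical human-in-the-loop gate: the model may advance candidates
# automatically, but no candidate is rejected without human review, so
# context the model misses can still be applied. Threshold is assumed.
def route_decision(model_score, auto_advance=0.8):
    """Advance only clear cases automatically; everything else goes to a
    human reviewer instead of being screened out by the model alone."""
    if model_score >= auto_advance:
        return "advance"
    # No automatic rejection: a person reviews every candidate the model
    # would otherwise have filtered out.
    return "human_review"

print(route_decision(0.91))  # advance
print(route_decision(0.35))  # human_review
```

The design choice worth noting is the asymmetry: automation is allowed only for favorable outcomes, so the cost of model error falls on reviewer workload rather than on the job seeker.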
Conclusion
According to a 2019 Harvard Business Review study, employers using AI-enabled recruiting tools should analyze their entire recruiting pipeline, from attraction to onboarding, to “detect places where latent bias lurks or emerges anew.” Diverse teams must be involved in the development of AI-driven recruiting tools; if they are, we can potentially have less biased models and algorithms. The ethical concerns surrounding AI and minority job search share common themes, including bias, transparency, access, and human judgment. By recognizing these shared concerns, we can work towards developing responsible AI-driven recruiting tools and promoting equitable recruiting practices that address the unique challenges faced by minority job seekers.
References:
Kang et al. (2016). Whitened résumés: Race and self-presentation in the labor market. Administrative Science Quarterly, 61(3), 469–502.
Awad, E., et al. (2018). The Moral Machine experiment. Nature, 563(7729), 59–64.
Zapata, D. (2021). New study finds AI-enabled anti-Black bias in recruiting. Thomson Reuters Institute. https://www.thomsonreuters.com/en-us/posts/legal/ai-enabled-anti-black-bias/
Human intervention in the development of AI: The concept of human-in-the-loop. isahit. https://www.isahit.com/blog/human-intervention-in-the-development-of-ai-the-concept-of-human-in-the-loop
Blockchain technology: Revolutionizing data security and trust. MindStick. https://www.mindstick.com/blog/302468/blockchain-technology-revolutionizing-data-security-and-trust
Minority job seekers and their Hobson’s choice. Orphic Magazine. https://orphicmagazine.com/2018/12/02/minority-job-seekers-and-their-hobsons-choice/
Generative AI. TechTaffy. https://www.techtaffy.com/generative-ai/
Chen, N., Li, Z., & Tang, B. (2022). Can digital skill protect against job displacement risk caused by artificial intelligence? Empirical evidence from 701 detailed occupations. PLoS One, 17(11), e0277280.