Four-Part Series – Applicants’ Perceptions of Algorithm-Based Application Screening - Article 1: The Digital Shift in Job Applications



Hello, I am Amanda Fetch, currently based in NYC. I have a little over 20 years of experience in Analytics, Data Science, Machine Learning, and AI across the biotech, retail, entertainment, and defense industries. I currently serve as Head of Applied Science at data2, Brand Ambassador (AI) for Narratize, and AI Analyst at GAI Insights. In addition, I am researching my PhD dissertation in Technology and AI at Capitol Technology University, focusing on a data analysis framework for space situational awareness and orbital debris management.

Welcome to the Tech, Data, AI, Oh My! four-part series – Applicants’ Perceptions of Algorithm-Based Application Screening!

In this four-part series, I'll explore the evolving relationship between job applications and the advanced technologies reshaping them. As part of my PhD in Technology and Master of Research Methods program at Capitol Technology University, I conducted a study on applicants’ perceptions of algorithm-based application screening tools, and in this series of articles I aim to share that research beyond the classroom. These articles will peel back the layers of algorithmic screening tools in modern recruitment. With invaluable input from my Harvard Business Analytics Program family, the series unfolds the complexities, ethical quandaries, and uncharted territories of AI's role in the screening portion of the hiring process.

?

This series will contain four articles:

Article 1: The Digital Shift in Job Applications - Setting the stage for the digital revolution in job seeking and the urgency of understanding its dynamics, with a focus on the screening process.

Article 2: Decoding the Algorithm: Purpose and Methodology - A dive into the objectives and methods used within the study.

Article 3: Revealing Insights: Study Results - Presenting the pivotal findings from the study, complemented with visuals to explain the intricate data.

Article 4: Beyond the Data: Implications and Reflections - Discussing the broader implications of the findings, reflecting on limitations, and charting paths for future research.

Each installment of this series is designed to enlighten, inform, spark discussion, and give an inside glimpse into the research process. As we navigate through the evolving landscape of employment and technology, your insights and experiences are invaluable. Together, let's explore the potential of AI in shaping the future of hiring and consider the ethical dilemmas it introduces.

Stay tuned for thought-provoking content that promises to enrich your understanding of how industry practitioners view AI, with a focus on algorithm-based screening within the hiring process. Let this journey inspire you to reflect on the role of technology and AI in reshaping employment practices for the better.


Subscribe to "Tech, Data, AI, Oh My!" to ensure you don't miss out on this insightful series. Together, let's delve into the future of work, one article at a time.


#aiinrecruitment #futureofwork #digitalhiring #techdataaiohmy #ai #algorithms #hbap #harvard #research #captechu



Introduction

The contemporary job application landscape has transformed dramatically, from traditional methods like newspaper ads and mail-in applications to a process dominated by algorithmic screening. This modern approach, while seemingly efficient and unbiased on the surface, elicits mixed reactions from applicants. The use of algorithms can be perceived as a hallmark of a tech-savvy company or as a source of concern over potential bias. A notable example from 2018 involves a resume screening tool tested by a client of a law firm in Minneapolis, MN, which favored candidates named Jared who played lacrosse, suggesting a bias toward certain demographics and potentially basing decisions on factors like gender and race. This incident highlights the underlying biases in algorithmic decisions (Marks, 2022).
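To make the mechanism concrete, here is a minimal sketch of how this kind of bias can arise. The data and code are entirely hypothetical (this is not the tool from the incident): a naive screener that weights resume keywords by how often they appeared in past hires can end up rewarding proxy tokens, such as a first name or a hobby, more than job-relevant skills.

```python
# Hypothetical sketch: a keyword screener trained on historical hiring outcomes
# can absorb demographic proxies instead of job-relevant signals.
from collections import Counter

def train_keyword_weights(resumes, hired):
    """Weight each token by how much more often it appears in hired resumes."""
    hired_counts, rejected_counts = Counter(), Counter()
    for text, was_hired in zip(resumes, hired):
        (hired_counts if was_hired else rejected_counts).update(text.lower().split())
    vocab = set(hired_counts) | set(rejected_counts)
    # Add-one smoothing avoids division by zero for tokens seen on one side only.
    return {w: (hired_counts[w] + 1) / (rejected_counts[w] + 1) for w in vocab}

def score_resume(text, weights):
    """Sum the learned weights of a resume's tokens (unseen tokens are neutral)."""
    return sum(weights.get(w, 1.0) for w in text.lower().split())

# Toy "historical" data: past hires happened to share a name and a hobby.
resumes = [
    "jared lacrosse python sql",
    "jared lacrosse java",
    "maria python sql statistics",
    "chen java statistics",
]
hired = [True, True, False, False]

weights = train_keyword_weights(resumes, hired)
# The demographic proxies now outweigh a genuinely relevant skill.
print(weights["lacrosse"] > weights["statistics"])  # True
```

Nothing in this toy model "knows" about gender or hobbies; it simply amplifies whatever correlated with past hiring decisions, which is exactly how irrelevant traits can end up driving screening outcomes.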


A study conducted by Harvard Business School and Accenture, highlighted by Paul Marks and involving 8,000 job applicants and 2,250 hiring managers from the U.S., Germany, and Great Britain, reveals that resume screening algorithms could be excluding millions from employment opportunities because of requirements that are excessive and often irrelevant to actual job duties (Marks, 2022).


Applicants have resorted to finding loopholes in these algorithms, despite ethical reservations, even posting online guides on how to “beat” them (Marks, 2022). However, the absence of comprehensive research on the ethical implications of AI in Human Resource Management (HRM) has not deterred organizations from adopting these technologies. Evidence uncovered by HBS shows 99% of U.S. Fortune 500 firms use some form of recruitment software, with the market expected to hit $3.1 billion by 2025 (Fuller et al., 2021). HR practitioners, as demonstrated in the study Human resource developments with the touch of artificial intelligence: a scale development study, view AI positively for reducing repetitive tasks and accessing broader candidate pools, despite ongoing debates over AI's ethical concerns and fairness in recruitment (Kambur & Akar, 2022).


In contrast, the study Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development reviews the literature available as of 2020 and asks: although using algorithms in human resources (HR) recruitment may save time and money, at what cost do these savings come? Do they come at the cost of allowing unfair treatment, bias, and discrimination? The review found little research on the use of algorithms and AI in HR recruitment; some studies have examined HR selection processes, but only a few have addressed fairness perceptions and biases in HR recruitment (Köchling & Wehner, 2020).


Literature Review

Research on applicants' perceptions of AI in hiring shows limited focus on specific fields or on the screening process itself. The study AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context evaluated AI's fairness in HRM across functions such as recruitment and layoffs through a mix of qualitative and quantitative (mixed) methods. This study of 446 participants assessed how AI versus human decision-makers are perceived in terms of justice, trust, and dehumanization. Findings revealed a general distrust of, and sense of dehumanization associated with, AI decisions, suggesting that participants believe AI is less capable of fair HRM than humans. The study suggests the need for further research across diverse sectors and decision-making scenarios (Bankins et al., 2022).


Applicant Perceptions of Hiring Algorithms - Uniqueness and Discrimination Experiences as Moderators explored applicant reactions to hiring algorithms, investigating whether their use attracts or repels job seekers. The study revealed that individuals with previous discrimination experiences prefer the objectivity of algorithms, whereas those with unique career trajectories or high self-perceived uniqueness favor human decision-makers for a perceived fairer chance at selection. Two investigations supported these findings: one involving 165 German employees assessing their perceptions of the hiring process's fairness, and an online experiment with 255 U.S. participants recruited via Amazon Mechanical Turk (MTurk) evaluating a fashion company's selection process. The conclusions indicated that algorithms increase standardization at the cost of individualization in the hiring process, impacting applicants' ability to demonstrate their unique skills and qualities (Kaibel et al., 2019).


Like the previous study, Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures examined perceptions of the use of algorithms in selection and recruitment, also using MTurk. This research found that candidates often see algorithmic processes as less fair than those involving humans or a combination of algorithms and human judgment. This perspective persists regardless of whether the algorithmic decision benefits the applicant. One key concern is the belief that algorithms fail to appreciate the unique attributes that make candidates stand out from other applicants in the pool. Despite organizational advantages like efficiency and reduced bias, the study highlights a significant drawback from the applicants' viewpoint: the risk of overlooking individuality (Lavanchy et al., 2023). This was a recurrent theme in the literature on algorithmic hiring.


What’s Next?

The transition to algorithmic screening in job applications has significantly changed the recruitment landscape, offering a blend of perceived efficiency and potential biases. Studies reveal mixed applicant reactions to algorithms, from viewing them as indicators of a company's technological advancement to concerns over inherent biases, exemplified by a case in which a resume screening tool favored certain demographics over others. Despite these concerns, the rapid adoption of AI in HRM continues, with research highlighting both HR practitioners' positive outlook on AI's potential to streamline processes and the ethical debates surrounding fairness in recruitment. The literature reviewed here further delves into how applicants perceive AI, with studies indicating general mistrust and perceived dehumanization of AI, alongside a preference for human decision-makers among those valuing uniqueness or having faced discrimination. These studies shed light on the complexity of algorithmic hiring, balancing standardization against the need for individual consideration.


The next article in this series, Decoding the Algorithm: Purpose and Methodology, will focus on the purpose and methods used to research this topic, unpacking the nuances of AI in the screening stage of recruitment and explaining how this challenge was explored within the study.


References

Bankins, S., Formosa, P., Griep, Y., & Richards, D. (2022). AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Information Systems Frontiers, 24(3), 857-875. https://doi.org/10.1007/s10796-021-10223-8


Fuller, J., Raman, M., Sage-Gavin, E., Hines, K., et al. (2021, September). Hidden Workers: Untapped Talent. Harvard Business School Project on Managing the Future of Work and Accenture. https://www.hbs.edu/managing-the-future-of-work/Documents/research/hiddenworkers09032021.pdf


Kaibel, C., Koch-Bayram, I., Biemann, T., & Muhlenbock, M. (2019). Applicant Perceptions of Hiring Algorithms - Uniqueness and Discrimination Experiences as Moderators. Academy of Management Annual Meeting Proceedings, 2019(1), 1181–1186. https://doi-org.captechu.idm.oclc.org/10.5465/AMBPP.2019.210


Kambur, E., & Akar, C. (2022). Human resource developments with the touch of artificial intelligence: a scale development study. International Journal of Manpower, 43(1), 168-205. https://doi.org/10.1108/IJM-04-2021-0216


Köchling, A., & Wehner, M. C. (2020). Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Business Research, 13(3), 795-848. https://doi.org/10.1007/s40685-020-00134-w


Lavanchy, M., Reichert, P., Narayanan, J., & Savani, K. (2023). Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures. Journal of Business Ethics, 1–26. https://doi-org.captechu.idm.oclc.org/10.1007/s10551-022-05320-w


Marks, P. (2022). Algorithmic Hiring Needs a Human Face: Artificial intelligence may be an unstoppable force, but in the recruitment market it has met an immovable object: humans. Something has to give. Communications of the ACM, 65(3), 17–19. https://doi-org.captechu.idm.oclc.org/10.1145/3510552


