Discriminatory Hiring Algorithm
Rupa Singh
Founder and CEO at 'The AI Bodhi' and 'AI-Beehive' | Author of "AI ETHICS with BUDDHIST PERSPECTIVE"| Top 20 Global AI Ethics Leader | Thought Leader| Expert Member at Global AI Ethics Institute
Algorithms do not build themselves; they are shaped by human input and human choices. The effectiveness and potential biases of an algorithm are ultimately determined by the decisions people make about the desired outcome, the predictors to consider, and the training data to use.
The critical element of regulating algorithms is to regulate the humans who build them.
Human Decision-Making vs. Algorithmic Decision-Making
Human decision-making processes are often opaque, making it difficult for external observers to discern the underlying factors that influence a decision. Where algorithms are involved, demonstrating discrimination can be easier than it is for human decision-making. Moreover, the law already prohibits discrimination by algorithms, and by regulating the algorithmic design process, measures can be taken to enforce this prohibition.
Reading the Algorithmic Code is Not Enough
To assess gaps or disparities, such as differences in hiring rates by gender, it is not enough to read the algorithm’s code to comprehend its inner workings. Instead, the focus should be on examining the input data provided to the algorithm, analyzing the output patterns, and conducting tests and experiments to gain insights into its behaviour and assess potential biases or discriminatory outcomes, as sketched below.
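For instance, one common output-side test is to compare selection rates across groups and compute the ratio between them, a heuristic often called the four-fifths rule. Below is a minimal Python sketch of such an audit; the column names and the small dataset are illustrative assumptions, not data from any real system.

```python
# A minimal sketch of an outcome audit: compare selection rates by gender
# and compute the disparate impact ratio (the "four-fifths rule" heuristic).
# Column names, data, and the 0.80 threshold usage are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of applicants hired (outcome == 1) within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group's selection rate to the highest group's."""
    return rates.min() / rates.max()

applicants = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0,   1,   0,   1,   1,   0,   1,   1],
})

rates = selection_rates(applicants, "gender", "hired")
print(rates)
print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
# A ratio well below 0.80 is commonly treated as a red flag worth investigating.
```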
"Algorithms introduce new avenues for historical discrimination to be perpetuated or biases to be amplified, thereby exacerbating discriminatory outcomes"
Implementing Effective Regulations
Regulating the algorithmic design process also means establishing detailed recordkeeping requirements that document how an algorithm was designed and developed.
This documentation promotes transparency about the decisions and choices made during development, and it serves as evidence when evaluating algorithmic decision-making, identifying potential biases, and establishing accountability for discriminatory outcomes.
Documenting the design process also sheds light on the trade-offs involved in balancing competing values and considerations, empowering regulators, researchers, and affected individuals to assess whether the algorithm aligns with ethical and legal standards, and whether its potential benefits justify any inherent risks or limitations.
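As a concrete illustration, such a design record could be kept in machine-readable form so it can be audited later. The sketch below is a hypothetical schema using Python dataclasses; the specific fields and example values are my assumptions, not any regulatory standard.

```python
# A minimal sketch of a machine-readable design record, in the spirit of the
# recordkeeping requirements described above. Every field shown here is an
# illustrative assumption, not a prescribed or official format.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DesignRecord:
    objective: str                 # the outcome the model is trained to predict
    training_data_source: str      # provenance of the training data
    features_used: list[str]       # predictors the model considers
    features_excluded: list[str]   # predictors deliberately left out
    fairness_checks: dict[str, float] = field(default_factory=dict)

record = DesignRecord(
    objective="predict interview callback",
    training_data_source="2015-2022 internal hiring decisions",
    features_used=["education_years", "work_experience_years"],
    features_excluded=["gender", "name"],
    fairness_checks={"disparate_impact_ratio": 0.78},
)

# Persisting the record creates the audit trail regulators could later inspect.
print(json.dumps(asdict(record), indent=2))
```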
Discrimination in the Age of Algorithms
It is important to recognize that bias can emerge at various stages of algorithmic development, including data collection, feature selection, and model training.
A biased outcome can result from skewed or unrepresentative training data, from features that act as proxies for protected attributes, or from modelling choices made during training.
Addressing bias in hiring models requires a thoughtful and proactive approach: carefully selecting features and data sources that promote fairness, applying techniques to mitigate bias during training, and continuously monitoring and evaluating the model’s performance to ensure equitable outcomes.
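One well-known training-time mitigation is the reweighing scheme of Kamiran and Calders, which weights each training example so that group membership and the positive label become statistically independent. The sketch below assumes pandas and scikit-learn; the column names and toy data are illustrative, and this is only one of several possible techniques.

```python
# A minimal sketch of reweighing (Kamiran & Calders): weight each example by
# w(g, y) = P(g) * P(y) / P(g, y), so group and label are independent under
# the reweighted distribution. Column names and data are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighing_weights(df: pd.DataFrame, group: str, label: str) -> pd.Series:
    """Per-row weight w(g, y) = P(g) * P(y) / P(g, y)."""
    p_g = df[group].value_counts(normalize=True)      # marginal P(group)
    p_y = df[label].value_counts(normalize=True)      # marginal P(label)
    p_gy = df.groupby([group, label]).size() / len(df)  # joint P(group, label)
    return df.apply(
        lambda r: p_g[r[group]] * p_y[r[label]] / p_gy[(r[group], r[label])],
        axis=1,
    )

df = pd.DataFrame({
    "gender":     ["F", "F", "F", "M", "M", "M", "M", "M"],
    "experience": [2,   5,   3,   6,   7,   1,   8,   4],
    "hired":      [0,   1,   0,   1,   1,   0,   1,   1],
})

weights = reweighing_weights(df, "gender", "hired")
model = LogisticRegression()
model.fit(df[["experience"]], df["hired"], sample_weight=weights)
```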
Case Study: Gender Discrimination in Hiring
Let us consider a hypothetical case in which a real estate company is accused of gender discrimination in its hiring. The company and the plaintiff agree that the company has hired fewer women than men in the past, but they disagree about the underlying reason. The plaintiff alleges discrimination; the company argues that the difference reflects variations in qualifications.
At trial, the plaintiff’s legal team presents evidence that the plaintiff, along with other female applicants, has more years of schooling than the male applicants the company hired. The firm counters that while it does consider education, it also takes into account years of work experience, which tends to be higher among male applicants, and it asserts that work experience carries greater weight in its hiring decisions.
The plaintiff argues that the company is using work experience to justify the unequal representation of women in its workforce. The plaintiff’s lawyers contend that the emphasis on work experience is a post-hoc justification serving as a pretext for hiring more men.
This scenario highlights the complexity of addressing allegations of gender discrimination in hiring. Whether work experience is really the deciding factor in the hiring process is not readily discernible.
Hiring Process with Algorithms in the Loop
What if the firm had used an algorithm in its hiring process within a well-regulated environment?
In that case, the plaintiff’s lawyers could request access to the screening and training algorithms the company employed, as well as the underlying dataset used for decision-making. They could enlist experts to analyze the algorithmic screening rule and its impact on hiring outcomes, and statistical techniques that simulate counterfactuals could be used to evaluate how applicants of different genders with similar qualifications are treated by the algorithm.
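A minimal version of such a counterfactual test might score each applicant, flip only the gender attribute, and re-score: large shifts in predicted scores would suggest the model is sensitive to gender itself rather than to qualifications. The sketch below assumes a fitted scikit-learn-style model that accepts a pandas DataFrame; the model object and column names are hypothetical.

```python
# A minimal sketch of a counterfactual sensitivity test. The model is assumed
# to be a fitted scikit-learn-style pipeline exposing predict_proba and
# accepting a DataFrame; "gender" is an assumed column name.
import pandas as pd

def counterfactual_gap(model, applicants: pd.DataFrame,
                       gender_col: str = "gender") -> pd.Series:
    """Per-applicant change in predicted hiring score when gender is flipped."""
    original = model.predict_proba(applicants)[:, 1]
    flipped = applicants.copy()
    flipped[gender_col] = flipped[gender_col].map({"M": "F", "F": "M"})
    counterfactual = model.predict_proba(flipped)[:, 1]
    return pd.Series(counterfactual - original, index=applicants.index)

# Hypothetical usage, assuming a fitted pipeline named screening_model:
# gaps = counterfactual_gap(screening_model, applicant_df)
# print(gaps.describe())  # near-zero gaps would be expected for a gender-blind rule
```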
Ultimately, a well-regulated environment that embraces algorithms would empower both sides to engage in a data-driven discussion. The screening and training algorithms, as well as the underlying dataset, would be subject to scrutiny and analysis. This level of transparency could facilitate a more informed legal process.
By conducting a comprehensive analysis of the algorithm, legal experts can evaluate not only its impact on gender disparities but also its alignment with legal standards such as business necessity. In the quest for algorithmic fairness, it is essential to scrutinize not only the outcomes but also the objectives embedded in algorithms, to ensure they do not perpetuate discrimination while still accounting for the employer’s legitimate business interests.
Thank you for reading this article. I hope you found it useful.