A number of organisations have experimented with AI in the hope of innovating and streamlining their recruitment processes. But alongside the potential benefits, there is an essential topic that must be addressed: AI bias. In a recent meeting, the Sten10 team discussed the potential sources of AI bias in recruitment, why it occurs and what the implications are for equitable hiring practices.
- Biased training data: AI models are only as good as the data they’re trained on. In recruitment, if past hiring data reflects biases, the models learn and replicate them, favouring certain groups over others. For instance, if an organisation has historically hired more candidates from specific universities, AI may unintentionally prioritise those backgrounds, overlooking equally qualified candidates from less represented universities.
- Lack of representation: In fields where representation is lacking, AI systems trained primarily on data from majority groups will likely perform poorly when evaluating candidates from underrepresented groups. For instance, if facial recognition software is trained primarily on lighter-skinned individuals, its accuracy will decrease when evaluating darker-skinned individuals, reinforcing bias and limiting fair assessments.
- Algorithmic design and prioritisation: Recruitment AI is often optimised for efficiency or specific performance indicators, which can lead to unintentional bias. For instance, an algorithm prioritising candidates who previously succeeded in similar roles may unintentionally favour demographics that historically had more access to those roles, thus sidelining diverse perspectives and backgrounds.
- Feedback loops: Once bias is built into AI, it can reinforce itself through feedback loops. For example, if a hiring model prefers certain candidates, those hires feed back into the AI’s training data, reinforcing similar patterns and making the cycle of bias increasingly difficult to break (a toy simulation of this effect is sketched after this list).
- Gaps in human oversight: AI systems rely on human judgment for design and implementation. If the individuals creating and testing these systems are not trained in ethics or inclusivity, biased outcomes are more likely. This oversight gap can lead to AI outcomes that disproportionately favour certain groups.
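To make the feedback-loop point concrete, here is a deliberately simplified sketch in Python. Everything in it is a hypothetical assumption: the group labels, the initial 60/40 imbalance, and the "model" itself, which is just a weighting rule that over-favours whichever group dominates its training data. It illustrates the mechanism, not any real recruitment algorithm.

```python
# A deliberately simplified, purely illustrative feedback loop.
# The group labels, rates, and "model" are hypothetical assumptions,
# not real recruitment data or a real hiring algorithm.

def model_preference(history):
    """Stand-in for a model trained on past hires: the squared weighting
    mimics a model that over-favours the majority pattern in its
    training data rather than matching base rates exactly."""
    p = sum(1 for h in history if h == "A") / len(history)
    return p ** 2 / (p ** 2 + (1 - p) ** 2)

# Seed the training data with a mild historical imbalance: 60% group A.
history = ["A"] * 60 + ["B"] * 40

for round_no in range(1, 6):
    pref_a = model_preference(history)
    hires_a = round(20 * pref_a)  # 20 hires per round, split by preference
    history += ["A"] * hires_a + ["B"] * (20 - hires_a)
    share_a = sum(1 for h in history if h == "A") / len(history)
    print(f"round {round_no}: preference for A = {pref_a:.2f}, "
          f"A's share of all hires = {share_a:.2f}")
```

Run over a few rounds, the model's preference for group A climbs steadily, because each round's hires become the next round's training data.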
Unchecked AI bias can lead to missed opportunities for hiring diverse talent, narrowing the pool of skills and perspectives within an organisation.
Addressing AI bias in recruitment
- Data auditing and representation: Regularly audit training data to ensure it includes diverse backgrounds, experiences, and perspectives. By actively expanding the dataset, organisations can help mitigate biases and create more equitable hiring recommendations. One common screening check is sketched in the first example after this list.
- Algorithm transparency and accountability: Use transparent algorithms with clear, interpretable outcomes so decision-makers understand how the AI arrives at its recommendations (see the second sketch after this list). Combining AI insights with human judgment allows recruiters to verify and address any potentially biased suggestions.
- Design a fair and inclusive assessment process: Occupational psychologists specialise in creating assessments that are valid, reliable, and equitable. They can advise on designing AI algorithms that assess candidates fairly by ensuring that the metrics used do not disproportionately disadvantage any group.
- Training and education: Occupational psychologists can train HR and tech teams on the nature of bias, helping them understand how unconscious bias can permeate even automated systems. Through workshops and training sessions, they can educate teams on the psychological and social aspects of AI bias, empowering those involved in AI deployment to recognise and address potential biases proactively.
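As a concrete starting point for the auditing bullet above, the sketch below computes per-group selection rates from one round of screening output and applies the adverse-impact ratio, where a value below 0.8 (the widely used "four-fifths" heuristic) flags the process for closer review. The group names and outcomes here are hypothetical.

```python
# A minimal auditing check: per-group selection rates and the
# adverse-impact ratio. The shortlisting data below is hypothetical.
from collections import Counter

# (group, shortlisted?) pairs — a stand-in for one cycle of AI screening output.
outcomes = [
    ("group_x", True), ("group_x", True), ("group_x", False), ("group_x", True),
    ("group_y", False), ("group_y", True), ("group_y", False), ("group_y", False),
]

applicants = Counter(group for group, _ in outcomes)
shortlisted = Counter(group for group, ok in outcomes if ok)

rates = {g: shortlisted[g] / applicants[g] for g in applicants}
print("selection rates:", rates)

# Adverse-impact ratio: lowest group rate divided by highest group rate.
# A value below 0.8 is a common flag that the process needs closer review.
ratio = min(rates.values()) / max(rates.values())
print(f"adverse-impact ratio: {ratio:.2f}"
      + (" — below 0.8, review recommended" if ratio < 0.8 else ""))
```

A check like this is only a first-pass signal; as the bullets above suggest, an occupational psychologist would follow it up with a fuller validity and fairness analysis.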
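On the transparency bullet, one way to keep recommendations interpretable is a "glass-box" scorer whose output can be decomposed feature by feature, so a recruiter can see exactly what drove a ranking. The features and weights below are hypothetical, chosen purely for illustration.

```python
# A minimal "glass-box" scorer: a simple weighted model whose
# recommendation can be broken down feature by feature.
# Features and weights are hypothetical assumptions for this example.

WEIGHTS = {
    "skills_test_score": 0.5,
    "structured_interview": 0.4,
    "years_experience": 0.1,
}

def score_with_explanation(candidate):
    """Return the overall score plus each feature's contribution."""
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

candidate = {"skills_test_score": 0.8, "structured_interview": 0.7,
             "years_experience": 0.3}
total, why = score_with_explanation(candidate)

print(f"recommendation score: {total:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```

Because every contribution is visible, a recruiter can question or override a recommendation rather than accepting an opaque score, which is exactly the human-judgment check the bullet above calls for.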
As organisations continue to integrate AI into daily processes, understanding and addressing AI bias is no longer optional. Building fair, ethical AI systems is essential to ensuring workplaces remain inclusive and progressive.