Are you biased?

We must confront our prejudices if AI is to help HR

 ‘Power to the new people analytics,’ McKinsey told us in 2015. The new HR tech revolution was here; developments in people analytics technology were providing solutions to problems that had been plaguing HR professionals for decades.

 Roll forward two years and machine learning is commonplace. Our historic data is being trawled and our behavioural patterns tracked in order to guide future decision making in seemingly every area of our existence from our time at work to our health and finances. This is such a fascinating time for those of us watching the capabilities of technology grow by the day, and I know that many HR professionals have been considering the transformative potential that artificial intelligence could have on our workplaces for some time.

 AI has landed in the HR space and the possibilities are almost unbelievable; hundreds of millions of dollars are already being saved across multinational organisations as they solve recruitment and retention problems based on the insights being offered up by AI-based predictive behavioural analytics.

What people analytics had done for HR, AI is now doing for people analytics. Engagement patterns are being uncovered, facial expressions tracked during video job interviews, and flight risks identified before their dissatisfaction manifests as a resignation letter, all thanks to AI. In 2016, 32% of organisations told Deloitte that they were already ‘harnessing the power of people-related data to solve business problems’, many of them using AI to do so.

 The case studies celebrating the growing role of machine learning and artificial intelligence in HR are being published with ever growing frequency, but we are quickly becoming aware that without careful mitigation, the AI we rely on to solve our problems will become an issue in itself.

Probably the most public demonstration of the risks involved in teaching machines to learn from human behaviour is Microsoft’s AI chatbot Tay, which, after only a few hours on Twitter reading conversations and interacting with users, began posting racist and sexist content and in one tweet even denied the Holocaust. Tay hadn’t been taught to behave this way by its creators; it had learned to write this way from the racist Twitter users who interacted with it. Tay was quickly taken offline for ‘upgrades’ and the most problematic tweets were deleted, but a parable had been created: AI reflects us.

So let’s talk about diversity in the context of AI and people analytics. As HR professionals, we know we must focus on diversity to ensure our workforces are representative of our communities and to reach every possible pocket of talent; as tech enthusiasts, we know our space has a well-documented problem with diversity, or rather a lack of it. However uncomfortable it is to admit, we all have biases and subjectivities. It’s in our nature to categorise things, and we do the same to people.

We are building machines and telling them to learn from us. Hopefully nobody reading this holds prejudices as bigoted as those articulated by Tay, but we all absolutely have biases: some we are aware of and hopefully try to mitigate; some that barely seem noteworthy, such as choosing the candidate with the same hobbies as their prospective colleagues (“they’ll be a great team fit!”); and many more that we have never noticed at all.

Already we are seeing issues of bias surface in HR technology. Last summer LinkedIn had to amend its search algorithm after users looking up typically female names were asked whether they had meant to search for male ones: Stephen instead of Stephanie, Eric instead of Erica, and dozens of other suggestions were uncovered. LinkedIn responded that these results occurred because the algorithm was learning from user behavioural patterns and that gender was not a factor. However, those searching for typically male names were not offered their female equivalents. Gender may not have been a factor in the design (LinkedIn doesn’t even record its users’ gender), but user behaviour produced a gendered result.
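To see how behaviour alone can produce that outcome, consider a toy ‘did you mean’ suggester that looks only at historical query frequency. Everything below is invented for illustration, and bears no relation to LinkedIn’s actual system; it simply shows that if male names happen to be queried more often, the correction flows in one direction only, even though gender is never an input.

```python
from collections import Counter

# Invented search log: male names simply appear more often in the history.
search_log = ["stephen"] * 900 + ["stephanie"] * 150 + ["eric"] * 800 + ["erica"] * 120
frequency = Counter(search_log)

def suggest(query, similar_queries):
    """Offer a similarly spelled query only if it is historically more popular."""
    more_popular = [q for q in similar_queries if frequency[q] > frequency[query]]
    return max(more_popular, key=frequency.get) if more_popular else None

print(suggest("stephanie", ["stephen"]))   # -> 'stephen' (correction offered)
print(suggest("stephen", ["stephanie"]))   # -> None (no correction the other way)
```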

 AI has huge potential to help us overcome problems of diversity in hiring; by teaching a machine to screen candidates without knowing their name, gender or alma mater we can begin to unpick those prejudices we worry about replicating.
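As a rough sketch of what ‘blind’ screening might look like in practice (the field names, weights and scoring rule below are my own assumptions, not any vendor’s method), the idea is simply to strip identity fields before any scoring logic ever sees them:

```python
# Fields the screening step should never see (illustrative list).
PROTECTED_FIELDS = {"name", "gender", "alma_mater"}

def blind(candidate):
    """Return a copy of the candidate record with identity fields removed."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

def screen(candidate):
    """Toy score based only on job-relevant signals."""
    c = blind(candidate)
    return 2.0 * c.get("years_experience", 0) + 1.0 * len(c.get("skills", []))

applicant = {
    "name": "Stephanie Example",
    "gender": "female",
    "alma_mater": "Example University",
    "years_experience": 6,
    "skills": ["people analytics", "SQL"],
}
print(screen(applicant))  # scored without ever reading name, gender or alma mater
```

Removing these fields is only a start, though: proxies such as postcode or hobbies can still leak the same information, which is why what we teach the machine to imitate matters just as much.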

But if we teach machines to follow our recruitment patterns without critically reflecting on problematic hiring practices ourselves (screening CVs on the basis that we want more people who are like our current employees, or profiling against a set of pre-determined attributes that may have unconscious biases sewn into them), we will only propagate our prejudices, and in doing so fail to realise the potential AI has to help us dismantle practices that belong in the past.
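A tiny, invented example of that propagation: suppose a screener is trained to imitate past hiring decisions, and those decisions happened to favour candidates who shared a hobby with the existing team. The data below is made up, but it shows how imitating the log teaches the machine the team’s preference rather than anything about job performance.

```python
# Invented historical log: (shared_hobby_with_team, years_experience, hired)
past_decisions = [
    (True, 2, True), (True, 1, True), (True, 3, True),
    (False, 6, False), (False, 5, False), (False, 4, True),
]

def hire_rate(records):
    hires = [hired for _, _, hired in records]
    return sum(hires) / len(hires)

with_hobby = [r for r in past_decisions if r[0]]
without_hobby = [r for r in past_decisions if not r[0]]

print(f"hire rate, shared hobby:    {hire_rate(with_hobby):.0%}")    # 100%
print(f"hire rate, no shared hobby: {hire_rate(without_hobby):.0%}") # 33%
# Any model fitted to this log will rank 'shared hobby' above experience,
# faithfully reproducing the old pattern rather than correcting it.
```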

It is with subtle biases that AI has the greatest potential to help us. After all, we’re professionals: we know what overtly biased behaviour looks like and how to tackle it. Where we often struggle is that no one person or team within an organisation has the oversight to monitor behaviour at a granular level; we miss opportunities to intervene because it is simply not possible for us to know everything. But if we use machine learning to delve deep into our data, watch our behaviour and identify our bias blind spots, we can know and learn so much more. For me, this is where the future of people analytics becomes massively exciting.
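What might that kind of blind-spot detection look like? One simple, well-known check (offered here purely as an illustration; the numbers and group labels are invented, and the four-fifths threshold is a common rule of thumb rather than a recommendation from this article) is to compare selection rates across groups in our own historical data and flag large gaps for human review:

```python
# Invented hiring history: group -> (applicants, hires)
hiring_log = {
    "group_a": (200, 60),
    "group_b": (180, 30),
}

selection_rates = {g: hires / apps for g, (apps, hires) in hiring_log.items()}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    ratio = rate / highest_rate
    status = "flag for review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} -> {status}")
```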

 If technology can teach us about our own biases, we become able to teach this technology to overcome them. In doing so we make our organisations forces for good, and do the best for and by the people who work within them.

References:

McKinsey & Company (2015), ‘Power to the new people analytics’: https://www.mckinsey.com/business-functions/organization/our-insights/power-to-the-new-people-analytics

Deloitte (2016), Global Human Capital Trends, people analytics: https://dupress.deloitte.com/dup-us-en/focus/human-capital-trends/2016/people-analytics-in-hr-analytics-teams.html

The Seattle Times, ‘How LinkedIn’s search engine may reflect a bias’: https://www.seattletimes.com/business/microsoft/how-linkedins-search-engine-may-reflect-a-bias/
