HR and Risk with AI: Are we dehumanising human processes?

“With GPT-Vetting, you can interview 100x more candidates in less time & candidates get a more enjoyable, gamified, and less biased interview experience,” Ali Ansari, the founder of micro1, announced on social media while showing off his tool, designed to help HR teams through the recruitment process for engineers.

The tweet was accompanied by a video demonstration of the AI “interviewer,” which seemed to work as promised. What Ansari probably didn’t anticipate was the response.

“I’m really curious to hear how job candidates feel about this experience,” one reply noted. “I’d feel degraded,” said another. “I’d hang up and feel offended if I got on an interview and saw this,” was a third.

“If my first interview is with an AI, I’m out.”

“I would nope hard and fast out of that interview.”

“If I was a candidate and the company put a non-human in front of me for the first round, I would leave the interview. If you can’t find the time to meet with me, how will you ever work with me once I come on board?”

Others pointed out that an interview is a two-way conversation that lets candidates screen the employers they want to work for as well, and this technology denies them that opportunity. There were some positive responses too, but the negative ones were loud, and often aggressively so.

As a result, if this GPT-Vetting solution were adopted for first-round screening, it would almost certainly result in some ideal candidates excusing themselves from the interview process.

While the technology is objectively impressive and the intent behind it is meaningful, this is a good example of how HR teams will need to carefully balance the opportunities presented by AI against its risks.

AI In HR: It Will Become Ubiquitous

AI’s application in HR is widespread, from talent management to recruitment processes, and it’s only going to become more so. Almost all HR teams will end up using AI on some level, with statistics showing that nearly 82% of HR teams plan to adopt more AI tools between 2021 and 2025.

Furthermore, the use cases for AI in HR processes are broad. AI can be used to personalise candidate experiences, match candidates to roles, and even predict career paths. HR teams also have the full support of the executive layer in exploring AI, with Gartner research showing that CEOs believe AI can drive significant value in HR, and 66% endorse its potential.

Identifying And Managing Risk In AI

Despite the many clear benefits that AI offers HR teams, they do need to be careful about the risks the technology introduces, because as the response to GPT-Vetting showed, it has the potential to become a liability. Other areas where HR needs to carefully consider how AI is applied to its processes include:

Data Breaches: HR teams typically hold sensitive data on candidates and employees, making them a common target for cyber attacks and data breaches. As noted on HR Grapevine last year, “cyber security must now be top of HR’s agenda.” For AI to be effective, it needs access to that same protected data, making security an equally high priority for any AI applications the HR team might explore.

Bias in Recruitment Algorithms: Studies continue to show that hiring algorithms can introduce bias, affecting women, ethnic minorities, and other protected groups. It may well be the case that no amount of data model training can completely remove the risk of bias, and even if it were possible to, the perception of bias in the results would remain. However HR decides to leverage AI, it will need transparent human screening processes to mitigate the risk of bias.

Overreliance on Technology: Finally, there’s the simple risk that an overreliance on technology can creep into the HR team’s processes. “HR” stands for “human resources,” so at a very basic level anything that risks removing the human element is at odds with the intent of the department. Even simple things like automating emails and communication can undermine the critical role HR teams play in building relationships and understanding the human dynamics within a workplace.

Overcoming The Risks

In addition to implementing robust data security measures and working closely with the IT team to ensure that the risk of cyber breaches is minimised, the best approach HR can take to mitigating the risks of AI is simply to use it as a tool rather than a replacement.

To return to the example above, the GPT-Vetting tool could still be of value, but as one part of an interview process rather than the entire first interview. It could still screen candidates on technical questions, which tend to be time-consuming and inefficient for the hiring manager, but the candidate would also have the opportunity to talk to a human before or after that test portion of the interview.

It might not be quite as efficient a time saver as the fully automated approach, but it would still allow more candidates to be met in a shorter time than human-only interviews allow.

As in most other areas, successful application of AI within HR will come down to how effectively it’s used as a tool, rather than being viewed as something that can entirely replace processes and human capabilities. In that context, some of the innovation and ideas around how HR can shape AI to deliver better outcomes for both the enterprise and its employees are truly exciting.
