Artificial Intelligence, Social Values & Human Resources: Four insights about where we are and where we might go
Steve Hunt
Integrating business strategy, workforce psychology, and HR technology. Consultant, advisor, speaker and author of Talent Tectonics, Commonsense Talent Management, and Hiring Success.
Should artificial intelligence and machine learning (AI/ML) algorithms influence how people are hired, paid, trained, evaluated, and otherwise managed at work?[i] I recently discussed this question with employment lawyers from six countries and research scientists supporting AI/ML technology solutions that analyze data from more than 100 million employees and candidates annually.[ii] This article summarizes four insights from these conversations.
Insight 1. Asking whether AI/ML is ethical is like asking if math is ethical.
The book Artificial Intelligence published in 1975 noted that “if you asked physicists to offer a definition of their field, you would find substantial agreement. It is doubtful you would find such agreement if you asked scientists studying artificial intelligence.” Fifty years later, this statement is still true. The lack of a clear definition of AI/ML is one of its problems. People are wary of companies using methods they do not understand to make decisions that affect their lives, such as whether they get a job. It is hard to understand something that does not have a clear definition. To make matters worse, many companies market AI/ML solutions as being endowed with mysterious, almost magical properties. Because AI has been presented as a kind of futuristic wizardry, members of the public are understandably anxious about its use. This is leading to regulations about the use of AI that are well-intended but unclear. These regulations are difficult to follow due to their ambiguity, and they could prevent society from benefiting from what is, when applied in the right way, a highly effective and valuable mathematical tool for companies, employees, and candidates.
The book Decoding Talent discusses the use of AI/ML in human resources and defines it as “various types of advanced statistical analysis software that is especially good at processing complex and unstructured information”. AI/ML is neither artificial nor intelligent. It is just a complex form of applied mathematics that most people do not fully understand. There are countless examples of people trusting their wellbeing to technology that uses mathematical techniques they do not understand: smartphones, medical devices, online shopping, airplanes, elevators, the list is endless. People may not trust AI/ML because it sounds scary, but they are quite willing to trust complex math. One wonders whether all the concern about AI/ML would have arisen if it had been called something more boring but descriptive like “iterative pattern recognition algorithms”. Society is unlikely to reach a common agreement on the definition of AI/ML any time soon. But what we can do is stop talking about AI/ML as though it is a mysterious method from the realms of science fiction. It is just math.
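To make the “it is just math” point concrete, here is a minimal sketch of what an “iterative pattern recognition algorithm” actually is under the hood: a logistic-regression model fit by gradient descent. The data, feature names, and numbers below are entirely hypothetical and exist only to illustrate that the “learning” is nothing more than repeated arithmetic that nudges a few numbers until predictions match outcomes.

```python
# Illustration only: "machine learning" as iterative arithmetic.
# A logistic-regression model fit by gradient descent on a tiny,
# made-up dataset (hypothetical features and labels).
import math

# Toy examples: (years_experience, passed_skills_test) -> hired (1) or not (0)
data = [((1.0, 0.0), 0), ((2.0, 1.0), 0), ((4.0, 1.0), 1),
        ((6.0, 0.0), 0), ((7.0, 1.0), 1), ((9.0, 1.0), 1)]

w = [0.0, 0.0]   # one weight per feature
b = 0.0          # bias term
rate = 0.5       # learning rate (step size for each nudge)

def predict(x):
    """Probability of the positive class: a sigmoid of a dot product."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# The "learning": loop over the data, nudging the numbers to shrink error.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w = [wi - rate * err * xi for wi, xi in zip(w, x)]
        b -= rate * err

# After fitting, the model's probabilities line up with the toy labels.
for x, y in data:
    print(x, y, round(predict(x), 2))
```

Nothing in this loop is mysterious: it is a few dozen multiplications and additions repeated until a pattern emerges. Commercial AI/ML systems use larger models and more data, but the underlying mechanics are the same kind of math.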
Insight 2. AI/ML is not perfect, but it is often far better than the alternative.
Criticisms of AI/ML highlight the imperfections of using mathematical algorithms to predict, measure, or manage human behavior. Examples include algorithms that produce hiring decisions biased against people from certain demographic groups, or that monitor employees using data in a manner felt to be inappropriate or a violation of privacy. What these criticisms often fail to recognize is that the alternative solutions used to address these challenges may be far worse. For example, it is true that AI/ML systems can display bias in hiring if they are not appropriately designed, but humans also show considerable bias when making the same decisions. AI/ML algorithms can be proactively analyzed and designed to ensure they do not promote biased hiring. This cannot be done for hiring decisions made by humans. The question we should ask is not “are AI/ML applications effective, fair, and unbiased?” The question we should ask is “are AI/ML applications more effective, fair, and unbiased than the alternative methods we might realistically use?” The answer to this question is often “yes,” provided AI/ML algorithms are appropriately designed, validated, and monitored.
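As an illustration of what “proactively analyzed” can mean in practice, here is a minimal sketch of an adverse-impact check based on the four-fifths rule from the US EEOC Uniform Guidelines on Employee Selection Procedures. The group names and counts below are entirely made up; the point is that an algorithmic selection step produces auditable numbers that can be checked routinely, which is harder to do for individual human judgments.

```python
# Illustration only: auditing a selection step for adverse impact
# using the four-fifths (80%) rule. All counts are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants who passed this selection step."""
    return selected / applicants

def adverse_impact_ratio(group_rate, reference_rate):
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / reference_rate

# Hypothetical outcomes of an algorithmic screening step.
groups = {
    "group_a": {"applicants": 200, "selected": 60},
    "group_b": {"applicants": 150, "selected": 30},
}

rates = {g: selection_rate(v["selected"], v["applicants"])
         for g, v in groups.items()}
reference = max(rates.values())

for g, r in rates.items():
    ratio = adverse_impact_ratio(r, reference)
    # Below 0.8, the four-fifths rule treats the gap as evidence of
    # adverse impact warranting further review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {r:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

A check like this can be run every time the model or the applicant pool changes, which is the practical advantage the article describes: the algorithm's decisions leave a trail that can be measured and corrected.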
Insight 3. AI/ML solutions can cause harm to human happiness and wellbeing.
The book Decoding Talent observes that “many AI vendors talk about ‘trusted AI’ and ‘bias-free AI,’ but…any tool that is in any way opaque about how it operates – which a lot of AI technology is – can never be fully trusted or left to its own devices. Nor can we ever assume it is bias-free, even if an analysis at one point showed it was”. It is possible to create AI/ML algorithms whose use violates legal requirements governing hiring decisions and the treatment of employees. Because AI/ML algorithms are so complex, organizations may not even realize they are acting in an illegal manner. The use of AI/ML in HR also raises concerns about people’s perceptions of procedural justice and fair treatment. People want to understand how decisions are made that affect their employment, pay, and career development. Telling someone that “a machine decided you were not a good fit for the job” may not be perceived as fair by some people. On the other hand, research suggests that in some employment contexts people are more trusting of AI/ML hiring algorithms than of human judgment.
The challenge facing companies is how to benefit from AI/ML while managing risks related to bias and fairness. There are several ways companies have addressed this challenge. First, create and publish a set of ethical guidelines governing the use of AI/ML techniques within the company. Examples include AI/ML guidelines published by UNESCO, SIOP, Modern Hire, and SAP. Second, establish processes to review and analyze applications of AI/ML to ensure they do not violate ethical or legal guidelines. This is one of the more challenging steps because it requires having the technical knowledge to determine whether AI/ML algorithms are predictively valid and unbiased. Third, be transparent with candidates and employees regarding what data is used by AI/ML applications, how it is used, and what steps are taken to ensure it is being used appropriately. Some companies also enable candidates and employees to opt out of having their data included in AI/ML analysis if they feel it is inappropriate. Many vendors creating AI/ML solutions for HR follow these principles, but not all do, so be cautious.
Insight 4. The most socially harmful uses of AI/ML are not in HR.
Taking action to ensure ethical use of AI/ML in HR is worthy of ongoing effort and resources. That said, applications of AI/ML in other areas of society arguably have a much greater negative impact on people’s wellbeing but receive far less regulatory attention compared to applications of AI/ML in HR. Examples include using AI/ML to guide credit ratings, monitor security, set insurance policies, and generate advertising revenue. A particularly striking illustration is using AI/ML in social media applications to capture user attention. AI/ML applications in social media have been tied to increasing rates of stress, loneliness, and depression as well as social divisiveness and civil unrest. It appears that the mathematical techniques used to improve HR decisions through “artificial intelligence” can also be used to generate social media ad revenue through creating “artificial” fear, anxiety, and anger.
One of the major differences between the use of AI/ML in HR versus areas such as social media is the presence of well-established employment laws and regulatory bodies that date back to the early 20th century and beyond. These create societal expectations and legal pressure to ensure applications of AI/ML in HR do not harm the wellbeing of people. Similar laws, regulatory bodies, and social expectations have not been established regarding applications of AI/ML to many other areas of society. This is particularly true for social media, since it did not exist in any significant form prior to the 21st century. We should not decrease our focus on ensuring AI/ML is applied ethically to HR. But there are other applications of AI/ML where we should be focusing far more attention than we currently are.
Guiding our future by learning from our past
The advent of AI/ML enabled technology is transforming many aspects of our lives and societies. These solutions are improving our ability to accomplish things we value. However, they also pose significant, frequently unintended, and often highly complicated risks. It is healthy to respond to the growing use of AI/ML with some level of concern. The problem is that few people understand how AI/ML works at a detailed enough technical level to critically evaluate whether AI/ML solutions are behaving in an ethically appropriate manner. We are faced with the challenge of ensuring this sophisticated technology is used appropriately without overly restricting its use. This is not unlike the situation societies faced at the turn of the 20th century, when scientific and technological advances were radically transforming the creation and processing of food products and pharmaceutical drugs. We now live in a world where billions of people readily ingest medicine and food created using biochemical and molecular biology concepts they do not understand. People do this because they entrust their safety to a highly developed system for ensuring food and drug safety that was initially established over 100 years ago.
Where we are in 2022 in terms of using AI/ML algorithms can be compared to where society was in 1900 with regard to using scientifically designed medicine and food. The response we need might be similar to the one taken 120 years ago, when people were confronted with a valuable, powerful, yet potentially dangerous new form of technology. It is likely to be a long, complicated journey. Where it takes us will depend far more on utilizing the organic intelligence of humans than the artificial intelligence of machines.
[i] Artificial intelligence and machine learning refer to complex, iterative mathematical algorithms used to recognize patterns and predict future outcomes. There are semantic differences between the two terms, but when discussed by the general public they tend to be combined into the same category.
[ii] New Developments on AI Inside the Workplace with Prof. Dr. Björn Gaul, Orly Gerbi, Yardenne Assa, Cliff Jurkiewicz, Patricia Medeiros Barboza, Anthony Oncidi, Vikram Shroff, and Inge de Laat. The Ethics of Using Artificial Intelligence in Hiring with Andrea Shiah, Eric Sydell, and Caitlynn Sendra, Ph.D. The concepts in this article are insights I took away from these sessions and may not represent the views of the individuals listed here.