Ask Leslie: What Does The EEOC Say About The Use Of AI By Employers?
FarmersKey article -- July 2023 -- By Leslie Zieren, The McCalmon Group, Inc.


We are thinking of using AI for hiring. What should we be wary of?


Increasing numbers of employers are using software, algorithmic decision-making tools, and artificial intelligence (AI) to assist with hiring and promotions. The U.S. Equal Employment Opportunity Commission (EEOC) recently addressed how these tools must be monitored to help prevent adverse impacts and violations of Title VII of the Civil Rights Act of 1964 in a guidance document titled "Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964."

An "algorithm" is defined as "a set of instructions that can be followed by a computer to accomplish some end."

AI was defined by the U.S. Congress as a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments." In employment, AI typically means that the computer analyzes data "to determine which criteria to use when making decisions."

Examples of algorithmic software used by employers include "resume scanners that prioritize applications using certain keywords; employee monitoring software that rates employees on the basis of their keystrokes or other factors; 'virtual assistants' or 'chatbots' that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and testing software that provides 'job fit' scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived 'cultural fit' based on their performance on a game or on a more traditional test."

Title VII prohibits "disparate impact" or "adverse impact," which occurs when neutral selection procedures "have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin" and are not "job related for the position in question and consistent with business necessity."

An algorithmic decision-making tool can be a "selection procedure" under Title VII's "Uniform Guidelines on Employee Selection Procedures." If the tool has an adverse impact on a protected class, then its use would violate Title VII.

In many cases, employers are responsible for their use of algorithmic decision-making tools, even if the tool is designed and administered by a third party.

Before using a tool, employers should ask the vendor if "steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII."

The four-fifths rule states that "one rate is substantially different than another if their ratio is less than four-fifths." However, this is only a rule of thumb: a selection-rate ratio at or above four-fifths (80 percent) does not guarantee that adverse impact is not occurring. Courts generally rely on tests of statistical significance, and employers should look for vendors who go beyond the four-fifths rule in analyzing their tools for adverse impact.
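To make the arithmetic concrete, here is a minimal sketch of both checks the paragraph describes: the four-fifths rule of thumb and a two-proportion z-test of statistical significance. The group names and applicant counts are hypothetical, invented purely for illustration; this is not an EEOC-supplied tool or formula set.

```python
import math

def four_fifths_check(selected, applicants):
    """Compare each group's selection rate to the highest group's rate.

    Returns {group: (rate, ratio_to_highest, flagged)}, where flagged
    is True when the ratio falls below 4/5 (0.8) -- the rule of thumb.
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: (r, r / top, r / top < 0.8) for g, r in rates.items()}

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a difference in selection rates,
    using a pooled two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Hypothetical data: 48 of 80 group A applicants selected, 12 of 40 group B.
selected = {"group_a": 48, "group_b": 12}
applicants = {"group_a": 80, "group_b": 40}

result = four_fifths_check(selected, applicants)
p_value = two_proportion_p_value(48, 80, 12, 40)
```

In this made-up example, group B's selection rate (30 percent) is half of group A's (60 percent), so the four-fifths check flags it, and the z-test's small p-value points the same way. The two methods can disagree on other data, which is exactly why the EEOC notes the four-fifths rule is only a rule of thumb.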

The EEOC encourages employers to conduct self-analyses of their tools on an ongoing basis. If an employer discovers that a tool would have an adverse impact, "it can take steps to reduce the impact or select a different tool."
