Listen up all my HR trailblazing tech guru friends! I'm freaking out here, and could really use your expert advice. In three months I'll be speaking at the HR Data Analytics and AI Summit, and I'm worried that I haven't dug deep enough.
The topic I'm tackling: Ethical Considerations in AI and HR Decision-making. The untamed frontier of AI and data analytics. Yes, next-gen tech is transforming how we make talent decisions from hire to retire. But with great innovation comes great responsibility.
I'm asking, and seeking to answer, questions like:
- How do we use AI ethically?
- How do we bake transparency into complex models?
- Can we be unbiased in designing unbiased systems?
If our mission is to harness cutting-edge tech to empower talent and uphold ethics, not exploit them, we must unravel the profound impact that machine learning, large language models, and autoregressive modeling will have on the landscape of HR decision-making, before it's too late.
- Transparency: the significance of transparent AI algorithms in gaining trust among employees and stakeholders, the need for ongoing monitoring and auditing of AI systems to identify and rectify ethical concerns, regular assessments to ensure the ethical integrity of HR analytics, and the implications of opaque algorithms for employee trust and organizational transparency.
- Roles: establishing clear roles and responsibilities, possibly through the creation of an AI ethics committee.
- Understanding Bias: understanding and defining bias in the context of AI and its potential impact on HR decisions, recognizing the imperative of collecting diverse, representative data sets to avoid reinforcing existing biases.
- Mitigating Bias: identifying and rectifying bias through algorithmic audits, continuous monitoring, and ongoing training to recognize and address bias (see the sketch just after this list).
- Privacy: handling sensitive employee data, going beyond regulations to do what's right rather than just what's required, data minimization and the need to collect only necessary information, and regulatory compliance (e.g., GDPR, CCPA) and its implications for HR data analytics.
- Human Oversight: how AI should augment, not replace, the expertise of HR professionals, how AI tools empower HR professionals by automating routine tasks, allowing them to focus on strategic, human-centric aspects.
- Informed Consent: the need for clear and understandable explanations of how AI algorithms reach specific decisions, obtaining informed consent from employees regarding the use of their data in AI-driven HR processes, sharing the purpose and implications of AI tools with employees.
- Equity: fair and equitable outcomes in HR decisions facilitated by AI, ensuring that AI-driven processes do not disproportionately impact certain groups of employees, and strategies for actively promoting diversity in the data used for HR decision-making.
- Unintended Consequences: avoiding the unintentional exclusion or disadvantage of certain individuals or groups, other potential unintended consequences of AI in HR, navigating ethical considerations that may arise from the intersection of AI and cultural diversity.
- Vendor Selection: the responsibility of selecting AI vendors and tools that align with ethical standards, and what criteria we should use to evaluate vendors during the selection process.
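To make the "algorithmic audit" point above concrete, here's a minimal sketch of the kind of check I have in mind: the EEOC four-fifths rule applied to selection rates by group. The groups and outcomes below are hypothetical, and a real audit would go much further (statistical significance, intersectional categories, scoring tools, etc.); this just shows the basic arithmetic.

```python
# Minimal adverse-impact check (four-fifths rule) on hypothetical hiring outcomes.
# Selection rate per group = selected / total; flag any group whose rate falls
# below 80% of the highest group's rate.

from collections import Counter

# Hypothetical outcomes from an AI-assisted screening step: (group, selected?)
outcomes = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in outcomes)
selected = Counter(group for group, sel in outcomes if sel)

rates = {group: selected[group] / totals[group] for group in totals}
best_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    impact_ratio = rate / best_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

Passing a check like this is a floor, not a ceiling; the ongoing monitoring, diverse data, and human oversight in the list above are what keep it honest.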
Lead Talent Sourcer at UVA Health / Who's-Who Professional / Recruitment Leader in Healthcare, Finance/Tech in Non-profit and Start-ups, Writer/Contributor/Trainer/Speaker DE&I LGBTQ+ Inclusion Champion
There was an intriguing episode, Computers v. Crime, about AI profiling people as criminals. But what was amazing was that the developers got their foundation for profiling from RECRUITERS and tried to adapt it to find criminals. This was all based on Amazon's attempt to build an AI to eliminate bias from hiring, and they couldn't. Why? Because even when they took all names, schools, clubs, fraternities, etc. out of their employees' profiles to help define what a successful hire looked like, the AI would still hire to the old profile, because the company was inherently biased in its hiring practices. Here is the link; everyone in recruiting and sourcing should watch this. https://www.pbs.org/video/computers-v-crime-um7cco/
Founder, Callify.ai
Shally Steckerl AI tools that make hiring decisions are defined as AEDTs (Automated Employment Decision Tools). NY has mandated that all AEDTs covering NY-based candidates need to be audited and certified under the NYC-144 Bias Law. From the employer's perspective, for bias mitigation and compliance in talent acquisition, they need to ensure compliance with the EEOC Uniform Guidelines on Employee Selection Procedures. The EEOC focuses on preventing discrimination in employment practices based on race, color, religion, sex, national origin, age, disability, genetic information, etc. EEOC guidelines go beyond hiring into performance monitoring and decisions on pay, promotions, etc. Callify.ai (our product) has been audited and certified for NYC-144 and has successfully satisfied assessments by the legal teams of multiple large enterprises for deployment of our tool. Happy to share our detailed findings and requirements for satisfying AI compliance in hiring, both from letter-of-the-law and spirit-of-the-law perspectives.
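For readers wondering what a bias-audit metric for a scoring AEDT can look like in practice, here's a rough illustration, not the certified audit methodology described above: one common reading of the published NYC approach computes, for each demographic category, the share of candidates scoring above the overall median ("scoring rate"), then divides by the highest category's rate to get an impact ratio. The column names and data are made up.

```python
# Illustrative impact-ratio table for a hypothetical scoring AEDT.
# Assumed approach: "scoring rate" = share of a category scoring above the
# overall median score; impact ratio = category rate / highest category rate.
import pandas as pd

# Hypothetical AEDT output: candidate demographic category and model score.
df = pd.DataFrame({
    "category": ["A", "A", "A", "B", "B", "B", "C", "C"],
    "score":    [0.91, 0.55, 0.72, 0.40, 0.63, 0.38, 0.80, 0.85],
})

median_score = df["score"].median()
df["above_median"] = df["score"] > median_score

scoring_rates = df.groupby("category")["above_median"].mean()
impact_ratios = scoring_rates / scoring_rates.max()

report = pd.DataFrame({"scoring_rate": scoring_rates, "impact_ratio": impact_ratios})
print(report.round(2))
```

A real audit also covers intersectional categories, sample sizes, and documentation requirements, which is exactly why independent certification matters.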
Data and Workforce Planning Analyst with a focus on recruitment, employment, and diversity within organizations that are highly regulated and policy driven (views on LinkedIn are personal and my own)
This is what our audit includes related to these items:
1. Wage Solicitation and Salary History
2. Wage and Reward Transparency
3. Automated Employment Decision Technology
4. Pay Equity Analysis
5. Human Capital Disclosures to external regulators and the labor market
6. Corporate social responsibility disclosures
Each includes aspects of what you are discussing (transparency, bias mitigation, responsible data use, inclusive impacts). Ideas of what you are missing:
1. Public transparency related to these items
2. The importance of data veracity
3. The importance of tech stack assurance on bias, etc.
4. The ever-changing global nature and mixed lexicon of these items
Check this out https://www.cbsnews.com/video/why-do-ai-image-generators-show-bias/
For bias from a regulatory perspective, different states have different laws. You may be able to use an AI assessment tool in most states, but not in NY. True story. But from a process improvement side, it took our developers less than two days, using AI, to reduce search time for candidates in our system and create a candidate overview, saving our recruiters hours of search time and support for candidate overviews.