9 Steps to Implementing AI Without Being Racist
Phil Mandelbaum
Fractional CMO | Agency President + CEO | Brand, Digital, Social, Content, PR and Political Strategist | Award-Winning Writer, Ghostwriter and Editor
Eight Experts Weigh In on How Artificial Intelligence and DEI Intersect — and How to Overcome Human and Algorithmic AI Bias
A New York City man signed up for LinkedIn and used artificial intelligence to create a fraudulent profile: an AI-generated white male in his 20s or 30s, a Stripe alum and the founder of a nonexistent startup. Within 24 hours, the fake founder received a message from a venture capitalist interested in investing. Notice the line, “A few ex-Stripe buddies of mine had great things to say about you.”
Unfortunately, the “you” in that statement isn’t real. Never was. And no other story I’ve heard better demonstrates the dangers of AI — to security, to privacy, and to diversity, equity and inclusion (DEI).
On the other hand, AI tech can enhance business productivity by 40% — and businesses that employ AI will double their cash flow by 2030, while brands that don’t will see a 20% reduction. Already, more than 75% of businesses are using or exploring AI; nearly three in four executives believe AI will be their greatest future business advantage; and the global AI market is pushing toward a trillion US dollars.
My advice:
Download our report, Everything You Need to Know About AI Right Now.
Then, before investing, read this.
Why Your Employees Aren’t All Embracing AI
Facial recognition technology uses artificial intelligence to identify us via physiological biometric identifiers. Studies have shown it’s racist, algorithmically and in implementation. ShotSpotter uses AI-powered audio identification and, according to its website, “accurately detects, locates and alerts police to gunfire” — but, unfortunately, it doesn’t; in one city alone, there were more than 40,000 errant AI-generated reports and unnecessary police encounters in predominantly Black and brown neighborhoods in only two years. And then there are the DEI issues also impacting other marginalized groups, like women, older people and people with disabilities, including the macro- and microaggressions committed every day in the workplace.
And what about the oft-forgotten Native community? According to Jeff Doctor, an impact strategist for Animikii, an Indigenous tech innovation and equity organization, “these ‘AI’ reshape our data back into the very same stereotypes we've been fighting so hard to counter.”
So, if your staff has been hesitant to embrace AI tech and adopt your new AI strategy, it might be because tech isn’t your only hurdle — or opportunity.
Indeed, if you’re even having these conversations, you’re a phase or two ahead of at least some of your biggest competitors. And if your staff is concerned about bias in AI in the workplace, you’ve probably already started investing in DEI. Which means you have the opportunity to improve not only your processes and output but also your employee engagement and experience — because you see the value in talking about AI (and DEI) outside the C-suite.
Of course, you have to do it right; you have to overcome both types of bias in AI: human and algorithmic.
How to Incorporate AI, Without Being Racist: A Conversation
“AI will pick up the biases of whoever creates and trains it, and it also typically will meet their needs as a consumer because they are thinking about using the product from their own perspective,” DEI thought leader?Samantha Karlin?told me.
That's why, she said, they either need significant inclusive-design training or a team of different types of people — including non-American, non-Western people, if it’s a global product — to figure out whether it meets the needs of different audiences and to ensure it doesn't unintentionally reproduce harmful norms.
For?Aaron Winter, Lancaster University sociology professor,?Reactionary Democracy?co-author and?Identities?co-editor, there are limitations to AI?and DEI. “We already know that AI and algorithms can carry with them and exacerbate biases and inequities, and have played a role in pushing and platforming racism, misogyny, homophobia, transphobia, ableism and far-right ideas online,” he told me. Meanwhile, “DEI as an approach often operates top-down using material provided by third-party private organizations, and does not always address deeply rooted structural or institutional inequities or offer a significant departure beyond technological innovation.”
Thus, he continued, “Any AI would inevitably draw from that knowledge base, reproducing these issues.” Likewise, using AI in DEI would “take the oversight, experience and representational politics out of it altogether… and put more money, power and responsibility in the hands of private tech.”
This is why “it’s important to enter into this discussion with a great deal of caution and critical input and perspectives, particularly from those most affected.”
I asked Misa Chien, a tech, CX and inclusion influencer and CEO/co-founder of Autopilot Reviews, a call center incentive program, if it’s even possible for brands to leverage or develop AI tech without the bias.
“You have to make sure that you use a large, diverse set of data, and continually monitor the AI so you can discover why it makes the choices it does and correct any biases,” she said — and “share your best practices with the vendors you use.”
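Chien’s advice to “continually monitor the AI” can start with even a very small audit. Here is a minimal, hypothetical sketch (the groups, decisions and data are illustrative, not from any real system): compare a model’s positive-outcome rate across demographic groups, a common first-pass fairness check known as demographic parity.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-outcome rate per demographic group.

    `decisions` is a list of (group, approved) pairs — e.g. the output
    of an AI screening tool joined with self-reported demographics.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (demographic group, was the applicant approved?)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
rates = selection_rates(audit)
print(rates, parity_gap(rates))
```

In practice, you would run a check like this on every model release and investigate any group whose rate diverges sharply — and, per Chien, share what you find with your vendors.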
For Vera CEO Liz O’Sullivan, a member of the US Department of Commerce’s National AI Advisory Committee and an expert on AI, “fair algorithms” and digital privacy, “battling bias in AI is a little bit like fighting it in the real world — there’s no one-size-fits-all solution, technical or otherwise; it’s highly dependent on the use case, the goal and the team, and it’s something that takes continual work over time.”
According to O’Sullivan:
A lot of modern AI makes a big mistake in devaluing the preliminary steps of data gathering and review. Scraping artists’ work to make an AI image generator, for instance, is clearly exploitative. You also don’t know what kinds of patterns the machine will learn unless you have people reviewing the training data for potential biases or unsafe content.
Odie Martez Gray, president of the Diversity Cyber Council, broke it down for me:
The function of AI is to logically draw relationships to data and formulate or compile an aggregated result. Knowing this, a simplified overview of the challenge concerning AI and racism is first determining ‘ethical’ data sources that consistently produce unbiased data — because we all know, ‘bad data in, bad data out.’ The second challenge is in implementing compensating controls that moderate data and autocorrect an AI’s logic to a predetermined mean.
But, he added, “I am doubtful we will be able to teach AI the emotional intelligence required to identify and remediate discrimination when the core of the technology's logic is based on the net of our behavior.”
Noah Giansiracusa, a data scientist and Bentley University mathematics professor, concurred, providing a real-life example:
Historical bias in the medical profession has led to certain populations getting less care. The AI just sees this data — not the background, or cause — and ‘learns’ that these populations don't fare as well. So, the AI might then recommend focusing efforts on patients from other populations. Tossing in even more data doesn't help.
Then, he shared a real-life experiment that did:
A few years ago, people found, unsurprisingly, that large language models soak up and reproduce all the horrible bias and attitudes they see in the text they're trained on — much of which is from the internet, and which tends to refer to doctors as ‘he’ and nurses as ‘she.’ So, they tried doubling the training set with a gender swap: anytime the word ‘he’ appeared, they created a new, identical sentence with ‘he’ swapped to ‘she,’ and similarly for all other gendered words. It forced parity, and the technique worked pretty well.
But, he added, “dealing with something like ableism seems like it would be a lot harder.”
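The gender-swap augmentation Giansiracusa describes is simple enough to sketch. This is a hypothetical, minimal version — a real implementation would need part-of-speech tagging to disambiguate words like ‘her’ (possessive vs. object) — but it shows the core idea: for every training sentence, emit a copy with gendered words swapped, doubling the corpus and forcing parity.

```python
import re

# Illustrative swap table; 'her' is ambiguous and mapped naively here.
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "man": "woman", "woman": "man",
}

def gender_swap(sentence: str) -> str:
    """Return a copy of the sentence with gendered words swapped."""
    def swap(match):
        word = match.group(0)
        swapped = SWAPS.get(word.lower(), word)
        # Preserve the capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped
    return re.sub(r"\b\w+\b", swap, sentence)

def augment(corpus: list[str]) -> list[str]:
    """Double the corpus: each sentence plus its gender-swapped twin."""
    return [s for sentence in corpus for s in (sentence, gender_swap(sentence))]

print(augment(["The doctor said he was late."]))
```

Every ‘doctor… he’ sentence now has a matching ‘doctor… she’ sentence, so the model can no longer learn the gendered association from frequency alone.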
Jacques Bastien, founder of boogie and a UI/UX professor at the University at Albany, is more hopeful, telling me he’s “optimistic… because, unlike human beings, [AI] can be improved.”
Karlin agrees, adding that “it’s just like being a DEI trainer — the number one thing is that you have to constantly be sensitive to all different groups of people and think about how they perceive things differently based upon their differing lived experiences.”
The best way to do that is to ensure that “whoever is creating, training or using the AI has actually been trained on DEI principles and/or comes from a directly impacted group.”
Plus, Giansiracusa added:
As more organizations think about developing their own AI systems, they should think very seriously about not just the data they feed it and the computational resources to train it, but also the importance of having a secondary training process of direct human feedback, which really seems to be the way to significantly improve a lot of the problematic aspects of AI we saw initially.
IMHO, though, nobody nailed it like Dr. Jabari “Naledge” Evans, an assistant professor of race and media at the University of South Carolina and a visiting scholar at Harvard’s Berkman Klein Center for Internet & Society.
“I believe that identity, culture and connected consciousness are the three most essential components of consumer experience in today’s society,” he said — and “DEI efforts will always fail if the focus is purely on presenting representation, versus disruption of matrices of dominance that typically loom over minorities and women.”
Digital tools like AI, meanwhile, “only help as far as those who create them allow them to travel,” he added. “Setting intentions to not be neutral… is imperative,” so “the only way to truly use AI in ways that prevent the ‘isms’ is to overrepresent those who are typically marginalized by data mining and algorithms.”
As you consider adopting AI at your organization, following the steps outlined below, keep in mind the concerns and suggestions offered by these AI and DEI experts.
9 Steps to Investing in AI, While Protecting Your Brand, Employees and Customers from Bias
While DEI in corporate speak typically refers to an organization’s HR policy or program for standardizing processes and procedures related to diversity, equity and inclusion, what DEI really means is an entire organization actively not only hiring but welcoming, valuing, respecting, supporting and promoting all workers — especially those from underrepresented populations. It also means making the investments necessary to identify exactly what all your employees truly need and want.
As Harvard Business School professor Robin J. Ely and Morehouse College president David A. Thomas explain:
Being genuinely valued and respected involves more than just feeling included. It involves having the power to help set the agenda, influence what — and how — work is done, have one’s needs and interests taken into account, and have one’s contributions recognized and rewarded with further opportunities to contribute and advance.
This works for the business, too; organizations that follow a DEI framework benefit from more than just better branding.
Indeed, companies devoted to DEI earn 140% more revenue, have 230% more cash per employee, and are 70% more likely to capture a new market and 35% more likely to outperform their competitors.
So, be sure to adhere to all the included DEI requirements when developing your AI strategy.
Step 1: Do your DEI research?
Step 2: Hire a DEI director
As Mita Mallick, a LinkedIn Top Voice, advises:
Instead of looking for direct senior DEI leadership experience, consider people with broader backgrounds but all the right skills: the ability to influence and be a change agent, to design strategy and deliver results, to create metrics and drive accountability, and to communicate effectively across all levels of the hierarchy. Those with marketing, sales, or communications backgrounds might be a great fit. Also consider people who have been informal D&I champions or, more specifically, have served as an executive sponsor for an employee resource group. You don’t have to be a career HR professional to do this work.
My advice: once onboarded, this new hire should lead your DEI efforts.
Step 3: Ensure DEI alignment
For your DEI initiative to work, it must be supported by your people — and the only way to earn their buy-in is to demonstrate the need and the value, or the problem and the solution. The problem for most organizations is insufficient diversity (a recruiting and hiring issue), equity (a hiring, training, development and advancement issue) and inclusion (a company culture issue); the need is improvement in all three areas.
Step 4: Build an equitable, inclusive culture and safe, comfortable working environment from the ground up
Step 5: Create a diverse and inclusive artificial intelligence working group
Once your organization is prepared to truly investigate the potential uses of artificial intelligence based on the principles of diversity, equity and inclusion, it’s time to build your AI working group.
Assign your CIO, CTO and/or IT director to the manager or coach role, and your DEI director as team staffing and inclusion consultant, and instruct them to work together to build out your working group with:
There should also be mid- to senior-level representatives of each of the impacted business areas, including:?
The goal of the AI working group is to complete the remainder of the steps without sacrificing your commitment to DEI.
Step 6: Determine if artificial intelligence is a worthwhile investment
To identify your organization's best use cases for AI, you’ll want to survey the most diverse set of workers, across departments and job functions and up and down the corporate ladder. Why?
By surveying everyone, you can:
The first mission of your AI working group must be to decide whether this exploration is even worth the organization’s time and resources. Kick off your introductory meeting by discussing and soliciting detailed responses to the following:
If, after reviewing the group’s written responses to all of the above questions, you and your working group leader believe you should continue exploring AI at your organization, task your tech experts with the next steps.
Step 7: Assess your IT infrastructure for AI capabilities
Even with outdated legacy systems and complicated tech stacks, you can still implement artificial intelligence, intelligently. Of course, before you identify practical use cases or develop an AI strategy, you need to determine:
For smaller and/or older organizations transitioning from AI experimentation to implementation, overhead costs may skyrocket as AI tech becomes increasingly complex. That better positions the more strategic, meticulous, innovative and cash-flush organizations that do the research to identify cost-effective systems and methodologies for running their AI software.
When considering the bandwidth, strength and integration capabilities of your current systems and tech stack, as well as what you’ll need to take advantage of all the benefits of AI and automation, prioritize the following:
Step 8: Identify AI use cases and test and select AI solutions
There’s an automated option for everything, but all artificial intelligence isn’t created equal. Before demoing the most buzzed-about or high-powered AI tools or hiring an AI developer to create your own, instruct your AI working group to first identify the types of AI tools and platforms that would most significantly impact the work and lives of their fellow employees.
You can also ask them for their feedback on the following.
21 Types of Must-Have AI-Powered Tools to Maximize the Business Benefits of Artificial Intelligence
Bonus: Check out EX Squared.
Step 9: Implement, test, monitor, report, iterate and optimize
As with any new tool, tactic or strategy, artificial intelligence isn’t a set-it-and-forget-it solution.
The final role of your AI working group should be to confirm with you, the C-suite and all impacted managers all the new AI-related assignments. Your working group leader should then continue to oversee reporting on/from all campaigns, business units, processes and employees leveraging and/or responsible for artificial intelligence.
This reporting will dictate whether you need to:
Learn More About AI and DEI
To learn more about artificial intelligence, download our report, Everything You Need to Know About AI Right Now.
To learn more about diversity, equity and inclusion, read this.
Originally published via CMP Exchange Series