IBM hosts AI Equity Summits

The uncomfortable, unsettling question posed to members of a recent panel was, “Who is responsible for ensuring AI contributes to reducing and eliminating disparities?” That is where the conversation really started.

In order to warm up our audience, we had to tell stories, lots and lots of stories. Stories that infuriate, stories of companies that were brave enough to look into the AI mirror, witness their own bias and take on the Herculean effort of institutionalizing needed change.

We defined equity like this:

The term “equity” refers to fairness and justice and is distinguished from equality: Whereas equality means providing the same to all, equity means recognizing that we do not all start from the same place and must acknowledge and make adjustments to imbalances. The process is ongoing, requiring us to identify and overcome intentional and unintentional barriers arising from bias or systemic structures.

Peter Drucker is quoted as saying, “Culture is what you do when no one is around,” and, as others have added, “...and it eats strategy for breakfast.”

We started by telling the IBM story.

In 2021, the World Economic Forum wrote about IBM’s journey to curate AI responsibly. Of note was the fact that it took three years from the point IBM hired an AI ethics leader to standing up our first AI Ethics Board and publishing our first principles for trust and transparency. The catalyst, or turning point, was the recognition of algorithmic bias in facial recognition algorithms. That catalyst inspired our CEO to write a letter to Congress and led us to very publicly pull out of the facial recognition business. The following years brought a flurry of investments of time and resources in education, communication, and process, all dedicated to the subject of AI ethics. Today we use our story to underscore WHY IBM is concerned about the responsible curation of AI solutions and all that it takes organizationally to establish the right culture to curate AI responsibly.

Understanding how to match principles to culture and how to establish the right ethos for teams to work in a multi-disciplinary fashion in order to assess and mitigate the risks of disparate impact is critical to achieving AI Equity. We asked attendees to consider:

  • What are the principles that you agree upon as an organization?
  • How do you know these principles are something that every human in your organization can understand, recite, and integrate into every part of the organization’s atmosphere?
  • How will you know that you have set up the right ethos so that all humans understand they are accountable for the impact the AI system will have on the world?
  • How will you know that the humans will follow the principles, create practices, and perform actions that align with these principles - even if no one else is around?

As we know, earning trust in AI is not a technical challenge but a socio-technical one, and it demands sacrifice. The sacrifice is the boldness required of leadership to use AI as a mirror to their own biases, recognize the calcified systemic inequities in an organization, and insist on change. These intrepid leaders know that prejudice is an emotional commitment to ignorance (thank you, Mr. Rudden!). We asked attendees to consider what their organization’s worldview toward fairness is. Is it indeed EQUITY, or is it something else?

Here is an example of how an organization’s worldview on fairness can impact AI governance for an AI model:

A bank approached us claiming that equity (again, different from equality) was their worldview and that they had an unusually large number of single mothers as clientele. When speaking with them about making their principles ACTIONABLE in terms of how they govern their AI models, we gave them the example that today, divorced women have far worse credit scores than their ex-husbands. Oftentimes, this is because major purchases like the house and car were in the husband’s name alone and not made jointly, even though the couple was married at the time.

IF this bank had an AI model that determined the interest rates on loans based on a key factor like CREDIT SCORE, these single moms would be unfairly disadvantaged. If the bank truly values EQUITY as a worldview, then it would need to put processes in place to address the disparity - like bringing a human in to make the final decision, which would include taking the time to understand the woman’s unique circumstances. This bank started by asking themselves what their values were, and how best to align their AI governance practices (and how their AI models are designed) with their worldview.
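To make that concrete, here is a minimal sketch, using entirely hypothetical data and column names (not the bank’s actual model), of one way a governance team might measure the kind of disparity described above: compare the rate of favorable outcomes (say, loans offered at the lower interest rate) between single mothers and everyone else, and flag the model for human review when the ratio falls below the common four-fifths threshold.

```python
# A minimal sketch with hypothetical data. Column names, values, and the
# 0.8 ("four-fifths rule") threshold are illustrative assumptions only.
import pandas as pd

# Hypothetical loan decisions: 1 = offered the favorable (low) rate, 0 = not.
decisions = pd.DataFrame({
    "single_mother":  [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "favorable_rate": [0, 1, 0, 0, 1, 1, 1, 0, 1, 1],
})

# Favorable-outcome rates for the protected group vs. the reference group.
rate_protected = decisions.loc[decisions.single_mother == 1, "favorable_rate"].mean()
rate_reference = decisions.loc[decisions.single_mother == 0, "favorable_rate"].mean()

disparate_impact = rate_protected / rate_reference
print(f"Favorable-rate ratio (protected / reference): {disparate_impact:.2f}")

# A ratio well below 0.8 is a common signal that outcomes warrant review --
# for example, routing these applications to a human decision-maker.
if disparate_impact < 0.8:
    print("Flag for human review: potential disparate impact.")
```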

For both of our events, in NYC and Chicago, IBM VP of Marketing Amy Swotinsky hosted clients from across industries whose roles make them keenly interested in the subject of AI Equity for their organizations. My fellow IBMers Ana and Piper held a design thinking session dedicated to helping clients detail the inherent challenges they are facing across PEOPLE, PROCESS, and TOOLS in their organizations in order to better use technologies to address disparities. We hosted a panel where a diverse set of leaders spoke about their own unique life experiences and how those lived experiences are what they bring to work every day to help influence more equitable worldviews, especially in how AI is built.

The organizational culture required to curate AI responsibly includes having an open, growth mindset with a healthy dose of humility. Then you need to make those DE&I goals actionable by ensuring that the teams working on AI are truly representative of the widest variety of humans you can get, and lastly (and possibly most difficult), to have those teams be multi-disciplinary in nature.

My keynote introduced the Diversity Prediction Theorem, a mathematical theorem, not a theory, as described by Scott Page in the book The Diversity Bonus, that holds for every single use case, every single time. Imagine you have a jar full of jellybeans and are tasked with determining how many jellybeans are in that jar. This mathematical theorem PROVES unequivocally that the more diverse your group of jellybean guessers is, the closer the group’s collective guess will be to the correct answer. Pretty neat, huh?
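For those who like to see the arithmetic, here is a minimal sketch of the identity behind the theorem, using made-up jellybean guesses (the numbers are illustrative, not from the summit exercise): the crowd’s squared error equals the average individual squared error minus the diversity of the predictions.

```python
# Diversity Prediction Theorem with hypothetical jellybean guesses:
#   collective error = average individual error - prediction diversity
import statistics

true_count = 1000                           # actual jellybeans in the jar
guesses = [700, 900, 1150, 1300, 800]       # hypothetical individual guesses

crowd_guess = statistics.mean(guesses)

collective_error = (crowd_guess - true_count) ** 2
avg_individual_error = statistics.mean((g - true_count) ** 2 for g in guesses)
prediction_diversity = statistics.mean((g - crowd_guess) ** 2 for g in guesses)

print(f"Collective error:         {collective_error:.1f}")
print(f"Average individual error: {avg_individual_error:.1f}")
print(f"Prediction diversity:     {prediction_diversity:.1f}")

# The identity holds exactly: the more diverse the guesses, the smaller the
# crowd's error relative to the average individual's error.
assert abs(collective_error - (avg_individual_error - prediction_diversity)) < 1e-9
```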

In addition to diversity, in order to establish a culture that curates AI responsibly, one needs to have truly multi-disciplinary teams working together - because more often than not AI model owners do not know how to think about disparate impact. I underscored this point through a wide variety of comics that I use for my own art therapy. (As Melissa Heikkilä pointed out in the MIT Technology Review, it is tough working in this space!)

[Comic image: #arttherapy]


Lastly, as we and our clients continue to focus on this extremely important topic, it is incumbent upon us all to continue to advocate for more inclusive education on the subject of data and AI ethics, and to make certain that there are individuals with enough power to be directly accountable within organizations for reducing and eliminating disparities in our AI models. Today we cannot stop at teaching the subject of unconscious bias to our teams and in our classrooms as part of DE&I initiatives without speaking about how that very same bias creeps into our data and gets calcified into our AI systems. Those who develop AI systems must become far more aware of the nature of disparate impact and how they are 100% accountable for impacts to individuals and society.


BIG THANK YOU to the organizers of these two events and to all who attended. HUGE thank you to Meredith Broussard, Associate Professor of Data Journalism at NYU, and to Rebecca Willett, Professor of Statistics at the University of Chicago. Presenting on stage with you both was a real treat, and I would gladly welcome the opportunity to do so again. Your students are truly very, very lucky to have you as their teachers. I am grateful you were there helping me tell the story. And... thank you to my colleagues John "Boz" Handy Bosma, Ph.D. and Denise Knorr at IBM, who champion this space so tenaciously - I am honored to know you.
