AI Biases EXPLAINED: When Can We Trust AI? | Phaedra Boinodiris, IBM Consulting’s Global Leader for Trustworthy AI
Anyone who has experimented with generative AI knows the tech is still flawed.
Despite massive investments in AI models, tools, and applications, AI outputs remain biased and inconsistently accurate, raising global concerns about trustworthiness and about who is responsible for making AI safe as it evolves at breakneck speed.
The unfortunate truth is that most AI models today reflect only a narrow sample of our collective humanity. They inevitably reinforce the existing biases of the people who built them and of the narrow data sets they were trained on, leaving them ill-equipped to deliver diverse perspectives.
Unpacking the ethics and the path to a safer, more responsible, and more representative AI future is Phaedra Boinodiris, IBM Consulting’s Global Leader for Trustworthy AI. Phaedra is a top voice, author, speaker, and one of the earliest leaders responsible for reimagining AI initiatives.
Her recent book, “AI for the Rest of Us,” and her role as co-founder of the Future World Alliance highlight her commitment to integrating ethics into AI education and development.
“AI is like a mirror that reflects our biases back towards us.” — Phaedra Boinodiris
She’s here to discuss the need for inclusive AI that represents all of humanity, outlining the key considerations leaders should take into account to ensure their AI initiatives are ethical, inclusive, and able to effectively augment our capabilities without compromising human values.
We also talk about:
Tune in to understand why we need to approach AI with the intentionality it demands so it can work for humanity, and not against it.
Learn more about Phaedra Boinodiris here.
Listen to the episode here or watch it on YouTube below.