How Can We Achieve “Inclusive AI”?
A new report from the World Economic Forum calls out gaps and opportunities for making AI more equitable and inclusive. This report is a great foundational resource, especially for those who are starting to explore “inclusive AI.”
It raises two important questions: how do we define “inclusive AI,” and how can we use this framework to address inequality?
Oftentimes, being “inclusive” is equated with being “culturally competent” - knowing how to navigate different social contexts in a way that doesn’t offend or harm. What this perspective tends to minimize is the *unequal power dynamics* at play that cause people or groups to be - and feel - excluded or ignored.
Building inclusive AI means empowering ALL stakeholders - especially those from socially, politically, and economically marginalized communities - to join *and direct* the conversation. To avoid “participant-washing,” these voices need significant weight and authority in directing the development of new AI/ML technologies. We also have to acknowledge and address the structural inequality that discourages people from participating in - or even bars them from - the process.
Building AI technology that benefits those most marginalized cannot rely *solely* on public involvement. Those who are least resourced can’t do all the work of protecting us from the harms of AI. If we want to prevent harm before it happens, we need to push for “equity literacy” among developers and deployers, in addition to technical AI literacy for the public. Our technical teams need to understand how inequality is constructed and maintained by all of us, and take responsibility for the technology we build that can contribute to social harm.
While strengthening the pipeline of marginalized engineers and PhDs into AI is important, we also have to remember that it takes all kinds of workers to build and sustain AI, from data enrichers and “last mile” gig workers to content moderators and customer service workers.
As highlighted in the WEF report, we must include different sectors - academia, civil society, government, and others - in developing and governing AI. We need to grapple with competing priorities and weigh the benefits and costs across stakeholders and sectors. This is the multistakeholder approach we follow at PAI to develop guidance for practitioners and policymakers.
Data sovereignty and ethical data management are also essential as we pursue inclusive and ethical AI development. Being inclusive and protecting against algorithmic harm should not come at the cost of giving up other rights and protections, including the right to our own data and privacy.
We need to normalize transparency in the AI development space; it is a core part of being inclusive. We cannot fight to keep technology accountable, especially to those who are harmed by it, without understanding the datasets and models behind AI/ML systems. To go a step further, we need documentation for provenance, dataset maintenance, and transparency. If you want to go down a documentation rabbit hole, check out PAI’s ABOUT ML work.
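To give a concrete (and purely illustrative) sense of what such documentation might capture, here is a minimal sketch in Python. The record format and field names are hypothetical examples, not ABOUT ML’s actual schema or any specific standard:

```python
# A hypothetical, minimal sketch of dataset documentation fields.
# Field names are illustrative only - see PAI's ABOUT ML work for
# real documentation frameworks.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    version: str
    provenance: str          # where the data came from and how it was collected
    collection_consent: str  # how consent was obtained, if at all
    known_gaps: list[str] = field(default_factory=list)  # under-represented groups or contexts
    maintenance_contact: str = ""  # who is responsible for updates and deprecation

# Example of filling in a record for a fictional dataset
record = DatasetRecord(
    name="example-speech-corpus",
    version="1.0",
    provenance="Crowdsourced recordings, 2021-2022",
    collection_consent="Opt-in, with the right to withdraw",
    known_gaps=["few speakers over 65", "limited regional dialects"],
    maintenance_contact="data-steward@example.org",
)
print(record)
```

Even a lightweight record like this forces teams to answer questions - who is in the data, who is missing, and who is accountable for it - that inclusive AI development depends on.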
This report is a meaningful contribution to a much-needed conversation about how to build technology more inclusively and equitably. Stay tuned: we’re releasing our own white paper later this month exploring some of these topics further.