AI&
PAI’s quarterly update of ideas, events, and people making an impact on AI and society
EDITOR’S NOTE
While AI and philanthropy may seem disconnected, philanthropic organizations play a critical role in driving change towards equitable AI. Yet philanthropies are often left out of these conversations — a missed opportunity we were keen to change at PAI’s recent AI & Philanthropy Forum.
The forum came at a pivotal time, with 40% of nonprofits saying no one in their organization is educated in AI, despite the majority believing generative AI may be applicable to their communities. Amidst this landscape, the event convened leading experts, ethicists, nonprofit leaders, and philanthropists to explore the convergence of AI, philanthropy, and social impact.
One overarching theme emerged — only through cross-sector partnership can we responsibly harness AI’s potential to improve decision-making, accelerate solutions to global challenges, re-conceptualize philanthropy's impact, and ultimately uplift humanity.
“Technological change is nothing new and how it plays out is not inevitable,” remarked Andrea Dehlendorf, an expert in labor, tech justice, and economic justice. “AI is a set of choices that we make as a society and those choices have to be made by a broad group of people who can be meaningfully at the table.” For more insights on AI and philanthropy, continue reading to hear from Don Chen, President of the Surdna Foundation.
SPEAKING OF AI
Three minutes with Don Chen from the Surdna Foundation
PAI: In your keynote, you posed an interesting question about exploring how AI tools can be "reparative" and help address historical patterns of discrimination and exclusion. Can you elaborate on what you envision a “reparative AI” approach might look like?
Don Chen: There’s a broad spectrum of debate about AI, from how to regulate potential abuses to the promise of new solutions. Stakeholders committed to racial justice—like the Surdna Foundation—need to focus on every band of that spectrum.
At one end of the spectrum, regulators, civil society, and business leaders focus on harm prevention. For example, AI for the People and Data for Black Lives challenge us to combat racial bias in the algorithms that increasingly drive decision-making in all aspects of our lives.
At the same time, we should consider how technology can help address past harms. Optimists—including many tech leaders—have made ambitious predictions about how AI will solve longstanding challenges like poverty, disease, and climate change. What about racial repair? For example, a recent McKinsey analysis notes, "generative AI has the potential to widen the racial economic gap...by $43 billion each year." What would it take to ensure a fair share of that new wealth and significantly reduce the racial wealth gap?
And what about other types of societal repair? There’s plenty of evidence that extremism in social media generates profits. Instead of fueling division across differences, could public-interest-driven technology help us identify and act on shared values, interests, and aspirations? I’m willing to bet that there’s tremendous value in fostering reconciliation, repair, belonging, and robust civic institutions. How can we capture and share that value?
What role do you see partnerships and collaborations playing in the foundation's efforts to navigate the complex challenges posed by AI? How do you approach building and maintaining effective partnerships with diverse stakeholders?
As I mentioned in my keynote, most decision-makers across sectors—government, the private sector, foundations—are not technologists. Meanwhile, AI and other technology tools are transforming our society in real time, and we’re largely unprepared for that transformation. It’s essential for civic-minded people to collaborate based on shared principles and goals so that we can learn and act together.
The AI industry is fraught with competing ethical values and visions. The lack of tech literacy among decision-makers has real consequences. Concerns raised by those who have expertise but don’t necessarily have institutional or governing power have often been ignored or acknowledged too late. I hope the public interest technology community becomes more powerful. Then, the types of issues raised by experts like Timnit Gebru and Rumman Chowdhury—former ethics leads for Google and Twitter, respectively—could be addressed in a timely way.
Looking ahead, what are some of the key areas or issues related to AI that you believe the philanthropic sector should prioritize over the next few years?
Strengthening the field and organizational infrastructure for public interest technology is a top priority. Some foundations may focus on specific AI concerns, such as eliminating algorithmic bias or improving public health responses. But we all would benefit from having a well-resourced, well-established set of civil society organizations, governmental partners, industry associations, and other public interest stakeholders who work together to inform technology policy, ethical standards, learning, and problem-solving.
At Surdna, we would rely on that infrastructure to ensure that technology tools and our grantmaking priorities can help address longstanding racial biases and injustices. For example, the Andrus Family Fund (which is based at Surdna) focuses on abolishing youth prisons and family separation by developing alternatives to incarceration and harmful practices in the foster care system. We’re already seeing great concern about predictive models for child abuse, the effectiveness of different types of treatment and punishment, and the data systems to prevent children from “falling through the cracks” within labyrinthine bureaucracies. Though some show promise, most predictive models are based on longstanding assumptions that have demonstrated racial bias.
It’s imperative for us to hold those tech applications to very high standards, not just to “do no harm” but to generate transformative alternatives. Doing that work from within issue areas is critical, but it’ll be much more successful if those efforts are buttressed by a robust field of public interest tech.
Just for fun, what was the last thing you used AI for?
My wife is the board chair for an environmental justice nonprofit in Connecticut. She needed to convert an audio file into meeting minutes. So, I used an AI tool to generate a transcript and voilà — I not only helped her deliver, but also scored some serious spousal bonus points.
Watch Don’s full keynote address below:
RECENT PAI WORK
THE MORE YOU KNOW
Listen—Watch—Read
Listen: A conversation with the U.S. chief data scientist: Dominique Duval-Diop | The TechTank Podcast