Nonprofits and AI

I’ve participated in a lot of conferences, panels, discussions, etc. about “nonprofits and AI,” “foundations and AI,” “AI for good”* and so on. The vast majority of them miss the point altogether.

It’s not really a question of these organizations using artificial intelligence, which is how every one of these panels approaches it. Most civil society organizations may buy software that applies algorithmic analysis and some AI to a large dataset, perhaps through their vendors of fund development data or software. And then, yes, there are legitimate questions to be asked about the inner workings, the ethical implications, the effects on staff and board, and so on. Those are important questions about software vendors, and it is important for all organizations to understand how these things work, but they are hardly worth a conference panel (IMHO), and they are not the “black magic” or “sector-transforming phenomenon” that a conference organizer would want you to think.

The REAL issue is how large datasets (with all the legitimate questions raised about bias, consent and purpose) are being interrogated by proprietary algorithms (non-explainable, opaque, discriminatory) to feed decision making in the public and private sectors in ways that FUNDAMENTALLY shift how the people and communities served by nonprofits/philanthropy are being treated.

  • Biased policing algorithms cause harm that nonprofits need to understand, advocate against, deal with, and mitigate.
  • AI-driven educational programs shift the nature of learning environments and outcomes in ways that nonprofit after-school programs need to understand and (at worst) remediate or (at best) improve upon.
  • The use of AI-driven decision making to provide public benefits leaves people without clear paths of recourse to receive programs for which they qualify (read Virginia Eubanks’s Automating Inequality).
  • Algorithmically optimized job placement practices mean job training programs and economic development efforts need to understand how online applications are screened, as much as they help people actually add skills to their applications.

This essay on “The Automated Administrative State” is worth a read.

The real question for nonprofits and foundations is not HOW they will use AI, but how AI is being used in the domains in which they work and how they must respond.


* I make it a policy to avoid conversations and events structured as “_[blank]_ for (social) good,” where the [blank] is the name of a company or a specific type of technology.

Arlene R. Atherton

Health Advocacy Lobbyist; cultivation strategy, strategic thinking, stewardship, leverage resources, monitor network, collaborate with stakeholders

5y

I am with you. Born in Silicon Valley, the gizmo guys think that if they created it, people should buy it. Forced decisions rob our soul of learning. Motivation for social good must be intrinsic. America cannot let the prowess of technology supplant moral value. We would turn into Soviet propaganda coming from a computer, rather than speakers on the subway. Non-profits have always been targeted as slow to adopt technology. AI is better served in demographics.

Janet Cruz

Social Innovation

5y

A very interesting challenge for the NGO sector: partnering with the tech industry to think about rights in this amazing digital transformation era.


As always, grateful to Lucy for taking the lead in thinking through really important, complicated issues that affect our sector’s ability to live our values (or what we say are our values).
