The following is an essay I've been wanting to write for a few months. It's a bit long, but I hope you will give it a few minutes of consideration. An important note of clarification: When I speak of AI here, I am speaking only of generative AI. There are many, many useful things that computers can do that aren't generative AI. In some cases, corporations have labelled existing programs "AI" in the past year or so to fool investors who want to see that buzzword in annual reports.
Over the past year and a half or so, as hype around AI has spread across media platforms, I've seen many webinars, trainings, and conference topics emerge around generative AI use in the nonprofit sector. The themes of the promotional language are consistent:
- Generative AI is here to stay
- You don't want to be left behind
- Learn to harness the power of generative AI today and get a leg up
- It is a given that generative AI is only going to get better and more powerful
- The smartest people all see a future powered by AI, and the intelligent thing to do is to join them
- Generative AI is so powerful that using it will save your organization time and money
Examining an example of "helpful" AI
I've heard guests on non-profit podcasts provide truly remarkable answers when asked about the benefits of generative AI. In one memorable instance, the guest gave the example of how their organization had done an extensive, time-consuming review of published literature and statistics on a topic, and how AI could one day save them weeks of staff time by creating that report near-instantaneously.
Let's take a moment to examine that answer, as it is informative on several levels:
- The best use-case this guest could provide was something that they merely hoped generative AI could one day accomplish. This was a guest who was excited to promote AI and the belief that it is going to be game-changing for non-profits.
- Let's imagine a world in which ChatGPT 5.0 or 6.0 has access to all real-time information from the live internet and from academic journals (a major barrier yet to be overcome). If the AI tool were tasked with this project and able to deliver a report as requested, how would we know it was accurate? We already know that generative AI routinely "hallucinates" answers -- a polite term for "is wrong". Attorneys using tools like ChatGPT have filed court briefs citing precedent from cases that don't exist. Google's AI summaries at the top of search results are routinely wrong, even when they don't accidentally pull answers from jokes on Reddit. In my own experiments, I've gotten generative AI bots to tell me that walruses are the largest aquatic mammals. I've asked AI bots to tell me about our non-profit, and they've gotten all the details wrong while sounding vaguely right.
- If we can't know that the report from AI is accurate, we will need to thoroughly vet the entire report. We will need to get the AI to provide us with a list of the works it referenced (something AI doesn't generally do), make sure the sources all exist (a sketch of even this basic step appears after this list), ensure the sources are all relevant, double-check that we aren't missing important literature on the subject, and then verify that the AI has properly interpreted each of the sources. At this point, has the AI tool saved us any time or effort?
- Now, let's imagine that somehow we could trust the AI report without having to recreate it. I want to emphasize that this is an incredible leap for the sake of the thought experiment, one not supported by evidence. But let's imagine we end up with a reliable report without a human or team of humans involved in creating or checking it. We now have a report that is detached from any human expertise. No staff member can present its findings as an expert or answer questions about it. At our organization, we often end up generating a report that identifies one or two top-line numbers for general public interest. Even so, it is quite important that we have staff members who understand the work behind the top-line numbers and who can speak knowledgeably to the complexity below the surface simplicity.
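To make the vetting burden concrete, here is a minimal sketch (in Python) of only the first and simplest step named above: checking that cited sources exist at all. The DOIs here are hypothetical placeholders, and the sketch assumes the AI supplied DOIs in the first place, which it frequently will not; it queries the public Crossref REST API, which returns a 404 for DOIs it has no record of.

```python
# A minimal sketch of one vetting step: do the cited DOIs exist at all?
# The DOIs below are hypothetical placeholders; in practice you would
# substitute whatever citations the AI-generated report actually listed.
import urllib.error
import urllib.request

cited_dois = [
    "10.1000/hypothetical.example.1",
    "10.1000/hypothetical.example.2",
]

def doi_exists(doi: str) -> bool:
    """Return True if the public Crossref API has a record for this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False  # Crossref answers 404 for unknown DOIs

for doi in cited_dois:
    verdict = "found" if doi_exists(doi) else "NOT FOUND -- possible hallucination"
    print(f"{doi}: {verdict}")
```

Even if every DOI resolves, this establishes existence only; relevance, completeness, and correct interpretation still demand a human reader, which is exactly the time cost the AI was supposed to eliminate.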
In sum, the example provided of a helpful use-case for generative AI in the non-profit sector is purely speculative about future improvements, and it would not even be particularly helpful if ground-breaking advancements made it possible.
Other use-cases
Maybe the above example just isn't a good one. Let's take a look at some of the use-cases for generative AI for non-profits suggested by Google's AI. I have categorized the supposed use-cases based on my criticism of them. Anything italicized is a direct quote from the Google summary. Anything bold is my criticism of that category.
**Unhelpful due to potential hallucinations**
- *Summarizing program impact for grant reports*
- *Developing educational materials for potential donors*
**Undesirable because people want meaningful content from other humans and not a string of words generated by a sophisticated predictive text machine**
- *Writing blog posts, social media updates, and newsletters*
- *Generating website copy and landing page content*
- *Crafting personalized donor appeals and donation requests*
- *Creating compelling stories for fundraising campaigns*
- *Generating video scripts and visuals for social media*
**Possibly useful but something that can be done by existing algorithms and programs that aren't actually generative AI**
- *Creating personalized invitations for events and donor meetings*
- *Identifying high-value donors and segmenting audiences*
- *Analyzing donor data to predict giving behavior*
- *Developing email marketing sequences with personalized content*
In sum, Google's generative AI summary on the topic was unable to give an example of a truly helpful use-case for generative AI in the nonprofit space. I have also consulted lists written by humans, which effectively boil down to the same suggestions. I will note a few additional use-cases I have read or heard about from actual humans:
- I know of a few people who have found ChatGPT helpful when beginning an official letter, resume, or report; although I will also note that it is only helpful in these cases because it is drawing from existing human-drafted templates online that are easy to find. Or they were easy to find, before Google destroyed its own search tool.
- I have also seen suggestions that nonprofits could adopt generative AI chatbots for enhanced customer service, but I will leave it to you to consider whether or not an AI chatbot has ever enhanced your customer service experience. In the rare instance that you interacted with a chatbot that had a well-written and correct answer to your question, it is very likely that it was thoughtfully programmed and written by a human and not an actual generative AI chatbot.
- Some articles have mentioned nonprofits making use of AI to screen job applications. Honestly, this is nightmarish to me. Many corporations, for-profit and nonprofit alike, have adopted computer screening for job applications, and it has overwhelmingly been bad for applicants. Job-seekers now have to learn how to prepare a resume and cover letter that somehow appeals to both an algorithm and an eventual human reviewer. These are two very different audiences, and algorithms in particular are inscrutable to the applicant, who might be rejected because the computer does not recognize a synonym for a required keyword (a toy illustration follows this list). This approach is common, but it is unfair and certainly not the most effective approach for recognizing top talent.
- It is pretty common to see AI note-takers in virtual meetings. This is maybe the most useful version of generative AI I have seen for nonprofits, although I have reviewed such notes and found the results hit-or-miss on key points: they sometimes misinterpret the conversation and routinely fail to judge which points are important enough to include in a summary. It also concerns me that a tech company may have access to an A/V recording, transcript, or summary notes from a meeting with my staff, another organization, public officials, etc. Even so, this is the best use-case of which I am aware.
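On the application-screening point above, here is a toy sketch of how naive keyword matching goes wrong. This is not any vendor's actual screening logic; the required keywords and the resume text are invented for the illustration.

```python
# A toy screener: pass only if every required keyword appears verbatim.
REQUIRED_KEYWORDS = {"fundraising", "volunteer management"}

def passes_screen(resume_text: str) -> bool:
    """Reject unless every required keyword appears as a literal substring."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

resume = (
    "Led donor development campaigns raising $2M annually; "
    "recruited and supervised a team of 40 volunteers."
)

# "Donor development" and "supervised ... volunteers" describe exactly the
# required skills, but the literal strings never appear, so the applicant
# is rejected before a human ever reads the resume.
print(passes_screen(resume))  # False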
A bit about ethics
There are many people who have written more comprehensively and expertly on the ethical problems surrounding generative AI than I can. I won't belabor this section but will simply provide a short list of some highlights:
- Generative AI requires massive amounts of electrical power. This is putting a major strain on electrical grids and is actively harming our ability to meet important climate goals on energy usage. Even when you see stories about AI companies wanting to build their own nuclear power plants or solar farms, note that they are simply trying to ensure there is enough power for their product and are not providing a net benefit regarding renewable energy and sustainable demand.
- Cooling the intensive hardware infrastructure needed to run the server farms behind generative AI requires vast amounts of freshwater, straining one of our scarcest natural resources.
- Generative AI has no respect for intellectual property and is parasitic on the creative work of humans. Artists, writers, journalists, academic researchers, etc., all have difficulty maintaining funding for their important labor already, and generative AI worsens this situation by incorporating their works into its process to generate worse versions of their work to replace them.
- On top of the theft of intellectual property, generative AI risks a world with fewer people practicing creative arts, to the detriment of us all.
- Education is facing a major threat from generative AI, as the tools are enabling students to hand in work that they had no role in creating. This doesn't just make the job of teachers more difficult or interfere with accurate assessments and interventions for student learning. It is actively harming the education of children and young people who are missing out on the chance to do assigned readings, reflect on what they have read, and produce a comprehensible essay from that process. This has potentially devastating consequences for the literacy of society as a whole.
- Because generative AI is built on the compilation and averaging out of existing resources, it has a tendency to perpetuate bias. If the written texts of a society carry a general bias, that bias will be distilled when those texts are averaged into an AI product (a toy sketch follows this list).
- Generative AI is a gift to anyone interested in wielding influence without regard for truth. It has been put to use to operate armies of bots on social media, to scam seniors out of money, to spread misinformation online, etc.
- The ultimate goal of generative AI is monopolistic power and wealth for a hyper-minority. At the end of the day, there are two things driving the massive corporate projects developing generative AI: (1) reducing dependence on human labor in order to capture more profits, and (2) building a product on which human consumers become dependent, ensuring consistent revenue for the AI provider. As these companies seek to displace and replace millions of workers, there is no indication that they envision a future in which the masses have many more hours available for play and rest while machines handle necessary labor on our behalf. The Luddites weren't wrong when they saw advances in industrial weaving machinery pointing to a future in which worse mass-produced versions of their own products would lead to depressed wages, loss of agency, and immiseration.
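As a brief aside on the bias point a few items above, here is a toy sketch of how averaging a skewed corpus distills the skew into a predictor. The corpus and the "model" are deliberately simplistic inventions for illustration, but the effect is the same in kind for real systems.

```python
# A toy next-word predictor built by counting a (made-up) corpus.
from collections import Counter

# Hypothetical corpus: sentences about a nurse end with "she" 80% of the time.
corpus = ["the nurse said she"] * 8 + ["the nurse said he"] * 2

def next_word_counts(sentences: list[str], prefix: str) -> Counter:
    """Count which words follow the given word across the corpus."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for i in range(len(words) - 1):
            if words[i] == prefix:
                counts[words[i + 1]] += 1
    return counts

counts = next_word_counts(corpus, "said")
print(counts.most_common())  # [('she', 8), ('he', 2)]

# Predicting the most frequent continuation turns an 80/20 skew in the
# data into a 100% association in the output.
print(counts.most_common(1)[0][0])  # 'she'
```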
Taken together, I find that these present a pretty compelling case against the use of generative AI. I would propose that a nonprofit considering an AI tool should begin with a presumption against adopting generative AI and require an overwhelming case of benefits to justify adoption.
But what if generative AI gets better?
Having considered all of this, a question still remains: will you be missing out and left behind if generative AI gets better? To answer this question, let's assume that "gets better" refers to all of the issues with generative AI. This means that we are imagining a future in which generative AI not only becomes immensely more useful, reliable, and accurate, but also has somehow overcome its many ethical problems.
If this somehow comes to pass, having been an early adopter of the worse, earlier versions of AI won't confer much of an advantage. A truly useful generative AI won't require users to have a degree in "Prompt Engineering" and won't require advanced expertise to integrate it with your operations. The difference between generative AI as it exists today and this speculative generative AI is a matter of kind, not degree. You aren't missing out by waiting for a working, ethical option that may never arrive.
Concluding thoughts
Speaking personally, I want to maintain the human element throughout our nonprofit. Of course we make use of technology for helpful efficiencies that empower us as people to do better work. But when it comes to the meager offerings of generative AI, I see far more benefit to us as an organization, to the people who make up the organization, and to the community we serve if we invest our resources in growing human expertise, experience, and skills, and in purchasing genuinely helpful tools. In almost every way, the adoption of generative AI would militate against our vision, in which communities are connected to land, food, and opportunity, so that every neighborhood is healthy, equitable, vibrant, and diverse. That is simply too high a price to pay.