Preserving the community’s voice in the age of generative AI
User: Good afternoon ChatGPT, how many users do you have?
ChatGPT: Good afternoon! I don't have access to real-time data, so I can't provide you with the current number of users. My knowledge is based on information available up to September 2021, and I don't have the capability to track user numbers or statistics. Is there something else I can assist you with today?
In August 2023, some 180.5 million people used ChatGPT a total of about 1.4 billion times. Countless more accessed similar Large Language Model (LLM) chatbots via Bing Chat and Google Bard. At Busara, we have begun exploring the potential of LLMs in our work, which revolves around understanding contexts, diagnosing problems, proposing solutions, and testing their efficacy. The potential efficiencies and resultant cost-savings that LLMs can introduce at almost every stage of our behavioral science work are exciting. But there are risks we cannot overlook.
Some are well-documented, such as the fabrication of information sources and references. But another risk, less well-documented and one that cuts to the core of Busara’s institutional values, has us concerned: that of using generative AI to develop interventions for people in the Global South while excluding them from the process.
The LLMs that power tools like ChatGPT are likely to be incorporated into the design of solutions intended for implementation in the Global South. To illustrate this, we asked ChatGPT to give us an example of decision-making rules of thumb (which in behavioral science often go under the label of heuristics and biases) that are specific to populations living in Kibera, an urban informal settlement in Nairobi, Kenya.
“Certainly! Informal settlements in Kenya, such as Kibera in Nairobi, often exhibit unique socio-economic and environmental characteristics that can influence the development of specific heuristics or biases. Here are a few examples: Confirmation bias: In an informal settlement, residents may have limited access to reliable information and resources. Therefore they tend to seek out information that confirms their existing beliefs or perceptions about their living conditions. For example, residents may only pay attention to news or stories that reinforce negative stereotypes about their settlement, while disregarding positive developments or potential solutions.”
Next, we asked ChatGPT to suggest an intervention, and in its authoritative tone, it provided us with the following recommendation:
“One intervention from behavioral science that can help combat heuristics and biases in informal settlements is providing accurate and diverse information through targeted communication strategies. Here's an example: Information campaigns: Design and implement information campaigns that aim to challenge and counteract biased beliefs and stereotypes about informal settlements. These campaigns can utilize various channels such as community meetings, local media, social media platforms, and mobile technology to disseminate accurate and balanced information about the living conditions, achievements, and potential of the settlement. The messaging should focus on highlighting positive aspects, showcasing success stories, and dispelling misconceptions.”
Notice what has happened here: we have diagnosed a series of problems and recommended solutions for people in Kibera, all without talking to a single person and without questioning the validity of the approaches. Behavioral science focuses on the person and their context precisely to counter such top-down solutions. But now, ChatGPT makes it easy, even automatic, to disregard the person and their context. It also assumes that the provision of information necessarily leads to behavior change, which, given how context shapes behavioral responses, we know is not the case.
This is profoundly uncomfortable, yet the potential loss of voice goes even deeper. The data on which LLMs are trained come mostly from populations in the Global North and likely lack representation from the Global South. Because the underlying data are obscured, we may feel confident in the outputs of generative AI tools even as those outputs conceal important biases.
Generative AI has its benefits, such as increasing efficiency in some creative processes, but we need to remain vigilant about the risks it presents. We cannot overstate the importance of ensuring communities are not excluded through its use.
So ChatGPT, how do you make sure you generate responses that reflect the community?
Generating responses that reflect the community or align with societal norms and values is a challenging task for AI models like mine. While these strategies aim to improve the alignment of AI responses with community values, no AI model is perfect, and challenges remain. Therefore, users should exercise caution and critical thinking when interacting with AI models.
Well, there you have it folks.
Busara is committed to advancing poverty alleviation in the Global South. If you would like your organization to participate in shaping this dialogue, please contact us.
References:
1. https://www.tooltester.com/en/blog/chatgpt-statistics/
2. https://www.statista.com/statistics/1384323/industries-using-chatgpt-in-business/