Considering ChatGPT Implications for Civic Tech: A Coforma Technologists’ Panel

Everywhere we look, people are talking about and experimenting with ChatGPT right now. At Coforma, we’ve been thinking about how AI generative models such as ChatGPT, DALL-E, and others might be used in civic technology, along with all of the benefits and risks that their use might introduce.

Whereas many have offered either their Pollyanna-esque dreams of an AI-run future or their condemnatory reviews of the current state of AI tools, we’re focused on a balanced reflection, holding possibility and hope along with current realities and challenges.

Join our small, cross-functional team as we ponder whether it is possible to use AI responsibly in civic tech. And, if so, what do we need to know to move forward?

Participants:

Alyssa Liegel (she/her), Senior Visual Designer

Ann Buechner (she/her), Director of Content Design

Chelsea Kelly-Reif (she/her), Principal Software Engineer

Christie Ibaraki (she/her), Senior Data Scientist

Jenny Mayo (she/her), UX Designer

Kristin Ouellette Muskat (she/her), Senior UX Designer


Participants were asked to answer three or four of the following questions.

1. If you had to describe your feelings in a single word about applications for generative AI (like ChatGPT and DALL-E) in civic tech, what would it be?

Alyssa: Excited

Ann: Curious

Chelsea: Fascinated

Christie: Nervous

Jenny: Uncertain

Kristin: Excited

2. What are some applications that come to mind for generative AI in civic tech?

Kristin: Applications that I’m excited about include developing a generative AI that can interact with the public on the government’s behalf – to answer questions quickly and weave together the patchwork of support resources out there. A real-world example: I recently filled out a W-4, and even though I’ve done it many times, I still have no idea what I’m doing. Could an AI help me understand it better by answering specific questions I have about my unique situation?

Chelsea: Writing plans and requirements for projects, designing services, helping autocomplete parts of code, suggesting ideas for improved test coverage, and even helping us interact better with our users all come to mind. Artificial Intelligence already helps my teams at Coforma speed up the software development life cycle, taking away some of the busy work and giving us more time and energy to focus on what really matters: serving the public. In the rapidly-evolving landscape of civic tech, AI is helping us build custom digital services more quickly, reliably, and personally than ever for the communities we serve.

Christie: Within government technology more broadly, my first thought is that generative AI can be helpful for text summarization. Providing government benefits often requires a government employee to read and process large amounts of text before making a decision. Using automated text summarization as a decision-making aid could speed up this process.
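The summarization idea Christie describes can be illustrated with a toy extractive summarizer. This is only a sketch: the `summarize` function, its word-frequency scoring, and the sample application text are all illustrative assumptions, and a real decision-support tool would use a trained language model rather than word counts.

```python
from collections import Counter
import re

def summarize(text, max_sentences=2):
    """Score sentences by word frequency and return the top ones
    in their original order -- a toy extractive summarizer."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(words)
    # Rank sentence indices by the total frequency of their words.
    scored = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r'[a-z]+', sentences[i].lower())),
    )
    keep = sorted(scored[:max_sentences])  # restore original order
    return ' '.join(sentences[i] for i in keep)

# Hypothetical benefits-application text a reviewer might face.
application = (
    "The applicant has lived in the state for ten years. "
    "The applicant reports a qualifying disability. "
    "Supporting medical records are attached. "
    "The applicant requests expedited review."
)
print(summarize(application, max_sentences=2))
```

Even this crude version shows the shape of the workflow: the reviewer still makes the decision, but the tool surfaces the sentences most likely to matter.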

3. What benefits and/or risks do you think could come with adopting generative AI-based tools in civic tech?

Kristin: I believe there are a huge number of risks to any new technology–misinformation is a big one with AI, along with security and privacy. I don’t want to downplay those risks, but I do think that generative AI is here to stay. Legislators need to be thinking about regulation, and to do that effectively people in government need to learn more through hands-on engagement and direct experience, looking into questions like: How could something like ChatGPT, with the addition of lots of guardrails, help in a limited context? What are the biggest risks to the public, and how can we mitigate those risks?

Slow is not always bad with technology adoption, but I’d love to see the civic space start now and be at the forefront of research with AI. I think there is a potentially huge benefit to government learning about and engaging with generative AI early. And the quicker that civic technologists pick up and experiment with it, the more likely it is to be regulated and used effectively in a government context.

Chelsea: In addition to facilitating the software development life cycle that I mentioned before, we can use AI to better understand our communities. For example, governments can use AI to analyze large amounts of feedback received from constituents in order to identify trends, preferences, and key areas of concern among various demographics, which can help organizations like Coforma tailor solutions that better meet the needs of users. AI can also be used for predictive analytics, which can help identify potential problems–even immediately save lives–before people would otherwise know that something bad is about to happen.

On the other hand, I see extreme risks in the way AI can be–and already is–used to harm people. When AI is in the wrong hands, people with bad intentions–or even people with good intentions who make mistakes, like unconsciously teaching biases–can further marginalize people, or worse, at scale. For every actor that is doing good with AI, there is another that is doing harm with it.

I'm also terribly worried about AI systems being taught with people's personal and private information without their consent and possibly disclosing that information to others. The same goes for classified information.

Alyssa: I am excited and hopeful to see AI-powered tools created to bring benefits to folks with cognitive dysfunction or those who may have physical difficulty performing tasks such as typing or reading a screen for extended periods. People with POTS, Long COVID, Lyme Disease, and many others have to use their limited energy on what is most important. Creating tools that can limit the time and energy needed to complete the paperwork necessary to secure disability or unemployment benefits could help this population save their energy for more critical or enjoyable tasks.

4. How could using generative AI as a public point of contact with the government create barriers to accessing services?

Chelsea: Allow me to take off my engineering hat for a moment and speak personally about some of my experiences encountering barriers to government services as a woman of transgender experience.

In the United States, transgender people face intense discrimination in all areas of our lives. In government and/or healthcare services, transgender people are often denied equal treatment by organizations and officials, including judges and court officials. It is so bad that only one-fifth of us have been able to update all of our identity documents–in fact, one-third have updated no documentation at all. (Source: National Center for Transgender Equality).

Despite being out and living as my real self in an accepting area of California for a few years now, and being a worker in the government myself for even longer, I was so terrified of actually talking to a government employee about my documents that I did not get my Social Security Card and driver's license updated until January 2023.

My worry is that for marginalized groups such as mine, AI will improve access to services in some areas and create barriers to access in others, due to the levels of bias being introduced by those training it. Consider the differences that might exist for me in a chatbot simulating a DMV agent in a state like California versus a state like Texas. And if this is my experience as a white trans woman born in this country, I’m even more concerned for trans women of color and undocumented people.

Alyssa: Using AI-based tools to replace certain services, instead of as a tool to improve them, undoubtedly creates barriers to services. This is already a pattern in consumer customer service spaces. Often, chatbots are presented as the point of contact when you seek solutions to issues with a service, while alternative ways of contact are buried deep in a website or underneath layers of menu options read to you on a never-ending phone call.

If similar solutions are employed in the civic space without consideration for those already experiencing the adverse consequences of the digital divide–those with little or no access to the internet, those who are not technologically savvy, or those who simply cannot interact with a chatbot because it's not offered in a language they understand–they will be further divided from the "AI-improved" civic services.

Since tools like ChatGPT and other AI continue to "learn" by evaluating the data that gets input into their systems, this also means that these folks will not be included in the datasets that their peers on the other side of the digital divide are, continuing to widen the gap as these tools become more common practice.

Jenny: I think of folks who already experience barriers to accessing services and who may be discouraged by AI tools due to a lack of access to the internet and devices, differing levels of technical literacy, language barriers, and more. Instead, how can current tools be improved to ensure that folks have equal access to the information and assistance they need, with AI available as an additional tool for those who prefer it?

Ann: Federal plain language guidelines say “wordy, dense construction is one of the biggest problems in government writing” and instruct writers to be concise. The ChatGPT website admits that it’s “excessively verbose,” which would replicate or proliferate current challenges. Its conversation style doesn’t meet plain language or accessibility standards, making information generated potentially unusable.

ChatGPT could also mislead users with incorrect information if they ask an “ambiguous” question because it will guess what a user meant rather than ask follow-up questions. Imagine all the questions people have about their health, their benefits, their taxes, their immigration status, and more—an incorrect answer from an “official” source could have catastrophic effects. We have to consider the potential for doing harm at scale. It’s one thing to see an illustration of what ChatGPT is capable of—another to implement it.

We’d need much more insight from research to understand how or if these tools are usable by and useful to the people they seek to serve.

5. Are there ways that generative AI could be used moderately in government to improve the experience for the public without harming them?

Kristin: Human-centered user research and ethical experimentation for a set of very narrow use cases is a good place to start. Although I have these epic visions for a future where we can interact with a ChatGPT-like bot as a one-stop shop for questions about anything government related, I think experimenting in a small use case–like a chatbot for a local DMV, for example–could be interesting. Progress can be achieved through moderation and small-scale efforts.

Alyssa: What comes to mind immediately is using AI as a brainstorming tool to get teams out of their comfort zones and more effectively "ladder up" ideas to aid in creating effective strategies and solutions to problems. Using AI to identify common inquiries or existing solutions to challenges can help teams identify the strengths and pitfalls of those solutions and then go beyond what already exists to develop innovative solutions.

Christie: One way that government already uses AI “moderately” is by building it into government-facing technology. Many government services rely on complex decision making, so building machine learning or AI-driven technology that reduces the cognitive burden on human adjudicators (for example, by generating text summaries that can guide a human reviewer to salient points within an application) *can* improve government services to the public. And by keeping the technology within the government, the government has more control over how the technology is used (e.g., how users interpret the output). However, I think ensuring fairness and mitigating harm in this context is still extremely difficult, and I’m not sure the government is prepared to do that yet.

6. Do you think the use of generative AI would build or erode trust in government services?

Alyssa: I don't think it's one or the other. Populations that benefit from considered, thoughtful, and effective use of AI-driven solutions will continue to do so or learn to trust the government services that offer them. The populations that do not benefit or are left out of these benefits will lose trust in these services and the agencies that provide services with them.

Jenny: Used as it stands, generative AI in civic tech may erode trust rather than improve it. There are internal biases that continue to exist in our policies that disproportionately impact marginalized groups, so I wonder about the checks and balances that would need to be put in place so that AI tools can effectively and responsibly serve as an equitable tool for communication. AI use in civic tech should be less about early adoption and more about determining ways that its use can establish trust in the government services themselves.

Ann: It depends on the AI and how it’s being used. Anyone working with anything touching something digital likely uses AI every day. You might use Gmail’s Smart Compose to help complete an email, or ask Alexa to order more granola bars, or get a suggestion from Netflix.

If we’re talking about ChatGPT at this moment in time: erode. OpenAI makes ChatGPT’s limitations clear. And its limitations–many of them discussed previously–wouldn’t make it a good candidate for a public-facing tool. Which isn’t to say I’m anti-AI. Just pro-caution, particularly when it comes to services people must use.


About Coforma

Coforma crafts creative solutions and builds technology products that elevate human needs. They’re impactful by design. Visit coforma.io/ to learn more.
