Embracing AI in Government with Humility, History & Optimism
Abhi Nemani
government technology scholar, fmr. public servant (CDO of LA) & entrepreneur / executive (SVP @ Euna).
The rapid evolution of artificial intelligence (AI) has sparked a global conversation, filled with both excitement and concern, about its implications for society. As AI becomes increasingly integrated into our daily lives, it is crucial not only to marvel at its potential but also to address the technology's underlying issues: privacy, security, bias, and explainability, all made more acute in the public sector. As a kind of primer and guide, this article charts the brief but relevant history of "AI" in government to inform the concerns and opportunities ahead.
Looking Back: "AI" in Law Enforcement
Indeed, AI is not a new phenomenon in government. It has been used, debated, and even litigated in the context of criminal justice and law enforcement for years. From facial and voice recognition algorithms to "black box" decision-making systems, AI has been both a source of controversy and a catalyst for progress.
Let's take a closer look at an example that underscores the importance of transparency and fairness in AI systems. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), developed by Northpointe, is an algorithm used across the United States to predict the risk that a defendant will commit future crimes. It has faced sustained scrutiny for alleged racial bias: ProPublica's 2016 analysis found that the algorithm's errors fell unevenly, with black defendants who did not reoffend nearly twice as likely as white defendants to be incorrectly labeled high risk. This highlights the urgent need for fair and accountable AI systems, particularly in consequential contexts such as criminal sentencing.
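To see what such an audit actually measures, consider the error-rate comparison at the heart of the ProPublica analysis. The sketch below uses invented toy data (not the actual COMPAS dataset or ProPublica's full methodology) to compute a per-group false positive rate: among people who did not reoffend, the share nonetheless labeled high risk.

```python
from collections import defaultdict

# Toy records: (group, labeled_high_risk, reoffended).
# Entirely made-up data for illustration; not the COMPAS dataset.
records = [
    ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rates(records):
    """FPR per group: of those who did NOT reoffend,
    what share were labeled high risk anyway?"""
    fp = defaultdict(int)   # labeled high risk, did not reoffend
    neg = defaultdict(int)  # did not reoffend
    for group, high_risk, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if high_risk:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

print(false_positive_rates(records))
# {'A': 0.667, 'B': 0.0} -- a gap like this, where one group's
# non-reoffenders are far more likely to be mislabeled high risk,
# is the kind of disparity ProPublica reported.
```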
The Rise of Generative AI & City Policies
The AI landscape has been revolutionized by the explosive growth of generative AI, with Large Language Models (LLMs) like OpenAI's GPT-4 and Google's Bard leading the charge. Trained on vast amounts of text data, these models can generate remarkably human-like text, unlocking immense creative potential and opening doors to countless applications.
In this space, two cities have emerged as early leaders: Boston and Seattle. Boston has actively encouraged its employees to experiment with Google's Bard, recognizing the transformative power of AI in improving government effectiveness and efficiency. The focus is not just on how to govern AI, but also on how AI can be used to enhance governance itself.

Seattle, meanwhile, has adopted a generative AI policy that acknowledges both the opportunities and the risks of the technology. By requiring permission to access or acquire generative AI products, validating their output, and keeping sensitive data out of prompts, the city demonstrates its commitment to responsible AI use. The formation of a Policy Advisory Team further signals its dedication to formulating a comprehensive policy on generative AI.
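Seattle's rule against feeding sensitive data into these tools is one of the few policy points that maps directly onto engineering practice. Here is a minimal sketch of that idea, assuming a hypothetical call_llm stand-in for whatever approved product an agency uses; the regex patterns are illustrative only, and a real deployment would rely on a vetted PII-detection service rather than a handful of regexes.

```python
import re

# Illustrative patterns only; not any city's actual tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with typed placeholders
    before the text leaves the agency's systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an approved generative AI product.
    return f"(model response to: {prompt})"

def ask_llm(prompt: str) -> str:
    return call_llm(redact(prompt))

print(ask_llm("Resident jane.doe@example.com (SSN 123-45-6789) asked about permits."))
# The model only ever sees: "Resident [EMAIL] (SSN [SSN]) asked about permits."
```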
Navigating Privacy and Security Concerns: Protecting the Public Trust
While generative AI holds incredible promise, it is essential to address the significant privacy and security concerns it poses, especially in public institutions and regulated industries. These concerns include accidental data sharing, heightened biases, security threats, and intellectual property challenges.
Lessons from New York City's AI Law: Guiding the Way Forward
As we grapple with these challenges, we can draw inspiration from existing legislation that paves the way for responsible AI use. According to the New York Times, NYC's AI law, enacted in 2021, offers valuable insights and practical guidelines. The law mandates that companies using AI software in hiring notify candidates, conduct annual independent audits for bias, and disclose the data collected and analyzed. Violations of these regulations are subject to fines. This groundbreaking law has even spurred the growth of the AI audit business, emphasizing the importance of accountability and transparency.
One notable aspect of the New York City law is the introduction of the "impact ratio." This calculation measures the effect of using AI software on protected groups of job candidates, sidestepping the intricate details of algorithmic decision-making. As AI systems grow increasingly complex, achieving full explainability becomes more challenging. The impact ratio approach strikes a balance between understanding the effects of AI — through AI audits — and avoiding an overwhelming emphasis on explainability.
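To see how lightweight that headline metric is, here is a sketch of the impact ratio calculation, echoing the familiar four-fifths rule from employment law: each group's selection rate divided by the rate of the most-selected group. The numbers below are invented for illustration.

```python
def impact_ratios(selected, total):
    """Selection rate per group, divided by the highest group's rate.
    selected / total: dicts mapping group -> candidate counts."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Invented numbers for illustration.
selected = {"group_a": 40, "group_b": 24}
total    = {"group_a": 100, "group_b": 100}

print(impact_ratios(selected, total))
# {'group_a': 1.0, 'group_b': 0.6} -- group_b is selected at 60% of
# group_a's rate; under the classic four-fifths rule, a ratio below
# 0.8 would flag potential adverse impact, no model internals needed.
```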
Looking Ahead: Embracing Opportunities for Generative AI and LLMs
As we navigate the complexities of AI integration in government, it is crucial to recognize the vast opportunities that generative AI and Large Language Models (LLMs) offer. These transformative technologies have the potential to make a significant impact across many areas of government work.
Looking Ahead & Finding Balance
As promising as these opportunities may be, I suspect they merely scratch the surface of what is possible, for better or for worse. As AI continues to evolve and become more integrated into our lives, we must approach its use thoughtfully, especially in the public sector. The opportunities presented by generative AI and LLMs in government are vast, and with careful planning and implementation they can significantly enhance efficiency, transparency, and accessibility. At the same time, careless use of machine learning and generative AI can threaten privacy and security, all while undermining democratic values of fairness and explainability.
So we must find a balance: a balance of principles and practicalities, of policies and personalities, and of means and ends. And finding that balance will be the challenge for good governance — and for good stewards, both public and private — in the age of AI.