Why Democracy Belongs in Artificial Intelligence
By Josh Simons and Eli Frankel
In the last year, the art and literature worlds have seen new and rather unorthodox debuts. An up-and-coming writer churned out poetry, fiction, and essays with unprecedented speed, and several new visual artists have been generating other-worldly images and portraits. These artists are not people but artificial intelligence systems that appear, on the surface at least, to actually be intelligent.
Appearances can be deceptive, though. Behind the venture funding, sleek keynotes, and San Francisco high rises that produce systems like ChatGPT, there is a more straightforward kind of reasoning: prediction. Try typing something into it. What you see is not a system that understands, internalizes, and processes your request before producing a response. The response is generated by a neural network—layers of algorithms that have learned to predict useful outputs from all the text on the web. It looks like understanding, like watching an original author at work, but it isn’t. It is prediction, an exercise in mimicry. Even the most complicated “AI” systems out there are really powerful forms of machine learning, in which algorithms learn to predict particular outcomes from the patterns and structures in enormous datasets.
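To make that concrete, here is a deliberately tiny sketch in Python of prediction-as-mimicry. It is nothing like ChatGPT’s scale or architecture (which relies on deep neural networks trained over vast text corpora, not word counts); it only illustrates the underlying move of continuing text with statistically likely next words.

```python
from collections import Counter, defaultdict

# Toy training text. The "model" will learn which word tends to
# follow which, and nothing more.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the training text."""
    successors = follows.get(word)
    return successors.most_common(1)[0][0] if successors else None

# "Write" by always emitting the most likely next word.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

# The output is stitched entirely from patterns in the corpus:
# no understanding, just statistically likely continuations.
print(" ".join(output))
```

Scaled up by many orders of magnitude, with neural networks in place of lookup tables, the same basic logic of learned continuation is what drives modern text generators.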
This matters because most of the real harms these systems can cause—but also the opportunities they can afford—have nothing to do with robots taking over the world or self-generating AI systems. They have to do with what, how, when, and why we should use powerful predictive tools in the decision-making systems of our political, social, and economic organizations. How, if at all, should we use prediction to decide who gets a loan or mortgage, which neighborhoods police officers are sent to, which allegations of child abuse and neglect to investigate, or which posts to remove or results to display on Facebook? We shouldn’t expect the answers to be the same across these questions. The moral and political questions about the use of data-driven prediction in policing are often quite different from the questions about its use in the allocation of credit, and both are different still from its use to shape and moderate the public sphere. This means the policy solutions we develop to regulate the organizations that use data to make decisions—whether with simple linear models, machine learning, or perhaps even AI—should differ across policing, finance, and social media.
Since the policy challenges that predictive tools present depend enormously on what those tools are being used to do, we need an underlying idea that can animate the regulatory solutions we develop across domains. That idea should be the flourishing of democracy. From this idea, we can draw out principles—like the need to establish and protect political equality among citizens, to have a healthy public sphere, and to ensure that public infrastructure is shaped and guided by democratic structures—that can help us build a vision for governance of AI, machine learning, and algorithms.
The first step is to identify points of human agency: the choices that actual human beings make when they build data-driven systems. This requires unpacking how computer scientists and engineers define the target variables to predict, construct and label datasets, and develop and train algorithms. This can sound far more complex than it is, so it is worth spending some time getting to grips with it. Everyday choices made by computer scientists in government, business, and nonprofits implicate moral values and political choices. When, for example, computer scientists and policymakers used machine learning to respond more efficiently to domestic child abuse complaints, they found themselves inadvertently relying on data that reflected decades of prejudicial policing. There is no neutral way to build a predictive tool. What’s more, my research unpacks the political character of the choices involved in building predictive tools to show that, more often than not, we end up confronting new versions of old, deeply rooted problems.
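To see how many human decisions hide in a few lines of model-building code, consider this hedged sketch. The dataset, field names, and target are invented for illustration; they do not describe any real child-welfare system.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical screening records, invented for illustration only.
referrals = pd.DataFrame({
    "prior_referrals":   [0, 3, 1, 5, 2, 0],
    "police_contacts":   [1, 4, 0, 6, 2, 0],
    "neighborhood_code": [2, 7, 2, 7, 5, 1],
    "case_opened":       [0, 1, 0, 1, 1, 0],
})

# Choice 1: the target variable. Predicting "was a case opened?" is not
# the same as predicting "was a child at risk?"; the label records past
# agency decisions, along with whatever bias shaped them.
y = referrals["case_opened"]

# Choice 2: the features. Including prior police contact or a
# neighborhood code imports the history of how those records were made.
X = referrals[["prior_referrals", "police_contacts", "neighborhood_code"]]

# Choice 3: the objective. Fitting for accuracy on this data optimizes
# fidelity to past practice, not a neutral measure of need.
model = LogisticRegression(max_iter=1000).fit(X, y)
```

Each of those three lines looks like engineering; each is also a moral and political choice.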
My research addresses perhaps the oldest challenge in democracy: securing meaningful political equality among citizens. When parole boards in the United States began using data to predict recidivism risk, they ran into a century of racism captured in data. Histories of prejudice in the American criminal justice system are recorded in the data used to train machine learning algorithms, and those algorithms can then reproduce and supercharge those patterns of injustice. What makes predictive tools an interesting object of moral and political inquiry—and ultimately public policy—is that you have to decide what attitude to take toward that historic injustice when you build those tools. If you try to assume a neutral stance and simply build the most accurate tool, the effect will be to reproduce and entrench the underlying patterns of injustice. That is what prediction does: it reproduces the patterns of the past, and when those predictions are used to shape the future, the future is shaped in the image of the past.
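A small synthetic demonstration (invented numbers, not real criminal-justice data) shows the mechanism. Suppose two groups have identical true reoffense rates, but one was historically watched more closely, so more of its reoffenses were recorded:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups, 0 and 1, with the SAME true reoffense rate (30%).
# Group 1 was historically policed more heavily, so its reoffenses
# were more likely to be recorded: the labels encode enforcement,
# not behavior.
group = rng.integers(0, 2, n)
reoffended = rng.random(n) < 0.30
detection_rate = np.where(group == 1, 0.9, 0.5)
recorded = reoffended & (rng.random(n) < detection_rate)

# The "most accurate" model of the recorded labels learns to treat
# group membership as a risk signal.
X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, recorded)

# Roughly 0.15 for group 0 and 0.27 for group 1: the model reproduces
# the pattern of historical scrutiny, not of actual behavior.
print(model.predict_proba([[0], [1]])[:, 1])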
My research explores another challenge democracies have wrestled with since ancient Athens and the Roman Republic: maintaining a healthy public sphere. When Facebook and Google use machine learning systems to predict what content will most engage users and then rank that content from most to least likely to engage, they create a public sphere structured around engagement. And, again, any way of using prediction to rank and order the information and ideas that circulate in the public sphere implies a set of moral and political principles about what the public sphere should look like in a democracy. When we ask about the proper content-targeting goals for social media sites, we surface ancient debates in moral philosophy about truth and access to information in the town square. That means that in policy and regulation, we must confront underlying questions about what we want our public sphere to look like to support a healthy democracy—questions we often pretend we can ignore or delegate to superficially technocratic regulators.
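The mechanism itself can be stated in a line or two of code. In this minimal sketch (the posts and scores are invented; this is not any platform’s actual system), the entire editorial philosophy of a feed is the sort key:

```python
# Invented posts with invented engagement predictions.
posts = [
    {"id": "calm-explainer",    "predicted_engagement": 0.21},
    {"id": "outraged-hot-take", "predicted_engagement": 0.88},
    {"id": "local-news",        "predicted_engagement": 0.43},
]

# Rank by predicted engagement, highest first. Whatever the predictor
# rewards (outrage, novelty, tribal affirmation) rises to the top.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    print(post["id"], post["predicted_engagement"])
```

Change the sort key and you change the public sphere; that is why the choice of ranking objective is a political question, not merely a technical one.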
In the end, the widespread use of prediction in our world may force us to turn back to democracy and appreciate that everything is political all the way down. We can only build structures of governance and regulation for AI, machine learning, and algorithms by wrestling with questions about the character of our shared world and how we relate to one another as co-inhabitants of physical and digital public spaces. And that is ultimately what democracy is for: providing a structure, a shared set of processes and institutions, to empower us to answer those questions as a society over time. We should be grateful we live in one.
Josh Simons is a research fellow in political theory at Harvard University. He has worked as a visiting research scientist in artificial intelligence at Facebook and as a policy advisor for the Labour Party in the UK Parliament. Eli Frankel is a student at Harvard College. He has worked as a researcher at the Edmond and Lily Safra Center for Ethics.