Time for A Treaty on Artificial Intelligence
By Merve Hickok and John Shattuck [1]
Over the next few months, the Council of Europe is expected to wrap up negotiations for the first international treaty on Artificial Intelligence. The work is important and timely. Many governments are developing AI strategies and policies, but there is still little agreement on the basic norms to control this rapidly emerging technology. And there is growing concern that as AI systems are more widely deployed, the protection of fundamental rights and democratic values could be pushed aside.
The challenges are clearest in the private sector. There is abundant evidence, from healthcare to hiring, credit, and insurance decisions, that unregulated AI systems replicate bias and produce unfair outcomes. AI-powered surveillance systems in the private sector, from workplaces to school classrooms, suppress people’s rights and freedoms and now gather personal data for elaborate training models without restraint. Concerns such as these led former Presidential Science Advisor Dr. Alondra Nelson to call for an AI Bill of Rights.
Anticipating these problems, the Council of Europe began work on an AI treaty several years ago. The goal was a comprehensive and far-reaching agreement among nations about basic rules to govern AI that safeguard human rights, democratic values, and the rule of law. The first round of work resulted in the recommendation of a legally binding treaty that covers activities undertaken by both private and public actors. Because Council of Europe treaties are open for ratification by countries worldwide, many non-European nations, including the United States, the UK, Canada, and Japan, are participating in the ongoing negotiation.
Through much of the work, hopes remained high that the drafting process would produce a robust framework equal to the task of managing one of the most transformative technologies in history. But difficulties have emerged as negotiations approach the final stages. At a time when even the private sector agrees on the need for regulation, countries such as the United States are reportedly pushing for a “carve-out” for private-sector AI systems. Security forces, meanwhile, would like to remove national security AI systems from the scope of the treaty.
AI experts have sounded alarms about these recent developments. British computer scientist Stuart Russell, one of the world’s leading AI experts, told delegates that a treaty that fails to cover AI systems in the private sector will ignore the greatest risk to public safety today. A national security exclusion, he warned, could make nations more vulnerable to foreign adversaries and could also serve as a pretext for domestic mass surveillance and the narrowing of rights.
A recent survey of members of the Institute of Electrical and Electronics Engineers (IEEE), a leading association of computer professionals, confirms these concerns. A large majority of US IEEE members said that the current regulatory approach in the US to managing AI systems is inadequate. About 84 percent support requiring risk assessments for medium- and high-risk AI products, as the recently adopted European AI Act requires, and nearly 68 percent support policies that regulate the use of algorithms in consequential decisions, such as hiring and education. More than 93 percent of respondents support protecting individual data privacy and favor regulation to address AI-generated misinformation.
These concerns are widely shared. More than one hundred civil society organizations in Europe have now urged negotiators at the Council of Europe to remove the blanket exceptions for the tech sector and national security. Similar campaigns have brought together experts and advocates in Canada and the United States.
Polling data shows growing public concern about AI. In the United States, the Pew Research Center found that Americans are far more concerned about AI than they are enthusiastic, a gap that has widened over the last several years.
We believe there is a solution: return to first principles. The reason for a treaty is to bring nations together in support of common commitments. If some non-European nations have difficulty aligning with the common objectives, give them time for implementation and, if absolutely necessary, allow exceptions for specific purposes. But do not lose sight of the need for common commitments now among nations ready to move forward. The AI treaty does not prescribe domestic methods for implementation, and countries may differ in their legal systems and traditions. Those differences should not prevent us from uniting in the protection of human rights and democracy.
Several years ago, former Massachusetts Governor Michael Dukakis first called for a global accord on AI. “My concern is what happens to these technologies and whether or not we use them for good reasons and make sure they are internationally controlled,” said Dukakis. He warned that AI could hack elections, displace jobs, and replace human decision-making. He urged an international agreement on AI, as well as a new agency, similar to the International Atomic Energy Agency, to ensure AI is used for constructive purposes. Governor Dukakis has since launched the AI World Society and has spoken with world leaders about the need for an AI treaty.
And support is growing. In December, Pope Francis called for a legally binding international treaty to regulate artificial intelligence. He said algorithms must not be allowed to replace human values and warned of a “technological dictatorship” threatening human existence. The Pope urged nations to work together to adopt a binding international treaty that regulates AI development and use.
The new treaty presents a unique opportunity to address one of the great challenges of our age – ensuring that artificial intelligence benefits humanity.
[1] Merve Hickok is President of the Center for AI and Digital Policy, a global network of AI policy experts and advocates in more than 100 countries. John Shattuck is Professor of Practice in Diplomacy at the Tufts University Fletcher School and former US Assistant Secretary of State for Democracy, Human Rights, and Labor.