Key insights from the 2023 AI Policy Summit at ETH Zurich
Jan Kleijssen, Dominick Romano, Wojciech Wiewiórowski, Ayisha Piotti, Paul Nemitz, Charlotte Burrows


Introduction

On November 3rd and 4th, 2023, during a week likened to the World Cup of AI Policy, over 107 countries focused their attention on the historic ETH Zurich in Zurich, Switzerland. ETH Zurich is a school with over 180 years of history, a place where Einstein once roamed the halls as a young student. For the last two centuries, civilization has repeatedly faced crises of innovation and deliberated over their potential humanitarian impact. The previous such debate concerned nuclear science and the proliferation of nuclear weapons in war; the latest is Artificial Intelligence.

With the Bletchley Park Summit concluded, we took to Zurich for a dynamic event on the state of current AI Policy, with consideration for the progress made throughout the week. I was honored to take part in the AI Policy Summit at ETH Zurich, hosted by the ETH Zurich Center for Law and Economics and RegHorizon, where I had the opportunity to invite key guests and take part in closed roundtable sessions in which openness was encouraged to think through the problems and solutions around AI Policy. This event was packed with leaders, academics, and policymakers, including Gabriela Ramos, Amandeep Gill, Roger Dubach, Tawana Petty, Ojenge Winston, PhD, Robert Trager, Gry Hasselbalch, and Charlotte Burrows. Here are the key takeaways of the entire event.

Day One:

“Innovators have a moral obligation to bridge the gap between the technological world and democracy.” For an early start, this statement by Paul Nemitz, the Godfather of GDPR, deeply impacted me. The night before, I had the opportunity to have dinner with Paul and other leadership from the European Commission and European Council; his brilliance never stops.

Something I found striking about policy leadership in Europe is their honesty: the EU AI Act is not complete; the job is not done. It is an evolving document, a starting point, and European policymakers are more open to suggestions and changes than many would assume. Their willingness to make improvements based on merit rather than lobbying is frankly nothing short of breathtaking, and that should excite innovators around the world, as it did me.

Europe will be a market for business in the age of Artificial Intelligence, but engineers should follow compliance-by-design principles rather than retrofitting compliance after a system has been designed.

As the prolific Professor Abraham "Avi" Bernstein said to me while walking to lunch during the summit: "The importation of technology is the importation of culture."

From the view of the EU, why regulate?

  • Value System
  • Drive for Tech Leadership
  • Human Machine Teaming
  • AI enables new applications never before seen in the world (e.g., facial recognition).

The priorities:

  1. Build on existing rules.
  2. Technology-neutral regulation.

The underlying issue: a broad approach, for long-lasting impact now and meaningful impact later.

“The duty of Democracy in the technological age is to harness and control technological power”

This quote was an underlying part of many discussions both in public and behind closed doors where the proliferation and massive scale of online platforms and their ability to interfere with, and influence democracies is of concern to many nations throughout Europe.

“We cannot accept that we will never understand/explain AI.” This statement stood out to me; there is a deep willingness in Europe to go the distance in order to understand and harness the power of this technology. Research on it will not pause; it will not stop. If the good stop researching AI to stay ahead of bad actors, the bad actors will inevitably win, and the global community cannot accept that security risk. This means we need to act now to find common ground internationally and avoid the potential proliferation of such systems by bad actors.

There was an exchange between Gary Marcus, the world-renowned NYU Professor Emeritus of Psychology, and Thomas Schneider, Chair of the Committee on Artificial Intelligence at the Council of Europe, during a panel which hammered home the need for increased international cooperation on standards and basic principles.

During the last panel of the day, I (Dominick Romano) had the pleasure of taking the stage with Kristina Podnar, Farah Lalani, Cornelia Schaurecker, and Lisa Bechtold, Ph.D., LL.M.


During my contribution to the panel, I highlighted that regulations, if done right, can help accelerate innovation by laying out a path to market, and that software needs to be designed for compliance prior to development. I also laid out a four-pronged approach to the ethical considerations of designing and deploying software in the age of Artificial Intelligence:

Baseline: Cyber Security, safeguarding AI systems:

  • Cyber threats
  • Protection of data

1. Security and Infrastructure

  • Open Source
  • Private Models
  • Confidential Computing

2. Fairness and Transparency

  • Bias Mitigation
  • Data Retention
  • Data Governance
  • Transparency / Explainability
  • Partner Ecosystem
  • Data Sourcing

3. Policy and Societal Impact

  • Public AI Policy
  • Regulatory Updates
  • Data Diversity
  • Cyber Bullying
  • Deep Fakes
  • Sextortion
  • Synthetic training data
  • Human in the loop deployment

Day Two:

The second day of this event was where we went to work, fighting for change. The sessions were closed-door, so I will not be able to mention who was present or who said what, but let's dive into the key takeaways with respect for the rules of the discussions.

Where are we today in the role of governments and enforcement of AI Policy?

  • Rules and regulations are needed before anything can actually be enforced.
  • Education on the subject is key.
  • Software engineers need to think about software the way they think about brick and mortar: a building must be designed for compliance with regulations to ensure its safety, and Artificial Intelligence used at mass scale will be no different.
  • We are lacking standards: not only national standards in various countries, but also internationally accepted ones.
  • Labeling of a product class is important to understand the potential risks.

“The model is not intelligent!” Yes, yes, it's not. The mainstream nature of such a complex computer-science and mathematics topic has obfuscated the conversation about what the real risks are, and has led to overreactions about risks which have yet to be supported by any scientific merit, though they do require careful consideration of potential impact with respect to regulation.

  • It's very important to distinguish between the model developer and the model deployer, because the two are not always the same entity or party.
  • GDPR differentiates use, process, and deployment.
  • As developers continue to rely on AI there are dangerous implications in the potential failure to patch systems which are made live by engineers lacking a deep technical understanding of the software they deploy.
  • Globally, we lack a translation of existing national and international laws for AI.
  • We are lacking experts with the competence to bridge the technological world with Democracy.
  • Global power has shifted since the advent of large digital platforms which can, and have impacted democracies.
  • We are missing critical guidance and design guidelines for software developers building Artificial Intelligence models and systems leveraging AI.
  • The EU AI Act will deeply impact a wide swath of use cases.
  • GDPR is, and will continue to be, a major component of AI regulation in Europe and of the ethical use of the data used to train AI models.
  • There is an imminent need to introduce ethics training in computer science programs at universities. The risk here is that changing one generation of educators traditionally takes 40 years.

Conclusion:

One of the most impactful and possibly important takeaways I had from the summit was watching an opinion I previously held change in a dramatic way. Prior to my time in Switzerland, I did not believe we needed a CERN for AI: an institute for multinational investment in evolving the math and sciences powering Artificial Intelligence.

Today, my opinion has strongly changed. It is critical we have an institution of international collaboration on Artificial Intelligence. Why?

  • International Consensus is needed to protect human rights in the age of Artificial Intelligence.
  • There are countries which do not have the national compute capacity to train a model like ChatGPT.
  • The imbalance of computation capability in the age of Artificial Intelligence is not just a crisis many nations are facing, it’s a global security crisis that impacts each and every human being alive today, especially SDG Impacted Nations which are too frequently under-represented in such conversations.
  • The imbalance of computation power is being further complicated by global warming. Federal computation facilities around the world are strategically placed where natural resources, such as rivers, can be used for cooling; as those rivers dry up, computational capacity is further threatened.
  • Leadership is needed now more than ever. We need to bring every nation to the table, yes, even adversaries, to find consensus as a global community, as a single race of people, in order to protect humanity from the potential existential threats of a future where humans will continue to team with machines. That was one of the most beautiful milestones of the Bletchley Declaration on AI Safety: 28 countries which do not necessarily agree on everything found it within their moral compass to agree on something, the need to collaborate on Frontier AI to reduce global risk. The US, EU, China, India, Japan, Brazil, Ireland, the United Kingdom, the Kingdom of Saudi Arabia, the United Arab Emirates, and Nigeria are among those 28 signatories.
  • Although historic, the Bletchley Declaration fell short in bringing the larger global community to the table, with many feeling left out, wondering what is next, or seeking actionable guidance.
  • A resoundingly alarming fact: during the "World Cup of AI Safety and Policy," I did not hear enough about AI use during armed conflicts. It felt like the elephant in the room no one wanted to talk about, but uneasily we all know it's there. There is a vital need to reach a consensus, or a translation of existing international human rights laws, with larger international support and a specialized focus on protecting human rights against AI use during armed conflicts. New international mechanisms may need to be developed and negotiated in order to enforce such an agreement.

A quote from Day One: “Switzerland remains open and hopeful for multilateral cooperation and common principles.”


Another set of important takeaways on the influence of academia and technology executives, and the existing knowledge gaps that need to be filled:

  • There is a big gap between academia, executive leadership, and policymaking which persists in the conversation on AI Policy and Regulation. This requires the immediate attention of policy roundtable organizers, who need to bring to the table the global leaders competent at bridging the divide between the three, and to put them at the forefront of educating the general public on designing systems for compliance with regulations.

Academia understands the computer science behind AI but does not emphasize designing and implementing AI systems to comply with evolving law, policy, and regulation; this subject is not well enough addressed in academic institutions throughout the world. Many of the academics we rely on for information vital to Artificial Intelligence:

  • Lack an understanding of the reality of adapting AI technologies to production in compliance with evolving law and policy across multiple jurisdictions.
  • Are paid for research by the large technology companies which stand to gain the most from their influence.
  • Choose to open source risky frontier models with little to no consideration for the jurisdictions in which these models will eventually run, the societal impact, or the potential global impact of models without guardrails.
  • Lack the funding and computational power vital to advancing critical research.

Executive leadership at large technology companies often lacks deep technological understanding of Artificial Intelligence, and there is a real concern about attempts at regulatory capture and about the exaggeration of dangers and risks (without scientific merit) to sound superior for PR. Others signed the "Pause AI Development" petition earlier this year but went on to do the opposite, which is seemingly disingenuous.

  • Artificial Intelligence is not a topic that bureaucrats, technology executives, and academics should be deciding on alone; the risks are too high. Diverse input on practicality, safety, risk, and implementation, with respect for the underlying sciences, is needed to bridge the technological world with democracies.
  • More competent computer science experts in charge of designing and deploying frontier AI systems need to stand up and contribute to policy discussions.
  • Lobbying and bureaucracy should not interfere with doing the right thing with respect to human rights, AI policy, and regulatory implementation. Nor should lobbying guarantee a seat at the table to influence policy and regulations.
  • Any United Nations Agency which allows private memberships for voting on standards, should require continued contribution to research & development in order to maintain membership.
  • History will depend on us at this moment in time, in defense of Democracy.


Considerations for software engineers and architects:

A three-principle approach to deciding not if we can, but if we should:

  • Net Utility: across different use cases
  • Net Value: value added to society
  • Upholding Human Rights

Design software to be compliant with the strictest regulatory legislation. It might be more difficult, and it might not seem worth the investment, but it will make international adoption and compliance a lot easier in the long run. For technology intended for import into the EU, this involves some of the following considerations:

  • Compliance with GDPR and clear labeling of how a user's data will be used, including whether it is trained into an AI model. The user will have the right to remove their data from a training corpus, so in the event a user's data is trained into a model, there need to be considerations for how to ensure the resulting weights are removed.
  • Obfuscating data for privacy will, in many cases, get you closer to preserving a user's privacy rights in the EU; however, it still requires respecting the user's right to control how their data is used. This was a topic I discussed with both the EDPS (European Data Protection Supervisor) and the European Commission director of GDPR implementation.
  • Move algorithms to the data, rather than the data to algorithms. In many countries it is illegal to export customer data across borders. This increases the complexity of compute constraints within nations, but it must be designed into systems.
  • Reference research published by the International Telecommunication Union. As the UN's oldest agency, predating the United Nations itself, the ITU has worked with researchers to provide years' worth of research and valuable insights into the development and deployment of Artificial Intelligence systems, which is too frequently overlooked when discussing standards and policy.
  • Document everything: design, ethical considerations, principles. Define it all, keep track of updates, and maintain a clean history to show in the event of an audit one day.
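The erasure and obfuscation points above can be sketched in code. The following is a minimal, hypothetical illustration of pseudonymizing user identifiers before records enter a training corpus, so that a GDPR erasure request can still locate and drop a user's records prior to training; the field names, salt handling, and consent flag are assumptions for illustration, not a reference implementation or any specific library's API.

```python
import hashlib

# Assumption: the salt is stored and rotated separately from the corpus,
# so the pseudonym alone cannot be reversed to a user identity.
SALT = b"store-me-outside-the-corpus"

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

def prepare_record(record: dict) -> dict:
    """Strip direct identifiers; keep a pseudonym so erasure stays possible."""
    return {
        "pseudonym": pseudonymize(record["user_id"]),
        "text": record["text"],
        "consented_to_training": record.get("consent", False),
    }

def erase_user(corpus: list[dict], user_id: str) -> list[dict]:
    """Drop every record belonging to a user from the pre-training corpus."""
    pid = pseudonymize(user_id)
    return [r for r in corpus if r["pseudonym"] != pid]

corpus = [
    prepare_record({"user_id": "alice", "text": "hello", "consent": True}),
    prepare_record({"user_id": "bob", "text": "hi"}),  # no consent recorded
]
corpus = [r for r in corpus if r["consented_to_training"]]  # consent gate
corpus = erase_user(corpus, "alice")                        # erasure request
```

Note this only covers data that has not yet been trained into a model; once weights encode the data, removal requires retraining or machine-unlearning techniques, which is exactly why the design-time considerations above matter.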


Insights on the United States Executive Order on Artificial Intelligence:

  • This is not the start of policy; it is a signal to federal agencies in the United States, laying out the initial strategy, priorities, and deliverables on the subject.
  • This will have very little impact on companies engineering AI Systems, as of right now.
  • This will likely have an impact on companies engineering AI systems to be sold to the US Federal Government.
  • Highly capable AI Systems will likely be considered dual-use under the US Defense Authorization Act, and exports will likely be regulated by the US Department of Commerce.
  • Equal Employment Opportunity is a priority.
  • We have more to do to explain to policymakers that, in AI, computation means nothing if the data is garbage. Garbage in, garbage out.
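The garbage-in, garbage-out point can be made concrete with even a trivial validation pass over a corpus before training. This is a minimal sketch with assumed field names and label sets; real pipelines would add deduplication, language checks, and provenance tracking.

```python
def validate_example(ex: dict) -> list[str]:
    """Return a list of data-quality problems found in one training example."""
    problems = []
    text = ex.get("text", "")
    if not text.strip():
        problems.append("empty text")
    if ex.get("label") not in {"positive", "negative", "neutral"}:
        problems.append("unknown label")
    if len(text) > 10_000:
        problems.append("suspiciously long text")
    return problems

def filter_corpus(corpus: list[dict]):
    """Split a corpus into clean examples and rejects with reasons."""
    clean, rejects = [], []
    for ex in corpus:
        problems = validate_example(ex)
        if problems:
            rejects.append((ex, problems))
        else:
            clean.append(ex)
    return clean, rejects

clean, rejects = filter_corpus([
    {"text": "great product", "label": "positive"},
    {"text": "", "label": "positive"},       # rejected: empty text
    {"text": "meh", "label": "mixed"},       # rejected: unknown label
])
```

Even this crude gate surfaces how much of a corpus would otherwise feed noise into training, which is the point worth explaining to policymakers: compute budgets cannot compensate for it.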


I would like to thank the ETH Zurich Center for Law and Economics, Director of AI Policy Ayisha Piotti, and RegHorizon. I would also like to thank everyone who attended and made the event a special moment in history; I believe a lot of progress was made. A very special thank you as well to drainpipe.io Trust and Safety Advisor Lisa Thee for her brilliant contributions to our trust and safety framework.

I remain committed to AI for Good: the safe and ethical design, development, and deployment of technology with respect to the jurisdictions in which its users reside. As I continue my world tour into the end of 2023, I look forward to continuing to make progress in educating the general public as to where we are with AI Policy, and how to begin designing systems for the policy frameworks already being put in place.

My next stop, Washington DC.


Markus Senn

Global Head of Strategic Data Science


Thanks for the summary, Dominick Romano. Great event! "We are lacking experts with the competence to bridge the technological world with Democracy." -- Indeed, it was striking to see that practitioners from the tech side largely abstained from the discussion. "I have not heard enough about AI use during armed conflicts." -- Yes. More broadly, AI tools in the hands of state actors can tremendously impact individuals: we have heard examples from the Netherlands and Australia. We cannot assume that governments will always deploy AI competently and in democratically legitimate ways.

Ojenge Winston, PhD

Senior Research Fellow and Head of Digital Economy Program at the African Center for Technology Studies; Lecturer at Technical University of Kenya in Computer Science and Robotics


How nice this looks!

Farah Lalani

Improving Trust & Safety at Amazon Prime Video | Policy and Operations | AI Ethics


Loved hearing about your work and was an honour to sit alongside you on the AI Ethics Panel!

Aldo Lamberti

Founder @ Syntheticus | Vice Chair @ IEEE


was a pleasure meeting you!

