Aviation to AI: How to Regulate Artificial Intelligence?

Discussions from AI Governance Day at the 2024 AI for Good Global Summit

The 2024 AI for Good Global Summit opened not with flashy demonstrations or futuristic predictions, but with a grounded discussion of AI governance.

Most agree that artificial intelligence, especially its recent iterations, is a transformative technology that needs regulation, but creating governance is complicated for several reasons, including:

  • Regulation will likely struggle to keep pace with rapidly evolving technology.
  • Over-regulation may stifle innovation or give bad actors an advantage.
  • Lack of regulation brings risks reminiscent of dystopian novels.

The most tangible examples of what a large-scale framework for AI governance could look like are built on lessons learned from the regulation of other industries. This article highlights the proposals from several Summit speakers:

  • Dr. Stuart Russell on Learning from Other Industries to Regulate AI
  • Thomas Schneider on Context-Based AI Regulation
  • Emma Inamutila Theofelus on Leveraging Existing Frameworks for AI Governance
  • Jim Zemlin on Open Source and Regulatory Burden


Dr. Stuart Russell on Learning from Other Industries to Regulate AI

If you are only going to watch a single clip from AI Governance Day, let it be the seven-minute segment from Dr. Stuart Russell in the session The Critical Conversation on AI and Safety. Dr. Russell drew on his 48-year career in AI to explain the potential and dangers of the emerging technology.

Dr. Russell’s Segment: 3:16:51 to 3:23:42


Aviation Safety as a Model

According to Dr. Russell, perhaps the best industry we can learn from when debating how to govern AI is aviation. He pointed out that air travel, once fraught with risk, is now one of the safest modes of transport thanks to rigorous regulatory standards. “We are very happy to be flying on extremely safe aircraft,” he said, highlighting that the safety protocols include certification of airworthiness, pilot training, regular inspections, and international collaboration through organizations like the International Civil Aviation Organization (ICAO).

Russell suggested a similar multi-layered approach for AI. Just as aircraft must meet strict safety standards before being released to the market, AI systems should undergo thorough testing and certification. This would involve not just the technology itself, but also the broader ecosystem, including supply chains and operational environments.

Learning from Nuclear Power

Russell also referenced the nuclear industry, known for its stringent safety measures. He noted that every kilogram of a nuclear power station is backed by seven kilograms of paperwork, reflecting the exhaustive safety documentation required.

Despite these extensive regulatory efforts, incidents like Chernobyl highlight the necessity of constantly upgrading our safety standards.

Russell pointed out that AI, particularly technologies like deep learning and transformers, is not receiving the same safety-engineering commitment despite its comparable potential for impact. He described these systems as “black boxes” whose internal workings are not well understood, making it challenging to ensure their safety. He also noted that nuclear power plants have a “mean time to failure” that has risen from 10,000 years to 10 million years. Shouldn’t we expect the same from technologies that could be just as transformative (and potentially destructive)?
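
To put those numbers in perspective, here is a rough back-of-the-envelope reading (the fleet size of about 400 reactors is an illustrative assumption, not a figure Russell cited). For a fleet of $N$ reactors, the expected number of serious incidents per year is roughly

$$\frac{N}{\text{MTTF}}: \qquad \frac{400}{10^{4}\ \text{yr}} = 0.04/\text{yr} \quad \text{versus} \quad \frac{400}{10^{7}\ \text{yr}} = 4\times 10^{-5}/\text{yr}.$$

In other words, raising the mean time to failure from 10,000 to 10 million years turns a fleet-wide incident from a once-every-25-years event into a once-every-25,000-years one, which is the kind of assurance Russell argues comparably impactful AI systems should offer.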

Pharmaceuticals and Clinical Trials

The pharmaceutical industry’s use of clinical trials to ensure drug safety provides another instructive example. Medicines must pass extensive testing phases before reaching the market. Russell proposed that AI systems should similarly undergo phased evaluations, ensuring they meet safety and ethical standards before widespread deployment.

Thomas Schneider on Context-Based AI Regulation

In an earlier session, State of play of major global AI Governance processes, Thomas Schneider, Ambassador and Director of International Affairs for the Swiss Federal Office of Communications, also looked to other industries and emphasized the importance of context-based regulation for AI. In particular, he drew an analogy to the development and oversight of engines. He noted that engines are regulated based on their application, not as standalone technologies. “We regulate the people that are driving the engines, the infrastructure, and those affected by it. So it’s all context-based,” he explained.

Schneider suggested applying this logic to AI by focusing on the specific risks and impacts of various applications rather than the technology itself. This approach allows for tailored regulations that address the unique challenges of different AI uses while maintaining a cohesive global framework.

Emma Inamutila Theofelus on Leveraging Existing Frameworks for AI Governance

Emma Theofelus, Minister of Information and Communication Technology of Namibia, focused on the strengths of the UN system and highlighted the importance of leveraging existing regulatory frameworks for AI governance. In her session The Government’s AI dilemma: how to maximize rewards while minimizing risks?, she stated, “We don’t necessarily need to create new institutions or build new ones; we can already build on existing capacities and institutions.”

Theofelus advocated for a human-centric approach to AI that builds on established human rights frameworks to guide AI governance. She also highlighted the need for inclusive data governance and global cooperation to ensure that AI benefits all regions and communities.

Jim Zemlin on Open Source and Regulatory Burden

Finally, we heard from Jim Zemlin of The Linux Foundation in the session To share or not to share: the dilemma of open source vs. proprietary Large Language Models, where he took a more measured approach than earlier speakers. Zemlin highlighted how regulated industries can still be given technological freedom, noting that complex systems such as airports rely on open-source software, which demonstrates that outputs, rather than tools, can be effectively regulated. “All of the jets, air traffic control systems that are regulated for our safety run on open source software,” he noted.

Zemlin argued that the regulatory burden should fall on the industries that implement these technologies, not on the developers. This ensures that innovation continues to flourish while maintaining necessary safety and compliance standards. By learning from how other industries manage open source within regulatory frameworks, AI can similarly benefit from both innovation and robust regulation.

Conclusion

The 2024 AI for Good Global Summit outlined key strategies for effective AI regulation, often drawing lessons from established industries. Experts like Dr. Stuart Russell and Thomas Schneider emphasized the importance of rigorous safety protocols and context-specific regulation. Emma Inamutila Theofelus advocated building on existing institutions, human rights frameworks, and global cooperation, while Jim Zemlin highlighted how technological freedom can still be achieved even in highly regulated industries.

While none of the industries mentioned is perfectly analogous to the systems needed to govern artificial intelligence, each provides insight into effective regulation. By learning from other sectors, the global community can shorten the timeline for rolling out effective regulatory frameworks that balance progress with safety and inclusiveness.

Learn More

If you want to dig deeper into the governance discussions, a full video of the plenary sessions is available on YouTube:

AI For Good: Day 0 — AI Governance Day

You can also explore the full programme and list of speakers on the AI for Good Global Summit website.


SDGCounting is a program of StartingUpGood that tracks progress on counting and measuring the success of the SDGs. Follow us on social media.

For the latest on innovative entrepreneurship and social enterprise, check out StartingUpGood on Twitter/X and LinkedIn.


Disclaimer: Generative AI tools such as OpenAI’s GPT and Google’s Gemini were used in the creation of this article to assist with summarization and proofreading.

