Why a handful of nations shaping the global governance of AI is a recipe for disaster

Thanks to OpenAI's launch of ChatGPT last November and the UK's hosting of an AI Safety Summit today and tomorrow, the entire world is now cognisant of the immense risks AI poses to human safety, of its potential for unaccountable concentration of power and wealth, and of the need for unprecedented global action to tackle them.

Many do's and don'ts can be learned from the way we managed the risks of nuclear weapons and the promise of nuclear energy after WW2.

The choices made back then by the five winners of WW2, and sole veto-holders of the UN Security Council, have prevented a nuclear catastrophe so far. But we came very close to one several times, and nuclear risk is higher today than it has ever been.

They chose to manage the risks via an informal and loose coordination of their intelligence agencies and various pressures on other states, while keeping the science of nuclear energy mostly to themselves. It was only after all five had achieved a solid nuclear weapons capability that the IAEA was established in 1957, to prevent others from doing the same.

Today, the risks of AI are even larger, its proliferation is harder to control, and the time to act is shorter.

Attempts by a handful of nations and firms to shape and control the global governance of AI, as happened with nuclear technology, are a sure recipe for disaster, mainly because not enough nations and people will trust such governance sufficiently to comply with the global bans and oversight that will be needed.

Enacting a process to build such governance that reconciles participation, inclusivity and effectiveness is rife with huge complexities and risks.

For these reasons, last June 28th our Trustless Computing Association launched the Harnessing AI Risk Initiative, calling for a critical mass of globally diverse nations to gather in meetings, in Geneva and online, to design and agree upon the Rules for the Election of an Open Transnational Constituent Assembly for AI and Digital Communications, which would create suitable intergovernmental organizations: https://www.trustlesscomputing.org/harnessing-ai-risk-proposal

We are seeking partners of all kinds, and donors for a $1.5m grant proposal to carry on this work: https://www.trustlesscomputing.org/join-or-donate

Join us!

#aisafetysummit #aigovernance

AMIT DAS

Head, Centre for Artificial Intelligence & Machine Learning; Head, Office of International Relations & Studies, The ICFAI University Dehradun; AI Economy, AI-enabled Society & Digital Diplomacy, Chinese AI Policy

1 year ago

Yes, it would be a strong recipe for disaster in the form of huge global inequality: financial inequality (due to AI's redefinition of the global product market), weapon-system inequality (AI will create autonomous and more lethal systems, so it could be tough for non-AI or low-AI-skilled countries to cope with them), technological inequality, inequality in responsible AI use by countries, and many more. It will change the global dynamics and behaviour of the world in unpredictable patterns due to the deployment of uncontrolled chip-based intelligence.
