AI Safety Summit in Bletchley Park, UK—why the safety of AI should concern us all

Among all the concerning world news of the past weeks and months, potentially one of the most important conferences for the future of humanity went nearly unnoticed by the global media, politicians, and the public. The first global AI safety summit was held last week at Bletchley Park, UK, where the UK's Prime Minister Rishi Sunak had invited leaders from across the globe.

Not long ago, I listened to a panel with former United States Secretary of State and National Security Advisor Henry Kissinger at the Bund Summit 2023 in Shanghai, where he spoke about the safety of artificial intelligence (#AI) and described the global governance of AI as one of the most important tasks facing today's leaders. He also revealed that, in Bletchley, leaders from around the globe, including from China and the US, would come together to discuss how to regulate AI and how to make it safe.

Rishi Sunak's major foreign-policy coup was probably not only bringing China and the US to the table, but also having both join the group of 28 nations signing the declaration that sets an agenda for safer AI. The declaration discusses the opportunities and risks of AI in surprisingly great detail, addressing the risks of "frontier AI," which, according to the joint declaration, can be understood as highly capable general-purpose models, including foundation models. It names risks across multiple fields, such as large-scale disinformation, cybersecurity, and biotechnology threats, with potentially catastrophic results.

Finally, the declaration does not just call blindly for regulation but proposes international cooperation that should be pro-innovation, with a proportionate governance and regulatory approach. It also calls for many stakeholders to work together: nations, academia, companies, international fora and initiatives, and civil society. The declaration sets an agenda for jointly identifying risks and building risk-based policies. It also stresses building inclusive, cross-country research on AI safety that encompasses existing players.

All this is probably more than even Kissinger expected when he discussed the risks and the need for such a declaration, specifically mentioning the key players in AI, China and the US.

So, what is next? What can we learn from the summit and its promising results?

1. AI Leadership: Is it China or the US, or both, or all countries?

Unsurprisingly, both China and the US claim to be leaders in AI with the capability to lead by example and make other nations follow. US Vice President Kamala Harris spoke at a press conference at the summit and underscored the leading role of the US by saying, "When it comes to AI, it is America that is a global leader and that can catalyze global action and build global consensus in a way that no other country can." The US also released a new executive order signed by President Joe Biden to that effect. China's tech vice minister Wu Zhaohui emphasized China's Global Artificial Intelligence Governance Initiative as an example other countries could follow, and also suggested stronger inclusion of the Global South. His call was echoed by India's IT Minister Rajeev Chandrasekhar, who expects leading nations not to leave other nations behind, saying that the era of global tech dominance is over and that the benefits of AI should be there for all countries, not demonized. This is a very important point: many participants seemed to share the view that this race is not about being #1, but about what we can do together and how we ensure participation from all countries. Hopefully, this position will be carried forward into future summits.

2. What is the best approach to safe AI?

Prof. Stuart Russell from the University of California at Berkeley, US, said during the summit that the overall approach to safe AI must change. He believes that one cannot first develop AI and then put a team of smart people together to wonder how to make it safe. Instead, safety must be built in from the very beginning, a point he has also made earlier. I believe he is right: we should stop thinking about how to make AI safe and instead think about how to make safe AI. This is clearly a call to all companies and academics who constantly develop new models to push capabilities further. Some voices, including Elon Musk, have been calling for a moratorium on new development, while others have dismissed such calls as an attempt to "catch up" by those whose own companies' capabilities are behind. But that is a different story.

Nevertheless, building safe AI starts with clean datasets, ensuring the lowest possible bias in both the data and the algorithm, a practice often called "data hygiene." Questions about data also include completeness: does the data cover all aspects of the problem we want the AI model to solve? Second, building safe AI requires verifying the systems that have been built, which means deep insight into what could possibly go wrong and the development of intelligent tests for it. And it does not end there: systems must be continuously monitored once deployed, their behaviors observed, and the systems and models corrected.
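To make the "data hygiene" step a little more concrete, here is a minimal, hypothetical sketch in Python of one such check: verifying that a sensitive attribute is roughly balanced in a training set before any model is built. The function name, the 1.5 ratio threshold, and the toy "region" attribute are my own illustrative assumptions, not anything prescribed by the summit or the declaration.

```python
from collections import Counter

def check_group_balance(records, group_key, max_ratio=1.5):
    """One simple 'data hygiene' check before training: flag a dataset
    whose groups (e.g. a demographic attribute) are represented at very
    different rates. Returns (is_balanced, per-group counts)."""
    counts = Counter(r[group_key] for r in records)
    largest, smallest = max(counts.values()), min(counts.values())
    return largest / smallest <= max_ratio, dict(counts)

# Hypothetical toy dataset: each record carries a 'region' attribute.
data = [{"region": "north"}] * 40 + [{"region": "south"}] * 35
balanced, counts = check_group_balance(data, "region")
```

Real-world checks are of course far richer (label balance, coverage of edge cases, drift over time), but even a crude gate like this, run automatically in a data pipeline, embodies the "safety from the beginning" idea rather than bolting it on after deployment.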

All this is often easier said than done, as Microsoft’s principal AI researcher Kate Crawford acknowledges. The systems are very complex and opaque, making it difficult to fully understand what happens inside them and how and when they make mistakes.

3. Do not panic!

Tesla's CEO, Elon Musk, mentioned during the AI safety summit, in a conversation with UK Prime Minister Rishi Sunak, that he believes all jobs will eventually become obsolete and that we humans will have no more jobs we need to do. Earlier this year, OpenAI CEO Sam Altman, also an attendee of the summit, warned the general public about the potentially high risks of large language models, such as cyberthreats and large-scale misinformation. These are important voices. Nevertheless, let me be clear: they are not calling for us to panic now. Panicking will not solve the risks. Instead, as we did with other technologies, we should build AI in a safe way and strive for global agreements so that we, humanity, can control the risks and leverage the opportunities.

John Simpson, world affairs editor of the BBC, recently drew an interesting comparison. When the first traffic regulations in London were introduced in the 1930s, more than 6,000 people died in traffic accidents across the UK each year. Now, although we have 30 times the number of vehicles, the annual death toll had fallen to 1,695 by 2022, thanks to vehicle and road-safety regulations.

4. The road ahead: where do we go from here?

This AI safety summit was about setting a joint agenda. That was very important, and the world may one day look back on it as a milestone. Now, however, the question is how regulators can fill that agenda with action, which means defining universal rules for safer AI. These could include rules about what we want AI to decide and what we do not; decisions about defense or medical treatment, for example, can be decisions about life and death. The action plan will also include rules about building, validating, and deploying AI models.

The summit chair's summary mentions five key areas for action:

a. The necessity of immediate action to build a shared understanding of frontier AI

It seems obvious that one should reach common ground before moving forward, but it is something I realized at many conferences this year: everyone understands something different when we discuss "AI," that is, what it includes and what it does not. The Bletchley Declaration clearly states that it is important to work toward a common understanding of what frontier AI really is, which could be driven through a United Nations AI advisory body. The Organisation for Economic Co-operation and Development (OECD) could be another partner in this effort.

b. The need for an inclusive approach to address frontier AI and other risks

The inclusivity of AI plays an important role. For the universal acceptance of AI, it is important that it be inclusive and that it help bridge the digital and developmental divides, narrowing rather than widening existing digital inequalities. To that end, participants encouraged discussion in a range of forums, including future AI safety summits. Other initiatives mentioned included the G20 and the UN and its bodies, including the United Nations Educational, Scientific and Cultural Organization (UNESCO). There are also many country-specific initiatives on AI governance, such as the Republic of Korea's New Digital Order proposal, China's Global AI Governance Initiative, the recently agreed Santiago Declaration, and the African Union's development of a continental AI strategy.

Across both days of the summit, participants also supported the view that the equitable realization of AI's benefits requires breaking down the barriers to entry and challenges faced by groups such as women and minorities. Another important point is the opportunity to use AI to advance the UN's Sustainable Development Goals.

c. The importance of addressing current AI risks alongside those at the frontier

Not surprisingly, when government representatives contemplate risks, those risks relate to the governance of nations: threats to societal stability through the spread of false narratives, harm to the credibility of individuals, illicit influence on electoral processes, AI-enabled misuse for crime, and the danger of AI increasing inequality and amplifying biases and discrimination.

At the summit, the G7 member countries also noted the project-based work committed to under the G7 Hiroshima AI Process, which includes specific action on disinformation and election integrity, and the cooperative Global Challenge to Build Trust in the Age of Generative AI.

d. The value of appropriate standardization and interoperability in AI

Participants discussed the benefits of establishing interoperable approaches, often supported by appropriate standardization and, where suitable, shared principles, codes, or similar frameworks.

Of course, it is crucial to strike the right balance between domestic and international actions. Whether through multistakeholder institutions such as the OECD and the Global Partnership on AI, or through the International Organization for Standardization (ISO), a global digital standards ecosystem should be constructed that is open, transparent, multistakeholder, and consensus-based. Participants expected that a set of agreed-upon principles and codes would establish a baseline for developers at the forefront of AI and anticipated building on them through further multistakeholder engagement.

e. The need to develop the broader AI ecosystem, including skills and talent

Of course, there is no forum on innovation, research, or development anywhere in the world where the role of talent and skills goes unmentioned. At Bletchley, beyond the skills needed to develop and use AI models, participants highlighted the major impact of access to AI infrastructure. AI requires a great deal of capacity in IT infrastructure, data centers, and energy. This matters especially for participation beyond the countries that have the resources and can afford it.

5. Summary

From Kissinger to Musk, many important voices agree that AI is the most disruptive technology in a century. It was therefore very important that policy makers and thought leaders from governments, industry, and science met to set an agenda for making AI safe. An enormous challenge lies ahead of us, with risks we may still not fully understand.

But, in my opinion, today's jobs will not completely disappear. We will continue to use our human brains to think, create, and be creative. Jobs will transform or change, and many will be affected. Nor do I believe that AI will take over this world and order our extinction. Painting continuous horror scenarios and demonizing AI will not allow for fact-based research, discussions, and agreements. Certainly, there is no reason to take AI's risks lightly, just as the world does not take the risks of nuclear weapons or global warming lightly, but there is still time to act by bringing the smartest (human!) brains together and by having all stakeholders fully recognize their responsibilities, from top leaders to engineers. The fact that 28 countries came together under the UK's initiative, including the top economic powerhouses China, India, Germany, Japan, and the US, is a very important first step.

Now, it is time to take the next step and fulfil the agenda with local, regional, and global policies and rules that allow the enormous benefits AI promises for the planet to become reality while keeping the risks to humanity under control. It's like a football match: we need referees, but we also need players on the pitch.

#AI #machinelearning #future #employment #ai4good #deeplearning #generativeai #llms #agi

(Disclaimer: The ideas, views, and opinions expressed in my LinkedIn posts and profiles represent my own views and not those of any current or previous employers or organizations with which I am associated. Additionally, any and all comments on my posts from respondents/commenters to my postings belong to, and only to, the responder posting the comment.)
