Behind the scenes of the first global AI Safety Summit
Entrepreneur First
Entrepreneur First invests in exceptional individuals to build startups from scratch.
Entrepreneur First’s cofounder, Matt Clifford, took a ten-week sabbatical this year to co-lead the first global AI Safety Summit, representing the UK Prime Minister. Now that he’s back at EF, we sat down to hear the details on the Summit and what’s coming next.
What prompted the AI safety summit?
The past decade has seen an exponential rise in investment in AI. Compare the famous 2012 AlexNet paper, the landmark moment when deep learning models began decisively outperforming previous approaches on visual recognition tasks, with GPT-4, the model behind ChatGPT: GPT-4 was trained with over 100 million times more computational resources.
ChatGPT’s release a year ago and its enormous impact woke up the public and policymakers to the fact that this surge in investment has produced a significant leap in capability. The AI Summit was born out of the realization that this might be the calm before the storm—the moment before the release of the next generation of models. The summit aimed to bring the world together to discuss how to navigate this rapidly accelerating technology, ensuring we harness its potential and manage the associated risks.
What did the summit set out to achieve?
The Prime Minister tasked us with creating an agenda that would bring together a broad and inclusive group of attendees, not just from various countries but also from different sectors, including companies and civil society, and with laying the groundwork for future global AI governance efforts. We had five key objectives: defining frontier AI and identifying its risks, establishing a process for ongoing discussions, encouraging companies to make commitments related to AI safety, promoting global collaboration in understanding AI's rapidly evolving science, and emphasizing the opportunities AI offers.
What were the key preparations for the Summit?
Normally, preparing for a summit like this would take many months, but we had only ten weeks. Despite the tight deadline, I think we achieved a great deal. We negotiated a communiqué that all 28 countries would sign, which was a complex process but helped us reach the first globally agreed statement on frontier AI capabilities and risks.
Second, a week before the summit, the UK published a series of risk reports, which included declassified intelligence assessments, to provide an empirical basis for AI safety discussions. One of the challenges of this field is that it has until now been dominated by thought experiments and analogies, and while that can be very stimulating and generative, for every thought experiment there’s an equal and opposite thought experiment. The assessments, which drew from academic literature and expert opinions, were a significant step towards putting AI safety on a more empirical footing.
Lastly, the UK AI Safety Institute, a new, research-led organization set up in parallel with the summit, created demos to illustrate some of the risks we were discussing. There’s always a risk in AI safety that people will jump immediately to a killer-robot-terminator scenario, which was not what the summit was about, despite some of the reporting. The demos showed things like how you can use today’s AI models to scale up a disinformation campaign, or how you (reassuringly) can’t use today’s AI models to facilitate bioweapon production, but how you might be able to if future capabilities improve as quickly as they have in the past. These demos showcased the potential misuse of AI models, helping to ground the conversations and make them more tangible. These preparations were instrumental in facilitating productive discussions at the summit.
Who attended?
The summit captured the world's imagination, despite being a small and exclusive event due to the limited space at Bletchley Park. Approximately 140 people attended the first day, including cabinet ministers from 28 countries, CEOs of major AI companies, leading academics, and civil society leaders. Day 1 marked a historic moment when the US and China, despite their differences, agreed on a statement about AI opportunities, risks, and governance for the first time.
The second day featured an even smaller group of about 20 leaders. These included the UK Prime Minister, US Vice President Kamala Harris, Prime Minister Meloni of Italy, the UN Secretary-General, and leaders from France, Germany, Japan, Singapore, and Korea, as well as the CEOs of the leading AI companies, such as Demis Hassabis, Sam Altman, and Dario Amodei.
What were the major achievements of the AI Safety Summit?
There were three main things that will, I hope, have a lasting impact on AI safety and governance.
The first is that shared statement from all 28 countries, which defines the challenge, acknowledges the risks, and outlines a framework for international collaboration.
Second, the countries commissioned an annual State of the Science report as an input to each future summit. This report will summarize the current state of AI science and help establish an emerging scientific consensus, similar to what the IPCC does for climate change.
Third, and where I spent most of my time, was an agreement between the G7 countries, other allies, and nine major AI companies on the role of governments in testing AI models for national security risks. This agreement ensures that before powerful AI models are deployed, they will undergo testing for potential risks, not only by the companies themselves but with a role for government as well. I helped negotiate a statement of principles on how that would work, and on the role of organizations like the AI Safety Institute in testing.
What’s next?
The immediate next steps include two more summits over the next year, one hosted by Korea and one by France. These will occur alongside the continued acceleration of AI investment and the release of more powerful AI models. The primary focus will be to implement the agreements made at Bletchley Park, particularly in evaluating AI models for risks and capabilities.
Furthermore, it is essential to work towards greater clarity in AI regulation and policy. The emergence of various regulatory approaches worldwide poses challenges for AI companies operating in a global market. Providing clarity on how companies can innovate and deploy AI safely will be crucial for capturing the benefits of this technology.
For me, now that I’m back at EF, I’m focused on identifying and supporting founders and startups that will contribute to AI safety and robustness. I believe that AI safety is not just a policy question but primarily an engineering one, offering significant opportunities for entrepreneurs and innovators. Building technology that ensures AI is safe, trustworthy, and robust will be essential for its widespread adoption.