As AI capabilities advance, practical steps to ensure safety become critical. Drawing on the consensus built through the International Dialogues on AI Safety (IDAIS) in Venice, Beijing, and Oxford, we've developed a comprehensive guide mapping potential actions for different stakeholders. The guide translates high-level goals from the IDAIS consensus statements into concrete policy options across four key areas:
- AI safety research
- Testing and evaluation
- Domestic governance
- International governance
For each area, we examine specific challenges, from verification methods to monitoring systems, and outline potential policy levers, drawing on both historical examples and emerging practices. This is a living document that will evolve with future dialogues and emerging governance challenges. It aims to serve as a practical resource for policymakers, companies, researchers, and philanthropists working to advance AI safety.
Read the full guide here: https://lnkd.in/gMvJuEZm
Chinese version: https://lnkd.in/gyvWmVUY
#AISafety #AIGovernance #AIPolicy #TechnologyGovernance #FrontierAI
About us
The Safe AI Forum (SAIF) is a 501(c)(3) non-profit focused on advancing global action and collaboration to minimize extreme AI risks. The organization was co-founded by Fynn Heide and Conor McGurk in late 2023. SAIF's main program is the International Dialogues on AI Safety (IDAIS), which brings together senior computer scientists and AI governance experts to build international collaboration on these risks. In addition to shepherding the IDAIS program, SAIF conducts independent research and provides advisory services to other organizations focused on international AI cooperation.
- Website: saif.org
- Industry: Non-profit Organizations
- Company size: 2-10 employees
- Headquarters: San Francisco, CA
- Type: Nonprofit
- Founded: 2023
- Primary location: San Francisco, CA, US
Updates
Join Safe AI Forum (SAIF) as an Operations Manager or Associate! We're looking for talented operations staff to help our organization scale and foster international cooperation on AI safety.
Are you:
- Passionate about AI risk mitigation?
- Skilled in finance, operations, and events?
- Highly organized and adaptable?
Why you should join SAIF:
- We're a talented, ambitious, and growing team
- Work remotely with global impact
- Help make progress on one of humanity's most important challenges
- $80K-$140K/year
Apply now: https://lnkd.in/eJmEVV26
Continued Consensus on Global AI Safety and Governance
The 3rd International Dialogue on AI Safety (IDAIS), organized by the Safe AI Forum and the Berggruen Institute, was held in Venice from Sept 5-8. Building on recent positive steps in international cooperation on AI safety, leading scientists and experts published a consensus statement that calls for AI safety to be recognized as a global public good, distinct from broader geostrategic competition.
Signatories:
- Turing Award winners (Geoffrey Hinton, Yoshua Bengio, Andrew Yao)
- AI safety and governance pioneers (Stuart Russell, Zeng Yi)
- Former and current industry leaders (Ya-Qin Zhang, Tang Jie)
- Former heads of state (Mary Robinson)
- Leading governance experts (Gillian Hadfield, Robert Trager)
Key Proposals:
Emergency Preparedness Agreements and Institutions
- States should set up an international body to coordinate domestic AI safety authorities.
- This body can foster collaboration, audit AI safety regulations, and ensure states adopt a minimal set of effective preparedness measures.
- Over time, this body can set standards for, and commit to using, verification methods to enforce domestic implementation of the Safety Assurance Framework.
Safety Assurance Framework
- Developers should demonstrate to domestic AI safety authorities that their systems do not cross red lines.
- They should set early-warning thresholds for model capabilities, which provide advance warning that AI systems may be on track to cross red lines.
- For more advanced AI systems that cross these thresholds, developers should submit high-confidence safety cases.
- Both pre-deployment testing and post-deployment monitoring may be required as AI systems become more capable.
Global AI Safety and Verification Research
- Scaling up independent research into AI safety and verification is necessary.
- Privacy-preserving and secure verification methods are critical to allow states to check that an AI developer's evaluation results are as claimed.
- Comprehensive verification may eventually be required through third-party governance, software, and hardware.
- Verification may also allow states to check the safety-related claims of other states.
- To ensure global trust, international collaboration on and stress-testing of verification methods is essential.
Read the full statement: https://idais.ai/
#AISafety #AIGovernance #Global #Consensus #Cooperation