New Policy Brief! How can the UK and EU enhance AI security while respecting their distinct mandates? Our latest policy brief explores strategic alignment between the UK AI Security Institute (AISI) and the European AI Office, using a four-tier framework - Collaboration, Coordination, Communication, and Separation - to maximise impact while maintaining autonomy.
- Collaborate on global standards and international engagement
- Coordinate on evaluation methodologies and interoperability
- Communicate via structured risk monitoring and incident reporting
- Separate where strategic autonomy and confidentiality are paramount
By leveraging these structured synergies, both institutions can streamline participation in AI summits, ensure consistent safety evaluations, and enhance cross-border AI security, all while maintaining their unique roles. Our key recommendation is for policy practitioners in both jurisdictions, within public bodies or civil society, to concretise these avenues and provide more detailed guidance for their implementation. Lara Thurnherr Risto Uuk Tekla Emborg Marta Ziosi Isabella Wilkinson Morgan S. Renan Araujo Charles Martinet Oxford Martin School
Read more: https://lnkd.in/ezue4esz
About us
- Website: https://www.oxfordmartin.ox.ac.uk/ai-governance/
- Industry: Higher education
- Company size: 2-10 employees
- Type: Educational institution
Posts
New member! We are excited to welcome Fazl Barez, who joins us as a senior postdoctoral research fellow. He will lead research initiatives in AI safety and interpretability. Fazl is an established AI safety researcher who collaborates with leading academic institutions, AGI labs, and government organizations. He holds research affiliations with the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, the Digital Trust Centre at Nanyang Technological University, and the School of Informatics at the University of Edinburgh. As a member of ELLIS, he contributes to advancing AI safety research across Europe. Previously, Fazl served as a Research Consultant at Anthropic, Co-Director and Advisor at Apart Research, and Technology and Security Policy Fellow at RAND Corporation. His industry experience includes developing interpretability solutions at Amazon and The DataLab, as well as building recommender systems at Huawei.
Fun fact about Fazl: he has watched every engineering challenge from the Stuff Made Here channel - analyzing 30 videos of innovative mechanical builds, from the robotic haircut machine to the basketball backboard you can't miss - and he reads and writes poems. Oxford Martin School
Oxford Martin AI Governance Initiative reposted:
Big thank you to Milltown Partners for having me involved in their post-Paris AI Fringe with Tom Bristow, Iain Mackay, Gaia Marcus FRSA & Melika Carroll. I talked about what we learnt. What worked. What didn't. And what needs to change.

What worked: the French attracted great attendance from world leaders - Vance being there just weeks into a new administration, and Vice Premier Zhang Guoqing. The team worked flat out and should be proud. €109bn of investment into France is impressive. The summit reinforced France's reputation as an AI power on the rise.

What didn't work so well: at Bletchley we capped at 100 attendees, with 30 countries represented. The French went much bigger, with 1,000 attendees and 100 countries. I worried that would make it difficult to deliver substantive international agreements. That was the case. (In the recent Oxford Martin AI Governance Initiative report, The Future of the AI Summit Series, we argue that substantive progress on advanced AI requires a much smaller group of countries who are genuine AI powers.) But putting on these summits is extremely hard. It's easy to pass judgment from the outside.

Now for the US and UK. Vance set out a clear and nuanced agenda: one of the most pro-tech speeches I've ever heard, while simultaneously firing a shot across the bows of big tech and staying alive to the concerns of workers. So soon after the start of an administration, it was impressive. There was plenty of anti-safety rhetoric, inevitable given Biden's focus on it. But crucially and sensibly, Vance kept the door open, stating: "this doesn't mean, of course, that all concerns about safety go out the window, but focus matters". He is right.

I might be biased, but the British played a blinder. Not signing bought vital goodwill with the Trump administration at little cost. No10 said "the declaration didn't provide enough practical clarity on global governance, nor sufficiently address harder questions around national security & the challenge AI poses to it." This is true. Pro-safety people like Max Tegmark argued that countries shouldn't sign.

Timelines for AGI, superintelligence, or a "country of geniuses in a data center", as Dario of Anthropic calls it, are getting shorter. The consensus seems to be 2026-27, and 2030 at the latest. One Chinese AI company presented a slide claiming that AI would have self-perception in 2027 and consciousness in 2030. Once such powerful AI exists, it is crucial that the US and China talk to each other. Thanks to making the right moves early, the UK is strategically well positioned to be a broker, as we were at Bletchley.

The speed of AI progress reinforces how important these summits are. It's great that India is doing the next summit. But we can't wait 12 months. The world could look very different in a year. We need summits every 6 months, even if they are smaller or have a virtual element like Seoul. They are vital for the international community to keep up with the pace of AI. The exponential isn't slowing down!
New Report! In collaboration with Demos and supported by the Government of the Republic of South Korea, our researchers examine the evolving digital rights landscape in the UK, the EU, and beyond. Through extensive literature review, expert interviews, a roundtable, and a policy workshop with digital rights organizations, academics, and policymakers, we explore the current challenges and new approaches to advancing digital rights governance. Read more: https://lnkd.in/eg238Vin
As the AI Action Summit in Paris wrapped up yesterday, our researchers put forward bold reforms to keep future summits more focused and agile. Key recommendations include:
- Maintain a core focus on advanced AI governance
- Adopt a two-track model, with one track dedicated to high-stakes AI safety and governance and a second for broader public interest issues
Other ideas include:
- Reform host selection by moving to a bidding system with regional rotation, ensuring that hosts have both the resources and the geopolitical diversity needed to drive effective summits
- Establish a hybrid secretariat model, keeping a semi-permanent core team to preserve continuity while giving each host enough flexibility to tailor the agenda to emerging priorities
- Create a multi-year agenda roadmap overseen by a steering committee, providing continuity across summits while allowing individual hosts to highlight national or emerging issues
- Adapt the summit frequency by pairing a major annual summit with interim meetings, so the forum remains responsive to rapid developments in AI
Oxford Martin School Lucia Velasco Charles Martinet Henry de Zoete
https://lnkd.in/eiDsyJ29
Oxford Martin AI Governance Initiative reposted:
Excited to share our new paper: How effective is machine unlearning for AI safety? As AI systems become more integrated into critical domains like healthcare and cybersecurity, how do we ensure they 'forget' harmful knowledge while preserving beneficial capabilities? Our paper maps out key challenges and open problems in this emerging field. Link to the paper: https://lnkd.in/emjk8zfw Thanks to my amazing collaborators: Tingchen Fu Ameya Prabhu Stephen Casper Amartya Sanyal Adel Bibi Aidan O'Gara Robert Kirk Ben Bucknall Tim F. Luke Ong Philip Torr Kwok Yan Lam Robert Trager David Krueger Sören Mindermann José Hernández-Orallo Mor Geva Pipek Yarin Gal Collaboration with University of Oxford Tangentic UK AISI Massachusetts Institute of Technology and many others!
New Research Memo! Who Should Develop Which AI Evaluations? In the rapidly advancing field of AI, model evaluations are critical for ensuring trust, safety, and accountability. But who should be responsible for developing these evaluations? Our latest research explores three key challenges:
1. Conflicts of interest when AI companies assess their own models
2. The information and skill requirements for AI evaluations
3. The blurred boundary between developing and conducting evaluations
To tackle these challenges, our researchers propose a taxonomy of four development approaches and present nine criteria for selecting evaluation developers, which we apply in a two-step sorting process to identify capable and suitable developers. Lara Thurnherr Robert Trager Christoph Winter Amin Oueslati Clíodhna Ní Ghuidhir Anka Reuel Merlin Stein Oliver Guest Oliver Sourbut Renan Araujo Yi Zeng Joe O'Brien Jun Shern Chan Lorenzo Pacchiardi Seth Donoghue Oxford Martin School
Read the full report here: https://lnkd.in/etHrqCms
New Research Memo! The Oxford Martin AI Governance Initiative recently convened a group of experts to explore how the forthcoming UK frontier AI bill can best be shaped to achieve the UK government's goals of effective regulation while remaining narrow and pro-innovation. Here are the key takeaways:
1. The United Kingdom should act now to secure a position of leadership in frontier AI
2. Domestic law is a key part of international regulation
3. The government must balance expanding its own reach and its reliance on others
4. Robust international regulation promotes the domestic economy
5. The UK AI Safety Institute (AISI) should continue to advance the state of the science as an arm's-length body (ALB)
6. The government should emphasize offering free evaluations and safety certification to open-weight AI developers to incentivize participation in the regulatory regime
Authors: Nick Caputo Haydn Belfield Jakob Mökander Matteo Pistillo Huw Roberts Sophie Williams Robert Trager Oxford Martin School
Find out more here: https://lnkd.in/eyRdcFx4
Subscribe to our AIGI newsletter to stay updated on our latest research, events, community news, and more! Sign up here: https://lnkd.in/epbvxiMw Check out our latest issue here: https://lnkd.in/eNCiyJ8a We look forward to connecting with you!
New event! Access to AI technologies is a cornerstone of digital inclusion, yet disparities persist across regions, communities, and systems. We are hosting a hybrid session on December 4, bringing together global experts to discuss how equitable access can be achieved, highlighting the roles of governments, international organizations, and collaborative frameworks. Panelists will share perspectives on policy innovations, regional strategies, and practical steps toward a more inclusive AI landscape. Registration details below. https://lnkd.in/eTcQZG7M