Couldn't join us live? Or is there a speaker or session you want to revisit? Recordings of the Sixth Edition of The Athens Roundtable on AI and the Rule of Law are now available on our YouTube channel. Watch here: https://lnkd.in/g4aS4NVY We'd love to hear your thoughts. Share your takeaways in the comments. Thank you again to the OECD - OCDE / OECD.AI for hosting the in-person event, and to all our partners who made this year's Roundtable such a success: UNESCO, France's AI Action Summit, the AI & Society Institute, the Conseil national du numérique, Make.org, the Tech and Global Affairs Innovation Hub of Sciences Po, Arnold & Porter, Fathom.org, and H.E. the President of the Hellenic Republic, Ms. Katerina Sakellaropoulou.
The Future Society
Nonprofit organization
Boston, Massachusetts · 23,792 followers
Aligning artificial intelligence through better governance.
About us
- Website
-
https://www.thefuturesociety.org
External link for The Future Society
- Industry
- Nonprofit organization
- Company size
- 11–50 employees
- Headquarters
- Boston, Massachusetts
- Type
- Nonprofit
- Founded
- 2014
- Specialties
- Public Policy, Emerging Technologies, AI Safety, Technological Convergence, Artificial Intelligence, Privacy, Politics, Technology Governance, Governance, AI, Regulations, Independent Audit, International Governmental Organizations, Responsible AI, Global Governance, and Technology Policy
Locations
-
Primary
867 Boylston St
5th floor
Boston, Massachusetts 02116, US
Employees at The Future Society
Updates
-
With the EU AI Act, the European Union is demonstrating true leadership in AI governance. Informed by an unparalleled collaborative and extensive process over the last several months with experts from academia, civil society, and industry, the EU Code of Practice is being finalized to help providers of General-Purpose AI (GPAI) models with systemic risk comply with the EU AI Act. By focusing on the most powerful AI models and their providers, and by enabling effective downstream adoption, the Code has been carefully crafted and deserves recognition for the achievement it is. But there is still room for improvement to fulfill the #AIAct's promise. Our analysis of the just-released third draft of the Code highlights recommendations in five priority areas:
1. External pre-deployment assessment (Measure II.11.1.): The Code should be simplified, removing the exemptions for similarly safe models and instead requiring external assessment as a default.
2. Pre-deployment notification (Measure II.14.4.): Model Reports should be shared at least eight weeks before a model is placed on the market.
3. Whistleblower protections (Commitment II.13.): To ensure legal certainty, we suggest a full revision of the commitment, clarifying the concrete mechanisms that providers of GPAI models with systemic risk should implement to protect individuals who are willing to speak up in the face of wrongdoing.
4. Serious incident response readiness (Measure II.6.3.): Providers should be required to define emergency processes, identifying corrective measures for specific systemic risks, articulating response timelines, and documenting all measures in a corresponding framework to invite scrutiny from the EU AI Office.
5. Risk taxonomy (Appendix 1.1.): The Code should honor the intent of the Act (Recital 110 and Article 3(65)) by prioritizing large-scale discrimination, risks to fundamental rights, and risks from major accidents, among others.
A successful Code will protect citizens and promote innovation. From railroads to nuclear power, history has shown us again and again how breakthrough technologies serve society best when there is a good regulatory framework in place. Read our blog post for more details.
-
In Case You Missed It: The third draft of the EU Code of Practice for General-Purpose AI Model Providers was published today, along with an executive summary (link in comments). This is a remarkable effort, uniting hundreds of civil society, academic, and industry experts to decide how the EU AI Act's GPAI rules should be applied in practice. The Future Society has been proud to play a part in this. But it is disheartening to see how much the Code has been watered down to appease tech industry demands. It seems authorities will receive the full model documentation only at the time of release (doesn't that risk a fait accompli blitzscaling strategy?); risks to critical infrastructure and of mass discrimination, among others, are not taken seriously; and whistleblowers are not adequately protected, despite precedents. In its current form, the Code fails to address the harms and risks that AI poses for EU citizens and businesses, and instead becomes a box-ticking exercise for lawyers, which would be a devastating outcome. So please share your feedback and make your voice heard! A few quick thoughts:
- Watering this Code down worsens the EU's AI adoption problem. The AI revolution is here; a strong Code gives the certainty and assurance that businesses, entrepreneurs, and consumers seek in order to trust these tools. This legal backstop is crucial to prevent unfair or opaque practices by the providers of GPAI posing systemic risks, so that the ecosystem can thrive.
- With great power comes great responsibility. Products we use every day, like lamps, coffee machines, and cars, are subject to safety requirements such as testing, third-party evaluations, prototype pre-approval, incident management, licenses, and other measures that foster reliability and trust. GPAI models that pose systemic risks, already used by hundreds of millions of people and compared to nuclear power by their own creators, merit proportionate, and therefore serious, safeguards.
- Signing means standing for quality, domestically but also outside the EU. Indeed, the Code aligns with international voluntary commitments already made by GPAI providers. Whether for EU or foreign developers, signing it demonstrates readiness to deliver on previous promises and on society's expectations for quality innovation.
- The Code's Chairs should stop bending the rules to appease industry. This is a strategy that has handed the EU digital market to incumbents for the past 25 years, because startups are focused on building, not lobbying. Providers who refuse scrutiny and accountability should simply withdraw their models from the EU market. This allows more room for EU champions and law-abiding foreign providers, a step forward in EU sovereignty! Let's take this technology seriously. What are your thoughts on the Code? Let the Chairs know by 30 March.
-
At the 6th edition of the Athens Roundtable, held in partnership with the OECD, global leaders and experts came together to discuss the biggest challenges in AI governance. A new report from The Future Society captures key takeaways, including:
- International cooperation: Effective AI governance requires stronger international coordination and alignment on standards.
- Operationalising AI principles: Moving from high-level principles to actionable frameworks is essential for meaningful accountability.
- Public-private collaboration: Governments, industry, and civil society must work together to shape responsible AI policies.
Beneficial progress in AI isn't just a national issue; it's a global one. The discussions at the Athens Roundtable underscore the urgent need for international cooperation to build AI systems that are safe, fair, and transparent. Read the full report here: https://lnkd.in/gdnkPref
Niki Iliadis, George Gor, Frank Ryan, Tereza Zoumpalova, Mai Lynn M., Jerry Sheehan, Audrey Plonk, Karine Perset, Celine Caira, Luis Aranda, Lucia Russo, Noah Oder, John Leo Tarver, Rashad Abelson, Angélina Gentaz, Valéria Silva, Bénédicte Rispal, Eunseo Dana Choi, Sara Fialho Esposito, Nikolas S., Sarah Bérubé, Sara Marchi
#AI #AIGovernance #AIAccountability #AthensRoundtable #FutureSociety #OECD #TrustworthyAI
-
Did the Paris AI Action Summit Deliver on Public Priorities? In February, world leaders, industry representatives, and policymakers gathered at the AI Action Summit to shape the future of artificial intelligence. But how well did the Summit respond to the priorities of citizens and AI experts? Our official pre-Summit global consultation of 11,661 citizens and 202 organizations highlighted a clear message: embrace AI's potential with "constructive vigilance," balancing innovation with strong safeguards. Here's a comparison of the AI Action Summit outcomes with the priorities identified through the consultation:
Key Successes
- Launch of Current AI, a public interest partnership with an initial endowment of €400 million and a €2.5 billion funding target.
- Creation of observatories focused on the future of work and AI across 11 nations.
- Presentation of the first International Scientific Report on the Safety of Advanced AI.
- Presentation of a pilot AI Energy Score Leaderboard.
Shortcomings
- No proposals on risk thresholds or system capabilities that could pose severe risks, as was committed to in Seoul.
- No corporate accountability mechanisms for commitments made at previous summits.
- No comprehensive education and reskilling programs.
- No concrete mechanisms to include civil society organizations at the next AI Summit.
The Bottom Line: While the Summit made key advancements on public interest AI, it did not deliver the concrete guardrails and AI governance mechanisms called for by citizens and AI experts. #AIActionSummit #AIGovernance
-
Exciting Opportunity: Join our leadership team! Are you a visionary operations leader who wants to have an impact on AI governance? We're #hiring a COO! Applications are being accepted on a rolling basis until March 17, 2025.
Work with us! We are seeking a seasoned Chief Operating Officer (COO) to lead human resources, finance, administration, systems, and compliance for our growing team. This is an exciting opportunity to execute our strategic vision and shape our organizational culture and values as we scale. This is a remote role, open to candidates based between Pacific Time and Central European Time. Rolling applications; apply by March 17, 2025: https://lnkd.in/gWCBN44r
What we're looking for:
- Interest in AI governance and a passion for The Future Society's mission to align AI through better governance
- Demonstrated success in a COO or similar senior operations role at a U.S. nonprofit, preferably with an international footprint
- Knowledge and experience of nonprofit compliance, including 501(c)(3) regulations and reporting requirements
The salary range is $124,000–$158,000, depending on location. #Hiring #Jobs #AIGovernanceJobs #OperationsJobs #COO #COOJobs #RemoteJobs
-
It has been a privilege to attend the #AIActionSummit on behalf of The Future Society (TFS), serving the interests of the broader public. It was great to see some of the limelight put on sustainability, the future of work, public interest AI, and bridging the #AI divide globally. Many had hoped for more on the AI governance track (details here: https://lnkd.in/em-B2Dku ), but TFS's Global AI Governance staff are already rolling up their sleeves to build coordination on critical topics where gaps remain. Here are some of my takeaways from the Summit:
1. Multilateral engagement matters. While some may be skeptical, I continue to believe that international coordination is essential for AI governance, and summits like this play a key role in bringing the world together to address global challenges.
2. Governance happens in the day-to-day. Real progress comes from consistent, ongoing work, not just high-level discussions. Policies, accountability, and implementation must be priorities every day. AI is moving fast. We need to move faster. Bolder action is necessary: binding standards, rigorous oversight, and sustained investment in public-interest AI must be at the core of our approach.
3. Despite widespread support for defined boundaries on high-risk AI models, we didn't see the establishment of oversight thresholds or explicit red lines that can prevent systemic risks. These ideas emerged from the Seoul AI Safety Summit and were echoed in our consultation of 200 experts and 11,000 citizens to inform AI Action Summit planning. Voluntary corporate pledges are welcome, but without an official mechanism to track their implementation, real impact is limited.
In short: the path forward requires sustained action and real accountability. We're committed to making that happen. Thanks to all our partners and supporters, as well as to my full team at The Future Society, for all the collaboration this past week and over the months ahead. Let's do this!
-
While a step in the right direction, the voluntary commitments made during the #AIActionSummit by industry leaders can only go so far. As our Executive Director Nicolas Moës notes in EL ESPAÑOL, the lack of enforcement mechanisms "jeopardize[s] the likelihood of tangible impact." Accountability and transparency around AI technologies are clear priorities for the public and AI experts around the world. According to the results of the consultation we conducted in the lead-up to the Summit, there is a "clear consensus against the uncontrolled development of AI." As Moës says, "We welcome the voluntary commitments made so far, but we must go beyond goodwill gestures." Read Esther Paniagua (she/her)'s full article: https://lnkd.in/dPTbu5sD
-
Our work must continue. The #AIActionSummit has ended, and it has been a privilege to attend on behalf of The Future Society. Below is an issue "map" that emerged from the Summit. The accompanying statement by Summit Co-Chairs India and France flagged that, "albeit inconclusive," the exchanges raised calls for the design of an #AI governance system addressing these issues (link in comment). Let's build this. While we hoped for stronger commitments, meaningful progress in AI governance requires persistence beyond #Paris. It seems the Co-Chairs agree: "As we look ahead, it is crucial that we continue to build upon the foundation we have set here, refining, and expanding our efforts in future summits. By maintaining an open dialogue, fostering international collaboration, and addressing emerging challenges, we can ensure that AI is developed in a manner that is safe, secure, trustworthy, equitable, and beneficial to all." All eyes are on the organizers of future summits; the stakes are high.
-
Civil society has a critical role to play in ensuring that AI technologies are created and deployed to be safe, respect fundamental rights and values, and serve the public interest. But what needs to be done to strengthen the voice of civil society in AI governance? Yesterday, The Future Society came together with other nonprofits to co-host an #AIActionSummit side event on empowering civil society. Our Caroline Jeanmaire presented the results of the consultation we co-organized for the Summit, representing the views of over 200 experts from civil society and academia as well as more than 11,000 citizens. You can see the 10 identified priorities as well as the results in our report or its 4-page summary: https://lnkd.in/dfXTG9Y3 Discussions sparked the development of a civil society wish list for AI governance (to be released soon). This kind of collaboration is what we need to build a movement across civil society, which can then better work together with government, industry, and academia to build the robust AI governance ecosystem we urgently need. Thank you to Renaissance Numérique, Wikimédia France, the Avaaz Foundation, Connected by Data, Centre for AI & Digital Humanism (Digihumanism), and European Center for Not-for-Profit Law Stichting for co-hosting this vital forum. We value your partnership in advocating for practices and policies that reflect public concerns about the current and future impacts of AI.
-