Partnership on AI

Research Services

San Francisco, California · 22,834 followers

Advancing Responsible AI

About us

Partnership on AI (PAI) is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society. By convening diverse, international stakeholders, we seek to pool collective wisdom to make change. We are not a trade group or advocacy organization. We develop tools, recommendations, and other resources by inviting voices from across the AI community and beyond to share insights that can be synthesized into actionable guidance. We then work to drive adoption in practice, inform public policy, and advance public understanding. Through dialogue, research, and education, PAI is addressing the most important and difficult questions concerning the future of AI. Our mission is to bring diverse voices together across global sectors, disciplines, and demographics so developments in AI advance positive outcomes for people and society.

Website
https://www.partnershiponai.org/
Industry
Research Services
Company size
11–50 employees
Headquarters
San Francisco, California
Type
Nonprofit

Locations

  • Primary

    2261 Market Street #4537

    San Francisco, California 94414, US

Employees at Partnership on AI

Updates

  • Partnership on AI

    AI-generated avatars, manipulated images, and synthetic voices - newsrooms are navigating a new reality. How should journalism adapt? At #ISOJ2025, Claire Leibowicz will join the panel "Synthetic Media and Journalism: Avatars, Image Creation, and Manipulation" on March 28 from 4:45–6:00 p.m. CT to discuss what this shift means for journalists, audiences, and the future of trust in media.

    Hosted by Robert Quigley (UT Austin), featuring:
      • Carlos Eduardo Huertas, CONNECTAS (Latin America)
      • Claire Leibowicz, Partnership on AI
      • Santiago Lyon, Content Authenticity Initiative (CAI)
      • Craig Silverman, ProPublica

    Join online or in person: https://buff.ly/ugqtW6f

  • Partnership on AI reposted

    Claire Leibowicz

    Head of AI and Media Integrity at Partnership on AI | PhD at University of Oxford

    Sharing a huge responsible AI reporting milestone! I’m incredibly proud to announce the release of three new case studies from Code For Africa (CfA), Google, and Meedan, about how they are implementing Partnership on AI's Synthetic Media Framework. This rounds out the *19*-case collection developed over the past two years: a massive effort to better understand the opportunities and mitigate the risks of synthetic media. These case studies explore how these organizations are using disclosure to promote transparency in AI-generated content. It's been a long journey, but today, we have an extensive library that offers critical insights into how transparency and ethical practices can shape the future of AI.

    The latest case studies cover important topics:
      • How synthetic media impacts elections and political content
      • How disclosure can limit misleading and gendered content
      • How transparency signals can empower users to make informed decisions

    It’s been immensely rewarding to see this project grow from an idea to a comprehensive, community-built framework that’s helping shape responsible AI deployment. And we're incredibly grateful to the partners who helped bring this latest vision to life, especially Ed Bice, Nat Gyenes, Paree Zarolia, Clement Wolf, Chris Roper, and Amanda Strydom, who embraced Christian H. Cardona and me through months of edits and kind requests for more details :) And to Christian H. Cardona himself, who hit the ground running at PAI alongside the launch of this ambitious project, seamlessly collected cases, and has been a source of resolve, energy, and levity through it all.

    You can explore the final collection and dive into the full insights here: https://lnkd.in/e73N9ZPb

    #AIEthics #SyntheticMedia #ResponsibleTechnology #MediaTransparency #Innovation #TechForGood #AI #DigitalEthics #Leadership #FutureOfTech

  • Partnership on AI reposted

    Could labeling synthetic political ads help safeguard elections? To answer this question, we turned to Partnership on AI’s Synthetic Media Framework for guidance. Today we’re sharing what we learned. As one of Partnership on AI's Framework supporters, we're sharing our unique perspective to help inform the governance and responsible development of synthetic media technologies. This collective effort aims to advance the broader AI community's understanding. We're honored to be part of this collaborative Framework, contributing our experiences to further the responsible progress of this critical field. To read all of the case studies, click here: https://lnkd.in/gJ8iev8E

  • Partnership on AI

    NEW: Dive into three new case studies exploring how leading organizations - Code For Africa (CfA), Google, and Meedan - are mitigating synthetic media risks through PAI's Synthetic Media Framework: https://buff.ly/tAYbrzu

    These studies delve into an underexplored area of synthetic media governance known as direct disclosure - the methods or labels used to convey how content has been modified or created with AI. Read the new cases to learn:
      • How synthetic media can impact elections and political content
      • How disclosure can limit misleading, gendered content
      • How transparency signals help users make informed decisions about content

    Alongside our existing case studies, the case study library provides critical insights into the evolving landscape of AI content transparency and responsible technology deployment. Read the blog for more insights: https://buff.ly/b8SI3x1

    #AIEthics #SyntheticMedia #ResponsibleTechnology #MediaTransparency

  • Partnership on AI

    AI is transforming enterprise, but the path to responsible adoption isn’t always clear. This week at #HumanX, Partnership on AI hosted a fireside chat where CEO Rebecca Finlay joined Alayna Kennedy (Mastercard) and Emily M. (Adobe) to explore how enterprises can not only navigate AI adoption but also shape the future of AI governance and accountability. In the conversation, three themes stood out:

      • Trustworthy AI is a competitive advantage for enterprises: there is a very strong business case for doing AI governance, even in the current landscape, as large enterprises will only procure AI from organizations that have done their due diligence around safety.
      • Scaling is the key word: Mastercard's experience (from reviewing 5 AI products in 2019 to over 400 in 2024) is an example of creating standardized, scalable controls that don't rely on manual reviews and can be applied consistently across applications.
      • You don't have to start from square one; there are existing frameworks to utilize: leading companies like Mastercard and Adobe have developed their own frameworks, in addition to industry-recognized risk frameworks like the NIST and ISO standards, which provide vetted, collaborative benchmarks that have become the international gold standard for AI governance.

    As we look to the future of enterprise AI, the work ahead will require ongoing collaboration and innovation. By engaging diverse stakeholders, we can build frameworks that foster responsible AI adoption, shaping a future where AI drives value while upholding ethical standards across industries.

  • Policymakers, platforms, and civil society are grappling with generative AI’s role in shaping public discourse, particularly during elections. In the final three sessions of PAI’s AI and Elections Community of Practice, experts from the Center for Democracy & Technology (CDT), CIPESA, and Digital Action discussed AI’s use in election information and AI regulations in the West and beyond. The eight-part series is now complete, but throughout its course, many takeaways emerged as stakeholders shared their efforts, received feedback, and discussed tough questions and tradeoffs:

      • Down-ballot candidates and female politicians are more vulnerable to the negative impacts of generative AI in elections.
      • Platforms should dedicate more resources to localizing generative AI policy enforcement.
      • Globally, countries need to adopt more coherent regional strategies to regulate the use of generative AI in elections, balancing free expression and safety.

    For more insights, read the blog below.

  • Partnership on AI

    From Bletchley Park to Paris, AI governance is evolving. On a recent London Futurists episode, PAI’s CEO Rebecca Finlay returns to the podcast to share insights on the shifting landscape of global AI collaboration. In conversation with David Wood and Calum Chace, she discusses key moments from the Global AI Action Summit and what they signal for the future. Listen here: https://buff.ly/82rBeN8 #GlobalAI #AIGovernance

    PAI at Paris: the global AI ecosystem evolves, with Rebecca Finlay - London Futurists

    buzzsprout.com

  • Partnership on AI

    Tomorrow at HumanX 2025! We’re just one day away from a fireside chat on responsible AI in 2025 (and beyond). Join us at 5:00 PM PT as Rebecca Finlay, CEO of Partnership on AI, sits down with Alayna Kennedy (Mastercard) and Emily M. (Adobe) to explore the future of AI governance and enterprise adoption. They’ll share insights from PAI’s Enterprise AI Workshop and the AI Action Summit in Paris, diving into the challenges and opportunities shaping responsible AI development.

    March 12 · The Fontainebleau Las Vegas · 5:00 PM PT
    Details & RSVP: https://buff.ly/9XVxjLv

  • Partnership on AI

    The AI Standards Hub Global Summit is just around the corner (March 17-18) and the full agenda is now live! As AI governance takes shape worldwide, standards are playing a pivotal role in ensuring interoperability, accountability, and trust. This two-day event, co-hosted by Partnership on AI, AI Standards Hub, OECD.AI, and United Nations Human Rights, will bring together experts from across the AI ecosystem to tackle some of the most pressing questions in AI standardization.

      • Day 1: Where do AI standards stand today? Explore their role in global regulatory alignment, fostering inclusivity in development, and strengthening AI assurance.
      • Day 2: How can we govern foundation models effectively? Dive into the intersection of AI safety and standardization and the pathways for global collaboration.

    Join us for keynotes, expert panels, and dynamic discussions on shaping the future of AI governance. Register for livestream access: https://buff.ly/FqDJj7J

    #AIStandards #AIGovernance #AIRegulation


Funding

Partnership on AI: 1 round total

Last round

Grant

US$600,000.00

See more on Crunchbase