Strengthening information integrity in the digital age! Yesterday, the UNDP Global Policy Centre for Governance welcomed Jeff Allen (Co-founder & Chief Research Officer) and Sofia Bonilla (Communications Lead & Research Manager) from the Integrity Institute for a discussion with GPCG Director Arvinn E. Gadgil. The conversation addressed key challenges in strengthening information integrity, including:
- Why governance solutions must go beyond content moderation
- The role of transparency in mitigating disinformation risks
- How platform design influences online discourse and digital accountability
A more transparent, responsible digital space is essential for inclusive governance and resilient societies. UNDP GPCG remains committed to fostering solutions that advance information integrity and democratic digital spaces.
Integrity Institute
Think tank
We are a community of integrity professionals protecting the social internet
About us
The Integrity Institute is a nonprofit organization run by a community of integrity professionals working towards a better social internet. The group has decades of combined experience in the integrity space across numerous platforms. We are here to explore, research, and teach how regulators and platforms can do better.
- Website
- https://integrityinstitute.org
- Industry
- Think tank
- Company size
- 11-50 employees
- Type
- Nonprofit
Posts
New Blog Post: Addressing Misinformation in Labor and Delivery. Medical misinformation is a growing crisis, one with life-and-death consequences, especially in pregnancy, labor, and delivery. As more people turn to social media for health advice, the spread of misinformation can erode trust in medical professionals and put patients at risk. Our latest blog post from Meghan Hepp and Sarah G. explores the challenges Trust & Safety teams face in moderating pregnancy-related misinformation and highlights proactive solutions, from improving platform design to fostering partnerships between healthcare professionals and online communities. Read the full post here: https://lnkd.in/eWsc-zzF
New Report Calls for Proactive Solutions to Tech-Facilitated Gender-Based Violence. Current approaches to tech-facilitated gender-based violence (TFGBV) too often respond after harm occurs, placing the burden of safety on victims. We partnered with the Council on Tech and Social Cohesion to push for a different approach: one that embeds safety and user empowerment into platform design from the start. Our new report outlines proactive, design-focused strategies to prevent harm before it happens. It’s time to move beyond reactive solutions and build safer digital spaces by design. Huge thanks to II Members Gabriel Freeman, Nicholas Shen, Theodora Skeadas, Matt Motyl, Ph.D., and Leah Ferentinos for their contributions! Read the full report here: https://lnkd.in/g36rDqMY
The DSA Civil Society Coordination Group has released its initial analysis of the first round of Risk Assessment Reports under the EU Digital Services Act (#DSA). This brief highlights key trends, best practices, and critical gaps in these reports, with the goal of strengthening future iterations and improving user safety. Read the full analysis here: https://lnkd.in/gkkQSpdi The Integrity Institute is grateful to have collaborated with such a talented group, working toward a safer digital ecosystem. Huge thanks to the Center for Democracy & Technology for leading the way!
How did Big Tech platforms perform in their first Risk Assessment Reports under the EU’s Digital Services Act, and how can they improve going forward? Today, the DSA Civil Society Coordination Group releases its brief: an Initial Analysis of the First Round of Risk Assessment Reports under the EU Digital Services Act (#DSA). The purpose of this feedback is to identify key trends, useful practices, and gaps in this first iteration, so that future risk assessments and mitigation measures required under Articles 34 and 35 of the DSA are more tangible and do more to advance user safety. This brief focuses on four key aspects:
- Identifying useful practices: what to keep for next time;
- Why it is crucial to focus on platform design when assessing risk;
- How trust with users and regulators can only be fostered by providing data;
- Why meaningful stakeholder consultation is the key to a safer online environment.
The DSA CSO Coordination Group, convened and coordinated by CDT Europe, is an informal coalition of civil society organisations, academics, and public interest technologists that advocates for the protection of human rights in the implementation and enforcement of the EU Digital Services Act. Thanks to everyone for their input, as well as to the Recommender Systems Taskforce and People vs Big Tech for their precious contributions. You can read the full brief here and on our website; link in the comments.
Access Now, AI Forensics, AlgorithmWatch, Alliance4Europe, Amnesty International, ARTICLE 19, Avaaz, Center for Countering Digital Hate, Center for Studies in Freedom of Expression (CELE), Civil Liberties Union for Europe, Committee to Protect Journalists, Das NETTZ, Ekō, Eticas.ai, Electronic Frontier Foundation (EFF), EU DisinfoLab, European Center for Not-for-Profit Law Stichting, European Digital Rights, European Federation of Journalists, European Partnership for Democracy (EPD), The Future of Free Speech, Gesellschaft für Freiheitsrechte e.V., The Global Disinformation Index, Global Witness, HateAid, Human Rights Watch, IMS (International Media Support), Integrity Institute, interface (formerly SNV), Internews, ISD (Institute for Strategic Dialogue), Association #Jesuislà, Mnemonic, Mozilla, Fundacja Panoptykon, People vs Big Tech, Search for Common Ground, 7amleh - The Arab Center for the Advancement of Social Media, 5Rights
We're hiring! The Integrity Institute seeks a visionary and strategic leader to serve as its Chief Executive Officer (CEO). This individual will co-lead the organization in a dyad relationship with the Chief Research Officer (CRO). Together, these two leaders will be accountable for driving the Institute’s mission to advance the theory and practice of protecting the social internet. The CEO will be responsible for ensuring long-term financial sustainability, cultivating strategic partnerships, and overseeing operational excellence. The ideal candidate will have a proven track record of organizational leadership, fundraising expertise, and experience engaging with policymakers, technology leaders, and civil society organizations. They will be a skilled communicator, an influential thought leader, and a champion of integrity in the digital space. This is a fully remote position, with some domestic and international travel required. Candidates must be eligible to work in the United States without sponsorship or restrictions. Learn more and apply here: https://lnkd.in/eN6BdJ-u
Last week, we were thrilled to attend the inaugural Marketplace Risk Conference in Austin, TX! It was a fantastic opportunity to connect with industry leaders tackling trust & safety challenges in online marketplaces. Our Communications Lead, Sofia Bonilla, had the pleasure of sitting in on an insightful session on misinformation and policy development in the marketplace led by Sarah Brandt of NewsGuard and our own Integrity Institute member, Alexandra Popken of WebPurify. There were so many great discussions on misinformation, moderation, and keeping digital ecosystems safe! We look forward to continuing these important conversations online and in person with our T&S community. #MarketplaceRisk #TrustAndSafety #OnlineIntegrity
Misinformation sits at a critical crossroads in the tech industry. Nations are debating platform immunity versus culpability, while platforms themselves wrestle with their role in enforcement. Adding to the complexity, brands advertising on these platforms are increasingly wary of being associated with harmful content or facing reputational risks from misplaced ads. At the heart of it all are Trust & Safety teams, tasked with crafting fair policies and building scalable enforcement processes: ones that protect users, mitigate real-world harm, and uphold platform integrity in an ever-evolving landscape. At Marketplace Risk in Austin next week, Sarah Brandt and I will be discussing all of this. Join us! https://lnkd.in/etf9_Dnr #MRATX25
What I'm reading / writing about this week:
For Everything in Moderation, I wrote about keeping small/niche online communities safe: https://lnkd.in/edxtki8H. This was inspired by a survey by Vox Media and The Verge, which showed that people want smaller online spaces and that they value the work moderators do to keep those spaces safe: https://lnkd.in/eKaqZVdY
This survey from the University of Oxford and the Technical University of Munich also shows that people globally value content moderation: https://lnkd.in/eWBHGJHj
Elsewhere in T&S news, the Integrity Institute released some resources around transparency: https://lnkd.in/espTB-su
Rachel Kowert wrote about the "safety triforce" and also referenced the safety triangle I talk about constantly: https://lnkd.in/eqxGHSnk
Matt Motyl, Ph.D. wrote about the layoff lie in tech right now: https://lnkd.in/eX8qZEZ2
Mike Masnick has a new podcast out about Section 230 called Otherwise Objectionable: https://lnkd.in/ek7kExKT
And Thorn has released a report on deepfake nudes and youth, showing that 1 in 8 youth personally know someone who has been targeted: https://lnkd.in/ebBNSCbz
Algorithmic systems are crucial components of major online platforms, but how they’re designed can either promote integrity or amplify harm. Jeff Allen & Matt Motyl, Ph.D. joined the Knight-Georgetown Institute to outline key steps policymakers & platforms can take to improve feed algorithms for a healthier online ecosystem. Read more below.
NEW REPORT – Better Feeds: Algorithms That Put People First. As state, federal, and global policymakers grapple with how to address concerns about the link between online algorithms and various harms, KGI’s new report from a distinguished group of researchers, technologists, and policy leaders offers detailed guidance on improving the design of algorithmic recommender systems that shape billions of users’ online experiences. Drawing on the latest research documenting these harms and evidence demonstrating the effectiveness of alternative design approaches, this guide can help shift platform recommendation systems away from attention-maximizing designs toward optimizing for long-term user value and satisfaction. https://bit.ly/3QzxVzq This report is the product of an incredible group of expert authors: Alex Moehring, Alissa Cooper, Arvind Narayanan, Aviv Ovadya, Elissa R., Jeff Allen, Jonathan Stray, Julia Kamin, Leif Sigerson, Luke Thorburn, Matt Motyl, Ph.D., Motahhare Eslami, Nadine Farid Johnson, Nathaniel Lubin, Ravi Iyer, and Zander Arnao.
1. The Problem: Some platforms optimize their recommender systems to maximize predicted “engagement”, the chance that users will click, like, share, or stream a piece of content. This design aligns well with the business interests of tech platforms monetized through advertising. And it has been linked to a range of individual and societal harms, including the spread of low-quality or harmful information, reduced user satisfaction, problematic overuse, and increased polarization.
2. A False Choice: In policy circles, chronological feeds and blanket bans on personalization are common go-to solutions to these concerns, but they have important limitations and unintended consequences and can reward spammer behavior. They fail to take advantage of better designs in existence that put people’s interests front and center.
3. The Path Forward: Platforms and policymakers can help to address the harms associated with recommender systems while preserving their potential to enhance user experiences and societal value. There is a clear path forward: designing algorithms to promote long-term user value instead of short-term engagement. The report outlines how policymakers and product designers can help make this happen by:
- Promoting detailed transparency
- Giving users meaningful choices and better defaults
- Assessing long-term impacts of design changes
Learn More: Better Feeds serves as a roadmap and how-to guide for policymakers and technology companies interested in creating algorithmic systems that put users' long-term interests front and center. Read the report: https://bit.ly/3QzxVzq