2024 was a monumental year for TrueMedia.org. Together, we achieved our mission of providing world-class deepfake detection leading up to the U.S. election, analyzing over 60,000 images, videos, and audio clips in less than a year. With more of the global population heading to the polls than ever before, we rose to meet this critical moment in history. However, after much deliberation, we’ve made the difficult decision not to extend our mission into 2025. TrueMedia.org will sunset operations on January 14, 2025.

Here’s what’s next:
- Open Source: We’re making our detection models and web application available to the public.
- Research Contributions: Our team is submitting academic papers to advance the field of generative AI safety.
- AI Trust and Safety: Our founder, Oren Etzioni, will continue his work advising startups on AI safety measures.

We are incredibly proud of what we’ve accomplished and grateful for your support. Stay tuned: we’ll share links to our open-source projects soon. ~ The TrueMedia.org Team
TrueMedia.org
Technology, Information and Media
A non-profit, non-partisan AI project to fight disinformation in political campaigns by identifying manipulated media.
About us
TrueMedia.org is a non-profit, non-partisan AI project to fight disinformation in political campaigns by identifying manipulated media. The project is backed by Camp.org and led by Oren Etzioni.

Disinformation, transmitted virally over social networks, has emerged as the Achilles heel of democracy in the 21st century. This phenomenon is not limited to a single event or region but has been observed globally, impacting critical political decisions.

In 2024, we anticipate this challenge will grow explosively due to the increased availability of generative AI and associated tools that facilitate manipulating and forging video, audio, images, and text. The cost of AI-based forgery (“deepfakes”) has plunged sharply during one of the most important political elections in history. As a result, we anticipate a tsunami of disinformation. In the past, disinformation was spread by well-funded state actors, but now, due to generative AI, we see the potential for disinformation terrorism.

A recent Pew study found the percentage of TikTok users who get news from the platform has doubled since 2020 and is now at 43%. According to Pew, half of U.S. adults get news at least sometimes from social media. This is a recipe for disaster, and one that we are poised to fight by creating a free, non-partisan, and broadly accessible dashboard for identifying manipulated media across social media platforms.
- Website
- https://www.truemedia.org/
- Industry
- Technology, Information and Media
- Company size
- 11-50 employees
- Headquarters
- Seattle
- Type
- Non-profit
- Founded
- 2024
Locations
- Primary: Seattle, US
TrueMedia.org employees
Posts
-
Parents of LinkedIn, what are you telling your kids to learn to thrive in a world with AI? As a parent of 2 teenage kids, I genuinely don’t know the answer. Some believe AI will create entirely new job categories (think AI trainers, AI ethics officers, independent research scientists) while others believe mass job displacement is inevitable, making Universal Basic Income (UBI) the only viable solution.

Over the past year, I’ve asked AI researchers, startup founders, tech philanthropists, educators, and informed policymakers the same question. Here’s the punchline: no one really knows. Here are some of the more interesting responses I received:
- “Make a lot of money so they don’t end up on the street.”
- “Teach them how to train or monitor AI models.”
- “Develop leadership and collaboration skills.”
- “Be more creative.”
- “Use AI every day to keep pace with its progress.”
- “Focus on traditional education: knowledge gatekeepers (e.g., college admissions) will still prioritize grades and test scores.”

And, of course, I asked AI itself. Here’s what OpenAI’s Operator suggested:
- Critical thinking, problem-solving, and adaptability.
- Lifelong learning to stay ahead of tech shifts.
- AI literacy to leverage its power.
- Hands-on, practical experience.
- Human-centric skills like mentorship, communication, and collaboration.

This all sounds great, but how exactly should parents implement it? To what end? This isn’t just a “jobs of the future” question. It’s about how we equip the next generation to live meaningful, fulfilling lives as a society. And right now, I don’t think we’re doing enough. If you’re thinking about this too and want to help figure it out, DM me. I’d love to chat.
-
Excited to share our new paper: "Deepfake-Eval-2024: A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024"

Key results:
- Current deepfake detectors perform well on many academic datasets but fail dramatically on real-world deepfakes
- Performance drops by ~50% across open-source video, audio, and image detection when tested on actual 2024 deepfakes
- We've assembled a comprehensive benchmark with 45 hours of video, 56.5 hours of audio, and 1,975 images from 88 websites in 52 languages

This research reveals a critical gap between theoretical performance and real-world effectiveness in deepfake detection. As generative AI becomes more accessible and realistic, our field needs better evaluation methods that reflect actual threats.

Kudos to my amazing co-authors: Ryan Murtfeldt, Lin Qiu, Arnab Karmakar, Hannah Lee, Emmanuel Tanumihardja, Kevin Farhat, Ben Caffee, Sejin Paik, Changyeon Lee, Jongwook Choi, Aerin Kim, and Oren Etzioni, in addition to the entire fantastic team at TrueMedia.org who made this work possible.

Check out the full paper: https://lnkd.in/gidRhUTy
Dataset available at: https://lnkd.in/gQBDJEEq

#DeepfakeDetection #AI #MachineLearning #MediaIntegrity #GenerativeAI #Research #ComputerVision
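To make the "~50% performance drop" concrete, here is a minimal sketch of the kind of comparison the post describes: scoring the same detector on an academic test set and on an in-the-wild set, then computing the relative drop. This is not the paper's evaluation code; all predictions, labels, and numbers below are hypothetical.

```python
# Illustrative sketch (not Deepfake-Eval-2024's actual evaluation code):
# comparing a detector's accuracy on an academic dataset vs. an
# in-the-wild benchmark. All data below is made up for illustration.

def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    assert len(preds) == len(labels) and labels
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Hypothetical predictions (1 = fake, 0 = real) on two test sets.
academic_preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
academic_labels = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]

wild_preds  = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0]
wild_labels = [1, 1, 1, 0, 0, 1, 1, 1, 0, 1]

acc_academic = accuracy(academic_preds, academic_labels)  # 0.9
acc_wild     = accuracy(wild_preds, wild_labels)          # 0.3

# Relative performance drop when moving to real-world data.
drop = (acc_academic - acc_wild) / acc_academic
print(f"academic: {acc_academic:.0%}, in-the-wild: {acc_wild:.0%}, drop: {drop:.0%}")
```

The same pattern extends per modality (video, audio, image) by keeping separate prediction/label lists for each.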
-
We're open-source now. - https://lnkd.in/gQrajz58

While TrueMedia.org is winding down, we’re excited to share the next step in our journey: making our technology available to the broader community. Our commitment to advancing AI safety and transparency continues through open-source contributions.

What we released:
1. Deepfake Detection Models: State-of-the-art detectors for images, videos, and audio.
2. Web Application: The full source code for our platform, enabling others to build upon our work. Query and get analysis back from multiple models.
3. Social Media Bot: Source code to check for deepfakes directly on X, extensible to other social media platforms.
4. Commercial Use License: We’re using an MIT license. One of our goals was to elevate the detection industry, which is why we’re allowing commercial use.

Congrats team Oren Etzioni, Art Min, Molly N., Garrett Camp, Ryan Murtfeldt, MA, Dawn Wright, Steve Geluso, Michael Langan, Kathy Thrailkill, Michael Bayne, Aerin Kim, Kevin Farhat, Hannah Lee, Max Bennett, Arnab Karmakar, Ben Caffee, Field Cady, Sejin Paik, Nuria Alina Chandra, Lin Qiu, and James Allard.
-
We're featured in the podcast BBC Tech Life. "The technology is moving at blinding speed. We have reached the point, whether it's image, video, or audio, where a person cannot tell what's fake." ~Oren Etzioni. Full episode - https://lnkd.in/g3fv9QQ7 Appreciate the collaboration with Lily Jamali!
-
Election day is not the end of political disinformation season. Check suspicious social posts for AI manipulation: https://www.truemedia.org/ No account required. Non-profit, non-partisan, free. As we know, election disinformation can spike *after* elections, as disinformation spreaders test new narratives before the vote is certified. From the staff at TrueMedia.org, we are proud to do the work for #OurElections.
-
Life is too short to not work on a project you're incredibly excited about.
Which political deepfakes have you fallen for without even knowing? You must listen to this Invested episode with the incredible Oren Etzioni - AI expert and founder of deepfake detector TrueMedia.org, Venture Partner at Madrona and founding CEO of Ai2 - before the U.S. election tomorrow. He and Michael deep-dive into the implications of this election being the first in our full-blown AI era, including the influence that deepfakes have on our votes, how TrueMedia aims to combat that, and why Oren is an AI optimist (but won't say 'please' and 'thank you' to his chatbots). This episode is urgent and important. Listen with the link below.
-
Please share broadly!
Given disinformation attacks from Russia like the one reported below, I've asked Google to enable TrueMedia.org to access and analyze political videos for signs of manipulation. This is an extraordinary request that their lawyers are likely to oppose--please share to help protect our elections! https://lnkd.in/gru-jzSA
-
The reviews are in: "TrueMedia.org is the easiest way to spot deepfakes in 2024." Read the review - https://lnkd.in/gFqCgcxK Excerpt from the whatplugin review: "TrueMedia.org offers a free deepfake detection platform with a claimed 90% in accuracy, which I wouldn’t be surprised if was true. I found it great not only at accurately detecting deepfakes, but also providing detailed analysis of results."
-
When we present our demos of deepfakes and AI-powered cybercrimes, people almost always ask: can AI-generated content be detected? For images in 2024, the answer is yes! Try it for yourself on TrueMedia — either to verify something you see online, or to explore what's possible.
Curious about how to detect deepfakes?
Step 1. Copy and paste the URL from a social media post.
Step 2. We run multiple detectors to see if there’s AI manipulation.
You thought GenAI was easy to use… well, detection shouldn’t be hard. TrueMedia.org is free, requires no account, and helps you verify authenticity in just a few clicks. Try it yourself: https://truemedia.org/
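The two steps above can be sketched in code: build a request around a post URL, then combine the scores that come back from multiple detectors into a verdict. This is purely illustrative; the endpoint, field names, and scores below are invented (the real service is used through its web interface), and the simple score-averaging rule is an assumption, not TrueMedia's actual aggregation method.

```python
# Hypothetical sketch of the two-step flow: submit a post URL, then
# aggregate multiple detectors' scores. Endpoint and field names are invented.
import json

API_URL = "https://api.example.org/v1/analyze"  # hypothetical, not a real endpoint

def build_analysis_request(post_url: str) -> dict:
    """Step 1: wrap the social-media post URL in a request payload."""
    return {"url": post_url, "modalities": ["image", "video", "audio"]}

def summarize_results(detector_scores: dict) -> str:
    """Step 2: combine detectors' manipulation probabilities into one verdict
    (simple averaging here, as an illustrative assumption)."""
    avg = sum(detector_scores.values()) / len(detector_scores)
    return "likely manipulated" if avg >= 0.5 else "likely authentic"

payload = build_analysis_request("https://x.com/example_account/status/123")  # made-up post
scores = {"detector_a": 0.92, "detector_b": 0.81, "detector_c": 0.77}  # made-up response
print(json.dumps(payload), "->", summarize_results(scores))
```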