The impact of AI on elections and holiday travel scams in 2024

Leading experts weigh in on the importance of transparency and authenticity in digital content.

Claire Leibowicz

"While fears about a deepfake-driven, super election year did not come to pass, AI continued?to permeate public consciousness and prompt policymakers to act. We've seen greater momentum towards media transparency in service of trust and truth -- not only in the form of disclosing how content has been made, but also through reports and exploration of how companies, ranging from OpenAI to TikTok to the BBC,?are making decisions about content strategy and practices?(like in the PAI Cases). In 2025, we'll need more research, collaboration, and transparency on the empirical study of how people actually are responding to synthetic content, and accompanying transparency signals. And this will only become more pressing as AI touches all of the ways people create, distribute, and encounter media in the years to come.” - Claire Leibowicz, Head of AI and Media Integrity at Partnership on AI


Justin Brookman

“According to the FTC, American consumers lost at least $10 billion to scams last year, and that's probably a significant undercount. Advances in AI are only going to make social engineering tricks more effective, as scammers are able to target phishing attacks more precisely and as voice cloning and deepfakes become more convincing. These behaviors are already illegal, but enforcers are understaffed and underpowered, and today don't have the capacity to take enough action to deter wrongdoers. We need stronger regulators, but policymakers and companies also need to establish transparency standards and obligations to help platforms and tool developers distinguish between legitimate and synthetic content.” - Justin Brookman, Director, Technology Policy at Consumer Reports


The latest news

The impact of AI on the 2024 U.S. election cycle.

  • Amid rising concerns that AI and deepfakes would sway the outcome of the U.S. presidential election, experts assess that they were not a critical factor in the result. However, computer-generated media emerged as a potent tool for engagement and for spreading false information. While various post-election analyses suggest AI threats did not materialize at scale, notable incidents included deepfake videos targeting Vice President Harris and AI-generated robocalls mimicking President Biden’s voice. In the month leading up to the election, OpenAI applied guardrails to ChatGPT that rejected an estimated 250,000 requests to generate DALL·E images of presidential candidates.

Combatting malicious election content.

  • Despite efforts to enhance content integrity in elections, the spread of malicious synthetic content exposed critical gaps in risk awareness and digital literacy. Joint statements from ODNI, FBI, and CISA warned of AI-enabled influence operations by foreign adversaries and an increased volume of inauthentic content online. Meanwhile, the Alan Turing Institute’s CETaS report called for comprehensive strategies to safeguard future elections through authenticity-by-design principles and digital provenance.

Consumer deepfake scams cost billions globally.

  • Deepfake scams grew in 2024, with fraudsters leveraging AI to deceive consumers at an unprecedented scale. Global losses are estimated at over $12 billion, fueled in part by deepfakes of high-profile figures like Elon Musk. Schemes using AI to target holiday travelers rose 900% during one of the busiest travel periods of the year. U.S. policymakers responded to the rise of AI-enabled fraud with proposals targeting transparency and accountability, and a recent Senate hearing addressed the dangers of AI fraud, emphasizing content provenance as a consumer safeguard.

Government task force launched to advance content authentication.

  • Progress on digital authenticity gained momentum when the U.S. Interagency Task Force to Advance International Engagement on Content Authentication was established. The initiative aims to manage AI risks by supporting technical standards, such as C2PA Content Credentials, and increasing public awareness about AI-enabled content. The task force will engage with international partners to implement digital transparency measures and empower individuals to make more informed decisions. These efforts highlight the growing recognition of provenance as a foundation for enhancing the authenticity of online content.

OpenAI launches its generative video tool, Sora.

  • OpenAI launched its text-to-video generator, Sora, which is set to make sophisticated synthetic video content more accessible than ever. The paid version will soon be available to millions of users, amplifying its creative potential and the challenges associated with AI-generated media proliferation. Notably, all Sora videos include C2PA Content Credentials metadata, enabling the verification of each video's origin. OpenAI has also implemented visible watermarks by default and an internal search tool to identify Sora-generated content. As these advancements unfold, enhancing safeguards and providing digital transparency will be essential for fostering responsible AI use and mitigating deepfake risks.
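For readers who want to see what this provenance layer looks like in practice, here is a minimal sketch of inspecting a downloaded media file for embedded Content Credentials. It assumes the open-source c2pa-python SDK from the Content Authenticity Initiative (pip install c2pa-python); the Reader API and manifest field names vary across SDK versions, so treat the calls below as illustrative rather than as OpenAI's or Truepic's verification pipeline.

```python
# Minimal sketch: inspect a media file for embedded C2PA Content Credentials.
# Assumes the open-source c2pa-python SDK (pip install c2pa-python); the
# Reader API and manifest field names vary by SDK version, so treat this
# as illustrative rather than definitive.
import json
import sys

from c2pa import Reader


def inspect_content_credentials(path: str) -> None:
    """Print basic provenance details from a file's C2PA manifest store."""
    try:
        # Parses the embedded manifest store and validates its signatures.
        reader = Reader.from_file(path)
    except Exception as exc:
        print(f"No readable Content Credentials in {path}: {exc}")
        return

    store = json.loads(reader.json())
    # The store lists every manifest in the file; the active manifest
    # describes the most recent edit or export.
    manifest = store["manifests"][store["active_manifest"]]
    print(f"Claim generator: {manifest.get('claim_generator')}")
    for assertion in manifest.get("assertions", []):
        # Assertion labels such as "c2pa.actions" record how the asset
        # was created or edited (e.g., generated by an AI model).
        print(f"Assertion: {assertion.get('label')}")


if __name__ == "__main__":
    inspect_content_credentials(sys.argv[1])
```

If a file has been stripped of its metadata (for example, by a screenshot or re-encode), a check like this will report no credentials, which is why provenance metadata works best alongside visible watermarks and detection tools like the internal search capability mentioned above.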

The need for online transparency: Lessons learned

  • The Partnership on AI showcased the importance of using generative AI responsibly in recent case studies, while the DCRF’s Future of Synthetic Media report provided a roadmap for balancing innovation with regulation. The accessibility of new AI-powered photo editing tools on smartphones will change how content is created and shared, reshaping the integrity of our information ecosystem. As we navigate an increasingly AI-driven technology landscape, the lessons of 2024 underscore the need for digital content authenticity at scale.


Have any comments, ideas, or opinions? Send them to us: [email protected]

Subscribe here!

