The Deep Media Digital Digest
VOLUME 16 / NOVEMBER 15TH, 2024
Welcome to the latest edition of the Deep Media Digital Digest! As AI capabilities expand, so do the stakes around security, privacy, and ethical AI usage. This week, we’re covering some game-changing updates in AI tech, key developments at Deep Media, and ongoing deepfake-related incidents from around the globe.
What's On Our Radar
OpenAI’s ChatGPT has gained new functionality, now able to interact directly with select Mac desktop apps, allowing developers to bring code from VS Code, Xcode, and other coding tools directly into the ChatGPT interface. This step is widely seen as a precursor to agentic AI systems, letting developers get code suggestions without copying and pasting each time. While the feature is initially limited to macOS, Windows support is expected to follow.
In other AI news, Alibaba Cloud’s Qwen2.5-Coder is making waves in the coding AI landscape, demonstrating strong performance across 92 programming languages and rivaling leading proprietary tools. Released as open source, Qwen2.5-Coder opens doors for developers who may have previously been priced out of advanced AI coding tools. This development is particularly notable given current U.S.-China tech tensions, with Alibaba continuing to innovate despite export restrictions on key U.S. semiconductors.
Amid the recent U.S. election, OpenAI shared that DALL-E rejected over 250,000 requests involving political figures, flagging attempts at creating election-related deepfakes. This proactive stance helped limit the spread of disinformation during a crucial time. The company also directed over 1 million ChatGPT users to verified voter information sources, exemplifying the importance of ethical measures in AI deployment.
What's Happening Now at Deep Media
In our latest research, we’ve conducted an in-depth analysis of the viral DeepTomCruise content, examining the manipulation techniques behind this high-profile deepfake and the technology needed to detect it. Read our full detection report here to see how we identified subtle indicators differentiating synthetic media from authentic videos, highlighting the capabilities and limitations of current detection methods.
On the government side, Deep Media has expanded its partnerships, now available through ITES-3S, NASA SEWP, and Tradewinds, increasing accessibility to our technology for federal agencies. These contract vehicles make it simpler for government clients to deploy our deepfake detection tools to protect against evolving synthetic media threats.
Deepfakes Dominating the Headlines
In Singapore, a troubling case of deepfake misuse is unfolding as police investigate deepfake nude images of students shared among peers at Singapore Sports School. This incident, which has led to disciplinary action and police involvement, underscores the ethical and social implications of deepfake technology when used irresponsibly. School officials are working with authorities to address the issue and prevent further harm.
In an era where deepfake technology is increasingly accessible, incidents like this highlight the urgent need for robust detection tools and education on the ethical use of synthetic media. As deepfake technology continues to evolve, so too must our commitment to detection and prevention.
~ The Deep Media Team