OpenAI Contemplates Release of Image Classifier for DALL-E 3, but Challenges Persist

The release of new tools often sparks curiosity and debate, and OpenAI's latest endeavor is no exception. The organization has been weighing the launch of an image classifier tool that can determine whether an image was generated by DALL-E 3, OpenAI's generative AI art model. While the tool's accuracy is impressive, OpenAI has set a remarkably high quality bar, and the decision on when to release it remains uncertain.

Defining the Threshold: Balancing Reliability and Quality

Sandhini Agarwal, an OpenAI researcher focusing on safety and policy, shed light on the matter recently. She revealed that the classifier tool's accuracy, by her estimation, is indeed "really good." However, it has yet to meet OpenAI's stringent quality standards.

“There’s this question of putting out a tool that’s somewhat unreliable, given that decisions it could make could significantly affect photos, like whether a work is viewed as painted by an artist or inauthentic and misleading,” Agarwal said.

OpenAI's target accuracy for the classifier is notably high. Mira Murati, OpenAI's Chief Technology Officer, stated that the classifier is "99%" reliable at identifying whether an unmodified photo was generated using DALL-E 3. While the ultimate goal may be near-perfect accuracy, Agarwal declined to confirm a specific target.

A draft of an OpenAI blog post, shared with TechCrunch, revealed another intriguing detail. The classifier remains "over 95% accurate when [an] image has been subject to common types of modifications, such as cropping, resizing, JPEG compression, or when text or cutouts from real images are superimposed onto small portions of the generated image."
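To make the quoted robustness claim concrete, the modifications it lists (cropping, resizing) can be sketched in a few lines. This is a purely illustrative toy: images here are just 2-D lists of pixel values, JPEG compression is omitted since it requires an image library, and the classifier itself is not public, so only the perturbation side is shown.

```python
# Toy stand-ins for the "common modifications" OpenAI says the
# classifier tolerates. A real robustness test would apply these
# (plus JPEG compression) to generated images before classification.

def crop(img, top, left, height, width):
    """Return the height x width sub-image starting at (top, left)."""
    return [row[left:left + width] for row in img[top:top + height]]

def resize_half(img):
    """Naive 2x downscale by keeping every other pixel."""
    return [row[::2] for row in img[::2]]

# An 8x8 "image" of integer pixel values.
img = [[r * 10 + c for c in range(8)] for r in range(8)]
cropped = crop(img, 2, 2, 4, 4)   # central-ish 4x4 region
small = resize_half(img)          # 4x4 downscaled copy
print(len(cropped), len(cropped[0]))
print(len(small), len(small[0]))
```

In practice one would measure how often the classifier's verdict survives each such perturbation, which is presumably how figures like "over 95% accurate" are obtained.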

OpenAI's hesitance to release the tool may be influenced by the controversy surrounding its previous public classifier, which was designed to detect AI-generated text. That tool faced substantial criticism and was eventually pulled due to its "low rate of accuracy."

Navigating Philosophical Quandaries: What Constitutes an AI-Generated Image?

Agarwal hinted at another factor complicating the decision—the philosophical question of what qualifies as an AI-generated image. While artwork generated from scratch by DALL-E 3 is an obvious inclusion, it becomes less clear when an image has undergone multiple rounds of edits, has been merged with other images, and then subjected to post-processing filters.

Agarwal raised the question, "Should that image be considered something AI-generated or not?" OpenAI is currently navigating this dilemma and actively seeking input from artists and individuals who would be significantly affected by such classifier tools.

Industry-Wide Challenges: The Ongoing Quest for Standardization

The issue of detecting generative media is not unique to OpenAI. As AI deepfakes become more prevalent, numerous organizations are exploring watermarking and detection techniques; DeepMind, Imatag, and Steg.AI are just a few of the entities working on these challenges. However, the industry has yet to reach a consensus on a standardized watermarking or detection method, and there is no guarantee that such safeguards cannot be circumvented.

Asked whether the classifier could be applied to images produced by non-OpenAI generative tools, Agarwal didn't commit to a specific direction but expressed openness to considering it, depending on how the classifier is received in its current form.

As the technology landscape continues to evolve, questions surrounding AI-generated content and its detection remain at the forefront of ethical and practical discussions. OpenAI's deliberations over the release of its image classifier reflect the intricate nature of the challenges involved and the organization's commitment to responsible, high-quality AI tools.
