A Proposal: How Blockchain can solve Gen AI's fake image problems
Basic online research on "the most commonly discussed topics in 2024" almost always surfaces artificial intelligence, elections, and social issues. This brings us to a controversial capability of generative artificial intelligence (GenAI or AI): fake images. Recent technological advances let ordinary citizens create hyper-realistic pictures from their own descriptions (i.e., prompts). In response, platforms have put guardrails in place to prevent users from creating fake images on sensitive topics. However, fast-paced technological development has created a cat-and-mouse race between tech companies and actors who try to leverage the same tools to create fake images.
Some of the tools currently available to the public include DALL-E, Stable Diffusion, Midjourney, and Leonardo.Ai. Others making headlines, such as OpenAI's Sora, are not yet publicly available. More recently, a new open model called Flux drew attention for its strikingly realistic image generation.
We can list the main challenges related to fake images as follows:
Detecting AI-generated images requires advanced technology and expertise that is not widely accessible to the public, and detection and verification methods are constantly changing.
Hyper-realistic fake images, combined with the so-called global election year, have led to concerns growing as fast as the resources and attention dedicated to this cutting-edge technology. Furthermore, fake news amplified by AI-generated images can spark more powerful reactions from communities in times of social unrest, as seen in many places, including the UK.
I believe blockchain technology can help us differentiate AI-generated images from human-created ones through the infrastructure proposed below:
The crux of the issue lies in the fact that today's AI tools can produce images that are indistinguishable from those created by humans. To address this, we require a robust tool or framework that can accurately determine the origin of an image, whether AI or a human created it.
The proposed solution is designed to sit at the heart of content creation, residing within our devices. It operates in two distinct steps, 'recording' and 'authentication', each playing a crucial role in distinguishing AI-generated images from human-generated ones.
A framework embedded in devices will register user-created content (images, videos, and audio) on a global blockchain ledger.
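As a rough illustration of the recording step, the sketch below hashes a captured image and submits the hash to the ledger at creation time. It is written in Python and assumes a hypothetical `ledger_client` object with a `submit_transaction` method; the real interface would depend on the blockchain chosen.

```python
import hashlib
import json
import time

def record_content(image_bytes: bytes, ledger_client) -> str:
    """Register a newly captured image on the global ledger at creation time.

    `ledger_client` is a hypothetical interface to the blockchain ledger;
    only the content hash and minimal metadata leave the device.
    """
    # A SHA-256 digest uniquely fingerprints the content without exposing it.
    content_hash = hashlib.sha256(image_bytes).hexdigest()
    record = {
        "content_hash": content_hash,
        "captured_at": int(time.time()),
        "origin": "device-camera",  # asserted by the embedded capture framework
    }
    # The device's framework submits the record as a ledger transaction.
    ledger_client.submit_transaction(json.dumps(record))
    return content_hash
```

Because only the hash is recorded, the ledger never holds the image itself, which keeps the registry lightweight and avoids publishing user content.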
Although storing device IDs on the same ledger is possible, it requires increased care due to data privacy concerns. Therefore, such data should be anonymised carefully before being recorded on the ledger.
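One way to anonymise a device ID before recording is a keyed hash, sketched below. The `secret_key` is an assumption here; in practice it might be held by the device manufacturer or another trusted party, so that the pseudonym cannot be reversed or brute-forced by anyone reading the ledger.

```python
import hmac
import hashlib

def anonymise_device_id(device_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym for a device ID before ledger recording.

    HMAC-SHA256 gives the same pseudonym for the same device every time,
    while anyone without `secret_key` cannot map it back to the raw ID.
    """
    return hmac.new(secret_key, device_id.encode("utf-8"), hashlib.sha256).hexdigest()
```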
Once a global registry is established, platforms can authenticate the content under question by posting requests to the blockchain ledger.
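The authentication step could then be as simple as recomputing the hash and querying the registry, as in this sketch (again assuming a hypothetical `ledger_client` with a `lookup` method):

```python
import hashlib

def authenticate_image(image_bytes: bytes, ledger_client) -> bool:
    """Check whether an image was registered on the ledger at capture time.

    Recomputes the content hash and asks the (hypothetical) ledger whether
    a matching record exists. Absence of a record does not prove the image
    is AI-generated, only that it was never registered by a device.
    """
    content_hash = hashlib.sha256(image_bytes).hexdigest()
    return ledger_client.lookup(content_hash) is not None
```

Note that an exact hash changes if the image is re-encoded or resized, so a production system would likely complement this with perceptual hashing or an embedded signature; that is beyond this sketch.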
This framework and the proposed infrastructure could substantially mitigate the significant risks associated with AI-generated fake images.