A Proposal: How Blockchain can solve Gen AI's fake image problems
Image created using FLUX.1 [dev]


A quick online search for "the most commonly discussed topics in 2024" almost always surfaces artificial intelligence, elections, and social issues. This brings us to a controversial capability of generative artificial intelligence (GenAI or AI): fake images. Recent technological advances let ordinary users generate highly realistic pictures from a text description (i.e., a prompt). In response, platforms have put guardrails in place to prevent users from creating fake images on sensitive topics. However, fast-paced development has turned this into a cat-and-mouse race between tech companies and actors who leverage the same tools to create fake images.

Publicly available tools include DALL-E, Stable Diffusion, Midjourney, and Leonardo.Ai, while headline-grabbing tools like OpenAI's Sora are not yet open to the public. Most recently, a new open model called Flux drew attention for its strikingly realistic image generation.

The Pope in Balenciaga Drip

We can list the main challenges related to fake images as follows:

  • Misinformation and trust issues: Fake images contribute to the spread of misinformation, making it difficult for people to trust the authenticity of visual content. They can be used to manipulate public opinion or create false narratives, impacting social and political landscapes.
  • Legal and ethical concerns: The creation and distribution of fake images raise legal and ethical issues, including copyright infringement and violation of privacy rights. There is a growing need for regulations to address these concerns and ensure the responsible use of AI technologies.
  • Economic impact: Fake images can disrupt business models that rely on content authenticity, such as journalism and digital marketing. They can also lead to financial losses through fraud and scams.

Detecting AI-generated images requires advanced technology and expertise that are not widely accessible to the public, and the detection and verification methods themselves are constantly changing.


France’s President Emmanuel Macron and the arrest that never happened

Super realistic fake images, combined with the so-called world election year, have fuelled concerns that are growing as fast as the resources and attention dedicated to this cutting-edge technology. Furthermore, fake news amplified by AI-generated images can spark stronger reactions from communities in times of social unrest, as recently seen in many places, including the UK.


I believe blockchain technology can help us differentiate AI-generated images from human-created ones through the infrastructure proposed below:


The crux of the issue is that today's AI tools can produce images indistinguishable from those created by humans. To address this, we need a robust tool or framework that can reliably determine an image's origin: whether it was created by AI or by a human.

The proposed solution is designed to sit at the heart of content creation, residing within our devices. It operates in two distinct steps, 'recording' and 'authentication', each playing a crucial role in distinguishing AI-generated images from human-generated ones.

A framework embedded in devices will register user-created content (images, videos, and audio) on a global blockchain ledger.
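As a rough illustration of this recording step, the sketch below fingerprints captured content and registers it on the ledger. It is a minimal sketch only: the in-memory LEDGER dictionary and the record_capture function are hypothetical stand-ins for a real blockchain client, which would submit a signed transaction instead.

```python
import hashlib
import json
import time

LEDGER: dict[str, dict] = {}  # in-memory stand-in for the global blockchain ledger

def record_capture(image_bytes: bytes, device_token: str) -> str:
    """Fingerprint device-captured content and register it on the ledger."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    record = {
        "fingerprint": fingerprint,
        "device": device_token,        # anonymised token, see the note below
        "captured_at": int(time.time()),
    }
    # A real deployment would submit this record as a signed blockchain
    # transaction; here we simply store it in the stand-in ledger.
    LEDGER[fingerprint] = record
    return fingerprint

# Example: registering a freshly captured photo
fp = record_capture(b"...raw sensor data...", device_token="anon-7f3a")
print(json.dumps(LEDGER[fp], indent=2))
```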



Image Recording Process


Although storing device IDs on the same ledger is possible, it requires increased care due to data privacy concerns. Therefore, such data should be anonymised carefully before being recorded on the ledger.
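One way this anonymisation could work is to derive a keyed hash of the device ID on the device itself, so the raw identifier never reaches the ledger. The anonymise_device_id function and the secret_key parameter below are illustrative assumptions, not part of any existing standard.

```python
import hashlib
import hmac

def anonymise_device_id(device_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonymous token; the raw ID never leaves the device."""
    return hmac.new(secret_key, device_id.encode(), hashlib.sha256).hexdigest()[:16]

# Example: the token, not the serial number, is what gets recorded
token = anonymise_device_id("SN-12345-ABCDE", secret_key=b"manufacturer-secret")
print(token)
```

A keyed (HMAC) construction is preferable to a plain hash here because an unsalted hash of a device serial number could be reversed by brute-forcing the relatively small serial-number space.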


Image Authentication Process

Once a global registry is established, platforms can authenticate the content in question by posting verification requests to the blockchain ledger.
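Continuing the sketch above, authentication reduces to recomputing the content's fingerprint and checking the ledger for a matching record. The authenticate function and the reuse of the in-memory LEDGER are, again, illustrative assumptions.

```python
import hashlib

def authenticate(image_bytes: bytes, ledger: dict) -> bool:
    """Return True if the content has a matching registration record."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint in ledger

# Example, reusing LEDGER from the recording sketch above
if authenticate(b"...raw sensor data...", LEDGER):
    print("Registered as device-captured (human-generated) content")
else:
    print("No record found: possibly AI-generated or altered")
```

Note that exact hash matching breaks as soon as an image is re-encoded, resized, or cropped; a production version of this framework would likely need perceptual hashing or signed provenance metadata that travels with the file.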

This framework and the proposed infrastructure can mitigate significant risks associated with AI-generated fake images.






