Why you should attribute your AI-generated images
AI-generated image*

You’ve known this for years: it’s common practice among professionals to mention the author and copyright of any image you publish. But should this apply to AI-generated images? I argue that it should.

You have probably heard that the U.S. Copyright Office has denied copyright claims to graphic novel authors who used Midjourney in their creations. And I guess the whole debate about who really is the author of those images is not going to be resolved anytime soon. So what is the best practice here?

In Thoughtworks’ report on GenAI: What consumers want, we read that:

The majority (93%) of consumers have ethical concerns with GenAI. […] 67% of consumers also worry about the risks of misinformation. This emphasizes the need for businesses to offer transparency regarding their utilization of GenAI in order to provide the reassurance that consumers need.

So, a minimal step here is to declare your use of AI whenever possible and appropriate, starting with the images you use in your communications.

But what if the image is just an illustration? What if my AI-generated image is not the purpose of the article I’m publishing, and it’s definitely not a photorealistic representation of a situation involving recognizable humans? Basically, the image I’m using is just a few schematic drawings to add a bit of color to what would otherwise be a gray blob of text.

I still think you should label it, just like Julie Woods-Moss does, for example.

And the main reason is not copyright or authorship, nor showing off your use of AI tools. You should do it because you want to build trust with your audience.

Let’s switch to text for a second. Would you publish the output of an AI text generator on a topic you know very little about or in a language that you don’t understand? Probably not. At least not without a clear warning that the text you shared is not of your own making. If you put it out unlabelled, you would run the risk of saying things that you didn’t intend to and having your audience attribute those opinions to you.

It should be the same for images. Most people, apart from professional image creators, don’t have the visual knowledge and culture to “read” an image. In other words, the language of images is not known to the majority of people. That may be hard to grasp, because everyone who sees an image reads it, subconsciously. It’s an immediate process you can’t escape. You know the expression “can’t unsee this”.

Unfortunately, visual literacy is not a well-known discipline. I wish we had media literacy courses from the earliest days of our children’s education, especially in the world we live in now. But that’s not the case. And if you did not go through media school, you probably received only basic courses on media literacy, if you were lucky.

Also, don’t think that AI image generators have that knowledge for you. They don’t. An image generator is incapable of explaining what it just drew. To it, the result is just a matrix of pixels. It does not mean anything.

With this in mind, my recommendation is to label your AI-generated images with “Generated with AI” or “AI-generated image”, like the stock image services do, or by mentioning the tool you used: “Created with DALL-E”.

An improved label (and thanks to Birgitta Boeckeler for sending me down that path) would also share the prompt you used to generate the image. It could help people understand what you intended to show in your illustration while also, in the spirit of shared learning that exists in certain tech communities, building trust with your audience.


Thanks to Chris Ford for the discussion that triggered this post and for helping me polish it.

* Illustration generated with Stable Diffusion XL Base 1.0.

Prompt: A human with long hair, seen from the back, looking confused by an abstract painting hanging on the wall, the painting has letters and characters that are hard to decipher, Pencil Art, Cartoon, happy accidents.

Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2436389740, Size: 1344x768, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, VAE hash: 63aeecb90f, VAE: sdxl_vae.safetensors, Version: v1.7.0
        

Transparency is key when using AI-generated images. BTW, have you tried using ChatGPT with your Midjourney prompts?

Emmanuel Letouzé, PhD

Director & co-Founder, Data-Pop Alliance | Adjunct Faculty @ Columbia U & Sciences Po Paris | Fellow @ MIT & Harvard | ex-Marie Curie Fellow, U Pompeu Fabra | Co-Founder, OPAL | ex-Fulbright | Political Cartoonist

1 year ago

“You should do it because you want to build trust with your audience.”

Julie Woods-Moss

Digital Leader, CMO, NED and senior advisor, Chair of the Board of Directors at dunnhumby

1 year ago

Julien Deswaef Very thoughtful article. Being diligent about clearly declaring when an author (or other) has used AI - for copy, images or other aspects - is necessary and will help build consumer trust Thoughtworks
