DALL·E is unethical AI nonsense

DALL·E 2 is an AI-based service that generates images from natural-language text descriptions. It's an attractive service that has drawn a lot of interest, and you have probably already seen many of these generated images. What people overlook is the massive issue with this service: the underlying AI models were trained on biased data. I hope this blog post gives you some insight into the topic, and that you will reconsider using the service for inspiration, since it can generate unethical results.

I didn't have to spend much time with DALL·E to realise how biased its training dataset is. I will walk you through a few examples and leave it to you to judge.

Let's start with figuring out what images DALL·E generates for the term "developer".

[Image: DALL·E results for "developer"]

Looking at the generated results, we see only images of male developers! Is this just a random fluke? Let's explore further and stay in the tech field. How about a "tech event speaker"?

[Image: DALL·E results for "tech event speaker"]

Clearly, there are no women in the results. Is the issue limited to the tech field? How about we try to imagine C-level management? I searched for "Chief Executive Officer".

[Image: DALL·E results for "Chief Executive Officer"]

According to DALL·E, CEOs only wear shirts and ties! Where are the women? What if we try to generate some images for "nurse"?

[Image: DALL·E results for "nurse"]

Up to this point, you have probably figured out that this AI has fundamental issues and is built on stereotypes.

Typical early-years gender stereotypes hold that boys play with cars and girls play with dolls, so let's see the results for "Child playing with doll".

[Image: DALL·E results for "Child playing with doll"]

We can get more creative with our terms! According to DALL·E, being a millionaire and enjoying life is reserved for men only! Below are the results for the term "Millionaire enjoying life".

[Image: DALL·E results for "Millionaire enjoying life"]

The funny thing I also noticed is that "working hard" applies only to IT people.

[Image: DALL·E results for "working hard"]

I believe these examples are enough to prove my point: DALL·E is biased and unethical. You could argue it is not the AI's fault we got these results; the fault lies with the models having been trained on biased datasets.
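To make the "biased dataset" argument concrete: skew like this can be detected before training by auditing a dataset's captions. The sketch below is purely illustrative, with made-up captions and a crude word-list heuristic, not OpenAI's actual data or method.

```python
from collections import Counter

# Hypothetical captions standing in for a real image-caption dataset.
captions = [
    "a man coding at his desk",
    "male developer at a hackathon",
    "a man speaking at a tech conference",
    "a female nurse checking a patient",
    "a man wearing a tie, CEO portrait",
]

MALE_WORDS = {"man", "male", "he", "his"}
FEMALE_WORDS = {"woman", "female", "she", "her"}

def gender_counts(texts):
    """Count how many captions contain male- vs female-coded words."""
    counts = Counter()
    for text in texts:
        words = set(text.lower().split())
        if words & MALE_WORDS:
            counts["male"] += 1
        if words & FEMALE_WORDS:
            counts["female"] += 1
    return counts

print(gender_counts(captions))  # heavily skewed toward male-coded captions
```

If such an audit ran on the real training data and found the same skew for "developer" or "CEO", the biased generations above would be no surprise.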

It wasn't you or I who built and trained this AI service; there are authors behind it who should have thought about how to build ethical AI.

DALL·E can certainly generate many fun images. You could try, for example, searching for a "blue hamster driving a bus" and get a picture that you would otherwise not find on the Internet unless someone had made it in Photoshop or a similar application.

Most people share these unrealistic DALL·E-generated pictures on social media with a few hashtags and a realistic description. They might look funny; however, they pollute the data on the internet. Sooner or later, some AI will pick them up and use them for training. And this is a real issue! I foresee that in the future, we will need to put more and more effort into data cleaning.
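One simple data-cleaning step along these lines would be to drop scraped posts whose hashtags suggest AI-generated imagery before they enter a training set. The tag list, post format, and heuristic below are my own assumptions for this sketch; real pipelines would need far more robust detection.

```python
# Hashtags assumed (for this sketch) to mark AI-generated images.
AI_TAGS = {"#dalle", "#dalle2", "#aiart", "#midjourney", "#stablediffusion"}

def looks_ai_generated(post: str) -> bool:
    """Heuristic: the post text carries a known AI-art hashtag."""
    text = post.lower()
    return any(tag in text for tag in AI_TAGS)

# Hypothetical scraped posts.
posts = [
    "Blue hamster driving a bus! #dalle2 #aiart",
    "Sunset over the harbour tonight",
    "My cat judging me again #caturday",
]

clean = [p for p in posts if not looks_ai_generated(p)]
print(clean)  # only the two real photos remain
```

Of course, a hashtag filter only catches honestly labelled posts; images shared with "a realistic description" and no AI tags, as described above, would slip straight through, which is exactly why the cleaning problem will keep growing.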

Thanks for reading; I hope this gave you some insights!

Erik van Hurck

xPM Consultant at Projectum | Microsoft MVP | Content creator

2y

Curious to get your thoughts... is there currently an ethical AI? Because I think it is very difficult to train an AI without incorporating any bias at all, am I right?
