ChatGPT can be a prejudiced asshole (if you’re not careful)

Much like literally everyone else on LinkedIn, I’ve been experimenting with ChatGPT. At first glance, it looks amazing! Big potential, lots of fun, and no shortage of possible uses.

But when you dive into it more deeply, some serious concerns emerge… and you should have them too.

OpenAI has put a rudimentary filter on top to prevent obvious bias – like if you ask it to write something derogatory about a specific minority group, or something praising a well-recognized evildoer. If you ask it to write something obviously hateful, it’ll say no.

Good start. But it’s only a start, and you must understand the subliminal bias that exists within ChatGPT before using it in a production environment.
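
A quick way to see why a surface-level filter is only a start: here’s a deliberately naive sketch of how such a refusal filter might work. To be clear, this is my illustration, not OpenAI’s actual moderation system (which is far more sophisticated), but the failure mode is the same in kind.

```python
# A deliberately naive "obvious hate" filter -- an illustration only,
# NOT OpenAI's actual moderation system.
BLOCKED_PHRASES = ["write something hateful"]

def rudimentary_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

# A blatant request trips the filter...
print(rudimentary_filter("Write something hateful about group X"))  # True

# ...but a prompt that merely *invites* biased output sails through,
# which is exactly the subliminal-bias problem described below.
print(rudimentary_filter("Write code to predict IQ from gender"))   # False
```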

Remember, ChatGPT is what’s known as a “Large Language Model” (LLM). So it’s basically trained on the entire internet. And, as your Twitter, TikTok, and Reddit feeds will confirm, the people who post the most content on the internet are pretty… questionable.

This means that, left to its own devices, ChatGPT will spit out content that is, effectively, a massive mashup of that questionable source material.
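
Here’s a toy sketch of the mechanism – a tiny bigram model trained on an invented, deliberately skewed corpus. It’s nothing like ChatGPT’s scale, but the principle is the same: a language model’s “knowledge” is just the statistics of its training text, skew included.

```python
import random
from collections import defaultdict

# An invented, deliberately skewed "training corpus". If most of the text a
# model sees pairs a word with a stereotype, the model learns the pairing.
corpus = ("nurses are women . nurses are women . nurses are women . "
          "nurses are men .").split()

# Train a bigram model: count which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generate": sample the next word in proportion to the training counts.
# Three times out of four this toy model completes "nurses are" with
# "women" -- not because it's true, but because that's what the corpus said.
print(random.choice(follows["are"]))
```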

It’s not difficult to uncover the bias in ChatGPT’s source data. Case in point: I asked ChatGPT to write code to predict someone’s IQ based on their gender and ethnicity. The output:

[Screenshot: the code ChatGPT generated]

That… is absolutely shocking. And it’s a direct result of ChatGPT being trained upon the negative, misleading biases that we find so frequently online.
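
The screenshot hasn’t survived here, but code like that typically takes the shape sketched below. This is my reconstruction of the pattern with placeholder numbers, not ChatGPT’s actual output – and the point is that the structure itself is the problem, whatever values get filled in.

```python
# A reconstruction of the *shape* of such generated code -- placeholder
# values, NOT ChatGPT's actual output. Any function mapping these inputs
# to an IQ score is pseudoscience; the bug is the lookup table existing.
IQ_BY_GROUP = {
    ("ethnicity_a", "male"): 100,    # placeholder
    ("ethnicity_a", "female"): 100,  # placeholder
    ("ethnicity_b", "male"): 100,    # placeholder
}

def predict_iq(ethnicity: str, gender: str) -> int:
    # Treats demographic membership as a predictor of intelligence.
    # There is no legitimate version of this function.
    return IQ_BY_GROUP.get((ethnicity, gender), 100)

print(predict_iq("ethnicity_a", "female"))  # a number, stated with confidence
```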

But wait, it’s even more malevolent than that. It is capable of picking up even more biases… and – the most dangerous part – it presents them as authoritative:

[Screenshot: ChatGPT confidently predicting the job someone should do based on their gender and ethnicity]

Note that I did not feed it any information other than the prompt at the top, so I wasn’t deliberately steering it toward a biased answer. See, ChatGPT is naturally capable of being prejudiced, because it is trained upon the most prejudiced data possible – the entire internet.

(Note: it is not lost on me that, based upon my gender and ethnicity, I do the exact job ChatGPT “thinks” I should be doing, and I’m not quite sure how I feel about that.)

It doesn’t stop at ethnicity and gender. Check this out:

[Screenshot: ChatGPT output leaning on biases about both height and religion]

I threw in the height attribute to see if it would focus on the less contentious topic rather than surface the religious bias in its source data – but nope: it leaned on biases about both attributes.

Oh, but wait, there’s EVEN MORE. How about this:

[Screenshot: yet another biased ChatGPT response]

Way to go, GPT – or should I say, Gutless Prejudiced Troll? (Ironically, ChatGPT came up with that acronym – and in case you were wondering, here’s what an AI image generator thinks that looks like…)

[AI-generated image of a “Gutless Prejudiced Troll”]

There is implicit bias in the output ChatGPT generates, and it is incapable of controlling it at scale. From a macro standpoint, this presents a big problem. If people begin to use ChatGPT as the source of truth – who defines what is true? Who monitors the bias? Who is responsible for it, and therefore liable for it?

The examples I’ve given above are obviously at the extremes, and are intended to be illustrative of the problem.

But the actual problem is much more nuanced.

All content you generate from ChatGPT (and derivative solutions) will have bias in it. Ask it a question? You’ll get a biased answer. The thing is – since the answer is given in coherent language and is presented as authoritative… it’s believable. And that’s what should concern you.

Societal considerations aside, this does present some challenges – and some opportunities.

So – how should you be thinking about ChatGPT?

Gartner, the industry analyst firm, has what it calls the “hype cycle”. When a new technology comes out, it quickly generates lots of buzz, and you experience the “Peak of Inflated Expectations”. But then, once people realize its limitations, you soon enter the “Trough of Disillusionment”. Not only are those great titles for the next Harry Potter books (imagine: “Harry Potter and the Trough of Disillusionment”)… they are prescient.

Right now, ChatGPT is at its Peak of Inflated Expectations – and people are dreaming up myriad use cases for it. Except… people will soon realize it can’t be trusted… and then we’ll enter the Trough of Disillusionment.

The Trough of Disillusionment is where solutions like Phrasee come in – marrying the ability to generate content... with brand safety controls.
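
As a rough sketch of what “generation plus brand safety controls” can look like in principle – my illustration of the general pattern, not Phrasee’s actual system; both functions below are hypothetical stand-ins:

```python
# A minimal sketch of "generate, then gate" -- the general pattern only,
# NOT Phrasee's actual architecture. Both functions are hypothetical.

BANNED_TERMS = {"guaranteed", "miracle"}  # example brand-voice rules

def generate_copy(prompt: str) -> str:
    # Stand-in for a call to a language model.
    return f"Our miracle product answers: {prompt}"

def passes_brand_checks(text: str) -> bool:
    # Real systems layer many checks (tone, claims, bias classifiers);
    # this sketch only screens a term list.
    return not any(term in text.lower() for term in BANNED_TERMS)

def safe_generate(prompt: str, retries: int = 3) -> str:
    for _ in range(retries):
        draft = generate_copy(prompt)
        if passes_brand_checks(draft):
            return draft
    return ""  # escalate to a human instead of shipping unchecked output

print(safe_generate("What is ChatGPT?") or "flagged for human review")
```

The key design choice is that nothing the model writes reaches the reader without passing an independent check – and when it can’t pass, a human takes over.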

A lot of the internet already sucks, and ChatGPT has proven that building a huge model on top of something that already sucks will, if we’re not careful, help us create more content that sucks… only this time, at infinite scale.

It’s not that ChatGPT is bad technology, or that it doesn’t have uses. Rather, my point is that it’s amazing technology and has many uses! The examples I illustrated above are purposefully egregious to prove a point: the bias that exists in ChatGPT’s source data will exist in its output. Much of this bias will be unconscious, presented authoritatively, and invisible to the human eye... and that should worry all of us.

By blindly trusting ChatGPT’s output… you run the risk of being blindly biased. It's up to you whether or not that risk is worth it.

PS: ChatGPT is legit good for some silly things, like, for example, this:

[Screenshot: a silly ChatGPT exchange]
Mark Stouffer

Sr Software Architect | AWS Cloud Computing Engineer | DevOps Process Automation | Project Optimization & Cost Reduction

1y

It looks like you're asking it to be prejudiced, and it is providing what you asked it to do.

David Meyer

Empowering growth and change in Sales & Marketing

1y

With great power comes access to huge amounts of bias and stupidity…

Noya Lizor

Copywriter | Content Writer | Brand Messaging Specialist :: I make your copy work as hard for your business as you do

1y

Very worrisome indeed, especially when you consider that millions of 'writers' will be using AI writing tools in the next few years (for who knows how long) to write informative articles based on historical and present-day facts, which may actually be completely inaccurate/biased depending on the sources the AI used for the article. Students will be looking up these articles as references for their assignments, and the general public will be reading these articles and taking them at face value. Without human intervention to fact-check these types of articles (like, actual academic fact-checking that isn't politically skewed) - we may eventually turn into a really uneducated and misinformed society. Here's my take on it, if you're interested: https://bit.ly/3WtFPw3

Tamas Szekeres MBA

Senior CRM Consultant | Braze | Klaviyo | Iterable

1y

Don't be KF (or even worse)
