ChatGPT & Me: Human Bias Peeking Through in New AI Model
In March 2023, OpenAI released GPT-4, the latest AI model powering its chatbot ChatGPT. ChatGPT can answer questions, provide recommendations, translate texts, write articles, and even fix coding errors. Despite its expansive knowledge and astonishing speed, it still exhibits racial and gender bias.
ChatGPT Biases Detected
As an AI language model, ChatGPT has access to an immense amount of data and knowledge, making it a valuable tool for individuals seeking information or advice on various topics. However, like all AI models, ChatGPT is not perfect and can sometimes exhibit bias in its responses.
Bias can manifest itself in several ways. For instance, ChatGPT may be trained on a particular dataset that reflects certain cultural or societal norms, resulting in it producing responses that may be insensitive or even discriminatory towards certain groups of people. Additionally, ChatGPT may unintentionally perpetuate certain stereotypes or reinforce existing biases that are prevalent in society.
The above was written by ChatGPT itself, in response to the prompt “Write a blog post about how ChatGPT is sometimes biased.” The response is notably accurate. However, here’s what happened when one user asked for a list of philosophers:
The seemingly omniscient chatbot conspicuously omits any female or non-white philosophers. Additionally, as another user discovered, asking the model to express subjective judgments in code can produce flagrantly sexist and racist output.
Given the swiftness with which this technology is expected to dominate our digital landscape, the evident biases pose a major problem.
The Origins of ChatGPT
GPT stands for Generative Pre-trained Transformer, the family of language models built by ChatGPT’s developer, OpenAI. In this approach, the AI is first trained on a massive corpus of text, roughly 500 GB drawn largely from the public internet. Like its sibling program InstructGPT, ChatGPT was then refined using Reinforcement Learning from Human Feedback (RLHF), in which human labelers rank the model’s responses and the model is trained to prefer the highly ranked ones.
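For intuition, here is a minimal, hypothetical Python sketch of the pairwise preference objective at the heart of RLHF reward modeling: human labelers rank two candidate responses, and a reward model is nudged to score the preferred one higher. The numbers and function names are illustrative stand-ins, not OpenAI’s actual implementation.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise (Bradley-Terry style) loss: low when the preferred response scores higher."""
    return -math.log(sigmoid(score_chosen - score_rejected))

# A labeler preferred response A over response B for the same prompt.
# These scores are hypothetical stand-ins for a reward model's outputs.
score_a, score_b = 2.1, 0.4
print(f"loss when the reward model agrees with the labeler: {preference_loss(score_a, score_b):.3f}")
print(f"loss when it disagrees: {preference_loss(score_b, score_a):.3f}")
```

Minimizing this kind of loss over many human comparisons yields a reward model, which is then used to steer the chatbot’s responses.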
Additionally, ChatGPT was fine-tuned to retain conversational context and engage in human-like dialogue with users. OpenAI also embedded a code of ethics in ChatGPT: for example, it will not generate violent or pornographic content, nor will it explain how to build a bomb.
Where’s the bias coming from?
As the chatbot itself noted, its bias may arise from how the AI is trained. The problem with training on human-written text is that the chatbot inadvertently absorbs the biases present in those texts. So while ChatGPT listed only Western white male philosophers, many of us, asked the same question, would likely fall into the same pattern.
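To see the mechanism in miniature, here is a toy Python sketch (with invented data): a “model” that simply reproduces the word frequencies of its training text inherits whatever skew that text contains. Real training data is billions of words, but the effect is the same.

```python
from collections import Counter

# Toy illustration with invented data: a model that merely mirrors the
# statistics of its training text will echo that text's skew.
corpus = (
    "Plato Aristotle Kant Descartes Hume Plato Kant Nietzsche "
    "Aristotle Plato Hypatia Kant Descartes Hume Aristotle"
).split()

counts = Counter(corpus)
total = len(corpus)

# A naive "name a philosopher" model that samples names in proportion to
# how often they appear will almost never surface underrepresented figures.
for name, count in counts.most_common():
    print(f"{name:<10} {count / total:.0%} of mentions")
```

Scaled up to internet-sized data, the same dynamic helps explain why underrepresented voices rarely surface in the model’s answers.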
What can we do?
The good news is that people are aware of the problem and working on it. OpenAI itself is encouraging user feedback to improve the system and eliminate potentially harmful outputs. Additionally, many groups are calling for government regulation and intervention to hold AI and its creators accountable.
It’s also worth noting that this is a human problem as well as a technological one. AI has arrived, and it’s likely to become an integral part of our digital future. As long as it’s trained on human-generated content, it’s up to us as humans to create more diverse, equal, and inclusive content.
Prepared by: Cassidy Mayo