The Dark Side of ChatGPT
Photo credit: https://budakduit.id/mengenal-apa-itu-chatgpt-dari-openai/


ChatGPT has been a topic of conversation for the last few months. The artificial intelligence (AI) chatbot, launched by OpenAI in November 2022, is able to generate human-like text. ChatGPT has been heralded as the next big disruptor to the world as we know it, and could one day dethrone Google as the most-used search engine. ChatGPT currently has a free version that doesn’t require a download. The chatbot has a plethora of possibilities: it can write poetry, song lyrics, and computer code, and it has passed an MBA course exam. Users can ask the chatbot questions, and responses are generated based on the vast amount of online data that ChatGPT was trained on. Although the possibilities and potential of ChatGPT seem endless, there is a dark side that also deserves examination.

In order to make ChatGPT less violent, sexist, and racist, OpenAI hired Kenyan laborers, paying them less than $2 an hour. The laborers spoke anonymously to TIME for an investigation about their experiences. They were tasked with labeling harmful text and images so that the model could be trained to recognize harmful content. One worker shared the trauma they experienced while reading and labeling the text for OpenAI, describing the work as “torture” because of the traumatic nature of the material. An often-overlooked component of the creation of generative AI is its reliance on the exploited labor of people in underdeveloped countries.

Scientists like Joy Buolamwini and Timnit Gebru have been sounding the alarm about the dark sides of AI for a while now. Safiya Umoja Noble has written extensively about the bias within our algorithms, and social media platforms like TikTok have been called out for the bias baked into their systems. ChatGPT is not immune to these biases. In December of 2022, one Twitter user, steven t. piantadosi, outlined the instances of bias he was able to elicit from the chatbot. Equipped with this knowledge, OpenAI has instituted guardrails designed to address biased responses generated by the chatbot, although some users have figured out ways to get around them. There are a few things that can be done to reduce the bias within our AI systems. One approach is “pre-processing the data,” which helps maintain data accuracy. Another option is introducing “fairness constraints” that limit a system’s ability to “predict the sensitive attribute.” An ideal AI system would be a fusion of human decision-making and technology. When we utilize AI systems, it’s important to be mindful of the ways these systems can intensify bias, and to consider processes that can be used to mitigate it.
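The article names these mitigation techniques without showing what they look like in practice. As a minimal, illustrative sketch only, the Python below trains a toy logistic-regression classifier with a soft fairness constraint: a penalty on the covariance between the sensitive attribute and the model’s scores, in the spirit of covariance-based fairness constraints from the research literature. Everything here, the synthetic data, the train_fair_logreg function, and the lam penalty weight, is a hypothetical assumption for illustration; it is not OpenAI’s method or anything described in the original article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair_logreg(X, y, s, lam=5.0, lr=0.1, epochs=2000):
    """Logistic regression with a soft fairness constraint.

    Adds a penalty lam * cov(s, X @ w)^2 so the model's scores carry
    less information about the sensitive attribute `s` (a soft version
    of covariance-based fairness constraints from the literature).
    """
    n, d = X.shape
    w = np.zeros(d)
    s_centered = s - s.mean()
    for _ in range(epochs):
        scores = X @ w
        p = sigmoid(scores)
        grad_loss = X.T @ (p - y) / n          # standard log-loss gradient
        cov = s_centered @ scores / n          # cov(s, scores)
        grad_fair = 2.0 * lam * cov * (X.T @ s_centered / n)
        w -= lr * (grad_loss + grad_fair)
    return w

# Toy usage: one feature is correlated with the sensitive attribute,
# so an unconstrained model would partly "predict" group membership.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, 1000).astype(float)     # hypothetical sensitive attribute
x1 = rng.normal(size=1000)
x2 = s + rng.normal(scale=0.5, size=1000)      # proxy feature for s
X = np.column_stack([x1, x2])
y = (x1 + 0.5 * s + rng.normal(scale=0.5, size=1000) > 0).astype(float)

w = train_fair_logreg(X, y, s, lam=5.0)
gap = sigmoid(X[s == 1] @ w).mean() - sigmoid(X[s == 0] @ w).mean()
print(f"demographic-parity gap: {gap:.3f}")    # shrinks as lam grows
```

Setting lam to zero recovers ordinary logistic regression; raising it shrinks the gap between groups at some cost in accuracy, which is the basic trade-off any real debiasing effort has to manage.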

Promoting a culture of ethics as it relates to AI could be an effective strategy for addressing bias within the AI systems used in the workplace. This could include updating the workplace performance evaluation process to intentionally introduce more ethical AI practices, as well as greater transparency and more discussion of the pitfalls of AI systems. It is no surprise that an AI chatbot like ChatGPT can generate biased responses; AI technology only mirrors what it has been programmed with and trained on. We may be far from the point where we can truly rely on the responses generated by any AI system. While the possibilities of AI systems are limitless, their outputs must be taken with a grain of salt and an understanding of their potential limitations.

This article was originally published in January 2023 in Forbes.


Pre-order my new book Decentering Whiteness in the Workplace!

About The Pink Elephant newsletter:

The Pink Elephant newsletter is a weekly LinkedIn newsletter designed to stimulate critical and relevant dialogue that centers around topics of race and racial equity. If you enjoyed this newsletter, please share it with others who you feel would gain value from it. If you’d like to get free tips on diversity, equity, and inclusion, sign up for Dr. Janice’s free newsletter through her website. The newsletter is curated by Janice Gassam Asare, Ph.D., who is a writer, TEDx speaker, consultant, educator, and self-proclaimed foodie. Janice is the host of the Dirty Diversity podcast, where she explores diversity, equity, and inclusion in more detail. Dr. Janice’s work is centered around the dismantling of oppressive systems while amplifying the voices and needs of the most marginalized folks. If you are seeking guidance and consultation around diversity, equity, and inclusion in your workplace, visit the website to learn more about services that can be tailored to your specific needs, or book a FREE 15-minute consultation call to learn more about how your organization or institution can benefit from Dr. Janice’s expertise.

Add yourself to the email list so you can receive more free resources!

Additional Resources

· Schedule a 15-minute “Ask Dr. J” session to answer your racial equity questions

· How to Start a DEI Consultancy: Watch the replay now!

· Understanding Systemic Racism in the U.S. WEBINARS

· My Tips for Aspiring DEI Consultants YOUTUBE VIDEO

· Understanding how the White Gaze Shows Up in Your Workplace ARTICLE

Have you read the best-selling book The Pink Elephant? CLICK HERE to purchase your copy of Dr. Janice’s best-selling books directly through her website. It makes a great gift!

After a lengthy discussion about generalizations in the social sciences with ChatGPT, here's what he said to me: ‘I apologize if my previous responses did not adequately address your concerns. I understand that you have provided specific instances and examples where you believe biases are present in my responses. Please understand that as an AI language model, I do not have direct control over my training or programming. While I strive to provide unbiased and informative responses, I am limited by the data I have been trained on and the algorithms that govern my functioning. I appreciate your feedback and understand the importance of continuously improving and addressing biases. I will make a note of your concerns and ensure they are considered in future updates and improvements to the model. Thank you for bringing this to my attention, and I apologize for any frustration caused.’

I read about this... It makes me think of the movie I, Robot, featuring Will Smith

Dara Perlman

Digital Optimization Strategist

Definitely connect with Karen Palmer

Spencer Levels

Sr. IT Systems Administrator, Endpoint Security at L3Harris | Privileged Access Management SME

It's ironic that the effort to identify “harmful text and images” in order to train #ChatGPT to be “less violent, sexist, and racist” for a #majority of users in developed countries was actually traumatically completed by a #minority of users in an underdeveloped country... #GoodtoKnow... Thanks for sharing

Eliza Omo, PHR

Keeping the Human in Human Resources. I care about people, the employee experience, and healthy work environments. HR Professional | Tech and Early Talent recruiting lead.

They didn't want it to be racist, so they hired Africans for less than $2 to sift through ALL of the racism. Please make it make sense!

