The Imperfections of ChatGPT: A Human Experience
Photo from the Warner Bros. film, Her


As a Program Manager, I love efficiency, and in a world that is embracing AI, it's exciting to have a new tool that helps you scale no matter what profession you're in. That said, while Artificial Intelligence is incredibly popular right now, it's important to acknowledge that it's not perfect. We still have to remember that we're actively QAing and building best practices for even the simplest use cases. We can't assume that its speed always delivers a return better than a human one.

I experienced this recently while editing a draft of a note I hoped to send. Knowing my audience's love for certain films, I asked ChatGPT to reference lines from a specific list of movies that my audience would relate to and that I'd personally seen. The prompt was along the lines of, "Please help me write a note using the details below and please reference lines from movies such as [insert list here]."

As an analyst formerly in trade, and still one at heart, I QA'd the quotes in the final draft to ensure they were actual lines from the films. One particular film I've seen several times, and the returned quote was unfamiliar to me. Unable to find any reference to the line, I asked ChatGPT, "What scene in Anatomy of a Fall does [this] quote come from?" The quote was not from the film. ChatGPT replied, "It appears to be a general statement often attributed to themes of loyalty and support found in various contexts but did not come from any specific scene in [the movie]." ChatGPT then immediately provided a new draft with direct film quotes. While you could argue that my prompting wasn't sharp enough to get exact quotes (2 out of 3 references were accurate, direct quotes), this experience highlighted how easily ChatGPT hallucinations or misinformation can spread. If I hadn't taken a second look, I would have sent the note thinking everything was accurate and directly quoted, because of my trust in AI's ability to provide returns that match my expectations.

The draft I wrote was just a note I was trying to enhance, but people use GenAI for C-level presentations, code QA, final exams, and other large-scale work that many plan to build on. This underscores the importance of fact-checking, whether the information is man-made or AI-generated. Fact-checking is imperative to combat misinformation and to avoid a poor foundation for any project.

At the bottom of every ChatGPT return, you'll see the note, "ChatGPT can make mistakes. Check important info." I'd take that note seriously.

Thaisa Fernandes

building things + podcast + author + vegan

4 months ago

Love this article! It's a great reminder to QA and fact-check everything rather than just trusting AI tools and, honestly, the "internet."
