3 Reasons I Don’t Cite ChatGPT (+ 2 Things I Do Instead)
Rachel Riggs
Digital Education Leader | Applying practical, scalable solutions to drive equity in education | EdTech, AI, Language Learning, Open Education, Digital Skills
ChatGPT and similar tools like Bard, Anthropic’s Claude, and Bing Chat utilize a technology known as generative AI (GAI) to quickly produce coherent language in various styles, formats, and tones. GAI technology leverages large language models (LLMs) trained on huge datasets to generate human-like text, making it useful for content creation, communication, and creative writing tasks. GAI-powered chatbots can produce comprehensive, cohesive essays, catchy social media blurbs, or detailed outlines in seconds. Due to this automation of writing, there are questions about whether using ChatGPT-generated content should be considered “plagiarism.” Many have asked me (and I’ve asked myself), “How do I cite chatbots?”
When I first started using ChatGPT, I did cite it, or cleverly hinted that it had been used, or even, in a really obvious way, included a screenshot of ChatGPT's responses. Over time, though, I’ve given up the practice. Here’s why:
1. Sustainability - Should I give Google Docs credit, too?
Because the underlying technology that powers these chatbots is being used to build features into many existing tools, citing their use is increasingly murky. Google Labs is already testing GAI in Google Workspace. Grammarly has adopted GAI in GrammarlyGO. Hemingway Editor is using the technology, too. If I’ve written a report in Google Docs, checked my writing with Grammarly, and adjusted the reading level with Hemingway Editor, my citation would say, “This report was drafted using Google Docs, Grammarly, and Hemingway Editor.” What does that tell a reader who isn't following the latest announcements by these tech companies? Even though all of these leverage the same or similar technology as ChatGPT, the growing integration into existing programs will render citing GAI-powered tools senseless. There will likely come a point where GAI is so seamlessly integrated into our workflows and tech stacks that it will be hard for even us to know when we are using it. We must either acknowledge the fleeting value of citing ChatGPT today or prepare for a future in which we list every writing tool that was leveraged to produce a written work. (As a techie, I'm not against it. As a reader, I know I'd skip over it.)
2. Human-Centeredness - What is ChatGPT without the humans?
The LLMs that power tools like ChatGPT are built on datasets that include many other humans' creative and academic works. Furthermore, many layers of human intervention, fine-tuning, coding, testing, and more go into their development. At the end of the day, the names of the tools - ChatGPT, Bard, Claude - reference a commercial writing tool, not the humans who contributed to them (unlike a citation credited to “Wikipedia contributors”). The credit is best reserved for humans. What’s more, what I know for sure is that most educators are using ChatGPT as a starting point. Giving it credit as though it's a single source of information overstates its role in the writing process. ChatGPT is known to produce false information and, in most cases, doesn’t offer accurate citations. Citing ChatGPT as a source of information supports a false narrative that chatbots provide information when, in fact, they provide natural-sounding language. ChatGPT is good at producing the skeleton of a written work. I still have to spend significant time bringing that skeleton to life by fact-checking, adjusting the tone, and adding my uniquely human perspective and experience. Celebrating humans for written works is a more human-centered approach to utilizing artificial intelligence than citing the commercial tools that help us synthesize, organize, and communicate our thoughts and work.
3. Law and Precedent - Go to the source. (Hint: It's not ChatGPT.)
Trusted sources like Creative Commons advocate for AI-generated output to reside in the public domain. Even OpenAI’s Terms of Use state, “Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title, and interest in and to Output.” That means you have full ownership of the content you generate using ChatGPT (the “Output”) as long as you abide by their rules (the “Terms”), which don't require citing the tool in your writing. While APA does provide citation guidance, it also critiques the accuracy of the information and sources ChatGPT generates and offers this alternative to using ChatGPT as a reference, stating:
It may be better to read those original sources [from which ChatGPT is drawing information] to learn from that research and paraphrase or quote from those articles, as applicable, than to use the model’s interpretation of them.
Furthermore, Springer Nature, a well-known academic publisher, rejects attributing authorship to ChatGPT and recommends citing its use in a methods or acknowledgments section of academic papers. In summary: we own the content ChatGPT generates, that content shouldn’t be treated as a source of information, and ChatGPT cannot be an author. I propose using ChatGPT as a time-saving tool to brainstorm and synthesize information in the writing process. Then, we must seek out primary sources of knowledge and creativity and reallocate any time saved with ChatGPT to ensuring that citations of human-generated work are high quality and accurate.
Ideally, technologists and policymakers will work together to continue to make progress toward properly attributing AI-generated output (like Bard and Bing Chat do!). Furthermore, I’ll be eager to see how watermarking and other mechanisms contribute to AI transparency. In the meantime, as technology and oversight develop, there are two ethical and impactful practices we can utilize as individual users of GAI-powered chatbots.
The 2 Things I Do Instead
Two common themes across policies and guidelines that address AI are “human-centeredness” and “transparency.” Putting these concepts into practice with GAI is a sustainable answer to the ethical question of attributing our work to AI. The two strategies below are practical ways to take a human-centered approach and be transparent about using AI, so try them the next time you use ChatGPT.
Avoid copy/pasting - Don't let ChatGPT take the wheel.
ChatGPT has certain limitations when compared to human writers. Unlike me (and you), it doesn’t have in-depth and specialized expertise. It can talk about elements of my work like edtech, digital equity, English language learning, adult literacy, etc. Still, it doesn’t produce any unique insights that sit at the intersection of those elements in the same way that I can. It has no lived experience, so it has no clue where to inject empathy or creatively weave in a realistic scenario. It’s also known to produce false information and biased perspectives. Because of all these limitations, I avoid letting AI-generated output represent me. I don’t copy and paste a list of ideas straight from ChatGPT and send them to a colleague. I don’t copy and paste a fully written blog and publish it. I analyze every output and edit it, checking it for accuracy and injecting my unique insights and creativity.
Talk about it - Keeping secrets is stressful, but sharing knowledge is fun!
There are additional ways to be transparent about your use of ChatGPT. A notice like “This was written using ChatGPT” tells part of the story, whereas an in-depth dialogue about how and when to use GAI-powered tools could have more of an impact. My colleagues (from coworkers to social media connections) know I use ChatGPT because I’ve been vocal about leveraging it for my work. Talking about it has opened up opportunities to exchange strategies, answer questions, and learn from how others are using these tools.
A truly human-centered approach is one in which we cite human works, unleash our uniquely human perspective, and share innovative strategies and tools with other humans. They are likely eager to learn more about this transformative technology, whether it’s for their understanding, to develop guardrails, or to share collective knowledge and insights.
Note: This blog assumes an audience of education and workforce development practitioners. For more extensive guidance on when to cite ChatGPT and how to cite ChatGPT in different styles, I encourage you to check out this in-depth guide.