Disclosure of AI is important. But not always.
While I subscribe to several digital-only newspapers -- The Wall Street Journal, The New York Times, and The Washington Post among them -- I also get my local daily newspaper tossed on my driveway every morning. (Well, mostly it lands on the driveway. Sometimes it's thrown into the gutter; other times, the lawn.) Because I flip through every page of The East Bay Times, I see articles I would never encounter in a digital edition, where I click only on what interests me. That's how I caught a syndicated column called "Work Friend" by Roxane Gay.
"Work Friend" is an advice column. In a recent installment, a reader noted that where he works, "I recently noticed colleagues are sending ChatGPT-generated responses without acknowledging that they were generated by artificial intelligence." The anonymous contributor said that they support using AI to improve efficiency, "but this sort of review has the opposite effect: The AI-generated response does not take into account previous discussions/decisions during the review process and can generate unnecessary busy work." They asked for "the best way to broach this subject without causing a negative reaction."
Gay's answer was mostly on point, but she argued that "moving forward, (employees) need to identify all AI-generated work."
Disclosure is one of the hottest topics in the world of business adoption of AI. In many cases, disclosure is a requirement, whether the AI-generated content is being used internally or externally. But do we really need to disclose ALL AI-generated work?
No. No, we do not.
In many cases, AI is just a tool, like Microsoft Excel or Adobe Premiere. When I use Excel to crunch some numbers, must I disclose it? (Back in 1967, when Texas Instruments developed the first handheld electronic calculator, did any business require disclosure by employees who used one for calculations?) When I use an image from a stock photo service for an article banner, I feel no compulsion to say, "The image at the top of this article was acquired from Dreamstime."
AI Isn't New -- and Its Earlier Uses Have Gone Undisclosed
Keep in mind that AI has been with us for a long time. An AI disclosure does not accompany your Netflix recommendations. You don't hear a disclosure every time Siri answers a question. When you use your face to unlock your phone, nothing informs you that AI was used to accomplish that feat.
It gets more complicated the deeper you dig. Consider that the era of AI chatbots will eventually end, with AI instead integrated into tools. Just look at the speed with which Microsoft is weaving Copilot into every product it has.
Here's an example of when I used ChatGPT as a productivity tool. Let's put it to the disclosure test.
One of the senior executives at my company sent me a document containing hundreds of bullet points collected during a brainstorming session. The document captured five categories of comments. My task was to eliminate redundancies by consolidating comments, putting them in subcategories with logical labels, and rewriting everything so it was consistent. "Synthesize it all," he said.
I was looking at two days of work, easily. Instead, I sliced the document into five separate files, one for each main category. Then I did a search-and-replace, swapping the real company and employee names for made-up placeholders. Once the documents were scrubbed, I pasted them into ChatGPT with the same instructions I had been given. In under a minute, ChatGPT presented me with four or five subcategories of bullet points for each document.
My work did not end there; I didn't simply send ChatGPT's output to the exec and call it a day. (And if I had, I would have disclosed it.) First, I carefully read the output, leading me to rewrite some labels, recategorize some bullets, and make some other adjustments. Then I brought out the original list to compare with ChatGPT's output and found it had left out a few items that needed to be included. I ensured it had not made up anything, which it hadn't, most likely because I had instructed ChatGPT in the prompt to use only the uploaded document and no other sources. Finally, I replaced the bogus company and employee names I had used with the legit ones.
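I did that scrubbing and restoring by hand, but the round trip is simple enough to script. Here's a minimal sketch in Python; the names and the mapping are hypothetical stand-ins, not anything from the actual document:

```python
# Minimal sketch: swap real names for placeholders before pasting text
# into an AI tool, then swap them back in the reviewed output.
# All names here are hypothetical stand-ins.
REPLACEMENTS = {
    "Acme Corporation": "Company A",  # real name -> placeholder
    "Jane Smith": "Employee 1",
}

def scrub(text: str) -> str:
    """Replace real company/employee names with placeholders."""
    for real, placeholder in REPLACEMENTS.items():
        text = text.replace(real, placeholder)
    return text

def restore(text: str) -> str:
    """Swap placeholders back to the real names after review."""
    for real, placeholder in REPLACEMENTS.items():
        text = text.replace(placeholder, real)
    return text

original = "Jane Smith of Acme Corporation suggested consolidating vendors."
scrubbed = scrub(original)   # safe to paste into a chatbot
print(scrubbed)              # "Employee 1 of Company A suggested ..."
print(restore(scrubbed))     # round-trips back to the original
```

One caveat if you try this: pick placeholders that can't occur naturally in the text, or the restore step will overwrite the wrong words.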
By the time I was done, I had spent about four hours on the task, saving about 12, according to my SWAG (scientific wild-ass guess).
The executive was greatly pleased with the result.
I did not mention that I had used AI to assist with the project, any more than I would have mentioned that I had used Microsoft Word to produce the final draft.
Disclose When the Use of AI Matters
If we can agree that it is unnecessary to disclose every use of AI, what uses of AI should be disclosed? That's easy: Any use that could create confusion or misunderstanding, along with any time lives are on the line (such as using AI to make public safety and healthcare decisions). Businesses that use AI to make hiring or promotion decisions should disclose it. Schools should disclose when they use it for admissions and grading. Any image that could be mistaken for the real thing should include disclosure (like the image of Donald Trump surrounded by Black supporters -- which, of course, his campaign did not disclose, but there's plenty of material out there already about using AI for deliberate disinformation).
However, an image that is clearly an artistic representation of something does not require disclosure. What difference does it make whether it was created by a graphic artist or Midjourney? (And I never disclosed that art accompanying an article was created by a graphic designer.) Applications for operational efficiency don't always need to be disclosed (though in the case that anonymous employee presented to Gay, if you're going to distribute unedited AI outputs, that should be disclosed). If the music bed I use in a video was generated by an AI tool rather than YouTube's royalty-free music library, that doesn't need to be disclosed. It doesn't matter (unless you're gratified by people saying, "AI did that? Really? Wow, that's cool!"). Just imagine an article that includes the following coda: "Anthropic's Claude generated the subheads in this article."
As communicators, we are capable of making judgment calls about when to disclose and when to avoid unnecessary extra documentation. Companies should establish guidelines for everyone to follow, ensuring consistency in the disclosure of the most important uses of AI.
But the headline in The East Bay Times, "All AI-generated work must be identified as such," is nonsense.
By the way, I typed this post using my Qwerkywriter keyboard. You really needed to know that, didn't you?