How Writers Can Win the AI Content Wars — Before It’s Too Late
Joe Lazer (Lazauskas)
Best-Selling Author of The Storytelling Edge | Fractional Marketing Exec | Keynote Speaker | Storytelling Workshops & Trainings
Programming Update: I'm now publishing this newsletter on Substack weekly and on LinkedIn less often. To get proven storytelling and audience-building strategies in your inbox each week, Subscribe here for free!
I’ve been thinking a lot about this new Fiverr ad, a bizarre piece of corporate wish fulfillment with the refrain, “Nobody cares if you use AI.”
On the surface, it’s just making fun of people on LinkedIn who post incessantly about their ChatGPT hacks. But just below the surface, it has a deeply nihilistic message: You shouldn’t care if content is made by AI. And if you do, you’re nobody. Your opinion doesn’t matter.
It’s just one ad, but inside the tech world, the sentiment is everywhere. And for those of us who care about the value of human creativity, it’s a dangerous message we need to guard against.
Do people care if you use AI?
Do people care if you use AI to create content? I spend a lot of time researching and writing about AI and the future of work, and I can tell you that, yes, they certainly do:
I could go on. Just 10% of people say they’re more excited than concerned about AI, while most are more concerned than excited. This data surprises many people I know! There’s a massive gap in AI sentiment between tech/ad industry folks and the average consumer. For instance, a Yahoo and Publicis survey found that while 77% of advertisers view AI favorably, just 38% of consumers say the same.
But Fiverr — a platform founded on the ethos that creative work is worth less than a Crunchwrap Supreme — would like it if nobody cared. So would the executives Fiverr is targeting with this ad. After all, if your audience doesn’t care whether content is created by AI, that means you can fire 90% of your content team and buy bulk packages of cheap, ChatGPT-penned SEO and LinkedIn slop from Fiverr instead.
Do more with less, amirite???
Fiverr’s ad is a form of corporate wish fulfillment; it tells Fiverr’s target audience (marketing execs, CEOs) what they want to hear while also swaying a skeptical public towards AI apathy.
"But Fiverr — a platform founded on the ethos that creative work is worth less than a Crunchwrap Supreme — would like it if nobody cared."
For creatives, this is how the AI content wars will be lost — not with a bang but a hand wave. The biggest risk to writers isn’t AI itself — it’s that people will stop valuing human craft, creativity, and artistic choices.
The Fiverr ad is just the beginning. In the coming years, the corporate world is going to spend a lot of money convincing people they shouldn’t care if content is made by AI. It’s time we fought back.
How to win the AI content wars
I’m not dogmatically anti-AI. I’ve spent the past three years working with a future-of-work-focused AI startup. I incorporate AI into my creative process. I’ll talk out the issues I’m having with my book with Claude over a late-night drink.
I’ve also been exposed to how many tech leaders and corporations talk about generative AI when reporters aren’t present. While they may pay lip service to the value of human creativity, the truth is that it’s in their best financial interest if the general public becomes apathetic about how AI is used.
This is how you get OpenAI’s (now ex) CTO saying, “Some creative jobs will be replaced, but maybe they shouldn’t have been there in the first place,” and then being forced to backtrack.
"For creatives, this is how the AI content wars will be lost — not with a bang but a hand wave."
We’re in a precarious moment. Two years post-ChatGPT, we haven’t established any cultural or ethical norms about using generative AI in content creation. There’s no clear consensus on right and wrong, and without one, our corporate overlords will take advantage of the ambiguity to devalue creative work.
It’s time we fixed that. And by we, I mean you and me. Not the platforms.
Almost every day, I see another “Note” on Substack pleading with the platform to update its terms of service to ban the use of generative AI without disclosure. But if we wait for the platforms to create new AI norms, we’ll be waiting forever. We need to make them ourselves.
Ethical norms aren’t bestowed from the heavens. They’re fluid. Plagiarism, for instance, wasn’t a serious infraction until the 18th century — which is why both Shakespeare and Leonardo da Vinci “borrowed” way more from other artists than we’d be comfortable with today. The Renaissance brought an emphasis on individualism and individual genius. Writers and academics wanted to protect their ideas and the profits that came from them. So, they exerted their influence by making copying without attribution a taboo practice.
So, what should our new norms be about generative AI? I don’t have all the answers, but I have one suggestion to get us started.
A new norm that’ll help us start to win the AI content wars
Generative AI is complicated because it doesn’t clearly violate an existing norm. Frontier models like ChatGPT and Claude rarely plagiarize outright because they don’t store or reproduce creative works wholesale. Instead, they use them as training data that helps them predict the next word in a string of text. (Stephen Wolfram has a must-read primer on the inner workings of generative AI.)
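If you’re curious what “predicting the next word” actually looks like, here’s a deliberately tiny sketch in Python. It’s just a word-pair counter, nothing remotely like the neural networks inside ChatGPT or Claude, but it captures the same basic objective: given the words so far, guess what comes next.

```python
# A toy illustration of "predict the next word." This is NOT how frontier
# models work internally; they use neural networks trained on vastly more
# text. But the core objective is the same: given the words so far, guess
# the most likely continuation.
from collections import Counter, defaultdict

# Stand-in "training data" (any text works here).
training_text = (
    "stories build trust and stories build audiences "
    "and trust builds audiences over time"
)

# "Training": count which word follows which, and how often.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("stories"))  # -> "build" (the word seen most often after "stories")
print(predict_next("over"))     # -> "time"
```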
Sure, I find it incredibly irritating that ChatGPT has been trained on my last book — after all, it was copyrighted, no one asked me for permission, and I didn’t see a single cent of compensation! But it’s a legal gray area that may fall under Fair Use. ChatGPT can impersonate my writing, and while that impersonation is QUITE insulting — like looking at yourself in a funhouse mirror — it’s unlikely to directly plagiarize my work.
"There’s no clear consensus of right and wrong, and without one, our corporate overlords will take advantage of the ambiguity to devalue creative work."
The best comparison to using GenAI might be ghostwriting, which is also mired in ethical murkiness, since many vain CEOs don’t disclose when they use a ghostwriter on a book. For some reason, that’s considered acceptable. But no self-respecting writer would ever use someone else’s words without disclosure or citing their sources, so I’m proposing one simple, new norm:
New Norm: Disclose how you used Generative AI in every post.
Maybe you used GenAI as part of your research process, as I did with this post. Maybe you asked for feedback after you finished a first draft. Maybe you rewrote a paragraph with Claude’s help. Maybe ChatGPT wrote an entire section, and then you edited it. I’m not calling for us to judge one another — I’m just asking for transparency. A little section at the bottom of each post that discloses how you used it. I’m calling on both independent writers and content teams at brands to pick up this practice.
If we start a grassroots movement of voluntary disclosure, I believe we’ll accomplish a few things:
1) We’ll learn from each other.
Contrary to what Fiverr thinks, I’m very interested in how other writers are using AI, but it often feels like we’re terrified of talking about it.
I’ll start: I never let AI write for me, but I use Perplexity and Claude for research (while being sure to fact-check for hallucinations). I’ll often talk through ideas with Claude like I would with an editor. I use Opus for social video editing. Occasionally, I’ll use Claude to brainstorm a pop culture reference or sharpen a metaphor.
Do any of these practices violate the trust or expectations of my readers? I don’t think so, but I’d like to know if they do! That’s why I’m advocating for the disclosure of GenAI usage, even if you just use it for research or feedback and don’t use any AI-generated text. Let’s learn from one another and figure out where the line should be.
2) We’ll build trust with readers.
Psychologically, self-disclosure is one of the most effective ways to build trust with others, and that seems particularly true with AI; in one study, disclosing the use of AI in ads boosted trust in the brand by 96%. There’s power in transparency.
Want your content to stand out? Want to gain a leg up on your competition — whether that’s a competing company or another newsletter in your space? This is a way to do that!
3) We’ll create a new norm that will spawn others.
If disclosing your AI usage becomes the new normal among writers, we won’t need platform policy changes. (Although that’d be nice.) It’ll just become the expectation, like citing sources. I also think that this transparency will help us reach a consensus on other norms around generative AI as we start a conversation about where we should draw the line.
4) We’ll distinguish human-generated content and increase its value.
This is the big one — the reason I believe voluntary disclosure can be the first step for writers in winning the AI content wars. The internet sucks right now, and that means we have an opportunity to build something new and better.
Social media is in decay. Facebook has now been overrun by AI-generated shrimp Jesus posts that millions of people are worshipping — a true Kurt Vonnegut dystopia come to life. X has become a cesspool of white nationalist eels. Mark Zuckerberg has plans to flood Instagram with creepy AI-generated content. TikTok is openly trying to become QVC with its Shop feature as influencers shamelessly hawk crappy products for a taste of late-capitalist glory. LinkedIn cuts to the chase and actively begs users to post AI-generated broems.
I believe we’re about to see the internet split in two. On one side, the AI-infested legacy social platforms of today. On the other side, a renaissance of niche, small, personality-driven media brands that people flock to because they are distinctly human — run by real people readers can trust.
Enjoying this post? Subscribe to my Substack newsletter to support my work and get proven storytelling and audience-building strategies in your inbox each week.
Platforms like Substack and Beehiiv are providing the infrastructure for these new media brands, and one of the best ways to signal to readers that we are human is to disclose how we use generative AI (if at all). After all, the AI slop farms on the dark side of the internet aren’t going to do the same. Again, this doesn’t mean you’re not allowed to use generative AI. It just means you’re transparent with your audience about how you used it.
All it takes is a little disclosure. Whether you're an independent writer or running a content team at a brand, you have a lot of trust to gain and little to lose by adopting this practice.
At the risk of sounding like a thirsty YouTuber, I’d love to hear what you think in the comments. Do you use AI in your creative process? Where do you draw the line? What other new norms can we set? Do you think my proposal is completely idiotic? It’s okay if you do! We don’t need to agree — we just need to stand up and show that we care.
How I used GenAI while writing this newsletter
Recommended Reads
The State of Culture, 2024 (Ted Gioia): You may have read this already, but I just did, and damn — it nails our dopamine-addled world.
What is ChatGPT doing and why does it work? (Stephen Wolfram): Even if you hate GenAI, know thy enemy. This overview is simply excellent.
You’ll Grow Out of It (Jessi Klein): The most underrated book of humor essays of all time and the perfect holiday gift for anyone who can read.
I’m the best-selling author of The Storytelling Edge and a storytelling nerd. Subscribe here to get storytelling and audience-building strategies in your inbox each week.