The AI Content Crisis Nobody's Addressing and the Simple Solution to Fix It
Santiago Restrepo
I help smart, busy people harness AI to work smarter, think bigger, and get more done | Helping People Work Smarter with AI | No-Code Solutions | AI Productivity Consultant | AI Sherpa
In a world where AI-generated content is becoming ubiquitous, we're facing a crisis of credibility and trust. Every day, I see comments questioning whether content is AI-generated or not, watch heated debates about authenticity, and notice a growing skepticism in digital spaces. As someone who regularly creates content with AI assistance, I believe we've reached a critical point where we would greatly benefit from addressing this head-on.
The Trust Problem
AI is transforming content creation at an unprecedented pace. Emad Mostaque, CEO of Stability AI, made several bold predictions about the future of AI and its impact on creative content generation during his April 2023 appearance on Joe Rogan's podcast.
Mostaque predicted that by the end of 2024 (right about now), more than 90% of online content could be AI-generated. I’m not sure we are there yet, but it doesn't feel that far off. He also suggested that the volume of AI-generated content would soon surpass the total amount of human-created content throughout history.
In my opinion, this isn't inherently problematic. The issue lies in the ambiguity surrounding creation methods and authorship. When people can't clearly distinguish between human- and AI-generated content, it breeds mistrust and skepticism.
I see this mistrust manifesting in comment sections, social media interactions, and even in my own reactions to content. That momentary pause, that "hmm, this seems AI-generated" thought, is becoming increasingly common. Using AI for content creation isn't the real issue; the lack of transparency about its use is.
Your AI and My AI - A New Conversational Paradigm
We're entering a strange meta-environment where AIs interact with other AIs in spaces meant for human connection. I've observed this particularly in AI communities and LinkedIn interactions: AI-generated content receives AI-generated responses, while humans merely copy-paste and spectate. This raises an uncomfortable question: what's the point of having AIs hold internal conversations while humans pretend to contribute?
A Solution (Maybe) - The AI Influence Level (AIL)
This is where Daniel Miessler's clever AI Influence Level (AIL) system comes in. Introduced in May 2023, AIL provides a straightforward framework for disclosing AI involvement in creative work. The system uses a 0-5 scale, from 0 (purely human-generated) to 5 (completely AI-generated).
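As a rough sketch of how a platform or author might apply such labels programmatically, here is a minimal Python example. The level-3 wording is copied from this article's own footer disclosure, and the 0 and 5 endpoints follow the article's "purely human-generated" / "completely AI-generated" phrasing; the `disclose` helper and the fallback text for intermediate levels are my own illustrative assumptions, not Miessler's official definitions.

```python
# Minimal sketch of attaching AIL disclosure labels to content.
# Level 3's wording comes from this article's own disclosure; levels 0 and 5
# follow the article's endpoints. The helper and fallback text are
# illustrative assumptions, not Daniel Miessler's official scale text.

AIL_LABELS = {
    0: "Purely human-generated",
    3: "Created using AI with extensive human structure and guidance",
    5: "Completely AI-generated",
}

def disclose(level: int) -> str:
    """Return a disclosure string like 'AIL 3 - ...' for a 0-5 level."""
    if level not in range(6):
        raise ValueError("AIL is a 0-5 scale")
    label = AIL_LABELS.get(level, "Intermediate AI involvement")
    return f"AIL {level} - {label}"

print(disclose(3))
# AIL 3 - Created using AI with extensive human structure and guidance
```

A platform adopting AIL as a standard could attach a string like this to each post as a badge, much as content licenses are displayed today.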
Why AIL Matters
The beauty of this system lies in its simplicity and clarity. As someone who regularly creates content with AI assistance (typically at AIL 3), I've started implementing these labels in my work. Here's why it matters.
Personal Perspective
I wouldn't be able to produce the amount of content I do now without AI assistance. My time, capacity, and interest don't align with traditional content creation methods. But I'm not pretending to be something I'm not. I'm interested in communicating ideas effectively, and AI helps me do that.
There's no shame in leveraging technology to communicate more effectively.
Moving Forward
As we enter this new era of content creation, frameworks like AIL become increasingly significant. They help maintain authenticity in digital spaces and prevent the erosion of trust that comes with ambiguity about AI involvement.
I'm implementing AIL ratings across my content (currently on new content, with past content to follow), and I encourage others to do the same. Let's create an environment where we can be proud of how we create, whether it's purely human-generated, AI-assisted, or completely AI-generated.
Because ultimately, it's not about whether we use AI; it's about being honest about how we use it.
A Vision for Implementation
The AIL system has great potential for practical implementation across platforms. Imagine social media platforms, forums, and content sites incorporating AIL as a standard.
The Forum Problem
I’m witnessing firsthand how AI is transforming online communities, and not always for the better. Maybe this is something you can relate to: Someone takes the time to write a thoughtful post seeking help in an AI forum. Within minutes, others copy that question into ChatGPT, paste the response, and farm engagement. While clever, this practice deteriorates the quality of discourse in our spaces.
Look, I respect the ingenuity behind gaming these systems and algorithms. But when AI-generated responses (AIL 5) flood our forums, we lose something truly valuable: genuine human interaction and authentic knowledge sharing.
It's All About Choice
By implementing AIL ratings, we're not just being transparent; we're empowering choice. Users should have the right to decide what level of AI involvement they're comfortable engaging with. Yes, disclosing that my content is AIL 3 might mean some people choose not to read it. But that's a price I'm willing to pay for integrity.
A Call to Action
This isn't just about individual articles or posts; it's about preserving valuable online spaces for everyone. Start by disclosing your own level of AI involvement, and ask the creators and platforms you engage with to do the same.
The Future We Choose
We're at a crossroads. We can continue down the path of ambiguity, where AI-generated content silently floods our spaces, or we can embrace transparency and choice. The AIL system offers a voluntary framework that respects both creators and consumers.
Remember, this isn't about restricting AI use; it's about creating an environment where everyone can make informed choices about their online interactions. Whether you're a content creator, platform developer, or everyday user, you have a role to play in shaping this future.
Let's build a digital world where transparency isn't just appreciated; it's expected.
This article: AIL 3 - Created using AI with extensive human structure and guidance
Behind the Scenes - How This Article Came to Be
Since I'm advocating for transparency about AI usage, let me pull back the curtain on how I create content like this article (AIL 3).
My ideas usually come at random moments, while reading, watching something interesting, or just walking around thinking. Before AI, most of these thoughts would evaporate or maybe surface in conversations with friends, but rarely make it to paper. The friction of writing coherently and editing would usually kill my momentum.
Now my process looks something like this: When an idea strikes, I grab my phone and record a voice note, letting my thoughts flow naturally. I then dig deeper, using tools like Perplexity to research supporting arguments or counterpoints. All of this - my voice notes, research, and additional context - goes into a specialized Claude 3.5 Sonnet project I've created.
I've developed a writing style guide based on my most successful articles and social media posts. This guide helps the AI understand my voice and tone. The AI generates a first draft, which I review and either edit directly or record additional voice notes for major revisions. We go back and forth until the piece feels right, with the final version always getting a final human polish from me.
Each iteration teaches the system more about my style and preferences, making future drafts more refined. This workflow lets me focus on what matters, the ideas and insights, while reducing the friction that used to keep those thoughts locked in my head.