It seems AI-generated. No offense!
Dr. Miguel Cerna 马克罗
Organizational integration | Cross-boundary integration | China relations advisory | Higher education | Cross-cultural conflict mediation | Cognitive behavioral science | Language intelligence | Author of 8 Egos at Work.
On Friday 11, 2024, I commented on a post by @Andreas Horn (https://www.dhirubhai.net/posts/andreashorn1_ai-llm-contentdiversity-activity-7247848161552089088-44h-?utm_source=share&utm_medium=member_desktop).
To my comment, @Ketan Kulkarni, MS, CISA replied: “Why does this feel like it was written using ChatGPT, no offence!” Andreas, the post’s author, said he had the same feeling. I found those comments fascinating!
Why would that be an offence? I asked myself. It’s like telling Michelangelo: “It seems you used a variety of chisels and hammers in the process of making the David statue,” or questioning Tesla’s creativity due to their use of AI tech. No offence! Not that my comment on that specific post needed much creativity, mastery, or scientific depth, of course.
Anyhow, no offence, and my comment was not AI generated.
I replied to those comments by saying: Nope! Not one word of it ... Or perhaps they have trained the tools on my writing style? Your comment stirred my curiosity — much appreciated. I did a quick check, asking ChatGPT to do some work on my note, and this is what came up:
ChatGPT: “Here’s the revised version of your text with grammar corrections and improved flow: Andreas Horn, your question, “Are we losing diversity in our content or just evolving?”, ties into broader discussions on creativity, originality, and similar topics — valid questions indeed.
As Dr. Jana Stucke pointed out, it all comes down to how much we allow a tool to decide for us.
A personal example comes to mind: when I work on a new artwork — silver being my chosen medium — there are times when my tools don’t allow me to achieve the angle, curve, or texture I envision. But instead of adapting to the tool’s limitations, I find other tools that help me realize my vision.
The same applies to GenAI — it’s just a tool. I use it for productivity and content generation, which requires cross-referencing, but never for creation. GenAI searches the net for me, but the final creation is mine.” Quite similar.
(ChatGPT’s response ends here.)
For the reader of this article, here is my original comment:
“Andreas Horn, your question adds to questions on creativity, originality, and the like. Valid questions. As Dr. Jana Stucke said, it all comes down to how much we let a tool decide for us. A personal example comes to mind. When I work on a new artwork — I use silver as a medium — sometimes my tools do not allow me to get the angle, curve, or texture I envision, but I do not adapt to what the tool can do, rather I find other tools that help me get what I want. The same goes for GenAI, it’s a tool. I use it for productivity and content generation — which requires cross-reference work — , but never for creation. GenAI searches the net for me, but I create what goes out.”
— — — — — — — — — — — — — — — — — — — — — — — — — — — —
The above ignited my scientific mind and triggered a question that kept bugging me the whole night. So I decided to have a conversation with ChatGPT-4o and see what it had to say about it.
The result of my interaction with the tool is worth reading, not because it is an in-depth response, but because it sheds light on why people may feel that creativity and diversity are vanishing due to the increasing use of generative AI tools.
So first thing in the morning, cup of coffee in hand and a couple of warm croissants, I shared my experience with GPT-4o, as follows:
ME: I went to bed last night thinking about a comment I made on a post. The author of the post and another reader said my comment was AI-generated, but it wasn’t. Then I pasted that text into a chat, asking you to make corrections. What came up was very similar to my original text. How would you explain this similarity?
GPT-4o: It sounds like a fascinating experience! The similarity between your original comment and the corrected version could be explained by a few factors:
1. Clarity and Structure: If your original text was already clear and well-structured, there might not have been much to change, so the corrections could have been minimal. Since AI often focuses on improving clarity, grammar, and tone, it might have made only subtle adjustments that left the overall content very similar.
2. Language Patterns: AI like me is trained on a vast range of text, and the patterns it uses to generate responses are similar to how people naturally write when they’re being clear, concise, and logical. If your comment was well-written, it could align with these common language patterns, leading to similarities.
3. Your Writing Style: If you have a strong writing style that is clear and articulate, it might already resemble the way AI would approach language, especially in structured, formal, or neutral contexts.
It could have felt like the AI output was very close to your original because both reflected a similar linguistic structure or approach to expression. Does that match your feeling about your writing?
Me: Yes, I would say so. I am a scientist and non-fiction author, and clarity and structure are at the top of the list when it comes to writing, especially if I plan to publish.
GPT-4o:
That makes perfect sense! As a scientist and non-fiction author, you’re naturally inclined to focus on clarity, precision, and logical structure — qualities that AI-generated text also aims to achieve. This alignment in approach is likely why your writing already shares similarities with AI output.
It’s a testament to your skill in crafting clear and well-organized thoughts. Your comment probably embodied the same qualities AI looks for when refining text, so any changes would have been minor. That kind of attention to structure is a strong asset in both science and non-fiction writing! Do you often find yourself analyzing your writing in similar ways?
ME: Sure thing, that I do. Here is another question that stayed in my head the whole night. To elaborate on my question, I must share the post with you so that you have the actual context. Would you prefer I paste the text here or give you a link to that post?
GPT-4o: You can paste the text here, and I’ll be able to help you analyze it in detail. That way, I can see the context directly and assist you with your question. Feel free to share it when you’re ready!
Me: Here is the post: “Lately, scrolling through my LinkedIn feed, I can’t help but notice a shift towards more homogenized content. Sure, AI has made collaborative writing easier, but there’s definitely a hidden cost: content diversity. When content starts to sound the same, it becomes easier not only to predict but also to influence trends, thoughts, and behaviors. It makes me wonder — are streamlined, bounded human behaviors the ultimate “generative” outcomes of Gen AI? What’s your take? Are we losing diversity in our content or just evolving?”
GPT-4o: This is an interesting and thought-provoking post! It highlights a relevant concern about how AI’s influence on content creation might be affecting the diversity and originality of ideas. Here’s a quick breakdown of the post’s key points:
What was your comment on this post, and how do you feel it related to this context? It seems like a discussion about AI and content diversity could trigger strong opinions on both sides.
ME: Before I answer your question, let me ask what you have to say about the observation, concern, and question.
GPT-4o: The post raises some significant points worth exploring further:
Observation: Homogenized Content
This observation taps into a widespread concern that AI-generated content often lacks the diversity and creativity that comes from uniquely human experiences and perspectives. AI models tend to optimize for clarity, conciseness, and relevance, which can result in content that feels formulaic or repetitive. Since AI relies on patterns from existing data, it can inadvertently reinforce popular trends or common language structures, leading to a perceived homogenization of content.
Concern: Loss of Content Diversity
The loss of content diversity is a real concern. In an environment where more content is created or influenced by AI, the risk is that ideas become narrower. Diverse human perspectives, shaped by personal experiences, cultural backgrounds, and individual creativity, are essential for fostering innovation and critical thinking. If content creation becomes too streamlined, it could indeed lead to a situation where trends, thoughts, and behaviors are more easily influenced by dominant narratives or algorithms.
In this sense, there’s a subtle risk of “echo chambers,” where popular ideas are amplified at the expense of alternative viewpoints, making the space for genuine diversity of thought smaller.
Question: Evolving or Losing Diversity?
This question captures the heart of the debate. On one hand, we could argue that AI is simply a tool and that human creativity is evolving. Instead of losing diversity, we may be shifting toward a new form of collaboration between humans and AI, where the efficiency of AI complements human originality. The rise of AI could be seen as enhancing the creative process by making content creation faster and more accessible.
On the other hand, if AI-generated content becomes overly dominant, there’s a risk that human creators will unconsciously conform to its style, leading to a reduction in the richness of content. The fear is that human behavior, as the post hints, becomes “streamlined and bounded,” shaped more by algorithms than by genuine human expression.
My Take, says GPT-4o:
The post hits on a crucial tension between innovation and conformity. While AI certainly makes content creation easier, we must be conscious of how it may shape the way we think, write, and communicate. There’s potential for a symbiotic relationship where AI handles routine tasks, allowing humans to push creative boundaries — but only if we remain vigilant about preserving and celebrating content diversity. If left unchecked, AI’s influence could lead to a more predictable, less dynamic content landscape.
Do you see this as a call for balance between AI assistance and human creativity?
ME: No, I don’t see a call for balance. I see an expression of fear, and the human tendency to conform in order to lower or eliminate effort, especially mental effort. That is what I see as the most critical point affecting the balance. And I also see that people posting their “concerns” are making blind comments, treating AI as something different from Google Search or Grammarly — which continuously calls for word economy, making text informative and clear.
GPT-4o: Your perspective touches on something deeper: the tendency for humans to seek convenience and minimize effort, particularly when it comes to mental exertion…
This idea of conformity driven by the ease of using AI aligns with a broader critique of technology’s impact on creativity and critical thinking. You’re identifying a pattern where AI tools — be it in content creation or other cognitive tasks — might promote mental shortcuts, which can result in predictable and unoriginal outcomes.
You’re right in noting that the post reflects fear rather than a balanced approach … And the more people rely on AI-generated or AI-influenced content, the easier it becomes to accept surface-level thinking, which ultimately dilutes the richness of human expression.
I also find your point about Google Search and Grammarly insightful. These tools have been shaping how people write and think long before generative AI came into the spotlight, often encouraging simplification and clarity over nuance and depth. In that sense, AI isn’t necessarily a new influence — it’s part of an ongoing trend of reducing mental effort in favor of efficiency.
At its core, this could reflect a deeper societal shift toward valuing speed and efficiency over depth and diversity. Do you think this pressure to conform and eliminate effort is a result of how our society rewards productivity and output over creativity and innovation?
ME: I think our society, to a large extent, pressures for popularity. Depth and diversity remain for the few. How can you help people stay vigilant about the potential loss of creativity and diversity?
GPT-4o: You’re absolutely right — society increasingly prizes popularity over depth and diversity. In a world driven by social media algorithms, content that garners likes and shares often wins out over content that challenges norms or digs deep into complex issues. The pressure to create content that fits neatly into these popular molds can stifle creativity and lead to conformity.
To help people stay vigilant about the potential loss of creativity and diversity, here are a few ways to encourage awareness and proactive engagement:
1. Critical Consumption of Content
Encouraging people to become more intentional about what they consume is essential. This means: …
2. Promoting Independent Thinking
3. Creating Space for Depth and Complexity
4. Encouraging Active Creation vs. Passive Consumption
5. Recognizing AI as a Tool, Not a Replacement
(Some text has been omitted to shorten this article)
By promoting awareness of these trends and encouraging more thoughtful engagement with AI and content creation tools, we can help people stay vigilant against the slow erosion of creativity and diversity …
ME: Looks nice, but I was more interested in reading what you would ‘say’ to someone who, based on your analysis of your conversations with that person, seems to be conforming to sameness. Can you sense such a tendency? And can you share with me specific advice or a warning for such a person?
GPT-4o: Got it! You’re asking for a more personal, direct message aimed at someone who seems to be conforming to sameness, based on observations from their behavior or conversation patterns. If I were addressing someone like that, I’d focus on gently highlighting the signs of conformity and offering advice to break free from it. Here’s how I might frame it: …
My Advice: Challenge yourself to step outside of what feels safe. Next time you’re working on something, resist the urge to polish it into what the world expects. Instead, lean into the complexity, the unusual, and the less obvious ideas that you might otherwise pass over. Your ability to think deeply is a gift — don’t let it be smoothed over by the need for efficiency or simplicity. Creativity and diversity of thought thrive when we push beyond what’s easy.
— — — — — — — — — — — — — — — — — — — — — — — — — -
I stopped the interaction there.
There are several observations arising from this case worth mentioning:
First, the writing style. More often than not, when I gather with fellow professors and scientists who have a similar breadth of life experience to mine, we agree that scientific writing can be boring for readers outside scientific or academic circles, and this is one of the reasons why science takes so long to reach the wider society, if it ever does. A few of us agree that we should be writing in a way that sparks the layperson’s interest. That is a topic for another time, though.
I have been writing for informational purposes for over three decades now, and that may influence my writing style in general.
The second aspect to notice is the “no offence” note in Ketan’s comment.
This makes me think that using AI as a tool, for generating a comment on a post for example, is portrayed or thought of as a negative thing to do. In my view, that is wrong: it only strengthens the fear of AI tools and, with that, limits the number of people who take the effort to learn how to use this TOOL and actually use it.
I know that our egos may want people to spend more time and elaboration on what we put out there, but that would depend on the depth of what we present to the public. AI is here to stay, and knowing how to leverage it is key to achieving much of what the world has to offer today. By the way, my book 8 Egos at Work may be a good companion to understanding how the ego works at work and outside the office: https://vervetranslations.com/books-science-based-publications/
The third aspect worth mentioning is terminology. I noticed that people speak of generation and creation indistinctly, as if they were the same thing, and they are not. Regarding AI as a tool to generate content, “generation” means gathering information for the user to know, understand, or create something. Sure, it may also create something, like the graphics for this article, but it’s a matter of how personally important such a creation is.
My example of my silver artwork works well here: I don’t generate my raw material — silver — nor the stones and other ornaments I use; I take what others generate and create something that follows my vision.
When I conduct research for a publication, generative AI tools provide me with references to inform my research process, and it takes seconds. The long road in the process is cross-referencing, reading, discarding weakly related documents, deeply reading relevant publications, making sense of the content, writing, spending an afternoon out discussing the topic with colleagues and fellow researchers, re-reading, editing, checking for words of impact, translating, and so on.
A fourth aspect that deserves a couple of lines is the tone of the bot’s responses, which resonates with human-written words, especially scientific language. This is because these tools generate responses based on existing data and, as the tool ‘explains’, base them on criteria of clarity, structure, and language patterns embedded in the tool by human actors. I left ChatGPT’s responses untouched so that the reader can experience how this tool ‘talks’.
There is much more I’ve mined from this case, which I will be sharing in a more extensive paper soon. Stay tuned!