Before you get too hyped about ChatGPT, consider this...
Everybody seems to be talking about ChatGPT these days: How it will revolutionize content creation, how it might replace any form of manually written content – and basically reshape life as we know it today.
And one thing is for sure: it's a fascinating tool for generating all kinds of content with minimum effort, in the blink of an eye. If there's one thing I learned during my career as a content creator, though, it's this: shortcuts usually have a catch. In this case, not just one. So don't get too excited just yet.
First things first: What is ChatGPT?
ChatGPT is an AI-powered chatbot that generates "human-like" text based on the training data it received – with quite amazing results. See for yourself.
All the sentences are syntactically correct. They make sense and could actually have been written by a human writer. And it doesn't stop there. Since OpenAI opened ChatGPT to the public, many people have been testing its limits, e.g., having it explain the bubble sort algorithm in the style of a 1940s gangster movie, categorize SEO keywords in a table, or write code in Hinglish. I know there are more amazing results out there, like writing a (full?) bachelor's thesis and generating sad life story posts for LinkedIn (a "what I learned from a flipped egg toast" kind of story). Unfortunately, I didn't bookmark anything when I first saw examples popping up in my feeds. But I'm sure you'll find them once you start browsing.
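For reference, since bubble sort is one of those popular demo prompts: this is what the algorithm itself looks like as a minimal Python sketch (in plain English rather than gangster slang) – repeatedly swap adjacent out-of-order neighbors until nothing moves anymore.

```python
def bubble_sort(items):
    """Sort a list in place by swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After pass i, the i largest values have "bubbled" to the end,
        # so each pass can stop one element earlier.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # No swaps in a full pass: the list is already sorted.
    return items


print(bubble_sort([5, 1, 4, 2, 8]))
```

Prompting ChatGPT for exactly this tends to produce something very similar, which illustrates the point made below: it reproduces well-documented knowledge, it doesn't invent it.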
What these examples demonstrate quite nicely: ChatGPT doesn't just generate text. It generates all kinds of ready-to-use information in different forms. It can, for instance, also debug code or provide you with a step-by-step how-to.
This is cool, no?
Yes. And no. Let’s take a look at what ChatGPT says when asked why companies should not use AI tools for their content creation.
Let's sum this up:
All these answers seem reasonable and straightforward. But let’s have a closer look at three of those: predictability, creativity, and values.
AI content is not predictable unless it's already predictable
AI content tools are not as intelligent or smart as some might think. Basically, ChatGPT pulls text from a predefined data set and "curates" it in a new way. So the outcome depends heavily on the quality of the input data. When combined with a regular web search (people have already tried this), the door is open to all kinds of false information – some of which might have been published for exactly that purpose: misinformation. So unless you're a real expert in your field who catches every piece of false information generated, you end up spreading it without even thinking about it. It's AI, right? So do you really have to check it? ChatGPT is also not that impressive anymore once you realize that you basically get what you expect from "controlled input".
AI content lacks originality and creativity
AI content might be original in the way it "remixes" information and in the form in which it's presented. But since it depends on predefined information, you won't find original or innovative ideas in it. That's why ChatGPT cannot forecast trends, for instance.
Let's face it: currently, it's sort of a reverse crawler, selecting and ejecting data in a predetermined form. Honestly, I don't know how useful it is that you can generate a poem about SEO in seconds. But yes, any creator who uses unexpected forms of content to differentiate themselves might get a bit antsy right now. Others will jump on the hype train precisely because they never had the skills or talent (or whatever else it takes) to do so.
Is using AI tools like ChatGPT "ethical"?
There are many dimensions to this, so let's stick to the creator side.
Of course, it's worrying that SOMEONE created this content at some point, and now it serves as the foundation and reference for a tool operating at scale. As if being a creator weren't hard enough already: now it's not only people "stealing" your content. In the future, original creators will have to compete with tools like ChatGPT or DALL-E (AI images) that might use their content without consent and/or without naming a source, despite that content being the origin of the AI-generated output. And it's not just creators but the whole creator industry, from content marketing agencies to image databases, that will have to figure out how to deal with this. Shutterstock, for instance, has already announced a partnership with OpenAI.
On the other side, we'll have "new creators": people who never had the means to create "good" content (be it for a lack of skills or something else). So what happens when everyone can write... let's say... decent texts? Will it raise the bar and the pressure to create even better content? Or is it a form of enablement? And which kind of people are we empowering? ChatGPT at least tries to be a bit cautious about what kind of information it provides, so its answers are limited to "harmless content". However, it doesn't seem to be that difficult to bypass its restrictions, as in this example of how to hotwire a car. I didn't test it myself, but apparently quite a lot of people are eager to get around ChatGPT's guardrails.
Is the hype justified?
Obviously, these are just a few thoughts on ChatGPT; there's much more to say about it. Still, there are some takeaways for me.
And let's put a "currently" in all of those.
So far, I haven't seen a single post generated by ChatGPT that would convince me to let it do my writing. Not because I could recognize AI-generated text by some "visible flaw", but because it usually lacks a certain reading flow, personal style, and/or depth. Then again, that's also common in all kinds of "average" content written by people. And I've seen ChatGPT come up with quite a bit of nonsense, too, so I'll stick to writing my content myself for now.
However, I've been playing around with DALL-E for some time now (see the header image) and still haven't figured out how it really works.