Before you get too hyped about ChatGPT, consider this...

Everybody seems to be talking about ChatGPT these days: How it will revolutionize content creation, how it might replace any form of manually written content – and basically reshape life as we know it today.

And one thing is for sure: It’s a fascinating tool for generating all kinds of content with minimum effort within the blink of an eye. If there’s one thing, though, I learned during my career as a content creator, it’s this one: Shortcuts usually have a catch. In this case: not just one. So don’t get too excited about it yet.

First things first: What is ChatGPT?

ChatGPT is an AI-powered chatbot that generates “human-like” text based on the data it was trained on – with quite amazing results. See for yourself.

[Screenshot: ChatGPT correctly answering the question of what it is.]
ChatGPT works like a chat and can be "refined" within a conversation.

All the sentences look syntactically correct. They make sense and could actually have been written by a human. But it doesn't stop there. Since OpenAI opened ChatGPT to the public, many people have been testing its limits, e.g., having it explain the bubble sort algorithm in the style of a 1940s gangster movie, categorize SEO keywords in a table, or write code in Hinglish. I know there are more amazing results out there, like writing a (full?) bachelor's thesis and generating sad life-story posts for LinkedIn (the "what I learned from a flipped egg toast" kind of story). Unfortunately, I didn't bookmark anything when I first started seeing examples popping up in my feeds. But I'm sure you'll find them once you start browsing.
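For reference, the bubble sort algorithm mentioned above (minus the gangster-movie framing) is simple enough to sketch in a few lines of Python: it repeatedly walks the list and swaps adjacent elements that are out of order until nothing is left to swap.

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in their final place.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is sorted; stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```

It's exactly this kind of well-documented textbook material that ChatGPT reproduces so fluently, which says more about how often it appears in the training data than about the model's understanding.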

What these examples demonstrate quite nicely: ChatGPT doesn't just generate text. It generates all kinds of ready-to-use information in different forms. So it can, for instance, also debug code or provide you with a step-by-step how-to.

This is cool, no?

Yes. And no. Let’s take a look at what ChatGPT says when asked why companies should not use AI tools for their content creation.

[Screenshot: ChatGPT explaining the downsides of AI content creation tools, summed up below.]
These answers are a bit generic but not wrong and decent "fast food" in a world that always needs content as quickly as possible.

Let's sum this up:

  • AI tools can be unpredictable
  • AI content can be repetitive and lack creativity
  • AI tools may not capture the tone or style you want
  • AI tools can be expensive
  • AI tools can challenge your values

All these answers seem reasonable and straightforward. But let’s have a closer look at three of those: predictability, creativity, and values.

AI content is not predictable unless it's already predictable

AI content tools are not as intelligent or smart as some might think. Essentially, ChatGPT pulls text from a predefined data set and "curates" it in a new way, so the quality of the outcome depends heavily on the quality of the input. When combined with a regular web search (people have already tried this), the door is open to all kinds of false information, some of which may have been published for exactly that purpose: misinformation. So unless you're a real expert in your field who catches every false claim it generates, you risk spreading misinformation without even thinking about it. It's AI, right? So do you really have to check it? ChatGPT also becomes less impressive once you realize that you basically get back what was put in: controlled input, controlled output.

AI content lacks originality and creativity

AI content might be original in the way it "remixes" information and in the form it is presented. But because it depends on a fixed body of information, you won't find original or innovative ideas in it. That's why ChatGPT cannot forecast trends, for instance.

[Screenshot: ChatGPT explaining that it is limited to its training data.]

Let's face it: currently, it's sort of like a reverse crawler, selecting and ejecting data in a predetermined form. Honestly, I don't know how useful it is to be able to generate a poem about SEO in seconds. But yes, any creator who uses unexpected forms of content for differentiation might be getting a bit antsy right now. Others will jump on the hype train because they never had the skills or talent (or the inclination) to create such content themselves.

Is using AI tools like ChatGPT "ethical"?

There are many dimensions to this, so let's stick to the creator side.

Of course, it's worrying that SOMEONE created this content at some point, and now it serves as the foundation and reference for a tool operating at scale. As if being a creator weren't hard enough already, it's no longer just people "stealing" your content: in the future, original creators will have to compete with tools like ChatGPT or DALL-E (AI images) that may use their work without consent and without naming a source, even though that work is the origin of the AI-generated content. And it's not just individual creators but the whole creator industry, from content marketing agencies to image databases, that will have to figure out how to deal with this. Shutterstock, for instance, has already announced a partnership with OpenAI.

On the other side, we'll have "new creators": people who never had access to creating "good" content (be it for a lack of skills or something else). So what will happen when everyone can write... let's say... decent texts? Will it raise the bar and the pressure to create even better content? Or is it a form of enablement? But which kind of people are we empowering? ChatGPT at least tries to be cautious about what kind of information it provides, so its answers are limited to "harmless content". However, it doesn't seem to be that difficult to bypass its restrictions, as in the example of asking how to hotwire a car. I didn't test it myself, but apparently, quite a lot of people are eager to get around ChatGPT's guardrails.

Is the hype justified?

Obviously, these are just a few thoughts on ChatGPT because there would be so much more to say about it. In general, there are some takeaways for me.

  1. ChatGPT is a pretty impressive tool when it comes to generating text that reads "human-like"
  2. ChatGPT is neither intelligent nor can it set the standard for authoritative content
  3. ChatGPT is only as good as its training data and the people reviewing its output
  4. ChatGPT is potentially dangerous as it can empower the wrong people and ideas, e.g., it might provide easy access to harmful information, or it might become the greatest liar out there when people blindly believe in its authority
  5. ChatGPT will not kill the creator industry, but it will change it. If you're good at what you're doing, no need to panic yet.

And let's put a "currently" in all of those.

So far, I haven't seen a single post generated by ChatGPT that would convince me to let it do my writing. Not because I can spot AI-generated text by some kind of "visible flaw", but because it usually lacks a certain reading flow, personal style, and/or depth. Granted, that's also true of plenty of "average" content written by people. And I've seen ChatGPT come up with quite a bit of nonsense, too, so I'll stick to writing my content myself for now.

However, I've been playing around with DALL-E for some time now (see the header image) and still haven't figured out how it really works.
