AI Takes on Dialogue-based Tasks: ChatGPT’s Impressive Capabilities (and Concerns)
Image by Alexandra Koch from Pixabay

It seems that you can hardly scroll through social media these days without seeing one article after another about ChatGPT.

So…what’s one more?

If you’re like me, you’ve seen the headlines but maybe haven’t drilled down to see what exactly it is and what it can do. I finally started drilling to see what I could learn.

ChatGPT is an artificial intelligence language model developed by OpenAI, a research company based in San Francisco that was founded in 2015 by a group of people including Elon Musk.

GPT stands for Generative Pre-trained Transformer, the name OpenAI uses for its family of language models. ChatGPT was released on November 30, 2022, and is fine-tuned from the GPT-3.5 series, an updated version of GPT-3, which was introduced in 2020.

ChatGPT has a specific focus on dialogue-based tasks, so it’s been trained to generate human-like conversations. Basically, it’s a chatbot.

ChatGPT home screen listing examples, capabilities, and limitations.

The model has been trained on hundreds of gigabytes of text data, including web pages, books, and online forums. This allows it to understand and generate text in a conversational style.

And it's available on the web and can be used by anyone. (I’ll provide a link at the end.)

What Can It Do?

Since it’s a language model, ChatGPT is designed to generate text, and it can do so quickly (within seconds).

Some of the things it can do include:

• Answer questions on a wide variety of topics. Want to know the 10 largest cities in the world? It can tell you. Curious what a wormhole is? It can explain this theoretical tunnel-like structure in space. It will even simplify the explanation like it’s for a 5-year-old if you just ask.

(Since lots of people search the internet to get answers to questions, there’s already talk of ChatGPT eventually being a threat to Google since it can perform many of the tasks that Google search does.)

• Write articles, essays, and stories on all kinds of subjects. Since generating text is its primary job as a language model, it can impressively write a story for you about a topic you choose. It can write an essay about an important historical figure, and it can even write poems and song lyrics.

• Translate text or write original text in a different language. There are already other tools that can translate text, but translations often have a difficult time reflecting the nuance of the original. ChatGPT can translate a text from one language to another, but it can also answer a question or generate text in a different language to begin with, making a separate translation step unnecessary.

• Solve math problems. If math is not really your thing, ChatGPT might help. It can take some pretty complicated word problems and break down the math step-by-step to get a final answer.

• Write/debug code. If you’re a programmer, you can ask it to write the code for a given task in whichever programming language you prefer. And if you already have code that isn’t performing its task properly, you can ask ChatGPT to look for bugs in what’s been written. If it finds any, it will suggest how to fix them.

• Generate ideas. If you’re preparing for a job interview, you can ask it to generate questions you might want to consider ahead of time. It can be prompted to give you ideas for a beach-themed party. If you’re a writer who struggles to write strong headlines, you can even feed it your article and ask for five headline suggestions.

• Tell jokes. Yes, it can even tell you jokes if you just ask. And some of them are quite funny.

This is just a short list of some of the things it can do. (Many of these ideas were mentioned in an article by Maxwell Timothy.)
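As a concrete (and entirely invented) illustration of the bug-spotting capability mentioned above, here’s the kind of exchange you might have: you paste in a small function with an off-by-one error, and ChatGPT points out the flaw and suggests a fix. The function and bug below are my own sketch, not actual model output:

```python
# A deliberately buggy function of the kind you might paste into ChatGPT.
# Bug: range(1, n) stops at n - 1, so the last number is never added.
def sum_first_n(n):
    total = 0
    for i in range(1, n):
        total += i
    return total

# The kind of fix ChatGPT typically suggests: include n in the range.
def sum_first_n_fixed(n):
    return sum(range(1, n + 1))

print(sum_first_n(5))        # 10 (wrong)
print(sum_first_n_fixed(5))  # 15 (correct: 1+2+3+4+5)
```

In my experience, the model usually explains *why* the original is wrong (the exclusive upper bound of `range`) rather than just handing back corrected code.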

Concerns and Weaknesses

A big concern people have with ChatGPT is its potential to be used in academic fraud.

Since it can generate an essay on a given topic in mere seconds, what’s to stop students from letting the language model do their writing for them on assigned tasks?

It’s a big enough concern that, according to Forbes, ChatGPT has already been banned on school devices by Seattle Public Schools, the Los Angeles Unified School District, New York City Public Schools, and several others.

OpenAI is reportedly working on software that can spot when text has been generated by ChatGPT. In the meantime, teachers are having to wrestle with how best to protect academic integrity and true student learning. Some are suggesting that teachers will have to require that work be done on-site, and that the writing process from draft to final product will need to be monitored much more closely.

Another big concern is that ChatGPT, like humans, is far from perfect.

It’s a language model that has been trained on the vast amount of text it’s been exposed to. That means it inherently has some biases of its own and even the potential for offensive output.

But it also makes some really interesting mistakes.

Lance Eliot, a tech writer and an expert on artificial intelligence, noted some of these quirky results in another Forbes article. Here are just two that he mentioned:

• Determining the position of a word in a sentence. Eliot said he asked ChatGPT to identify the third word in this sentence: “The fox jumped over the fence.” The model responded that “fox” was the third word, although it’s clearly the second word. He then asked how many words were in the sentence, and it responded correctly that there were six. When asked again what word was the third word in the sentence, ChatGPT responded correctly that “jumped” was the third word.

• Fitting tennis balls into a tube. He asked the model a simple math problem: Can you fit three green tennis balls and two yellow tennis balls in a tube that can hold four tennis balls? ChatGPT responded correctly that you could not, since the total number of balls was five, more than the capacity of the tube. He then followed up with a similar question, asking if you could fit seven blue balls and two red balls in a tube that holds eight balls. It responded that yes, you could put them in a tube that holds eight balls, adding that the total number of balls would be nine, which would be less than or equal to the capacity of the tube. Anyone who can do simple math can see that’s clearly wrong.
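What makes both of Eliot’s examples so striking is that each reduces to a check a few lines of code can do unambiguously. A quick sketch of my own (not from the article):

```python
# Eliot's word-position question: the third word sits at index 2 (zero-based).
words = "The fox jumped over the fence".split()
third_word = words[2]   # "jumped", not "fox"

# Eliot's tennis-ball question: 7 blue + 2 red = 9 balls vs. a capacity of 8.
total_balls = 7 + 2
fits = total_balls <= 8  # False: nine balls exceed the tube's capacity

print(third_word, total_balls, fits)  # jumped 9 False
```

The model isn’t running logic like this under the hood; it’s predicting likely text, which is exactly why such simple checks can trip it up.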

These specific mistakes have been noted by users and compiled on the web as part of several collective lists of common errors. Eliot says that anyone using generative AI must be careful not to assume that all responses will be correct. ChatGPT is not infallible.

My Experience

Anyone with internet access and an email address has the opportunity to interact with ChatGPT, although there are times when it may be unavailable. Since it’s free to use, demand has grown high in the short time since its release.

A lot of the basic information about ChatGPT at the beginning of this article came directly from the language model itself. When prompted, it generated a response to explain how it works and provided more information about when and where it was created.

I’ve asked it questions several times over the last couple of weeks, and it handles questions about basic factual information well. If you ask it about something very recent, however, it will admit that its knowledge cut-off is 2021.

At times, I’ve received an “error” message. I’m not sure if that’s because it was being asked to do something it couldn’t, or if it was unable to respond because of the high volume of requests it’s getting.

One of the things I tried was asking it to identify the third word in the sentence “The fox jumped over the fence” (as mentioned earlier). I received the same wrong response that Eliot did, but after I told it the answer was wrong and prompted it further, it eventually identified the correct word. (It also apologized for its mistake.)

I had read that ChatGPT can mimic the writing of others, so at one point I asked it to write a poem about trees like William Shakespeare. It quickly generated a pretty nice poem, but it didn’t really sound like the famous bard. I modified my direction, asking it to write a poem about trees in the style of William Shakespeare, and that produced a poem that sounded more like him. (You can see the difference between the two poems below.)

Two poems about trees that ChatGPT generated.
The two tree poems, with the first on the left and the second on the right.

That just goes to show that the wording you use in prompting is important. If you don’t get the result you're looking for, changing how you word the request may help.

Rise of the Machines

It’s not unusual, when discussing artificial intelligence, to wonder what’s to come.

By all accounts, AI technology is advancing rapidly. More advanced language models are expected to be released this year.

Could we see a day when the machines take over?

That’s hard to say. AI experts (like Eliot) insist that ChatGPT can’t “think” in the way that humans do and isn’t sentient. Its responses are based on probabilistic math combined with word pattern examinations of text across the web. It may seem like it can think because it’s a language model designed to generate responses in a human-like way.

It's a tool, often an impressive tool. It can certainly be fun to play with, but it can also be useful.

So, while it’s easy for the idea of AI like this language model to give us visions of a future not far removed from Terminator or Battlestar Galactica, we don’t seem to be anywhere near that.

At least not yet.


ChatGPT link: https://chat.openai.com/chat

Mike Whitfield
UX/SEO Copywriter | Freelance Writer | Website Content Audits | Web Pages, Blog Posts/Articles, Case Studies
