AI Is Going To Steal My Lunch! The Truth About ChatGPT
Can AI lend a helping hand? (Photo by Possessed Photography on Unsplash)


ChatGPT has dominated technology headlines lately, in some cases even shoving aside reporting on Elon Musk and his Tweet-related court case (accusations include share price manipulation) and Facebook’s ongoing challenges around data safety and sharing.

What exactly is ChatGPT?

In a nutshell, ChatGPT, developed by OpenAI, was launched as a free tool on November 30, 2022, during the feedback-gathering phase of its development. It is a natural language processing tool driven by AI. It allows for human-like interaction, holding conversations and answering questions, and it can assist the user in composing emails, writing marketing copy (including short- and long-form blogs), composing essays, and writing code.

However, the Chief AI Scientist of Meta (formerly the Facebook company), Yann LeCun, was not all that impressed by ChatGPT. According to him, it was a 'decent' example of AI in action but "not the least bit revolutionary" and simply a well-put-together chatbot. Is this sour grapes from a company that is itself heavily invested in the future of AI?

Microsoft disagrees with Meta's assessment. The company invested $1 billion in OpenAI in 2019 and has continued to back it, recently providing another significant tranche of funding to the creators of ChatGPT (unfortunately, no dollar value was available).

My Lunch!

ChatGPT has excited comment in ways that may have startled the powers that be at Meta, but the fact remains that this AI-driven software has the potential to be enormously disruptive, especially in the marketing field.

Imagine what is, for all intents and purposes, an AI assistant that can take a brief. It can respond to research demands, stay mindful of key messages and brand positioning, trawl the Internet for information from essays, scholarly works, popular opinion, and blogs, and come up with a piece of writing perfectly suited to a blog post, a sales kit, or internal corporate messaging.

That’s what ChatGPT can do. In theory. When results are viewed generously. Sometimes.

Your Sandwiches Are Safe, for the Moment

ChatGPT is a work in progress, and it has already gotten into trouble at least one company that specializes in the written word and is viewed as an authoritative source.

Only a few weeks ago, CNET, a well-known technology news site, decided to take AI-generated content out for a spin, with the idea that the AI could help create simple content. However, as Futurism and The Verge reported, both CNET and its sister site, Bankrate, landed up with copy that was riddled with factual errors, and with articles that could have featured as poster children for plagiarism.

Should CNET have seen this coming? Let’s look at the facts.

Things explode on the Internet quickly, and the excitement around ChatGPT went viral. Here was artificial intelligence in action! It can hold a conversation! It passes the Turing Test! (The Turing Test is a method for determining whether or not a computer is capable of thinking like a human being.)

According to many pundits, ChatGPT would revolutionize the generation of content, with one being quoted as saying ‘All you have to do is ask for 2,000 words of content and you’ll have quality copy that is useful to the reader!’

The days of the creative are done!

Back to Reality

Let’s take a few steps back.

The ChatGPT AI tool recently passed law exams in four courses at the University of Minnesota. It passed (with middling results) another exam at the University of Pennsylvania’s Wharton School of Business, according to professors at the schools.

That’s all very well. But here are the facts.

ChatGPT is not artificially intelligent. 'AI' is one of those terms that has captured the popular imagination, but the day of the robot butler, or even of the semi-competent AI-powered content creator that can operate on a 'fire-and-forget' basis, has not arrived. Not even close.

Ask ChatGPT what it does and it will provide a lightning-fast response:

“I am a machine learning model that has been trained on a large dataset of text which allows me to understand and respond to text-based inputs.”
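For readers who want to reproduce that answer themselves, the same question can be put to the model programmatically rather than through the chat interface. The sketch below is a minimal, illustrative example only: it assumes the official openai Python client and an API key, and the model name and prompt wording are assumptions of mine rather than details drawn from this article.

```python
# A minimal sketch of asking the model to describe itself via the OpenAI API.
# Assumptions: the official `openai` Python client (v1+) is installed and an
# API key is available in the OPENAI_API_KEY environment variable; the model
# name below is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute whichever is available
    messages=[{"role": "user", "content": "What do you do?"}],
)

# Print the model's own description of itself.
print(response.choices[0].message.content)
```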

Could this AI disrupt the online search industry, allowing Google (to take the gorilla in the search sandpit as an example) to imitate the functionality of digital assistants like Alexa and Siri? Alternatively, could ChatGPT handle the onerous tasks of content creation, replace customer service chatbots (long overdue), engage in research, draw up legal documents, and more?

The Bottom Line

Enough questions; time for answers. Let's start the ball rolling with a controversial one.

When it comes to simple tasks, such as drawing up lists, developing quizzes, identifying research sources, rote completion of assignments (such as passing tests relying on recollection, not original analysis), creating images to match stories and even holding a conversation, ChatGPT has a chance to shine.

Is it artificially intelligent, and can it replace a skilled human content creator? The answer is no. Not yet, and maybe never.

The Truth and Nothing but the Truth

The fact of the matter is that ChatGPT and competing AI content generation systems cannot tell fact from fiction. The only creativity ChatGPT exhibits is based on content sourced from human beings and reproduced in a format that conforms to the prompt entered by the user. Collating is not creating.

The outputs supplied by ChatGPT have not even reached the stage where they could trigger the 'uncanny valley' effect. The output is simply not human enough, and where it is passably human in tone and content, that is the result of repurposing the work of a skilled and creative human being.

Scratch under the skin of ChatGPT and you will find that its attractions are remarkably similar to those of the computers that replaced vacuum-tube systems: it is fast and able to process enormous amounts of data. ChatGPT does not 'understand' the data or other material it uses as the basis for producing content. There is no originality or thought. ChatGPT and the many other 'AI copy generators' cannot think. They do as instructed, and the result is often characterless: a cheap and stilted imitation of what copy should be.

ChatGPT and other AI systems may draw on an enormous pool of Internet-based knowledge and published works, but they lack general knowledge and the ability to write with passion, to draw together disparate ideas into content that appeals to human beings and, in the case of business copy, to motivate action, feed the sales funnel, encourage brand loyalty, and position the organization as an expert and trusted source.

The output from ChatGPT and other AI content generators is useful for getting the creative juices flowing, and these tools can make data gathering and research far less time-consuming. In time, the output could become more human and nuanced; AI might even develop a sense of humor.

At the moment, businesses are stuck with copywriters, researchers, and communication professionals who may occasionally develop body odor and require copious amounts of coffee.
