Taming the AI Beast: How to make ChatGPT serve, not enslave you

I've been playing with the ChatGPT free version for a few weeks now and I admit I'm impressed with what it can do. But I'm also concerned about the dangers inherent in this and other generative AI tools.

Here, we'll review the excitement around the ways content creators can use ChatGPT's pretty amazing capabilities; examine two major concerns about the quality and one about the ownership of its output; and talk about ways you can avoid those dangers.

Yes, it's a powerful tool

And it's not going away. You've seen the headlines about ChatGPT.

With millions already using it, the companies and programmers behind ChatGPT and similar generative AI platforms will only keep them growing and evolving.

So, as the Borg (in Star Trek) say, "Resistance is futile."

Then again, the humans survive in that storyline. Picard and Seven learn to take advantage of the tech.

So the real question is how can you put the tools that AI makes available to best use in your work and life?

I've been doing a bunch of research myself on the three parts of this article. We'll circle back to why that's important below.

Then, I decided to ask ChatGPT to write a post for me. Here's the way I worded the request, focusing on business uses:

"Write a blog post describing the best ways to use ChatGPT for business purposes. Take note of the weaknesses and limitations, including database limits, false answers, and copyright ownership issues. Offer suggestions for avoiding problems from these weaknesses and limitations."

The response was a decent first draft, 624 words long. And that, my friends, is one of the most powerful aspects of this tool: generating first drafts, or as I called them in an earlier post here, "instant outlines."
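
As an aside for readers who would rather script a request like that than paste it into the chat window: OpenAI also offers an API. Below is a minimal sketch, in Python, of sending the same prompt programmatically. I used the free web interface for this article, so treat the client setup and the "gpt-3.5-turbo" model name as assumptions about your own account and library version, not a record of my process.

    # Minimal sketch (not what I actually did): sending the same blog-post prompt
    # through OpenAI's Python client. Assumes the openai package (v1 or later) is
    # installed and an OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    prompt = (
        "Write a blog post describing the best ways to use ChatGPT for business "
        "purposes. Take note of the weaknesses and limitations, including database "
        "limits, false answers, and copyright ownership issues. Offer suggestions "
        "for avoiding problems from these weaknesses and limitations."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )

    # The draft comes back as plain text you can edit like any other first draft.
    print(response.choices[0].message.content)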

But I knew enough from my own research to recognize that the AI draft was incomplete. Plus, the writing became repetitive and overly formal for my style. For example, slight variations of the phrase "ChatGPT can only provide answers based on the data it has been trained on" appear six times in its short answer. To me, that point needed to be made only once.

Of the uses ChatGPT touted in its draft, content creation is where most of the recent excitement seems to be centered. Here, I'm using only a few excerpts from that first draft. But that doesn't mean the exercise was useless. The draft provided several useful nuggets, suggested a sensible structure for the post, and, upon reflection, triggered some useful follow-up questions that we'll explore in a moment.

My lessons on how (and how not) to use ChatGPT in your content creation work should be clear from the examples in the discussion below.

I'll wrap up this section with some links to posts and articles you may find useful in expanding your use of ChatGPT:


Danger, Will Robinson!

Lies, Damn Lies, and Statistics

Several examples of ChatGPT and its competitors making glaring errors have been covered in the news lately. The headline of one article, about ChatGPT passing an MBA exam, obscures some of the problems the Wharton professor actually reported:

"Surprisingly, it performed the worst when prompted with a question that required simple math calculations. ... These mistakes can be massive in magnitude. ... The present version is not capable of handling more advanced process analysis questions, even when they are based on fairly standard templates."

Another Wharton professor called ChatGPT "a consummate bullshitter, and I mean that in a technical sense," explaining:

"Bullshit is convincing-sounding nonsense, devoid of truth, and AI is very good at creating it. You can ask it to describe how we know dinosaurs had a civilization, and it will happily make up a whole set of facts explaining, quite convincingly, exactly that. It is no replacement for Google. It literally does not know what it doesn't know ..."

And in an article on Medium, writer Zulie Rane told the (almost) horror story of working with ChatGPT on an article for a client. She spent hours working with the AI to get some of the content and then wrote the article herself, incorporating some of the material from ChatGPT and adding her own research. But just to be doubly sure about the accuracy and quality of this article she was being paid to produce, she sent it off to a professional editor knowledgeable about the topic.

She got the article back "covered in red writing and strikethroughs and critical comments." She explained that ChatGPT had "fabricated facts ... made incorrect analogies ... got specific technologies wrong" and concluded that

"ChatGPT is not a reliable writer or researcher."

Lest you think these are criticisms from Luddites who resent new technology, why not take ChatGPT's own word for it?

Remember that I asked ChatGPT to include its own weaknesses and limitations in the blog post draft it generated. Its answers included:

"False Answers: ChatGPT may provide false or inaccurate answers based on the data it has been trained on."

And a post about ChatGPT on the OpenAI blog helps explain why such errors occur:

"ChatGPT sometimes writes?plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s?currently no source of truth; ...
"Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our?current models usually guess?what the user?intended.

I followed up my initial request by asking for examples of ChatGPT giving incorrect answers. The AI gave remarkably honest and sobering answers:

"Here are some examples of false or inaccurate answers generated by ChatGPT:
Medical Advice:?...
Legal Advice:?...
Sensitive Topics:?...
Company-Specific Information:?..."

When OpenAI and the chatbot themselves warn you that you might get "incorrect or nonsensical answers" to your questions, it seems wise to take notice.

Take special note of that warning from OpenAI that ChatGPT has "currently no source of truth."

Think about what kinds of "incorrect or nonsensical" garbage might be included in its database composed of internet scrapings. And how it may go about pulling together answers when it makes its "guess what the user intended."

Want further proof that ChatGPT will make things up? Zulie Rane noted,

"The further you push [ChatGPT], the more you’ll realize this. The more details you ask, the more you demand, the less correct it gets."

When I first started researching the copyright questions discussed below, I "pushed" ChatGPT to provide specific sources for recent thinking and asked it for links to relevant online articles.

The first time I asked, I got this response: "I'm sorry, but as a text-based AI language model, I don't have the ability to provide links." Having previously gotten answers with links in them, I knew this was "incorrect or nonsensical." Was it a lie, or AI laziness?

So I refined the question a bit and this time got a very promising answer:

Great, I thought.

Until I clicked the links:

I tried searches within each of these sites, along with web searches in Google and Bing, with and without quotation marks around the supposed titles of these alleged articles. I even tried searching the names of the authors and their online publication lists.

As far as I can tell, ChatGPT made up the articles and its descriptions of them.
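
One small habit that would have saved me some of that searching: before hunting for an article by its supposed title or author, check whether the link even resolves. Here's a rough sketch of that first-pass check in Python; the URLs are made-up placeholders standing in for the ones ChatGPT supplied, and a link that loads still has to be read to confirm it says what the AI claims.

    # Rough sketch: a first-pass check on AI-supplied links.
    # The URLs below are placeholders, not the ones ChatGPT actually gave me.
    import requests

    urls = [
        "https://example.com/ai-and-copyright",
        "https://example.com/who-owns-chatgpt-output",
    ]

    for url in urls:
        try:
            # HEAD keeps the request light; some sites only answer GET.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            print(f"{url} -> HTTP {resp.status_code}")
        except requests.RequestException as err:
            print(f"{url} -> request failed ({err})")

    # A 200 only proves a page exists; it doesn't prove the page supports
    # the claim ChatGPT attached to it. That part is still on you.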

It might have been true back when ...

The other major area ChatGPT warns about, so often that the warning might get lost, is the currency of the information it uses to generate answers. OpenAI explains, "ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022." This translates to a database scraped from the internet up through 2021.

Some stuff has happened since 2021: important events (e.g., mid-term elections, war in Ukraine), evolving ideas (e.g., around racial, gender, and environmental justice), and significant discoveries (e.g., from the Webb telescope). Much of it has been recorded in one form or another on the internet.

More stuff will keep happening.

To my follow-up question about outdated answers, ChatGPT replied:

"Here are some examples of outdated information ChatGPT may use in its answers:
Historical Events:?...
Technology:?...
Business News:?...
Cultural References:?..."

Recall the warning from the Wharton professor that ChatGPT "literally does not know what it doesn't know." And instead of admitting that up-to-date answers to your question are not in its database, ChatGPT might "happily make up a whole set of facts" to provide you with "convincing-sounding nonsense, devoid of truth."

We may assume the AI isn't (yet) feeling happy about deceiving us, right?

But, as Zulie Rane showed, we should be careful to make sure we're not using "convincing-sounding nonsense" in our own work.

Who owns copyright in your work?

I've been noodling this question about content created by, or with assistance from, ChatGPT, and my research so far indicates that nobody knows.

In one blog post analyzing the OpenAI Terms of Service that govern both ChatGPT and DALL-E, the attorney blogger opined that you may not get copyright in the output from the AI because it doesn't meet the fundamental requirements of originality and human authorship.

Another post drew a distinction between content passively or randomly generated by the AI and that showing human involvement. At one end, quoting an attorney,

"When AI randomly generates artwork, then there's no human authorship ... 'If you're letting the computer take over for you, like pulling a lever on a slot machine, then it's public domain really, anyone could take that output. You don't have intellectual property rights to it.'"

In contrast, the attorney noted,

"[T]he more human input that’s involved, the higher the likelihood that the artwork is eligible for copyright. When AI is used to create an artist’s specific vision?with direct manipulation by the artist, then the artist would have intellectual property rights."

In another piece, aptly titled Is Copyright Broken? Part 3: Artificial Intelligence and Author Copyright, the author notes some early attempts around the world to define copyright in AI-involved works, but acknowledges that nothing is settled. She mentions the UK's effort,

"In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be?the person by whom the arrangements necessary for the creation of the work are undertaken."

The emphasized phrase in the UK law seems to leave open the possibility that in the case of the "slot machine" analogy quoted above, instead of nobody owning the copyright, the person(s) who wrote the code for the AI could be the ones who made "the arrangements necessary for the creation of the work."

In my digging into this question, I was focused on U.S. copyright law. The U.S. Copyright Office has already issued a decision refusing copyright to an image "autonomously created by artificial intelligence without any creative contribution from a human actor." Since then it's been reported that, after initially granting registration for a graphic novel partially created using AI, the Office has asked the author "to provide details of my process to show that there was substantial human involvement in the process of creation of this graphic novel."

From those starting points, I found my way to the Copyright Office's Compendium, Chapter 300 - Copyrightable Authorship: What Can Be Registered, which emphasizes both the originality (§ 308) and human authorship (§ 306) requirements. It also specifically states under "Uncopyrightable Material" in § 313.2:

"Similarly, the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically?without any creative input or intervention from a human author. The crucial question is 'whether the 'work' is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine."

The emphasized phrases in these sources got me thinking about other major technological breakthroughs that enabled or expanded human artistic output. One of the court cases cited in the Compendium on the question of human authorship goes way back to 1884, when photography was fairly new and the copyright ownership of the images created had not been settled.

[Image: copy of photo in the public domain via Wikimedia Commons]

In Burrow-Giles Lithographic Company v. Sarony, the Supreme Court considered the case of photographer Napoleon Sarony and his famous photograph of Oscar Wilde. A lithographer had reproduced the image for sale and claimed that Congress had no power under the Copyright Clause of the Constitution to protect photographs, because "a photograph is not a writing nor the production of an author."

In a passage touching on how photography was still a new and mysterious technology, and one that suggests why I think this decision will shape how courts treat AI output, the Court noted:

"The only reason why photographs were not included in the extended list in the [Copyright] act of 1802 is probably that they did not exist, as photography, as an art, was then unknown, and the scientific principle on which it rests, and the chemicals and machinery by which it is operated, have all been discovered long since that statute was enacted."

Still, the lithographer argued that a photograph is a "mere mechanical reproduction ... and involves no originality of thought ..."

In finding that, indeed, Sarony had proven that his photograph involved elements "of originality, of intellectual production, of thought, and conception on the part of the author," the Court noted that the trial judge had made findings of fact that the photograph was a:

"useful, new, harmonious, characteristic, and graceful picture, and that [the photographer] made the same . . . entirely from his own original mental conception, to which he gave visible form by posing the said Oscar Wilde in front of the camera, selecting and arranging the costume, draperies, and other various accessories in said photograph, arranging the subject so as to present graceful outlines, arranging and disposing the light and shade, suggesting and evoking the desired expression, and?from such disposition, arrangement, or representation, made entirely by [the photographer], he produced the picture in suit."

It's worth reading that set of findings over a few times, I think, as it should give insights into how human content creators can and should partner with these new AI tools to generate content they can proudly publish and call their own.

The result in the photography cases also calls into question whether that language in the UK law should ever be interpreted to place copyright with the company or programmers who create and provide these AI tools. That would be as if the courts had decided that Kodak or George Eastman owned copyrights in the pictures generated by the cameras and developing services they provided.

Mastering your tools

Like any other craft, writing and creating images have always challenged humans to develop and then learn how to use new tools. Over time, I think these new AI tools will seem no different.

The main skillsets you'll need to master, given both the exciting potential and the worrisome problems, aren't much different from those of the past, after all.

  • Experimenting with and learning to articulate your own questions that produce promising output. This is like Sarony arranging the setting for Wilde's photograph.
  • Testing and verifying that the output is sensible, up to date, and true. Like Sarony making sure his camera, film, and the subsequent development process worked properly.
  • Then, blending in your own original research and writing to make certain the ultimate result gives visible expression to ideas from your mind. Like posing and evoking the right expression from your subject.

I hope this review has both alerted you to some of the dangers and inspired you to avoid them, as you work to master these powerful new tools and seize upon the ever-changing opportunities for content creators.

Carol D.

Tom Collins ChatGPT's answer to this question demonstrates it was taught the importance of critical thinking! My question to ChatGPT: Can you identify misinformation? Answer: "As an AI language model, I am trained on a large corpus of texts and can provide information based on that corpus, but I am not inherently able to identify misinformation on my own. However, I can utilize various techniques to assess the credibility of information, such as comparing information across multiple sources, fact-checking against known sources of reliable information, and identifying inconsistencies or errors in the information. Additionally, if you have specific criteria or guidelines for what you consider to be misinformation, I can apply those standards to evaluate the accuracy of the information in question. Ultimately, it's important to critically evaluate information and not rely solely on any one source, including me, to determine its credibility."

Carol D.

Tom Collins Questions. Do you know if ChatGPT was programmed to flag certain information "irrelevant" when it searches for an answer? Can you tell ChatGPT, "Disregard everything prior to (date)" ?

DEBORAH BROWN-VOLKMAN

Tom Collins excellent article.

Yvonne DiVita

Toby Bloomberg Especially for you.
