Is ChatGPT Trustworthy?

ChatGPT, the popular AI language model, is a fascinating piece of technology. It can instantly produce fluent, human-sounding responses to almost any prompt. But how accurate are those responses?

While the tool's linguistic capabilities are outstanding, we have found via testing that it is not always possible to trust the correctness of its results. We recommend employing ChatGPT as a source of inspiration and feedback—not as a source of information.

In the sections below, we go through what ChatGPT excels at and what it frequently gets wrong. We also look at the reasons behind its inconsistent performance and how to use it responsibly.

What is ChatGPT good at?

ChatGPT was trained on a massive quantity of information, so it's likely to know something about almost everything you ask. Additionally, it excels at providing examples to support its conclusions.

Example: ChatGPT explains the theory of relativity

Explain the theory of relativity.

  • The theory of relativity, developed by Albert Einstein, is a set of two interrelated theories: special relativity and general relativity. Special relativity describes how the laws of physics are the same for all observers moving at a constant velocity relative to one another, and that the speed of light is constant for all observers. It also shows that time can appear to pass differently for observers in different relative motions, and that matter can be converted into energy. General relativity extends the ideas of special relativity to include the effects of gravity. It describes how matter and energy can warp the fabric of spacetime, causing the curvature of the path of objects moving in its vicinity. This means that the presence of massive things can cause light to bend, time to slow down, and the spatial geometry of the universe to be non-Euclidean. Together, the theory of relativity revolutionized our understanding of space, time, and gravity, and has been confirmed by numerous experiments and observations.

What ChatGPT gets wrong

Despite the vast range of subjects ChatGPT can confidently address, it is not a good idea to rely on its responses without cross-referencing them with reliable sources. Ask it a more specific question, even one that seems simple to a person, and it may not respond correctly.

Example: ChatGPT answers incorrectly

Are there any singular nouns ending in a single “s”?

Yes, there are singular nouns that end in just one “s”. Here are some examples:

  • Glass
  • Grass
  • Bass
  • Gas
  • Pass
  • Class

Note that these nouns can still be made plural by adding “es” to the end, such as “glasses”, “grasses”, “basses”, etc.

The tool's response is incorrect in this case: five of the six examples it provides end in a double “s.” Yet it conveys no lack of confidence, answering in the same tone it uses when it is right.

In this case, it's simple to see that the given response is incorrect, but with more complex topics, it might not be as clear. When utilizing ChatGPT to learn about a subject, it is especially important to confirm the information it provides you with using reliable sources.

Can ChatGPT learn from its errors?

The advertised features of ChatGPT include remembering what was said before in the same discussion and reacting to user corrections. Does this, however, enable it to comprehend and apply feedback for the things it initially gets wrong?

We attempted to correct ChatGPT's inaccurate response above, but we discovered that although it accepted the correction and acted as if it understood, it kept answering incorrectly. This suggests that it not only misunderstood the initial prompt but also failed to understand the correction.

Example: ChatGPT responds to a correction

Most of these examples end in a double “s.”

I apologize for the mistake. You are correct, most of the examples I gave end in a double “s.” Here are some examples of singular nouns that end in a single “s”:

  • Cross
  • Boss
  • Loss
  • Kiss
  • Bliss
  • Dress
  • Guess
  • Address

These are just a few examples. Many more singular nouns end in a single “s.”

The same thing happened with a more technical grammatical question. The tool corrected the initial issue we identified, but made another mistake in the process. When we pointed out this second problem, it again acted as if it understood but still could not provide a correct response.

Example: ChatGPT responds to corrections

Provide an example of a possessive noun used independently, not as a determiner.

  • One example of a possessive noun used independently is: “The book is mine.” In this sentence, “mine” is a possessive noun that is used to refer to ownership, and it is not acting as a determiner before a noun. Instead, it is being used as a pronoun to replace the noun “book.”

“Mine” is a pronoun, not a noun.

  • My apologies, you are correct. “Mine” is indeed a pronoun, not a noun. Thank you for pointing that out. Here’s an example of a possessive noun used independently: “Mark’s car is brand new.” In this sentence, “Mark’s” is a possessive noun that is indicating ownership, and it is not used as a determiner before a noun. Instead, it is being used to show the relationship between the noun “car” and the person “Mark.”

In that example, “Mark’s” is used as a determiner modifying the noun “car.”

  • I apologize, you are correct once again. “Mark’s” is indeed functioning as a determiner in that sentence, modifying the noun “car.” My apologies for the oversight. Thank you for bringing it to my attention.

Can you provide a correct example?

  • Certainly! Here’s an example of a possessive noun used independently: “Jennifer’s success was the result of her hard work.” In this sentence, “Jennifer’s” is a possessive noun indicating ownership, and it is not used as a determiner before a noun. Instead, it is being used to show the relationship between the noun “success” and the person “Jennifer.” This sentence is a clear example of a possessive noun being used independently, not as a determiner.

In the long run, ChatGPT is expected to learn from some of its errors, since future versions will be trained on the conversations it is currently having with users (though it is unlikely ever to be perfect). Within a single chat, however, its capacity to understand and retain feedback seems limited.

Why does ChatGPT sometimes make mistakes?

ChatGPT is an AI language model: it aims to produce fluent, convincing responses to your inputs. Because it was trained on a large amount of text from many sources, it can address a wide variety of subjects. However, it doesn't look up answers in a database of facts. Instead, it generates text based on the patterns it picked up during training.

Asking it an unexpectedly phrased question highlights this limitation. For instance, ask "Is France the capital of Paris?" A person understands that the right response is "No, it's the other way around: Paris is the capital of France." ChatGPT, however, becomes confused.

Example: Confused answer

Is France the capital of Paris?

  • No, Paris is not the capital of France. Paris is a city in France and the capital of France is Paris.

This is because the model only generates text based on the patterns it was trained on; it doesn't actually "know" anything. It never intentionally lies, but it also struggles to distinguish between what is true and what is false. Due to the peculiar nature of the question in this instance, it is unable to fully understand what is being asked and ends up contradicting itself.
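The difference between pattern completion and genuine knowledge can be illustrated with a toy example. The sketch below (plain Python, and in no way ChatGPT's actual architecture, which is vastly more sophisticated) trains a tiny bigram model on a few sentences and then generates text purely by following learned word-to-word patterns. The corpus and function names are invented for illustration.

```python
import random
from collections import defaultdict

# Tiny training corpus; a real language model sees billions of words.
corpus = "paris is the capital of france . france is a country in europe .".split()

# Learn bigram patterns: for each word, which words followed it in training.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a word that followed the
    previous word during training. No fact-checking happens anywhere."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: no known continuation
        words.append(random.choice(options))
    return " ".join(words)

print(generate("france"))
```

The model never consults a source of truth; it only continues statistically familiar patterns. That is why output can be perfectly fluent yet factually wrong, and why the model itself has no signal telling it which is which.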

ChatGPT should answer most general knowledge questions correctly most of the time, but when a question is phrased oddly or concerns a highly specialized field, it can easily answer incorrectly or appear to make things up ("hallucinating," as the developers sometimes describe it). And it presents correct and incorrect answers with equal confidence.

How to use ChatGPT effectively

Despite these drawbacks, ChatGPT is still an interesting and practical tool for both academic study and writing. There are, however, some guidelines for using it responsibly and effectively.

You can:

  • Ask ChatGPT to walk you through the fundamentals of a topic.
  • Use it to generate and refine ideas for research questions, outlines, etc.
  • Ask it for feedback on your writing for assignments.

But don't:

  • Trust ChatGPT's information without checking it against other sources.
  • Cite ChatGPT as a reliable source of information.
  • Ask it to write your assignments for you (this is also generally regarded as plagiarism).
