Is ChatGPT Trustworthy?
Learnasyougo
ChatGPT, the popular AI language model, is a fascinating piece of technology. It can instantly produce fluent, human-sounding responses to your prompts. But how accurate are those responses?
While the tool's linguistic capabilities are outstanding, we have found through testing that the correctness of its results cannot always be trusted. We recommend using ChatGPT as a source of inspiration and feedback, not as a source of factual information.
In the sections below, we go through what ChatGPT excels at and what kinds of things it frequently gets wrong. We also look at the reasons behind its inconsistent answers and how to use the tool responsibly.
What ChatGPT is good at
ChatGPT was trained on a massive quantity of information, so it's likely to know something about almost everything you ask. Additionally, it excels at providing examples to support its conclusions.
Example: ChatGPT explains the theory of relativity
Explain the theory of relativity.
What ChatGPT gets wrong
Despite the vast number of subjects ChatGPT can confidently address, it is not a good idea to rely on its responses without cross-referencing them with reliable sources. Ask it a more specific question, even one that seems simple to a person, and it may not respond correctly.
Example: ChatGPT answers incorrectly
Are there any singular nouns ending in a single “s”?
Yes, there are singular nouns that end in just one “s”. Here are some examples:
Note that these nouns can still be made plural by adding “es” to the end, such as “glasses”, “grasses”, “basses”, etc.
The tool's response is incorrect in this case: five of the six examples it provides end in a double "s." Yet it conveys no lack of confidence; its tone is exactly the same as when its answers are accurate.
In this case, it's simple to see that the given response is incorrect, but with more complex topics, it might not be as clear. When utilizing ChatGPT to learn about a subject, it is especially important to confirm the information it provides you with using reliable sources.
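For a claim like this one, verification doesn't even require an outside source: a few lines of code can check which words actually end in a single "s." The sketch below uses a hypothetical word list for illustration, since the tool's actual examples aren't reproduced here.

```python
# Check which candidate singular nouns end in a single "s" versus a double "ss".
# This word list is hypothetical, chosen only to illustrate the check.
candidates = ["glass", "grass", "bass", "bus", "lens", "boss"]

single_s = [w for w in candidates if w.endswith("s") and not w.endswith("ss")]
double_s = [w for w in candidates if w.endswith("ss")]

print("end in a single 's':", single_s)
print("end in a double 's':", double_s)
```

A check this simple is exactly the kind of cross-referencing worth doing before repeating one of the tool's confident-sounding claims.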
Can ChatGPT learn from its errors?
ChatGPT's advertised features include remembering what was said earlier in the same conversation and responding to user corrections. But does this enable it to understand and apply feedback when it initially gets something wrong?
We attempted to correct ChatGPT's inaccurate response above, but we discovered that although it accepted the correction and acted as if it understood, it kept answering incorrectly. This suggests that it struggled to understand the initial prompt and struggled just as much to understand the corrections.
Example: ChatGPT responds to a correction
Most of these examples end in a double “s.”
I apologize for the mistake. You are correct, most of the examples I gave end in a double “s.” Here are some examples of singular nouns that end in a single “s”:
These are just a few examples. Many more singular nouns end in a single “s.”
The same thing happened with a more technical grammatical matter. The tool was able to correct the initial issue we identified, but it made another mistake in the process. When we pointed out this second problem, it again claimed to understand but was still unable to provide a correct answer.
Example: ChatGPT responds to corrections
Provide an example of a possessive noun used independently, not as a determiner.
“Mine” is a pronoun, not a noun.
In that example, “Mark’s” is used as a determiner modifying the noun “car.”
Can you provide a correct example?
In the long run, ChatGPT is expected to learn from some of its errors, since the conversations it is having with users now will feed into training for future updates (though it is unlikely ever to be perfect). Within a single chat, however, its capacity to understand and retain feedback seems limited.
Why does ChatGPT sometimes make mistakes?
ChatGPT is an AI language model. It seeks to produce fluent, convincing responses to your inputs. It can address a wide variety of subjects because it was trained on a large amount of text from many sources. However, it doesn't answer by searching a database for the relevant facts. Instead, it generates responses from the patterns it learned during training.
Asking it an unexpected question highlights this limitation. Take, for instance, "Is France the capital of Paris?" A person understands that the right response is "No, it's the other way around: Paris is the capital of France." ChatGPT, however, becomes confused.
Example: Confused answer
Is France the capital of Paris?
This is because the model only generates text based on the patterns it was trained on; it doesn't actually "know" anything. It never intentionally lies, but it also struggles to distinguish between what is true and what is false. Due to the peculiar nature of the question in this instance, it is unable to fully understand what is being asked and ends up contradicting itself.
Most general knowledge questions should be answered correctly by ChatGPT most of the time, but when the question is phrased oddly or pertains to a highly specialized field, it might easily answer incorrectly or appear to be making things up ("hallucinating," as the developers sometimes describe it). And it presents itself as equally assured in both the correct and incorrect responses.
How to use ChatGPT efficiently
Despite these drawbacks, ChatGPT is still an interesting and practical tool, and it can help with both academic study and writing. There are, however, some guidelines for using it responsibly and efficiently.
You can:
But don't: