Is it high time to take ChatGPT offline?
Aleksandr Tiulkanov
Upskill in the EU AI Act - Link in profile | LL.M., CIPP/E, AI Governance Advisor, implementing ISO 42001, promoting AI Literacy
At this point, you might be wondering where I’m even coming from. By now, you may have seen dozens of articles and discussions praising this tool: ChatGPT revolutionises this field, it enables us to do wonders in that area, it saves us from those chores…
If this is the only sentiment you have seen around it, I have bad news for you.
ChatGPT is not reliable. It is in beta. As of today, you should never, ever, take any ChatGPT output at face value. And that limits currently possible use cases dramatically.
What exactly is wrong with ChatGPT?
Its outputs are often not truthful
Those who have read my previous newsletter are already aware of one example proving my point. That example concerned legal research, but the same goes for pretty much any other application. My colleague Dan Reitman tried it in personal fitness management, and it failed spectacularly. Others have tried it in different areas, with similarly disappointing results.
In all those cases, we didn’t take ChatGPT’s outputs at face value but verified them instead, and our assessments shared one common finding.
What ChatGPT seems to be currently optimised for is generating persuasive, engaging content without sufficient regard for the truth value of its outputs.
To put it in purely academic terms, it is a proficient producer of bullshit. As observed by the famous philosopher Harry Frankfurt in his treatise “On Bullshit”, this phenomenon occurs when, instead of lying to you intentionally, one genuinely does not care whether what is being said is true.
Don’t we have the same problem with people?
The above is true not only for ChatGPT, but also for some people. Irresponsible bloggers, sensation-seeking journalists and pickup artists all thrive on delivering persuasive messages without regard for truth.
Sometimes, even responsible people are incentivised to disregard the truth value of what they say. For example, when I was preparing for the speaking part of the IELTS examination (a test of English proficiency for non-native speakers), I was specifically instructed to care not about what I said, but about how I said it.
My exam consultant’s advice, which proved very useful (I got a score of 8.5 out of 9), was to concentrate on speaking persuasively and to prioritise the flow of the conversation over the truth of my statements, because the goal of the exercise was not telling the truth but demonstrating speaking and conversation ability.
Yet in many other parts of life, we prioritise (or should prioritise) truth instead.
The developers of generative art software may have trained their systems in distilling and reproducing what it is like to paint like Rembrandt or other famous artists. The ChatGPT developers seem to have inadvertently trained their system in distilling and reproducing what it is like to talk like a proficient producer of BS, an ignorant but engaging know-it-all.
People’s lies are predictable; ChatGPT’s untruths are not
But this isn’t the whole problem. In a private discussion, my colleague Dan Reitman nailed it: ChatGPT seems to embed untruths in very unpredictable parts of its outputs. Humans usually don’t utter untruths without a reason. Humans have their motives, and we may be more or less adapted, evolutionarily as well as through generational knowledge transfer and training, to account for that. By guessing the mental states of fellow humans, we’re used to identifying which parts of what they say might be untrue.
ChatGPT has no motive to lie intentionally; its patterns for unintentionally embedding falsehoods are very much unlike those of humans and are therefore very difficult to predict. Our in-built heuristics fail here. Because of that, you cannot trust anything ChatGPT says, and you must verify everything.
What are the possible harms?
Non-transparency may harm many users
As of January 2023, ChatGPT is freely available to everyone, not only to those who understand its inherent design flaws and who apply critical thinking before using the generated outputs.
A bleak upfront disclaimer that the system “may occasionally generate incorrect information” is a huge understatement, and it is visually lost on the start screen.
In my view, there should be a simple, prominent, colour-highlighted warning to suggest that users should never, ever, take any ChatGPT output at face value.
When assessing the risks resulting from non-transparency, we often think of who may be vulnerable. It is often children, largely because their cognitive capabilities are not developed enough to help them see the risks and act on them.
In this instance, however, the vulnerable group might be significantly larger, given that most schools don’t teach critical thinking as a subject. It also doesn’t help that, as noted above, natural human heuristics seem to be ill-suited for detecting untruths in ChatGPT outputs. Often, you really need to be a subject matter expert to identify what isn’t right.
Harms may multiply as generated content propagates
It is bad enough that overexcited early adopters of the technology may fail to exercise critical thinking and over-rely on ChatGPT’s (or similar systems’) outputs, potentially losing a legal case, impairing their health, or causing other harm to themselves and their families.
As such tools are optimised for generating persuasive texts, anyone can use them to create and post engaging content. Generative AI systems significantly lower the barriers to content creation. Copywriters and corporate social media staff were already using similar tools to save time and money, and now largely the same or better technology is available to amateurs.
One may go so far as to predict that, as a result, the whole internet could drown in engaging AI-generated content riddled with non-obvious falsehoods. This is especially concerning given that the algorithms governing social media tend to amplify content that is not necessarily truthful but drives user engagement, and therefore user retention and ad revenue.
Notably, some online communities have already imposed temporary bans on the submission of AI-generated texts by their users.
The digital divide may widen
As people become more aware of the deficiencies of AI-generated content, preferences may shift towards professionally curated content, as opposed to amateur pieces or, more generally, texts of unknown or questionable provenance.
Yet, understandably, a large chunk of professionally curated content, including peer-reviewed academic papers and respectable business media, is not and will not be available to everyone for free.
This might exacerbate the existing digital divide: people without sufficient means might increasingly rely on free but mostly AI-generated content of questionable truthfulness, while the luxury of truth would be afforded first and foremost to the discerning intelligentsia and people of means.
Arguably, this kind of future would not be a desirable one, to say the least, even if there is some utility in improving the understanding and status of content curation in humanity’s pursuit of refining available knowledge.
Why, then, may we still want ChatGPT to stay online?
As the observations above suggest, there are obvious risks of harm to ChatGPT users, to these users’ potential audiences, and to the public at large.
What, then, might be the arguments for the system remaining online?
If it stays online, it may be improved faster
The situation and the public outcry might have been worse if OpenAI had decided to market the commercial derivative of ChatGPT without openly beta-testing it first.
With the benefit of failures and design flaws discovered and publicly discussed at the beta stage, OpenAI developers will be in a better position to improve the system in many quick iterations before it becomes widely marketed.
By allowing a large community of enthusiasts to tinker with the beta version, OpenAI has not only accelerated the hype around generative AI but also sparked important, valuable public discussion and constructive criticism. I wouldn’t have been able to write this article if ChatGPT weren’t publicly available.
Even now, there are legitimate use cases
Many curious individuals have already found ChatGPT genuinely useful in their work.
Their examples, of course, are far from exhaustive. Furthermore, as ChatGPT is improved and becomes more reliable, the scope of legitimate use cases will grow.
There might be ways to avoid the falsehood-ridden Internet
Some software developers claim they already have tools to detect AI-generated prose. If those tools prove effective, we may already have the means to reliably detect the patterns by which ChatGPT’s and similar systems’ outputs differ from human writing.
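To give a rough idea of how some of these early detectors reportedly work, here is a minimal sketch of one common heuristic: scoring how “predictable” a text looks to a language model, since machine-generated prose tends to have lower perplexity than human writing. The model choice (GPT-2) and the threshold below are illustrative assumptions, not the method of any particular product.

```python
# A toy perplexity-based detector sketch (illustrative assumptions only):
# AI-generated text tends to look "unsurprising" to a language model,
# i.e. it gets a lower perplexity score than typical human writing.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the inputs as labels makes the model score its own
        # next-token predictions; the returned loss is mean cross-entropy.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def looks_machine_generated(text: str, threshold: float = 40.0) -> bool:
    # The cutoff is a hypothetical placeholder; real detectors combine
    # several signals (e.g. perplexity plus its variance across sentences).
    return perplexity(text) < threshold
```

In practice, heuristics of this kind are easy to defeat with light paraphrasing, which is one reason the reliability of such tools is still debated.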
Yet even before such detection technology is mass-adopted, social media service providers might change their content amplification criteria to amplify verified user content and de-amplify content of questionable origin.
Likewise, search engine providers may alter their content relevancy criteria, so that curated content and its providers are amplified in search results.
Whether the risks will begin to materialise, whether reactive steps will be taken, and for what reason (the providers’ own concerns, public demand or political pressure), remains to be seen.
To sum up
As of 9 January 2023, ChatGPT does not seem to be a trustworthy AI system. You should not use its outputs without reviewing them fully, in all respects. If the application you’re trying to use it for is sensitive, ChatGPT’s outputs must be reviewed by a subject matter expert in the relevant domain.
If you have children or other vulnerable people in your life who are likely to use ChatGPT or similar systems, have a talk and explain these systems’ deficiencies and limitations very clearly, adapting the arguments from this article as necessary.
And if Sam Altman or someone else from OpenAI is reading this, my message to you is simple: your product has great potential, but you should really be more transparent. Be clearer about the deficiencies of the system as it currently stands. Consider providing a prominent warning that users should never, ever, take any ChatGPT output at face value. The disclaimers you currently have are not good enough.
If you like my work, please support me on Patreon.
Comments

EdM, SSCP, CISSP, CIPP/US:
It is really all about what you are asking of it… Accuracy is not something you should expect from what is, in essence, an overpriced encabulator machine. If you try to extract oil from a water spout, you will not get very far.

Operations Advisor at Commission de la construction du Québec | Educational Technology, Project Management:
ChatGPT could become the ultimate tool of a “hypertrophy of propaganda”. Worrying.

Privacy Manager | LGPD Consultant | HACKER RANGERS Partner | IAPP Member | Member of the AP? Security Committee | G20 Member | Speaker | Rock Vocalist | Trail Hiker:
Very good!

Owner & Chief Architect, Inspired | Researcher:
Excellent article pointing out current capabilities and limitations. I have tested ChatGPT in various scenarios. It is impressively fluent and confident in its responses, making them very convincing. I have also found that it is sometimes completely incorrect! It will tend toward populist and average thinking rather than verified fact. It will also make assumptions which are not correct, but present them as fact. The technology has huge potential to improve the interface between humans and AI models, but will need careful improvement to become more reliable.