ChatGPT, go fight disinformation!
Dan Sobovitz
Founder @ Spreadable.io | External Relations, Public Communications, Editorial
Tl;dr What if we used #ChatGPT to detect and reply to #disinformation posts and #FakeNews?
By now, the scale and gravity of online disinformation has become fairly well known in Western societies. Not only do we understand the risk of increasing polarisation and social tensions, we have actually seen its impact in action: distorting democratic elections, spreading conspiracy theories, and even triggering violent attacks. Yet few effective solutions have been presented so far. We have tried passing the responsibility to the platforms, which in turn hire 'armies' of fact-checkers and find themselves in the awkward position of having to judge right from wrong or true from false. This has not only created difficult ethical questions but has simply proven ineffective.
The spread of fake news is by definition faster than its (human) tracking. Bot farms are easily scalable whereas human moderation is not. And most importantly, by the time we refute a false argument, it has often already spread many times around the world.
This is an asymmetric information battle: the malign (often state) actors tend to use generic automated accounts, AKA bots, whereas on the defence we rely on humans to monitor, assess, and address. This battle is therefore doomed to fail.
The advancement of #AI in general, and the introduction of OpenAI's incredibly performant ChatGPT in particular, is opening some new opportunities. The new tool is capable of producing human-sounding language at a level never seen before. More importantly, as opposed to previous concerns about AI bias and prejudice (simply due to the fact that AI relies on large amounts of text in which human bias already exists), ChatGPT has so far maintained a very responsible and trustworthy editorial line.
Yes, I do recognise the risk of delegating the defense of truth to machines. And yes, I am also torn between being fascinated and enthusiastic about ChatGPT on the one hand, and terrified by its potential harmful implications on the other. What if tomorrow the algorithm changed and so did its answers? How do we go about regulating such a powerful tool? And is it really responsible to delegate to an AI machine whose reasoning we do not even understand?
Yet, given that no other effective solution has been presented so far to the growing threat of disinformation to our democratic system, I encourage organisations addressing this problem to start an incremental experiment in a controlled environment, testing the quality of ChatGPT's answers and whether it is capable of providing convincing replies to false arguments.
Following my intuition that ChatGPT is on the 'right side of history' and would be 'willing' to defend the truth, I started my own small-scale experiment with it. Here's the continuation of the query I posted above.
Were such a larger-scale experiment to succeed, we could imagine a system that monitors posts for disinformation (various versions already exist), triggers the chatbot to produce a reply, and posts it. Eventually the entire process could be automated, but given the novelty of the technology and the associated risk, human moderation and control should probably remain until we have gained enough experience and trust in this new environment.
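To make the idea concrete, here is a minimal sketch of that three-step pipeline in Python. Everything in it is an assumption for illustration: the keyword-based detector stands in for whichever monitoring service an organisation already uses, and the reply generator is a stub where a real system would call a large-language-model API such as OpenAI's. Crucially, nothing is auto-posted; drafts land in a human review queue, as argued above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class DraftReply:
    post: Post
    reply_text: str
    approved: bool = False  # a human moderator flips this before posting

def flag_disinformation(post: Post) -> bool:
    """Step 1 (placeholder): flag suspicious posts.
    A real deployment would plug in one of the existing
    disinformation-monitoring tools mentioned in the article."""
    suspicious_markers = ["miracle cure", "they don't want you to know"]
    text = post.text.lower()
    return any(marker in text for marker in suspicious_markers)

def draft_counter_reply(post: Post) -> DraftReply:
    """Step 2 (stub): draft a corrective reply.
    In practice this is where the chatbot API would be called
    with the flagged post as context."""
    reply = (f"Fact-check requested for post {post.post_id}: "
             "the claim above is unverified; please see reliable sources.")
    return DraftReply(post=post, reply_text=reply)

def run_pipeline(posts: List[Post]) -> List[DraftReply]:
    """Step 3: queue drafts for human review rather than posting directly."""
    review_queue: List[DraftReply] = []
    for post in posts:
        if flag_disinformation(post):
            review_queue.append(draft_counter_reply(post))
    return review_queue
```

The design choice worth noting is the `approved` flag: it keeps the human moderator in the loop by default, so automation can be dialled up gradually as trust in the system grows.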
So what do you think, could AI indeed pick up the challenge of counter-disinformation? This question I do not intend to ask ChatGPT. Not yet, anyway...
#Disinformation is a serious problem with real-world consequences, from undermining democracy to spreading #misinformation that can harm individuals and communities. #ChatGPT is trained on a massive dataset of text, which enables it to generate human-like text that is informative, engaging and convincing. This makes it ideal for generating counter-narratives and fact-checking to counter disinformation. Imagine a scenario where a disinformation campaign spreads false information about a particular topic. With ChatGPT, we can quickly generate a comprehensive article that debunks the false claims and provides accurate information. This article can then be shared on social media, news websites and other platforms to reach a wide audience and help to stop the spread of disinformation. ChatGPT can also detect patterns in the language used in disinformation campaigns. By analyzing a large dataset of disinformation, ChatGPT can learn to identify the common tactics and strategies used by such campaigns. While ChatGPT is a powerful tool, it is not a silver bullet against disinformation. Disinformation requires a multifaceted approach including technical solutions, media literacy education, and changes to social and political systems.