The Incoming Tsunami of AI Enabled Disinformation
Varun Kareparambil
Crafting Tailored Security Solutions for UHNWIs & Corporations | Creator of AI ThreatScape Newsletter
New Era, New Threats
ChatGPT and other generative language models have ushered in an exciting era. For arguably the first time, the average person has easy access to a bot that produces human-like text within seconds. These language models will only improve, and they will do so at breakneck speed.
A plethora of new solutions will be born. But amid all of this, one question is likely to keep playing on our minds: was this piece of content written by a human?
For a malicious actor running a disinformation operation, this is game-changing. Until recently, running a disinformation campaign at scale was complex and still depended heavily on human labour.
Now, these language models make it possible to automate the creation of convincing, misleading text at scale, and at a fraction of the cost.
Researchers at OpenAI have long been wary of the technology being misused by bad actors; they documented these concerns in a paper as far back as 2019.
To understand how platforms like ChatGPT can be weaponised, consider an experiment run by NewsGuard, a fact-checking company. In January 2023, NewsGuard fed 100 prompts into ChatGPT, each relating to common false narratives around US politics and healthcare. 80% of the responses ChatGPT produced were either false or misleading.
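To make this concrete, below is a minimal sketch of how such a probe could be automated. It assumes the OpenAI Python client (v1+); the prompt file, model name, and output format are illustrative rather than NewsGuard's actual methodology, and judging whether a response repeats or debunks a narrative is still left to a human fact-checker.

```python
# Minimal sketch of a NewsGuard-style probe: send prompts built around
# known false narratives to a chat model and save the responses for
# human review. Assumes the `openai` package (v1+) with OPENAI_API_KEY
# set in the environment; file names and the model are placeholders.
import csv

from openai import OpenAI

client = OpenAI()

# Hypothetical input file: one prompt per line, each asking the model
# to produce content promoting a documented false narrative.
with open("false_narrative_prompts.txt") as f:
    prompts = [line.strip() for line in f if line.strip()]

rows = []
for prompt in prompts:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    rows.append({"prompt": prompt,
                 "response": completion.choices[0].message.content})

# Write everything out for manual fact-checking; the sketch does not
# attempt to score the responses automatically.
with open("responses_for_review.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["prompt", "response"])
    writer.writeheader()
    writer.writerows(rows)
```

Even a simple loop like this illustrates the asymmetry: generating hundreds of narrative-specific texts takes minutes and pennies, while verifying each one still takes human time.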
Evolution of the Disinformation Landscape
Generative AI text opens new possibilities and brings new changes. Every disinformation campaign typically has three key vectors: Actors, Behaviour, and Content, together known as the Disinformation ABC framework, introduced by Camille François. Building on this framework, researchers recently mapped out how language models such as ChatGPT can affect the ABCs of influence operations.
So, what are we likely to be faced with? Summarising from their analysis, we can expect the following:
Actors
Language models drive down the cost of producing propaganda, lowering the barrier to entry and likely drawing in new and more numerous actors, including smaller groups that previously lacked the resources for large campaigns.
Behaviour
Content generation can be automated, allowing campaigns to run at far greater scale and enabling new tactics, such as personalised or real-time generated messaging.
Content
Messaging can become more persuasive and more varied; because the text no longer relies on copy-paste templates or translated boilerplate, it also becomes harder to detect.
So, What’s Next?
In my view, our starting point should be acceptance. We have to begin by accepting the very real possibility that disinformation operations will reach a whole new scale and complexity.
Most importantly, for companies and influential persons, it is not a question of ‘If’ but ‘When’.
While not all disinformation campaigns will be effective, it would not be wise for companies or influential persons to play the waiting game. When targeted disinformation strikes, those who are prepared will be better off.
Companies and individuals who have a fair understanding of their exposure to disinformation, work proactively to identify false propaganda against them, and have a response mechanism in place are already well-placed to deal with disinformation when it comes their way.
Companies that are still on the fence, thinking about the ‘If’ rather than working on the ‘When’, are likely to feel the full force of the incoming tsunami of AI-enabled disinformation when it hits them.