The Incoming Tsunami of AI-Enabled Disinformation

New Era, New Threats

ChatGPT and other generative language models have ushered in an exciting era. For perhaps the first time, the average person has easy access to a bot that delivers human-like text within seconds. These language models will only improve, and at breakneck speed.

A plethora of new solutions will be born. But amid all of this, one question is likely to keep playing on our minds: is this piece of content written by a human?

For a malicious actor running a disinformation operation, this is game-changing. Until yesterday, running a disinformation campaign at scale was complex and depended heavily on human labour.

Now, these language models make it possible to automate the creation of convincing, misleading text at scale, and at very low cost.

Researchers at OpenAI have long been nervous about the technology being misused by bad actors; they documented their concerns in a paper as far back as 2019.

To understand how platforms like ChatGPT can be weaponised, consider an experiment run by NewsGuard, a fact-checking company. In January 2023, NewsGuard fed 100 prompts into ChatGPT, each related to common false narratives around US politics and healthcare. 80% of the responses ChatGPT produced were either false or misleading. Below is one such sample.

[Image: a sample ChatGPT response repeating a false narrative]
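
For a sense of the mechanics, here is a minimal sketch of how such an audit could be scripted against the OpenAI API. This is not NewsGuard's actual methodology: the prompt file, the model choice and the review step are all assumptions for illustration.

```python
# Minimal sketch of a NewsGuard-style audit (illustrative, not their method).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "narrative_prompts.txt" is a hypothetical file: one prompt per line,
# each built around a known false narrative.
with open("narrative_prompts.txt") as f:
    prompts = [line.strip() for line in f if line.strip()]

results = []
for prompt in prompts:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; NewsGuard did not publish one
        messages=[{"role": "user", "content": prompt}],
    )
    results.append((prompt, completion.choices[0].message.content))

# NewsGuard's 80% figure came from human analysts grading the outputs,
# so each response below still needs manual fact-checking.
for prompt, text in results:
    print("PROMPT:", prompt)
    print("RESPONSE:", text[:300])
    print("-" * 40)
```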

Evolution of The Disinformation Landscape

Generative AI text creates new possibilities and brings new changes. Every disinformation campaign typically has three key vectors: Actors, Behaviour and Content, together known as the Disinformation ABC framework, introduced by Camille Francois. Building on this framework, researchers recently mapped out how language models such as ChatGPT can affect the ABCs of influence operations. See below:

[Image: table mapping how language models affect the Actors, Behaviour and Content of influence operations]

So, what are we likely to be faced with? Summarising the table above, we can expect the following:

Actors

  • A larger and more diverse group of propagandists will emerge. With costs driven down, new players from varied backgrounds will foray into the disinformation domain, introducing narratives that have never been seen before.
  • Outsourced firms will gain prominence. Disinformation As A Service (DAS) is highly likely to see an uptick, and newer companies offering it will be born.


Behaviour

  • Automating content production will increase the scale of campaigns. What once took complex effort and time is likely to be achieved within significantly reduced timelines.
  • Existing behaviours will become more efficient. Tactics that have historically been expensive, such as cross-platform testing, are likely to become cheaper.
  • Novel tactics will emerge. Content is likely to be more personalised and produced in real time, possibly through one-on-one chatbots.


Content

  • Messages will be more credible and persuasive. Generative models are likely to significantly improve the quality of messaging, particularly for propagandists who lack cultural knowledge of their target.
  • Propaganda will become less discoverable. Current campaigns are usually identified because the same text is copy-pasted across many accounts. With language models, each piece of text can be distinct, making disinformation campaigns much harder to identify (see the sketch after this list).
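
To make the discoverability point concrete, here is a minimal sketch of the copy-paste heuristic investigators rely on today, assuming you have collected a list of post strings. The sample posts and the 0.9 threshold are illustrative; the takeaway is that a paraphrased, LLM-style variant scores low and slips past the check.

```python
# Sketch of copy-paste detection via pairwise text similarity (illustrative).
from difflib import SequenceMatcher
from itertools import combinations

def normalise(text: str) -> str:
    # Lowercase and collapse whitespace so trivial edits don't hide a copy.
    return " ".join(text.lower().split())

def find_copy_paste_clusters(posts: list[str], threshold: float = 0.9) -> list[int]:
    """Return indices of posts that are near-duplicates of another post."""
    flagged: set[int] = set()
    for (i, a), (j, b) in combinations(enumerate(map(normalise, posts)), 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            flagged.update({i, j})
    return sorted(flagged)

posts = [
    "Vote early, the polls close at noon!",
    "vote early,  the polls CLOSE at noon!",        # copy-paste variant: caught
    "Remember to cast your ballot before midday.",  # paraphrase: missed
]
print(find_copy_paste_clusters(posts))  # -> [0, 1]
```

The matching can be tightened, but once every message is uniquely generated, detection has to shift from textual similarity to behavioural signals such as posting patterns and account networks.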

So, What’s Next?

In my view, our starting point should be acceptance. We have to begin by accepting the very real possibility that disinformation operations will reach a whole new scale and complexity.

Most importantly, for companies and influential persons, it is not a question of ‘If’ but ‘When’.

While not all disinformation campaigns will be effective, it would not be wise for companies or influential persons to play the waiting game. When targeted disinformation arrives, those who are prepared will be better off.

Companies and individuals who have a fair understanding of their exposure to disinformation, are working proactively to identify false propaganda against them, and have a response mechanism in place are already well-placed to deal with disinformation when it comes their way. A bare-bones sketch of such proactive monitoring follows.
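
As a starting point, proactive identification can be as simple as watching public posts for your name or brand and routing hits to a human reviewer. The sketch below assumes a plain list of post strings and hypothetical watch terms; a real deployment would sit on platform APIs with proper alerting.

```python
# Sketch of proactive brand-mention monitoring (all names are illustrative).
WATCH_TERMS = {"acme corp", "acme ceo"}  # hypothetical brand terms

def flag_mentions(posts: list[str]) -> list[str]:
    """Return posts that mention a watched term, for human review."""
    return [p for p in posts if any(term in p.lower() for term in WATCH_TERMS)]

stream = [
    "BREAKING: Acme Corp caught dumping waste (no source given)",
    "Nice weather today",
]
for post in flag_mentions(stream):
    print("REVIEW:", post)  # a reviewer decides if this is false propaganda
```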

Companies still on the fence, debating the 'If' rather than working on the 'When', are likely to feel the full force of the incoming tsunami of AI-enabled disinformation when it hits them.

