AI-Powered Disinformation
Victoriano Donato Cabiles
I cater to the needs of the 4% of LinkedIn members 55+ years | Let me help with your personal/professional issues | I am your trusted confidante | Reach out to me | [email protected]
AI is here, and its use is being embraced by more and more actors, including both protagonists and antagonists in all walks of life and sectors of society. In the hands of good actors, AI is a tool to help make work and daily life easier, and the world a better place. However, in the hands of villains, AI poses a significant threat as it can be used to spread disinformation.
The Merriam-Webster online dictionary and thesaurus defines disinformation as “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth,” and lists propaganda as a synonym. Propaganda is similarly defined as “ideas or statements that are often false or exaggerated and that are spread in order to help a cause, a political leader, a government, etc.”
In short, malicious actors’ use of AI as a weapon of disinformation in our digital world is bad – really bad. Here are some ways AI-fueled disinformation can cause real harm and damage (thanks #ChatGPT and #GoogleGemini):
1. Fake Videos or Deepfakes: AI can generate realistic fake videos. For example, a deepfake of the #AustralianPrimeMinister making racist or sexist remarks they never made, posted on LinkedIn during election week. If audiences cannot tell a deepfake from a real video, this could sow distrust and ruin a political career.
2. Social Media Bots: AI-powered bots on social platforms can simulate human behavior, such as liking, sharing, and commenting on posts, to make disinformation appear legitimate, sowing confusion or lending false credibility to a low-quality but pricey brand.
3. Virtual Agents: Similar to chatbots, AI virtual agents can engage with users online, spreading false information or promoting specific narratives. These virtual agents can be programmed to respond to queries, engage in conversations, and disseminate propaganda on various platforms.
4. Phishing Emails: AI can write convincing emails that impersonate legitimate sources. These emails can be used to trick consumers into revealing personal information or clicking on malicious links.
5. Automated Text Generation: AI algorithms can generate large volumes of text that mimic human writing style and syntax. This can be exploited to create fake news articles, opinion pieces, or social media posts designed to deceive readers and spread false information against competitor brands, products, or organizations.
6. Generative Adversarial Networks (GANs): AI can generate highly realistic images of nonexistent people, objects, or events. These images can then be used to create fake profiles, fabricated evidence, or visual content that supports disinformation campaigns or other propaganda.
7. Fake News Websites: AI can be used to create fake news websites that look like real news outlets. If undetected, these websites can publish fabricated stories to mislead people and manipulate public opinion on, for example, government intervention programs.
8. Trading Manipulation: AI algorithms can be used to manipulate financial markets by spreading false information or rumors that can lead to market volatility and allow attackers to profit from artificially induced price fluctuations.
9. Recommendation Manipulation: As with trading manipulation, AI algorithms can be used by social media platforms and news websites to recommend misleading content to users in order to generate more clicks and engagement.
10. Sentiment Manipulation: AI algorithms can be used to game sentiment analysis systems – the tools that gauge public opinion from social media posts and other online content. Floods of AI-generated positive or negative content can skew sentiment results and manipulate public perception.
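To see why that last point works, here is a minimal, purely illustrative sketch (not any real platform's system): a toy lexicon-based sentiment scorer, and how a flood of templated, machine-generated praise flips the aggregate "public opinion" it reports. All names and word lists here are invented for the example.

```python
# Toy lexicon-based sentiment scorer (illustrative only).
POSITIVE = {"great", "love", "amazing", "excellent"}
NEGATIVE = {"bad", "terrible", "awful", "broken"}

def score(post: str) -> int:
    """Return +1 for net-positive wording, -1 for net-negative, 0 otherwise."""
    words = post.lower().split()
    balance = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (balance > 0) - (balance < 0)

def aggregate_sentiment(posts) -> float:
    """Mean score across posts: the 'public opinion' a dashboard would report."""
    return sum(score(p) for p in posts) / len(posts)

# Genuine opinion: mostly negative about a product.
organic = [
    "This product is terrible",
    "Awful battery and a broken screen",
    "I love the design",
]

# An attacker floods the feed with templated AI-generated praise.
flood = ["Amazing product, love it, excellent value"] * 20

print(aggregate_sentiment(organic))          # negative overall
print(aggregate_sentiment(organic + flood))  # flipped to strongly positive
```

The genuine posts alone average out negative; add twenty copies of generated praise and the same scorer reports strong approval. Real sentiment systems are more sophisticated, but the underlying vulnerability is the same: they count content, and AI makes content cheap to mass-produce.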
These are just a few examples. As AI technology continues to develop, even more sophisticated disinformation campaigns will emerge. It is therefore very important to be highly critical of information disseminated online and to verify its sources, especially when that information could be damaging or harmful.
Victoriano D. Cabiles