Generative AI and Disinformation: A Two-part Series
By: Rosyl Saldo, 02 August 2023, Ho Chi Minh City, Vietnam

I am writing this article after reading lots of Pocket stories on ChatGPT, large language models and artificial intelligence. A quick disclaimer: I didn’t use any “smart AI writing assistant” in writing this article, something I sense has become an unspoken paranoia among bloggers, since readers may or may not suspect that an article was written from an AI prompt. At the very least, I used Grammarly for grammar checking. The few close people who know me will recognize the tone of my writing and will know right away that it’s me, a human being, speaking in this article. Well, I guess times have indeed changed.
By now everyone knows what generative AI is. In a nutshell, it’s a type of artificial intelligence that can create new content, such as text, images, audio or video, based on data and algorithms. The exponential rise in the popularity of AI over the years, especially with the advent of ChatGPT, has shown that AI is not only a powerful tool for innovation and creativity, but also a potential weapon for deception and manipulation. There are plenty of useful tasks that AI technology can help with, which would take an entire article to cover, so we’ll save that for another time. Unfortunately, though, AI techniques have also been used to create and disseminate disinformation: the deliberate spread of false or misleading information that aims to influence people’s beliefs, opinions or actions.

Disinformation, or fake news (as we call it colloquially in the Philippines), is not a new phenomenon, but in recent years AI has made it easier, faster and cheaper to produce and distribute such content. Moreover, AI has enabled a new form of disinformation that is more effective and dangerous than ever before: targeted disinformation. Targeted disinformation uses generative AI to produce and disseminate disinformation that is tailored to specific groups or individuals, based on their profiles, preferences or vulnerabilities. By exploiting people’s cognitive biases, emotions and trust, targeted disinformation can manipulate them in subtle and sophisticated ways, without them even realizing it.

I had the chance to learn more about this kind of disinformation when I took a deep dive into the Cambridge Analytica and Facebook ruse five years ago, when I was starting out in the data privacy field. Cambridge Analytica was said to have harvested Facebook data on tens of millions of Americans without their knowledge to build a “psychological warfare tool,” which it unleashed on US voters to help elect Donald Trump as president. The data was gathered from an app that paid users to answer a personality quiz, which then gave access to their Facebook accounts and those of their friends, and the rest is history. It was an elaborate, underhanded process, done back then by humans in an agency.

Now, targeted disinformation is more sophisticated, and hence more difficult to detect and counter, because it can evade traditional methods of verification, fact-checking and moderation, and blend in with authentic content. Generative AI can be used to create realistic and convincing content that mimics the style and format of legitimate sources, such as news articles, social media posts or videos. There’s no need to sign up for anything, because it’s right in our faces, like a special dinner served on a hot plate. It’s part of our everyday life as we scroll through our phones, flowing freely through the vast World Wide Web, and we’re hardly aware of it.

People in general post a lot of content to social media and other platforms, which makes it easy to gather data for a disinformation campaign. This is, in effect, a free lunch for those who intend to use data to purvey the kind of information they need people to accept and believe in. After building profiles of various groups of people in a country, they can train a generative AI system to produce content that influences those targets in complex and, most of the time, subtle ways.

For one, fake news articles already sound so much like legitimate news that it takes real effort to distinguish between the two; the relevance, fine-tuning and tone are too good to be true. These fabricated stories mimic the style and format of legitimate news sources to spread false and biased information on political events and social issues. Eventually, they can have serious consequences for society as a whole, as they can undermine trust in institutions, polarize societies, incite violence and spread misinformation.

Another is the use of bots, automated accounts that emulate human behavior online. These bots are deployed to amplify or suppress certain narratives, influence public opinion or disrupt online discussions. They’re not even humans typing comments in a dark room in some unknown place; they’re AI bots that can be a hundred times faster and smarter than humans. And then there are the more advanced deepfakes: synthetic videos or images that show people doing or saying things they never did or said, such as impersonating or defaming politicians, celebrities or other influential figures. It’s a scary world out there, not knowing which is real and which is fake. Even when something is real, we can’t know how much of it came from real sources and how much from AI-generated ones.

“We’ve been pretty well tricked by very low-quality content. We are entering a period where we’re going to get higher-quality disinformation and propaganda. It’s going to be much easier to produce content that’s tailored for specific audiences than it ever was before. I think we’re just going to have to be aware that that’s here now,” says Kate Starbird, an associate professor in the Department of Human Centered Design & Engineering at the University of Washington.

End of Part 1

Working on Part 2: what to do about generative AI and the potential for targeted disinformation.