
How AI is boosting disinformation

Hate and lies are already polluting our digital information ecosystems, but generative AI tools could be about to make things dramatically worse.

“AI” was named the word of 2023 — and for good reason. New tools, if developed and used responsibly, can change the world for the better.

We’ve already seen glimpses of how AI-powered tools are improving access to all kinds of information, as well as healthcare, education, legal and public services for people around the world.

But we must be cautious. This rapidly evolving technology presents grave risks as well as opportunities. In January, the World Economic Forum declared AI-powered misinformation the world’s biggest short-term threat.

Misinformation, disinformation and hate speech are already polluting our information ecosystems — polarizing societies, eroding trust, and ultimately threatening human progress.

The use of AI to help spread this content is nothing new. Disinformation actors have long been deploying AI-powered bots on social media and training AI-powered algorithms to promote hate-filled and misleading content.

Yet high investment and maintenance costs have always limited the scale of disinformation operations — until now.

Cheap, off-the-shelf generative AI tools have lowered barriers to creating and spreading disinformation, both in terms of cost and manpower. Hateful and misleading content can now be churned out with little human intervention, cheaply, and at scale.

What’s more, this content is even harder to detect. AI-generated material leaves few fingerprints, making it difficult for journalists, fact-checkers, law enforcement, or ordinary users to tell it apart from the real thing.
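To see why detection is so hard, consider one common heuristic: scoring how statistically "predictable" a text is to a reference language model. Below is a minimal sketch, assuming the Hugging Face transformers library and the public GPT-2 model; the approach itself is illustrative, not a production detector.

```python
# Illustrative only: perplexity-based screening for AI-generated text.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable `text` is to GPT-2 (lower = more model-like)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the tokens as labels makes the model return its own
        # mean cross-entropy loss over the sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

Machine-written text tends to score lower (more predictable) than human prose, but the two distributions overlap heavily — which is exactly why heuristics like this are unreliable on their own.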

We already see impacts in many areas, from peace and security to human rights. Targeted disinformation — already a potent weapon in any war — is cheaper to make and spread. Disinformation campaigns have been cited as one of the biggest challenges for UN peacekeeping missions — 75% of surveyed UN peacekeepers reported that misinformation or disinformation had impacted their safety and security. The ability to flood online environments in conflict zones with AI-generated disinformation can lead to further violence and destabilization.

The same goes for content that is racist, anti-Semitic, Islamophobic, otherwise xenophobic, or even sexually abusive. In one shocking recent study, Stanford researchers found more than 1,000 exploitative illegal images of children in a prominent open-source database used to train some AI image-generating tools.

Equally disturbing is the rise of AI-generated non-consensual pornographic images. In many cases, these are being spread in a bid to silence female voices — politicians, journalists, activists, and even disinformation researchers.

But they are also cropping up in more everyday settings. Last year, in Spain and in the United States, AI-generated fake nude images of teen girls were found circulating online and through messaging apps. They had been made by the girls’ teen classmates, simply by loading photos into an AI app.

The potential dangers don’t end there. Many researchers are warning of the threat AI-generated disinformation poses to democracies — a massive issue during this bumper election year, when more than 2 billion people around the world are eligible to vote.

In fact, AI-powered voter manipulation is already here. AI tools are already being used to spread plausible-looking deepfakes and other disinformation, in some cases via mock news sites or fake broadcasters — complete with AI-generated news anchors. Almost anyone can create a news outlet that looks like a real channel. This deepfake video technology is being deployed on social media feeds to deceive people with propaganda disguised as news.

Many attempts to sway voters are part of wider efforts to sow confusion and undermine public trust in everything from the media to public institutions to the electoral process itself. Even science — including the scientific consensus around climate change — is under attack.

We can’t afford to go on like this. Many dedicated bodies around the world, including the UN, have long been exploring ways to tackle online harms while robustly upholding human rights. Yet AI tools are evolving so fast that they threaten to overtake that work.

That’s why we must act fast. Governments, civil society, and individual users are demanding urgent action from the developers of AI tools to make their work safer and more transparent.

We need effective guardrails, we need humane solutions, and we need generative AI tools that embrace safety and privacy by design.

The UN is seeking action on several fronts. In October, UN Secretary-General António Guterres established a multidisciplinary, representative AI Advisory Body that recently presented recommendations for strengthening global AI governance, while UNESCO has issued important guidelines on the ethics of AI and how to mitigate potential online harms.

In addition, my team and I are developing a code of conduct for information integrity to help boost societal resilience against disinformation and hate, while robustly upholding human rights.

There are some hopeful signs. It’s encouraging that some AI developers have agreed to watermark and fingerprint AI-generated photos and videos. While the technology is not foolproof, it’s a start. New iterations of watermarking technology should also carefully consider the implications for users’ basic rights.
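For readers curious what watermarking means in practice, here is a deliberately naive sketch of the underlying idea — hiding a known bit pattern in an image’s least significant bits. This is not how production schemes such as C2PA content credentials work (a simple approach like this survives neither compression nor cropping), and the tag used here is purely hypothetical.

```python
# Toy least-significant-bit (LSB) watermark: embed and detect a fixed tag.
# Assumes: pip install numpy pillow
import numpy as np
from PIL import Image

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit tag

def embed(img: Image.Image) -> Image.Image:
    """Overwrite every pixel channel's lowest bit with the repeating tag."""
    px = np.array(img.convert("RGB"))
    flat = px.reshape(-1)
    bits = np.tile(MARK, flat.size // MARK.size + 1)[: flat.size]
    flat = (flat & 0xFE) | bits
    return Image.fromarray(flat.reshape(px.shape))

def detect(img: Image.Image) -> bool:
    """Report whether the lowest bits match the tag almost everywhere."""
    flat = np.array(img.convert("RGB")).reshape(-1)
    expected = np.tile(MARK, flat.size // MARK.size + 1)[: flat.size]
    return ((flat & 1) == expected).mean() > 0.99

marked = embed(Image.new("RGB", (64, 64), "white"))
print(detect(marked))  # True
```

Robust schemes spread the signal across the image and pair it with signed provenance metadata — precisely the kind of design where implications for user rights need to be weighed.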

At the same time, we’ve seen how AI-powered tools themselves are essential allies in the fight against information harms, with many tech companies relying heavily on AI to detect and address harmful content on their platforms.
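As a rough illustration of that moderation pipeline, here is a minimal sketch that triages posts with an off-the-shelf classifier. It assumes the Hugging Face transformers library and the publicly available unitary/toxic-bert model; the threshold is illustrative, and real systems combine many signals and keep humans in the loop for borderline cases.

```python
# Illustrative triage: flag likely-harmful posts for human review.
# Assumes: pip install transformers torch
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def triage(post: str, threshold: float = 0.8) -> str:
    """Route a post: auto-publish, or queue it for a human moderator."""
    result = classifier(post)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
    if result["score"] >= threshold:
        return "flag_for_review"
    return "publish"

print(triage("Have a wonderful day, everyone!"))  # publish
```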

If harnessed for good, generative AI could be a powerful force in our work for information integrity. But this has to happen now. The stakes are far too high. Once the damage is done, it will be too late.

Kajal Singh

HR Operations | Implementation of HRIS systems & Employee Onboarding | HR Policies | Exit Interviews

5 months

Great read. The eventual goal of AI systems is to surpass human accuracy in various tasks, thereby increasing productivity and allowing humans to focus on more complex endeavors. Over time, however, AI models face challenges that lead to decreased accuracy due to evolving data and inherent limitations. Regular maintenance, akin to software DevOps, is essential for both traditional software and AI systems, and involves code refactoring, bug elimination, and continuous improvement. AI systems, though, demand additional processes, including repeated iterations of data gathering, labeling, model training, and repackaging. As a result, the annual maintenance cost for AI systems is estimated to be three to four times that of traditional software, often constituting 50-80% of the initial development cost. This high maintenance cost poses challenges for scaling AI projects, a problem also indicated by recent industry surveys. Organizations and investment professionals must therefore adapt policies and valuation models to account for the unique challenges and costs associated with maintaining AI systems. More about this topic: https://lnkd.in/gPjFMgy7

Mahenoor Yusuf

Founder & CEO of Fact Finders Pro | Tech & AI ambassador | Combat Disinformation | Harvard Alum

8 months

You are absolutely right, Melissa Fleming — we need to join our efforts across borders to combat disinformation. Leading the Fact Finders Pro team, we are on a mission to use AI to develop an online platform that helps people spot disinformation and fake news.

Stephan Darbell

Senior Cyber Threat Intelligence Advisor | DISARM Expert | Foreign Information Manipulation and Interference (FIMI) / Utländsk informationsmanipulation och interferens | Digital Innovation | Strategy | Digitalisation

9 months

AI is a really big threat when it comes to disinformation, but we can stop it, and I know how...


Thanks for sharing your thoughts on this, Melissa. We featured your post in the second edition of our newsletter on mis- and disinformation. You can check it out here:

Alex Armasu

Founder & CEO, Group 8 Security Solutions Inc. DBA Machine Learning Intelligence

9 months

Many thanks for your post!
