AI disinformation: A World Wide War on the World Wide Web
This week, Bletchley Park, former base of the World War Two codebreakers, hosted the UK Government’s first AI Safety Summit. For many British people, Bletchley is a key pillar of our World War Two mythology, a narrative that celebrates the triumph of good over evil.
This significance has not been lost on Rishi Sunak. The Prime Minister has already announced a £100 million investment in a new Safety Institute, to which he hopes to attract “the best and the brightest researchers from around the world”.
AI-generated images will also be covered by the Government’s imminent online safety bill, which will require social media companies to prevent such images appearing on their platforms. Mr Sunak has remarked that “monitoring the risks posed by artificial intelligence (AI) is too important to be left to big tech firms”. While this narrative is designed to be reassuring, I find it naively simplistic.
I do, of course, welcome this investment and the focus the UK Government is placing on this risk. AI represents a real and present threat. We’ve had disinformation for a long time, but, with AI, we have given its creators - rogue states, criminals, extremists and peddlers of conspiracies - a powerful weapon.
AI-generated imagery and video is compelling and becoming increasingly sophisticated in both its speed and its quality. We all tend to trust the content in front of our eyes, yet it is becoming increasingly difficult to distinguish what is real from what is fake.
We are now seeing AI-generated images of children in conflict going viral. One deep-fake image of a baby partially buried in rubble fooled even some journalists, making the front cover of the French newspaper Liberation. Such fakes manipulate people’s otherwise real and justified outrage at the impact and scale of war on civilian populations and trivialise their real suffering. Such is the fog of war.
But as we all know too well, the dark side of this technology is difficult to combat. It is the enemy of trustworthy and factual communications. And it is capable of producing seductive content which can confuse, disempower and even kill.
In this era of immense change, protecting our information environment requires a strategic approach to building the necessary resilience among our citizens and systems.
This is why we cannot leave this to any one government, or even to governments collectively, to solve. We need real global collaboration that draws on a whole-of-society approach – the security services, educators, the media, community organisations, faith leaders, influencers, the creative industries, charities, as well as Big Tech – to create a healthier information environment and build trust in official institutions.
Taking no action is not an option.
Those of us in the marketing and communications industries have a critical role to play.
We must use our skills as content-creators, storytellers and cultural influencers to build narratives and content that is both genuine and compelling. We must build trust in those institutions that deserve it, outcompeting the bad actors and creating persuasive counter-brands to their lies and untruths.
As experts in behaviour change, we can also contribute to changing the way people interact with online content. At Freuds+, we’ve been applying the behaviour-change skills we developed in the pandemic, tackling issues like vaccine hesitancy, to other areas of harmful online behaviour. One example is our work with Meta on its Child Safety campaign. This initiative showed, with unprecedented success, that we can improve critical thinking: educating users about the harmful content they are sharing, liking and commenting on, so that they know how to report it.
The role of communications in tackling this problem is a constant debate: its importance is recognised in principle, but governments, organisations and the national security community often need persuading in practice. That means we need to keep demonstrating the effect that communication is achieving, and how it can be used to tackle the firehose of falsehood and the sewer of disinformation we’re seeing online.
I hope that this week’s AI Safety Summit will be a turning point in the national, and global, discourse on AI safety, and will demonstrate real ambition to take control of our online environment.
Please feel free to get in touch to share and discuss your own views on AI and online safety.