How I Faked a Wind Farm in London Using Nothing but AI
Will there ever be a wind farm in Chiswick? Of course not; it’s an absurd premise. Chiswick is a London suburb only five miles from the city centre, whereas wind farms need vast, open, exposed spaces.
But, did I use AI to manufacture a wind farm in Chiswick, luring in national journalists while causing chaos behind the scenes?
Actually… yes. Buckle up, this one’s a crazy ride.
Why would I even do this?
Much has been written about the potential for AI to be used by bad actors, be it cybercriminals creating malicious code or phishing emails, or state-backed disinformation campaigns using AI content at a national level.
What hasn't been explored is the potential for AI to deliver tailored disinformation campaigns at a local level, influencing local communities with falsehoods on matters that they really do care about.
Whilst that scenario might not seem all that damaging in isolation, in the UK the outcome of the General Election is typically decided by about 75 'swing seats'. Using AI to target these - with individually tailored campaigns - has the potential to move the electoral dial.
So, is this a threat we should be worried about? In the interests of science, I decided to find out. And in the interests of having fun with it, I decided to make my chosen premise completely, and utterly, ridiculous.
Disclaimer - this was a personal interest project and not connected with my day job in cyber security.
Let me introduce you to Chiswick Wind Farm.
Step One - create the story
The rule was this: everything had to be AI generated. As such, my overall time investment was well under two hours. Tools included:
The Generative Fill feature of Adobe Firefly (free, all in the cloud) is incredible - just type what you want, where you want it, and it literally fills in the blanks.
I used the same approach with 3D satellite imagery from Google Earth - a quick prompt into Adobe Firefly gave me an aerial view of a wind farm next to the River Thames, which was used as a Twitter header and as the website banner at chiswickwindfarm.com
ChatGPT then wrote all the content based on a series of prompts, the output of which I pasted into the Chiswick Wind Farm website.
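To put the effort in perspective, the content step doesn't even need a chat window - it could be scripted. The sketch below is purely illustrative and is not how this project was run (I simply pasted ChatGPT output into the site); it assumes the openai Python package, an API key in the environment, and a model name chosen only as an example.

```python
# Purely illustrative sketch, not the method used in the article.
# Assumes: `pip install openai`, OPENAI_API_KEY set in the environment,
# and a model name picked for illustration only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def draft_section(section: str, locality: str = "Chiswick") -> str:
    """Ask the model for one page of plausible-sounding project copy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You write upbeat community-consultation copy "
                        "for local infrastructure projects."},
            {"role": "user",
             "content": f"Write the '{section}' page for a proposed "
                        f"wind farm in {locality}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for section in ("About the project", "FAQ", "Community consultation"):
        print(draft_section(section), "\n")
```

The point is not the code itself - it's that a couple of dozen lines would be enough to churn out a whole site's worth of copy, which is exactly the asymmetry discussed later.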
Step Two - create legitimacy
At this point, I bought Chiswick Wind Farm some Twitter followers so it didn't look like the account had just appeared out of thin air. 1,000 followers cost £9 - I'm not sure whether that's cheap or not - but to a casual observer it gave the project some kind of legitimacy.
Next came following and retweeting some high-profile accounts in the renewable energy space. It happened to be ‘Wind Energy Week’, and the UK trade association for renewable energy was pushing the #WEW23 hashtag pretty hard. By using the hashtag, Chiswick Wind Farm's posts were retweeted by Renewable UK - and so gained the first stage of legitimacy.
Step Three - amplification
This is where it got interesting. Jeremy Vine is a well-known TV news personality, having presented the flagship BBC news programme Newsnight and the investigative documentary series Panorama. People trust what he has to say.
Jeremy Vine tweeted about Chiswick Wind Farm, and in doing so broadcast the website to his 786,000 followers.
With such a credible source amplifying the message, Chiswick Wind Farm was soon inundated with messages via Twitter and email.
ChatGPT then wrote a completely fabricated research paper collating the results of a fake consultation with 380 local residents. Publishing this was perhaps the final straw for some residents, who were outraged they hadn’t been consulted and demanded to see the results in full.
Reality Check - defence kicks into gear
By this point I was pretty shocked by how easy this had been. I had created a wind farm, people believed it to be real, and momentum was building. It had cost me about an hour of effort and £10 in fake Twitter followers.
But thankfully, someone stepped up. A local journalist at chiswickw4.com started emailing to ask for details. Actual details - the entity behind the project, and the specific names and groups that should have been consulted. The email replies generated by ChatGPT were no match for a smart, connected and motivated real human being, and the whole premise was debunked in an article declaring the project a hoax.
What can we learn from this?
Firstly, it took a local journalist to respond effectively and shut down the fake wind farm. In this scenario there is no substitute for local knowledge and community connection, and while the news industry continues to evolve around eyeballs and clicks, it was the traditional approach that defeated AI here. If a town lacks strong local media and a passionate community, might it be more susceptible to this kind of operation?
Second, it is shockingly easy to create and amplify disinformation using AI - and AI can enable campaigns tailored to local communities and repeated many times over to gain scale. Based on this exercise, I would estimate that two or three people could easily run tailored (and far more effective) campaigns against the 75 swing electoral constituencies in the UK.
Finally, this kind of operation, much like a cyber attack, can be considered asymmetric. The effort required to counter disinformation far exceeds the effort required to deliver it, particularly where AI is involved - and there will likely be a natural limit to what those seeking to defend the truth can achieve.
With a more believable premise (a wind farm in Chiswick is pure absurdity) and messaging aimed at hot-button or divisive local issues, a localised approach to disinformation could start to have a genuine impact.
Now I'm off to buy that journalist a coffee.