The Rise of Deepfakes…It’s Time For a Reality Check
Jacques van Wyk | MD - JGL Forensic Services | 22 May 2024


You may or may not have heard of the Gorilla Experiment.

Volunteers were told to keep track of how many times basketball players in different coloured shirts tossed a basketball. While they did this, someone in a gorilla suit walked across the basketball court, in plain view, yet many of the volunteers failed to notice the beast.

What the invisible gorilla study shows is that, if we are paying very close attention to one thing, we often fail to notice other things in our field of vision—even very obvious things.

The tendency of our brains is to slightly close the curtains on the windows of our minds and focus only on the thing we expect to see/hear/experience at that moment.

The experiment popularized a phenomenon of human perception known in the jargon as “inattentional blindness”.

In itself, selective perception is not a bad thing. Quite the opposite, in fact. It’s a critical function of our brains, helping us filter out unhelpful data and focus on the signals we need to register. It’s why an exhausted new mother will sleep through someone vacuuming in the next room but wake up the second her baby makes a noise.

But, as with most things in life, you can have too much of a good thing.

The problem arises when selective perception collides with biased reference points, such as those fed to us via social media. (Think of Facebook or Instagram posts, for example, that only show us pictures of our friends living lives that seem so much more glamorous and exciting than our own.)

This media melting pot can wreak havoc with our beliefs, creating a paradise for fake news.

Fake news is so dangerous because it’s so seductive. It grabs our attention and prejudices our thinking by providing information that is not only factually incorrect but actively and strategically chosen to affect our cognitive processes and influence the decisions we make.

Even more insidious, however, is fake news’ evil big brother, deepfakes.

Deepfakes emerged as the unplanned and unwanted lovechild of generative artificial intelligence (GenAI) and deep learning. The name comes from the ability of deep neural networks to create fake versions of real, existing videos, images or audio material. They look so realistic – and are becoming even more so as the technology develops – that it’s sometimes nearly impossible to tell them apart from the real thing.
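To make the mechanics less mysterious, here is a toy sketch of the classic face-swap architecture used by many deepfake tools: a shared encoder learns features common to two faces, and a separate decoder per identity reconstructs that identity's face, so feeding person A's encoding into person B's decoder produces the swap. The weights below are random and untrained – real systems train these networks on thousands of frames – so this illustrates only the structure, not a working forgery.

```python
import numpy as np

# Toy illustration of the shared-encoder / twin-decoder face-swap idea.
# All weights are random stand-ins for trained layers.
rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Random weight matrix standing in for a trained layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1

IMG, LATENT = 64 * 64, 128          # flattened 64x64 grayscale frames
encoder = linear(IMG, LATENT)       # shared across both identities
decoder_a = linear(LATENT, IMG)     # reconstructs person A's face
decoder_b = linear(LATENT, IMG)     # reconstructs person B's face

def encode(face):
    # The encoder captures pose and expression, independent of identity.
    return np.tanh(face @ encoder)

def swap_a_to_b(face_of_a):
    """Encode A's expression/pose, then decode it as B's face."""
    return encode(face_of_a) @ decoder_b

fake_frame = swap_a_to_b(rng.standard_normal(IMG))
print(fake_frame.shape)  # one forged frame, flattened 64x64
```

The key design point is that the encoder is shared while the decoders are not: that is what lets one person's expressions drive another person's face.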

The technology was originally used to great (and harmless) effect in the entertainment industry. The problems started when people started using it to make it appear as though politicians and other well-known people were saying or doing things they never actually said or did.

This kind of content is dangerously misleading as it can have a profound influence on public opinion – even to the point of influencing the outcome of elections or inciting tensions between countries.

Global verification platform Sumsub reports that the number of deepfakes detected in the first quarter of 2023 was 10% higher than in the whole of 2022, with most originating in the UK. More worrying, just over 50% of all online misinformation comes from manipulated images.

The problem is set to escalate throughout 2024, as major elections take place in the UK, USA, EU and right here in South Africa.

In January this year, the New Hampshire Attorney General's Office was called on to investigate reports of an apparent robocall that used artificial intelligence to mimic President Joe Biden's voice. The message reportedly told local residents not to vote in the primary election that was set to happen later that month.

Only a few days later, we read reports that sexually explicit deepfake images of Taylor Swift were circulating widely on social media.

Actor Tom Hanks was also deepfaked for an ad for a dental plan that he was apparently endorsing. He later took great pains to assure people he had nothing to do with the ad or the plan.

Popular radio personality Zoe Ball suffered a similar fate earlier this year when deepfake images of her promoting a fraudulent crypto investment website started circulating.

And here at home, SABC TV News anchor Bongiwe Zwane's voice was used in an automated telephone message asking people to donate money to a non-existent foundation.

Of course, politicians and other famous people aren’t the only ones at risk. Businesses and private individuals are easy pickings for criminals who use deepfakes to help them commit fraud.

Harry Potter fans might understand a reference here to Polyjuice, an advanced potion which allows the drinker to assume the physical appearance of another human being. If only this ability were limited to fantasy novels.

Unfortunately, the real-life ability to look and sound like anyone, especially people who are authorised to approve payments, gives fraudsters almost unlimited opportunities to exploit weak internal procedures and extract vast sums of money almost at will.
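One of the simplest defences against this kind of impersonation is procedural, not technical: never let a single channel authorise a large payment. The sketch below shows the idea – high-value requests require confirmation over a second, independently initiated channel. The threshold and function names are hypothetical, not drawn from any real system.

```python
# Minimal sketch of an out-of-band verification control.
# HIGH_VALUE_THRESHOLD is a hypothetical limit in local currency.
HIGH_VALUE_THRESHOLD = 50_000

def approve_payment(amount: int, callback_confirmed: bool) -> bool:
    """Approve only if a high-value request was confirmed out of band.

    callback_confirmed should be True only when staff have phoned the
    requester back on a number from the company directory -- never on a
    number supplied in the request itself, which a fraudster controls.
    """
    if amount < HIGH_VALUE_THRESHOLD:
        return True
    # Even a flawless deepfake video call fails this check, because the
    # confirmation call is initiated by us, on a known-good number.
    return callback_confirmed

print(approve_payment(80_000, callback_confirmed=False))  # prints False: blocked
```

The point of the design is that the fraudster never gets to choose the verification channel, so a perfect imitation of a voice or face is not enough.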

No wonder, then, that the World Economic Forum ranks deepfakes as one of the most worrying uses of AI, and disinformation one of the top risks in 2024.

Ironically, the democratisation of information and the increasing affordability of internet-connected devices mean that creating deepfake audio or video is relatively cheap and ridiculously easy.

You don’t need advanced IT knowledge – you don’t even need advanced software.

A quick Google of “how to make a deepfake” reveals a mind-blowing number of websites, apps and YouTube videos obligingly filled with helpful information.

In May last year, Russian state-owned media outlet SPUTNIK International posted a series of Tweets attacking the Biden administration. Each one prompted a well-written response from an account called CounterCloud, and sometimes included links to newspaper articles or websites.

This in itself is not unusual, but here’s what is:

Everything – from the Tweets to the articles, and even the journalists who wrote them – was created entirely by artificial intelligence algorithms.

The mastermind behind this “experiment” goes by the name of Nea Paw, who did it to highlight the danger of mass-produced AI disinformation.

The entire campaign cost in the region of $400.

The wide availability and low prices of many generative AI tools make creating sophisticated information campaigns fast, easy and cheap.

“I don't think there is a silver bullet for this,” says Paw, “just as there is no silver bullet for phishing attacks, spam, or social engineering.”

There are things we can do to mitigate the damage – education, making GenAI systems that block misuse, or equipping browsers with AI-detection tools, for example.

“But I think none of these things is really elegant, cheap or particularly effective,” says Paw.

It’s a truly worrying situation; the implications of deepfake technology and the manipulation of data are both complex and frightening.

Experts predict that by 2025, around 8 million deepfakes will be circulating online, and the numbers are expected to double every 6 months.
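Taken at face value, those figures imply startlingly fast growth. A quick back-of-the-envelope calculation – using the predicted numbers above, not observed data – shows where doubling every six months leads:

```python
# Back-of-the-envelope projection from the predicted figures:
# roughly 8 million deepfakes circulating by 2025, doubling every 6 months.
def projected_deepfakes(months_ahead: int,
                        initial: int = 8_000_000,
                        doubling_months: int = 6) -> int:
    return int(initial * 2 ** (months_ahead / doubling_months))

print(projected_deepfakes(6))   # 16,000,000 after six months
print(projected_deepfakes(24))  # 128,000,000 two years on
```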

As AI algorithms advance, the line between genuine and fake or manipulated content becomes ever blurrier, and there are serious implications for people, privacy, and trust itself.

There is an urgent need for sophisticated detection software and tougher, zero-tolerance legislation. This is easier said than done: while it’s important to have robust protection in place, it’s equally important not to stifle useful, informative and entertaining GenAI content.

It’s obvious there needs to be balance, but it’s also obvious things cannot keep progressing in the same way they have been.

Education and awareness are key, as are more advanced deepfake detection tools. These must be backed up by governmental support – we need updated legislation that promises severe consequences for people misusing the technology.
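Detection tools are one half of the picture; provenance is the other. The sketch below shows the bare-bones version of the idea: a publisher releases the SHA-256 digest of the original media alongside it, and anyone can check that the copy they received is bit-for-bit unaltered. (Content-credential schemes such as C2PA build on this with signed metadata; the byte strings here are made up, standing in for real media files.)

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of a media file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

# The publisher computes and releases this digest with the original media.
original = b"frame-0001-original-broadcast-bytes"
published_digest = fingerprint(original)

# Any edit, however subtle, changes the digest completely.
tampered = original + b"-subtle-deepfake-edit"

print(fingerprint(original) == published_digest)   # True: authentic copy
print(fingerprint(tampered) == published_digest)   # False: altered media
```

Provenance checks like this cannot say whether content is true, only whether it is what the publisher released – which is exactly the question a deepfake tries to muddy.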

Getting this right on a global level requires close collaboration – let’s share information, best practices and technology. Together, we CAN protect ourselves against the threat of deepfakes.




