MISINFORMATION AND DISINFORMATION: The Biggest Global Risk in the Next Two Years

The World Economic Forum (WEF) published its 2024 Global Risks Report (GRR) in January of this year. The WEF is an international organisation for public–private cooperation with worldwide influence. For the last 20 years, the WEF has published an annual report highlighting and discussing global risks facing the world.

For the 2024 GRR, the WEF consulted nearly 1 500 leading experts from academia, business, government and civil society. A further 1 100 business leaders from 113 economies were also consulted. The 2024 GRR can, therefore, truly be seen as a realistic and well-founded representation of expert opinion from all over the world. The report discusses short-term global risks over the next two years and evaluates long-term risks over the next ten years.

In the short-term category (two years), 34 risks were identified, which include extreme weather conditions, lack of economic opportunities, inflation, forced migration, erosion of human rights, censorship, chronic health conditions, unemployment, terrorism and more.

However, the top-ranked of these 34 risks is (Artificial Intelligence-manipulated) misinformation and disinformation. Let's refer to it as AIMiDi. One reason the WEF rates this risk highest is that 2024 will see a large number of national elections, in which roughly three billion people are eligible to vote, and AIMiDi can potentially cause havoc during such elections.

The American Psychological Association defines misinformation as "false or inaccurate information — getting the facts wrong." Disinformation, on the other hand, "is false information which is deliberately intended to mislead."

It is, therefore, worthwhile to investigate AIMiDi a little deeper. Misinformation and disinformation are in themselves nothing new – they have impacted humanity for centuries and were usually seen as little more than a nuisance. The role of Artificial Intelligence (AI) has now forced this matter to the foreground for several reasons. We will highlight a few of them.

Firstly, AI is used to create false information so seamlessly that it is accepted as real, accurate and trustworthy – thereby creating misinformation.

Secondly, AI is used to manipulate and distort accurate, true information so that it becomes false information without being recognised as false – thereby creating disinformation.

Thirdly, such misinformation and disinformation can be shared worldwide in seconds through social networks.

The risk of AIMiDi is that the boundary between real, accurate information and false, inaccurate information is becoming so blurred that it is becoming impossible to distinguish between the two – real and fabricated realities become indistinguishable. Cybercriminals and scammers are already using AIMiDi to create false adverts and cyber-attacks that are ever more sophisticated and believable.

Two AIMiDi techniques that play a significant role in such scams and attacks are deep voice fakes and deep video fakes. With deep voice fakes, only a few seconds of a person's recorded voice are required to create a complete presentation or talk in the target person's voice. Anybody who has heard the target person speak before will genuinely believe the target person made this fake presentation or talk. It is not difficult to understand the consequences of such fakes in an election environment.

Deep video fakes add another dimension to this type of risk. With AI, the presentation is not only made with a fake voice; fake images of the target person can also be created. Such imagery replicates the target person's body language, physical characteristics and typical mannerisms. Now, the observer not only hears a familiar person's voice but also recognises the person's physical appearance. Such a fake video/voice product is then broadcast on social media, potentially causing chaos because the 'well-known' speaker is now saying things utterly different from their previous statements. Of course, the real person will deny the fake product, but by that time it has been widely circulated in cyberspace, and there is practically no way to stop or delete it.

This AIMiDi risk is not only relevant to elections but also to every citizen in SA. A recent scam in which Johann Rupert appeared to market lucrative investment opportunities used precisely these techniques. Many people in SA fell for the fake AI product and lost a lot of money. More and more scams will appear using deep voice and video fakes of well-known people. These scams have only one purpose – to steal your money! Do not simply believe what you hear and see anymore – investigate thoroughly before you react.

AIMiDi is also a big risk in cyberbullying. A bully can easily make deep voice and video fakes of a potential victim and spread them on social media. Another AI-based risk is that of deep nude fakes: any photo of a person can be fed to such an app, and the app will create a naked version of the image – just imagine how a cyberbully could use that!

The WEF rates the AIMiDi risk so highly that, as stated above, it is classified as the most severe global risk over the next two years.


What can we as citizens do to reduce our exposure to AIMiDi?

There is no simple solution. We will all have to make a mental shift towards being highly critical of any phone call, email, social network message, photo or video we are confronted with. We will have to develop a critical cyber evaluation mindset through which we automatically, and by default, evaluate any information or data reaching us through cyberspace. This mindset must also become part of all children's development as they grow up.

Here are a few more tips:

  • Mistrust any message, statement, photo, image or call that claims to represent reality. Use your cyber evaluation mindset to evaluate first and act afterwards.
  • Be extremely careful on social networks, as they are often the vehicles for many of these scams. Ensure that you properly protect your login and access credentials.
  • Understand that the big challenge is distinguishing between real and fake realities. For this reason, we must continuously develop our cyber evaluation mindset.

Author: Prof Basie von Solms (Published with permission)
