MEMETIC WARFARE: BACKGROUND, CONTEXT AND COUNTERMEASURES.

In this article we explain deepfakes and shallowfakes and show where memetic warfare fits between them. We also outline a paradox between implementation and countermeasure technology: at implementation, deepfakes require “deep” technology and shallowfakes require “shallow” technology, yet on the countermeasure side shallowfakes require “deeper” technologies than deepfakes.

We conclude with a brief illustration of memetic warfare in Kenya and the Kenyan government interventions.

WHAT IS A MEME?

The term “meme” was coined by the evolutionary biologist Richard Dawkins in 1976. He needed a term to explain how cultural artifacts evolve over time as they spread across society, so he coined “meme,” a blend of the ancient Greek word for imitation, “mimeme,” and the English word “gene.” Since then, memes have become an essential form of visual communication: anything a person can conceive of and express visually is potential meme material. (There may exist other claims about the origin of “meme.”)

A meme can be described as an image or video that portrays a concept, idea, or event, often supported by humor, sarcasm, or irony. Memes circulate online via social media, forums, instant messaging, virtual boards, email, blogs, etc.

WHAT IS MEMETIC WARFARE?

Memetic warfare is a form of information warfare conducted online through memes and other tactics to achieve political, strategic, or ideological objectives. It involves the offensive or defensive circulation of content to influence public opinion, disrupt discourse, and advance the interests of those engaged in the campaign. (Dr. Tine Højsgaard Munk, NTU)

Some of the uses of memetic warfare are:

  1. Defensive and offensive use.
  2. Beyond Traditional Warfare.
  3. Humor, satire, and irony.
  4. Resistance through Memes.
  5. Mobilizing Resistance.

DEEPFAKES AND SHALLOWFAKES

Before delving deep into memetic warfare let us describe two major kinds of media manipulation: deepfakes and shallowfakes. Are memes deepfakes or shallowfakes?

The advancement in artificial intelligence and its underlying technologies of machine learning, deep learning, natural language processing, computer vision, etc. has led to the emergence of two major kinds of media manipulations known as deepfakes and shallowfakes.

WHAT ARE DEEPFAKES?

Deepfakes are compilations of artificial images and audio put together with AI systems and algorithms to spread misinformation, replacing a real person’s appearance, voice, or both with similar artificial likenesses and/or voices. Deepfakes can create people who do not exist, and they can fake real people saying and doing things they never said or did.

To make a deepfake video of someone, a creator would first train an AI system on many hours of real video footage of the person to give it a realistic “understanding” of what he or she looks like from many angles and under different lighting. Then they would combine the trained AI system with computer-graphics techniques to superimpose a copy of the person onto a different actor.
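The training-then-swap pipeline above can be sketched, very loosely, in code. This is a toy illustration under invented assumptions: real deepfake systems train deep convolutional autoencoders on hours of footage, whereas here the “networks” are untrained random matrices and the “faces” are short numeric vectors.

```python
import random

random.seed(0)

LATENT_DIM, FACE_DIM = 4, 16  # toy sizes; real systems use deep CNNs on image tensors

def random_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def apply(matrix, vector):
    """Plain matrix-vector product standing in for a neural-network layer."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

# One shared encoder learns pose/lighting features common to both people;
# each person gets their own decoder trained to reconstruct only their face.
encoder = random_matrix(LATENT_DIM, FACE_DIM)
decoder_a = random_matrix(FACE_DIM, LATENT_DIM)
decoder_b = random_matrix(FACE_DIM, LATENT_DIM)

face_b = [random.uniform(0, 1) for _ in range(FACE_DIM)]  # a frame of person B

# The swap: encode person B's frame, then decode with person A's decoder.
# In a trained system the output keeps B's pose and expression but
# renders A's appearance.
latent = apply(encoder, face_b)
swapped_frame = apply(decoder_a, latent)
print(len(swapped_frame))  # 16, the same dimensionality as the input frame
```

The key design point is that the encoder is shared across both people while each decoder is person-specific; that asymmetry is what makes the decode-with-the-other-decoder swap possible.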

Deepfakes differ from other forms of false information in that they are very difficult to identify as false.

Common instances where deepfakes have been used successfully include:

  1. Election manipulation.
  2. Social engineering.
  3. Automated disinformation attacks.
  4. Identity theft.
  5. Financial fraud.
  6. Scams and hoaxes.
  7. Celebrity pornography.

WHAT ARE SHALLOWFAKES?

Shallowfakes, or cheap fakes, are pictures, videos, and voice clips created without the help of artificial intelligence (AI), either by editing or by using other simple software tools. Why are they called shallow? The term “shallow” refers to their quality, which is lower than that of deepfakes.

Shallowfakes are made with existing technologies, for example, a conventional edit on a photo, slowing down a video to change the speech patterns of an individual or more often, relying on mis-captioning or mis-contextualizing an existing image or video, claiming it is from a time or place which it is not from.

Shallowfakes can even be original images or videos that someone has simply relabeled as depicting something else or has subtly edited to change the perception of the content. Shallowfakes replicate and spread as rapidly as possible.

Shallowfakes are manipulations that range in production value from plausible to obviously fake, i.e. they are easy to make. Precisely because they are easier to create, many experts consider shallowfakes to be bigger threats than deepfakes.
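Because shallowfakes so often reuse an existing image with a new caption or only a light edit, one simple detection idea is perceptual hashing: reduce an image to a tiny fingerprint that survives small edits, then compare fingerprints. The sketch below, using invented 8×8 grayscale “images” rather than real files, implements a basic average hash.

```python
def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale grid: 1 if pixel > mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Invented test data: a gradient "image", a lightly brightened copy
# (a typical shallowfake edit), and an unrelated "image".
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
edited = [[min(255, p + 10) for p in row] for row in original]
unrelated = [[(r * c * 37) % 256 for c in range(8)] for r in range(8)]

d_edit = hamming(average_hash(original), average_hash(edited))
d_unrelated = hamming(average_hash(original), average_hash(unrelated))
print(d_edit)       # small: a light edit barely changes the hash
print(d_unrelated)  # much larger: a different image
```

A near-zero Hamming distance flags probable reuse of the same image under a new caption; note that mis-captioning alone leaves the pixels, and thus the hash, completely unchanged, which is exactly why caption context must be checked separately.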

MEMES GENERALLY ARE A KIND OF SHALLOWFAKE

Shallowfakes are much better for meme making than deepfakes, which strive for realism. For shallowfakes like memes, it is often the case that the less they correspond to reality, the more effective they can be in spreading online and even swaying human behavior.

CHARACTERISTICS OF MEMETIC WARFARE

Memetic warfare is about taking control of the dialogue, narrative, and psychological space.

  1. It’s about depreciating, disrupting, and subverting the enemy’s effort to do the same.
  2. It can be highly effective relative to cost.
  3. The attack surface can be large or small.
  4. Memetic warfare can be used in conjunction with troops, ships, aircraft, and missiles, or it can be employed without any kinetic military force at all.
  5. It operates in the communications battlespace.

Memetic warfare is everywhere on the internet and in the communications battlespace, for example:

  1. In political campaigns.
  2. In contested narratives about news events.
  3. In the thoughtless memes shared by Facebook friends, and in videos on YouTube.
  4. It shows up in movements where there’s an attempt to shape perceptions and galvanize public support.

MEMETIC WARFARE COUNTERMEASURES

Perhaps the greatest obstacle to memetic warfare intervention is a lack of appreciation for social media as a battlespace and of the extent to which memetic warfare is already taking place. This is largely generational: it can be difficult to appreciate how quickly information spreads, its global scope, and the significance of its impact on perceptions, narratives, and social movements.

According to Michael Yankoski et al. in “Meme warfare: AI countermeasures to disinformation should focus on popular, not perfect, fakes,” the real concern is the proliferation of narratives that emotionally reaffirm a belief that an audience already has (shallowfakes), and not the proliferation of fake images, videos, and audio that are so real as to convince someone of something untrue (deepfakes).

Memes are largely used in the communications battlespace to relay narratives that emotionally REAFFIRM and/or EXPRESS a belief that an audience or a group already has! Technical solutions will thus require AI systems sophisticated enough to detect coordinated campaigns designed to manipulate how groups of people feel about what they already believe, which is the motivation for campaigns involving memes.

EXISTING TECHNOLOGICAL COUNTERMEASURES

Most of the existing countermeasures are text-based e.g.

  1. Researchers have put significant resources into the creation of sophisticated AI systems to rapidly detect threats as they emerge in online social media networks, e.g. hate-speech detection systems.
  2. Social media platforms are developing AI systems to automatically remove harmful content, primarily through text-based analysis. But these techniques won’t identify all the disinformation on social media, as much of what people post consists of photos, videos, audio recordings, and memes.
  3. There are cryptographic QR code-based systems that can verify whether content has been edited from its original form.
  4. Using metadata. Metadata contains information about a piece of media, such as when it was recorded and on what device. The metadata embedded in a file can be used to cross-check the origins of the media, but this commonly used authentication technique isn’t foolproof. Some types of metadata can be added manually after a video or audio clip is recorded while other types can be stripped entirely.
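The authentication ideas in points 3 and 4 can be sketched as follows. The media record and its field names here are hypothetical: a cryptographic fingerprint of the media payload (e.g. one published in a QR code) survives metadata stripping but breaks on any edit to the content itself, which is exactly the asymmetry the text describes.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 hash of the media payload only, not its metadata."""
    return hashlib.sha256(media_bytes).hexdigest()

# Hypothetical media record: payload plus separately stored metadata.
original = {"payload": b"...raw video bytes...",
            "metadata": {"recorded": "2024-06-01", "device": "cam-01"}}
published_hash = fingerprint(original["payload"])  # e.g. embedded in a QR code

# Metadata can be stripped or rewritten without touching the payload,
# so metadata checks alone aren't foolproof...
tampered_meta = dict(original, metadata={"recorded": "2019-01-01"})
print(fingerprint(tampered_meta["payload"]) == published_hash)  # True

# ...but any edit to the payload itself changes the fingerprint.
edited = dict(original, payload=b"...slowed-down video bytes...")
print(fingerprint(edited["payload"]) == published_hash)  # False
```

The first check passing despite the rewritten metadata is the weakness of metadata-only authentication; the second check failing is what the cryptographic verification adds.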

The problem is that these technologies are often isolated from one another, and thus relatively incapable of detecting meme-based disinformation campaigns.

Systems capable of detecting deepfakes do little to counter the proliferation of disinformation campaigns deploying shallowfakes designed to magnify and amplify an audience’s preexisting beliefs. Likewise, systems focused on identifying problematic text are inadequate to the task of parsing memes and shallowfakes.

As more of the human population gains reliable and fast access to the internet, an increasing percentage of people will become susceptible to campaigns intended to manipulate, magnify, and amplify their preexisting notions and emotional dispositions. Realizing when this is occurring doesn’t just require technological systems capable of identifying deepfakes, but rather systems with the ability to identify coordinated shallowfake campaigns across platforms.

EFFECTIVE TECHNOLOGICAL COUNTERMEASURE: SEMANTIC ANALYSIS

This kind of AI analysis is on another level entirely from existing systems and technologies. Semantic analysis is a methodology aimed at mapping the meaning of the disinformation campaigns themselves. For semantic analysis, it isn’t enough to detect whether a post contains a manipulated image, a faked audio clip, or hate speech:

  1. Algorithms need to be able to identify coordinated multimodal (i.e. text/image/video) campaigns deployed across platforms so as to inflame the emotional landscape around an audience’s beliefs.
  2. AI systems will need to understand history, humor, symbolic reference, inference, subtlety, and insinuation.

Only through such novel technologies will researchers be able to detect large-scale campaigns designed to use multimedia disinformation to amplify or magnify what a group of people feel about their preexisting beliefs. This is a much more difficult task than simply identifying manipulated multimedia, particular words in a hate-speech lexicon, or new instances of a known “fake news” story. This requires:

  1. Developing the capacity for machines and algorithms to better grasp the complex and multifaceted layers of human meaning making.
  3. Systems capable of parsing the complicated layers of meaning deployed in shallowfakes like memes represent the very cutting edge of AI systems and are at the forefront of the foreseeable future of technological development in this space.
  3. In many ways this represents the transition from existing AI perceptual systems to nascent AI cognitive systems.
  4. The enormous difference in complexity and computing power between these cannot be overstated.
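As a deliberately crude illustration of one small piece of this task, the sketch below flags near-duplicate captions appearing across different platforms, one weak signal of a coordinated campaign. The posts and platform names are invented, and a real system would need multimodal models far beyond string similarity.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Case-insensitive string similarity as a stand-in for semantic matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Hypothetical posts scraped from several platforms (all texts invented).
posts = [
    ("platform_a", "They promised jobs, we got taxes. Wake up!"),
    ("platform_b", "They promised jobs, we got TAXES. wake up"),
    ("platform_c", "They promised jobs - we got taxes. Wake up!!"),
    ("platform_a", "Lovely weather in Nairobi this morning."),
]

# Flag caption pairs that are near-duplicates across *different* platforms:
# lightly reworded copies of one message are a hint of coordination.
flags = [(p1, p2) for (p1, t1), (p2, t2) in combinations(posts, 2)
         if p1 != p2 and similar(t1, t2)]
print(len(flags))  # the three reworded copies pair up across platforms
```

Note what this toy cannot do: it sees only surface text, so it would miss the same narrative carried by an image macro, an inside joke, or an insinuation, which is precisely why the text argues for semantic, multimodal analysis.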

AI INTERVENTIONS ALONE WILL NOT BE ENOUGH

Suppose AI researchers are able to develop semantic analysis systems, and it becomes possible to detect these coordinated disinformation campaigns as they occur. What then? Some responses to disinformation might involve not a technological fix to remove content, but rather techniques to help users know what they’re consuming online.

Beyond the necessary advances in technology, we also need a multi-faceted response involving the social media companies, policymakers and app developers that integrates:

  1. Policy-level responses that more carefully consider the complex relationship between disinformation, democracy, and freedom of speech.
  2. Information sharing agreements designed to coordinate the sharing of information across government agencies as well as social media platforms for the rapid identification and deceleration of disinformation campaigns in real time.
  3. Media literacy education campaigns that educate and prepare users to identify trustworthy sources of information and fact-check or further analyze sources of information that seem questionable.
  4. Content moderation strategies.
  5. The cultivation of new social norms around disinformation sharing online.
  6. The development of new disinformation consumption/interaction tools at the software level.

In other words, society needs a broad-spectrum approach to sufficiently prepare for the disinformation campaigns that are becoming increasingly common.

MEMETIC WARFARE IN KENYA

In the last two years there has been exponential growth in the use of memetic warfare in Kenya. Most of the memes express dissatisfaction with the way the government is running the affairs of the nation: bad governance, unfulfilled election promises, and a government portrayed as always lying, giving empty promises, and making unrealistic or unachievable pronouncements.

KENYAN GOVERNMENT INTERVENTIONS

The Kenyan government initially responded strongly, with abductions of suspected originators or propagators of the memes. Those in charge of security issued warnings and threats to stop the creation of what they considered disrespectful memes.

Lately the president of Kenya has appeared to embrace (or surrender to) some of the memes, for example by dancing to the “Kasongo” song in public.

WHAT SHOULD THE KENYAN GOVERNMENT DO?

The Kenyan government should quickly accept and appreciate social media as a battlespace and the extent to which memetic warfare is already taking place. This is just the beginning; it is unstoppable and will only grow bigger.

The interventions highlighted above can go a long way in helping the Kenyan government deal with, or ride with, memetic warfare.

Since memetic warfare is used to relay narratives that emotionally reaffirm and/or express a belief that an audience or a group already holds, appearing to embrace the memes goes a long way toward strengthening or confirming those beliefs; this should be avoided at all costs.

In conjunction with the countermeasures outlined above, the most effective way to counter the wave of memetic warfare the Kenyan government is facing is to move fast to foster good governance and to work toward meeting citizens’ expectations with tangible outcomes.
