AI-Driven Marketing: Rise of Deepfakes in Brand Advertising

The Center for Countering Digital Hate (CCDH) analyzed the rise of AI-generated disinformation and deepfake imagery linked to electoral activity on X (formerly Twitter) and found a concerning pattern. A consistent 130% monthly surge in deceptive content poses a significant threat to the credibility of electoral processes. The escalation points to increasingly sophisticated manipulation methods aimed at misleading voters, and it is fueling legal and ethical debate in the digital domain.

Midjourney's decision to ban the generation of images depicting U.S. presidential candidates such as Donald Trump and Joe Biden reflects a heightened awareness of how AI can be weaponized in political arenas. The move underscores the industry's acknowledgment of the profound implications deepfake technology holds for democracy, especially ahead of the 2024 elections. Other AI companies, including OpenAI (DALL-E) and Google, have also implemented restrictions around political candidates and election content to prevent misuse.

What is a deepfake, and how is it currently being used in marketing? What impact does it have? How can marketers leverage its positive effects while mitigating the negative ones? This article answers these crucial questions.

Understanding Deepfakes: A Synthetic Media Technology

A deepfake, a term that combines "deep learning" and "fake," is a type of synthetic media. It uses artificial intelligence to create or alter video content so convincingly that it is almost impossible to distinguish from real footage. The technology relies on machine learning models trained on large amounts of data, typically images or videos of a person, to mimic that person's facial expressions, movements, and voice with convincing realism.
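To make the mechanics concrete, here is a minimal conceptual sketch, in Python with PyTorch, of the classic deepfake training setup: a shared encoder learns general facial features, a separate decoder is trained for each person, and a "swap" is produced by decoding one person's encoded face with the other person's decoder. All layer sizes, shapes, and names are illustrative assumptions, not a production pipeline.

# Minimal conceptual sketch of the classic deepfake architecture:
# one shared encoder, one decoder per person. Not a production system;
# all shapes and names are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()   # trained to reconstruct faces of person A
decoder_b = Decoder()   # trained to reconstruct faces of person B

# After training both autoencoders with the *shared* encoder,
# a face swap is: encode a frame of person A, decode with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)      # stand-in for a cropped face frame
swapped = decoder_b(encoder(frame_of_a))   # face rendered in B's likeness
print(swapped.shape)                       # torch.Size([1, 3, 64, 64])

In real systems this idea is scaled up with far larger models, face alignment, and post-processing, which is what makes the results so hard to distinguish from genuine footage.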

However, it is crucial to acknowledge the dual implications of this technology. On the one hand, deepfakes can revolutionize sectors like film production and virtual reality, reducing costs and creating new possibilities for storytelling. The technology can also serve benign purposes such as dubbing films into different languages, creating digital assistants, or enhancing video conferencing. On the other hand, the potential misuse of deepfakes raises serious concerns: they can be used to spread misinformation, perpetrate fraud, or violate personal privacy.

Below, I categorize the use of synthetic media and deepfake technology in advertising into three groups: malicious uses, controversial campaigns, and successful case studies.

Fake Celebrity Endorsements: Taylor Swift and Le Creuset

The fraudulent AI ads featuring Taylor Swift and Le Creuset were a scam in which AI was used to create a fake advertisement promoting a nonexistent giveaway partnership between the singer and the cookware brand. Exploiting Swift's image, scammers used deepfake technology to mimic her voice in a fake endorsement, tricking fans into believing her support for Le Creuset products was real and showcasing deepfakes' potential for harm.

Here are the key objectives behind such scams:

  • Financial Gain: Scammers likely aimed for direct financial gain by deceiving victims into paying shipping fees, taxes, or other costs related to claiming the fake giveaway items. Victims might have been directed to provide credit card details or make payments through insecure channels, resulting in unauthorized charges or financial theft.

  • Personal Data Harvesting: The scam potentially targeted gathering personal information, such as names, addresses, phone numbers, and email addresses, under the guise of entering or winning the giveaway. This data could be valuable for future fraudulent activities, sold on the dark web, or used in identity theft schemes.

  • Malware Distribution: The advertisement might have served as a conduit for distributing malware. By enticing victims to click on links to claim their "prize," scammers could deceive individuals into downloading malicious software that compromises their devices, leading to data theft or ransomware attacks.

  • Eroding Trust in Brands and Celebrities: Though not the primary goal, a significant collateral effect of such scams is the erosion of trust between fans and the celebrities or brands they admire. When fans fall victim to scams that misuse the likenesses of celebrities or brands, it can harm reputations and instill skepticism towards authentic promotions and endorsements.

Controversial Campaigns: Volkswagen's "Generations" Ad

Volkswagen's "Generations" advertisement employed AI-powered synthetic media technology to create a powerful and groundbreaking marketing piece. The ad brought back the late Brazilian singer Elis Regina to virtually perform a duet with her daughter Maria Rita, a renowned artist in her own right. This strategic use of deepfakes enhanced the ad's emotional impact, presenting viewers with an imaginative scenario where mother and daughter could collaborate despite Elis Regina's passing in 1982. Crafted to commemorate Volkswagen's 70th anniversary in Brazil, the ad highlights the brand's enduring presence and growth in the country's automotive sector.

The "Generations" ad received mixed reactions from the public and industry observers. While some praised emotive and nostalgic elements of the ad, others were uncomfortable with the ethical implications. The advertisement prompted discussions on consent and the posthumous rights of individuals, sparking debates on the ethical use of deceased persons' likenesses in commercial settings. It opened up discussions about the limits and regulations needed to govern the use of deepfake technology, especially concerning individuals who are no longer able to voice their consent or objections.

Success Stories: Malaria No More with David Beckham

In an innovative campaign to eradicate malaria, David Beckham was featured 'speaking' nine languages as part of the "Malaria Must Die" movement. The campaign film begins with Beckham delivering his message in English before he appears to converse fluently in eight other languages. It aimed to reach a global audience, transcending language barriers to deliver a powerful call to action: "Malaria must die, so millions can live."

It's important to note that Beckham didn't actually learn these languages for the campaign. Instead, producers created a 3D model of Beckham's face and reanimated it to make it appear as if he was speaking these languages. This technique uses AI to create hyper-realistic video content. The campaign also introduced the world's first voice petition against malaria, urging people to advocate for decisive action to end the disease.

The audience reaction was overwhelmingly positive and impactful. The campaign successfully raised global awareness about the ongoing fight against malaria. It inspired many to learn more about the disease and to contribute to eradication efforts in whatever way they could, whether through donations, advocacy, or simply by spreading the word.

Industry Collaboration: Combatting Misinformation

The Coalition for Content Provenance and Authenticity (C2PA) is a cross-industry initiative dedicated to establishing standards for digital content provenance in order to counter misinformation and content manipulation, including content generated by artificial intelligence. With Google and Meta joining Adobe and Microsoft in the alliance in February 2024, the C2PA now unites a diverse coalition of technology firms, media entities, and content creators.

The C2PA aims to label content created using AI. It is developing standards that allow content creators and platforms to securely attach metadata to various media types, such as images, videos, and documents. This metadata can describe the content's source, any modifications made, and the specifics of those changes, establishing a digital "breadcrumb trail" that can be used to validate authenticity.
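As a rough illustration of the idea, the sketch below (Python, standard library only) builds a simplified provenance record in the spirit of C2PA: a cryptographic hash binds the metadata to the exact bytes of the asset, and an action list records how the asset was made and edited. The field names and file path are hypothetical assumptions; real C2PA manifests are produced with the official SDKs and are cryptographically signed.

# Simplified, illustrative sketch only: a provenance record in the *spirit*
# of C2PA (content hash + edit history attached as metadata). It does NOT
# use the official C2PA SDK or manifest format; field names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def content_hash(path: str) -> str:
    """SHA-256 digest of the media file, binding the metadata to its exact bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_provenance_record(path: str, creator: str, actions: list[dict]) -> dict:
    """Assemble a 'breadcrumb trail' describing where the asset came from
    and what was done to it (e.g. AI generation, cropping, color edits)."""
    return {
        "asset_hash": content_hash(path),
        "claim_generator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "actions": actions,
    }

if __name__ == "__main__":
    # Create a tiny placeholder asset so the sketch runs end to end.
    with open("campaign_visual.png", "wb") as f:
        f.write(b"placeholder image bytes")

    record = build_provenance_record(
        "campaign_visual.png",               # hypothetical asset
        creator="BrandStudio v1.0",          # hypothetical tool name
        actions=[
            {"action": "created", "tool": "generative AI model"},
            {"action": "edited", "tool": "photo editor", "detail": "color grading"},
        ],
    )
    print(json.dumps(record, indent=2))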

By establishing a standardized way to trace the origin and modifications of digital content, the C2PA seeks to combat misinformation and the spread of manipulated media, including deepfakes. The initiative aims to restore trust in digital content by making it possible to distinguish between genuine and fabricated material.
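Continuing the illustrative sketch above, verification then amounts to recomputing the hash of the bytes you received and comparing it with the hash stored in the provenance record; any alteration made after the record was created, including a deepfaked frame, breaks the match. This is a simplified stand-in for C2PA's signed-manifest validation, not the actual mechanism.

# Continuing the hypothetical sketch: detect tampering by comparing the
# current bytes against the hash recorded in the provenance metadata.
import hashlib

def verify_asset(path: str, record: dict) -> bool:
    """Return True if the file's current bytes match the recorded hash."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == record["asset_hash"]

# Usage (with the record built in the previous sketch):
# assert verify_asset("campaign_visual.png", record)   # True if untouched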

For the C2PA standards to be effective, widespread adoption across platforms, tools, and content creators is crucial. Big tech companies involved in the coalition are working to integrate these standards into their products and services, encouraging broader industry adoption.

Ethical Advertising with Synthetic Media: Guidelines for Marketers

When contemplating the use of AI-powered synthetic media technology in advertising, brands face a complex terrain of ethical, legal, and reputational issues. Key considerations for brands include:

Ethical Considerations

  • Consent and Transparency: Obtain clear consent from individuals depicted in synthetic media content to maintain trust and respect.
  • Respect and Integrity: Ensure advertisements do not harm the individuals portrayed or misrepresent their beliefs.

Legal Compliance

  • Copyright and Intellectual Property Laws: Avoid infringing on others' intellectual property rights.
  • Privacy Laws: Adhere to privacy regulations to prevent legal challenges.
  • Regulatory Guidelines: Stay compliant with advertising standards and disclosure requirements.

Reputational Risks

  • Public Perception: Evaluate how using synthetic media technology may affect brand trust and perception.
  • Brand Identity: Align synthetic media content with brand values to safeguard reputation.

Quality and Authenticity

  • Maintaining Authenticity: Use synthetic media technology to enhance messaging while preserving brand authenticity.
  • Technical Quality: Ensure high-quality synthetic media content to avoid unintended effects.

Potential for Misuse

  • Mitigating Misuse: Address unintended consequences and prevent malicious alterations or distribution.

Social Responsibility

  • Impact on Society: Reflect on societal implications to prevent misinformation and maintain trust in media.

Future-Proofing

  • Adapting to Regulations: Stay updated on evolving legal frameworks to adjust advertising practices accordingly.

Conclusion: Acceptance within Boundaries

In essence, the marketing industry should approach the latest AI innovation—deepfakes—with a discerning foresight that upholds ethical standards. We stand on the verge of a storytelling revolution that can enhance brand narratives with unparalleled vividness and customization. Yet, it is crucial to uphold authenticity and individual rights in this realm to safeguard the intrinsic value that brands and endorsers contribute to the industry.

Smart AI Marketing Newsletter Article No.7


About Smart AI Marketing

Smart AI Marketing newsletter is a go-to guide for marketing professionals, business leaders, and AI enthusiasts. It bridges the gap between artificial intelligence and marketing strategies, exploring the trends, applications, and tools transforming customer engagement. Packed with case studies, expert insights, and practical advice, it is a resource for navigating digital marketing. Whether you are a seasoned marketer or new to the field, you will gain the knowledge to use AI for real impact in marketing. Subscribe now.

Yagub Rahimov

Polygraf AI | Agentic AI Risk, Fraud and Privacy management | 2X Patent holder


A technology in itself isn't good or bad. As an example, according to Polygraf, I can see that you've used AI to enhance your writing for this post. That means AI made you more productive. Unfortunately, however, bad actors embrace the tech faster than good actors. We need dedication to finding a solution that protects them.


Excited to delve into the world of deepfakes and their impact on marketing!

Md Tahidul Islam

Digital Marketing Strategist | Helping Clients Reduce CAC by 30% While Increasing Sales by 34% Through Data-Driven Digital Marketing Strategies


Exciting to see brands exploring innovative AI technologies for marketing! Eva Dong
