How Can Businesses Navigate the Legal Risks of Deepfake Testimonials in Online Advertising? UK Perspective
Jha Arunima CIPP(E)
Specialized Counsel in TMT, IP Governance, Sports Law, Private Equity & M&A | Data Privacy | Animal Advocacy | Ex-BookMyShow, LLM & MBA (Finance) | Author & Career Counselor
Deepfake technology is no longer science fiction; it's a business reality. But when it's used to create fake testimonials that appear to come from real people, the legal consequences can be serious.
As businesses increasingly use digital tools to market their products and services, the line between ethical marketing and deceptive practices can easily blur. One area where this is becoming especially risky is deepfake technology: AI-generated or manipulated content that mimics real people with striking realism.
So, what happens when a deepfake testimonial is used to promote a product or service online? What legal consequences could a company face?
Understanding Deepfake Testimonials: The Legal Landscape
A deepfake testimonial that falsely presents itself as coming from a real person can trigger several legal causes of action, depending on the jurisdiction. While laws are still catching up with the fast pace of AI technology, existing regulations already provide avenues for redress.
1. Defamation
Where a deepfake falsely attributes statements to a person and those statements damage their reputation, defamation claims could be brought. In the UK, the Defamation Act 2013 requires that a statement has caused, or is likely to cause, serious harm to a person's reputation before it is actionable. Even if the intent wasn't malicious, the harm to an individual's reputation could still trigger legal action.
2. Copyright Infringement
If the deepfake is built from copyrighted material, the Copyright, Designs and Patents Act 1988 (CDPA) may come into play. Although copyright law protects the underlying works rather than a person's likeness as such, unauthorized use of the source photographs, video footage, or sound recordings could still constitute infringement in certain contexts.
3. Data Protection Violations
Deepfakes that use biometric data, such as facial features or voice patterns, could fall under data protection laws like the UK GDPR. If personal data is processed without a lawful basis, such as consent, companies could face hefty penalties from the Information Commissioner's Office (ICO).
4. Passing Off & Misrepresentation
If a deepfake testimonial makes it appear that a public figure or recognizable individual endorses a product without their permission, this could lead to a passing off claim. This is particularly relevant in commercial settings where reputation and brand identity are crucial.
5. Consumer Protection Laws
Under the UK's Consumer Protection from Unfair Trading Regulations 2008 (CPUTRs), creating misleading testimonials is an unfair commercial practice; falsely representing a trader as a consumer is among the practices banned outright. If a deepfake testimonial misleads consumers into making purchasing decisions they wouldn't otherwise make, companies could face investigations by bodies like Trading Standards or the Competition and Markets Authority (CMA).
Getting Deepfake Content Removed: The Practical Approach
If a deepfake testimonial appears on social media or other online platforms, the first step is often to use the platform’s acceptable use policies to flag the content for removal. Many platforms have take-down procedures for AI-generated content, especially if it misleads or harms individuals.
In some cases, the Advertising Standards Authority (ASA) can intervene, particularly if the misleading content violates the CAP Code. The ASA has also issued guidance on deepfake ads, warning businesses that they must hold documentary evidence to support any testimonials used in their advertising.
The EU AI Act: What Lies Ahead
While current laws offer partial protection, the EU AI Act, which includes specific provisions for deepfakes, will add more structure to the legal framework. Its transparency obligations, due to apply from August 2026, mandate clear labeling of AI-generated content and require disclosure when deepfakes are used. Businesses that operate in or market to EU consumers will need to comply with these new transparency requirements.
What Businesses Should Do Now
1. Audit Your Advertising Content – Ensure that testimonials, endorsements, and any AI-generated material comply with local laws and advertising codes.
2. Be Transparent – If you’re using AI or synthetic media in marketing materials, disclose it. Transparency builds trust and avoids legal pitfalls.
3. Consult Legal Counsel – As the regulatory landscape around deepfakes evolves, getting legal advice is crucial, especially when venturing into AI-driven marketing.
4. Educate Marketing Teams – Many legal risks arise from unintentional misuse. Ensure your team understands where ethical lines are drawn.
Final Thoughts: Technology vs. Accountability
While deepfake technology offers innovative opportunities for businesses, it also raises serious legal and ethical concerns. Using deepfakes for misleading testimonials isn’t just bad PR—it could invite lawsuits, regulatory scrutiny, and lasting damage to your brand.
As AI technology continues to evolve, businesses must balance innovation with accountability. Staying ahead of the legal curve isn’t just smart—it’s essential.
#Deepfakes #ArtificialIntelligence #DigitalMarketing #AdvertisingLaw #ConsumerProtection #DataPrivacy #GDPR #AICompliance #BrandReputation #BusinessEthics