Chinese Internet Court finds AI platform liable for contributory infringement of Ultraman Copyright!
Paolo Beconcini
Head of China IP Team at Squire Patton Boggs / Lecturer in Law at USC Gould School of Law
I. Introduction
Artificial Intelligence (AI) is challenging the resilience of copyright law, raising legal and ethical concerns that extend beyond intellectual property. Issues such as infringement, plagiarism, and unauthorized AI training on copyrighted works have placed AI under scrutiny. As generative AI advances, critical questions arise: Should AI-generated works receive copyright protection? Should AI platforms be held liable for allowing users to train AI models using copyrighted material without the owner's consent? AI model training practices have indeed drawn criticism worldwide for allegedly using copyrighted works and unfairly competing with human artists.
China has been at the forefront of AI legal developments, aggressively pursuing AI dominance and leading in legal disputes over AI-generated works and copyright infringement.
In a significant ruling by the Hangzhou Internet Court on September 25, 2024, upheld by the Hangzhou Intermediate People’s Court on December 30, 2024, an AI model and platform were found liable for contributory infringement. This marks a pivotal moment in legal accountability within the AI landscape.
A second part of this article compares U.S. legal approaches to similar issues.
II. The Ultraman Case and AI Platform Liability
Tsuburaya Productions Co., Ltd. owns the copyrights to the Ultraman franchise. Since 2019, Shanghai Cultural Development Co., Ltd. has been the exclusive Chinese licensee for Ultraman-related copyrights, covering reproduction, distribution, adaptation, and merchandising.
In early 2024, Shanghai Cultural Development Co., Ltd. (the Plaintiff) discovered that an AI platform operated by Hangzhou Intelligent Technology Co., Ltd. (the Defendant) allowed users to generate images using Ultraman-related LoRA (Low-Rank Adaptation) models.
In January 2024, the Plaintiff sued the Defendant before the Hangzhou Internet Court, alleging copyright infringement and unfair competition.
III. The Lawsuit
3.1. Plaintiff’s Allegations
The Plaintiff claimed that the Defendant's AI platform enabled users to train and apply LoRA models to generate Ultraman images, which were then published and shared. The Plaintiff alleged that the platform facilitated copyright infringement by allowing users to upload copyrighted images for model training, making infringing content widely accessible.
The Plaintiff presented evidence that the platform’s "Forum" section contained multiple images and LoRA models substantially similar to Ultraman. Some models and images explicitly included "Ultraman" in their names or prompt words.
The Plaintiff sought a finding of direct infringement or, in the alternative, of contributory infringement, arguing that the Defendant had failed to take preventive measures and knowingly allowed the training and use of Ultraman LoRA models.
3.2. Defendant’s Defense
The Defendant argued that it was a neutral network service provider, merely offering AI computing and storage services, without direct control over user-uploaded content. It invoked China's "safe harbor" rule, asserting that its only obligation was to remove infringing content upon receiving notice.
IV. The Court Decision
The court analyzed the AI platform’s role and found that users uploaded Ultraman-related images, trained models, and used them to create substantially similar works, which were then shared on the platform. The court ruled that AI-generated modifications were minimal and that the images remained highly similar to copyrighted Ultraman content.
4.1. On Direct Infringement
The court found that, while the Defendant was a generative AI service provider, it was not directly liable for infringement because it did not upload infringing content itself. Users controlled the input and distribution, and there was no evidence that Defendant actively collaborated in infringement.
4.2. On Contributory Infringement
The Plaintiff argued that the Defendant, as a generative AI provider, had control over the platform, knew or should have known about the infringement, and yet failed to act, making it liable for aiding infringement.
The Defendant maintained that it merely provided computing power and that users had control over AI training. It claimed protection under "safe harbor" rules, asserting that it only needed to remove infringing content upon notice.
The court disagreed, emphasizing that AI providers must meet stricter liability standards than traditional network service providers.
Under Chinese Copyright Law, aiding infringement is defined as knowingly permitting violations without intervention. The court found that users could not directly generate Ultraman images with base models, but by combining them with Ultraman LoRA models, they produced highly recognizable and easily replicable infringing content.
For instance, the "Ultraman Blaze" LoRA model had been used 1,000 times, enabling continuous reproduction of infringing works. Given this high risk, the Defendant should have foreseen potential copyright infringement and exercised greater oversight.
The court also found that the Defendant monetized user-generated content by integrating it into its platform services.
Thus, the court ruled that the Defendant failed to exercise due diligence and that its safeguards were insufficient, given its heightened duty of care as an AI model provider.
V. Analysis and Implications
5.1. Expanding Liability for AI Platforms
This case sets a precedent by holding an AI platform accountable for contributory infringement. The ruling suggests that AI platforms may be directly liable if they intentionally train infringing models.
Here, the Defendant was absolved of direct infringement because it did not control LoRA model uploads. However, had the Defendant created and trained infringing AI models, the court would likely have found it directly liable.
5.2. Stricter Standards for AI Services
The ruling imposes a stricter duty of care on AI platforms than on traditional network service providers. While the safe harbor shields passive platforms, AI providers must actively monitor for and prevent misuse. The court found the Defendant's safeguards inadequate: although its user agreement disclaimed responsibility, the platform lacked clear reporting mechanisms before the litigation. Only after receiving notice of the lawsuit did it block content and implement review procedures, showing that it was capable of taking such measures but had failed to act proactively. The ruling signals that AI platforms must proactively prevent infringement or face legal consequences.
5.3. Open-Source AI vs. Commercial AI Platforms
The court distinguished open-source AI models (e.g., Stable Diffusion) from commercial AI platforms such as the Defendant's.
VI. Conclusion
The Hangzhou Internet Court’s decision is a landmark ruling in AI copyright law. It establishes that AI platforms cannot ignore copyright violations and must implement stricter safeguards.
As AI regulations evolve, China and the U.S. will likely refine legal standards, forcing AI platforms to take greater responsibility for copyright compliance.