AI & Tech Legal Digest || March 7, 2025
Anita Yaryna
Senior IP & Tech Legal Counsel | US-EU Product, Privacy, and Policy Counsel | AI Advisor | Commercial Counsel
Hi there! Welcome to the AI & Tech Legal Digest!
Here, I bring you the latest key legal news and trends in the AI and tech world.
Stay updated by clicking the "Subscribe" button.
Enjoy your read!
UK's Proposed "Opt-Out" Copyright Framework Raises Legal Concerns for Creative Professionals
The UK Department for Science, Innovation and Technology (DSIT) has concluded its consultation on controversial copyright reforms that would fundamentally alter the legal relationship between creators and AI developers. The proposal would establish an "opt-out" mechanism allowing AI companies to mine creative works for training data by default, effectively shifting the burden of protection from users of copyrighted material to the rights holders themselves—a significant departure from traditional copyright principles.
This proposed regime change raises critical legal questions regarding the compatibility with established IP frameworks, including potential conflicts with the Berne Convention's three-step test for copyright exceptions. Artists in Devon, including illustrator Sarah McIntyre, have highlighted the practical implementation challenges, noting the proposal would retroactively apply to existing works, creating an unprecedented requirement for creators to proactively protect their entire portfolio. The Devon Artist Network further emphasized the departure from fundamental property rights principles that typically do not require owners to explicitly declare ownership to maintain protection.
While Conservative MP Mel Stride has called for the government to "press pause on its rushed consultation," legal observers may note the tension between innovation policy and established IP protection frameworks. The DSIT maintains the current copyright system is "holding back" both creative industries and the AI sector, suggesting a potential rebalancing of competing interests. For legal practitioners advising clients in creative industries, this development signals a critical juncture in UK copyright law that may require proactive guidance on portfolio protection strategies and potential challenges under international IP agreements should the proposal advance.
UK Court of Appeal Grants Lenovo Interim Patent Relief in High-Stakes FRAND Dispute with Ericsson
The UK Court of Appeal has delivered a significant ruling in the ongoing global patent licensing battle between Lenovo and Ericsson, overturning a lower court decision and establishing that Lenovo is entitled to an interim license for Ericsson's telecom patents while awaiting final determination of FRAND (Fair, Reasonable, and Non-Discriminatory) terms. This February decision represents a notable development in UK jurisprudence regarding standard-essential patent disputes.
Lord Justice Arnold's judgment contained several critical legal determinations with implications for international patent licensing practices. The court found Ericsson in breach of its good faith obligations by pursuing parallel foreign litigation despite Lenovo's willingness to accept FRAND terms as determined by English courts. The ruling also specified that appropriate interim license terms would require Lenovo to make a "nine-figure dollar sum" payment to Ericsson—establishing a substantial financial framework for pre-trial licensing arrangements.
This decision reinforces the English courts' increasingly influential role in global FRAND disputes following the UK Supreme Court's 2020 ruling that confirmed their jurisdiction to set worldwide FRAND terms. The case joins other recent UK precedents, including the Amazon-Nokia dispute, where courts have permitted interim patent licensing arrangements pending final determinations. For legal professionals advising technology clients engaged in standard-essential patent negotiations, this ruling highlights both the potential strategic value of seeking interim relief in the UK and the courts' willingness to impose substantial interim payments to balance innovators' and implementers' interests during protracted FRAND litigation.
UK ICO Launches Data Protection Investigation into TikTok, Reddit, and Imgur's Handling of Children's Data
The Information Commissioner's Office (ICO) has initiated a formal investigation into how major social media platforms process children's personal data, focusing specifically on TikTok, Reddit, and Imgur. This regulatory action targets the algorithmic recommendation systems that potentially expose minors to inappropriate content and examines the adequacy of age verification measures implemented across these platforms.
For TikTok, the investigation centers on how the platform processes personal information of users aged 13-17 to generate content recommendations—a particularly sensitive issue given recent international scrutiny over the Chinese-owned platform's data practices. Information Commissioner John Edwards clarified that while the regulator expects to find "benign and positive uses of children's data," the investigation will assess whether these systems are "sufficiently robust to prevent children being exposed to harm, either from addictive practices...or from content that they see."
The probe represents a significant enforcement action under the UK's Children's Code for online privacy, implemented in 2021, which mandates specific protections for minors' data. Legal experts should note that the ICO's approach appears strategically designed to establish precedent applicable across the broader social media landscape, with Edwards explicitly stating the findings will have implications for similar algorithmic systems used by X, Instagram's Reels, and Snapchat. This investigation signals potential regulatory convergence between data protection compliance and online safety obligations—particularly relevant as platforms navigate competing imperatives of user engagement metrics and child safety requirements.
Scale AI Faces DOL Investigation Over Potential Fair Labor Standards Act Violations
The U.S. Department of Labor (DOL) has launched an investigation into Scale AI, the $13.8 billion data-labeling startup, to assess its compliance with the Fair Labor Standards Act (FLSA). The investigation, active since at least August 2024, focuses on key labor law issues including unpaid wages, potential misclassification of employees as contractors, and alleged illegal retaliation against workers—three critical areas that have significant implications for AI companies relying on contingent workforces.
This regulatory scrutiny coincides with mounting legal challenges from former Scale AI contractors, with two lawsuits filed in late 2024 and early 2025 claiming workers were underpaid and improperly classified as independent contractors rather than employees. While Scale AI disputes these allegations and maintains that its compensation meets or exceeds local living wage standards, the investigation highlights the evolving legal landscape surrounding AI workforce management. Similar DOL investigations have resulted in significant settlements, as demonstrated by hotel staffing platform Qwick's $2.1 million settlement and subsequent reclassification of California workers as employees.
The investigation raises particularly interesting questions regarding worker classification in the AI development ecosystem, where companies like Scale AI rely heavily on contractors to perform essential tasks such as data labeling for major tech organizations. Legal experts should note that regardless of the investigation's outcome, it underscores the increasing regulatory focus on labor practices within the AI industry and may establish precedent for how AI companies structure their workforces. Adding complexity to the situation are Scale AI's apparent political connections, with former managing director Michael Kratsios nominated as director of the White House's Office of Science and Technology Policy in the Trump administration—though this position would have no direct oversight of DOL investigations.
California's New AI Bill Aims to Protect Whistleblowers and Create Public Computing Infrastructure
California State Senator Scott Wiener has introduced SB 53, a new AI bill focused on whistleblower protections and public computing resources, following his previous controversial AI safety bill, SB 1047, which was vetoed by Governor Gavin Newsom in September 2024. This latest legislative initiative strategically repackages the less contentious elements of the previous bill while maintaining focus on potential AI systemic risks.
The proposed legislation contains two key provisions with significant legal implications for AI developers and their employees. First, it would establish robust whistleblower protections for employees at frontier AI companies who report concerns about AI systems posing "critical risk"—defined as foreseeable hazards that could result in death or serious injury to more than 100 people, or property damage exceeding $1 billion. Companies like OpenAI, Anthropic, and xAI would be prohibited from retaliating against employees who disclose concerning information to California's Attorney General, federal authorities, or even internally to other employees, and would be required to formally respond to whistleblowers' concerns.
The second major component establishes "CalCompute," a public cloud computing cluster that would provide critical infrastructure access to researchers and startups developing AI applications for public benefit. This provision aims to democratize access to computing resources necessary for AI development beyond well-funded private companies. The initiative comes at a politically complex moment for AI regulation, with Vice President J.D. Vance signaling a federal preference for innovation over safety regulations at the recent Paris AI Action Summit. Legal experts and technology companies will be closely monitoring how this legislation progresses through California's legislative process and whether it can avoid the contentious debate that ultimately derailed SB 1047.
Key AI Architect Subpoenaed in Pivotal Copyright Litigation Against OpenAI
Alec Radford, a foundational researcher behind OpenAI's transformative AI technologies, has been served with a subpoena in the ongoing copyright infringement litigation against the company. According to a February 25 court filing in the U.S. District Court for the Northern District of California, Radford—who departed OpenAI in late 2024 to pursue independent research—will be compelled to testify in a case that could establish critical precedent for AI training data practices.
As lead author of OpenAI's seminal research on generative pre-trained transformers (GPTs), Radford's testimony holds particular significance for the "In re OpenAI ChatGPT Litigation" brought by prominent authors including Paul Tremblay, Sarah Silverman, and Michael Chabon. While the Court previously dismissed two claims against OpenAI, it allowed the direct infringement claim to proceed—challenging OpenAI's assertion that its use of copyrighted materials for training falls under fair use protection.
The case represents a broader legal trend targeting AI industry leaders, with plaintiffs' attorneys also moving to compel testimony from former OpenAI executives Dario Amodei and Benjamin Mann, who now lead Anthropic. A magistrate judge has already ruled that Amodei must submit to questioning in this case and a parallel Authors Guild action, signaling the judiciary's interest in thoroughly examining AI training methodologies through testimony from those who architected these systems. The outcome could fundamentally reshape how AI companies approach copyright compliance and potentially establish new standards for permissible use of copyrighted works in machine learning contexts.
Meta Expands Facial Recognition Testing to UK and EU Following Regulatory Approval
Meta has extended its facial recognition tools to the United Kingdom and European Union after "engaging with regulators," marking a significant expansion of technology the company had previously scaled back due to legal challenges. The dual-purpose system—which includes protection against celebrity-based scam advertisements and account recovery verification—represents Meta's cautious reentry into facial recognition following years of regulatory scrutiny.
The opt-in "celeb-bait protection" feature aims to address a persistent problem on Meta's platforms: unauthorized use of public figures' likenesses in fraudulent advertisements. This feature arrives alongside a "video selfie verification" tool designed to help users regain access to compromised accounts. Both implementations come with explicit privacy assurances, with Meta VP of Content Policy Monika Bickert emphasizing that facial data is deleted immediately after comparison and "not used for any other purpose"—a critical legal distinction from the company's previous facial recognition implementations.
This careful approach reflects Meta's complicated history with biometric data processing, including a substantial $1.4 billion settlement in 2024 over alleged inappropriate biometric data collection. Legal experts should note that while Meta shut down its photo tagging facial recognition system in 2021 amid regulatory pressure, it retained the underlying DeepFace model, which may form the foundation of these new limited-use tools. The expansion occurs against the backdrop of Meta's aggressive AI development strategy and the UK's increasingly AI-friendly regulatory environment, potentially creating space for facial recognition technology that addresses specific platform problems rather than serving broader data collection purposes.
Canadian Privacy Watchdog Seeks Court Order Against Pornhub Operator Over Consent Violations
Canada's Privacy Commissioner Philippe Dufresne has escalated enforcement actions against Aylo Holdings, operator of Pornhub and other adult entertainment websites, by seeking a Federal Court order to compel compliance with Canadian privacy laws. This significant legal step follows the Commissioner's February 2024 determination that the Montreal-based company had violated privacy regulations by enabling the sharing of intimate images without proper consent from all individuals depicted.
The court action represents a critical development in the ongoing regulatory scrutiny of adult content platforms, with Dufresne explicitly citing the company's failure to "adequately address the significant concerns" identified in his previous investigation. The case originated from a complaint by a woman who discovered non-consensual uploads of intimate videos by a former partner—highlighting the intersection between privacy law enforcement and image-based sexual abuse.
Aylo Holdings has responded by "strongly disagreeing" with the Commissioner's assertions, expressing surprise at the legal action after what it characterized as "productive dialogue" regarding a potential compliance agreement. The company maintains it has implemented substantial safeguards since the 2015 incident that triggered the complaint, including mandatory uploader verification, participant consent documentation, download restrictions, and expanded content moderation. Legal observers note this case could establish important precedent regarding platform obligations for verifying consent in user-generated content environments, particularly for sensitive material where privacy violations can cause significant harm.
In this fast-paced world, it can be challenging to stay updated with the latest legal news in the AI and tech sectors. I hope you found this digest helpful and were able to free up some time for the joys of life.
Don't forget to subscribe to receive the weekly digest next Friday.
Anita