AI&Tech Legal Digest || October 18, 2024
Anita Yaryna
Senior IP & Tech Legal Counsel | US-EU Product, Privacy, and Policy Counsel | AI Advisor | Commercial Counsel
Hi there! Thanks for stopping by. Welcome to the AI&Tech Legal Digest!
Here, I bring you the latest key legal news and trends in the AI and tech world. Stay updated with expert insights, state approaches, regulations, key recommendations, safeguards, and practical advice in the tech and AI field by clicking the "subscribe" button above. Enjoy your read!
X (Formerly Twitter) Updates Privacy Policy to Allow AI Training on User Data
X, the social media platform formerly known as Twitter, has made significant changes to its privacy policy, opening the door for third-party AI companies to train their models on user-generated content. The update, set to take effect on November 15, 2024, marks a potential new revenue stream for the Elon Musk-owned company amid financial challenges.
Key points of the policy update include:
- Third-party collaborators may use X data to train AI models, including generative AI.
- Users will have the option to opt out of this data sharing, though the specifics of how to do so are not yet clear.
- X has removed language specifying how long it retains user data, replacing it with a more flexible policy.
- The company added a reminder that public content may persist elsewhere even after removal from X.
- A new "Liquidated Damages" clause in the Terms of Service sets penalties for organizations scraping large amounts of content.
This move aligns X with other platforms like Reddit that are exploring data licensing to AI companies as a revenue source. However, it raises questions about user privacy and data control, especially given the recent EU investigation into X's use of user data for training its own Grok AI chatbot.
The policy change reflects the evolving landscape of AI and social media, where user-generated content is increasingly valuable for training large language models and other AI systems. As this trend continues, it's likely to spark further debates about data ownership, privacy, and the ethical use of social media content in AI development.
FTC Finalizes "Click-to-Cancel" Rule for Subscription Services
The Federal Trade Commission (FTC) has announced a final "click-to-cancel" rule aimed at simplifying the process of ending recurring subscriptions and memberships. This new regulation represents a significant step towards protecting consumers from deceptive subscription practices.
Key points of the FTC's new rule include:
- Companies must make canceling subscriptions as easy as signing up for them.
- The rule applies to all automatically renewing subscriptions, including streaming services, gym memberships, and plans like Amazon Prime.
- Businesses can't force customers to use a different method to cancel than the one used to sign up.
- Companies must provide cancellation information before obtaining billing information and charging customers.
- The final rule is slightly modified from the initial proposal: it drops the requirement for annual cancellation reminders and the restrictions on companies offering plan modifications during cancellation attempts.
The FTC cites a significant increase in complaints about deceptive subscription practices, with around 70 complaints per day in 2024, up from 42 per day in 2021. This rule aims to address these concerns and save consumers time and money.
Most provisions of the rule will take effect 180 days after publication in the Federal Register. This development marks a significant shift in how subscription-based businesses will need to operate, potentially impacting a wide range of industries and improving consumer rights in the digital age.
U.S. Prosecutors Target AI-Generated Child Abuse Imagery as Legal Landscape Evolves
Federal prosecutors in the United States are intensifying efforts to combat the emerging threat of AI-generated child sex abuse imagery. This year has seen two landmark criminal cases involving defendants accused of using generative AI systems to produce explicit images of children, marking a new frontier in the fight against digital exploitation.
The Justice Department's actions reflect growing concerns about AI's potential misuse in various criminal activities, including child exploitation. While AI-related reports currently constitute a small fraction of overall child exploitation tips, law enforcement and child safety advocates worry about the technology's capacity to morph innocent photos and complicate victim identification.
These cases are breaking new legal ground, particularly when no identifiable child is depicted. Prosecutors are exploring charges like obscenity offenses where traditional child pornography laws may not apply. However, the legal landscape remains uncertain, with potential constitutional challenges and appeals likely as courts grapple with AI's impact on existing laws.
The tech industry is also responding, with major AI players committing to prevent their systems from being used to generate abusive content. As the situation evolves, stakeholders emphasize the urgency of addressing this issue promptly to prevent it from spiraling out of control.
EU Considers Broadening Scope of Potential X Fines to Include Musk's Other Ventures
The European Union is exploring a novel approach to calculating potential fines against X (formerly Twitter) for alleged violations of the Digital Services Act (DSA). In a move that could significantly increase the platform's financial exposure, EU regulators are considering whether to include revenue from Elon Musk's other companies - such as SpaceX, Neuralink, xAI, and the Boring Company - in determining the basis for fines.
This unprecedented consideration effectively treats Musk's business empire as a single entity for regulatory purposes, potentially holding him personally accountable for X's compliance with EU content moderation rules. The DSA allows for fines of up to 6% of a company's global annual revenue for violations, making this decision crucial for both X and Musk.
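To see why the revenue basis matters, consider the arithmetic of the 6% cap. The short Python sketch below is purely illustrative: every revenue figure in it is an invented placeholder, not an actual financial number for X or any other Musk company.

```python
# Purely hypothetical illustration of the DSA fine cap.
# All revenue figures are invented placeholders, NOT real financials.
DSA_FINE_CAP = 0.06  # DSA permits fines of up to 6% of global annual revenue

x_revenue_usd = 3.0e9        # placeholder: X on its own
other_ventures_usd = 10.0e9  # placeholder: other ventures combined
group_revenue_usd = x_revenue_usd + other_ventures_usd

print(f"Max fine, X-only basis: ${DSA_FINE_CAP * x_revenue_usd / 1e9:.2f}B")
print(f"Max fine, group basis:  ${DSA_FINE_CAP * group_revenue_usd / 1e9:.2f}B")
```

Under these made-up numbers, the ceiling would rise from $0.18B to $0.78B, which is precisely why the choice of basis is so consequential.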
While the investigation is ongoing and no final decisions have been made, this approach signals the EU's determination to enforce its digital regulations robustly. X, now a private company under Musk's sole control, may face significant financial penalties if found in breach of the DSA's requirements on tackling illegal content, disinformation, and transparency.
Musk has vowed to challenge any DSA fines in court, setting the stage for a potential legal battle that could shape the future of tech regulation in Europe and beyond.
EU AI Act Compliance Tool Unveils Gaps in Big Tech's AI Models
A new compliance tool designed to assess AI models against the European Union's upcoming AI Act has revealed significant shortcomings in some of the most prominent artificial intelligence systems. The "Large Language Model (LLM) Checker," developed by Swiss startup LatticeFlow AI in collaboration with ETH Zurich and Bulgaria's INSAIT, has tested AI models from major tech companies across various categories aligned with the EU's comprehensive AI regulations.
The tool's findings highlight areas where tech giants may need to focus their efforts to ensure compliance with the AI Act, which will be phased in over the next two years. Notable areas of concern include:
- Discriminatory Output: OpenAI's GPT-3.5 Turbo and Alibaba Cloud's Qwen1.5 72B Chat model scored low in this category, at 0.46 and 0.37 respectively on a 0-to-1 scale.
- Cybersecurity Resilience: Meta's Llama 2 13B Chat and Mistral's 8x7B Instruct model showed vulnerabilities to "prompt hijacking," scoring 0.42 and 0.38 respectively.
While overall scores were generally high, with most tested models averaging 0.75 or above, these specific weaknesses could pose compliance challenges. With potential fines of up to 35 million euros or 7% of global annual turnover for non-compliance, tech companies face significant pressure to address these issues.
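To make those numbers concrete, here is a minimal, purely illustrative Python sketch that flags category scores falling below a compliance threshold. This is not LatticeFlow's LLM Checker or its API; the 0.75 threshold is borrowed from the average noted above, and the data structure is an assumption made for the example.

```python
# Illustrative sketch only -- not LatticeFlow's LLM Checker or its API.
# Scores are those reported above; the threshold and dict structure
# are assumptions made for this example.
reported_scores = {
    ("GPT-3.5 Turbo", "discriminatory output"): 0.46,
    ("Qwen1.5 72B Chat", "discriminatory output"): 0.37,
    ("Llama 2 13B Chat", "prompt-hijacking resilience"): 0.42,
    ("8x7B Instruct", "prompt-hijacking resilience"): 0.38,
}
THRESHOLD = 0.75  # most tested models averaged 0.75 or above overall

for (model, category), score in reported_scores.items():
    status = "below threshold" if score < THRESHOLD else "ok"
    print(f"{model:18s} | {category:28s} | {score:.2f} | {status}")
```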
The European Commission has welcomed this initiative as a "first step" in implementing the AI Act, signaling the growing importance of such assessment tools in the evolving landscape of AI regulation.
NYT Escalates Legal Battle Against AI Firms with Perplexity Cease-and-Desist Notice
The New York Times (NYT) has intensified its stance against unauthorized use of its content by artificial intelligence companies, issuing a cease-and-desist notice to AI startup Perplexity. This move marks another significant development in the ongoing tension between traditional media publishers and AI firms over content usage and copyright issues.
Key points of the confrontation include:
- NYT alleges that Perplexity's use of its content for generating summaries and other AI-powered outputs violates copyright law.
- The publisher demands that Perplexity immediately stop all unauthorized access and use of NYT content, both now and in the future.
- NYT has requested information on how Perplexity is bypassing the publisher's prevention efforts to access its website (one common prevention mechanism is sketched after this list).
- Perplexity claims it's not scraping data for foundation models but indexing web pages and citing factual content in response to user queries.
- This dispute follows NYT's ongoing lawsuit against OpenAI for similar content usage concerns.
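For context on the "prevention efforts" referenced above: publishers commonly signal crawl permissions through a robots.txt file, and whether AI crawlers honor such signals is central to disputes like this one. Below is a minimal sketch, using only Python's standard library, of how a well-behaved crawler checks the file before fetching a page; the user-agent name and URLs are hypothetical placeholders.

```python
# Minimal sketch of a robots.txt check using Python's standard library.
# "ExampleBot" and the URLs are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

url = "https://www.example.com/2024/10/18/some-article.html"
if rp.can_fetch("ExampleBot", url):
    print("robots.txt permits ExampleBot to fetch", url)
else:
    print("robots.txt disallows ExampleBot; a compliant crawler skips", url)
```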
The situation highlights the growing legal and ethical complexities surrounding AI's use of copyrighted material, potentially setting precedents for future interactions between media companies and AI developers. As the deadline for Perplexity's response approaches, the outcome could significantly impact the AI industry's practices regarding content sourcing and usage.
FCC Launches Investigation into Broadband Data Caps and Their Impact
The Federal Communications Commission (FCC) has initiated a formal inquiry into the practice of data capping by broadband providers, marking a significant step towards understanding and potentially regulating this contentious aspect of internet service in the United States.
Key points of the FCC's investigation include:
- Examining why some providers still impose data caps, especially when many eliminated or suspended such restrictions during the COVID-19 pandemic.
- Assessing the impact of data caps on consumers, particularly low-income families, small businesses, and individuals with disabilities.
- Investigating how data caps affect competition in the broadband market.
- Reviewing the FCC's legal authority to take action regarding data caps.
- Analyzing current trends in consumer data usage, noting a 36% increase in wireless data consumption in 2023.
FCC Chair Jessica Rosenworcel emphasized that rationing internet usage is impractical for most Americans, yet millions face constant concerns about data limitations. The inquiry aims to understand why these restrictions persist despite increased broadband needs and providers' demonstrated ability to offer unlimited data plans.
This investigation comes amidst ongoing debates about net neutrality and broadband accessibility, highlighting the FCC's efforts to ensure fair and open internet access for all Americans.
AI-Generated Child Abuse Imagery Reaches Alarming Levels, Watchdog Warns
The Internet Watch Foundation (IWF) has sounded the alarm on the rapidly increasing prevalence of AI-generated child sexual abuse material (CSAM) online. This troubling development highlights the urgent need for enhanced safeguards and regulatory measures in the AI and online content spaces.
Key points from the IWF's findings include:
- AI-generated CSAM identified in just six months of 2024 has already surpassed the previous year's total.
- Most of this content is found on the open web, not the dark web, making it more accessible.
- The sophistication of these images suggests AI tools are being trained on real abuse imagery.
- 74 reports of realistic AI-generated CSAM were actioned in six months, compared to 70 in the previous year.
- Over half of the flagged content is hosted on servers in Russia and the US.
- 80% of reports come from public discoveries on forums and AI galleries.
In response to related concerns, Instagram has introduced new features to combat sextortion, including image blurring and cautionary messages for nude content in direct messages.
This situation underscores the complex challenges facing tech companies, lawmakers, and child protection agencies as AI technology evolves. It emphasizes the critical need for proactive measures to prevent the misuse of AI in creating and disseminating illegal and harmful content.
Meta Faces Legal Challenge Over Teen Social Media Addiction Claims
In a significant legal development, a federal judge has ruled that Meta, the parent company of Facebook and Instagram, must face lawsuits from numerous U.S. states alleging the company's platforms contribute to mental health issues among teenagers. This decision marks a crucial step in the ongoing debate over social media's impact on youth mental health.
Key points of the ruling include:
- U.S. District Judge Yvonne Gonzalez Rogers rejected Meta's attempt to dismiss lawsuits filed by over 30 states, including California and New York, as well as a separate suit by Florida.
- While the judge acknowledged some protection for Meta under Section 230 of the Communications Decency Act, which shields online platforms from liability for user-generated content, she found sufficient grounds for most of the states' claims to proceed.
- The ruling also allows personal injury lawsuits against Meta, TikTok, YouTube, and Snapchat to move forward.
- States are seeking court orders against Meta's alleged illegal business practices and unspecified monetary damages.
- The decision enables plaintiffs to seek further evidence and potentially proceed to trial.
This ruling reflects growing concerns about social media's addictive nature and its potential negative effects on adolescent mental health. It also highlights the increasing legal and regulatory scrutiny faced by major tech companies regarding their platforms' impact on young users.
As the cases progress, they could set important precedents for how social media companies are held accountable for the well-being of their younger users, potentially leading to significant changes in how these platforms operate and engage with teen audiences.
Federal Safety Regulator Launches Investigation into Tesla's Full Self-Driving Software
The National Highway Traffic Safety Administration (NHTSA) has initiated a new investigation into Tesla's "Full Self-Driving (Supervised)" software following reports of four crashes in low-visibility conditions. This probe marks another significant development in the ongoing scrutiny of Tesla's autonomous driving technologies.
Key points of the investigation include:
- NHTSA is examining whether the software can adequately detect and respond to reduced roadway visibility, such as sun glare, fog, or airborne dust.
- Four crashes between November 2023 and May 2024 are under scrutiny, including one fatal incident involving a pedestrian.
- The investigation comes shortly after Tesla CEO Elon Musk unveiled the "CyberCab" prototype and made claims about future autonomous capabilities.
- This new probe follows a recently closed investigation into Tesla's Autopilot system, which examined nearly 500 crashes.
- Tesla faces additional legal challenges, including a Department of Justice investigation and lawsuits over Autopilot crashes.
The NHTSA's investigation, classified as a preliminary evaluation, is expected to be completed within eight months. This development highlights the ongoing challenges and safety concerns surrounding autonomous driving technologies, particularly in adverse weather conditions.
As Tesla continues to push the boundaries of self-driving technology, this investigation underscores the critical importance of ensuring these systems can operate safely in all driving conditions. The outcome could have significant implications for the future development and regulation of autonomous vehicle technologies.
In this fast-paced, ever-changing world, it can be challenging to stay updated with the latest legal news in the AI and tech sectors. I hope you found this digest helpful and were able to free up some time for the real joys of life.
Don't forget to subscribe to receive the weekly digest next Friday.
Anita