AI Under Fire (Issue #40)
Illustration: Franklin Graves, Image generated using Adobe Express.


This is Creator Economy Law, a newsletter dedicated to exploring and analyzing the legal issues surrounding the creator economy, creators, and internet platforms. If you enjoy what you're reading, share it with friends, invite them to subscribe using the button above, and spread the word using #CreatorEconomyLaw.


Happy 2024! It's the first issue of Creator Economy Law for the new year, and I'm stoked to dive back into headlines and developments in law, policy, and more across the creator economy.


What You Should Know

AI Under Fire

Since the last issue, there have been some major developments in the world of artificial intelligence and copyright, particularly in the world of journalism. The New York Times sued OpenAI and Microsoft on December 27th.

The NYTimes complaint includes several claims of copyright infringement (direct, vicarious, and contributory), violation of the DMCA through the removal of copyright management information (CMI), unfair competition, and trademark dilution.

On January 5th, journalists Nicholas Basbanes and Nicholas Gage filed a class action against Microsoft and OpenAI. Their case is important, too, because they are authors who have (reportedly) maintained registrations for their works with the U.S. Copyright Office. It's a key turning point for many of these cases, since authors must have registered their works to take full advantage of the statute's benefits (such as the ability to file an infringement action in federal court, statutory damages, and attorneys' fees).

OpenAI vehemently disagrees as they detail in a blog post published Monday. In the post, they argue four main points: (1) their technology supports journalists and media organizations; (2) training on "publicly available internet materials is fair use" and they voluntarily provide an opt-out mechanism; (3) "regurgitation" is rare, and they are continuing to solve for it; and (4) the NYTimes was in negotiations, but now they've filed a lawsuit without warning and point to publicly available materials in their claims.

This week in the U.S., journalism continued to be a major focus. Wednesday brought us another "Oversight of AI" session from the Senate Committee on the Judiciary's Subcommittee on Privacy, Technology, and the Law. "The Future of Journalism" session was certainly controversial, with Jeff Jarvis (Tow Professor of Journalism Innovation at CUNY Graduate School of Journalism) often butting heads with the other three witnesses, offering a heavily pro-AI view in contrast to Danielle Coffey (President and CEO of News Media Alliance), Curtis LeGeyt (President and Chief Executive Officer of the National Association of Broadcasters), and Roger Lynch (CEO of Condé Nast).

Kate Knibbs at Wired reported: “It’s not only morally right,” said Richard Blumenthal, the Democrat who chairs the Judiciary Subcommittee on Privacy, Technology, and the Law that held the hearing. “It’s legally required.”

“As today’s hearing made clear, though,” Knibbs writes, “Congress is already highly critical of AI’s potential to amplify the power of the tech industry and its potentially deleterious impacts on journalism.”

RAG is an AI framework for retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information and to give users insight into LLMs' generative process.

Going forward, expect to see heightened attention given to licensing around materials used for both training and grounding (aka retrieval-augmented generation).
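For readers less familiar with the mechanics, the RAG framework described above can be sketched in a few lines. This is a minimal, illustrative sketch only: the knowledge base, the keyword-overlap retriever, and the generate() stub are all hypothetical stand-ins; a real system would use a vector database and an actual LLM API.

```python
# Minimal RAG sketch: retrieve relevant documents, then ground the
# model's prompt in them. All components here are toy stand-ins.

KNOWLEDGE_BASE = [
    "The No AI FRAUD Act was introduced in the House this week.",
    "COPPA public comments are due March 11, 2024.",
]

def retrieve(query, documents, top_k=1):
    """Naive keyword-overlap scoring, standing in for vector search."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:top_k]

def generate(prompt):
    """Stub for an LLM call; a real system would query a model here."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def answer(query):
    # Ground the prompt in retrieved (ideally licensed) content.
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    prompt = f"Use only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(answer("When are COPPA comments due"))
```

The licensing question maps directly onto the `KNOWLEDGE_BASE` step: whatever sits in that retrieval layer is content someone created, whether or not it was also in the training data.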

Axios reported this week on Fox Corp.'s use of a new blockchain protocol, called Verify Protocol, to manage licensing deals with AI companies. The tech was developed in-house at Fox through a collaboration with Polygon Labs. It sounds promising and super cool over the long term, especially if blockchain + LLMs can unite to properly track royalty management in some way!

We're also seeing the proliferation of customized chatbots that have been fine-tuned, or grounded in some instances, using a smaller set of training materials. The potential can range from curated (and licensed) content sets to personal data of an individual they've synced with a service.

These images, all produced by Midjourney, closely resemble film frames. They were produced with the prompt “screencap.” GARY MARCUS AND REID SOUTHEN VIA MIDJOURNEY [via IEEE Spectrum]

However, do AI systems designed around grounding techniques raise concerns about data leakage?

Check out this article in IEEE Spectrum that explores the ability of Midjourney users to generate images that appear to be direct replications of movie, television show, and video game scenes.

The FTC also published a blog post this week serving as a reminder to any AI company that privacy and confidentiality remain important. They wrote, "Model-as-a-service companies that fail to abide by their privacy commitments to their users and customers, may be liable under the laws enforced by the FTC."

On Wednesday, Reps. María Elvira Salazar (R-FL) and Madeleine Dean (D-PA) introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act. The bill (full text) establishes a federal framework to protect Americans' individual right to their likeness and voice against AI-generated fakes and forgeries. It aims to be a federal-level answer to what historically has been a state-by-state approach to managing name, image, likeness, and voice (NIL) rights. Steven Brachmann wrote a great summary of the bill over at IPWatchdog. Check it out.

What's up next in my podcast queue? This week's interview with Adobe General Counsel and Chief Trust Officer Dana Rao from Nilay Patel over at The Verge. Yes, they talk copyright!

Also, check out this updated spreadsheet of the "AI Copyright Wars" put together and managed by Edward Lee over on ChatGPT Is Eating the World.

How much is digital media content worth to AI companies?

$1 million a year? $5 million? The Information's latest report cites two executives who have been involved in negotiations with OpenAI. They are saying that OpenAI offered between $1 million and $5 million per year to license news articles. This isn't exactly going to keep the lights on or the presses running, as the article's authors note.

We're at a critical stage in the overall economies of LLMs. Last year, we saw news of licensing deals reached by companies such as Axel Springer (Politico, Business Insider) and The Associated Press with OpenAI to offer news articles as part of responses in ChatGPT. OpenAI even entered into a deal with Shutterstock.

The ongoing licensing work is shaping a future in which content publishers are adequately compensated and can survive as LLMs become more robust across use cases, and/or one in which the tech companies building and using LLMs can innovate while still offering an affordable service or product that isn't priced out of the market by licensing costs. Are both objectives achievable?

The most recently filed copyright infringement lawsuit by The New York Times highlights the leverage and negotiation tactics of each party. If a content company isn't happy with the terms of any licensing deal, they can take the chance in court and see how things shake out. On the flip side, tech companies can potentially obtain a judicial outcome that negates the need for any content licensing regime at all (hello, fair use).

Regardless, there's an argument that by entering into licensing deals, the tech companies may undercut defenses available in their ongoing copyright infringement lawsuits.

When you think about it, wouldn't you love to be representing the plaintiff content owners and be able to stand up in court, waving a licensing agreement that shows an inherent value and market for content as training materials?

But perhaps there's a distinction to be made based on the types of deals that are being pursued. If tech companies building LLMs and offering chatbot interfaces need to ensure accuracy and trustworthiness, what better way to do that than by having more direct use of the content in the form of grounding or fine-tuning the foundational models?

In short, this would mean that any response to a user's prompt would utilize the foundation LLM (which didn't utilize a content license) but then check against a more curated, licensed database of content, or a fine-tuned model, to give a more accurate response.

Lots to think about right now! What do you think?

FTC Wants Your Thoughts on COPPA Revisions

Back in December, the FTC announced a renewed focus on updating the Children’s Online Privacy Protection Act (COPPA) Rule. Today, they announced the opening of the public comment period for the proposed revisions to the COPPA Rule. The deadline is March 11, 2024.

To recap, the proposed changes to COPPA from the FTC include:

  • Requiring Separate Opt-In For Targeted Advertising
  • Prohibition against conditioning a child’s participation on collection of personal information
  • Limits on the support for the internal operations exception
  • Limits on nudging kids to stay online
  • Changes related to Ed Tech
  • Increasing accountability for Safe Harbor programs
  • Strengthening data security requirements
  • Limits on data retention

The proposed changes would also bring "biometric identifiers" within the definition of "personal information," and the Commission could consider factors such as marketing materials, reviews, or the age of users on similar services when determining whether a service is directed to children.

Also, save the date. The FTC announced a virtual summit on Artificial Intelligence scheduled for the afternoon of January 25th, from 12:00 PM to 4:30 PM ET. No registration is required, but check out the event page.

Where is the line drawn between an employee and a contractor in the creator economy? It may not be as clear as you think! The NLRB is challenging Google, but the company is already appealing.

"Creators on any given platform often have control over critical factors that labor enforcement agencies and courts examine." What are those factors? Read the newsletter and find out!

I had the chance to share some thoughts with Passionfruit earlier this week as part of their creator newsletter. Check it out (and subscribe)!


Don't Miss


Microsoft is under pressure again, both internally and externally, following a renewed focus on its AI research lab in Beijing, China, according to four employees.

After opening the lab in 1998, "The company hired hundreds of researchers for the lab, which pioneered Microsoft's work in speech, image and facial recognition and the kind of artificial intelligence that later gave rise to online chatbots like," report Karen Weise, Cade Metz and David McCabe for The New York Times (read the article).

The Federal Trade Commission may be given authority to demand transparency with training materials for foundational models...

Google is reportedly in an active round of layoffs, as reported by The New York Times. The Verge speculates that nearly 1,000 employees are being impacted across hardware teams, including AR, Pixel, Nest, and Fitbit, plus core engineering teams and Google Assistant.

Amazon is cutting 500 employees at Twitch (reportedly about 35% of the staff). The decision was announced in a blog post on Wednesday.

Amazon is also cutting several hundred jobs in its studio divisions, which include Prime Video and MGM.

After just five months, the Head of Agency Partnerships at X is leaving the company.

The Securities and Exchange Commission's X account was hacked on Tuesday. A post containing false information that the SEC had approved Bitcoin ETFs, a highly awaited development in the crypto world, remained up for 30 minutes.

Meta announced new protections to give teens more age-appropriate experiences across its apps. The company notes it will begin “to hide more types of content for teens on Instagram and Facebook” based on expert guidance. They’ll also automatically put “all teens into the most restrictive content control settings.” Teens on Instagram will also see a prompt to update their privacy settings which takes one tap to implement.

Resso, the music streaming service from TikTok, is shutting down in India at the end of the month after launching in 2020.

TikTok also passed $10 billion in revenue from in-app purchases (the first non-gaming app to reach the milestone), according to a study from data.ai and as reported by Music Business Worldwide.

Music Business Worldwide also dives into some numbers behind music streaming platforms. 158 million tracks had 1,000 plays or fewer on music streaming platforms in 2023. Meanwhile, 45 million tracks had ZERO plays.

During CES this week, Amazon announced that Matter Casting support is coming to more devices and that AI-powered experiences, built by developers, are coming to Alexa. Check out their full list of announcements from the show.

Also during CES, Sony Music unveiled "next generation immersive music experiences" for Fortnite and Roblox. via Music Business Worldwide

The Anti-Defamation League published a report in December on data accessibility by social media platforms. "TikTok is restricting access to its Creative Center, its only tool to study trending hashtags on issues such as the Israel-Gaza war & other topics," they posted on X. "This is a step in the wrong direction."

New York published an interesting profile by Brock Colyar on Bethenny Frankel, a former RHONY castmate turned entrepreneur who now builds a following on TikTok. Read or listen on Apple News+ or over on The Cut.

Last month, a new Pew Research Center survey of U.S. teens shows they are still using social media despite the highly publicized risks and harms. Check out the full report: "Teens, Social Media and Technology 2023".

"California court decision strengthens Facebook's ability to deplatform its users," writes attorney Evan Brown.

Laws restricting children from accessing websites were all the rage in 2023. To kick things off this year, the Ohio law faced a temporary restraining order, brought by NetChoice, which was granted by the court. Eric Goldman gives a full breakdown.

The Verge reports on YouTube's efforts to crack down on AI-generated true crime deepfakes.

During an event at CES, Replica Studios, an AI voice technology company, and SAG-AFTRA announced a new agreement for professional voice-over artists to explore new employment opportunities for their digital voice replicas, with "industry-leading protections" tailored to AI technology. The deal allows AAA video game studios and other companies working with Replica to access top SAG-AFTRA talent.


Learn with Me
A picture of the Roundtable Report

It was an honor to participate in this year’s Metaverse Safety Week among such an incredibly talented and bright group of experts! Now, the written report is out!

I’m excited to have contributed a small bit of thinking on the role trade secrets play when it comes to transparency and accountability.

Follow the link to download a copy of the full report!


Music Video of the Week

Sofi Tukker and The Knocks? I'm sold! Their new track is so fun and will for sure get you groovin'. Plus, it's a great break from Kylie Minogue's Extension (The Extended Edition) that I've had on repeat.

Watch on YouTube or Apple Music.


Editor's Notes

Affiliate Links. As an Amazon Associate, I earn from qualifying purchases. I have noted above where links to products on Amazon may earn me a commission if you make a purchase. Thanks for supporting my work!

Not Legal Advice. This newsletter is published solely for educational and entertainment value. Nothing in this newsletter should be considered legal advice. If you need legal assistance or have specific questions, you should consult a licensed attorney in your jurisdiction. I am not your attorney. Do not share any information in the comments you should keep confidential.

Personal Opinions. The opinions and thoughts shared in this newsletter are my own, and not those of my employer or any of the third parties mentioned or linked to in this newsletter. No affiliation or endorsement is implied or otherwise intended with third parties that are referenced or linked.


Enjoying this? Share with someone you think might be interested! If this was forwarded to you, jump over to LinkedIn and subscribe for free.

Brian T. Edmondson, Esq.

I show online entrepreneurs how to legally protect their business, brand, & body of work.


That robot doesn't happen to be reading the New York Times?

Cyrus Johnson

AI/Law Thought Leader + Builder | Attorney Texas + California 22Y | Corporate Investment Technology | Post-Scarcity Law | gist.law | i(x)l | aicounsel.substack.com | @aicounseldallas on X


t r a n s f o r m a t i v e u s e
