Neal's Deals (Vol. 65) - AI Goes Hollywood: The Case of the Stolen Voice
Hey everyone - As some of you may know, actress Scarlett Johansson recently made headlines after expressing concern that OpenAI had created an AI voice model closely resembling hers, all without her consent. Johansson said she was "shocked, angered, and in disbelief" upon hearing the demo, highlighting the uncanny resemblance. In response to the public uproar, OpenAI temporarily paused its voiceover tool and said the resemblance was unintended and against its ethical standards.
The incident not only underscored the issue of deepfakes but also triggered broader discussions about safeguarding individuals' name, image, and likeness rights in the era of generative artificial intelligence. As AI advances, the ethical debate increasingly revolves around the use of copyrighted materials. This dynamic is pushing tech leaders and regulators to navigate the fine line between technological advancement and respect for individual rights and creativity in the digital realm. So in this edition of Neal's Deals, we explore the implications of this latest controversy across law, technology, and the startup ecosystem.
Legal implications
OpenAI faces several significant legal risks, particularly around copyright and publicity rights. Although copyright law might come into play if OpenAI had sampled Scarlett Johansson's films or published works without permission, the company claims it did not use her actual voice but that of a different actress. This claim potentially shields OpenAI from copyright infringement, but it does not insulate the company from publicity rights violations. Publicity rights laws, which are particularly strong in California, protect individuals from unauthorized commercial use of their likeness, including their voice. Johansson could argue that OpenAI violated her publicity rights by monetizing a voice closely resembling hers, even if it wasn't her actual voice.
The legal landscape for publicity rights is supported by precedents in successful lawsuits against companies that used sound-alikes in advertisements. These cases highlight that California's laws protect not just the actual voice but also the recognizable persona associated with it. OpenAI's defense, which claims no intent to imitate Johansson and asserts that promotional videos were not advertisements, may be weakened by public perception and statements implying a deliberate association with Johansson. Regardless, this situation highlights the broader challenges of navigating legal and ethical considerations in the evolving field of AI, where existing laws might need adaptation to address new technologies adequately.
Technology implications
AI deepfakes, which include manipulated videos, photos, and audio recordings, are created using machine learning, specifically deep learning. This technique involves feeding an algorithm many examples so that it learns to produce outputs resembling those examples. As developers decide what data to include, AI models evolve, sometimes building on prior iterations and sometimes being trained from scratch. This continuous evolution allows AI to assist creators with tasks like marketing, rights management, and the creative process itself, showcasing the technology's potential to augment human ingenuity.
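To make the "feeding an algorithm examples" idea concrete, here is a deliberately tiny sketch of my own. A character-level Markov chain is nowhere near a deepfake model, but it illustrates the same fit-then-sample principle: it is trained on example strings and then generates new output resembling them.

```python
import random

def train(examples):
    """Learn which character tends to follow each character in the examples."""
    model = {}
    for text in examples:
        for current, following in zip(text, text[1:]):
            model.setdefault(current, []).append(following)
    return model

def generate(model, seed, length=20):
    """Sample a string that statistically resembles the training examples."""
    out = [seed]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed successor for this character
        out.append(random.choice(candidates))
    return "".join(out)

model = train(["hello world", "hello there"])
print(generate(model, "h"))  # output resembles the training strings
```

A real voice model replaces the character-frequency table with a deep neural network and the strings with hours of audio, but the pipeline, collect examples, fit, then sample, is the same.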
However, the misuse of audio deepfakes for malicious purposes, such as scams or disinformation, poses significant challenges. The ability to create convincing voice replicas from short samples has enabled scammers to execute fraudulent schemes and spread disinformation, eroding trust in media and institutions. The increasing prevalence and sophistication of audio deepfakes highlight the urgent need for robust detection technologies and strategies to counter their misuse.
What does this have to do with the startup ecosystem?
A new category of AI cybersecurity startups is emerging to protect personal information and mitigate the risks associated with deepfakes. Organizations must proactively plan to defend against deepfakes by incorporating this threat into regular training and security testing. Security teams need to understand how to detect deepfakes used to impersonate employees, vendors, partners, or customers, and to safeguard their most sensitive business assets, particularly data.
In my role, I am seeing dozens of early-stage startups beginning to launch within this emerging cyber category. Let me know if you want to exchange notes and discuss further!
Let’s get to it:
HoundDog.ai, a San Francisco startup that helps developers detect and prevent leaks of sensitive personal data in their code before it is released, raised a $3.1 million seed round with E14 Fund, Mozilla Ventures, and Ex/ante participating.
Why this is interesting: HoundDog is helping developers prevent code from leaking personally identifiable information. Unlike other scanning tools, HoundDog examines the code developers write, using traditional pattern matching and large language models to identify potential issues within the continuous integration flow, ensuring data leaks are caught before the code is merged. By focusing on the actual code rather than the data flow, the company can flag issues like the collection of Social Security numbers and alert the team before merging, preventing costly problems. As companies increasingly adopt AI-generated code, embedding security best practices early in the development cycle becomes crucial. What excites me most is that HoundDog goes beyond being a cyber prevention company; it reduces compliance costs for startups through automated reporting and generating records of processing activities, a capability that will become increasingly important in the coming years.
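HoundDog's actual engine is proprietary, but the "traditional pattern matching" half of the approach can be sketched roughly. The snippet below is my own hypothetical illustration, not HoundDog's code: a scanner that flags source lines likely to handle US Social Security numbers before the code is merged.

```python
import re

# Hypothetical example: a literal SSN (e.g. a hardcoded test value) or a
# variable name mentioning "ssn" are both worth flagging for review.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_for_pii(source: str):
    """Return (line_number, line) pairs that appear to handle SSNs."""
    findings = []
    for number, line in enumerate(source.splitlines(), start=1):
        if SSN_PATTERN.search(line) or "ssn" in line.lower():
            findings.append((number, line.strip()))
    return findings

code = 'user = {"name": name}\nuser["ssn"] = form["social_security_number"]'
print(scan_for_pii(code))  # flags line 2
```

A CI step would run a scanner like this on every pull request and block the merge when findings are non-empty; the LLM layer the article mentions would then reduce false positives by judging whether a flagged line really collects sensitive data.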
Slingshot, a Los Angeles startup that provides financial tools tailored for content creators to help manage their earnings and financial operations more efficiently, raised a $2.2 million pre-seed round from Dorm Room Fund, 1916 Enterprises, and Key Partners Group.
Why this is interesting: Many talented creators today are forced to spend countless hours navigating the complex financial and business challenges of self-employment. Slingshot has emerged in the $250 billion creator economy with a unique approach to providing this infrastructure. The company differentiates itself by centralizing features and data, offering automated bookkeeping linked to its business card, and partnering with banks to facilitate financial transactions. Slingshot also provides additional benefits like saving a percentage of revenue, healthcare, retirement plans, HR services, legal filing, and tax paperwork management. Initially focused on providing legal and financial infrastructure to musicians, the company pivoted in late 2022 to serve a customer segment with more nuanced needs. However, my concern with the opportunity is that the creator economy consists of solo entrepreneurs who can churn quickly and often lack the financial resources to afford expensive contracts. This market structure can also be challenging from a go-to-market perspective. I'm not familiar with the founder, but unless he has strong distribution in this space, it may be difficult to support a venture-scale outcome.
Remark , a New York startup that offers live chat services to online stores to help them increase sales by providing expert advice to shoppers in real-time, raised a $10.3 million round from investors including Spero Ventures, Stripe, Shine Capital, and Neo.
Why this is interesting: Shopping can be fun, but making a decision among thousands of options can be overwhelming. That's where Remark comes in. This startup helps shoppers buy with confidence by connecting them with high-quality product experts through an asynchronous live chat. Remark's network of 50,000 experts, ranging from artists to ski instructors, offers personalized advice similar to in-store staff. The company also trains AI models on these experts to create personas that provide consistent assistance, even when human experts are unavailable. Remark's expert-assisted shopping surpasses traditional AI-based algorithms by focusing on presale decision support. This innovative approach has led to impressive results, including a 9% revenue lift and a 30% conversion rate for its clients. If the cost and time-to-value are reasonable, this solution seems like a no-brainer from a brand's perspective. I'm curious, though, to understand the delta between Remark's solution and generic AI chatbot tools. While having a human in the loop adds value, if AI chatbots can offer the same level of service in the future, then their product might not be too defensible…
Deals in the Works: If you want to learn more - feel free to reach out
_____
Quote of the week:
“Our greatest weakness lies in giving up. The most certain way to succeed is always to try just one more time.”
— Thomas A. Edison
_____
Portfolio Company Spotlight!
Hello Wonder is the future of the internet for kids. Founded by a single father of five alongside former Google/Amazon/Disney veterans, it's the first AI designed exclusively for families.
Wonder leverages AI and browser technology to block damaging sites in real time, while surfacing results that are educational and fun. Wonder helps kids as young as 4 discover quality, safe content from across the internet. At the same time, it allows parents to shape the values and content their kids are exposed to. Inspired by Montessori philosophy, Wonder supports child-led search so kids can explore and develop their own passions and interests, rather than having their attention hijacked by a self-serving algorithm.
Learn more here!
_____
Have a great weekend everyone, and excited to see you at NYC Tech Week!