AI Hits a Sour Note With Musicians

The list of signatories reads like a Who's Who of the music world, including legends from virtually every musical genre and era since the 1940s.

But this massive project isn’t a remake of “We Are the World.” It’s an open letter to tech companies and legislators demanding action to protect musical artists against the perceived threat of AI.

But are the blues these artists are singing real, or are they just whistling in the wind? What is it about AI that poses such a massive danger to names like Stevie Wonder, Billie Eilish, Jon Bon Jovi, and Kacey Musgraves?

And what do the artists hope their letter will accomplish in safeguarding the future of artists' rights?

The AI “Art” Juggernaut

(Author’s note: From this point on, when I surround individual words or phrases in quotes, I am referring specifically to the AI usage of those terms. Full sentences in quotes are, of course, direct quotes.)

From its very inception, artificial intelligence has had both its detractors and its proponents.

Pro-AI developers have touted AI as the next logical step in the evolution of the arts, ranging from music to painting to photography to sculpture. AI-crafted “novels” are drawing massive attention in the literary world, causing consternation among authors and celebration from AI advocates.

There have been AI-generated “deepfakes” of famous artists, such as Jay-Z delivering the Hamlet soliloquy, or Drake and The Weeknd “collaborating” on a song that neither of them ever actually sang.

And, of course, there were the infamous faux “nudes” of pop/country legend Taylor Swift, which were created by AI. (I won't include a link here, for obvious reasons.)

But as AI gets more advanced, artists in various disciplines, including the music industry, are sounding the alarm about the AI “art” juggernaut and its anticipated impact on artists in an already tenuous digital economy.

To understand why this is, we first need to take a brief digression and look at how AI programs “learn” their “craft.”

Building the Perfect Monster

Like a human being, an AI program does not simply emerge from the depths of cyberspace “knowing” things. It has to be “taught” and “trained.”

The method, much like a human’s education, involves exposing the AI to new material until patterns emerge. In 2019, a group of musicologists, aided by AI, assembled to attempt to complete Beethoven’s 10th Symphony, which the composer left unfinished at his death.

Using a mixture of Beethoven’s musical sketches and a body of recordings of his finished works, the AI-assisted team was able to produce a full orchestral version of Beethoven’s 10th, which was performed for the first time in Vienna in October 2021, roughly 194 years after his death.

According to one member of the team, members of the public who were well-acquainted with Beethoven’s work were unable to determine where the composer’s sketchwork left off and the machine took over.

This may seem like a fairly benign, even noble, use of AI: completing the last work of a musical genius whose reputation still looms large today. But AI has been put to more sinister, or at least more questionable, uses that hint at the origins of the programs’ “knowledge” and raise chilling questions about intellectual property and the rights of those who create and hold it.

One common method of “training” AI programs is to simply feed a whole lot of data into the system. On paper, there’s nothing wrong with this. After all, that’s how we teach human children, right?

We fling vast amounts of data at them in their formative years, through schoolbooks, television, and other media, confident that at least some of it will stick.
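
To make the “feed it data until patterns emerge” idea concrete, here is a minimal, purely illustrative sketch in Python. Everything in it (the toy corpus, the transitions table, the generate function) is invented for this example; real generative systems operate at a vastly larger scale with far more sophisticated models, but the underlying principle of learning statistical patterns from a body of data is the same.

```python
# A toy sketch of "training": tally which word tends to follow which in a
# scrap of text, then "compose" new text purely from those learned patterns.
import random
from collections import defaultdict, Counter

corpus = (
    "the band played on and the crowd sang along "
    "the band played the blues and the crowd sang the blues"
).split()

# "Training": count how often each word follows each other word.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def generate(seed: str, length: int = 8) -> str:
    """Emit new text by sampling from the learned word-to-word frequencies."""
    words = [seed]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Example output (random): "the band played the blues and the crowd sang"
```

The model never “understands” the lyrics; it only reproduces the statistical shape of whatever data it was fed, which is precisely why the provenance of that data matters so much.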

But where does the data to “train” AI programs come from?

In many cases, it has been alleged that AI programs are “trained” on datasets sourced from questionable or outright illicit channels, such as digital repositories of pirated books or pop songs. ChatGPT’s creators are currently facing a class-action lawsuit from authors who allege that the program’s dataset was sourced in part from such repositories, which were found to contain works created by the authors in question.

A similar suit, filed by journalistic outlets including Reuters, The Intercept, and Raw Story, claims that OpenAI and/or Microsoft, the defendants in the case, deliberately stripped identifying authorial and copyright information from the material fed into ChatGPT’s dataset. This alleged violation of the DMCA’s statutory provisions, the suit says, had the effect of making the chatbot seem far more “intelligent” and all-knowing than it actually was, because it was simply regurgitating unattributed information rather than actually “thinking” and responding intelligently.
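
For illustration only, here is a hypothetical Python sketch of the kind of preprocessing the suits describe: a step that discards byline and copyright fields before text enters a training corpus. This is not anyone’s actual code, and the record fields and function names are invented; it simply shows how little machinery such “stripping” would require, and why its presence or absence becomes a factual question for the courts.

```python
import re

# Hypothetical raw records, each carrying the kind of "copyright management
# information" the DMCA protects: an author byline and a copyright notice.
raw_records = [
    {
        "author": "A. Journalist",
        "copyright": "© 2023 Example News Co.",
        "body": "EXAMPLE CITY — The city council voted on Tuesday...",
    },
]

# Matches standalone copyright lines that might appear inside the body text.
COPYRIGHT_LINE = re.compile(r"^\s*(©|Copyright\b).*$", re.IGNORECASE | re.MULTILINE)

def strip_attribution(record: dict) -> str:
    """Return only the article body, dropping the byline and copyright fields
    and removing any inline copyright notices from the text itself."""
    return COPYRIGHT_LINE.sub("", record["body"]).strip()

# The resulting "training corpus" contains unattributed text only.
training_corpus = [strip_attribution(r) for r in raw_records]
print(training_corpus[0])
```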

According to the creators of ChatGPT and other AI programs currently being challenged in the courts, the datasets they utilize were sourced from the internet and fall under the heading of fair use, which would make the use noninfringing. The plaintiffs, additionally, would have to prove that the author and copyright information was intentionally removed. The defendants, however, would have to show that using such datasets did not in fact violate anyone’s intellectual property or other legal rights, which may prove very difficult under a strict reading of the “fair use” guidelines. The US Copyright Office says:

“Under the fair use doctrine of the U.S. copyright statute, it is permissible to use limited portions of a work including quotes, for purposes such as commentary, criticism, news reporting, and scholarly reports.”

Two problems are revealed in this definition. The first is that the allegedly infringing repositories from which the datasets were sourced did not include merely limited portions of a work, but whole songs, albums, articles, and books, which would not meet the limited-portion standard fair use requires. The second is that it would be very difficult for the defendants to prove that the intent behind the use was actually covered by the valid purposes for invoking the fair use doctrine, as an AI program cannot be said to engage in any of those purposes on its own.

And The Band Played On

This brings us back to the problem modern musicians face with regard to AI. As AI proliferates, its users can generate their own “original” tracks from programs trained on datasets that include these performers’ works. Musical artists rightly fear that this may dilute and diminish their earning power to an untenable degree. In an age of on-demand streaming, where practically any song you can think of is available somewhere online for no more than the cost of an internet connection and album and single sales have been dragged sharply downward, this is a frightening prospect, one that positions AI as a legitimate existential threat.

Further raising the stakes, many of these artists view current iterations of AI as a springboard to unchecked copyright infringement on a scale never before conceived of. Based on the rumblings emerging from the literary and visual arts worlds, such as last year’s SAG-AFTRA and Writers Guild strikes, which put AI front and center in negotiations for screen actors and writers, their fears appear to be well-founded. Musicians, being more viscerally tied to the moods, whims, and perceptions of their audiences than most other types of artists, are even more vulnerable to infringement, copycatting, deepfakery, and other digital chicanery than the artistic community at large.

Laws like Tennessee’s newly passed ELVIS (Ensuring Likeness, Voice, and Image Security) Act, which aims to protect musicians and other musical artists from the real and perceived harms that unchecked AI proliferation may cause, are being considered and passed throughout the country. But many musicians fear that these laws are too little, too late, and too restrained a response to a technology evolving far too quickly for the law to have any hope of keeping up.

To be fair, this is not an unreasonable fear either. As I’ve noted several times in the past, the legal and legislative systems tend to move at a glacial pace in the face of new technology. The fact that AI is being discussed in the context of legislation so soon after its emergence is a telling hallmark of how concerned our society as a whole is about AI’s potential risks and dangers. And the fact that so many artists of so many stripes, from the crooner who sang the soundtrack of your first kiss to the actor who starred in your latest favorite movie, are worried enough about AI to unite on a scale never before imagined to decry its use is a telling and chilling barometer of just how far afield AI’s effects are already being felt, and of where it might end up.

ABOUT JOHN RIZVI, ESQ.

John Rizvi is a Registered and Board Certified Patent Attorney, Adjunct Professor of Intellectual Property Law, best-selling author, and featured speaker on topics of interest to inventors and entrepreneurs (including TEDx).

His books include "Escaping the Gray" and "Think and Grow Rich for Inventors" and have won critical acclaim, including an endorsement from Kevin Harrington, one of the original sharks on the hit TV show Shark Tank, who is responsible for the successful launch of over 500 products resulting in more than $5 billion in sales worldwide. You can learn more about Professor Rizvi and his patent law practice at www.ThePatentProfessor.com.

Follow John Rizvi on Social Media

YouTube: https://www.youtube.com/c/thepatentprofessor
Facebook: https://business.facebook.com/patentprofessor/
Twitter: https://twitter.com/ThePatentProf
Instagram: https://www.instagram.com/thepatentprofessor/

Shravan Kumar Chitimilla

Information Technology Manager | I help Clients Solve Their Problems & Save $$$$ by Providing Solutions Through Technology & Automation.

11 months ago

Interesting perspective! John Rizvi

Frank Frisby

Machine Learning Engineer, Cofounder AI

11 months ago

Honestly, it would be interesting for these artists to sign a letter. But for them to stop AI would be extremely difficult, as this technology has permeated the fabric of society. It would bode well if they actually supported it and found a blockchain-based distributed design. But blocking it will only make indie and undiscovered artists more discoverable.

Stanley Russel

Engineer & Manufacturer | Internet Bonding routers to Video Servers | Network equipment production | ISP Independent IP address provider | Customized Packet level Encryption & Security | On-premises Cloud

11 months ago

The concerns voiced by Stevie Wonder, Billie Eilish, Katy Perry, and other musicians regarding AI's potential infringement on their creative rights reflect the evolving landscape of technology and its impact on the arts. AI's capacity to replicate artistic works raises valid questions about intellectual property protection and the future of artistic expression. Beyond a mere letter, exploring avenues for collaboration between artists and AI developers could foster solutions that balance innovation with respect for creators' rights. How do you envision the intersection of AI and artistic integrity evolving, and what steps can be taken to address these complex ethical considerations?
