What is the future of Music Production with AI?

Hey Everyone,

If we’re after data-driven analyses of the musical trends of yesterday and today, it would be hard to beat Can't Get Much Higher. Thankfully, Christopher Dalla Riva was ready to explore this topic for us.

Chris is a musician who spends his days working on analytics and personalization at Audiomack, a NYC-based music streaming service with millions of monthly users.


From our sponsor:



New AI Infrastructure Research


Download this new survey of global AI leaders on the state of AI infrastructure at scale, exposing GPU challenges. Dive into respondents’ current scheduling, compute, and AI/ML needs and plans for training and deploying models. Read the research =>


Get the Report


This is Chris’s newsletter, which I recommend if you enjoy the worlds of data and music and how they intersect.

Can't Get Much Higher

The intersection of music and data

By Chris Dalla Riva


Bio

As a musician and product manager, Chris is uniquely qualified to talk about this topic. Outside of work, he writes Can’t Get Much Higher, a weekly newsletter about the intersection of music and data. His writing has been featured in The Economist and Business Insider, among other publications.


Subscribe to CGMH


If you want to get my best deep dives, consider going premium.





By Christopher Dalla Riva, February-March 2024.


The Day the Music Lied

When the song “Heart on My Sleeve” by Drake & The Weeknd came out in April 2023, it shook the music industry. The industry wasn’t shaken because two of its biggest stars were collaborating. It was shaken because the two pop stars weren’t collaborating.

“Heart on My Sleeve” was created by a shadowy figure named Ghostwriter, and the song racked up millions of streams across TikTok, YouTube, and Spotify before Universal Music Group forced platforms to take it down. Ghostwriter used new technology powered by artificial intelligence to closely simulate the voices of both Drake and The Weeknd. Despite the questionable ethics behind the creation of the song, listeners loved it. Here are some posts on Twitter from around the time the song went viral.

  • Daily Loud : “this Drake & The Weeknd A.I song is a hit”
  • @MeekMill : “A.I generated song Drake ft. The Weeknd - heart on my sleeve via @YouTube this my 5th time banging this and it’s flame…. We need new music from y’all 2”
  • @TrungTPhan : “This song is a new collaboration between Drake and The Weeknd. It’s called “Heart On My Sleeve” and is an absolute banger. It’s also completely AI generated.”
  • @itroll93 : “Heart on my Sleeve is so much better than anything Drake has put out in two years”

Technological upheaval always seems to affect music before other cultural industries. When the file-sharing site Napster was forced to shut down in 2001, for example, Netflix was still a private company mailing DVDs and six years away from launching a video streaming service. In fact, in the same time frame where you went from buying a VHS to buying a DVD in order to watch a movie at home, you also bought vinyl, 8-tracks, cassettes, CDs, MP3s, and a few other things to listen to music at home.

Part of the reason that musical technology is always evolving is cost: making music is dramatically cheaper than making movies. It’s also a matter of size and complexity. Downloading a song to my phone only takes up about 3 megabytes of space. A television episode is closer to 100 megabytes, and a movie is at least 700.

Because of these factors, along with a few others, artificial intelligence technology will likely disrupt the music industry much sooner than other cultural industries, with “Heart on My Sleeve” merely a harbinger of what is to come. But to understand what “Heart on My Sleeve” is a harbinger of, we need to understand how the music industry itself has changed in the last few decades.




The Changing Incentives in the Music Industry

What we think of as the modern music industry didn’t exist until the late 1800s, when Thomas Edison invented the phonograph. Before that, the music industry was largely centered on sheet music sales and live performance. The phonograph and subsequent improvements made it possible for music to be recorded and sold to the public to play again and again. From that point forward, the music industry was fundamentally a product industry, selling music in many different formats to consumers, along with devices to play that music on.

Because of this, the earliest and most successful music labels were partially technology companies. Edison Records invented the wax cylinder. RCA Victor invented the Victrola. Columbia Records invented the LP. Decca Records invented stereo sound. Philips Records invented the cassette. Sony invented the CD. If we look at industry revenues over the past half century, we can see the products – usually internally invented – that powered the music industry.

Everything changed around the year 2000, though. That’s when industry revenues began to collapse as digital file sharing sites like Napster, Limewire, and Kazaa ballooned in popularity. File sharing and the technology that made it possible, including the world wide web and the MP3, represented a break with the past because they were a disruptive technology that emerged outside of the music industry. The industry’s response to this disruption was twofold.

First, the music industry kicked off a ton of litigation against both file sharing sites and fans. Second, it turned back to what it knew best: selling music in a new format, this time downloads on digital music stores like iTunes. Even at the height of their powers, digital downloads were no panacea for an industry in trouble. When they peaked at $3.6 billion in inflation-adjusted revenue in the United States in 2012, the music industry was generating the least amount of money since the RIAA began tracking in 1973.

The rise of music streaming in the 2010s finally began to turn things around. Like file sharing, the disruptive technology that underpinned streaming also emerged outside of the music industry. The underlying economics of music streaming were fundamentally different from the model that had supported the industry for almost a century. Streaming is about access rather than ownership. Subscribers pay a flat monthly fee for the ability to listen to whatever they want as much as they want.

Streaming has cemented the idea that the music industry no longer generates revenue by selling different musical products to listeners. They make money by licensing intellectual property to various outside entities, the most important of which are streaming services. This fundamental shift was driven by technological innovation, but two other factors also played a role.

  • Expansive Copyright Protection: When the modern music industry was born in the early-1900s, copyright protection only applied to songs, not recordings, and lasted for 28 years with an option to extend it an additional 28 years. Today, copyright applies to both songs and recordings and lasts the life of the author plus 70 years.
  • Industry Consolidation: The music industry was highly concentrated in the early 1900s. Independent labels emerged in greater number in the middle of the century before conglomeration began again as the century came to a close. Today, three companies – Sony Music Entertainment, Universal Music Group, and Warner Music Group – control a large portion of the world’s most popular music.

All of these factors combined leave us where we are today. Rather than labels developing the technology that made it possible to market music in different formats, we have three publicly traded companies that make most of their money by licensing huge swaths of intellectual property that they will control for over a century. It’s in this climate that AI technology is going to shape the next generation of music.

How AI Will Transform the Music Industry in the Near Term

In a recent profile in the New Yorker, Universal Music Group head Lucian Grainge noted that while every wave of technology – “From sheet music to upright pianos to big bands and the huge CBS radio network” – was once painted as a threat to the music business, technological improvements have usually been to the benefit of the industry. In the early 1980s, Grainge recalls, samplers and drum machines made it possible for people to make music who “lacked access to instruments, music lessons, and studios.”

Despite the apocalyptic predictions that have emerged around AI and music, I think their marriage will ultimately end up being similar to previous technological waves, generally enhancing rather than harming the creative process. Here are some areas where AI is already beginning to rear its head.

Vocal Impersonation & Translation

When the aforementioned “Heart on My Sleeve” took the internet by storm, its billing as an “AI Hit” was slightly misleading. When most people think of an AI hit, they are imagining some computer program creating a popular song with little human intervention. “Heart on My Sleeve” required tons of human intervention.

“Heart on My Sleeve” was written and produced by a human. The “AI” in “AI Hit” was technology trained on the voices of Drake and The Weeknd that could make another human sound like those pop stars. This impersonation trend is likely here to stay. In fact, it is part of a longer trend that began with the ability to buy digital simulations of certain guitar tones and studio ambiances.

While many of the impersonations that you find online – like “Heart on My Sleeve” – were done without permission, a wave of firms is trying to bring the idea into the legal realm. Voice Swap AI, for example, allows users to license the voices of notable musicians.

Other producers are also using AI-powered impersonation to translate their music. Lauv, an American artist best known for his hit “I Like Me Better”, used AI technology to build a model of his voice. Later, he had Korean artist Kevin Woo sing his newest song “Love U Like That”. The AI model then transformed Woo’s performance to sound exactly like Lauv, albeit now singing in Korean. These AI-powered voice filters will become commonplace.

Hooky Sessions: Creating Lauv's "Love U Like That (Korean Version)" with AI

Musical Remixes & Reimaginations

One of the most interesting trends of the last few years is the boom in popularity of sped up and slowed down versions of songs. If you’re not familiar, it is exactly what it sounds like. There is a song that people are enjoying. Somebody speeds the song up or slows it down, applies a small bit of processing, and then posts that version to the internet. That modified version can become more popular than the original recording.

AI will likely turbocharge a listener’s ability to adjust the songs they like far beyond the small modifications exemplified by the recent speed and reverb trends. Imagine, for example, you are listening to the Guns N’ Roses ballad “Patience”. You wonder to yourself, “What would it sound like if Guns N’ Roses performed ‘Welcome to the Jungle’ in this style?” Unless you could dig up some rendition of the group doing their hard rock classic with acoustic guitars, you’d be out of luck. But with sufficiently advanced AI, the song could likely be reimagined without amplifiers and distortion with the click of a button.

Though this technology isn’t readily available yet – in fact, most efforts seem to be focused on vocal rather than musical transformation – it will likely be in the realm of possibility in the coming years.

Stem Separation

In 2023, it was announced that The Beatles were releasing their final song, “Now & Then”. This was an interesting piece of news given that two of the group’s members had been dead for decades. The song was made possible by advances in stem separation powered by AI.

In short, John Lennon recorded a demo at his home in 1977 that the remaining members of The Beatles tried to complete in the mid-1990s. The problem was that the demo was very noisy and Lennon’s piano playing sometimes drowned out his voice. Jump forward to today and a neural network was able to extract Lennon’s voice from the demo with incredible clarity, making it possible for Paul McCartney and Ringo Starr – the two living Beatles – to finish the song.

The Beatles - Now And Then (Official Music Video)

While this technology has already become accessible through companies like LaLaL, Deezer, and Serato, it will likely continue to improve and enable a whole slew of new things. For example, we might be able to remix and remaster degraded recordings from decades ago. Furthermore, a producer who is looking to sample a piano from an older recording will be able to extract the sound without hearing the bleed from the other instruments.

Loop Libraries

One of the most important music firms to emerge in the last decade is Splice. Among other things, Splice is a subscription-based library of royalty-free samples and sounds. Scores of popular songs in the last decade use Splice samples.

Most of these Splice samples, whether they be short loops or sound effects, are created by producers who receive upfront payment for the work. In the near term, we will likely see more and more of these sample libraries composed of AI-generated beats. Furthermore, rather than searching for a particular type of sound and then using various filters to try to find something, you will most likely use a text or audio prompt to get an AI to generate the sound you want.

Background Muzak

According to Glimpse, a trends discovery tool, two of the fastest-growing music-related search terms are various phrasings of “AI music” and “copyright free music”. In the same way that AI technology will transform how producers generate the short loops that they use to make songs, AI technology will also transform how we create entire recordings. When I say “recordings,” I don’t mean those that you hear on your favorite pop radio station, at least not in the short term. I mean those that you hear when you aren’t paying much attention.

These recordings, sometimes called “muzak” or “functional music,” are the things you’ll hear in the background of a YouTube video or on one of the many study and sleep playlists that populate Spotify. This music is nondescript, almost anonymous, and will almost certainly not be generated by humans in the coming years. Companies like Boomy and Endel are already making headway. In fact, Universal Music Group and Endel announced a strategic partnership in the middle of 2023.

As with much of the AI technology that has emerged recently, the ability of musical AI technology to work presupposes a huge library of songs and sounds that an algorithm can be trained on. With this idea in mind, recall the micro-history of recorded music that I provided. The large majority of the popular music of yesterday and today is owned by three major corporations. Furthermore, the music industry of today makes most of its money by licensing that huge catalog of music that it owns. These facts hold the key to how the music industry will face the wave of AI technology that is just beginning to emerge.

If you want a technology that allows you to perfectly simulate Taylor Swift’s voice, that technology needs access to the Taylor Swift discography. If you want a technology that allows you to prompt the creation of psychedelic funk reminiscent of The Temptations in the 1970s, that technology needs access to the music of The Temptations and 1970s funk. If you want a technology that allows you to hear a lo-fi version of the Nirvana album Nevermind, that technology needs access to Nevermind and lo-fi music. To make any of this happen, there are two options: either the largest music companies license their catalogs to a variety of AI firms, or those AI firms roll the dice and use that music without a license under the assumption that they will likely get sued.

And the music industry loves lawsuits! In fact, you could probably write a history of popular music in the last 50 years just by focusing on major copyright infringement lawsuits. According to the George Washington University Music Copyright Infringement Resource, music copyright cases that had a judicial opinion entered rose over 200% between the 1990s and the 2010s. This is particularly astounding given that “Relatively few of these disputes go to trial, and fewer still generate published judicial opinions.”

Will the likely onslaught of lawsuits stop the development of AI technology focused on music? If the lawsuits of the 2000s are any indication, probably not. Shutting down Napster did not permanently end music piracy. What ended piracy was the rise of streaming services that properly licensed music to create a better alternative to the illegal options.

In summary, the music industry will likely come to embrace much of this technology as long as AI firms properly license the music catalogs necessary to train their models. This still leaves one final question: Is any of this good for music?

Will AI Destroy Music as We Know It?

Up to this point, I’ve been talking as if music and the music business are equivalent. They are not. And it’s worth ruminating on why this is a false equivalence for a moment. Tons of people work in the music industry because they want to listen to, create, and promote great music. But ultimately, the industry’s goal is to make money from that music. While music can be a means to an end, whether that be financial or not, music is also an end in and of itself.

Making and listening to music might be one of the most human activities in existence. No human culture has ever been found that did not make some form of music. We sing songs at birthdays and at funerals. We listen to music packed in crowds and sitting by ourselves. To make music and to enjoy music is to be human.

Much of the AI technology that we have discussed will impact how people make music. If you are sitting in your bedroom and want to use some AI vocal simulator to make yourself sound like Freddie Mercury, that is fine. You almost certainly won’t have legal representatives from the band Queen breaking down your door to stop you. All ways to make music are legitimate.

Any issues you hear related to AI and music are almost always connected to the music business rather than music itself. The concern in this case is the same concern that has been voiced for a century: new technology will push musicians out of work and prevent the next generation of artists from surviving.

I am an optimist when it comes to most musical technology. The advent of recording did not destroy music. Neither did radio or drum machines or autotune or streaming. All of these things changed music and the music business. But musicians are robust and people will never stop making music. That said, there are some concerns we should be looking out for.

  • If people can create “new” music from long-dead superstars using AI, will it make it even harder for young artists to grow a fanbase in an already competitive market?
  • If you don’t want to license your music or likeness to be part of the training data in an AI model, is there any good way to opt out?
  • If you do license your music and likeness to be part of the training data in an AI model, how can we adequately compensate you for what you have provided?
  • If listeners have the ability to modify each and every song to their tastes, will people be driven further apart because they have no truly common culture?

These are hard questions. And I don’t have the answers to them. But I am certain that, despite the challenges that lie ahead, music will persist. Humans don’t seem to be able to survive without it.

Thanks to Chris for the incredible breakdown of this fascinating topic. You might enjoy reading:

  1. How Have One-Hit Wonders Changed Over Time?
  2. Recorded Music is a Hoax
  3. Learn more about “Can’t Get Much Higher”.

Postscript - is this the Sora of Music?

Editor’s Note

Have you ever tried Suno.AI? Suno is a new AI company whose platform can generate entire original songs instantly from simple text prompts. The platform initially engaged users through its Discord channel for testing, later expanding to a web interface that makes AI-generated music accessible to a wider audience.

Suno is a bit like the Sora of music: from a prompt, it crafts a 15-second song snippet, showcasing both the platform’s rapid response and the creative potential of AI in music.

Perhaps Chris will one day help us understand tools like Suno AI and others like it better.



Fernando Debernardi

UX/UI Designer | Figma Designer | Interface Design | UI Designer | App Design | Web Design

6 months ago

As a musician who incorporates AI into my work, I find your post quite relevant, Michael. The intersection of AI and music is indeed a fascinating area to explore. In my own experience, AI has opened up new avenues for creativity, allowing me to blend digital and tangible elements in my compositions. It’s interesting to see how AI is not only changing the way we produce music, but also how it’s impacting other aspects of our culture. I look forward to seeing how this continues to evolve in the coming years. Fernando Debernardi - Spotify: https://bit.ly/3ROKB7a

Ilya Nejdanov

We help manufacturing and industrial companies to get 15-20 high-quality leads monthly | Founder @Axellerato

8 months ago

Really insightful piece, Michael, on the blend of music and data analytics!

Erik Blakkestad

Sales Development Representative at GrowthZone AMS

8 months ago

While successful mainstream artists and the major labels are rightly concerned about their music assets, AI can (and I believe will) democratize music for songwriters. As the opportunity becomes clear, indie labels and then the majors will begin exploring licensing opportunities for voice cloning of their current and deceased artist catalogs. There is a lot to unpack on this topic, but the key issues are consent by the license holders, streamlining the legal processing of licensing deals, and the need for a new "all-in-one" music licensing, production, and distribution platform (what I call the next Spotify). https://medium.com/the-cake-articles/the-next-spotify-d9a8197eb11f

NaDine Rawls, MCJ

HDI Certified Support Desk Analyst

8 months ago

It will help bad singers and bad musicians.

In fact, this question has already been addressed by a composer-performer: Jean-Michel Jarre, on his album "Equinoxe Infinity," where a track was composed with AI well before the explosion of LLMs. At first, AI will only assist the creator, but as in all creative professions, humans will have to defend their relevance.

