What’s Old is New: How Audio Separation Technology is Revitalizing Music Catalog

It’s fair to say that generative AI has the music industry in a panic.

But while (incredibly legitimate) concerns over the legal and copyright implications of generative AI dominate the headlines, the potential for wider AI solutions to unlock new, previously unattainable value from existing music often gets lost in the noise.

Moreover, as the music industry shifts its attention to monetizing fandom, a new report from MIDiA Research on the state of music AI has found that consumers today are more interested in using AI to modify their favorite existing songs than to generate new ones.

The opportunity is clear – the industry is sitting on a goldmine of content just waiting to be made editable, accessible, and interactive, and that’s where audio separation technology comes in.

As we announce our exciting new partnership with AudioShake, the industry leader in this space, we explore the diverse applications for this technology, from revitalizing deep catalog and unlocking lucrative syncs to building new immersive experiences for fans to engage with their favorite content like never before.

Landing lucrative sync deals

Sync is a vital revenue stream for music rights holders, but as we covered in a recent Synchblog, the sync industry often lags behind in its processes and adoption of technology.

According to AudioShake, not having the right assets available in the first place leads to countless missed opportunities: artists, labels, and publishers are unable to fulfill 30-50% of the sync requests they receive because they don’t have instrumentals.

This is a huge gap that can easily be filled using audio separation technology to create stems and instrumentals on the fly from any sound recording. No more tracking down original project files from the 70s (if they even exist) or constantly chasing artists for missing assets.
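To make this concrete, here is a minimal sketch of on-the-fly stem separation. It uses the open-source Demucs model purely as an illustrative stand-in; it does not represent AudioShake’s own models or API, and the input filename is hypothetical.

```python
# Illustrative sketch only: splitting a finished recording into stems with
# the open-source Demucs model (a stand-in, not AudioShake's technology).
# Requires: pip install demucs
import demucs.separate

# Separate a hypothetical "track.mp3" into vocals and everything else
# ("no_vocals"), which can serve as an instant instrumental for a sync pitch.
# Output is written to ./separated/htdemucs/track/ as vocals.wav and no_vocals.wav.
demucs.separate.main([
    "--two-stems", "vocals",  # vocals vs. accompaniment only
    "-n", "htdemucs",         # pretrained hybrid transformer model
    "track.mp3",              # hypothetical input recording
])
```

The same approach extends to a full four-stem split (vocals, drums, bass, other) by simply dropping the --two-stems flag.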

In the fast-paced world of sync, these tools can give you the competitive edge you need to land more lucrative sync deals. Equally, integrating these processes into your catalog management and sync tools can be a huge time saver. Synchtank users, for example, now have the power to generate high-quality AudioShake stems in seconds, without ever leaving the platform!

Remixing and remastering classics

Audio separation technology unlocks so many possibilities for enhancing and repurposing legacy content. Take IP king Disney Music Group, for example, which has just collaborated with AudioShake to “unlock new listening and fan engagement experiences” for its legendary catalog.

DMG will now be able to separate any piece of audio, even decades-old classics, into its component stems, enabling old hits to be remixed and remastered for new audiences and experiences.

It also gives labels the ability to easily iterate on and test ideas for their catalog, for example by using audio separation to keep existing tracks fresh, try out modern remixes in new styles and genres, and offer up their catalog in its entirety for sampling.

Unlocking new fan engagement experiences

As the industry looks beyond streaming, it is increasingly turning to fandom to drive the next stage of growth. According to Goldman Sachs, improved monetization of super fans could represent $3.3bn of incremental revenue by 2030.

Fans today are looking for a more interactive and collaborative relationship with their favorite artists, and there’s a huge opportunity to leverage stems to enhance engagement and deepen the artist-fan relationship. Just last month, for example, SoundCloud teamed up with a service called Fadr to allow fans to remix Tinashe’s track ‘Nasty’ using stems, which they could also download to use with their own kit.

As outlined in MIDiA Research’s latest report on music and AI, “the consumer opportunity lies in modification, not generation”: music fans want to interact with and repurpose their favorite content, not replace it, which presents a huge opportunity for catalog owners.

As the line between consumption and creation becomes increasingly blurred, there’s so much room to create new experiences and it feels like we’re only just scratching the surface.

Experimenting with sound in spatial environments

Recent years have seen a growing trend toward immersive listening experiences. Amazon Music, Deezer, Tidal, and Apple Music all offer spatial audio, for example, with the latter recently announcing plans to pay up to 10% higher royalty rates for streams of music mixed in Dolby Atmos.

Audio separation technology is helping to democratize this space and bring new opportunities to legacy catalogs. AudioShake’s AI stems, for example, have been used to create new spatial Dolby Atmos mixes for legendary artists including De La Soul and Nina Simone.

Spatial audio is also sparking new opportunities in the health and wellness space, with a number of apps integrating music for use in meditation, to relieve stress, improve sleep, and more. We’re even seeing labels being created that are dedicated solely to ambient music in Dolby Atmos.

Creating customized, interactive music experiences

From gaming and social media to wellness and fitness, interactive audio is increasingly being leveraged to create an enhanced, immersive user experience. For example, you might be working out with a fitness app while the music responds to your movements.

The potential to use stems in this space is huge, and we can expect programmers and app makers to take more advantage of this technology as they look to level up their offerings in a competitive market.
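To illustrate the kind of adaptive playback stems make possible, below is a toy sketch that blends pre-separated stems with gains driven by a workout-intensity value. The stem filenames, the intensity source, and the simple crossfade are all hypothetical placeholders, not any particular app’s implementation.

```python
# Toy sketch: mixing pre-separated stems with gains driven by an "intensity"
# signal (e.g. derived from a fitness tracker). Filenames and values are
# hypothetical placeholders.
# Requires: pip install numpy soundfile
import numpy as np
import soundfile as sf

def adaptive_mix(stem_paths, intensity):
    """Blend two stems for an intensity in [0, 1]: low intensity favours the
    mellow bed, high intensity brings the drums forward."""
    drums, _ = sf.read(stem_paths["drums"])
    bed, sr = sf.read(stem_paths["other"])
    n = min(len(drums), len(bed))                 # align lengths defensively
    gain = float(np.clip(intensity, 0.0, 1.0))
    mix = gain * drums[:n] + (1.0 - gain) * bed[:n]
    return mix, sr

stems = {"drums": "stems/drums.wav", "other": "stems/other.wav"}
mix, sr = adaptive_mix(stems, intensity=0.8)      # e.g. a high-effort interval
sf.write("adaptive_mix.wav", mix, sr)
```

In a real app the gains would be updated continuously and smoothed, but the core idea is the same: stems turn a fixed master into parameters an experience can respond to.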

Reactional Music is an example of a company creating custom audio-visual experiences in the gaming space, and has deals with a number of game developers and record labels. Generative music app Endel, meanwhile, creates personalized sound environments to match user activities and has partnered with a number of high-profile labels.

Its CEO Oleg Stavitsky details the potential of this technology: “Devices around us know our average heart rate and step count, our wake up and sleep time, our sex, age, chronotype, and menstrual cycle. Imagine feeding all this information into a generative AI model and adding artist stems into the system. What you get is music that lives and breathes with you.”


