TOTW 21: The Human Factor
This article has been in the making for quite some time. But, overwhelmed by the Belgian Festival Season, I felt the urge to speed things up. As I relax by making and playing music myself (albeit at an amateur level), music generated by AI is a topic close to my heart. It raises questions about the future of musicians and, by extension, artists in general. I’ve read numerous articles on the subject and watched a few YouTube videos from my musical inspirations. The headline, undoubtedly, is this: humans are touched by other humans… mastering their instruments, playing in harmony with fellow musicians, dancing to the beat, clapping their hands, or delivering a vulnerable and moving speech. With all their small imperfections making things ... perfectly human.
Adam Neely (he’s coming to Gent Jazz, and I have no tickets—anyone?) captures this sentiment well, although I don’t agree with him on all aspects. He suggests thinking in terms of a Turing test for music, which can be approached in two ways: Musical Output Tests and Musical Directive Tests. Let’s dive a little deeper into both.
Musical Output Tests
This test requires little interaction, as it focuses on the output. Imagine asking, “Generate me a funky ballad with a male falsetto voice, C# Major, 125 bpm, bell sounds, slow pads, held-back beats.” (Can you guess the song I’m thinking of?) An output is generated, which a human then evaluates to determine whether it was made by a person or a machine. Here, music is seen as a product, something that is printed or streamed. In this context, music is a noun.
This is the realm of tools like Udio, Suno, and StableAudio (Stability AI). AI will pass, or already passes, this test. This kind of music has its value and represents a significant income source for many artists. There is a real risk of people losing streaming income (although currently, only a lucky few earn a substantial income from streaming). AI may replace those who create music for video games, content creators, or stock music. Or better ... people unfamiliar with these tools might be replaced by those proficient in using them to enhance productivity (a trend true for all AI). It will likely replace the overabundant sample libraries on the market, as you can now request precisely what you need to mix into your music. Just as stock photos may soon become obsolete.
For some artists, these tools will never have a place in their toolkit, which is a valid and respectable choice. For others, AI can be a valuable addition to their arsenal. AI is here to stay, and I, for one, am keen to explore the possibilities it offers in everything I do, including playing music in my little studio. Although Andrew Huang’s video is nuanced, I don’t fully agree with him ;-) However, I do embrace initiatives like those of Jameson Nathan Jones, which already touch on the next type of test: https://youtu.be/4oXB4cy8Gkg?si=4z6IAs9MI6oSNZm4.
Musical Directive Tests
This test focuses on interaction—how musicians interact with each other to change rhythm or dynamics, or engage with the audience, deciding when to go to the bridge or repeat the chorus. Here, real-time interaction is key, so the output is interactive, in the moment, and non-repeatable. Music is seen as a process, transforming it into a verb.
To my knowledge, there are no current AI tools that span this domain. Given the raw computing power required, I doubt they’ll be here soon. Yes, AI-enhanced tools that enable your DAW to change rhythm in real time will be here (or are they already?). But by the time jamming with AI systems becomes mainstream, we’ll be interacting with AI in more profound ways. Interfacing our brains with raw computing power will likely impact our daily lives as much as smartphones do today. (Those who attended Kathleen Demol’s and my presentation about a year ago will remember we likened the current wave of AI to the impactful rise of personal computers, the internet, smartphones, etc.) So, I’ll be going to and enjoying gigs and festivals for quite some time. And artists mastering this domain of music will continue to shine. Because whatever happens, I do believe we will continue to connect with humans. AI is here to stay, but so is the Human Factor.
We’re moving into the steep part of the exponential curve of technology—be it computing, nanotechnology, healthcare, or biotech. It’s the combination of all these advancements that will revolutionize the way we live. Things are speeding up, so this interaction capability may arrive sooner than we think. For those interested, I’m currently reading Ray Kurzweil’s latest book, “The Singularity Is Nearer: When We Merge with AI”, to better understand the era we’re entering. In that respect, I don’t think I’ve explained the ‘Flywheel of Continuous Accelerated Progress’ in these TOTWs. Let me know in the comments if you’d like to hear more about it.