Artificial Intelligence and Music: What to Expect?

Wanna be a singer? AI will compose you an album.

In the 2004 movie I, Robot, Will Smith's character asks a robot a pair of rhetorical questions:

- Can a robot write a symphony? Can a robot turn a... canvas into a beautiful masterpiece?

- CAN YOU?

And really, how many of us could compose something like Chopin's nocturnes, Freddie Mercury's Bohemian Rhapsody, or Stairway to Heaven by Led Zeppelin? Honestly, only a few. To create an extraordinary masterpiece, you somehow have to identify the exact algorithm of something really special.

But what about Artificial Intelligence, today's cutting-edge problem-solver that is, after all, built to execute complex algorithms? Can a computer program really beat humans at finding such algorithms and generate musical compositions that are genuinely good?

Today, you can find AI applications in music composition, performance, theory, and digital sound processing. Moreover, AI helps musicians test new ideas, find the optimal emotional context, integrate music into modern media, and simply have fun. But does that mean any of this can stand comparison with human creations? Let's find out together.

A Bit of History: How It All Got Started

Courtesy of the University of Illinois Archives

# 1 Illiac Suite - Not bad for the first time!

Illiac Suite for String Quartet is the name of the first work composed entirely by a computer program. Believe it or not, this monumental event happened in 1957. Two great minds, Lejaren Hiller and Leonard Isaacson, programmed the ILLIAC I, for the record one of the first electronic computers ever built. Just like that, humanity got its first computer capable of generating compositional material.

In theory, the Illiac Suite, the first piece created by a computer, seemed like a real masterpiece. Just look at its intricate musical structure: the piece consists of four movements, corresponding to four experiments. The first concerns the generation of cantus firmi, the second generates four-voice segments following various rules, the third deals with rhythm, dynamics, and playing instructions, and the fourth with various models and probabilities for generative grammars and Markov chains.

But in practice, the "electronic brain" simply generated series of random numbers that were mapped onto musical elements and screened against compositional rules, and the result turned out to be quite a failure. The piece left music fans in complete confusion; one listener even compared the Suite to the sounds of a barnyard. Same as everyone else, I guess. After all, the first pancake always comes out lumpy, doesn't it?
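
Conceptually, Hiller and Isaacson's approach was generate-and-test: draw random numbers, map them onto notes, and keep only the notes that pass rules borrowed from counterpoint. Here is a minimal Python sketch of that idea; the C-major scale and the single leap rule are my own simplifications, not their original rule set:

```python
# Generate-and-test melody sketch in the spirit of the Illiac experiments:
# draw random notes and keep only those that pass a screening rule.
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches C4..C5

def violates_rules(melody, candidate):
    # Toy counterpoint-style rule: forbid melodic leaps wider than
    # a perfect fifth (7 semitones).
    return bool(melody) and abs(candidate - melody[-1]) > 7

melody = []
while len(melody) < 16:
    candidate = random.choice(C_MAJOR)          # the "random number" step
    if not violates_rules(melody, candidate):   # the screening step
        melody.append(candidate)

print(melody)
```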

# 2 Random lyric writing with David Bowie

The next attempts to bring algorithms into the music industry were much more interesting and accessible to the public. Starting in the 1970s, interest in the algorithmization of music reached even well-known pop artists. One of the first to think in this direction was David Bowie, an indisputably iconic figure in the music industry. In the mid-1990s, together with Ty Roberts, he developed the Verbasizer, a lyric-writing Mac app.

The Verbasizer was a digital version of an approach to lyrical writing that Bowie had been using for decades, called the cut-up technique. Popularized by writers William Burroughs and Brion Gysin, the technique relied on source literary material - a newspaper article or diary entry, perhaps - that had been cut up into words or phrases, and re-ordered into new, random, potentially significant meanings.

"What you end up with is a real kaleidoscope of meanings and topic and nouns and verbs all sort of slamming into each other.” - David Bowie

The results of the manual cut-up appeared on three albums from the 1970s - Low, Heroes, and Lodger, the so-called Berlin Trilogy - today considered some of Bowie's best work. The Verbasizer, its digital descendant, helped Bowie write, for example, the lyrics of the song Outside.

# 3 EMI - a real breakthrough in Music Intelligence

A more significant level of computer-written music emerged in 1981 at the University of California, Santa Cruz, when professor and composer David Cope began developing a system called EMI (Experiments in Musical Intelligence). The system was based on the idea of "recombinatorics" and, like the Illiac Suite, on Markov chains: it analyzed existing musical passages and created new pieces from them.
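
To make "recombinatorics" a bit more concrete, here is a toy first-order Markov model in Python: it learns which pitch tends to follow which from a small corpus and recombines those transitions into a new line. This is a deliberately minimal illustration of the principle, with made-up stand-in melodies, not Cope's actual system:

```python
# Toy first-order Markov recombination of melodies.
import random
from collections import defaultdict

def build_transitions(pieces):
    table = defaultdict(list)
    for piece in pieces:
        for a, b in zip(piece, piece[1:]):
            table[a].append(b)          # remember every observed step
    return table

def generate(table, start, length=16):
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(table[note]) if table[note] else start
        out.append(note)
    return out

corpus = [[60, 62, 64, 62, 60, 67, 65, 64],   # stand-in melodies (MIDI pitches)
          [60, 64, 67, 65, 64, 62, 60, 62]]
print(generate(build_transitions(corpus), start=60))
```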

EMI could, for example, produce convincing imitations of Vivaldi's works.

EMI was a real breakthrough. By analyzing works in a given style, EMI could generate new, well-structured compositions within that style. In total, the system created over a thousand works based on the output of 39 composers representing different musical styles. Cope said that at first this music shocked and even frightened listeners once they knew it was written by a machine, but gradually the reaction became more positive.

Musical Intelligence Today: AI as a New Bach, Rock Artist, and More...

# 1 Jazz Continuator: Playing with Virtual Musicians

Wouldn't it be nice to play with your favorite, but inaccessible, jazz musicians?

Going above and beyond Cope's ideas, the Continuator was a new algorithm designed by François Pachet at the Sony Computer Science Laboratory. It was a genuinely new invention: it could learn a musician's style and play with them interactively. More specifically, the Continuator could pick up a piece of music at the exact place where the live musician stopped and carry it on.
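
The underlying idea can be sketched as a variable-order Markov model: remember which notes followed which short contexts in the performer's playing, then continue from the longest context matching the end of the current phrase. Here is a toy Python version; the real Continuator was far more sophisticated, handling rhythm, dynamics, and real-time MIDI:

```python
# Toy variable-order Markov continuation of a performed phrase.
import random
from collections import defaultdict

def learn(phrase, max_order=3):
    model = defaultdict(list)
    for order in range(1, max_order + 1):
        for i in range(len(phrase) - order):
            ctx = tuple(phrase[i:i + order])
            model[ctx].append(phrase[i + order])
    return model

def continue_phrase(model, phrase, length=8, max_order=3):
    out = list(phrase)
    for _ in range(length):
        for order in range(max_order, 0, -1):   # back off to shorter contexts
            ctx = tuple(out[-order:])
            if model.get(ctx):
                out.append(random.choice(model[ctx]))
                break
        else:
            break                                # no context matches: stop
    return out[len(phrase):]

phrase = [60, 62, 64, 65, 67, 65, 64, 62, 60, 62, 64]  # what the musician played
print(continue_phrase(learn(phrase), phrase))            # the machine's reply
```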

So this time, everything went much better than in previous attempts. This was even confirmed by a Turing-style test: the Continuator played with a professional pianist, and most listeners could not tell when the computer was playing and when the live musician was. Judge for yourself; I'm sure you will be fascinated, at the very least.


# 2 David Cope: 'You pushed the button and out came hundreds and thousands of sonatas'

One more attempt to teach a computer musical creativity came from the restless David Cope. This time it was Emily Howell, a program he developed in the 2000s. Cope assumed that the majority of songs are not unique: the great composers absorbed the musical harmonies created before them, and their brains "rearranged" melodies and phrases using their own characteristic, recognizable methods. These views formed the basis of another solid development in Musical Intelligence.


# 3 Iamus: Is this the 21st century's answer to Mozart?

Since David Cope's first research, the field has taken a big step forward. New computer algorithms, which to some extent mirror the activity of neurons in the human brain, allow machines to memorize information and learn. Consequently, artificial intelligence has learned to process unstructured data and to partially understand even quite complex music.

Iamus' Opus one, created on October 15, 2010, is the first fragment of professional contemporary classical music ever composed by a computer in its own style (rather than attempting to emulate the style of existing composers as was previously done by David Cope). 

Iamus, by the way, is a computer cluster located at the Universidad de Málaga. Powered by Melomics' technology, the composing module of Iamus takes eight minutes to create a full composition in different musical formats, although the native representation can be produced by the whole system in less than a second.


# 4 IBM Watson writes an emo song with musician Alex Da Kid

Nowadays, AI-powered programs can create melodies the music industry readily absorbs in a couple of minutes, as well as generate lyrics that match a given emotional coloring. AI music is gradually making its way up the well-known charts; take, for example, the British producer Alex Da Kid's track "Not Easy". The song climbed into the Top 40 of the Billboard charts, and that earns more than a smile.

The IBM Watson supercomputer, equipped with an artificial intelligence question-answering system, was used to create the song. The computer analyzed a huge number of blogs, articles, and social media posts in order to identify the most pressing topics of our time and characterize their emotional mood.
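
At its simplest, characterizing the "emotional mood" of a pile of text means counting hits against a lexicon of emotionally loaded words. The sketch below is a bare-bones Python illustration with a made-up mini-lexicon and made-up posts; Watson's actual tone-analysis tooling is, of course, far richer:

```python
# Bare-bones emotion profiling: count lexicon hits across a set of texts.
from collections import Counter

EMOTION_LEXICON = {   # tiny made-up lexicon, purely for illustration
    "lonely": "sadness", "lost": "sadness", "grey": "sadness",
    "hope": "joy", "bright": "joy", "rise": "joy",
    "fight": "anger", "burn": "anger",
}

def emotional_profile(texts):
    counts = Counter()
    for text in texts:
        for word in text.lower().split():
            emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
            if emotion:
                counts[emotion] += 1
    return counts

posts = ["Feeling lost and lonely under a grey sky.",
         "Still I rise, still I hope."]
print(emotional_profile(posts))   # Counter({'sadness': 3, 'joy': 2})
```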


# 5 Beatles-inspired Daddy’s Car

Researchers at Sony have been working on AI-generated music for years and had previously used AI to create impressive jazz tracks. But 2016 was the first time the Sony CSL Research Laboratory released pop music composed with AI, and the results were impressive.

Daddy's Car is a catchy, sunny tune reminiscent of The Beatles. To write the song, the AI program proposed melodies generated from original Beatles pieces, while the lyrics, arrangement, and production were handled by live musicians.


# 6 Hello World - the first music album composed by AI + artists

Hello World started as a research project (the Flow Machines project) in which scientists were looking for algorithms to capture and reproduce the concept of musical "style". Many scientific and technical results were obtained, and some prototypes were built with rudimentary interfaces (and a lot of bugs). The novelty and huge potential of the approach attracted the attention of a few talented musicians who joined the team.


# 7 Taryn Southern and her album entirely composed with AI

After participating in the TV show American Idol, Taryn Southern became a star, and the next logical step was to release a new album. Southern decided to take a non-standard route: writing the album with AI. As her tool, she chose a startup called Amper, a program whose internal algorithms can produce sets of melodies matching a given genre and mood. The first result was the song Break Free.

Later, Southern released the full-length album I AM AI, co-produced with the Amper program. However, to say that all the music was written for her by the AI would be an exaggeration: Amper created the basic random structures, and the rest was Southern's work. Not to mention that she wrote the lyrics herself.


# 8 Meet Dadabots, the AI death metal band playing non-stop on YouTube

Dadabots, founded by Boston programmers CJ Carr and Zack Zukowski, is engaged in a very unusual pursuit: teaching artificial intelligence to write "heavy" music. In 2017 the developers presented the black metal album Coditany of Timeness, and they have now shown the public the result of an algorithm that composes music in the style of death metal. According to them, the algorithm produces material decent enough for the genre to need no corrections, so they decided to set it free to compose tracks live on YouTube.

To train the neural network, the developers used songs by the Canadian band Archspire, which are notable for their fast tempo. As a result, the algorithm learned to layer rapid drums, guitars, and aggressive vocals so that the output sounds like real death metal.
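
Dadabots have described training neural networks on raw audio, one sample at a time, in the spirit of SampleRNN. The PyTorch sketch below shows only the skeleton of that idea: an LSTM predicting the next 8-bit audio sample as a 256-way classification. The model size and the random stand-in "audio" are placeholders of mine; this is a sketch of the technique, not their pipeline:

```python
# Skeleton of sample-level audio modelling: predict the next 8-bit sample.
import torch
import torch.nn as nn

class TinySampleRNN(nn.Module):
    def __init__(self, n_classes=256, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_classes, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.head(h), state

model = TinySampleRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for real quantized audio: random 8-bit samples in [0, 255].
batch = torch.randint(0, 256, (4, 1024))
inputs, targets = batch[:, :-1], batch[:, 1:]

logits, _ = model(inputs)                       # predict each next sample
loss = loss_fn(logits.reshape(-1, 256), targets.reshape(-1))
loss.backward()                                 # one toy training step
opt.step()
print(float(loss))
```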


Interesting projects in the field of Musical Intelligence

Research today focuses on the use of artificial intelligence in music composition, performance, and digital sound processing, as well as in the sale and consumption of music. Many AI-based programs and applications are used for teaching and creating music. Here are some of them:

  • AlgoTunes is a music company that builds music-generating applications. On its site, anyone can create a music piece in a given style and mood with one keystroke, although the choice of settings is very limited. The music is created by a web application in a few seconds and is available for download as a WAV or MIDI file (see the MIDI-writing sketch after this list).
  • MXX (Mashtraxx Ltd), founded in 2015, is the first artificial intelligence engine in the world that instantly fits music to video using only a stereo file. MXX lets you adapt music to specific user content: for example, to sports activities and running, to the plot of a computer game, and so on. The first MXX product, Audition Pro, allows anyone to edit music to a video: load an existing song, and it automatically adjusts the swells, attenuation, and pauses to match the dynamics of the video. MXX now provides services to leading commercial libraries, music services, game producers, and content studios that need music adapted for modern media.
  • Orb Composer — a program developed by Hexachords to assist in composing orchestral music, from choosing a genre and instruments to assembling the track.
  • OrchExtra helps a small high-school or community-theater ensemble fill out a complete Broadway score: it plays the missing instruments while tracking tempo fluctuations and musical expression.
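
Since several of these tools export MIDI, here is what writing a minimal MIDI file looks like in Python with the mido library. The four-note line is arbitrary, and this is a generic example rather than any of these products' actual export code:

```python
# Writing a minimal single-track MIDI file with mido.
from mido import Message, MidiFile, MidiTrack

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)

track.append(Message("program_change", program=0, time=0))  # piano
for note in (60, 64, 67, 72):                               # C-E-G-C arpeggio
    track.append(Message("note_on", note=note, velocity=64, time=0))
    track.append(Message("note_off", note=note, velocity=64, time=480))

mid.save("sketch.mid")  # openable in any DAW or media player
```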

The Pros, Cons, and Future of Artificial Intelligence in Music

Let's start with the advantages. Today you can certainly reap the benefits of artificial intelligence as a composer. If you need background music for video presentations, there is Jukedeck. If you are in a creative crisis and short of ideas, Amper will help you. If you want to synthesize something truly unique, look at Google's Magenta. This list will undoubtedly grow, and over time new applications with even more advanced capabilities will appear.

Besides, AI has every chance of surpassing human composers someday. And before you decide that's impossible, let me explain my personal view. Roughly speaking, AI is a machine, right? We can program it and show it how to do something. Given proper instruction and the right algorithm, it can turn out one good song after another. More importantly, it can predict the songs we really want to listen to.

And while human composers have their ups and downs, AI can offer steadiness and flexibility. For example, by understanding various human propensities, a machine could generate music 'on the fly' based on your current biorhythms. You're in high spirits? Here's an appropriate melody. You're sad? Here's some ambient, but without a violin, because you hate its sound. A machine would simply make everything easier.

So what are the disadvantages, you may ask? Well, all that glitters isn't gold. The future may look bright, but we do not know whether it will work out. Music made 'artificially' probably won't evoke the feelings we experience with the creations of real artists. As a rule, we identify ourselves, our feelings and experiences, with a singer; we believe he or she has had a similar life path, problems, temper, and mood to our own.

Although Artificial Intelligence is good at complex algorithms, there is no promise it will compose songs as notable as Chopin's nocturnes, Freddie Mercury's Bohemian Rhapsody, or Stairway to Heaven by Led Zeppelin. After all, legendary songs typically emerge by themselves, from deep inside the author's soul, passionate feelings, and unique life experience.

So, whether a computer can completely oust musicians from the musical process is a rather philosophical question. And there is hardly a definite answer to it.

Do you agree or disagree with this statement - and why?

Inspired to learn more about AI, ML & Data Science? Welcome to my Medium and Instagram blog.

