As a musician turned tech director, I simply HAVE to post about the AI recreation of "Another Brick in the Wall" based on electrical signals in the brain! So what happened and what does it mean?

THE GOSS
University of California, Berkeley researcher Robert Knight and his team have been using electrodes placed on the surface of the brain to treat epilepsy in 29 patients. They also recorded the electrical signals of those same patients while they listened to Pink Floyd's "Another Brick in the Wall". They trained an AI model on the signal data and part of the song, and found clear connections between certain electrodes and pitch, melody, harmony, and rhythm. They then fed the model electrical signals from a 15-second clip of the song (a clip NOT used in the training data), and the computer delivered back its estimation of what that clip sounded like, with 43% accuracy. The New Scientist link in the comments has a sample of the actual clip vs the AI reproduction. (Curious how this kind of decoding works? There's a rough code sketch at the end of this post.)

WHAT DOES THIS MEAN?
For now, not much. The training data and the results were very similar because the same song was used for both. However, if more training data were provided and more connections mapped, there are some pretty cool potential applications.

COMPOSE VIA THOUGHT?
When I was about 11 or 12, I used to compose symphonies in my head, and throughout my teen years I often recorded myself humming or singing simple melodies for songs I was thinking up. The latter wasn't so hard to do, but as a kid with no musical or notation training, how would I ever get a symphony down onto paper? I wouldn't. This technology has the potential to change that. Imagine a studio where you could use AI technology to take music in your head and instantly hear it back, or have it transcribed into musical notation for editing and revising. AI models could even help you refine it (think the old-school 'Clippy' helper, but for composition: "Are you trying to do a key change here? Don't forget the horns!"). The use of this advance for music composition was even mentioned by another member of the research team, Ludovic Bellier.

SPEECH REGENERATION FOR CLINICAL PATIENTS
A more likely development, possibly within a few years(!), is training the model to recognise the musical elements of speech: not just pitch and tone, but even emotion. Example use cases are amyotrophic lateral sclerosis (ALS) and aphasia, two conditions that can leave patients unable to speak, or unable to speak easily. With such an AI model hooked up to some sort of speaker, these patients could communicate using their thoughts alone, and in a way that doesn't sound robotic.

OK LET'S GO WOOOO
Before you get too excited or freaked out, take a breath. This involves literal electrodes placed on the brain's surface. It is unlikely to hit consumer shelves and will remain experimental and/or clinical for a long while yet.

#ai #aiandml #aiinhealthcare #aiinmusic
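FOR THE CODE-CURIOUS (A ROUGH SKETCH)
Press coverage describes the team's approach as regression models that map electrode activity to an audio spectrogram, which is then inverted back into sound. Below is a minimal Python sketch of that general idea, NOT the authors' actual pipeline: the data, shapes, and hyperparameters are made-up stand-ins, and the ridge-regression and spectrogram-inversion choices are my own assumptions.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Made-up stand-in data: rows are time bins, columns are electrode features.
# (The real study recorded activity from intracranial electrodes.)
rng = np.random.default_rng(0)
n_bins, n_electrodes, n_freqs = 5000, 64, 128
X = rng.standard_normal((n_bins, n_electrodes))  # neural features per time bin
Y = rng.standard_normal((n_bins, n_freqs))       # target log-spectrogram per bin

# Hold out a contiguous chunk, mirroring the held-out 15-second clip.
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.1, shuffle=False
)

# Ridge regression from electrodes to every spectrogram frequency bin at once
# (scikit-learn's Ridge handles multi-output targets natively).
model = Ridge(alpha=1.0)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Score the reconstruction as a correlation between predicted and actual
# spectrograms; the "43%" figure in the coverage is a similar summary metric.
r = np.corrcoef(Y_pred.ravel(), Y_test.ravel())[0, 1]
print(f"Reconstruction correlation: {r:.2f}")

# To actually *hear* a reconstruction, you would invert the predicted
# spectrogram back into a waveform, e.g. with librosa's Griffin-Lim:
#   audio = librosa.griffinlim(np.exp(Y_pred.T))

Reportedly the real models also used time-lagged neural features and nonlinear variants, but the core brain-to-spectrogram mapping is about this simple.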
Exciting times. I have fantasized about this kind of tech for years.
Thrilled to see your passion for pushing boundaries in your project! Remember what Steve Jobs said: "Stay hungry, stay foolish." Your journey reminds us all to embrace the unknown with curiosity and courage. Keep shining!
Engineering Manager in Mental Health Tech | TEDx Speaker | Author of "You Belong in Tech"
1 yr
Scientific American: https://www.scientificamerican.com/article/neuroscientists-re-create-pink-floyd-song-from-listeners-brain-activity/
New Scientist: https://www.newscientist.com/article/2387343-ai-recreates-clip-of-pink-floyd-song-from-recordings-of-brain-activity/?utm_source=aitoolreport.beehiiv.com&utm_medium=newsletter&utm_campaign=will-chatgpt-moderate-your-social-content
Rolling Stone: https://www.rollingstone.com/music/music-news/pink-floyd-song-reconstructed-brain-activity-1234807287/
Image courtesy of hotpot.ai.