Google turned brain waves into music tracks with the help of artificial intelligence

Another feat for artificial intelligence: with the help of AI, Google has succeeded in turning brain waves into music tracks.

The new products, programs, and experiments that appear every month show just how large the potential of generative artificial intelligence really is. Google has now come up with an example that shows yet another impressive side of this technology.

Working with researchers from Japan, Google has found a way to generate music from human brain activity: the activity is captured with functional magnetic resonance imaging (fMRI), and the music is then reconstructed with Google’s MusicLM music generation model.

As Google explains in a research paper titled “Brain2Music: Reproducing Music from Human Brain Activity,” 15-second clips were randomly selected from 540 music tracks spanning ten different genres. Five participants listened to the clips with a pair of MRI-compatible headphones while their brain activity was scanned.

The researchers fed the data to MusicLM to “predict and reconstruct the types of music to which the human subject was exposed.” As a result, the music the model created was “semantically” similar to the music the subjects had originally listened to.
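At a high level, the pipeline maps brain responses into a music-embedding space and then produces music that matches the predicted embedding. The sketch below illustrates that idea in Python; the ridge-regression mapping, the retrieval step, the random placeholder data, and the array shapes are assumptions for illustration, not the paper’s exact implementation.

```python
# Minimal sketch: learn a linear map from fMRI voxel responses to a
# music-embedding space, then retrieve the candidate clip whose embedding
# best matches the prediction. All data below is random placeholder data;
# the shapes and the retrieval step are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

n_train, n_test, n_voxels, embed_dim = 400, 100, 5000, 128

rng = np.random.default_rng(0)
X_train = rng.normal(size=(n_train, n_voxels))   # brain responses to training clips
Y_train = rng.normal(size=(n_train, embed_dim))  # matching MuLan-style embeddings
X_test = rng.normal(size=(n_test, n_voxels))     # responses to held-out clips

# Learn the mapping from brain activity to the embedding space
model = Ridge(alpha=1000.0)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)                   # predicted music embeddings

# Retrieve the stimulus clip whose embedding is closest to each prediction
candidate_embeddings = rng.normal(size=(540, embed_dim))  # one per music clip
best_match = cosine_similarity(Y_pred, candidate_embeddings).argmax(axis=1)
print(best_match[:5])   # indices of the clips the model "hears" for each scan
```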

If you want to listen to some of the soundtracks reconstructed from brain activity, you can visit Google’s webpage with the original stimuli and reconstructions from the study.

In addition, the research team cites three factors that limit the quality of the AI-generated music:

  • The information contained in fMRI data is very sparse in time and space (the observed voxels are 2×2×2 mm³, many orders of magnitude larger than individual neurons).
  • The limited information contained in the music embedding from which we recreated the music (we used MuLan, where ten seconds of music is represented by just 128 numbers); see the rough arithmetic sketch after this list.
  • The limitations of our music generation system. When we reviewed MusicLM, we found that it was open to improvement, both in how it adhered to the text prompt and in the fidelity of the sound it produced.
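To get a feel for the first two bottlenecks, some back-of-the-envelope arithmetic helps. The voxel size and the 128-number embedding come from the list above; the cortical neuron-density figure and the 16 kHz mono sample rate are rough assumptions used only for illustration.

```python
# Back-of-the-envelope numbers for the two data bottlenecks above.
# Neuron density and sample rate are rough illustrative assumptions.

# 1) Spatial resolution: one fMRI voxel vs. cortical neurons.
voxel_volume_mm3 = 2 * 2 * 2                 # 2×2×2 mm voxel
neurons_per_mm3 = 50_000                     # rough order-of-magnitude estimate
print(f"neurons per voxel ≈ {voxel_volume_mm3 * neurons_per_mm3:,}")   # ≈ 400,000

# 2) Embedding bottleneck: raw audio samples vs. a MuLan-style vector.
sample_rate_hz = 16_000                      # assumed mono sample rate
clip_seconds = 10
raw_samples = sample_rate_hz * clip_seconds  # 160,000 samples per clip
embedding_dims = 128                         # numbers per 10-second clip
print(f"compression factor ≈ {raw_samples / embedding_dims:.0f}x")     # ≈ 1250x
```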

As with most language models, this work can be developed further. Even so, it has already shown that AI can effectively use your brain activity to reproduce the sounds you hear. But if you’re worried about a stranger scanning your brainwaves and stealing your thoughts, you probably don’t need to worry, at least for now: the team notes that volunteers had to spend hours inside a large fMRI scanner for this study.
