Many technology giants continue to make moves in artificial intelligence, and Meta, the parent company of Facebook and Instagram, is among them. A few weeks ago the company, led by Mark Zuckerberg, announced that it had developed its own custom chip for artificial intelligence workloads.
Now Meta has taken another step in artificial intelligence: its AudioCraft research team has announced an open-source music generation model called MusicGen.
MusicGen can generate music from text
“We present MusicGen: A simple and controllable music generation model. MusicGen can be prompted by both text and melody. We release code (MIT) and models (CC-BY-NC) for open research, reproducibility, and for the music community: https://t.co/OkYjL4xDN7” — Felix Kreuk (@FelixKreuk), June 9, 2023
The MusicGen model, which could be described as an audio counterpart to ChatGPT, generates new music from text commands. Users prompt the model by describing the style of music they want, and they can optionally supply an existing melody to condition the output.
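As a rough illustration of this text-to-music prompting, the sketch below uses the MusicGen checkpoints published on Hugging Face via the `transformers` library. The model name, token budget, and prompt are assumptions for demonstration, not details from the article, and the first run downloads the model weights:

```python
# Sketch: generating ~5 seconds of music from a text prompt with a
# public MusicGen checkpoint (requires transformers, torch, and scipy;
# downloads model weights on first use).
from transformers import AutoProcessor, MusicgenForConditionalGeneration
import scipy.io.wavfile

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Describe the desired style in plain text, as the article explains.
inputs = processor(
    text=["a pop dance song with catchy melodies and tropical percussion"],
    padding=True,
    return_tensors="pt",
)

# Roughly 256 audio tokens correspond to about 5 seconds of audio.
audio = model.generate(**inputs, do_sample=True, max_new_tokens=256)

# Save the generated waveform at the model's native sampling rate.
rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=rate, data=audio[0, 0].numpy())
```

Melody conditioning, mentioned in the announcement, uses a separate melody-capable checkpoint and an additional audio input rather than this text-only path.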
A video shared by Felix Kreuk, an artificial intelligence researcher at Meta, shows what MusicGen is capable of. In it, an existing piece of music is loaded and then altered by entering a text command. In another video, a track is generated without any starting audio at all, from the prompt “a pop dance song suitable for the beach, with catchy melodies, tropical percussion and lively rhythms.”
To train the model, the research team used 20,000 hours of licensed music, including 10,000 high-quality tracks from an internal dataset along with music from Shutterstock and Pond5. The model can generate around 12 seconds of audio, and a demo is available on Hugging Face.
The use of artificial intelligence in music production is growing rapidly. A few months ago, Google announced its own model, MusicLM, which likewise turns text into music.