November 8, 2024
Adobe Announces a Music-Generating AI Tool Prototype; How It Works

Adobe announced a new experimental artificial intelligence (AI)-based tool on Wednesday that can generate music from simple text prompts. Named Project Music GenAI Control, Adobe says the AI tool not only generates music, but also gives users “pixel-level control for music” by letting them edit, lengthen, shorten, remix, and create loops. Additional editing options allow granular control over the generated clip. Adobe’s latest AI tool arrives a week after the company announced an AI assistant for Acrobat and Reader that can summarise and analyse long PDFs.

Project Music GenAI Control, as per Adobe’s announcement, is a music creation and editing interface that works like most other AI tools. Users get a text field where they can add plain text prompts such as “happy rock song”, “intense hip hop beats”, or “sad jazz”. Once the prompt is submitted, the AI generates a music clip based on the request. The generated clip contains no vocals, only musical instruments. But this is just one part of what the under-development platform can do.
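Adobe has not released an API or any code for Project Music GenAI Control, but the basic prompt-to-clip workflow it describes can be illustrated with Meta’s open-source AudioCraft toolkit (mentioned later in this article), whose MusicGen model also turns a short text description into an instrumental clip. The sketch below is only an analogous open-source example, not Adobe’s implementation; the prompt strings are the ones quoted above.

from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained open-source text-to-music model (a stand-in, not Adobe's tool).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=10)  # length of each generated clip, in seconds

# Plain-text prompts of the kind described in the article.
prompts = ['happy rock song', 'intense hip hop beats', 'sad jazz']
wav = model.generate(prompts)  # one instrumental waveform per prompt, no vocals

# Save each clip as a loudness-normalised audio file.
for idx, clip in enumerate(wav):
    audio_write(f'clip_{idx}', clip.cpu(), model.sample_rate, strategy='loudness')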

As per Adobe, once the music track has been created, users can then access multiple editing tools to change how the music sounds. Some of these tools can change the intensity, structure, tempo, and more at any point in the clip. Users can further customise the clip to their preference by extending its length and adding another sequence to the music, remixing part or all of the clip, or even turning it into a repeating loop.

One notable feature of the AI music generator is that it lets users reshape the audio using a reference melody, which can be any modern or classic music piece. Once the reference is set, users can change the style and genre of the music. Adobe says this deep control over the generated clip makes it easy to use for video intros, podcasts, original songs and music pieces, and more.
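Adobe has not explained how its reference-melody feature works under the hood, but conditioning generation on an existing tune is also possible with open-source models: the melody variant of Meta’s MusicGen accepts a reference audio track alongside the text prompt and follows its melodic contour in the requested style. A minimal sketch under those assumptions, with reference.mp3 as a hypothetical local file:

import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Melody-capable open-source model (an illustrative stand-in, not Adobe's tool).
model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=10)

# Load the reference melody that should steer the generated clip.
melody, sample_rate = torchaudio.load('reference.mp3')  # hypothetical file path

# Generate a clip in a new style that keeps the reference melody's contour.
wav = model.generate_with_chroma(['sad jazz'], melody[None], sample_rate)
audio_write('melody_restyled', wav[0].cpu(), model.sample_rate, strategy='loudness')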

On the question of training data, Adobe claims that it only used publicly available data to train its AI model. The company has not shared any details about the architecture of the underlying foundation model or the interface. It is worth mentioning that Adobe is not the only player in the AI music generation space: Google has its own music generator, MusicLM, which is still under development, and Meta has AudioCraft, an open-source AI model for music generation that is available on GitHub.

