This is a ControlNet model that converts main-melody spectrograms into accompaniment spectrograms. It was trained on top of Riffusion using music downloaded from YouTube Music.
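Assuming the checkpoint loads as a standard diffusers ControlNet, inference would look roughly like the sketch below. The repo ID `path/to/this-controlnet` and the prompt are placeholders, not values specified by this card.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Placeholder repo ID; replace with this model's actual Hub ID.
controlnet = ControlNetModel.from_pretrained(
    "path/to/this-controlnet", torch_dtype=torch.float16
)

# Riffusion is a Stable Diffusion checkpoint fine-tuned on spectrograms,
# so it can serve as the base pipeline for this ControlNet.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Conditioning image: a spectrogram of the main melody (vocals),
# resized to the 512x512 resolution Riffusion expects.
melody_spec = Image.open("vocals_spectrogram.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="accompaniment",  # placeholder prompt; the card does not specify one
    image=melody_spec,
    num_inference_steps=30,
).images[0]
result.save("accompaniment_spectrogram.png")
```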
The main melody and accompaniment are separated with Spleeter. The model assumes the main melody is vocals; main melodies other than vocals have not been tested. The training dataset contains vocals in Traditional Chinese, English, and Japanese.
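To prepare inputs the same way, a song can be split into vocals and accompaniment with Spleeter's 2-stem model. This is a minimal sketch using Spleeter's Python API; the file names are illustrative.

```python
from spleeter.separator import Separator

# "spleeter:2stems" splits audio into vocals (the main melody here)
# and accompaniment, matching the separation used for training.
separator = Separator("spleeter:2stems")
separator.separate_to_file("song.mp3", "output/")
# Produces output/song/vocals.wav and output/song/accompaniment.wav.
```

The vocals stem would then be rendered to a spectrogram image before being passed to the ControlNet as the conditioning input.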