
# 🎵🎵🎵 AudioLCM: Text-to-Audio Generation with Latent Consistency Models

We develop AudioLCM, which builds on latent consistency models (LCMs), for text-to-audio generation.

## Code

Our code is released at https://github.com/liuhuadai/AudioLCM.

Please follow the instructions in the repository for installation, usage and experiments.

## Quickstart Guide

Download the AudioLCM model and generate audio from a text prompt:

```python
from pythonscripts.InferAPI import AudioLCMInfer

prompt = "Constant rattling noise and sharp vibrations"
config_path = "./audiolcm.yaml"    # path to the model configuration
model_path = "./audiolcm.ckpt"     # path to the AudioLCM checkpoint
vocoder_path = "./model/vocoder"   # path to the pretrained vocoder
audio_path = AudioLCMInfer(prompt, config_path=config_path, model_path=model_path, vocoder_path=vocoder_path)
```
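
The call above returns the location of the generated audio. As a minimal sketch — assuming the returned `audio_path` points to a standard `.wav` file and that the `soundfile` package is installed — you can load and inspect the result like this:

```python
import soundfile as sf

# Load the generated waveform; sf.read returns the samples and the sample rate.
audio, sample_rate = sf.read(audio_path)
print(f"Generated {len(audio) / sample_rate:.2f}s of audio at {sample_rate} Hz: {audio_path}")
```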

Use the AudioLCMBatchInfer function to generate multiple audio samples for a batch of text prompts:

```python
from pythonscripts.InferAPI import AudioLCMBatchInfer

prompts = [
    "Constant rattling noise and sharp vibrations",
    "A rocket flies by followed by a loud explosion and fire crackling as a truck engine runs idle",
    "Humming and vibrating with a man and children speaking and laughing",
]
config_path = "./audiolcm.yaml"    # path to the model configuration
model_path = "./audiolcm.ckpt"     # path to the AudioLCM checkpoint
vocoder_path = "./model/vocoder"   # path to the pretrained vocoder
audio_path = AudioLCMBatchInfer(prompts, config_path=config_path, model_path=model_path, vocoder_path=vocoder_path)
```
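
If a full batch does not fit in GPU memory, one alternative is to loop over the prompts with the single-prompt API shown above. This is only a sketch that trades throughput for a smaller memory footprint; it reuses the `prompts` and path variables defined in the previous block:

```python
from pythonscripts.InferAPI import AudioLCMInfer

# Generate audio for each prompt one at a time and collect the output paths.
audio_paths = []
for prompt in prompts:
    audio_paths.append(
        AudioLCMInfer(prompt, config_path=config_path, model_path=model_path, vocoder_path=vocoder_path)
    )
```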

## Demo

🎵🎵 You are welcome to try our demo: https://huggingface.co/spaces/AIGC-Audio/AudioLCM 🎵🎵
