## Overview
MeanAudio is a novel MeanFlow-based model tailored for fast and faithful text-to-audio generation. It can synthesize realistic sound in a single sampling step, achieving a real-time factor (RTF) of 0.013 on a single NVIDIA RTX 3090 GPU, and it also demonstrates strong performance in multi-step generation.
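For intuition, RTF is wall-clock synthesis time divided by audio duration, so an RTF of 0.013 means a clip is generated in about 1.3% of its own length. The helper below is a purely illustrative sketch of that arithmetic, not part of the repository:

```python
def real_time_factor(synthesis_seconds: float, audio_seconds: float) -> float:
    """RTF = wall-clock synthesis time divided by the duration of the generated audio."""
    return synthesis_seconds / audio_seconds

# Example with illustrative numbers: generating a 10-second clip in 0.13 s of
# wall-clock time gives an RTF of 0.013, i.e. roughly 77x faster than real time.
rtf = real_time_factor(0.13, 10.0)
print(f"RTF: {rtf:.3f}  ({1 / rtf:.0f}x faster than real time)")
```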
## Environment Setup
1. Create and activate a new conda environment, then install PyTorch:

```bash
conda create -n meanaudio python=3.11
conda activate meanaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 --upgrade
```
2. Clone the repository and install it with pip:

```bash
git clone https://github.com/xiquan-li/MeanAudio.git
cd MeanAudio
pip install -e .
```
## Quick Start
To generate audio with our pre-trained model, simply run:

```bash
python demo.py --prompt 'your prompt' --num_steps 1
```
This will automatically download the pre-trained checkpoints from Hugging Face and generate audio according to your prompt. The output audio will be saved to `MeanAudio/output/`, and the checkpoints to `MeanAudio/weights/`.
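To render several prompts in a row, a small wrapper can build the `demo.py` invocations. Only the `--prompt` and `--num_steps` flags shown above are assumed; `demo_command` is a hypothetical helper sketched here, not part of the repository:

```python
import subprocess

def demo_command(prompt: str, num_steps: int = 1) -> list[str]:
    """Build a demo.py invocation using the flags from the quick start."""
    return ["python", "demo.py", "--prompt", prompt, "--num_steps", str(num_steps)]

prompts = ["a dog barking in the distance", "rain falling on a tin roof"]
for prompt in prompts:
    cmd = demo_command(prompt)
    print(" ".join(cmd))
    # Uncomment to actually run each generation from inside the MeanAudio directory:
    # subprocess.run(cmd, check=True)
```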
Alternatively, you can manually download the pre-trained models from this folder and put them into `MeanAudio/weights/`. Here is a detailed explanation of the downloaded checkpoints:
- `fluxaudio_fm.pth`: the Flux-style flow transformer trained on the WavCaps, AudioCaps, and Clotho datasets with the standard flow-matching objective. It is capable of generating audio with multiple ($\geq 25$) sampling steps. You can run `scripts/flowmatching/infer_flowmatching.sh` to generate sound with this model.
- `meanaudio_mf.pth`: the Flux-style flow transformer fine-tuned on AudioCaps with the MeanFlow objective, supporting both single-step and multi-step audio generation. You can run `scripts/meanflow/infer_meanflow.sh` to generate sound with it.
- Others: the BigVGAN vocoder (`best_netG.pt`), the 1D VAE (`v1-16.pth`), and the CLAP encoder (`music_speech_audioset_epoch_15_esc_89.98.pt`).
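Since the demo fetches several separate files, a quick sanity check that everything landed in `MeanAudio/weights/` can be handy. The helper below is a hypothetical sketch (not part of the repository) built from the checkpoint filenames listed above:

```python
from pathlib import Path

# Checkpoint filenames listed above; the directory is the default weights folder.
EXPECTED_CHECKPOINTS = [
    "fluxaudio_fm.pth",
    "meanaudio_mf.pth",
    "best_netG.pt",
    "v1-16.pth",
    "music_speech_audioset_epoch_15_esc_89.98.pt",
]

def missing_checkpoints(weights_dir: str) -> list[str]:
    """Return the expected checkpoint files not present in weights_dir."""
    root = Path(weights_dir)
    return [name for name in EXPECTED_CHECKPOINTS if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_checkpoints("MeanAudio/weights")
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
    else:
        print("All checkpoints present.")
```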