StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation
*Shuyuan Tu1, Yueming Pan3, Yinming Huang1, Xintong Han4, Zhen Xing1, Qi Dai2, Chong Luo2, Zuxuan Wu1, Yu-Gang Jiang1
[1Fudan University; 2Microsoft Research Asia; 3Xi'an Jiaotong University; 4Hunyuan, Tencent Inc]
Audio-driven avatar videos generated by StableAvatar, demonstrating its ability to synthesize infinite-length, identity-preserving videos. All videos are synthesized directly by StableAvatar without any face-related post-processing tools, such as the face-swapping tool FaceFusion or face restoration models like GFP-GAN and CodeFormer.
Comparison results between StableAvatar and state-of-the-art (SOTA) audio-driven avatar video generation models highlight the superior performance of StableAvatar in delivering infinite-length, high-fidelity, identity-preserving avatar animation.
Overview
Overview of the StableAvatar framework.
Current diffusion models for audio-driven avatar video generation struggle to synthesize long videos with natural audio synchronization and identity consistency. This paper presents StableAvatar, the first end-to-end video diffusion transformer that synthesizes infinite-length, high-quality videos without post-processing. Conditioned on a reference image and audio, StableAvatar integrates tailored training and inference modules to enable infinite-length video generation. We observe that the main reason preventing existing models from generating long videos lies in their audio modeling. They typically rely on third-party off-the-shelf extractors to obtain audio embeddings, which are then directly injected into the diffusion model via cross-attention. Since current diffusion backbones lack any audio-related priors, this approach causes severe latent distribution error accumulation across video clips, causing the latent distribution of subsequent segments to gradually drift away from the optimal distribution. To address this, StableAvatar introduces a novel Time-step-aware Audio Adapter that prevents error accumulation via time-step-aware modulation. During inference, we propose a novel Audio Native Guidance Mechanism to further enhance audio synchronization by leveraging the diffusion's own evolving joint audio-latent prediction as a dynamic guidance signal. To enhance the smoothness of the infinite-length videos, we introduce a Dynamic Weighted Sliding-window Strategy that fuses latents over time. Experiments on benchmarks show the effectiveness of StableAvatar both qualitatively and quantitatively.
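As a rough illustration of the sliding-window idea, the sketch below blends overlapping latent windows with linearly ramped weights. The tensor layout, window length, overlap size, and linear cross-fade weights are illustrative assumptions, not StableAvatar's exact fusion rule.

# Minimal sketch of weighted sliding-window fusion over video latents.
# Assumptions (not the paper's exact implementation): latents are (C, T, H, W),
# adjacent windows overlap by `overlap` frames, and weights ramp linearly.
import torch

def fuse_windows(window_latents, overlap):
    """Blend a list of per-window latents (each (C, T, H, W)) into one sequence."""
    fused = window_latents[0]
    for nxt in window_latents[1:]:
        # Linear cross-fade weights over the overlapping frames.
        w = torch.linspace(0, 1, overlap).view(1, overlap, 1, 1)
        blended = (1 - w) * fused[:, -overlap:] + w * nxt[:, :overlap]
        fused = torch.cat([fused[:, :-overlap], blended, nxt[:, overlap:]], dim=1)
    return fused

# Example: three 16-frame latent windows with a 4-frame overlap.
windows = [torch.randn(16, 16, 60, 104) for _ in range(3)]
video_latents = fuse_windows(windows, overlap=4)
print(video_latents.shape)  # torch.Size([16, 40, 60, 104])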
News
[2025-8-11]: 🔥 The project page, code, technical report, and a basic model checkpoint have been released. The LoRA training code, the evaluation dataset, and StableAvatar-pro will be released very soon. Stay tuned!
🛠️ To-Do List
- StableAvatar-1.3B-basic
- Inference Code
- Data Pre-Processing Code (Audio Extraction)
- Data Pre-Processing Code (Vocal Separation)
- Training Code
- LoRA Training Code (Before 2025.8.17)
- LoRA Finetuning Code (Before 2025.8.17)
- Full Finetuning Code (Before 2025.8.17)
- Inference Code with Audio Native Guidance
- StableAvatar-pro
🚀 Quickstart
The basic version of the model checkpoint (Wan2.1-1.3B-based) supports generating infinite-length videos at 480x832, 832x480, or 512x512 resolution. If you encounter out-of-memory issues, you can reduce the number of animated frames or the output resolution.
🧱 Environment setup
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
# Optional: install flash_attn to accelerate attention computation
pip install flash_attn
🧱 Download weights
If you encounter connection issues with Hugging Face, you can use the mirror endpoint by setting the environment variable:
export HF_ENDPOINT=https://hf-mirror.com
Please download weights manually as follows:
pip install "huggingface_hub[cli]"
cd StableAvatar
mkdir checkpoints
huggingface-cli download FrancisRing/StableAvatar --local-dir ./checkpoints
All the downloaded weights should be placed under checkpoints, and the overall file structure of this project should be organized as follows:
StableAvatar/
├── accelerate_config
├── deepspeed_config
├── examples
├── wan
├── checkpoints
│   ├── Kim_Vocal_2.onnx
│   ├── wav2vec2-base-960h
│   ├── Wan2.1-Fun-V1.1-1.3B-InP
│   └── StableAvatar-1.3B
├── inference.py
├── inference.sh
├── train_1B_square.py
├── train_1B_square.sh
├── train_1B_vec_rec.py
├── train_1B_vec_rec.sh
├── audio_extractor.py
├── vocal_seperator.py
└── requirement.txt
🧱 Audio Extraction
Given the target video file (.mp4), you can use the following command to obtain the corresponding audio file (.wav):
python audio_extractor.py --video_path="path/test/video.mp4" --saved_audio_path="path/test/audio.wav"
🧱 Vocal Separation
As noisy background music may negatively impact the performance of StableAvatar to some extent, you can separate the vocals from the audio file for better lip synchronization. Given the path to an audio file (.wav), you can run the following command to extract the corresponding vocal signals:
pip install audio-separator
python vocal_seperator.py --audio_separator_model_file="path/StableAvatar/checkpoints/Kim_Vocal_2.onnx" --audio_file_path="path/test/audio.wav" --saved_vocal_path="path/test/vocal.wav"
🧱 Base Model Inference
A sample configuration for testing is provided as inference.sh. You can easily modify the various configurations according to your needs.
bash inference.sh
Wan2.1-1.3B-based StableAvatar supports audio-driven avatar video generation at three different resolution settings: 512x512, 480x832, and 832x480. You can modify "--width" and "--height" in inference.sh to set the resolution of the animation. "--output_dir" in inference.sh is the path where the generated animation is saved. "--validation_reference_path", "--validation_driven_audio_path", and "--validation_prompts" in inference.sh refer to the path of the reference image, the path of the driving audio, and the text prompts, respectively.
Prompts also matter a great deal. It is recommended to structure them as [description of first frame]-[description of human behavior]-[description of background (optional)], for example: "A young woman stands in a sunlit kitchen, speaking to the camera with natural head movements, a softly blurred window in the background."
"--pretrained_model_name_or_path", "--pretrained_wav2vec_path", and "--transformer_path" in inference.sh are the paths of the pretrained Wan2.1-1.3B weights, the pretrained Wav2Vec2.0 weights, and the pretrained StableAvatar weights, respectively.
"--sample_steps", "--overlap_window_length", and "--clip_sample_n_frames" refer to the total number of inference steps, the overlapping context length between two context windows, and the number of frames synthesized per batch/context window, respectively. The recommended range for --sample_steps is [30-50]; more steps bring higher quality. The recommended range for --overlap_window_length is [5-15]; a longer overlap yields higher quality but slower inference. "--sample_text_guide_scale" and "--sample_audio_guide_scale" are the classifier-free guidance (CFG) scales for the text prompt and the audio, and the recommended range for both is [3-6]. You can increase the audio CFG scale to improve lip synchronization with the audio.
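To make the flags above concrete, here is a hypothetical launch assembled from them. It assumes inference.sh simply forwards these documented flags to inference.py; every path and numeric value below is a placeholder you should adapt to your own setup.

# Hypothetical example only: inference.sh is assumed to forward these documented
# flags to inference.py; all paths and values are placeholders.
import subprocess

cmd = [
    "python", "inference.py",
    "--pretrained_model_name_or_path", "./checkpoints/Wan2.1-Fun-V1.1-1.3B-InP",
    "--pretrained_wav2vec_path", "./checkpoints/wav2vec2-base-960h",
    "--transformer_path", "./checkpoints/StableAvatar-1.3B/transformer3d-square.pt",
    "--validation_reference_path", "./examples/reference.png",
    "--validation_driven_audio_path", "./examples/audio.wav",
    "--validation_prompts", "A woman stands in a studio, speaking calmly to the camera.",
    "--output_dir", "./outputs",
    "--width", "512", "--height", "512",
    "--sample_steps", "40",                # recommended range [30-50]
    "--overlap_window_length", "10",       # recommended range [5-15]
    "--clip_sample_n_frames", "81",        # frames per context window (placeholder)
    "--sample_text_guide_scale", "4.0",    # text CFG, recommended range [3-6]
    "--sample_audio_guide_scale", "4.5",   # audio CFG, recommended range [3-6]
    "--GPU_memory_mode", "model_full_load",
]
subprocess.run(cmd, check=True)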
We provide 6 cases in different resolution settings in path/StableAvatar/examples for validation. ❤️❤️ Please feel free to try it out and enjoy the endless entertainment of infinite-length avatar video generation! ❤️❤️
💡 Tips
- Wan2.1-1.3B-based StableAvatar weights come in two versions, transformer3d-square.pt and transformer3d-rec-vec.pt, which are trained on two video datasets at two different resolution settings. Both versions support generating audio-driven avatar videos at three resolution settings: 512x512, 480x832, and 832x480. You can modify --transformer_path in inference.sh to switch between the two versions.
- If you have limited GPU resources, you can change the loading mode of StableAvatar by modifying "--GPU_memory_mode" in inference.sh. The options for "--GPU_memory_mode" are model_full_load, sequential_cpu_offload, model_cpu_offload_and_qfloat8, and model_cpu_offload. In particular, setting --GPU_memory_mode to sequential_cpu_offload brings total GPU memory consumption down to approximately 3GB at the cost of slower inference, while model_cpu_offload cuts GPU memory usage roughly in half compared to model_full_load.
- If you have multiple GPUs, you can speed up inference with multi-GPU execution by modifying "--ulysses_degree" and "--ring_degree" in inference.sh. For example, with 8 GPUs you can set --ulysses_degree=4 and --ring_degree=2. Note that ulysses_degree * ring_degree must equal the total number of GPUs (the world size). You can also add --fsdp_dit in inference.sh to activate FSDP in the DiT and further reduce GPU memory consumption.
- The videos synthesized by StableAvatar have no audio track. To obtain a high-quality MP4 file with audio, we recommend running ffmpeg on the output as follows:
ffmpeg -i video_without_audio.mp4 -i /path/audio.wav -c:v copy -c:a aac -shortest /path/output_with_audio.mp4
🧱 Model Training
🔥🔥 It's worth noting that if you're looking to train a conditioned Video Diffusion Transformer (DiT) model, such as Wan2.1, this training tutorial will also be helpful. 🔥🔥 The training dataset has to be organized as follows:
talking_face_data/
├── rec
│   ├── speech
│   │   ├── 00001
│   │   │   ├── sub_clip.mp4
│   │   │   ├── audio.wav
│   │   │   ├── images
│   │   │   │   ├── frame_0.png
│   │   │   │   ├── frame_1.png
│   │   │   │   ├── frame_2.png
│   │   │   │   └── ...
│   │   │   ├── face_masks
│   │   │   │   ├── frame_0.png
│   │   │   │   ├── frame_1.png
│   │   │   │   ├── frame_2.png
│   │   │   │   └── ...
│   │   │   └── lip_masks
│   │   │       ├── frame_0.png
│   │   │       ├── frame_1.png
│   │   │       ├── frame_2.png
│   │   │       └── ...
│   │   ├── 00002
│   │   │   ├── sub_clip.mp4
│   │   │   ├── audio.wav
│   │   │   ├── images
│   │   │   ├── face_masks
│   │   │   └── lip_masks
│   │   └── ...
│   ├── singing
│   │   ├── 00001
│   │   │   ├── sub_clip.mp4
│   │   │   ├── audio.wav
│   │   │   ├── images
│   │   │   ├── face_masks
│   │   │   └── lip_masks
│   │   └── ...
│   └── dancing
│       ├── 00001
│       │   ├── sub_clip.mp4
│       │   ├── audio.wav
│       │   ├── images
│       │   ├── face_masks
│       │   └── lip_masks
│       └── ...
├── vec
│   ├── speech
│   │   ├── 00001
│   │   │   ├── sub_clip.mp4
│   │   │   ├── audio.wav
│   │   │   ├── images
│   │   │   ├── face_masks
│   │   │   └── lip_masks
│   │   └── ...
│   ├── singing
│   │   ├── 00001
│   │   │   ├── sub_clip.mp4
│   │   │   ├── audio.wav
│   │   │   ├── images
│   │   │   ├── face_masks
│   │   │   └── lip_masks
│   │   └── ...
│   └── dancing
│       ├── 00001
│       │   ├── sub_clip.mp4
│       │   ├── audio.wav
│       │   ├── images
│       │   ├── face_masks
│       │   └── lip_masks
│       └── ...
├── square
│   ├── speech
│   │   ├── 00001
│   │   │   ├── sub_clip.mp4
│   │   │   ├── audio.wav
│   │   │   ├── images
│   │   │   ├── face_masks
│   │   │   └── lip_masks
│   │   └── ...
│   ├── singing
│   │   ├── 00001
│   │   │   ├── sub_clip.mp4
│   │   │   ├── audio.wav
│   │   │   ├── images
│   │   │   ├── face_masks
│   │   │   └── lip_masks
│   │   └── ...
│   └── dancing
│       ├── 00001
│       │   ├── sub_clip.mp4
│       │   ├── audio.wav
│       │   ├── images
│       │   ├── face_masks
│       │   └── lip_masks
│       └── ...
├── video_rec_path.txt
├── video_square_path.txt
└── video_vec_path.txt
StableAvatar is trained on mixed-resolution videos, with 512x512 videos stored in talking_face_data/square, 480x832 videos stored in talking_face_data/vec, and 832x480 videos stored in talking_face_data/rec. Each folder in talking_face_data/square, talking_face_data/rec, or talking_face_data/vec contains three subfolders holding different types of videos (speech, singing, and dancing).
All .png image files are named in the format frame_i.png, such as frame_0.png, frame_1.png, and so on.
Each numbered folder (00001, 00002, 00003, ...) holds the data of one individual video clip.
Within each clip folder, the three subfolders images, face_masks, and lip_masks store the RGB frames, the corresponding human face masks, and the corresponding human lip masks, respectively.
sub_clip.mp4 is the RGB video corresponding to the frames in images, and audio.wav is the corresponding audio file.
video_square_path.txt, video_rec_path.txt, and video_vec_path.txt record the clip folder paths under talking_face_data/square, talking_face_data/rec, and talking_face_data/vec, respectively.
For example, the content of video_rec_path.txt is shown as follows:
path/StableAvatar/talking_face_data/rec/speech/00001
path/StableAvatar/talking_face_data/rec/speech/00002
...
path/StableAvatar/talking_face_data/rec/singing/00003
path/StableAvatar/talking_face_data/rec/singing/00004
...
path/StableAvatar/talking_face_data/rec/dancing/00005
path/StableAvatar/talking_face_data/rec/dancing/00006
...
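If you need to (re)generate these path-list files, a small helper like the following could write them by scanning the dataset layout described above. This script is not part of the official tooling; the dataset root and the assumption that every clip folder should be listed are mine.

# Hypothetical helper (not part of the official repo): build video_rec_path.txt,
# video_vec_path.txt, and video_square_path.txt by listing every clip folder
# under talking_face_data/<split>/<category>/<clip_id>.
from pathlib import Path

ROOT = Path("path/StableAvatar/talking_face_data")  # adjust to your dataset root

for split in ["rec", "vec", "square"]:
    clip_dirs = sorted(
        p for p in (ROOT / split).glob("*/*")  # e.g. rec/speech/00001
        if p.is_dir()
    )
    out_file = ROOT / f"video_{split}_path.txt"
    out_file.write_text("\n".join(str(p) for p in clip_dirs) + "\n")
    print(f"Wrote {len(clip_dirs)} paths to {out_file}")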
If you only have raw videos, you can leverage ffmpeg to extract frames from the raw videos (speech) and store them in the images subfolder:
ffmpeg -i raw_video_1.mp4 -q:v 1 -start_number 0 path/StableAvatar/talking_face_data/rec/speech/00001/images/frame_%d.png
The obtained frames are saved in path/StableAvatar/talking_face_data/rec/speech/00001/images.
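To process many raw videos at once, a small wrapper around the same ffmpeg command could look like the sketch below. The input folder, output split/category, and clip numbering scheme are assumptions for illustration.

# Hypothetical batch wrapper (illustration only) around the ffmpeg command above:
# extracts frames from every .mp4 in a folder into .../<clip_id>/images/frame_%d.png.
import subprocess
from pathlib import Path

RAW_DIR = Path("path/raw_speech_videos")                       # your raw .mp4 files
OUT_ROOT = Path("path/StableAvatar/talking_face_data/rec/speech")

for idx, video in enumerate(sorted(RAW_DIR.glob("*.mp4")), start=1):
    images_dir = OUT_ROOT / f"{idx:05d}" / "images"
    images_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(video), "-q:v", "1", "-start_number", "0",
         str(images_dir / "frame_%d.png")],
        check=True,
    )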
For extracting the human face masks, please refer to the StableAnimator repo; the Human Face Mask Extraction section of its tutorial provides off-the-shelf code.
For extracting the human lip masks, you can run the following command:
pip install mediapipe
python lip_mask_extractor.py --folder_root="path/StableAvatar/talking_face_data/rec/singing" --start=1 --end=500
--folder_root refers to the root path of the training datasets. --start and --end specify the starting and ending indices of the selected training data. For example, --start=1 --end=500 indicates that the human lip mask extraction will start at path/StableAvatar/talking_face_data/rec/singing/00001 and end at path/StableAvatar/talking_face_data/rec/singing/00500.
For details on extracting the corresponding audio, please refer to the Audio Extraction section. Once your dataset is organized exactly as outlined above, you can train your Wan2.1-1.3B-based StableAvatar by running the following command:
# Training StableAvatar on a single resolution setting (512x512) in a single machine
bash train_1B_square.sh
# Training StableAvatar on a single resolution setting (512x512) in multiple machines
bash train_1B_square_64.sh
# Training StableAvatar on a mixed resolution setting (480x832 and 832x480) in a single machine
bash train_1B_rec_vec.sh
# Training StableAvatar on a mixed resolution setting (480x832 and 832x480) in multiple machines
bash train_1B_rec_vec_64.sh
For the parameter details of train_1B_square.sh and train_1B_rec_vec.sh: CUDA_VISIBLE_DEVICES specifies the GPU devices. In my setting, I use 4 NVIDIA A100 80GB GPUs to train StableAvatar (CUDA_VISIBLE_DEVICES=3,2,1,0).
--pretrained_model_name_or_path, --pretrained_wav2vec_path, and --output_dir refer to the pretrained Wan2.1-1.3B path, the pretrained Wav2Vec2.0 path, and the path where checkpoints of the trained StableAvatar are saved.
--train_data_square_dir, --train_data_rec_dir, and --train_data_vec_dir are the paths of video_square_path.txt, video_rec_path.txt, and video_vec_path.txt, respectively.
--validation_reference_path and --validation_driven_audio_path are the paths of the validation reference image and the validation driving audio.
--video_sample_n_frames is the number of frames that StableAvatar processes in a single batch.
--num_train_epochs is the number of training epochs. Note that it defaults to infinite, so you can manually terminate the training process once you observe that your StableAvatar has reached its peak performance.
For the parameter details of train_1B_square_64.sh and train_1B_rec_vec_64.sh, we set the GPU configuration in path/StableAvatar/accelerate_config/accelerate_config_machine_1B_multiple.yaml. In my setting, the training setup consists of 8 nodes, each equipped with 8 NVIDIA A100 80GB GPUs, for training StableAvatar.
The overall file structure of StableAvatar during training is as follows:
StableAvatar/
├── accelerate_config
├── deepspeed_config
├── talking_face_data
├── examples
├── wan
├── checkpoints
│   ├── Kim_Vocal_2.onnx
│   ├── wav2vec2-base-960h
│   ├── Wan2.1-Fun-V1.1-1.3B-InP
│   └── StableAvatar-1.3B
├── inference.py
├── inference.sh
├── train_1B_square.py
├── train_1B_square.sh
├── train_1B_vec_rec.py
├── train_1B_vec_rec.sh
├── audio_extractor.py
├── vocal_seperator.py
└── requirement.txt
It is worth noting that training StableAvatar requires approximately 50GB of VRAM due to the mixed-resolution (480x832 and 832x480) training pipeline. However, if you train StableAvatar exclusively on 512x512 videos, the VRAM requirement drops to approximately 40GB. Additionally, the backgrounds of the selected training videos should remain static, as this helps the diffusion model compute an accurate reconstruction loss, and the audio should be clear and free from excessive background noise.
To train the Wan2.1-14B-based StableAvatar, you can run the following commands:
# Training StableAvatar on a mixed resolution setting (480x832, 832x480, and 512x512) in multiple machines
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-480P --local-dir ./checkpoints/Wan2.1-I2V-14B-480P
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-720P --local-dir ./checkpoints/Wan2.1-I2V-14B-720P # Optional
bash train_14B.sh
We utilize DeepSpeed stage-2 to train the Wan2.1-14B-based StableAvatar. The GPU configuration can be modified in path/StableAvatar/accelerate_config/accelerate_config_machine_14B_multiple.yaml. The DeepSpeed optimizer configuration and scheduler configuration are in path/StableAvatar/deepspeed_config/zero_stage2_config.json.
Notably, we observe that the Wan2.1-1.3B-based StableAvatar is already capable of synthesizing infinite-length, high-quality avatar videos. The Wan2.1-14B backbone significantly increases inference latency and GPU memory consumption during training, indicating a limited performance-to-resource ratio.
If you want to train a 720P Wan2.1-1.3B-based or Wan2.1-14B-based StableAvatar, you can directly modify the height and width of the dataloader (480p --> 720p) in train_1B_square.py, train_1B_vec_rec.py, or train_14B.py.
🧱 VRAM Requirement and Runtime
For a 5-second video (480x832, fps=25), the basic model (--GPU_memory_mode="model_full_load") requires approximately 18GB of VRAM and finishes in 3 minutes on an RTX 4090 GPU.
🔥🔥 Theoretically, StableAvatar is capable of synthesizing hours of video without significant quality degradation; however, the 3D VAE decoder demands significant GPU memory, especially when decoding 10k+ frames. You have the option to run the VAE decoder on CPU. 🔥🔥
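As a rough sketch of that CPU option, the helper below moves a VAE module to CPU before decoding. The names vae and latents and the decode call are placeholders, not StableAvatar's actual API.

# Hypothetical sketch (not StableAvatar's actual API): offload 3D VAE decoding to CPU
# so that decoding very long latent sequences does not exhaust GPU memory.
import torch

def decode_latents_on_cpu(vae, latents):
    # `vae` and `latents` are placeholders for the pipeline's 3D VAE module and the
    # denoised video latents; the real decode call/signature may differ.
    vae = vae.to("cpu").float().eval()
    with torch.no_grad():
        return vae.decode(latents.detach().to("cpu", dtype=torch.float32))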
Contact
If you have any suggestions or find our work helpful, feel free to contact me:
Email: [email protected]
If you find our work useful, please consider giving a star ⭐ to this GitHub repository and citing it ❤️:
@article{tu2025stableavatar,
title={StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation},
  author={Tu, Shuyuan and Pan, Yueming and Huang, Yinming and Han, Xintong and Xing, Zhen and Dai, Qi and Luo, Chong and Wu, Zuxuan and Jiang, Yu-Gang},
journal={arXiv preprint arXiv:2508.08248},
year={2025}
}