nielsr (HF Staff) committed
Commit f0b8b71 · verified · 1 Parent(s): 401b568

Improve model card: Add abstract and detailed quick start guide


This PR enhances the model card by:

1. **Adding the paper abstract**: Providing a concise overview of the model's purpose and key features directly on the model page.
2. **Including a detailed Quick Start guide**: Replacing the generic link to GitHub with the actual environment setup, model download instructions, and a runnable usage code snippet from the project's GitHub README. This makes it much easier for users to get started with the model directly from the Hugging Face Hub.

These changes aim to make the model card more comprehensive and user-friendly.

Files changed (1)
  1. README.md +90 -3
README.md CHANGED
@@ -1,9 +1,10 @@
 ---
- license: apache-2.0
 base_model:
 - Wan-AI/Wan2.1-T2V-14B
+ license: apache-2.0
 pipeline_tag: text-to-video
 ---
+
 <div align="center">

 <h1>
@@ -28,6 +29,12 @@ pipeline_tag: text-to-video

 ---

+ ## Abstract
+
+ RGBA video generation, which includes an alpha channel to represent transparency, is gaining increasing attention across a wide range of applications. However, existing methods often neglect visual quality, limiting their practical usability. In this paper, we propose Wan-Alpha, a new framework that generates transparent videos by learning both RGB and alpha channels jointly. We design an effective variational autoencoder (VAE) that encodes the alpha channel into the RGB latent space. Then, to support the training of our diffusion transformer, we construct a high-quality and diverse RGBA video dataset. Compared with state-of-the-art methods, our model demonstrates superior performance in visual quality, motion realism, and transparency rendering. Notably, our model can generate a wide variety of semi-transparent objects, glowing effects, and fine-grained details such as hair strands. The released model is available on our website: this https URL.
+
+ ---
+
 ## 🔥 News
 * **[2025.09.30]** Released Wan-Alpha v1.0: the Wan2.1-14B-T2V-adapted weights and inference code are now open-sourced.

@@ -45,8 +52,88 @@ pipeline_tag: text-to-video

 ## 🚀 Quick Start

- Please see [Github](https://github.com/WeChatCV/Wan-Alpha) for code running details
+ ### 1. Environment Setup
+ ```bash
+ # Clone the project repository
+ git clone https://github.com/WeChatCV/Wan-Alpha.git
+ cd Wan-Alpha
+
+ # Create and activate the Conda environment
+ conda create -n Wan-Alpha python=3.11 -y
+ conda activate Wan-Alpha
+
+ # Install dependencies
+ pip install -r requirements.txt
+ ```
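+
+ As a quick sanity check before moving on, you can confirm that PyTorch was installed with CUDA support (an optional sketch; it assumes PyTorch is pulled in by `requirements.txt`):
+
+ ```bash
+ # Optional check: print the PyTorch version and the number of visible GPUs.
+ python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"
+ ```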
+
+ ### 2. Model Download
+ Download [Wan2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B)
+
+ Download [Lightx2v-T2V-14B](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Lightx2v/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors)
+
+ Download [Wan-Alpha VAE](https://huggingface.co/htdong/Wan-Alpha)
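+
+ If you prefer the command line, the same checkpoints can be fetched with `huggingface-cli` from `huggingface_hub` (a sketch; the `--local-dir` targets are placeholders, so point them wherever you keep weights):
+
+ ```bash
+ # Fetch the three checkpoints listed above; local directories are placeholders.
+ huggingface-cli download Wan-AI/Wan2.1-T2V-14B --local-dir ./Wan2.1-T2V-14B
+ huggingface-cli download Kijai/WanVideo_comfy \
+   Lightx2v/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors \
+   --local-dir ./lightx2v
+ huggingface-cli download htdong/Wan-Alpha --local-dir ./Wan-Alpha
+ ```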
+
+ ### 🧪 Usage
+ You can test our model with:
+ ```bash
+ torchrun --nproc_per_node=8 --master_port=29501 generate_dora_lightx2v.py --size 832*480 \
+ --ckpt_dir "path/to/your/Wan-2.1/Wan2.1-T2V-14B" \
+ --dit_fsdp --t5_fsdp --ulysses_size 8 \
+ --vae_lora_checkpoint "path/to/your/decoder.bin" \
+ --lora_path "path/to/your/epoch-13-1500.safetensors" \
+ --lightx2v_path "path/to/your/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors" \
+ --sample_guide_scale 1.0 \
+ --frame_num 81 \
+ --sample_steps 4 \
+ --lora_ratio 1.0 \
+ --lora_prefix "" \
+ --prompt_file ./data/prompt.txt \
+ --output_dir ./output
+ ```
+ You can specify the weights of `Wan2.1-T2V-14B` with `--ckpt_dir`, `LightX2V-T2V-14B` with `--lightx2v_path`, `Wan-Alpha-VAE` with `--vae_lora_checkpoint`, and `Wan-Alpha-T2V` with `--lora_path`. The rendered RGBA videos (composited over a checkerboard background) and the PNG frames are written to `--output_dir`.
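+
+ For a machine with a single GPU, a run without the multi-GPU flags should look roughly like the sketch below. This is an untested assumption that `generate_dora_lightx2v.py` also works in a single process without `--dit_fsdp`, `--t5_fsdp`, and Ulysses parallelism; all paths are placeholders:
+
+ ```bash
+ # Sketch: single-GPU variant of the command above (assumes the script
+ # supports running without FSDP/Ulysses parallelism; paths are placeholders).
+ torchrun --nproc_per_node=1 --master_port=29501 generate_dora_lightx2v.py --size 832*480 \
+ --ckpt_dir "path/to/your/Wan-2.1/Wan2.1-T2V-14B" \
+ --ulysses_size 1 \
+ --vae_lora_checkpoint "path/to/your/decoder.bin" \
+ --lora_path "path/to/your/epoch-13-1500.safetensors" \
+ --lightx2v_path "path/to/your/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors" \
+ --sample_guide_scale 1.0 --frame_num 81 --sample_steps 4 \
+ --lora_ratio 1.0 --lora_prefix "" \
+ --prompt_file ./data/prompt.txt --output_dir ./output
+ ```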
+
+ **Prompt Writing Tip:** State that the background of the video is transparent, then give the visual style, the shot type (such as close-up, medium shot, wide shot, or extreme close-up), and a description of the main subject. Prompts support both Chinese and English input.
+
+ ```bash
+ # An example prompt.
+ This video has a transparent background. Close-up shot. A colorful parrot flying. Realistic style.
+ ```
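+
+ Following the same template, here is one more illustrative prompt (made up for this card) that targets the glowing, semi-transparent effects the abstract highlights:
+
+ ```bash
+ # Another illustrative prompt, following the same template.
+ This video has a transparent background. Medium shot. A glowing jellyfish drifting upward, its translucent body pulsing with soft blue light. Realistic style.
+ ```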
+
+ ### 🔨 Official ComfyUI Version
+
+ Note: we have reorganized our models so that they load easily into ComfyUI. These files differ from the ones listed above.
+
+ 1. Download the models:
+ - The Wan DiT base model: [wan2.1_t2v_14B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/diffusion_models/wan2.1_t2v_14B_fp16.safetensors)
+ - The Wan text encoder: [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors)
+ - The LightX2V model: [lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Lightx2v/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors)
+ - Our RGBA DoRA: [epoch-13-1500_changed.safetensors](https://huggingface.co/htdong/Wan-Alpha_ComfyUI/blob/main/epoch-13-1500_changed.safetensors)
+ - Our RGB VAE Decoder: [wan_alpha_2.1_vae_rgb_channel.safetensors.safetensors](https://huggingface.co/htdong/Wan-Alpha_ComfyUI/blob/main/wan_alpha_2.1_vae_rgb_channel.safetensors.safetensors)
+ - Our Alpha VAE Decoder: [wan_alpha_2.1_vae_alpha_channel.safetensors.safetensors](https://huggingface.co/htdong/Wan-Alpha_ComfyUI/blob/main/wan_alpha_2.1_vae_alpha_channel.safetensors.safetensors)
+
+ 2. Copy the files into the `ComfyUI/models` folder and organize them as follows (a copy-command sketch follows the listing):
+
+ ```
+ ComfyUI/models
+ ├── diffusion_models
+ │   └── wan2.1_t2v_14B_fp16.safetensors
+ ├── loras
+ │   ├── epoch-13-1500_changed.safetensors
+ │   └── lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors
+ ├── text_encoders
+ │   └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
+ ├── vae
+ │   ├── wan_alpha_2.1_vae_alpha_channel.safetensors.safetensors
+ │   └── wan_alpha_2.1_vae_rgb_channel.safetensors.safetensors
+ ```
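+
+ For example, assuming the six files were downloaded into the current directory and ComfyUI lives at `./ComfyUI`, the layout above can be produced with:
+
+ ```bash
+ # Sketch: copy the downloaded files into a ComfyUI install at ./ComfyUI.
+ mkdir -p ComfyUI/models/{diffusion_models,loras,text_encoders,vae}
+ cp wan2.1_t2v_14B_fp16.safetensors ComfyUI/models/diffusion_models/
+ cp epoch-13-1500_changed.safetensors \
+    lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors ComfyUI/models/loras/
+ cp umt5_xxl_fp8_e4m3fn_scaled.safetensors ComfyUI/models/text_encoders/
+ cp wan_alpha_2.1_vae_rgb_channel.safetensors.safetensors \
+    wan_alpha_2.1_vae_alpha_channel.safetensors.safetensors ComfyUI/models/vae/
+ ```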
+
+ 3. Install our custom RGBA video previewer and PNG frames zip packer. Copy the file [RGBA_save_tools.py](comfyui/RGBA_save_tools.py) into the `ComfyUI/custom_nodes` folder.
+
+ - Thanks to @mr-lab for an improved WebP version! You can find it in this [issue](https://github.com/WeChatCV/Wan-Alpha/issues/4).
+
+ 4. Example workflow: [wan_alpha_t2v_14B.json](comfyui/wan_alpha_t2v_14B.json)

+ <img src="comfyui/comfyui.jpg" style="margin:auto;"/>


 ## 🤝 Acknowledgements
@@ -81,4 +168,4 @@ If you find our work helpful for your research, please consider citing our paper

 ## 📬 Contact Us

- If you have any questions or suggestions, feel free to reach out via [GitHub Issues](https://github.com/WeChatCV/Wan-Alpha/issues) . We look forward to your feedback!
+ If you have any questions or suggestions, feel free to reach out via [GitHub Issues](https://github.com/WeChatCV/Wan-Alpha/issues). We look forward to your feedback!