---
license: apache-2.0
base_model:
- Wan-AI/Wan2.1-T2V-14B
pipeline_tag: text-to-video
---
<div align="center">

<h1>Wan-Alpha</h1>

<h3>Wan-Alpha: High-Quality Text-to-Video Generation with Alpha Channel</h3>

[![arXiv](https://img.shields.io/badge/arXiv-xxxx-b31b1b)](https://arxiv.org/abs/)
[![Project Page](https://img.shields.io/badge/Project_Page-Link-green)](https://donghaotian123.github.io/Wan-Alpha/)
[![🤗 HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Model-orange)](https://huggingface.co/htdong/Wan-Alpha)

</div>

<img src="assets/teaser.png" alt="Wan-Alpha Qualitative Results" style="max-width: 100%; height: auto;">

> Qualitative results of video generation with **Wan-Alpha**. The model generates a wide range of scenes with accurate, cleanly rendered transparency. Notably, it can synthesize diverse semi-transparent objects, glowing effects, and fine-grained details such as hair.

---

## 🔥 News

* **[2025.09.30]** Released Wan-Alpha v1.0: the weights adapted from Wan2.1-T2V-14B and the inference code are now open-source.

---

## 🌟 Showcase

### Text-to-Video Generation with Alpha Channel

| Prompt | Preview Video | Alpha Video |
| :---: | :---: | :---: |
| "Medium shot. A little girl holds a bubble wand and blows out colorful bubbles that float and pop in the air. The background of this video is transparent. Realistic style." | <img src="assets/girl.gif" width="320" height="180" style="object-fit:contain; display:block; margin:auto;"/> | <img src="assets/girl_pha.gif" width="335" height="180" style="object-fit:contain; display:block; margin:auto;"/> |

For more results, please visit the [project page](https://donghaotian123.github.io/Wan-Alpha/).
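
The model produces a matched pair of outputs: an RGB preview video and a single-channel alpha (matte) video, as shown in the table above. Placing a frame onto a new background then follows the standard alpha-blend formula `out = fg * a + bg * (1 - a)`. A minimal per-frame sketch with NumPy, assuming frames are already decoded to arrays (the function and variable names here are illustrative, not part of the released code):

```python
import numpy as np

def composite_over(rgb: np.ndarray, alpha: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Alpha-blend a foreground frame onto a background using a matte.

    rgb, background: uint8 arrays of shape (H, W, 3)
    alpha: uint8 array of shape (H, W); 255 = fully opaque foreground
    """
    a = alpha.astype(np.float32)[..., None] / 255.0          # (H, W, 1), in [0, 1]
    out = rgb.astype(np.float32) * a + background.astype(np.float32) * (1.0 - a)
    return out.astype(np.uint8)
```

Applied frame by frame, this turns the preview/alpha video pair into a composite over any backdrop.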

## 🚀 Quick Start

Please see the [GitHub repository](https://github.com/WeChatCV/Wan-Alpha) for installation and inference instructions.

## 🤝 Acknowledgements

This project is built upon the following excellent open-source projects:

* [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) (training/inference framework)
* [Wan2.1](https://github.com/Wan-Video/Wan2.1) (base video generation model)
* [LightX2V](https://github.com/ModelTC/LightX2V) (inference acceleration)
* [WanVideo_comfy](https://huggingface.co/Kijai/WanVideo_comfy) (inference acceleration)

We sincerely thank the authors and contributors of these projects.

---

## ✏ Citation

If you find our work helpful for your research, please consider citing our paper:

```bibtex
@article{
}
```

---

## 📬 Contact Us

If you have any questions or suggestions, feel free to reach out via [GitHub Issues](https://github.com/WeChatCV/Wan-Alpha/issues). We look forward to your feedback!