---
license: apache-2.0
---

<h3 align="center">
    Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment
</h3>

<div style="display:flex;justify-content: center">
<a href="https://arxiv.org/abs/2502.11079"><img alt="Build" src="https://img.shields.io/badge/arXiv%20paper-2502.11079-b31b1b.svg"></a> 
<a href="https://phantom-video.github.io/Phantom/"><img alt="Build" src="https://img.shields.io/badge/Project_page-More_visualizations-green"></a>
<a href="https://github.com/Phantom-video/Phantom"><img src="https://img.shields.io/static/v1?label=GitHub&message=Code&color=green&logo=github"></a>
</div>

><p align="center"> <span style="color:#137cf3; font-family: Gill Sans">Lijie Liu</span><sup>*</sup></a>,  <span style="color:#137cf3; font-family: Gill Sans">Tianxiang Ma</span><sup>*</sup></a>, <span style="color:#137cf3; font-family: Gill Sans">Bingchuan Li</span><sup>*</sup></a>,  <span style="color:#137cf3; font-family: Gill Sans">Zhuowei Chen</span><sup>*</sup></a>, <span style="color:#137cf3; font-family: Gill Sans">Jiawei Liu</span><sup></sup></a>, <span style="color:#137cf3; font-family: Gill Sans">Gen Li</span>, <span style="color:#137cf3; font-family: Gill Sans">Siyu Zhou</span>, <span style="color:#137cf3; font-family: Gill Sans">Qian He</span></a>, <span style="color:#137cf3; font-family: Gill Sans">Xinglong Wu</span></a> <br> 
><span style="font-size: 16px"><sup> * </sup>Equal contribution,<sup> &dagger; </sup>Project lead</span> <br>
><span style="font-size: 16px">Intelligent Creation Team, ByteDance</span>


<p align="center">
<img src="./assets/teaser.png" width=95%>
</p>

## 🔥 Latest News!
* Apr 10, 2025: We updated the full version of the Phantom paper, which now includes more detailed descriptions of the model architecture and the dataset pipeline.
* Apr 20, 2025: 👋 Phantom-Wan is here! We adapted the Phantom framework to the [Wan2.1](https://github.com/Wan-Video/Wan2.1) video generation model; the inference code and checkpoint have been released.

## 📑 Todo List
- [x] Inference code and checkpoint of Phantom-Wan-1.3B
- [ ] Checkpoint of Phantom-Wan-14B
- [ ] Training code of Phantom-Wan

## 📖 Overview
Phantom is a unified video generation framework for single- and multi-subject references, built on existing text-to-video and image-to-video architectures. It redesigns the joint text-image injection model and trains on text-image-video triplet data to achieve cross-modal alignment. It also emphasizes subject consistency in human generation, strengthening ID-preserving video generation.
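
To make the injection idea concrete, here is a minimal, illustrative sketch (not Phantom's actual code; the module name, encoders, and dimensions are assumptions chosen for exposition): text tokens and reference-image tokens are projected to a shared width and concatenated into one conditioning sequence that the video DiT can cross-attend to.

```python
import torch
import torch.nn as nn

class JointTextImageInjection(nn.Module):
    """Illustrative sketch: fuse text tokens and reference-image tokens into a
    single conditioning sequence for a video DiT's cross-attention."""

    def __init__(self, text_dim=4096, image_dim=1280, hidden_dim=1536):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)

    def forward(self, text_tokens, ref_image_tokens):
        # text_tokens:      (B, N_text, text_dim),  e.g. T5 text embeddings
        # ref_image_tokens: (B, N_ref,  image_dim), e.g. vision-encoder tokens of the references
        cond = torch.cat(
            [self.text_proj(text_tokens), self.image_proj(ref_image_tokens)], dim=1
        )
        return cond  # (B, N_text + N_ref, hidden_dim)

# Smoke test with random tensors: 77 text tokens plus two 256-token references.
fuse = JointTextImageInjection()
cond = fuse(torch.randn(1, 77, 4096), torch.randn(1, 2 * 256, 1280))
print(cond.shape)  # torch.Size([1, 589, 1536])
```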

## ⚡️ Quickstart

### Installation
Clone the repo:
```sh
git clone https://github.com/Phantom-video/Phantom.git
cd Phantom
```

Install dependencies:
```sh
# Ensure torch >= 2.4.0
pip install -r requirements.txt
```
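
To confirm the environment meets the requirement noted in the comment above, a quick check (plain PyTorch, nothing Phantom-specific):

```python
import torch

# The requirements assume torch >= 2.4.0; fail early if the environment is older.
major, minor = (int(x) for x in torch.__version__.split(".")[:2])
assert (major, minor) >= (2, 4), f"torch {torch.__version__} is too old; need >= 2.4.0"
print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```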

### Model Download
First, download the original Wan2.1-T2V-1.3B model using huggingface-cli:
``` sh
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir ./Wan2.1-T2V-1.3B
```
Then download the Phantom-Wan-1.3B model:
``` sh
huggingface-cli download xxx --local-dir ./Phantom-Wan-1.3B
```
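
If you prefer to script the downloads instead of using the CLI, `snapshot_download` from `huggingface_hub` does the same thing; note that the second repo id is a placeholder mirroring the `xxx` above.

```python
from huggingface_hub import snapshot_download

# Same as the CLI commands above; local_dir paths match what generate.py expects.
snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-1.3B", local_dir="./Wan2.1-T2V-1.3B")

# Placeholder repo id, mirroring the "xxx" in the CLI example above.
snapshot_download(repo_id="xxx", local_dir="./Phantom-Wan-1.3B")
```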

### Run Subject-to-Video Generation

- Single-GPU inference

``` sh
python generate.py --task s2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --phantom_ckpt ./Phantom-Wan-1.3B/Phantom-Wan-1.3B.pth --ref_image "examples/ref1.png,examples/ref2.png" --prompt "Warm sunlight spills over the grass. A little girl with twin ponytails, a green bow in her hair, and a light-green dress crouches beside blooming daisies. Next to her, a brown-and-white dog sticks out its tongue, its fluffy tail wagging happily. Smiling, the girl raises a yellow-and-red toy camera with blue buttons and captures this joyful moment with the dog." --base_seed 42
```

- Multi-GPU inference using FSDP + xDiT USP

``` sh
pip install "xfuser>=0.4.1"
torchrun --nproc_per_node=8 generate.py --task s2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --phantom_ckpt ./Phantom-Wan-1.3B/Phantom-Wan-1.3B.pth --ref_image "examples/ref3.png,examples/ref4.png" --dit_fsdp --t5_fsdp --ulysses_size 4 --ring_size 2 --prompt "At sunset, a woman with wheat-colored skin and long jet-black hair puts on a red gauze dress decorated with large, sculpted flowers and flowing ribbons at the shoulders, and strolls along a golden beach while the sea breeze gently lifts her hair; the scene is beautiful and moving." --base_seed 42
```

> 💡Note: 
> * Passing one or more comma-separated image paths to `--ref_image` switches between single-reference and multi-reference Subject-to-Video generation; use at most 4 reference images.
> * For the best results, describe the visual content of each reference image as accurately as possible in `--prompt`. For example, "examples/ref1.png" can be described as "a toy camera in yellow and red with blue buttons".
> * If a generated video is unsatisfactory, the simplest fixes are to change `--base_seed` and to adjust the wording of `--prompt`; a small seed-sweep sketch follows below.
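
To automate the seed exploration suggested above, here is a minimal, hypothetical sweep script. It assumes you run it from the repo root and simply re-invokes `generate.py` with the flags from the single-GPU command; the prompt constant is a placeholder for the full prompt text.

```python
import subprocess

# Placeholder: paste the full prompt text from the single-GPU example above.
PROMPT = "Warm sunlight spills over the grass ..."

# Re-run the same command with a few seeds and keep the result you like best.
for seed in (42, 123, 2025):
    subprocess.run(
        [
            "python", "generate.py",
            "--task", "s2v-1.3B",
            "--size", "832*480",
            "--ckpt_dir", "./Wan2.1-T2V-1.3B",
            "--phantom_ckpt", "./Phantom-Wan-1.3B/Phantom-Wan-1.3B.pth",
            "--ref_image", "examples/ref1.png,examples/ref2.png",
            "--prompt", PROMPT,
            "--base_seed", str(seed),
        ],
        check=True,
    )
```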

For more inference examples, see `infer.sh`. Running them produces results like the following:

<table>
  <tr>
    <td><img src="./assets/result1.gif" alt="GIF 1" width="400"></td>
    <td><img src="./assets/result2.gif" alt="GIF 2" width="400"></td>
  </tr>
  <tr>
    <td><img src="./assets/result3.gif" alt="GIF 3" width="400"></td>
    <td><img src="./assets/result4.gif" alt="GIF 4" width="400"></td>
  </tr>
</table>

## 🆚 Comparative Results
- **Identity Preserving Video Generation**.
![image](./assets/id_eval.png)
- **Single Reference Subject-to-Video Generation**.
![image](./assets/ip_eval_s.png)
- **Multi-Reference Subject-to-Video Generation**.
![image](./assets/ip_eval_m_00.png)

## Acknowledgements
We would like to express our gratitude to the SEED team for their support. Special thanks to Lu Jiang, Haoyuan Guo, Zhibei Ma, and Sen Wang for their assistance with the model and data. In addition, we are also very grateful to Siying Chen, Qingyang Li, and Wei Han for their help with the evaluation.

## BibTeX
```bibtex
@article{liu2025phantom,
  title={Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment},
  author={Liu, Lijie and Ma, Tianxiang and Li, Bingchuan and Chen, Zhuowei and Liu, Jiawei and He, Qian and Wu, Xinglong},
  journal={arXiv preprint arXiv:2502.11079},
  year={2025}
}
```