TianxiangMa committed on
Commit 6b16d17 · verified · 1 Parent(s): e1c62e3
Files changed (1): README.md (+8 −8)
README.md CHANGED
@@ -18,7 +18,7 @@ license: apache-2.0
 
 
 <p align="center">
- <img src="https://github.com/Phantom-video/Phantom/blob/main/assets/teaser.png" width=95%>
+ <img src="./assets/teaser.png" width=95%>
 <p>
 
 ## 🔥 Latest News!
@@ -82,22 +82,22 @@ For inferencing examples, please refer to "infer.sh". You will get the following
 
 <table>
 <tr>
- <td><img src="https://github.com/Phantom-video/Phantom/blob/main/examples/ref_results/result1.gif" alt="GIF 1" width="200"></td>
- <td><img src="https://github.com/Phantom-video/Phantom/blob/main/examples/ref_results/result2.gif" alt="GIF 2" width="200"></td>
+ <td><img src="./assets/result1.gif" alt="GIF 1" width="400"></td>
+ <td><img src="./assets/result2.gif" alt="GIF 2" width="400"></td>
 </tr>
 <tr>
- <td><img src="https://github.com/Phantom-video/Phantom/blob/main/examples/ref_results/result3.gif" alt="GIF 3" width="200"></td>
- <td><img src="https://github.com/Phantom-video/Phantom/blob/main/examples/ref_results/result4.gif" alt="GIF 4" width="200"></td>
+ <td><img src="./assets/result3.gif" alt="GIF 3" width="400"></td>
+ <td><img src="./assets/result4.gif" alt="GIF 4" width="400"></td>
 </tr>
 </table>
 
 ## 🆚 Comparative Results
 - **Identity Preserving Video Generation**.
- ![image](https://github.com/Phantom-video/Phantom/blob/main/assets/images/id_eval.png)
+ ![image](./assets/id_eval.png)
 - **Single Reference Subject-to-Video Generation**.
- ![image](https://github.com/Phantom-video/Phantom/blob/main/assets/images/ip_eval_s.png)
+ ![image](./assets/ip_eval_s.png)
 - **Multi-Reference Subject-to-Video Generation**.
- ![image](https://github.com/Phantom-video/Phantom/blob/main/assets/images/ip_eval_m_00.png)
+ ![image](./assets/ip_eval_m_00.png)
 
 ## Acknowledgements
 We would like to express our gratitude to the SEED team for their support. Special thanks to Lu Jiang, Haoyuan Guo, Zhibei Ma, and Sen Wang for their assistance with the model and data. In addition, we are also very grateful to Siying Chen, Qingyang Li, and Wei Han for their help with the evaluation.