Upload IMG_0711.jpeg

#11
by Gida3300 - opened
.gitattributes CHANGED
@@ -36,3 +36,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 figures/algorithm.png filter=lfs diff=lfs merge=lfs -text
 figures/dit_architecture.png filter=lfs diff=lfs merge=lfs -text
 figures/inhouse_human_evaluation.png filter=lfs diff=lfs merge=lfs -text
+IMG_0711.jpeg filter=lfs diff=lfs merge=lfs -text
ckpt/magi/4.5B_base/inference_weight/model-00001-of-00002.safetensors → IMG_0711.jpeg RENAMED
File without changes
README.md CHANGED
@@ -3,7 +3,6 @@ license: apache-2.0
 language:
 - en
 pipeline_tag: image-to-video
-library_name: magi-1
 ---
 
 ![magi-logo](figures/logo_black.png)
@@ -12,7 +11,7 @@ library_name: magi-1
 -----
 
 <p align="center" style="line-height: 1;">
-<a href="https://arxiv.org/abs/2505.13211" target="_blank" style="margin: 2px;">
+<a href="https://static.magi.world/static/files/MAGI_1.pdf" target="_blank" style="margin: 2px;">
 <img alt="paper" src="https://img.shields.io/badge/Paper-arXiv-B31B1B?logo=arxiv" style="display: inline-block; vertical-align: middle;">
 </a>
 <a href="https://sand.ai" target="_blank" style="margin: 2px;">
@@ -37,13 +36,11 @@ library_name: magi-1
 
 # MAGI-1: Autoregressive Video Generation at Scale
 
-This repository contains the [code](https://github.com/SandAI-org/MAGI-1) for the MAGI-1 model, pre-trained weights and inference code. You can find more information on our [technical report](https://static.magi.world/static/files/MAGI_1.pdf) or directly create magic with MAGI-1 [here](http://sand.ai) . 🚀✨
+This repository contains the code for the MAGI-1 model, pre-trained weights and inference code. You can find more information on our [technical report](https://static.magi.world/static/files/MAGI_1.pdf) or directly create magic with MAGI-1 [here](http://sand.ai) . 🚀✨
 
 
 ## 🔥🔥🔥 Latest News
 
-- Apr 30, 2025: MAGI-1 4.5B distill and distill+quant models are coming soon 🎉 — we’re putting on the final touches, stay tuned!
-- Apr 30, 2025: MAGI-1 4.5B model has been released 🎉. We've updated the model weights — check it out!
 - Apr 21, 2025: MAGI-1 is here 🎉. We've released the model weights and inference code — check it out!
 
 
@@ -81,41 +78,34 @@ We adopt a shortcut distillation approach that trains a single velocity-based mo
 
 We provide the pre-trained weights for MAGI-1, including the 24B and 4.5B models, as well as the corresponding distill and distill+quant models. The model weight links are shown in the table.
 
-| Model | Link | Recommend Machine |
-| ------------------------------ | -------------------------------------------------------------------- | ------------------------------- |
-| T5 | [T5](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/t5) | - |
-| MAGI-1-VAE | [MAGI-1-VAE](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/vae) | - |
-| MAGI-1-24B | [MAGI-1-24B](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/magi/24B_base) | H100/H800 × 8 |
-| MAGI-1-24B-distill | [MAGI-1-24B-distill](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/magi/24B_distill) | H100/H800 × 8 |
-| MAGI-1-24B-distill+fp8_quant | [MAGI-1-24B-distill+quant](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/magi/24B_distill_quant) | H100/H800 × 4 or RTX 4090 × 8 |
-| MAGI-1-4.5B | [MAGI-1-4.5B](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/magi/4.5B_base) | RTX 4090 × 1 |
-| MAGI-1-4.5B-distill | Coming soon | RTX 4090 × 1 |
-| MAGI-1-4.5B-distill+fp8_quant | Coming soon | RTX 4090 × 1 |
-
-> [!NOTE]
->
-> For 4.5B models, any machine with at least 24GB of GPU memory is sufficient.
+| Model | Link | Recommend Machine |
+| ----------------------------- | ------------------------------------------------------------ | ------------------------------- |
+| T5 | [T5](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/t5) | - |
+| MAGI-1-VAE | [MAGI-1-VAE](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/vae) | - |
+| MAGI-1-24B | [MAGI-1-24B](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/magi/24B_base) | H100/H800 \* 8 |
+| MAGI-1-24B-distill | [MAGI-1-24B-distill](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/magi/24B_distill) | H100/H800 \* 8 |
+| MAGI-1-24B-distill+fp8_quant | [MAGI-1-24B-distill+quant](https://huggingface.co/sand-ai/MAGI-1/tree/main/ckpt/magi/24B_distill_quant) | H100/H800 \* 4 or RTX 4090 \* 8 |
+| MAGI-1-4.5B | MAGI-1-4.5B | RTX 4090 \* 1 |
 
 ## 4. Evaluation
 
 ### In-house Human Evaluation
 
-MAGI-1 achieves state-of-the-art performance among open-source models like Wan-2.1 and HunyuanVideo and closed-source model like Hailuo (i2v-01), particularly excelling in instruction following and motion quality, positioning it as a strong potential competitor to closed-source commercial models such as Kling.
+MAGI-1 achieves state-of-the-art performance among open-source models (surpassing Wan-2.1 and significantly outperforming Hailuo and HunyuanVideo), particularly excelling in instruction following and motion quality, positioning it as a strong potential competitor to closed-source commercial models such as Kling.
 
 ![inhouse human evaluation](figures/inhouse_human_evaluation.png)
 
 ### Physical Evaluation
 
-Thanks to the natural advantages of autoregressive architecture, Magi achieves far superior precision in predicting physical behavior on the [Physics-IQ benchmark](https://github.com/google-deepmind/physics-IQ-benchmark) through video continuation—significantly outperforming all existing models.
+Thanks to the natural advantages of autoregressive architecture, Magi achieves far superior precision in predicting physical behavior through video continuation—significantly outperforming all existing models.
 
 | Model | Phys. IQ Score ↑ | Spatial IoU ↑ | Spatio Temporal ↑ | Weighted Spatial IoU ↑ | MSE ↓ |
 |----------------|------------------|---------------|-------------------|-------------------------|--------|
 | **V2V Models** | | | | | |
-| **Magi-24B (V2V)** | **56.02** | **0.367** | **0.270** | **0.304** | **0.005** |
-| **Magi-4.5B (V2V)** | **42.44** | **0.234** | **0.285** | **0.188** | **0.007** |
+| **Magi (V2V)** | **56.02** | **0.367** | **0.270** | **0.304** | **0.005** |
 | VideoPoet (V2V)| 29.50 | 0.204 | 0.164 | 0.137 | 0.010 |
 | **I2V Models** | | | | | |
-| **Magi-24B (I2V)** | **30.23** | **0.203** | **0.151** | **0.154** | **0.012** |
+| **Magi (I2V)** | **30.23** | **0.203** | **0.151** | **0.154** | **0.012** |
 | Kling1.6 (I2V) | 23.64 | 0.197 | 0.086 | 0.144 | 0.025 |
 | VideoPoet (I2V)| 20.30 | 0.141 | 0.126 | 0.087 | 0.012 |
 | Gen 3 (I2V) | 22.80 | 0.201 | 0.115 | 0.116 | 0.015 |
@@ -153,7 +143,7 @@ pip install -r requirements.txt
 # Install ffmpeg
 conda install -c conda-forge ffmpeg=4.4
 
-# For GPUs based on the Hopper architecture (e.g., H100/H800), it is recommended to install MagiAttention(https://github.com/SandAI-org/MagiAttention) for acceleration. For non-Hopper GPUs, installing MagiAttention is not necessary.
+# Install MagiAttention, for more information, please refer to https://github.com/SandAI-org/MagiAttention#
 git clone git@github.com:SandAI-org/MagiAttention.git
 cd MagiAttention
 git submodule update --init --recursive
@@ -207,12 +197,6 @@ By adjusting these parameters, you can flexibly control the input and output to
 
 ### Some Useful Configs (for config.json)
 
-> [!NOTE]
->
-> - If you are running 24B model with RTX 4090 \* 8, please set `pp_size:2 cp_size: 4`.
->
-> - Our model supports arbitrary resolutions. To accelerate inference process, the default resolution for the 4.5B model is set to 720×720 in the `4.5B_config.json`.
-
 | Config | Help |
 | -------------- | ------------------------------------------------------------ |
 | seed | Random seed used for video generation |
@@ -220,7 +204,7 @@ By adjusting these parameters, you can flexibly control the input and output to
 | video_size_w | Width of the video |
 | num_frames | Controls the duration of generated video |
 | fps | Frames per second, 4 video frames correspond to 1 latent_frame |
-| cfg_number | Base model uses cfg_number==3, distill and quant model uses cfg_number=1 |
+| cfg_number | Base model uses cfg_number==2, distill and quant model uses cfg_number=1 |
 | load | Directory containing a model checkpoint. |
 | t5_pretrained | Path to load pretrained T5 model |
 | vae_pretrained | Path to load pretrained VAE model |
@@ -235,17 +219,14 @@ This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENS
 If you find our code or model useful in your research, please cite:
 
 ```bibtex
-@misc{ai2025magi1autoregressivevideogeneration,
+@misc{magi1,
       title={MAGI-1: Autoregressive Video Generation at Scale},
-      author={Sand. ai and Hansi Teng and Hongyu Jia and Lei Sun and Lingzhi Li and Maolin Li and Mingqiu Tang and Shuai Han and Tianning Zhang and W. Q. Zhang and Weifeng Luo and Xiaoyang Kang and Yuchen Sun and Yue Cao and Yunpeng Huang and Yutong Lin and Yuxin Fang and Zewei Tao and Zheng Zhang and Zhongshu Wang and Zixun Liu and Dai Shi and Guoli Su and Hanwen Sun and Hong Pan and Jie Wang and Jiexin Sheng and Min Cui and Min Hu and Ming Yan and Shucheng Yin and Siran Zhang and Tingting Liu and Xianping Yin and Xiaoyu Yang and Xin Song and Xuan Hu and Yankai Zhang and Yuqiao Li},
+      author={Sand-AI},
       year={2025},
-      eprint={2505.13211},
-      archivePrefix={arXiv},
-      primaryClass={cs.CV},
-      url={https://arxiv.org/abs/2505.13211},
+      url={https://static.magi.world/static/files/MAGI_1.pdf},
 }
 ```
 
 ## 8. Contact
 
-If you have any questions, please feel free to raise an issue or contact us at [research@sand.ai](mailto:research@sand.ai) .
+If you have any questions, please feel free to raise an issue or contact us at [support@sand.ai](support@sand.ai) .
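For context on two of the config fields in the README's config table above: the `fps` row states that 4 video frames correspond to 1 latent_frame, and the `num_frames` row states that it controls the clip duration. The sketch below illustrates that relationship only; `describe_generation` is a hypothetical helper for illustration, not part of the MAGI-1 codebase.

```python
# Illustrative only: derives clip duration and latent-frame count from the
# `num_frames` and `fps` values described in the README's config table.
# `describe_generation` is a hypothetical name, not a MAGI-1 API.

def describe_generation(num_frames: int, fps: int) -> dict:
    latent_frames = num_frames // 4  # 4 video frames correspond to 1 latent_frame
    duration_s = num_frames / fps    # num_frames controls the video's duration
    return {"latent_frames": latent_frames, "duration_s": duration_s}

# e.g. a 96-frame clip at 24 fps -> 24 latent frames, 4.0 seconds
print(describe_generation(num_frames=96, fps=24))
```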
ckpt/magi/4.5B_base/inference_weight/model-00002-of-00002.safetensors DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ea357ebcf099cd0bf40ed68ca582cd8c573b309a44ebe3a25fbec47aa36bc1da
-size 4281314928
ckpt/magi/4.5B_base/inference_weight/model.safetensors.index.json DELETED
@@ -1,905 +0,0 @@
-{
-  "metadata": {
-    "total_size": 8961059904
-  },
-  "weight_map": {
-    "final_linear.linear.weight": "model-00001-of-00002.safetensors",
-    "rope.bands": "model-00001-of-00002.safetensors",
-    "t_embedder.mlp.0.bias": "model-00001-of-00002.safetensors",
-    "t_embedder.mlp.0.weight": "model-00001-of-00002.safetensors",
-    "t_embedder.mlp.2.bias": "model-00001-of-00002.safetensors",
-    "t_embedder.mlp.2.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.final_layernorm.bias": "model-00001-of-00002.safetensors",
-    "videodit_blocks.final_layernorm.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.0.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.0.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.0.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.0.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.1.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.1.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.1.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.10.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.10.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.10.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.11.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.11.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.11.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.12.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.12.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.12.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.13.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.13.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.13.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.14.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.14.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.14.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.15.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
-    "videodit_blocks.layers.15.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
-    "videodit_blocks.layers.15.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
209
- "videodit_blocks.layers.15.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
210
- "videodit_blocks.layers.15.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
211
- "videodit_blocks.layers.15.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
212
- "videodit_blocks.layers.15.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
213
- "videodit_blocks.layers.15.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
214
- "videodit_blocks.layers.15.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
215
- "videodit_blocks.layers.15.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
216
- "videodit_blocks.layers.15.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
217
- "videodit_blocks.layers.15.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
218
- "videodit_blocks.layers.15.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
219
- "videodit_blocks.layers.15.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
220
- "videodit_blocks.layers.15.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
221
- "videodit_blocks.layers.15.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
222
- "videodit_blocks.layers.16.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
223
- "videodit_blocks.layers.16.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
224
- "videodit_blocks.layers.16.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
225
- "videodit_blocks.layers.16.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
226
- "videodit_blocks.layers.16.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
227
- "videodit_blocks.layers.16.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
228
- "videodit_blocks.layers.16.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
229
- "videodit_blocks.layers.16.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
230
- "videodit_blocks.layers.16.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
231
- "videodit_blocks.layers.16.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
232
- "videodit_blocks.layers.16.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
233
- "videodit_blocks.layers.16.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
234
- "videodit_blocks.layers.16.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
235
- "videodit_blocks.layers.16.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
236
- "videodit_blocks.layers.16.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
237
- "videodit_blocks.layers.16.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
238
- "videodit_blocks.layers.16.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
239
- "videodit_blocks.layers.16.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
240
- "videodit_blocks.layers.16.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
241
- "videodit_blocks.layers.16.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
242
- "videodit_blocks.layers.16.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
243
- "videodit_blocks.layers.16.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
244
- "videodit_blocks.layers.16.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
245
- "videodit_blocks.layers.16.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
246
- "videodit_blocks.layers.16.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
247
- "videodit_blocks.layers.16.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
248
- "videodit_blocks.layers.17.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
249
- "videodit_blocks.layers.17.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
250
- "videodit_blocks.layers.17.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
251
- "videodit_blocks.layers.17.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
252
- "videodit_blocks.layers.17.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
253
- "videodit_blocks.layers.17.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
254
- "videodit_blocks.layers.17.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
255
- "videodit_blocks.layers.17.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
256
- "videodit_blocks.layers.17.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
257
- "videodit_blocks.layers.17.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
258
- "videodit_blocks.layers.17.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
259
- "videodit_blocks.layers.17.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
260
- "videodit_blocks.layers.17.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
261
- "videodit_blocks.layers.17.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
262
- "videodit_blocks.layers.17.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
263
- "videodit_blocks.layers.17.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
264
- "videodit_blocks.layers.17.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
265
- "videodit_blocks.layers.17.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
266
- "videodit_blocks.layers.17.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
267
- "videodit_blocks.layers.17.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
268
- "videodit_blocks.layers.17.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
269
- "videodit_blocks.layers.17.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
270
- "videodit_blocks.layers.17.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
271
- "videodit_blocks.layers.17.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
272
- "videodit_blocks.layers.17.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
273
- "videodit_blocks.layers.17.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
274
- "videodit_blocks.layers.18.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
275
- "videodit_blocks.layers.18.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
276
- "videodit_blocks.layers.18.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
277
- "videodit_blocks.layers.18.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
278
- "videodit_blocks.layers.18.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
279
- "videodit_blocks.layers.18.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
280
- "videodit_blocks.layers.18.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
281
- "videodit_blocks.layers.18.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
282
- "videodit_blocks.layers.18.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
283
- "videodit_blocks.layers.18.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
284
- "videodit_blocks.layers.18.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
285
- "videodit_blocks.layers.18.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
286
- "videodit_blocks.layers.18.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
287
- "videodit_blocks.layers.18.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
288
- "videodit_blocks.layers.18.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
289
- "videodit_blocks.layers.18.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
290
- "videodit_blocks.layers.18.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
291
- "videodit_blocks.layers.18.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
292
- "videodit_blocks.layers.18.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
293
- "videodit_blocks.layers.18.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
294
- "videodit_blocks.layers.18.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
295
- "videodit_blocks.layers.18.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
296
- "videodit_blocks.layers.18.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
297
- "videodit_blocks.layers.18.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
298
- "videodit_blocks.layers.18.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
299
- "videodit_blocks.layers.18.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
300
- "videodit_blocks.layers.19.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
301
- "videodit_blocks.layers.19.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
302
- "videodit_blocks.layers.19.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
303
- "videodit_blocks.layers.19.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
304
- "videodit_blocks.layers.19.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
305
- "videodit_blocks.layers.19.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
306
- "videodit_blocks.layers.19.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
307
- "videodit_blocks.layers.19.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
308
- "videodit_blocks.layers.19.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
309
- "videodit_blocks.layers.19.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
310
- "videodit_blocks.layers.19.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
311
- "videodit_blocks.layers.19.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
312
- "videodit_blocks.layers.19.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
313
- "videodit_blocks.layers.19.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
314
- "videodit_blocks.layers.19.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
315
- "videodit_blocks.layers.19.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
316
- "videodit_blocks.layers.19.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
317
- "videodit_blocks.layers.19.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
318
- "videodit_blocks.layers.19.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
319
- "videodit_blocks.layers.19.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
320
- "videodit_blocks.layers.19.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
321
- "videodit_blocks.layers.19.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
322
- "videodit_blocks.layers.19.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
323
- "videodit_blocks.layers.19.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
324
- "videodit_blocks.layers.19.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
325
- "videodit_blocks.layers.19.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
326
- "videodit_blocks.layers.2.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
327
- "videodit_blocks.layers.2.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
328
- "videodit_blocks.layers.2.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
329
- "videodit_blocks.layers.2.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
330
- "videodit_blocks.layers.2.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
331
- "videodit_blocks.layers.2.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
332
- "videodit_blocks.layers.2.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
333
- "videodit_blocks.layers.2.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
334
- "videodit_blocks.layers.2.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
335
- "videodit_blocks.layers.2.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
336
- "videodit_blocks.layers.2.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
337
- "videodit_blocks.layers.2.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
338
- "videodit_blocks.layers.2.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
339
- "videodit_blocks.layers.2.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
340
- "videodit_blocks.layers.2.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
341
- "videodit_blocks.layers.2.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
342
- "videodit_blocks.layers.2.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
343
- "videodit_blocks.layers.2.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
344
- "videodit_blocks.layers.2.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
345
- "videodit_blocks.layers.2.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
346
- "videodit_blocks.layers.2.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
347
- "videodit_blocks.layers.2.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
348
- "videodit_blocks.layers.2.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
349
- "videodit_blocks.layers.2.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
350
- "videodit_blocks.layers.2.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
351
- "videodit_blocks.layers.2.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
352
- "videodit_blocks.layers.20.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
353
- "videodit_blocks.layers.20.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
354
- "videodit_blocks.layers.20.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
355
- "videodit_blocks.layers.20.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
356
- "videodit_blocks.layers.20.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
357
- "videodit_blocks.layers.20.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
358
- "videodit_blocks.layers.20.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
359
- "videodit_blocks.layers.20.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
360
- "videodit_blocks.layers.20.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
361
- "videodit_blocks.layers.20.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
362
- "videodit_blocks.layers.20.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
363
- "videodit_blocks.layers.20.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
364
- "videodit_blocks.layers.20.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
365
- "videodit_blocks.layers.20.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
366
- "videodit_blocks.layers.20.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
367
- "videodit_blocks.layers.20.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
368
- "videodit_blocks.layers.20.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
369
- "videodit_blocks.layers.20.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
370
- "videodit_blocks.layers.20.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
371
- "videodit_blocks.layers.20.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
372
- "videodit_blocks.layers.20.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
373
- "videodit_blocks.layers.20.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
374
- "videodit_blocks.layers.20.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
375
- "videodit_blocks.layers.20.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
376
- "videodit_blocks.layers.20.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
377
- "videodit_blocks.layers.20.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
378
- "videodit_blocks.layers.21.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
379
- "videodit_blocks.layers.21.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
380
- "videodit_blocks.layers.21.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
381
- "videodit_blocks.layers.21.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
382
- "videodit_blocks.layers.21.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
383
- "videodit_blocks.layers.21.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
384
- "videodit_blocks.layers.21.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
385
- "videodit_blocks.layers.21.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
386
- "videodit_blocks.layers.21.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
387
- "videodit_blocks.layers.21.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
388
- "videodit_blocks.layers.21.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
389
- "videodit_blocks.layers.21.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
390
- "videodit_blocks.layers.21.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
391
- "videodit_blocks.layers.21.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
392
- "videodit_blocks.layers.21.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
393
- "videodit_blocks.layers.21.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
394
- "videodit_blocks.layers.21.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
395
- "videodit_blocks.layers.21.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
396
- "videodit_blocks.layers.21.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
397
- "videodit_blocks.layers.21.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
398
- "videodit_blocks.layers.21.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
399
- "videodit_blocks.layers.21.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
400
- "videodit_blocks.layers.21.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
401
- "videodit_blocks.layers.21.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
402
- "videodit_blocks.layers.21.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
403
- "videodit_blocks.layers.21.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
404
- "videodit_blocks.layers.22.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
405
- "videodit_blocks.layers.22.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
406
- "videodit_blocks.layers.22.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
407
- "videodit_blocks.layers.22.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
408
- "videodit_blocks.layers.22.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
409
- "videodit_blocks.layers.22.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
410
- "videodit_blocks.layers.22.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
411
- "videodit_blocks.layers.22.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
412
- "videodit_blocks.layers.22.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
413
- "videodit_blocks.layers.22.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
414
- "videodit_blocks.layers.22.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
415
- "videodit_blocks.layers.22.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
416
- "videodit_blocks.layers.22.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
417
- "videodit_blocks.layers.22.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
418
- "videodit_blocks.layers.22.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
419
- "videodit_blocks.layers.22.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
420
- "videodit_blocks.layers.22.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
421
- "videodit_blocks.layers.22.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
422
- "videodit_blocks.layers.22.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
423
- "videodit_blocks.layers.22.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
424
- "videodit_blocks.layers.22.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
425
- "videodit_blocks.layers.22.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
426
- "videodit_blocks.layers.22.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
427
- "videodit_blocks.layers.22.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
428
- "videodit_blocks.layers.22.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
429
- "videodit_blocks.layers.22.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
430
- "videodit_blocks.layers.23.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
431
- "videodit_blocks.layers.23.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
432
- "videodit_blocks.layers.23.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
433
- "videodit_blocks.layers.23.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
434
- "videodit_blocks.layers.23.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
435
- "videodit_blocks.layers.23.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
436
- "videodit_blocks.layers.23.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
437
- "videodit_blocks.layers.23.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
438
- "videodit_blocks.layers.23.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
439
- "videodit_blocks.layers.23.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
440
- "videodit_blocks.layers.23.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
441
- "videodit_blocks.layers.23.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
442
- "videodit_blocks.layers.23.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
443
- "videodit_blocks.layers.23.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
444
- "videodit_blocks.layers.23.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
445
- "videodit_blocks.layers.23.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
446
- "videodit_blocks.layers.23.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
447
- "videodit_blocks.layers.23.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
448
- "videodit_blocks.layers.23.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
449
- "videodit_blocks.layers.23.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
450
- "videodit_blocks.layers.23.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
451
- "videodit_blocks.layers.23.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
452
- "videodit_blocks.layers.23.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
453
- "videodit_blocks.layers.23.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
454
- "videodit_blocks.layers.23.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
455
- "videodit_blocks.layers.23.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
456
- "videodit_blocks.layers.24.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
457
- "videodit_blocks.layers.24.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
458
- "videodit_blocks.layers.24.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
459
- "videodit_blocks.layers.24.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
460
- "videodit_blocks.layers.24.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
461
- "videodit_blocks.layers.24.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
462
- "videodit_blocks.layers.24.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
463
- "videodit_blocks.layers.24.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
464
- "videodit_blocks.layers.24.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
465
- "videodit_blocks.layers.24.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
466
- "videodit_blocks.layers.24.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
467
- "videodit_blocks.layers.24.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
468
- "videodit_blocks.layers.24.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
469
- "videodit_blocks.layers.24.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
470
- "videodit_blocks.layers.24.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
471
- "videodit_blocks.layers.24.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
472
- "videodit_blocks.layers.24.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
473
- "videodit_blocks.layers.24.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
474
- "videodit_blocks.layers.24.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
475
- "videodit_blocks.layers.24.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
476
- "videodit_blocks.layers.24.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
477
- "videodit_blocks.layers.24.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
478
- "videodit_blocks.layers.24.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
479
- "videodit_blocks.layers.24.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
480
- "videodit_blocks.layers.24.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
481
- "videodit_blocks.layers.24.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.27.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.27.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.28.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.28.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.28.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.29.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.29.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.29.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.3.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.3.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.3.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.30.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.30.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.30.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.31.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.31.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.31.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.32.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.32.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.32.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.33.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.33.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.33.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.4.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.4.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.4.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.5.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.5.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.5.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.6.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.6.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
833
- "videodit_blocks.layers.7.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
834
- "videodit_blocks.layers.7.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
835
- "videodit_blocks.layers.7.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
836
- "videodit_blocks.layers.7.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
837
- "videodit_blocks.layers.7.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
838
- "videodit_blocks.layers.7.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
839
- "videodit_blocks.layers.7.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
840
- "videodit_blocks.layers.7.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
841
- "videodit_blocks.layers.7.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
842
- "videodit_blocks.layers.7.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
843
- "videodit_blocks.layers.7.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
844
- "videodit_blocks.layers.7.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
845
- "videodit_blocks.layers.7.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
846
- "videodit_blocks.layers.8.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
847
- "videodit_blocks.layers.8.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
848
- "videodit_blocks.layers.8.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
849
- "videodit_blocks.layers.8.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
850
- "videodit_blocks.layers.8.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
851
- "videodit_blocks.layers.8.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
852
- "videodit_blocks.layers.8.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
853
- "videodit_blocks.layers.8.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
854
- "videodit_blocks.layers.8.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
855
- "videodit_blocks.layers.8.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
856
- "videodit_blocks.layers.8.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
857
- "videodit_blocks.layers.8.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
858
- "videodit_blocks.layers.8.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
859
- "videodit_blocks.layers.8.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
860
- "videodit_blocks.layers.8.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
861
- "videodit_blocks.layers.8.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
862
- "videodit_blocks.layers.8.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
863
- "videodit_blocks.layers.8.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
864
- "videodit_blocks.layers.8.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
865
- "videodit_blocks.layers.8.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
866
- "videodit_blocks.layers.8.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
867
- "videodit_blocks.layers.8.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
868
- "videodit_blocks.layers.8.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
869
- "videodit_blocks.layers.8.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
870
- "videodit_blocks.layers.8.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
871
- "videodit_blocks.layers.8.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
872
- "videodit_blocks.layers.9.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
873
- "videodit_blocks.layers.9.ada_modulate_layer.proj.0.weight": "model-00001-of-00002.safetensors",
874
- "videodit_blocks.layers.9.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
875
- "videodit_blocks.layers.9.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
876
- "videodit_blocks.layers.9.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
877
- "videodit_blocks.layers.9.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
878
- "videodit_blocks.layers.9.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
879
- "videodit_blocks.layers.9.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
880
- "videodit_blocks.layers.9.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
881
- "videodit_blocks.layers.9.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
882
- "videodit_blocks.layers.9.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
883
- "videodit_blocks.layers.9.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
884
- "videodit_blocks.layers.9.self_attention.linear_kv_xattn.weight": "model-00001-of-00002.safetensors",
885
- "videodit_blocks.layers.9.self_attention.linear_proj.weight": "model-00001-of-00002.safetensors",
886
- "videodit_blocks.layers.9.self_attention.linear_qkv.k.weight": "model-00002-of-00002.safetensors",
887
- "videodit_blocks.layers.9.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
888
- "videodit_blocks.layers.9.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
889
- "videodit_blocks.layers.9.self_attention.linear_qkv.q.weight": "model-00002-of-00002.safetensors",
890
- "videodit_blocks.layers.9.self_attention.linear_qkv.qx.weight": "model-00002-of-00002.safetensors",
891
- "videodit_blocks.layers.9.self_attention.linear_qkv.v.weight": "model-00002-of-00002.safetensors",
892
- "videodit_blocks.layers.9.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
893
- "videodit_blocks.layers.9.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
894
- "videodit_blocks.layers.9.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
895
- "videodit_blocks.layers.9.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
896
- "videodit_blocks.layers.9.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
897
- "videodit_blocks.layers.9.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
898
- "x_embedder.weight": "model-00001-of-00002.safetensors",
899
- "y_embedder.null_caption_embedding": "model-00001-of-00002.safetensors",
900
- "y_embedder.y_proj_adaln.0.bias": "model-00001-of-00002.safetensors",
901
- "y_embedder.y_proj_adaln.0.weight": "model-00001-of-00002.safetensors",
902
- "y_embedder.y_proj_xattn.0.bias": "model-00001-of-00002.safetensors",
903
- "y_embedder.y_proj_xattn.0.weight": "model-00001-of-00002.safetensors"
904
- }
905
- }
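The `model.safetensors.index.json` removed above follows the standard Hugging Face sharded-checkpoint layout: a `metadata.total_size` field plus a `weight_map` that maps each tensor name to the shard file storing it. As a minimal sketch (the tensor names below are copied from the index; the helper function name is our own), resolving a tensor's shard is a single dictionary lookup:

```python
import json

# A small excerpt of the removed index, in the standard sharded-checkpoint shape.
index_text = """
{
  "metadata": {"total_size": 8961059904},
  "weight_map": {
    "x_embedder.weight": "model-00001-of-00002.safetensors",
    "y_embedder.null_caption_embedding": "model-00001-of-00002.safetensors"
  }
}
"""

def shard_for(tensor_name: str, index: dict) -> str:
    """Return the shard filename that stores `tensor_name`."""
    return index["weight_map"][tensor_name]

index = json.loads(index_text)
print(shard_for("x_embedder.weight", index))  # model-00001-of-00002.safetensors
```

A loader would open only the shards actually referenced, reading each tensor from the file its `weight_map` entry names.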
ckpt/magi/4.5B_distill/inference_weight.distill/model-00001-of-00002.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:43b4b8c5feac8b0ec534cafa5d68227f23369999c459c4bd8c449b16d2e31443
- size 4359001088
 
 
 
 
ckpt/magi/4.5B_distill/inference_weight.distill/model-00002-of-00002.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:4f880ce499636f21bdb5ffe8d600abd57c213b66cd1945ffd4efe492c67641d0
- size 4602174432
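The deleted shard files are Git LFS pointers: three `key value` lines (`version`, `oid`, `size`) standing in for the large binary. A minimal sketch of parsing one such pointer (the function name is our own; the layout follows the pointer text shown above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, sha256 digest, and byte size."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),  # size of the real file in bytes
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4f880ce499636f21bdb5ffe8d600abd57c213b66cd1945ffd4efe492c67641d0
size 4602174432
"""
print(parse_lfs_pointer(pointer)["size"])  # 4602174432
```

The `oid` digest is what LFS uses to fetch the actual ~4.6 GB safetensors shard from the LFS store.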
 
 
 
 
ckpt/magi/4.5B_distill/inference_weight.distill/model.safetensors.index.json DELETED
@@ -1,905 +0,0 @@
1
- {
2
- "metadata": {
3
- "total_size": 8961059904
4
- },
5
- "weight_map": {
6
- "final_linear.linear.weight": "model-00001-of-00002.safetensors",
7
- "rope.bands": "model-00001-of-00002.safetensors",
8
- "t_embedder.mlp.0.bias": "model-00001-of-00002.safetensors",
9
- "t_embedder.mlp.0.weight": "model-00001-of-00002.safetensors",
10
- "t_embedder.mlp.2.bias": "model-00001-of-00002.safetensors",
11
- "t_embedder.mlp.2.weight": "model-00001-of-00002.safetensors",
12
- "videodit_blocks.final_layernorm.bias": "model-00001-of-00002.safetensors",
13
- "videodit_blocks.final_layernorm.weight": "model-00001-of-00002.safetensors",
14
- "videodit_blocks.layers.0.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
15
- "videodit_blocks.layers.0.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
16
- "videodit_blocks.layers.0.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
17
- "videodit_blocks.layers.0.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
18
- "videodit_blocks.layers.0.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
19
- "videodit_blocks.layers.0.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
20
- "videodit_blocks.layers.0.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
21
- "videodit_blocks.layers.0.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
22
- "videodit_blocks.layers.0.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
23
- "videodit_blocks.layers.0.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
24
- "videodit_blocks.layers.0.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
25
- "videodit_blocks.layers.0.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
26
- "videodit_blocks.layers.0.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
27
- "videodit_blocks.layers.0.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
28
- "videodit_blocks.layers.0.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
29
- "videodit_blocks.layers.0.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
30
- "videodit_blocks.layers.0.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
31
- "videodit_blocks.layers.0.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
32
- "videodit_blocks.layers.0.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
33
- "videodit_blocks.layers.0.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
34
- "videodit_blocks.layers.0.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
35
- "videodit_blocks.layers.0.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
36
- "videodit_blocks.layers.0.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
37
- "videodit_blocks.layers.0.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
38
- "videodit_blocks.layers.0.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
39
- "videodit_blocks.layers.0.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
40
- "videodit_blocks.layers.1.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
41
- "videodit_blocks.layers.1.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
42
- "videodit_blocks.layers.1.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
43
- "videodit_blocks.layers.1.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
44
- "videodit_blocks.layers.1.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
45
- "videodit_blocks.layers.1.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
46
- "videodit_blocks.layers.1.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
47
- "videodit_blocks.layers.1.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
48
- "videodit_blocks.layers.1.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
49
- "videodit_blocks.layers.1.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
50
- "videodit_blocks.layers.1.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
51
- "videodit_blocks.layers.1.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
52
- "videodit_blocks.layers.1.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
53
- "videodit_blocks.layers.1.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
54
- "videodit_blocks.layers.1.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
55
- "videodit_blocks.layers.1.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
56
- "videodit_blocks.layers.1.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
57
- "videodit_blocks.layers.1.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
58
- "videodit_blocks.layers.1.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
59
- "videodit_blocks.layers.1.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
60
- "videodit_blocks.layers.1.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
61
- "videodit_blocks.layers.1.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
62
- "videodit_blocks.layers.1.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
63
- "videodit_blocks.layers.1.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
64
- "videodit_blocks.layers.1.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
65
- "videodit_blocks.layers.1.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
66
- "videodit_blocks.layers.10.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
67
- "videodit_blocks.layers.10.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
68
- "videodit_blocks.layers.10.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
69
- "videodit_blocks.layers.10.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
70
- "videodit_blocks.layers.10.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
71
- "videodit_blocks.layers.10.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
72
- "videodit_blocks.layers.10.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
73
- "videodit_blocks.layers.10.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
74
- "videodit_blocks.layers.10.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
75
- "videodit_blocks.layers.10.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
76
- "videodit_blocks.layers.10.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
77
- "videodit_blocks.layers.10.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
78
- "videodit_blocks.layers.10.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
79
- "videodit_blocks.layers.10.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
80
- "videodit_blocks.layers.10.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
81
- "videodit_blocks.layers.10.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
82
- "videodit_blocks.layers.10.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
83
- "videodit_blocks.layers.10.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
84
- "videodit_blocks.layers.10.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
85
- "videodit_blocks.layers.10.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
86
- "videodit_blocks.layers.10.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
87
- "videodit_blocks.layers.10.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
88
- "videodit_blocks.layers.10.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
89
- "videodit_blocks.layers.10.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
90
- "videodit_blocks.layers.10.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
91
- "videodit_blocks.layers.10.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
92
- "videodit_blocks.layers.11.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
93
- "videodit_blocks.layers.11.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
94
- "videodit_blocks.layers.11.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
95
- "videodit_blocks.layers.11.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
96
- "videodit_blocks.layers.11.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
97
- "videodit_blocks.layers.11.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
98
- "videodit_blocks.layers.11.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
99
- "videodit_blocks.layers.11.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
100
- "videodit_blocks.layers.11.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
101
- "videodit_blocks.layers.11.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
102
- "videodit_blocks.layers.11.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
103
- "videodit_blocks.layers.11.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
104
- "videodit_blocks.layers.11.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
105
- "videodit_blocks.layers.11.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
106
- "videodit_blocks.layers.11.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
107
- "videodit_blocks.layers.11.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
108
- "videodit_blocks.layers.11.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
109
- "videodit_blocks.layers.11.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
110
- "videodit_blocks.layers.11.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
111
- "videodit_blocks.layers.11.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
112
- "videodit_blocks.layers.11.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
113
- "videodit_blocks.layers.11.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
114
- "videodit_blocks.layers.11.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
115
- "videodit_blocks.layers.11.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
116
- "videodit_blocks.layers.11.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
117
- "videodit_blocks.layers.11.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
118
- "videodit_blocks.layers.12.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
119
- "videodit_blocks.layers.12.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
120
- "videodit_blocks.layers.12.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
121
- "videodit_blocks.layers.12.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
122
- "videodit_blocks.layers.12.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
123
- "videodit_blocks.layers.12.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
124
- "videodit_blocks.layers.12.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
125
- "videodit_blocks.layers.12.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
126
- "videodit_blocks.layers.12.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
127
- "videodit_blocks.layers.12.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
128
- "videodit_blocks.layers.12.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
129
- "videodit_blocks.layers.12.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
130
- "videodit_blocks.layers.12.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
131
- "videodit_blocks.layers.12.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
132
- "videodit_blocks.layers.12.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
133
- "videodit_blocks.layers.12.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
134
- "videodit_blocks.layers.12.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
135
- "videodit_blocks.layers.12.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
136
- "videodit_blocks.layers.12.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
137
- "videodit_blocks.layers.12.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
138
- "videodit_blocks.layers.12.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
139
- "videodit_blocks.layers.12.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
140
- "videodit_blocks.layers.12.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
141
- "videodit_blocks.layers.12.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
142
- "videodit_blocks.layers.12.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
143
- "videodit_blocks.layers.12.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
144
- "videodit_blocks.layers.13.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
145
- "videodit_blocks.layers.13.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
146
- "videodit_blocks.layers.13.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
147
- "videodit_blocks.layers.13.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
148
- "videodit_blocks.layers.13.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
149
- "videodit_blocks.layers.13.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
150
- "videodit_blocks.layers.13.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
151
- "videodit_blocks.layers.13.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
152
- "videodit_blocks.layers.13.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
153
- "videodit_blocks.layers.13.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
154
- "videodit_blocks.layers.13.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
155
- "videodit_blocks.layers.13.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
156
- "videodit_blocks.layers.13.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
157
- "videodit_blocks.layers.13.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
158
- "videodit_blocks.layers.13.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
159
- "videodit_blocks.layers.13.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
160
- "videodit_blocks.layers.13.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
161
- "videodit_blocks.layers.13.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
162
- "videodit_blocks.layers.13.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
163
- "videodit_blocks.layers.13.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
164
- "videodit_blocks.layers.13.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
165
- "videodit_blocks.layers.13.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
166
- "videodit_blocks.layers.13.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
167
- "videodit_blocks.layers.13.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
168
- "videodit_blocks.layers.13.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
169
- "videodit_blocks.layers.13.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
170
- "videodit_blocks.layers.14.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
171
- "videodit_blocks.layers.14.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
172
- "videodit_blocks.layers.14.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
173
- "videodit_blocks.layers.14.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
174
- "videodit_blocks.layers.14.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
175
- "videodit_blocks.layers.14.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
176
- "videodit_blocks.layers.14.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
177
- "videodit_blocks.layers.14.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
178
- "videodit_blocks.layers.14.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
179
- "videodit_blocks.layers.14.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
180
- "videodit_blocks.layers.14.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
181
- "videodit_blocks.layers.14.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
182
- "videodit_blocks.layers.14.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
183
- "videodit_blocks.layers.14.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
184
- "videodit_blocks.layers.14.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
185
- "videodit_blocks.layers.14.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
186
- "videodit_blocks.layers.14.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
187
- "videodit_blocks.layers.14.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
188
- "videodit_blocks.layers.14.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
189
- "videodit_blocks.layers.14.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
190
- "videodit_blocks.layers.14.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
191
- "videodit_blocks.layers.14.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
192
- "videodit_blocks.layers.14.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
193
- "videodit_blocks.layers.14.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
194
- "videodit_blocks.layers.14.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
195
- "videodit_blocks.layers.14.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
196
- "videodit_blocks.layers.15.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.15.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.15.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.16.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.16.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.17.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.17.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.18.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.18.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.19.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.19.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.2.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.2.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.20.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.20.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.21.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.21.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.22.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.22.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.23.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.23.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.24.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.24.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.25.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.26.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.27.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
535
- "videodit_blocks.layers.27.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
536
- "videodit_blocks.layers.27.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
537
- "videodit_blocks.layers.27.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
538
- "videodit_blocks.layers.27.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
539
- "videodit_blocks.layers.27.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
540
- "videodit_blocks.layers.27.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
541
- "videodit_blocks.layers.27.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
542
- "videodit_blocks.layers.27.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
543
- "videodit_blocks.layers.27.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
544
- "videodit_blocks.layers.27.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
545
- "videodit_blocks.layers.27.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
546
- "videodit_blocks.layers.27.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
547
- "videodit_blocks.layers.27.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
548
- "videodit_blocks.layers.27.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
549
- "videodit_blocks.layers.27.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
550
- "videodit_blocks.layers.27.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
551
- "videodit_blocks.layers.27.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
552
- "videodit_blocks.layers.27.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
553
- "videodit_blocks.layers.27.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
554
- "videodit_blocks.layers.27.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
555
- "videodit_blocks.layers.27.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
556
- "videodit_blocks.layers.27.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
557
- "videodit_blocks.layers.27.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
558
- "videodit_blocks.layers.27.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
559
- "videodit_blocks.layers.27.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
560
- "videodit_blocks.layers.28.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
561
- "videodit_blocks.layers.28.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
562
- "videodit_blocks.layers.28.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
563
- "videodit_blocks.layers.28.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
564
- "videodit_blocks.layers.28.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
565
- "videodit_blocks.layers.28.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
566
- "videodit_blocks.layers.28.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
567
- "videodit_blocks.layers.28.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
568
- "videodit_blocks.layers.28.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
569
- "videodit_blocks.layers.28.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
570
- "videodit_blocks.layers.28.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
571
- "videodit_blocks.layers.28.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
572
- "videodit_blocks.layers.28.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
573
- "videodit_blocks.layers.28.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
574
- "videodit_blocks.layers.28.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
575
- "videodit_blocks.layers.28.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
576
- "videodit_blocks.layers.28.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
577
- "videodit_blocks.layers.28.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
578
- "videodit_blocks.layers.28.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
579
- "videodit_blocks.layers.28.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
580
- "videodit_blocks.layers.28.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
581
- "videodit_blocks.layers.28.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
582
- "videodit_blocks.layers.28.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
583
- "videodit_blocks.layers.28.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
584
- "videodit_blocks.layers.28.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
585
- "videodit_blocks.layers.28.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
586
- "videodit_blocks.layers.29.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
587
- "videodit_blocks.layers.29.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
588
- "videodit_blocks.layers.29.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
589
- "videodit_blocks.layers.29.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
590
- "videodit_blocks.layers.29.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
591
- "videodit_blocks.layers.29.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
592
- "videodit_blocks.layers.29.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
593
- "videodit_blocks.layers.29.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
594
- "videodit_blocks.layers.29.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
595
- "videodit_blocks.layers.29.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
596
- "videodit_blocks.layers.29.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
597
- "videodit_blocks.layers.29.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
598
- "videodit_blocks.layers.29.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
599
- "videodit_blocks.layers.29.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
600
- "videodit_blocks.layers.29.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
601
- "videodit_blocks.layers.29.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
602
- "videodit_blocks.layers.29.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
603
- "videodit_blocks.layers.29.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
604
- "videodit_blocks.layers.29.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
605
- "videodit_blocks.layers.29.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
606
- "videodit_blocks.layers.29.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
607
- "videodit_blocks.layers.29.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
608
- "videodit_blocks.layers.29.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
609
- "videodit_blocks.layers.29.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
610
- "videodit_blocks.layers.29.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
611
- "videodit_blocks.layers.29.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
612
- "videodit_blocks.layers.3.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
613
- "videodit_blocks.layers.3.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
614
- "videodit_blocks.layers.3.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
615
- "videodit_blocks.layers.3.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
616
- "videodit_blocks.layers.3.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
617
- "videodit_blocks.layers.3.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
618
- "videodit_blocks.layers.3.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
619
- "videodit_blocks.layers.3.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
620
- "videodit_blocks.layers.3.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
621
- "videodit_blocks.layers.3.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
622
- "videodit_blocks.layers.3.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
623
- "videodit_blocks.layers.3.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
624
- "videodit_blocks.layers.3.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
625
- "videodit_blocks.layers.3.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
626
- "videodit_blocks.layers.3.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
627
- "videodit_blocks.layers.3.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
628
- "videodit_blocks.layers.3.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
629
- "videodit_blocks.layers.3.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
630
- "videodit_blocks.layers.3.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
631
- "videodit_blocks.layers.3.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
632
- "videodit_blocks.layers.3.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
633
- "videodit_blocks.layers.3.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
634
- "videodit_blocks.layers.3.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
635
- "videodit_blocks.layers.3.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
636
- "videodit_blocks.layers.3.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
637
- "videodit_blocks.layers.3.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
638
- "videodit_blocks.layers.30.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
639
- "videodit_blocks.layers.30.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
640
- "videodit_blocks.layers.30.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
641
- "videodit_blocks.layers.30.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
642
- "videodit_blocks.layers.30.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
643
- "videodit_blocks.layers.30.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
644
- "videodit_blocks.layers.30.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
645
- "videodit_blocks.layers.30.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
646
- "videodit_blocks.layers.30.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
647
- "videodit_blocks.layers.30.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
648
- "videodit_blocks.layers.30.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
649
- "videodit_blocks.layers.30.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
650
- "videodit_blocks.layers.30.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
651
- "videodit_blocks.layers.30.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
652
- "videodit_blocks.layers.30.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
653
- "videodit_blocks.layers.30.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
654
- "videodit_blocks.layers.30.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
655
- "videodit_blocks.layers.30.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
656
- "videodit_blocks.layers.30.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
657
- "videodit_blocks.layers.30.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
658
- "videodit_blocks.layers.30.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
659
- "videodit_blocks.layers.30.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
660
- "videodit_blocks.layers.30.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
661
- "videodit_blocks.layers.30.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
662
- "videodit_blocks.layers.30.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
663
- "videodit_blocks.layers.30.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
664
- "videodit_blocks.layers.31.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
665
- "videodit_blocks.layers.31.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
666
- "videodit_blocks.layers.31.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
667
- "videodit_blocks.layers.31.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
668
- "videodit_blocks.layers.31.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
669
- "videodit_blocks.layers.31.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
670
- "videodit_blocks.layers.31.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
671
- "videodit_blocks.layers.31.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
672
- "videodit_blocks.layers.31.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
673
- "videodit_blocks.layers.31.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
674
- "videodit_blocks.layers.31.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
675
- "videodit_blocks.layers.31.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
676
- "videodit_blocks.layers.31.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
677
- "videodit_blocks.layers.31.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
678
- "videodit_blocks.layers.31.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
679
- "videodit_blocks.layers.31.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
680
- "videodit_blocks.layers.31.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
681
- "videodit_blocks.layers.31.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
682
- "videodit_blocks.layers.31.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
683
- "videodit_blocks.layers.31.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
684
- "videodit_blocks.layers.31.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
685
- "videodit_blocks.layers.31.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
686
- "videodit_blocks.layers.31.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
687
- "videodit_blocks.layers.31.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
688
- "videodit_blocks.layers.31.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
689
- "videodit_blocks.layers.31.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
690
- "videodit_blocks.layers.32.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
691
- "videodit_blocks.layers.32.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
692
- "videodit_blocks.layers.32.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
693
- "videodit_blocks.layers.32.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
694
- "videodit_blocks.layers.32.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
695
- "videodit_blocks.layers.32.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
696
- "videodit_blocks.layers.32.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
697
- "videodit_blocks.layers.32.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
698
- "videodit_blocks.layers.32.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
699
- "videodit_blocks.layers.32.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
700
- "videodit_blocks.layers.32.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
701
- "videodit_blocks.layers.32.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
702
- "videodit_blocks.layers.32.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
703
- "videodit_blocks.layers.32.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
704
- "videodit_blocks.layers.32.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
705
- "videodit_blocks.layers.32.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
706
- "videodit_blocks.layers.32.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
707
- "videodit_blocks.layers.32.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
708
- "videodit_blocks.layers.32.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
709
- "videodit_blocks.layers.32.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
710
- "videodit_blocks.layers.32.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
711
- "videodit_blocks.layers.32.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
712
- "videodit_blocks.layers.32.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
713
- "videodit_blocks.layers.32.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
714
- "videodit_blocks.layers.32.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
715
- "videodit_blocks.layers.32.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
716
- "videodit_blocks.layers.33.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
717
- "videodit_blocks.layers.33.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
718
- "videodit_blocks.layers.33.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
719
- "videodit_blocks.layers.33.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
720
- "videodit_blocks.layers.33.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
721
- "videodit_blocks.layers.33.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
722
- "videodit_blocks.layers.33.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
723
- "videodit_blocks.layers.33.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
724
- "videodit_blocks.layers.33.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
725
- "videodit_blocks.layers.33.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
726
- "videodit_blocks.layers.33.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
727
- "videodit_blocks.layers.33.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
728
- "videodit_blocks.layers.33.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
729
- "videodit_blocks.layers.33.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
730
- "videodit_blocks.layers.33.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
731
- "videodit_blocks.layers.33.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
732
- "videodit_blocks.layers.33.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
733
- "videodit_blocks.layers.33.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
734
- "videodit_blocks.layers.33.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
735
- "videodit_blocks.layers.33.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
736
- "videodit_blocks.layers.33.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
737
- "videodit_blocks.layers.33.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
738
- "videodit_blocks.layers.33.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
739
- "videodit_blocks.layers.33.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
740
- "videodit_blocks.layers.33.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
741
- "videodit_blocks.layers.33.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
742
- "videodit_blocks.layers.4.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
743
- "videodit_blocks.layers.4.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
744
- "videodit_blocks.layers.4.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
745
- "videodit_blocks.layers.4.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
746
- "videodit_blocks.layers.4.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
747
- "videodit_blocks.layers.4.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
748
- "videodit_blocks.layers.4.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
749
- "videodit_blocks.layers.4.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
750
- "videodit_blocks.layers.4.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
751
- "videodit_blocks.layers.4.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
752
- "videodit_blocks.layers.4.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
753
- "videodit_blocks.layers.4.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
754
- "videodit_blocks.layers.4.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
755
- "videodit_blocks.layers.4.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
756
- "videodit_blocks.layers.4.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
757
- "videodit_blocks.layers.4.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
758
- "videodit_blocks.layers.4.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
759
- "videodit_blocks.layers.4.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
760
- "videodit_blocks.layers.4.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
761
- "videodit_blocks.layers.4.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
762
- "videodit_blocks.layers.4.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
763
- "videodit_blocks.layers.4.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
764
- "videodit_blocks.layers.4.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
765
- "videodit_blocks.layers.4.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
766
- "videodit_blocks.layers.4.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
767
- "videodit_blocks.layers.4.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
768
- "videodit_blocks.layers.5.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
769
- "videodit_blocks.layers.5.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
770
- "videodit_blocks.layers.5.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
771
- "videodit_blocks.layers.5.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
772
- "videodit_blocks.layers.5.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
773
- "videodit_blocks.layers.5.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
774
- "videodit_blocks.layers.5.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
775
- "videodit_blocks.layers.5.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
776
- "videodit_blocks.layers.5.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
777
- "videodit_blocks.layers.5.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
778
- "videodit_blocks.layers.5.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
779
- "videodit_blocks.layers.5.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
780
- "videodit_blocks.layers.5.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
781
- "videodit_blocks.layers.5.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
782
- "videodit_blocks.layers.5.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
783
- "videodit_blocks.layers.5.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
784
- "videodit_blocks.layers.5.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
785
- "videodit_blocks.layers.5.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
786
- "videodit_blocks.layers.5.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
787
- "videodit_blocks.layers.5.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
788
- "videodit_blocks.layers.5.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
789
- "videodit_blocks.layers.5.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
790
- "videodit_blocks.layers.5.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
791
- "videodit_blocks.layers.5.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
792
- "videodit_blocks.layers.5.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
793
- "videodit_blocks.layers.5.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
794
- "videodit_blocks.layers.6.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
795
- "videodit_blocks.layers.6.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
796
- "videodit_blocks.layers.6.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
797
- "videodit_blocks.layers.6.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
798
- "videodit_blocks.layers.6.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
799
- "videodit_blocks.layers.6.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
800
- "videodit_blocks.layers.6.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
801
- "videodit_blocks.layers.6.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
802
- "videodit_blocks.layers.6.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
803
- "videodit_blocks.layers.6.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
804
- "videodit_blocks.layers.6.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
805
- "videodit_blocks.layers.6.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
806
- "videodit_blocks.layers.6.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
807
- "videodit_blocks.layers.6.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
808
- "videodit_blocks.layers.6.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
809
- "videodit_blocks.layers.6.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
810
- "videodit_blocks.layers.6.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
811
- "videodit_blocks.layers.6.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
812
- "videodit_blocks.layers.6.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
813
- "videodit_blocks.layers.6.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.6.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.7.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.8.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.8.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.ada_modulate_layer.proj.0.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.ada_modulate_layer.proj.0.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.mlp.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.mlp.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.9.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.mlp_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.mlp_post_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.k_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.k_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.k_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.k_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.linear_kv_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.linear_proj.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.linear_qkv.k.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.linear_qkv.layer_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.linear_qkv.layer_norm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.linear_qkv.q.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.linear_qkv.qx.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.linear_qkv.v.weight": "model-00001-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.q_layernorm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.q_layernorm.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.q_layernorm_xattn.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attention.q_layernorm_xattn.weight": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attn_post_norm.bias": "model-00002-of-00002.safetensors",
- "videodit_blocks.layers.9.self_attn_post_norm.weight": "model-00002-of-00002.safetensors",
- "x_embedder.weight": "model-00001-of-00002.safetensors",
- "y_embedder.null_caption_embedding": "model-00001-of-00002.safetensors",
- "y_embedder.y_proj_adaln.0.bias": "model-00001-of-00002.safetensors",
- "y_embedder.y_proj_adaln.0.weight": "model-00001-of-00002.safetensors",
- "y_embedder.y_proj_xattn.0.bias": "model-00001-of-00002.safetensors",
- "y_embedder.y_proj_xattn.0.weight": "model-00001-of-00002.safetensors"
- }
- }
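The deleted entries above come from a `model.safetensors.index.json` `weight_map`, which maps each parameter name to the shard file that stores it. A loader typically inverts this mapping so each shard is opened once. The sketch below shows that grouping with a small hypothetical excerpt of the map (pure standard library; the parameter names mirror entries deleted above, but the excerpt itself is illustrative, not the full index).

```python
import json
from collections import defaultdict

# Hypothetical excerpt of a sharded-checkpoint index, in the same shape
# as the weight_map entries removed in this commit.
index = json.loads("""
{
  "weight_map": {
    "videodit_blocks.layers.7.mlp.linear_fc1.weight": "model-00001-of-00002.safetensors",
    "videodit_blocks.layers.7.mlp.linear_fc2.weight": "model-00002-of-00002.safetensors",
    "x_embedder.weight": "model-00001-of-00002.safetensors"
  }
}
""")

def shards_for(weight_map):
    """Group parameter names by the shard file that stores them,
    so a loader can open each shard file exactly once."""
    groups = defaultdict(list)
    for name, shard in weight_map.items():
        groups[shard].append(name)
    return dict(groups)

groups = shards_for(index["weight_map"])
print(sorted(groups))  # the shard files a loader would need to open
```

With the real index, iterating `groups` tells a loader which tensors to read from each `model-0000X-of-00002.safetensors` shard.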
ckpt/magi/4.5B_distill_quant/inference_weight.fp8.distill/model.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:fa220da5fe19fdd466151d6f6c511b7c71d8d47adc5348267cb8df1cf666c4af
- size 5140362808