Update README.md
README.md CHANGED
@@ -59,7 +59,7 @@ license: apache-2.0
 
 ## Description:
 
-Kandinsky 4.0 is a text-to-video generation model based on latent diffusion for 480p
+Kandinsky 4.0 T2V Flash is a text-to-video generation model based on latent diffusion that can generate **12-second videos** at 480p resolution in **11 seconds** on a single NVIDIA H100 GPU. The pipeline consists of the 3D causal [CogVideoX](https://arxiv.org/pdf/2408.06072) VAE, the [T5-V1.1-XXL](https://huggingface.co/google/t5-v1_1-xxl) text embedder, and our trained MMDiT-like transformer model.
 
 <img src="https://github.com/ai-forever/Kandinsky-4/assets/pipeline.png">
 
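The new description pins down the three stages: the T5-V1.1-XXL text embedder, the trained MMDiT-like diffusion transformer, and the 3D causal CogVideoX VAE. The sketch below shows how such stages could chain together; `T5EncoderModel` and `AutoencoderKLCogVideoX` are the actual `transformers`/`diffusers` classes for the named components, while the transformer call, the latent shape, and the VAE checkpoint path are illustrative assumptions (the trained transformer ships with the Kandinsky-4 repo, not with this sketch).

```python
# Sketch of the three-stage pipeline: text embedder -> diffusion transformer -> VAE.
# T5EncoderModel (transformers) and AutoencoderKLCogVideoX (diffusers) are real
# classes; the MMDiT call, latent shape, and VAE checkpoint path are assumptions.
import torch
from transformers import T5EncoderModel, T5Tokenizer
from diffusers import AutoencoderKLCogVideoX

device = "cuda"

# 1. Text embedder: T5-V1.1-XXL turns the prompt into conditioning tokens.
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")
text_encoder = T5EncoderModel.from_pretrained(
    "google/t5-v1_1-xxl", torch_dtype=torch.bfloat16
).to(device)
prompt = "A red panda rolling down a grassy hill, cinematic"
ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
text_emb = text_encoder(ids).last_hidden_state  # (1, seq_len, 4096)

# 2. Diffusion transformer: iteratively denoises video latents conditioned on
#    text_emb (placeholder; the trained MMDiT-like model is not sketched here).
latents = torch.randn(1, 16, 13, 60, 90, device=device, dtype=torch.bfloat16)
# for t in scheduler.timesteps:                  # hypothetical denoising loop
#     latents = mmdit(latents, text_emb, t)

# 3. 3D causal VAE: decodes the denoised latents back into RGB frames.
vae = AutoencoderKLCogVideoX.from_pretrained(
    "THUDM/CogVideoX-2b", subfolder="vae", torch_dtype=torch.bfloat16  # assumed path
).to(device)
video = vae.decode(latents).sample  # (1, 3, frames, 480, 720)
```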
@@ -70,7 +70,7 @@ A serious problem for all diffusion models, and especially video generation mode
 
 ## Architecture
 
-For training Kandinsky 4 Flash we used the following architecture of diffusion transformer, based on MMDiT proposed in [Stable Diffusion 3](https://arxiv.org/pdf/2403.03206).
+For training Kandinsky 4.0 T2V Flash we used the following diffusion transformer architecture, based on the MMDiT proposed in [Stable Diffusion 3](https://arxiv.org/pdf/2403.03206).
 
 <img src="https://github.com/ai-forever/Kandinsky-4/assets/MMDiT1.png"> <img src="https://github.com/ai-forever/Kandinsky-4/assets/MMDiT_block1.png">
 
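Concretely, an MMDiT block keeps two separate weight streams, one for text tokens and one for video-latent tokens, but runs a single self-attention over their concatenation so the modalities can exchange information. A simplified sketch of one such block (a hypothetical module, with SD3's timestep/AdaLN modulation omitted for brevity):

```python
# Simplified sketch of one MMDiT block (hypothetical module): text and video
# tokens keep separate projection/MLP weights, but self-attention runs jointly
# over the concatenated sequence. SD3's timestep/AdaLN modulation is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MMDiTBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.heads = num_heads
        # Two parallel parameter streams, one per modality.
        self.txt_norm1, self.vid_norm1 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.txt_qkv, self.vid_qkv = nn.Linear(dim, 3 * dim), nn.Linear(dim, 3 * dim)
        self.txt_proj, self.vid_proj = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.txt_norm2, self.vid_norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.txt_mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.vid_mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, txt, vid):
        B, T, D = txt.shape

        def split_heads(x):  # (B, N, D) -> (B, heads, N, D // heads)
            return x.view(B, -1, self.heads, D // self.heads).transpose(1, 2)

        # Each stream computes its own Q, K, V from its own weights ...
        tq, tk, tv = self.txt_qkv(self.txt_norm1(txt)).chunk(3, dim=-1)
        vq, vk, vv = self.vid_qkv(self.vid_norm1(vid)).chunk(3, dim=-1)
        # ... but attention is computed once over the joint token sequence.
        q = split_heads(torch.cat([tq, vq], dim=1))
        k = split_heads(torch.cat([tk, vk], dim=1))
        v = split_heads(torch.cat([tv, vv], dim=1))
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(B, -1, D)
        # Split the joint result back into the two streams.
        txt = txt + self.txt_proj(out[:, :T])
        vid = vid + self.vid_proj(out[:, T:])
        txt = txt + self.txt_mlp(self.txt_norm2(txt))
        vid = vid + self.vid_mlp(self.vid_norm2(vid))
        return txt, vid

# Example: 226 text tokens and 3,600 video-latent tokens in a 1024-dim model.
txt, vid = MMDiTBlock(1024, 16)(torch.randn(2, 226, 1024), torch.randn(2, 3600, 1024))
```

Sharing the attention map while keeping per-modality weights is the central MMDiT design choice: each modality retains its own representation space, yet every text token can attend to every video token and vice versa.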