Update README.md
latent diffusion model
README.md
CHANGED
@@ -22,7 +22,7 @@ library_name: diffusers
 
 Training code, PyTorch and FLAX implementation are available here: <https://github.com/lopho/makeavid-sd-tpu>
 
-This model extends an inpainting
+This model extends an inpainting latent-diffusion image generation model ([Stable Diffusion v1.5 Inpaint](https://huggingface.co/runwayml/stable-diffusion-inpainting))
 with temporal convolution and temporal self-attention ported from [Make-A-Video PyTorch](https://github.com/lucidrains/make-a-video-pytorch)
 
 It has then been fine tuned for ~150k steps on a [dataset](https://huggingface.co/datasets/TempoFunk/tempofunk-sdance) of 10,000 videos themed around dance.
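For context, the updated description points at a factorized space/time design: the pretrained 2D inpainting UNet is kept, and temporal layers (a 1D convolution over the frame axis plus self-attention across frames) are added alongside it. The sketch below is a minimal, hypothetical PyTorch illustration of such layers, not the code from makeavid-sd-tpu; the module names (`TemporalConv`, `TemporalSelfAttention`) are invented for this example, and the zero-initialization is an assumption following the identity-at-init practice described in the Make-A-Video paper, so the pretrained image weights are preserved at the start of fine-tuning.

```python
# Hypothetical sketch of factorized temporal layers (not the actual
# makeavid-sd-tpu code): a 1D conv and self-attention over the frame axis,
# inserted alongside the pretrained 2D spatial layers of the UNet.
import torch
from torch import nn
from einops import rearrange

class TemporalConv(nn.Module):
    """1D convolution across frames, applied independently at each pixel."""
    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        # Zero-init the new layer so the block starts as an identity map
        # and the pretrained image model's behavior is untouched at step 0.
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        b, c, f, h, w = x.shape
        y = rearrange(x, "b c f h w -> (b h w) c f")
        y = self.conv(y)
        y = rearrange(y, "(b h w) c f -> b c f h w", b=b, h=h, w=w)
        return x + y  # residual connection

class TemporalSelfAttention(nn.Module):
    """Self-attention across frames at each spatial position."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Zero-init the output projection for the same identity-at-init effect.
        nn.init.zeros_(self.attn.out_proj.weight)
        nn.init.zeros_(self.attn.out_proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, f, h, w = x.shape
        y = rearrange(x, "b c f h w -> (b h w) f c")
        y = self.norm(y)
        y, _ = self.attn(y, y, y, need_weights=False)
        y = rearrange(y, "(b h w) f c -> b c f h w", b=b, h=h, w=w)
        return x + y  # residual connection

# Smoke test: the layers preserve shape, so they can slot into an existing block.
x = torch.randn(1, 64, 8, 32, 32)  # (batch, channels, frames, height, width)
out = TemporalSelfAttention(64)(TemporalConv(64)(x))
assert out.shape == x.shape
```

Because both layers are residual and start as identities, attaching them changes nothing until fine-tuning begins, which is what makes extending a pretrained image model in this way practical.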