---
base_model: Lightricks/LTX-Video
library_name: gguf
quantized_by: wsbagnsv1
tags:
- ltx-video
- text-to-video
- image-to-video
language:
- en
license: other
license_link: LICENSE.md
---
This is a direct GGUF conversion of the 13b-0.9.7-dev variant from [Lightricks/LTX-Video](https://huggingface.co/Lightricks/LTX-Video).
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| ------------ | ------------------- | --------------------------------- | ------ |
| Main Model | ltxv-13b-0.9.7-dev | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | T5-V1.1-XXL-Encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main) / [GGUF](https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf) |
| VAE | ltxv-13b-0.9.7-vae | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF/blob/main/ltxv-13b-0.9.7-vae-BF16.safetensors) |
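The folder layout above can be sketched with the `huggingface-cli` tool from `huggingface_hub`. This is only an illustration, assuming a ComfyUI checkout in the current directory; `<chosen-quant>.gguf` is a placeholder you must replace with one of the actual GGUF files from this repo.

```shell
# Create the target folders (assumes ComfyUI lives at ./ComfyUI)
mkdir -p ComfyUI/models/diffusion_models ComfyUI/models/text_encoders ComfyUI/models/vae

# Main model: replace <chosen-quant> with the quant that fits your VRAM
huggingface-cli download wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF \
    "<chosen-quant>.gguf" --local-dir ComfyUI/models/diffusion_models

# VAE (filename taken from the table above)
huggingface-cli download wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF \
    ltxv-13b-0.9.7-vae-BF16.safetensors --local-dir ComfyUI/models/vae
```

The text encoder can be fetched the same way from either of the linked repositories.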
[**Example workflow**](https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF/blob/main/exampleworkflow.json) - based on the [official example workflow](https://github.com/Lightricks/ComfyUI-LTXVideo/tree/master/example_workflows/)
### Notes
*As this is a quantized model, not a finetune, all the restrictions and license terms of the original model still apply.*

*ComfyUI now supports these GGUFs natively, so you just need to update ComfyUI to the latest version; if issues persist, also update all the nodes in the workflow.*

*Other T5 text encoders will probably work as well, so just use one that you like; they are available as Safetensors or GGUF. The best one I tried was the T5 v1.1 XXL encoder.*

*LoRAs do work, but you need to follow the steps in the example workflow, and don't use torch.compile together with LoRAs!*

*TeaCache works with LTX, but not well at the moment: in my testing, `rel_l1_thresh` only seemed to work at 0.01, and even that caused some noticeable quality drops, so it is best left disabled.*