---
license: apache-2.0
base_model:
- Lightricks/LTX-Video
library_name: gguf
tags:
- video
- video-generation
pipeline_tag: image-to-video
---

ComfyUI does not natively support these quants in the latest stable version, but the nightly/dev build includes a fix, which makes the old workaround unnecessary. To update to the dev version, run `ComfyUI_windows_portable\update\update_comfyui.bat` (or the equivalent for other install options) and the model should load without issues.

You also need to use the right VAE. Testing with an older LTXV VAE produced severe errors, so use this one instead: [vae](https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF/blob/main/ltxv-13b-0.9.7-vae-BF16.safetensors)

This is a direct GGUF conversion of [Lightricks/ltxv-13b-0.9.7-dev](https://huggingface.co/Lightricks/LTX-Video).

All quants are created from the FP32 base file. I have only uploaded Q8_0 and below; if you want the F16 or BF16 version, I can upload it on request.

The model files can be used with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place model files in `ComfyUI/models/unet`; see the GitHub readme for further install instructions.

Please refer to [this chart](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard) for a basic overview of quantization types.

For conversion I used the conversion scripts from [city96](https://huggingface.co/city96).
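The download-and-place steps above can be sketched as a short shell script. This is a sketch, not part of the official instructions: it assumes the `huggingface_hub` CLI is installed (`pip install -U "huggingface_hub[cli]"`), and the exact quant filename is an assumption, so check the repository's file listing for the real names.

```shell
#!/bin/sh
# Sketch: fetch a quant and the matching VAE into a ComfyUI install.
# Adjust COMFY_DIR to your setup, e.g. ComfyUI_windows_portable/ComfyUI.
COMFY_DIR="ComfyUI"

# GGUF quants go in models/unet, the matching VAE in models/vae.
mkdir -p "$COMFY_DIR/models/unet" "$COMFY_DIR/models/vae"

# NOTE: the quant filename below is assumed for illustration; pick the
# quant you actually want from the repo's file listing.
huggingface-cli download wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF \
  ltxv-13b-0.9.7-dev-Q8_0.gguf --local-dir "$COMFY_DIR/models/unet"

huggingface-cli download wsbagnsv1/ltxv-13b-0.9.7-dev-GGUF \
  ltxv-13b-0.9.7-vae-BF16.safetensors --local-dir "$COMFY_DIR/models/vae"
```

After this, the quant should appear in ComfyUI-GGUF's "Unet Loader (GGUF)" node and the VAE in the standard VAE loader.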