Extra models for use with Blissful Tuner - https://github.com/Sarania/blissful-tuner/
- `./wan_lcm/*` - LCM LoRAs for Wan-architecture models, extracted from https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill
- `./VFI/gimm-vfi` - For use with `python -m blissful_tuner.GIMMVFI`; an excellent frame-rate interpolation model provided by https://github.com/GSeanCDAT/GIMM-VFI (S-Lab License 1.0)
- `./upscaling/*` - For use with `python -m blissful_tuner.upscaler`; the upscaling models SwinIR ( https://github.com/JingyunLiang/SwinIR ) and ESRGAN 4x_NMKD-Siax_200k
- `./face_restoration/*` - The face restoration models CodeFormer ( https://github.com/sczhou/CodeFormer , S-Lab License 1.0) and GFPGAN ( https://github.com/TencentARC/GFPGAN ) for use with `python -m blissful_tuner.facefix`
- `./yolo/*` - The yolov8-face model from https://github.com/lindevs/yolov8-face, for use with `python -m blissful_tuner.yolo_blur`
- `./taehv/*` - Tiny autoencoders for use with Wan and Hunyuan as `--preview_vae`, from https://github.com/madebyollin/taehv (MIT License)
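For context on the `./wan_lcm/*` files: a LoRA ships two low-rank factors per adapted weight, and merging them into the base weight follows the standard update `W' = W + strength * (alpha / r) * B @ A`. The sketch below is a minimal, generic illustration with made-up shapes and an assumed `alpha`; it is not the loader Blissful Tuner actually uses.

```python
import numpy as np

# Hypothetical shapes: a 64x64 base weight with rank-4 LoRA factors.
rng = np.random.default_rng(0)
d, r = 64, 4
W = rng.standard_normal((d, d)).astype(np.float32)  # base weight
A = rng.standard_normal((r, d)).astype(np.float32)  # LoRA "down" factor
B = rng.standard_normal((d, r)).astype(np.float32)  # LoRA "up" factor
alpha = 4.0     # LoRA alpha (assumed value, normally stored with the LoRA)
strength = 1.0  # user-chosen merge strength

# Standard LoRA merge: W' = W + strength * (alpha / r) * B @ A
W_merged = W + strength * (alpha / r) * (B @ A)
print(W_merged.shape)  # (64, 64)
```

Setting `strength` below 1.0 weakens the LoRA's effect; the LCM LoRAs here distill the base model toward few-step sampling when merged or applied at inference.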
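To make the VFI entry concrete: frame interpolation synthesizes new frames between existing ones to raise the frame rate. The toy sketch below just averages two frames at t=0.5; GIMM-VFI instead predicts motion with a learned implicit model, so this is only the crudest baseline for the same task.

```python
import numpy as np

# Two hypothetical consecutive frames (H x W x 3, uint8).
f0 = np.zeros((4, 4, 3), dtype=np.uint8)
f1 = np.full((4, 4, 3), 200, dtype=np.uint8)

def midpoint_frame(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Linear blend at t=0.5 - a crude stand-in for learned, motion-aware VFI."""
    return ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(np.uint8)

mid = midpoint_frame(f0, f1)
print(mid[0, 0])  # [100 100 100]
```

Naive blending ghosts on real motion, which is exactly why a learned interpolator like GIMM-VFI is worth the extra model download.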
license: apache-2.0 except where otherwise noted