Optimized demos for Wan 2.2 14B models, using FP8 quantization, AoT compilation, and community LoRAs for fast, high-quality inference on ZeroGPU 💨
ZeroGPU AoTI
AI & ML interests
AoT compilation, ZeroGPU inference optimization