Files

| Filename | File size | Target modules | Quantization dtype |
|---|---|---|---|
| `Illustrious-XL-v2.0.fp8_e4m3fn.safetensors` | 4.04 GB | UNet (attn, ff), Text encoders (self_attn, mlp) | `torch.float8_e4m3fn` |
| `Illustrious-XL-v2.0.unet-fp8_e4m3fn.safetensors` (Recommended) | 4.75 GB | UNet (attn, ff) | `torch.float8_e4m3fn` |
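
The quantized tensors are stored as `torch.float8_e4m3fn` and are typically upcast to fp16/bf16 at compute time. As a minimal sketch (assuming the recommended file has been downloaded locally under the filename from the table above), you can use the `safetensors` library to check which tensors were actually quantized:

```python
import torch
from safetensors import safe_open

# Sketch: list the tensors stored in fp8 in the downloaded checkpoint.
# The local path is an assumption; adjust it to wherever you saved the file.
path = "Illustrious-XL-v2.0.unet-fp8_e4m3fn.safetensors"

with safe_open(path, framework="pt", device="cpu") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)
        if tensor.dtype == torch.float8_e4m3fn:
            print(f"{name}: {tensor.dtype}, shape={tuple(tensor.shape)}")
```

This makes it easy to verify that only the modules listed in the table (attention and feed-forward blocks) carry fp8 weights, while the remaining tensors keep their original precision.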