Quantization settings

- `vae` (`first_stage_model`): kept in `torch.float16`, not quantized.
- `text_encoder`, `text_encoder_2` (`conditioner.embedders`): NF4 with bitsandbytes
  - Target layers: `["self_attn", ".mlp."]`
- `diffusion_model`: Int8 with bitsandbytes
  - Target layers: `["attn1", "attn2", ".ff."]`
Model tree for p1atdev/animagine-xl-4.0-bnb-nf4

- Base model: stabilityai/stable-diffusion-xl-base-1.0
- Finetuned: cagliostrolab/animagine-xl-4.0