Converted to fp8 via the following method:

*[screenshot: the fp8 conversion workflow]*

Unfortunately, attaching the Long-CLIP L text encoder directly doesn't work at all.

*[screenshot: the failed attempt in ComfyUI]*

The error text begins with the following:

```
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
WARNING: No VAE weights detected, VAE not initalized.
no CLIP/text encoder weights in checkpoint, the text encoder model will not be loaded.
comfy_extras.chainner_models is deprecated and has been replaced by the spandrel library.
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
clip missing: ['text_projection.weight']
lora key not loaded: lora_te1_text_model_encoder_layers_0_mlp_fc1.alpha
lora key not loaded: lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_down.weight
lora key not loaded: lora_te1_text_model_encoder_layers_0_mlp_fc1.lora_up.weight
lora key not loaded: lora_te1_text_model_encoder_layers_0_mlp_fc2.alpha
```

The error messages continue in the same pattern, so apparently it is not as easy as just adding a CLIP to this checkpoint.
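The failure makes sense given the warnings above: the checkpoint contains only UNet weights, so the text-encoder (`te1`) LoRA keys have nothing to attach to. One way to verify this is to check which component prefixes the checkpoint's keys actually contain; SDXL single-file checkpoints conventionally store the UNet, text encoders, and VAE under fixed prefixes. A small helper as a sketch (the function name and exact prefix list are my assumption, not from any library):

```python
def find_missing_components(keys):
    """Report which SDXL checkpoint components have no tensors present.

    Prefixes follow the usual SDXL single-file layout:
    UNet, text encoders, and VAE each live under their own namespace.
    """
    prefixes = {
        "unet": "model.diffusion_model.",
        "text_encoders": "conditioner.embedders.",
        "vae": "first_stage_model.",
    }
    return [
        name
        for name, prefix in prefixes.items()
        if not any(k.startswith(prefix) for k in keys)
    ]

# A UNet-only checkpoint, like the one described above, reports
# the text encoders and VAE as missing:
unet_only = ["model.diffusion_model.input_blocks.0.0.weight"]
print(find_missing_components(unet_only))  # → ['text_encoders', 'vae']
```

Running this over the real key list (e.g. from `safetensors.safe_open(...).keys()`) would confirm that a separate CLIP and VAE must be loaded alongside this checkpoint rather than patched in via a LoRA.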
