gguf quantized ace-step-v1-3.5b

  • base model from ace-step
  • some key tensors need to stay in f32 for the model to work, so the file size may not shrink much; it still loads much faster than the original checkpoint
  • the umt5_base tokenizer currently has an issue; until it is resolved, use the encoder inside the safetensors checkpoint instead
  • dry run
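The f32 note above can be sketched as a simple tensor-name filter: mixed-precision quantizers keep sensitive tensors at full precision and quantize the rest. The name patterns below are illustrative only, not the actual list used for this model:

```python
# Illustrative sketch of a mixed-precision quantization policy:
# sensitive tensors are left in f32 while the rest are quantized.
# The name patterns here are hypothetical, not the ACE-Step list.
KEEP_F32_PATTERNS = ("norm", "bias", "time_embed")

def target_dtype(tensor_name: str, quant: str = "q4_0") -> str:
    """Return the storage type a tensor would get during quantization."""
    if any(p in tensor_name for p in KEEP_F32_PATTERNS):
        return "f32"   # key tensors stay full precision
    return quant       # everything else is quantized
```

Keeping these tensors in f32 is why the quantized file is larger than the bit-width alone would suggest.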

setup (once)

  • drag the gguf file to ./ComfyUI/models/diffusion_models
  • drag the safetensors file to ./ComfyUI/models/checkpoints
  • drag the pig file to ./ComfyUI/models/vae
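If you prefer scripting the setup over drag-and-drop, the steps above can be sketched as below; the filenames are placeholders for whichever gguf/safetensors/pig files you downloaded:

```python
import shutil
from pathlib import Path

def install(src: str, comfy_root: str, subdir: str) -> Path:
    """Copy a downloaded model file into the matching ComfyUI models folder."""
    dest_dir = Path(comfy_root) / "models" / subdir
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(src, dest_dir / Path(src).name))

# placement from the list above (placeholder filenames):
# install("ace-step-v1-3.5b-q4_0.gguf", "./ComfyUI", "diffusion_models")
# install("ace-step.safetensors", "./ComfyUI", "checkpoints")
# install("pig.safetensors", "./ComfyUI", "vae")
```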

screenshot

extra: fp8/16/32 scaled stable-audio-open-1.0 with gguf quantized t5_base encoder

setup (once)

  • drag the t5-base file to ./ComfyUI/models/text_encoders
  • drag the safetensors file to ./ComfyUI/models/checkpoints
  • drag the pig file to ./ComfyUI/models/vae
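Once the files are in place, you can sanity-check the layout before launching ComfyUI. A minimal sketch; the folder names come from the two setup lists above, and the expected file extensions are assumptions (adjust to your actual files):

```python
from pathlib import Path

# each setup folder, with the file extension assumed to land in it
EXPECTED = {
    "diffusion_models": ".gguf",        # quantized model
    "checkpoints": ".safetensors",      # safetensors checkpoint
    "text_encoders": ".gguf",           # gguf-quantized t5-base (extra setup)
    "vae": ".safetensors",              # pig vae (extension is an assumption)
}

def missing_folders(comfy_root: str) -> list[str]:
    """Return subfolders that do not yet contain a file of the expected type."""
    models = Path(comfy_root) / "models"
    return [
        sub for sub, ext in EXPECTED.items()
        if not any((models / sub).glob(f"*{ext}"))
    ]
```

An empty return value means every folder has at least one file of the expected type.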

screenshot
