gguf quantized ace-step-v1-3.5b
- base model from ace-step
- some key tensors must be kept in f32 for the model to work, so the file size might not decrease that much; however, it loads much faster than the original checkpoint
- the umt5_base tokenizer currently has an issue; until it is resolved, use the encoder inside the safetensors checkpoint instead
- dry running
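As a quick sanity check after downloading a quant, the fixed GGUF header (magic, version, tensor count, metadata-kv count, per the GGUF spec) can be read with nothing but the stdlib; the `demo.gguf` file below is a synthetic stand-in, a real quantized file parses the same way:

```python
import struct

def read_gguf_header(path):
    """Read the fixed GGUF header: magic, version, tensor count, metadata-kv count."""
    with open(path, "rb") as f:
        magic, version = struct.unpack("<4sI", f.read(8))
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
    assert magic == b"GGUF", "not a GGUF file"
    return version, n_tensors, n_kv

# demo on a synthetic header (placeholder values: version 3, 2 tensors, 1 kv pair)
with open("demo.gguf", "wb") as f:
    f.write(struct.pack("<4sIQQ", b"GGUF", 3, 2, 1))
print(read_gguf_header("demo.gguf"))  # (3, 2, 1)
```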
setup (once)
- drag gguf to >
./ComfyUI/models/diffusion_models
- drag safetensors to >
./ComfyUI/models/checkpoints
- drag pig to >
./ComfyUI/models/vae
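The drag-and-drop steps above can also be scripted; a minimal shell sketch assuming a default ComfyUI layout (the globs are placeholders for whichever files you downloaded):

```shell
# one-time setup sketch; filenames below are placeholders, not exact names
COMFY="${COMFY:-./ComfyUI}"
mkdir -p "$COMFY/models/diffusion_models" \
         "$COMFY/models/checkpoints" \
         "$COMFY/models/vae"
# move the gguf quant, the pig vae, and the safetensors checkpoint into place
mv ./ace-step*.gguf   "$COMFY/models/diffusion_models/" 2>/dev/null || true
mv ./pig*.safetensors "$COMFY/models/vae/"              2>/dev/null || true
mv ./*.safetensors    "$COMFY/models/checkpoints/"      2>/dev/null || true
```

The pig (vae) glob is matched before the generic safetensors glob so the checkpoint move does not swallow it.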
extra: fp8/16/32 scaled stable-audio-open-1.0 with gguf quantized t5_base encoder
- base model from stabilityai
- dry running
setup (once)
- drag t5-base to >
./ComfyUI/models/text_encoders
- drag safetensors to >
./ComfyUI/models/checkpoints
- drag pig to >
./ComfyUI/models/vae
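The stable-audio-open setup can be scripted the same way; a sketch with placeholder globs, assuming a default ComfyUI layout:

```shell
# one-time setup sketch for stable-audio-open; filenames are placeholders
COMFY="${COMFY:-./ComfyUI}"
mkdir -p "$COMFY/models/text_encoders" \
         "$COMFY/models/checkpoints" \
         "$COMFY/models/vae"
mv ./t5-base*.gguf    "$COMFY/models/text_encoders/" 2>/dev/null || true
mv ./pig*.safetensors "$COMFY/models/vae/"           2>/dev/null || true
mv ./*.safetensors    "$COMFY/models/checkpoints/"   2>/dev/null || true
```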
reference
- comfyui from comfyanonymous
- pig architecture from connector
- gguf-node (pypi|repo|pack)
quantization types available
- 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit
model tree for calcuis/ace-gguf
- base model: ACE-Step/ACE-Step-v1-3.5B