GGUF/Discussion #3 · opened by Lewdiculous
To be uploaded:
quantization_options = [
"Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
"Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
https://huggingface.co/Lewdiculous/Sinerva_7B-GGUF-IQ-Imatrix
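For anyone curious how a list like this gets used: below is a minimal sketch of looping over the quant types with llama.cpp's quantize binary and an imatrix file. This is not the actual upload script; the binary name, file paths, imatrix filename, and output naming are all placeholders.

# Minimal sketch: produce each listed quant from a full-precision GGUF
# using llama.cpp's quantize tool with an importance matrix.
# Paths, filenames, and the binary name are assumptions, not the real pipeline.
import subprocess

quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]

base_model = "Sinerva_7B-F16.gguf"   # placeholder: unquantized GGUF conversion
imatrix_file = "imatrix.dat"         # placeholder: imatrix computed beforehand

for quant in quantization_options:
    output_file = f"Sinerva_7B-{quant}-imat.gguf"  # placeholder naming scheme
    # Older llama.cpp builds call this binary "quantize" instead of "llama-quantize".
    subprocess.run(
        ["./llama-quantize", "--imatrix", imatrix_file, base_model, output_file, quant],
        check=True,
    )
    print(f"Wrote {output_file}")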
Thanks so much for picking this up! This is a unique model; I hope it finds an audience.
For sure!