Request: release a GGUF Q8_0 quantized version

#1 by makisekurisu-jp

Hi, thank you very much for your amazing work on this model!

Would it be possible for you to release a GGUF-format quantized version at Q8_0 precision? Many of us running on CPU, or using tools like llama.cpp or koboldcpp, would benefit greatly from a high-precision Q8_0 build, both for speed and for quality.
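
For anyone who wants to roll their own in the meantime, here is a rough sketch of the usual llama.cpp conversion flow. The model path and output filenames are placeholders, and the exact script/binary names can differ slightly between llama.cpp versions:

```sh
# Clone and build llama.cpp (includes the llama-quantize tool)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Convert the Hugging Face checkpoint to a GGUF file at f16
# (./path/to/model is a placeholder for this repo's downloaded weights)
python convert_hf_to_gguf.py ./path/to/model --outfile model-f16.gguf --outtype f16

# Quantize the f16 GGUF down to Q8_0
./build/bin/llama-quantize model-f16.gguf model-Q8_0.gguf Q8_0
```

Q8_0 is generally close to the f16 original in output quality while roughly halving the file size, which is why it is a popular choice for CPU inference.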

Thanks again for your contributions; looking forward to your reply!
