Possible to re-quantize Trendyol/Trendyol-Cybersecurity-LLM-Qwen3-32B-Q8_0-GGUF?
#1127
by ykarout - opened
The model seems really useful for cybersecurity use-cases... I couldn't find the bf16/fp16 versions. Is it possible to re-quantize from Q8_0, or is there a big accuracy loss that outweighs the size savings?
It's certainly possible, just not standard. I would guess that the accuracy loss will be limited, i.e. the Q4_K_M will be worse when done from Q8_0, but probably not in a noticeable way, since the Q8_0 is already so good.
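For anyone who wants to try it locally in the meantime, here is a minimal sketch using llama.cpp's llama-quantize (the binary path and file names are placeholders for your setup; the --allow-requantize flag is needed because the source file is already quantized):

```python
# Sketch: re-quantize an existing Q8_0 GGUF down to Q4_K_M via llama.cpp's
# llama-quantize tool. Paths and file names below are assumptions; adjust
# them to wherever you built llama.cpp and stored the model.
import subprocess

src = "Trendyol-Cybersecurity-LLM-Qwen3-32B-Q8_0.gguf"    # existing Q8_0 quant
dst = "Trendyol-Cybersecurity-LLM-Qwen3-32B-Q4_K_M.gguf"  # re-quantized output

subprocess.run(
    [
        "./llama-quantize",    # binary built from llama.cpp
        "--allow-requantize",  # permit quantizing an already-quantized file
        src,
        dst,
        "Q4_K_M",              # target quantization type
    ],
    check=True,
)
```

Expect the result to be slightly worse than a Q4_K_M made from the original bf16/fp16 weights, for the reason above.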
I will give it a try, once I find time.
Well, break a leg.
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Trendyol-Cybersecurity-LLM-Qwen3-32B-Q8_0-GGUF-GGUF for quants to appear.
you rock buddy