Model Size
#1 opened by nikhil-arm
Hello,
Thanks for uploading the GGUF model and for your contribution.
Quick question: should the combined F16 model size be 216 GB?
Oh, thanks for noticing. It's a Q8_0 and I mislabeled it as F16.
I changed the file name to Q8_0, as I don't have a machine available to requant it to F16 (and also lm-studio + bartowski uploaded their quants anyway).
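For anyone double-checking the size math, here's a minimal back-of-the-envelope sketch (the 200B parameter count below is purely a placeholder, not this model's actual count): an F16 GGUF stores about 2 bytes per weight, while Q8_0 in llama.cpp packs 32 int8 weights plus one F16 scale per block, roughly 8.5 bits per weight, so a ~216 GB file is consistent with Q8_0 rather than F16 at that scale.

```python
# Rough GGUF size estimate per quantization type.
# This ignores metadata and any tensors kept at higher precision,
# so treat the numbers as ballpark figures only.
BITS_PER_WEIGHT = {
    "F16": 16.0,   # 2 bytes per weight
    "Q8_0": 8.5,   # llama.cpp Q8_0: 32 int8 weights + one F16 scale per block
}

def estimate_size_gb(n_params: float, quant: str) -> float:
    """Approximate file size in GB for a given parameter count and quant type."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

n_params = 200e9  # placeholder parameter count, not this model's real value
for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimate_size_gb(n_params, quant):.0f} GB")
```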
ngxson changed discussion status to closed