could you upload with the weights in bfloat16?
#3 opened by seba
Hi, I tried converting it myself, but the 30B model is too much for my hardware. Could you upload a version with the weights in bfloat16 so it is faster to download and uses less disk space?
Congrats on the work!
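For reference, the conversion being requested is just a dtype cast of the checkpoint tensors. A minimal sketch of the idea (this is not the repo's actual export script; the helper name and toy weights below are illustrative) using plain PyTorch:

```python
import torch

# Cast every floating-point tensor in a state dict to bfloat16.
# bfloat16 uses 2 bytes per element vs 4 for float32, halving download
# size and disk usage; non-float tensors (e.g. step counters) are kept as-is.
def to_bfloat16(state_dict):
    return {
        name: t.to(torch.bfloat16) if t.is_floating_point() else t
        for name, t in state_dict.items()
    }

# Toy stand-in for real checkpoint weights.
weights = {
    "layer.weight": torch.randn(8, 8),          # float32 -> bfloat16
    "layer.step": torch.tensor(3),              # integer, left unchanged
}
converted = to_bfloat16(weights)
torch.save(converted, "weights-bf16.pt")
```

With the `transformers` library, the same effect comes from loading with `torch_dtype=torch.bfloat16` and calling `save_pretrained` on the result, assuming the machine has enough RAM to hold the full model once.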
I found there are some quantized versions of Qwen3-30B-A3B-MegaScience:
mradermacher/Qwen3-30B-A3B-MegaScience-GGUF
mradermacher/Qwen3-30B-A3B-MegaScience-i1-GGUF