GPTQ Int8 Quants Please
#5 opened 28 days ago by rjmehta
Can you make a 2.25bpw quantization for this model?
#4 opened 2 months ago by xldistance
The reason for the high performance may be an error in evaluation
4
#3 opened 4 months ago by ChuckMcSneed
What is your "continuous finetuning"?
7
#2 opened 4 months ago by MaziyarPanahi