Attention and shared experts: 4-bit

All routed experts: 1-bit

Quantized by https://github.com/tflsxyy/BiMoE
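The `w1g16` suffix suggests 1-bit weights with a group size of 16. As a minimal sketch of what such group-wise binarization can look like (assuming BWN-style sign binarization with a per-group mean-absolute-value scale; the actual BiMoE method may differ):

```python
import numpy as np

def binarize_w1g16(w, group_size=16):
    """Sketch of 1-bit group-wise quantization (w1g16): each group of
    16 weights is stored as its sign (1 bit each) plus one per-group
    scale, here the group's mean absolute value (BWN-style)."""
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).mean(axis=1, keepdims=True)  # per-group scale
    signs = np.where(groups >= 0, 1.0, -1.0)            # 1-bit codes
    return (signs * scale).reshape(w.shape)             # dequantized weights

w = np.random.randn(64).astype(np.float32)
wq = binarize_w1g16(w)
print(wq.shape)  # (64,)
```

Within each group of 16, the dequantized weights all share one magnitude and differ only in sign, which is what lets each weight be stored in a single bit.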

Format: Safetensors

Model size: 1.98B params

Tensor types: F16, I32

Model: tflsxyy/DeepSeek-V2-Lite-BiMoE-w1g16