Update README.md

# DeepSeek-R1-0528-GPTQ-Int4-Int8Mix-Compact

Base model: [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)
This repository delivers an Int4 + selectively-Int8 GPTQ `DeepSeek-R1-0528` model: only layers that are highly sensitive to quantization remain in Int8, while the rest stay Int4, preserving generation quality with minimal file-size overhead.
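The selective-Int8 idea can be sketched as follows. This is a toy illustration only: the layer names, sizes, and error metric are assumptions for demonstration, not the repository's actual GPTQ pipeline.

```python
import numpy as np

def quant_error(w, bits):
    # Mean absolute error of symmetric round-to-nearest quantization.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax) * scale
    return np.abs(w - q).mean()

rng = np.random.default_rng(0)
# Toy "layers": one with heavy outliers (quantization-sensitive), two well-behaved.
layers = {
    "mlp.down_proj": rng.normal(size=(64, 64)) * np.where(rng.random((64, 64)) < 0.01, 50.0, 1.0),
    "mlp.up_proj": rng.normal(size=(64, 64)),
    "mlp.gate_proj": rng.normal(size=(64, 64)),
}

# Rank layers by Int4 quantization error; keep the worst offenders in Int8.
errors = {name: quant_error(w, 4) for name, w in layers.items()}
budget = 1  # number of layers allowed to stay Int8
int8_layers = set(sorted(errors, key=errors.get, reverse=True)[:budget])
plan = {name: (8 if name in int8_layers else 4) for name in layers}
print(plan)
```

The outlier-heavy layer dominates the Int4 error ranking and is the one kept at 8 bits, which is the intuition behind the Compact/Lite split below.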
Preliminary trials show that converting the entire model to pure Int4 (AWQ/GPTQ) under the quantization layout used in vLLM's current DeepSeek-R1 implementation degrades inference accuracy and can produce faulty outputs. Layer-wise fine-grained quantization substantially mitigates this issue.
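A quick numeric illustration of why pure Int4 is riskier than Int8, assuming simple round-to-nearest quantization (GPTQ's actual procedure is more sophisticated, so treat this only as an intuition pump):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(256, 256))  # stand-in for one weight matrix

def rtn_quant(w, bits):
    # Symmetric round-to-nearest quantization followed by dequantization.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

err4 = np.abs(w - rtn_quant(w, 4)).mean()
err8 = np.abs(w - rtn_quant(w, 8)).mean()
print(f"mean abs error  Int4: {err4:.4f}  Int8: {err8:.4f}")
```

With only 16 representable levels, Int4's reconstruction error is roughly an order of magnitude larger than Int8's, which is why upgrading the most sensitive layers to Int8 recovers most of the quality.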
Temporary patch:

- vLLM == 0.9.0 does not yet natively support per-layer quantization for MoE modules.
- We added `get_moe_quant_method` to `gptq_marlin.py` as an interim fix.
- Until the upstream PR is merged, please replace the original file with the one provided in this repo.
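The interim fix boils down to routing each MoE layer to its own quantization parameters. The sketch below shows only the dispatch idea; the function name comes from the patch above, but the signature, config format, and layer names here are hypothetical, not vLLM's actual code in `gptq_marlin.py`.

```python
# Hypothetical sketch of per-layer quantization dispatch for MoE modules.
# The real get_moe_quant_method lives in vLLM's gptq_marlin.py; the
# signature and returned config format below are illustrative only.

INT8_OVERRIDES = {"model.layers.3.mlp.experts"}  # hypothetical sensitive layers

def get_moe_quant_method(layer_name: str) -> dict:
    """Return quantization parameters for one MoE layer by name."""
    if layer_name in INT8_OVERRIDES:
        return {"weight_bits": 8, "group_size": 128}
    return {"weight_bits": 4, "group_size": 128}

print(get_moe_quant_method("model.layers.3.mlp.experts"))
print(get_moe_quant_method("model.layers.4.mlp.experts"))
```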

### Variant Overview
| Variant     | Characteristics                                                         | File Size | Recommended Scenario                                     |
|-------------|-------------------------------------------------------------------------|-----------|----------------------------------------------------------|
| **Compact** | More Int8 layers, higher fidelity                                       | 414 GB    | Ample GPU memory & strict quality needs (e.g., 8 × A100) |
| **Lite**    | Only the most critical layers upgraded to Int8; size close to pure Int4 | 355 GB    | Resource-constrained, lightweight server deployments     |

Choose the variant that best matches your hardware and quality requirements.
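As a rough, weights-only capacity check (an assumption-laden sketch: the hypothetical `fits` helper ignores KV cache, activations, and framework overhead, all of which need real headroom on top of the checkpoint size):

```python
def fits(weight_gb: float, num_gpus: int, gpu_gb: float, headroom: float = 0.9) -> bool:
    # Weights-only check: does the checkpoint fit in aggregate GPU memory,
    # reserving ~10% of each GPU? Real deployments need more slack.
    return weight_gb <= num_gpus * gpu_gb * headroom

print(fits(414, 8, 80))  # Compact on 8 x A100-80GB
print(fits(355, 8, 80))  # Lite on 8 x A100-80GB
```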

### 【Model Update Date】
```