---
license: gpl-3.0
base_model:
- Qwen/Qwen3-0.6B
---
# Qwen3-0.6B-AWQ
```yaml
zero_point: true
bits: 4
version: GEMM
dataset: wikitext + Orion-zhen/gsm8k-r1-qwen-32b
num_examples: 256
```
The very **first** Qwen3-0.6B-AWQ on HuggingFace. I'm not sure if you really need a quantization of such a small model.
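
For reference, a minimal AutoAWQ sketch that would produce a checkpoint matching the config above. The group size, the calibration-set preparation, and the column name used for the gsm8k-r1 dataset are assumptions for illustration, not details taken from this card:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
from datasets import load_dataset

model_path = "Qwen/Qwen3-0.6B"
quant_path = "Qwen3-0.6B-AWQ"

# Mirrors the config above; q_group_size=128 is AutoAWQ's default and is not stated in the card.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Roughly 256 calibration examples mixed from wikitext and the gsm8k-r1 set.
wiki = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
gsm = load_dataset("Orion-zhen/gsm8k-r1-qwen-32b", split="train")
calib_data = [t for t in wiki["text"] if len(t) > 256][:128]
# "text" is a placeholder column name; adjust to the dataset's actual schema.
calib_data += [str(row["text"]) for row in gsm.select(range(128))]

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and save the quantized checkpoint.
model.quantize(tokenizer, quant_config=quant_config, calib_data=calib_data)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The saved folder can then be loaded with `AutoModelForCausalLM.from_pretrained(quant_path)` in transformers (with `autoawq` installed) or served with vLLM.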