---
license: gpl-3.0
base_model:
- Qwen/Qwen3-14B
---

# Qwen3-14B-AWQ
```yaml
zero_point: true
bits: 4
version: GEMM
dataset: wikitext + Orion-zhen/gsm8k-r1-qwen-32b
num_examples: 256
```
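These settings map onto AutoAWQ's quant config. As a rough, untested sketch of how the quantization could be reproduced (only the bit width, zero-point, GEMM kernel, calibration mix, and sample count come from the config above; the group size, paths, wikitext config name, and dataset column handling are assumptions):

```python
# Sketch only: mirrors the config above with AutoAWQ; paths, group size,
# and dataset column handling are assumptions, not taken from this card.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
from datasets import load_dataset

model_path = "Qwen/Qwen3-14B"
quant_path = "Qwen3-14B-AWQ"  # placeholder output directory

quant_config = {
    "zero_point": True,   # zero_point: true
    "w_bit": 4,           # bits: 4
    "version": "GEMM",    # GEMM kernel
    "q_group_size": 128,  # not stated in the card; 128 is the AutoAWQ default
}

# Calibration mix: wikitext plus Orion-zhen/gsm8k-r1-qwen-32b, 256 examples total.
calib = []
# The card only says "wikitext"; the wikitext-2-raw-v1 config is an assumption.
wiki = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
calib += [t for t in wiki["text"] if t.strip()][:128]
gsm = load_dataset("Orion-zhen/gsm8k-r1-qwen-32b", split="train")
# Column names of the gsm8k-r1 set are not documented here, so just join all fields.
calib += [" ".join(str(v) for v in row.values()) for row in gsm.select(range(128))]

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.quantize(tokenizer, quant_config=quant_config, calib_data=calib)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```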
The very **first** Qwen3-14B-AWQ quantization on HuggingFace. 🥳
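For completeness, a minimal loading sketch (the repo id below is a placeholder; Transformers can load AWQ checkpoints directly when `autoawq` is installed):

```python
# Minimal inference sketch; replace the placeholder repo id with the actual one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen3-14B-AWQ"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```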