Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF
This repo contains GGUF quantizations of Qwen/Qwen2.5-14B, Qwen/Qwen2.5-14B-Instruct, and Qwen/Qwen2.5-Coder-14B-Instruct models at q6_K, using q8_0 for output and embedding tensors.
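The repo name reflects the quantization recipe: all weights at q6_K, with the output and token-embedding tensors kept at the higher-precision q8_0. A mix like this can be produced with llama.cpp's `llama-quantize` tool; the sketch below is illustrative only, and the file names are hypothetical placeholders, not files from this repo.

```shell
# Hypothetical example of producing a q6_K quant with q8_0 output/embedding
# tensors using llama.cpp's llama-quantize. The two --*-type flags pin those
# tensors to q8_0 while the final positional argument (q6_K) sets the
# quantization type for the remaining weights.
llama-quantize \
  --output-tensor-type q8_0 \
  --token-embedding-type q8_0 \
  Qwen2.5-14B-Instruct-f16.gguf \
  Qwen2.5-14B-Instruct-q8_0-q6_K.gguf \
  q6_K
```

Keeping the embedding and output tensors at q8_0 costs a little extra disk and VRAM relative to a plain q6_K quant, but those tensors are comparatively sensitive to quantization error, so this is a common trade-off in mixed-precision GGUF builds.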
Downloads last month: 55
Hardware compatibility: 6-bit
Inference Providers
This model isn't deployed by any inference provider; the repo has no library tag, so HF Inference deployability cannot be determined.
Model tree for ddh0/Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF
Base model: Qwen/Qwen2.5-14B