# Wolf-Rayet-2B-Prime3-GGUF

Wolf-Rayet-2B-Prime3 is a compact, coding-optimized language model built on the Qwen3 1.7B architecture (1.72B parameters), fine-tuned for high-accuracy code generation, debugging, and technical reasoning. It offers a strong balance between performance and deployability, making it well suited to developers, educators, and engineers working in resource-constrained or latency-sensitive environments.

## Model Files

| File Name | Precision | Size |
|-----------|-----------|------|
| Wolf-Rayet-2B-Prime3.BF16.gguf | BF16 | 3.45 GB |
| Wolf-Rayet-2B-Prime3.F16.gguf | FP16 | 3.45 GB |
| Wolf-Rayet-2B-Prime3.F32.gguf | FP32 | 6.89 GB |
| Wolf-Rayet-2B-Prime3.Q2_K.gguf | Q2_K | 778 MB |
| Wolf-Rayet-2B-Prime3.Q3_K_M.gguf | Q3_K_M | 940 MB |
| Wolf-Rayet-2B-Prime3.Q4_K_M.gguf | Q4_K_M | 1.11 GB |
| Wolf-Rayet-2B-Prime3.Q5_K_M.gguf | Q5_K_M | 1.26 GB |
| Wolf-Rayet-2B-Prime3.Q8_0.gguf | Q8_0 | 1.83 GB |
| config.json | Config file | 31 B |
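
Any GGUF-compatible runtime can load these files. Below is a minimal sketch assuming llama-cpp-python is installed (`pip install llama-cpp-python`) and the Q4_K_M file from the table has already been downloaded locally; the prompt and parameter values are illustrative, not prescriptive.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python and generate code.
from llama_cpp import Llama

llm = Llama(
    model_path="Wolf-Rayet-2B-Prime3.Q4_K_M.gguf",  # any quant from the table works
    n_ctx=4096,     # context window; raise it if your RAM allows
    n_threads=8,    # tune to your CPU core count
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
    max_tokens=512,
    temperature=0.2,  # low temperature tends to suit code generation
)
print(response["choices"][0]["message"]["content"])
```

Q4_K_M is used here because it is a common default with a good size/quality tradeoff among the quants above; swap in Q8_0 or BF16 if you have the memory to spare.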

## Quants Usage

Quants are listed by size, which does not necessarily reflect quality; IQ-quants are often preferable to similarly sized non-IQ quants.

For reference, ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better), which is useful when choosing between them.
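
To fetch a single quant without cloning the whole repository, here is a minimal sketch assuming the huggingface_hub package is installed (`pip install huggingface_hub`):

```python
# Minimal sketch: download one quant file from the Hub.
# hf_hub_download caches the file locally and returns its path.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="prithivMLmods/Wolf-Rayet-2B-Prime3-GGUF",
    filename="Wolf-Rayet-2B-Prime3.Q4_K_M.gguf",  # pick any file from the table
)
print(model_path)  # pass this path to a GGUF runtime, e.g. the sketch above
```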


## Model Tree

- Base model: Qwen/Qwen3-1.7B
- Finetuned from base: Wolf-Rayet-2B-Prime3
- Quantized: this model (prithivMLmods/Wolf-Rayet-2B-Prime3-GGUF)
