# Cerium-Qwen3-R1-Dev-GGUF

Cerium-Qwen3-R1-Dev is a high-efficiency, multi-domain model fine-tuned from Qwen3-0.6B on the rStar-Coder dataset, enhanced with code expert clusters, an extended open code reasoning dataset, and DeepSeek R1 coding sample traces. The model blends symbolic precision, scientific logic, and structured output fluency, making it well suited to developers, educators, and researchers who need advanced reasoning under constrained compute. This repository provides GGUF quantizations of the model.

## Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| Cerium-Qwen3-R1-Dev.BF16.gguf | BF16 | 1.2 GB |
| Cerium-Qwen3-R1-Dev.F16.gguf | F16 | 1.2 GB |
| Cerium-Qwen3-R1-Dev.F32.gguf | F32 | 2.39 GB |
| Cerium-Qwen3-R1-Dev.Q2_K.gguf | Q2_K | 296 MB |
| Cerium-Qwen3-R1-Dev.Q3_K_L.gguf | Q3_K_L | 368 MB |
| Cerium-Qwen3-R1-Dev.Q3_K_M.gguf | Q3_K_M | 347 MB |
| Cerium-Qwen3-R1-Dev.Q3_K_S.gguf | Q3_K_S | 323 MB |
| Cerium-Qwen3-R1-Dev.Q4_K_M.gguf | Q4_K_M | 397 MB |
| Cerium-Qwen3-R1-Dev.Q4_K_S.gguf | Q4_K_S | 383 MB |
| Cerium-Qwen3-R1-Dev.Q5_K_M.gguf | Q5_K_M | 444 MB |
| Cerium-Qwen3-R1-Dev.Q5_K_S.gguf | Q5_K_S | 437 MB |
| Cerium-Qwen3-R1-Dev.Q6_K.gguf | Q6_K | 495 MB |
| Cerium-Qwen3-R1-Dev.Q8_0.gguf | Q8_0 | 639 MB |
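As a rough sanity check on the table above, the effective bits per weight of each quant can be estimated from its file size and the reported 596M parameter count. The estimate is approximate: GGUF files also carry metadata, and some tensors are stored at higher precision than the nominal quant type.

```python
# Approximate bits per weight for selected quants of Cerium-Qwen3-R1-Dev.
# File sizes are taken from the table above; the parameter count (596M)
# is the reported model size. These are rough estimates, since GGUF files
# include metadata and keep some tensors at higher precision.

PARAMS = 596_000_000  # reported model size: 596M params

# quant name -> file size in megabytes (1 MB = 10**6 bytes)
sizes_mb = {
    "Q2_K": 296,
    "Q4_K_M": 397,
    "Q6_K": 495,
    "Q8_0": 639,
    "F16": 1200,
}

def bits_per_weight(size_mb: float, params: int = PARAMS) -> float:
    """Estimate average bits per weight from the on-disk file size."""
    return size_mb * 1e6 * 8 / params

for name, mb in sizes_mb.items():
    print(f"{name}: ~{bits_per_weight(mb):.1f} bits/weight")
```

Note that the estimates come out somewhat above the nominal bit widths (Q4_K_M lands around 5.3 bits/weight, for example), which is expected for the K-quant mixed-precision schemes.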

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

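To run one of these quants locally, a minimal sketch with llama.cpp looks like the following. This assumes `llama-cli` (from llama.cpp) and the `huggingface-cli` tool are installed; the Q4_K_M file is an arbitrary choice as a size/quality middle ground.

```shell
# Download a single quant from the repository
huggingface-cli download prithivMLmods/Cerium-Qwen3-R1-Dev-GGUF \
  Cerium-Qwen3-R1-Dev.Q4_K_M.gguf --local-dir .

# Start an interactive chat session with llama.cpp
llama-cli -m Cerium-Qwen3-R1-Dev.Q4_K_M.gguf -cnv \
  -p "You are a helpful coding assistant."
```

Smaller quants (Q2_K, Q3_K_S) trade output quality for memory; the table above gives the on-disk sizes to budget against.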

## Model Details

- Format: GGUF
- Model size: 596M params
- Architecture: qwen3

## Model Tree

- Base model: Qwen/Qwen3-0.6B
- Fine-tuned, then quantized: this model (prithivMLmods/Cerium-Qwen3-R1-Dev-GGUF)
