Capella-Qwen3-DS-V3.1-4B-GGUF
Capella-Qwen3-DS-V3.1-4B is a reasoning-focused model fine-tuned from Qwen3-4B on 10K synthetic reasoning traces generated with DeepSeek V3.1. It specializes in random-event simulation, logical problem analysis, and structured reasoning tasks. The model blends symbolic precision, probabilistic logic, and fluent structured output, making it well suited for researchers, educators, and developers working on uncertainty modeling and event-driven analysis.
Model Files
| File Name | Quant Type | File Size |
|---|---|---|
| Capella-Qwen3-DS-V3.1-4B.BF16.gguf | BF16 | 8.05 GB |
| Capella-Qwen3-DS-V3.1-4B.F16.gguf | F16 | 8.05 GB |
| Capella-Qwen3-DS-V3.1-4B.F32.gguf | F32 | 16.1 GB |
| Capella-Qwen3-DS-V3.1-4B.Q2_K.gguf | Q2_K | 1.67 GB |
| Capella-Qwen3-DS-V3.1-4B.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| Capella-Qwen3-DS-V3.1-4B.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| Capella-Qwen3-DS-V3.1-4B.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| Capella-Qwen3-DS-V3.1-4B.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| Capella-Qwen3-DS-V3.1-4B.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| Capella-Qwen3-DS-V3.1-4B.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| Capella-Qwen3-DS-V3.1-4B.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| Capella-Qwen3-DS-V3.1-4B.Q6_K.gguf | Q6_K | 3.31 GB |
| Capella-Qwen3-DS-V3.1-4B.Q8_0.gguf | Q8_0 | 4.28 GB |
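As a quick-start sketch (not part of the original card), the snippet below downloads one of the quantized files listed above with `huggingface_hub` and runs it with `llama-cpp-python`. The chosen quant (Q4_K_M), context size, GPU-layer setting, and prompt are illustrative assumptions; adjust them for your hardware and use case.

```python
# Sketch: fetch a GGUF quant from the Hub and run it with llama-cpp-python.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant from the table above (any other quant file works the same way).
model_path = hf_hub_download(
    repo_id="prithivMLmods/Capella-Qwen3-DS-V3.1-4B-GGUF",
    filename="Capella-Qwen3-DS-V3.1-4B.Q4_K_M.gguf",
)

# Load the GGUF model; n_ctx and n_gpu_layers are example settings, not recommendations.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Ask a probabilistic-reasoning question in chat format.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Two fair dice are rolled. What is the probability the sum is at least 10? Reason step by step.",
        }
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```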
Quants Usage
The files above are sorted by size, not necessarily by quality; IQ-quants are often preferable to similar-sized non-IQ quants. ikawrakow's graph comparing some lower-quality quant types (lower is better) is a handy reference when choosing a quant.
Model tree for prithivMLmods/Capella-Qwen3-DS-V3.1-4B-GGUF
Base model: Qwen/Qwen3-4B-Base