QVikhr-3-4B-it-F32-GGUF

QVikhr-3-4B-Instruction is an instruction-tuned model based on Qwen/Qwen3-4B, trained on the Russian-language dataset GrandMaster2. It is designed for high-efficiency text processing in Russian and English, delivering precise responses and fast task execution.

Model Files

| File Name | Size | Type | Description |
|---|---|---|---|
| QVikhr-3-4B-Instruction.Q2_K.gguf | 1.67 GB | Model | Q2_K quantized model (smallest) |
| QVikhr-3-4B-Instruction.Q3_K_S.gguf | 1.89 GB | Model | Q3_K_S quantized model |
| QVikhr-3-4B-Instruction.Q3_K_M.gguf | 2.08 GB | Model | Q3_K_M quantized model |
| QVikhr-3-4B-Instruction.Q3_K_L.gguf | 2.24 GB | Model | Q3_K_L quantized model |
| QVikhr-3-4B-Instruction.Q4_K_S.gguf | 2.38 GB | Model | Q4_K_S quantized model |
| QVikhr-3-4B-Instruction.Q4_K_M.gguf | 2.5 GB | Model | Q4_K_M quantized model |
| QVikhr-3-4B-Instruction.Q5_K_S.gguf | 2.82 GB | Model | Q5_K_S quantized model |
| QVikhr-3-4B-Instruction.Q5_K_M.gguf | 2.89 GB | Model | Q5_K_M quantized model |
| QVikhr-3-4B-Instruction.Q6_K.gguf | 3.31 GB | Model | Q6_K quantized model |
| QVikhr-3-4B-Instruction.Q8_0.gguf | 4.28 GB | Model | Q8_0 quantized model |
| QVikhr-3-4B-Instruction.BF16.gguf | 8.05 GB | Model | BF16 precision model |
| QVikhr-3-4B-Instruction.F16.gguf | 8.05 GB | Model | F16 precision model |
| QVikhr-3-4B-Instruction.F32.gguf | 16.1 GB | Model | F32 full-precision model (largest) |
| .gitattributes | 2.58 kB | Config | Git LFS configuration |
| config.json | 31 Bytes | Config | Model configuration |
| README.md | 149 Bytes | Documentation | Repository documentation |
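
As a minimal sketch of how these files are typically consumed, the example below downloads one quant with `huggingface_hub` and runs a single chat turn through `llama-cpp-python`. The package choice, the quant picked (Q4_K_M), and parameters such as `n_ctx` are assumptions for illustration, not requirements of this repository; any other `.gguf` file from the table can be substituted via `filename`.

```python
# Sketch: download one quant from this repo and chat with it.
# llama-cpp-python and the chosen parameters are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quant file (Q4_K_M is a common size/quality trade-off).
model_path = hf_hub_download(
    repo_id="prithivMLmods/QVikhr-3-4B-it-F32-GGUF",
    filename="QVikhr-3-4B-Instruction.Q4_K_M.gguf",
)

# Load the GGUF model; n_ctx and n_gpu_layers are example defaults.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# A Russian prompt, since the model is tuned on Russian-language data.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Объясни в двух предложениях, что такое квантование модели."}]
)
print(out["choices"][0]["message"]["content"])
```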

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

[Graph by ikawrakow comparing some lower-quality quant types (lower is better)]
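
Since the table above lists exact file sizes and the model has 4.02B parameters, a quant's effective precision can be sanity-checked with back-of-the-envelope arithmetic. The plain-Python sketch below converts file size into approximate bits per weight; GGUF files also carry metadata and keep some tensors at higher precision, so treat the results as rough estimates, not exact figures.

```python
# Rough estimate of effective bits per weight for each quant,
# using file sizes from the table above and the 4.02B parameter count.
# GGUF metadata and mixed-precision tensors make these approximate.
PARAMS = 4.02e9

sizes_gb = {
    "Q2_K": 1.67, "Q3_K_S": 1.89, "Q3_K_M": 2.08, "Q3_K_L": 2.24,
    "Q4_K_S": 2.38, "Q4_K_M": 2.50, "Q5_K_S": 2.82, "Q5_K_M": 2.89,
    "Q6_K": 3.31, "Q8_0": 4.28, "BF16": 8.05, "F16": 8.05, "F32": 16.1,
}

for name, gb in sizes_gb.items():
    bits = gb * 1e9 * 8 / PARAMS  # bytes -> bits, divided by weight count
    print(f"{name:8s} ~{bits:.1f} bits/weight")
```

For example, the Q4_K_M file (2.5 GB) works out to roughly 5 bits per weight, consistent with K-quants storing some blocks at higher precision than their nominal 4 bits.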

Model Details

- Format: GGUF
- Model size: 4.02B params
- Architecture: qwen3


Model tree for prithivMLmods/QVikhr-3-4B-it-F32-GGUF

- Base model: Qwen/Qwen3-4B-Base
- Finetuned: Qwen/Qwen3-4B
- Quantized (7): this model