imatrix GGUF quants of Apriel-Nemotron-15b-Thinker

Execution tips from my own experience:

  • don't quantize the context (KV) cache — keep it at its default f16 precision
  • use top_p 0.9, top_k 20, temp 0.6, min_p 0.05 (see the sketch below)
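
For reference, here is a minimal sketch of applying these settings with llama-cpp-python. The library choice, model filename, context size, and prompt are all assumptions for illustration — any GGUF runner exposing the same sampler knobs works the same way:

```python
# Minimal sketch, assuming llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Apriel-Nemotron-15b-Thinker.Q4_K_M.gguf",  # hypothetical local quant file
    n_ctx=4096,  # context window; size to your RAM/VRAM
    # Leave type_k / type_v at their f16 defaults: per the tip above,
    # the context (KV) cache should not be quantized.
)

out = llm(
    "Explain the trade-offs between 4-bit and 8-bit quantization.",  # placeholder prompt
    max_tokens=512,
    temperature=0.6,  # sampler settings recommended above
    top_p=0.9,
    top_k=20,
    min_p=0.05,
)
print(out["choices"][0]["text"])
```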

  • Format: GGUF
  • Model size: 15B params
  • Architecture: llama
  • Available quants: 4-bit, 5-bit, 8-bit
