These are non-imatrix (static) quants. The original FP16 model is available here; the weighted/imatrix (i1) GGUF quants are here.

Even at 2-bit, the Q2_K quant holds up well, performing roughly on par with 7B models.
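
If you want to try the Q2_K quant locally, a minimal sketch with llama-cpp-python (one common way to run GGUF files) might look like the following. The filename and parameters are assumptions, so check this repo's file list for the exact name:

```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q2_K file
# has been downloaded locally. The filename below is an assumption --
# check this repo's file list for the exact name.
from llama_cpp import Llama

llm = Llama(
    model_path="DXP-Zero-V1.2-24b-Small-Instruct.Q2_K.gguf",  # assumed filename
    n_ctx=4096,       # context window; lower it if you run out of memory
    n_gpu_layers=-1,  # offload all layers to GPU; set to 0 for CPU-only
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```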

Format: GGUF
Model size: 23.6B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
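
To pull a single quant file from this repo, a sketch using huggingface_hub should work; the exact GGUF filename is an assumption, so check the repo's file list:

```python
# Minimal sketch using huggingface_hub to download one quant file.
# The filename below is an assumption -- check this repo's file list
# for the exact name of the quant you want.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="h34v7/DXP-Zero-V1.2-24b-Small-Instruct-GGUF",
    filename="DXP-Zero-V1.2-24b-Small-Instruct.Q4_K_M.gguf",  # assumed name
)
print(path)  # local cache path of the downloaded GGUF
```

The returned path points into the local Hugging Face cache and can be passed straight to the `model_path` argument in the example above.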

