Q4_K_M Quantization of Instinct, Continue's Open Next-Edit Model

This is a Q4_K_M quantized GGUF version of Instinct, Continue's open next-edit model, packaged for efficient local inference with llama.cpp-compatible runtimes.
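A minimal sketch of running the model locally, assuming llama.cpp is installed and the `huggingface_hub` CLI is available; the exact `.gguf` filename is a placeholder, so check the repository's file list:

```shell
# Download the quantized weights from the Hub
huggingface-cli download continuedev/instinct-GGUF --local-dir ./instinct-gguf

# Run with llama.cpp's CLI; replace <model-file> with the actual GGUF filename.
# -ngl 99 offloads all layers to the GPU when one is available.
llama-cli -m ./instinct-gguf/<model-file>.gguf -p "your prompt here" -ngl 99
```

Any GGUF-capable runtime (llama.cpp, Ollama, LM Studio, llama-cpp-python) should load the file the same way.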

Format: GGUF
Model size: 7.62B params
Architecture: qwen2
Quantization: 4-bit (Q4_K_M)
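A rough check on what the 4-bit quantization means for disk and memory footprint; the ~4.85 bits-per-weight figure is an assumption based on Q4_K_M's mixed 4/6-bit K-quant layout, so actual file sizes vary slightly:

```python
# Estimate the on-disk size of a Q4_K_M GGUF from the parameter count.
# Assumption: Q4_K_M averages roughly 4.85 bits per weight.
def gguf_size_gb(n_params: float, bits_per_weight: float = 4.85) -> float:
    """Approximate GGUF file size in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# 7.62B params at ~4.85 bits/weight comes out to roughly 4.6 GB,
# versus ~15 GB for the same model in 16-bit precision.
print(round(gguf_size_gb(7.62e9), 2))
```

Add a gigabyte or two of headroom for the KV cache and runtime overhead when budgeting RAM or VRAM.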


Model tree for continuedev/instinct-GGUF

Base model: Qwen/Qwen2.5-7B
This model is a quantized derivative of that base.