Quantized and converted GGUF file from https://huggingface.co/kxdw2580/Qwen2.5-3B-Instruct-Catgirl-v2

The parameter count is small, so this is not a particularly capable model, but its replies are entertaining. In that sense it is not a useless model.

Since the parameter count is small, FP16 precision is recommended.
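A minimal llama.cpp invocation might look like the sketch below. The filename is an assumption; substitute whichever quant you actually downloaded.

```shell
# Hypothetical filename -- replace with the .gguf you downloaded (FP16 recommended).
# -cnv starts interactive chat mode; -ngl 99 offloads all layers to the GPU if one is available.
./llama-cli -m Qwen2.5-3B-Instruct-Catgirl-v2-F16.gguf -cnv -ngl 99
```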

GGUF
Model size: 3.09B params
Architecture: qwen2

Available quantizations: 6-bit, 8-bit, 16-bit


Model tree for Misaka27260/Qwen2.5-3B-Instruct-Catgirl-v2-GGUF

- Base model: Qwen/Qwen2.5-3B
- Quantized (3): this model