Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1-gguf

Model Description

This is the GGUF quantisation of Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1.
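The GGUF files can also be used directly with llama.cpp. A minimal sketch, assuming llama.cpp and the Hugging Face CLI are installed; the .gguf filename below is illustrative, so check the repository's file list for the exact name of the quantisation you want:

# download one quantisation from the Hub (filename is illustrative)
huggingface-cli download Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1-gguf Josiefied-Qwen3-1.7B-abliterated-v1-Q4_0.gguf --local-dir .

# run it with llama.cpp's CLI
llama-cli -m Josiefied-Qwen3-1.7B-abliterated-v1-Q4_0.gguf -p "Why is the sky blue?"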

Ollama

ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-q4_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-q5_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-q6_k
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-q8_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:1.7b-fp16
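Once one of the tags above is served locally, the model can also be called over Ollama's REST API; a minimal sketch assuming the default endpoint at http://localhost:11434, shown with the base 1.7b tag:

curl http://localhost:11434/api/chat -d '{
  "model": "goekdenizguelmez/JOSIEFIED-Qwen3:1.7b",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ]
}'
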
  • Developed by: Gökdeniz Gülmez
  • Funded by: Gökdeniz Gülmez
  • Shared by: Gökdeniz Gülmez
  • Original model: Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1
  • Format: GGUF
  • Model size: 1.72B params
  • Architecture: qwen3

  • Provided quantisations: 4-bit, 5-bit, 6-bit, 8-bit

