Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1-gguf

Model Description

This is the GGUF quantization of Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1.
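Outside Ollama, the GGUF files can be loaded with any llama.cpp-based runtime. Below is a minimal sketch using the llama-cpp-python bindings; the local file name and context size are placeholder assumptions, so point model_path at whichever quantization you actually downloaded from this repository.

# Minimal sketch: load a downloaded GGUF quantization with llama-cpp-python.
# The model_path below is a placeholder -- substitute the .gguf file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Josiefied-Qwen3-4B-abliterated-v1.Q4_0.gguf",  # assumed local file name
    n_ctx=4096,  # context window; adjust to your hardware
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])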

Ollama

ollama run goekdenizguelmez/JOSIEFIED-Qwen3:4b
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:4b-q4_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:4b-q5_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:4b-q6_k
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:4b-q8_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:4b-fp16
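The same tags can also be used programmatically. A minimal sketch with the official ollama Python client is shown below, assuming a local Ollama server is running and the 4b tag has already been pulled.

# Minimal sketch: chat with the pulled tag via the ollama Python client.
# Assumes `ollama serve` is running locally and the model tag has been pulled.
import ollama

response = ollama.chat(
    model="goekdenizguelmez/JOSIEFIED-Qwen3:4b",
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(response["message"]["content"])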
  • Developed by: Gökdeniz Gülmez
  • Funded by: Gökdeniz Gülmez
  • Shared by: Gökdeniz Gülmez
  • Original model: Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1
GGUF details

  • Model size: 4.02B params
  • Architecture: qwen3

Available quantizations

  • 4-bit (q4_0)
  • 5-bit (q5_0)
  • 6-bit (q6_k)
  • 8-bit (q8_0)


Model tree for Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1-gguf

  • Base model: Qwen/Qwen3-4B-Base
  • Finetuned: Qwen/Qwen3-4B
  • Quantized: this model
