JOSIEFIED Model Family

The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“gabliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities.

Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility.
These models are intended for advanced users who require unrestricted, high-performance language generation.

Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2

Model Description

Introducing Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2, a new addition to the JOSIEFIED family, fine-tuned and gabliterated with a focus on openness and instruction alignment. This release also debuts my new dataset, which gives Josie more personality and a little humor.

Gabliteration

With this model series, I introduce Gabliteration, a novel neural weight modification technique that advances beyond traditional abliteration methods through adaptive multi-directional projections with regularized layer selection. Gabliteration addresses the fundamental limitation of existing abliteration methods, which compromise model quality while attempting to modify specific behavioral patterns.
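To make the idea concrete, here is a minimal sketch of what projecting several refusal directions out of a weight matrix could look like in PyTorch, with a per-layer strength standing in for regularized layer selection. The function name, shapes, and the scalar scale parameter are illustrative assumptions, not the actual Gabliteration implementation.

```python
import torch

def project_out(weight: torch.Tensor, directions: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Remove the components of `weight` that write along `directions`.

    weight:     (d_model, d_in) matrix that writes into the residual stream.
    directions: (k, d_model) roughly orthonormal, unit-norm refusal directions.
    scale:      per-layer strength in [0, 1]; a regularized layer selection
                would drive this toward 0 for layers that should stay intact.
    """
    # Projector onto the span of the refusal directions: P = sum_i v_i v_i^T
    # (assumes the directions are unit-norm and close to orthogonal).
    proj = directions.T @ directions          # (d_model, d_model)
    # Subtract the projected component, scaled per layer.
    return weight - scale * (proj @ weight)
```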

Technical Background

Building upon the foundational work of Arditi et al. (2024) on single-direction abliteration, Gabliteration extends that approach to a comprehensive multi-directional framework with theoretical guarantees. My method applies singular value decomposition to difference matrices between harmful and harmless prompt representations to extract multiple refusal directions.
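As a rough sketch of this direction-extraction step, the snippet below builds a difference matrix from paired hidden states of harmful and harmless prompts and keeps the top right singular vectors as candidate refusal directions. The function name, shapes, and the choice of k are illustrative assumptions rather than the real code.

```python
import torch

def extract_refusal_directions(h_harmful: torch.Tensor,
                               h_harmless: torch.Tensor,
                               k: int = 4) -> torch.Tensor:
    """h_harmful, h_harmless: (n_prompts, d_model) hidden states from one layer,
    collected on paired harmful / harmless prompts.
    Returns (k, d_model) unit-norm candidate refusal directions."""
    # Difference matrix between the two prompt sets.
    diff = h_harmful - h_harmless                    # (n_prompts, d_model)
    # The top right singular vectors span the dominant "refusal" subspace.
    _, _, vh = torch.linalg.svd(diff, full_matrices=False)
    directions = vh[:k]                              # (k, d_model)
    return directions / directions.norm(dim=-1, keepdim=True)
```

Directions extracted this way could then be removed from the model's weights with a projection like the one sketched in the Gabliteration section above.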

Quantisations

Ollama

Not uploaded yet.
  • Developed by: Goekdeniz-Guelmez
  • Funded by: Goekdeniz-Guelmez
  • Shared by: Goekdeniz-Guelmez
  • Model type: qwen3
  • Finetuned from model: Qwen/Qwen3-4B-Instruct-2507

Bias, Risks, and Limitations

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

Model size: 4.02B parameters (BF16, Safetensors)
