๐Ÿน Qwen3-4B-abliterated


Qwen3 Abliterated 0.6B • 1.7B • 4B • 8B • 14B • 30B-A3B

This is an uncensored version of Qwen/Qwen3-4B created with a new abliteration technique. See this article to learn more about abliteration.

This is a research project to understand how refusals and latent fine-tuning work in LLMs. I experimented with different sizes of Qwen3 and noticed there was no one-size-fits-all abliteration strategy. In addition, the reasoning mode interfered with non-reasoning refusals, which made things more challenging. This led me to iterate over different recipes and to significantly consolidate my scripts, adding accumulation and better evaluations.

Note that this is fairly experimental, so it might not turn out as well as expected.

I recommend using these generation parameters: temperature=0.6, top_k=20, top_p=0.95, min_p=0.
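As a minimal sketch, here is how these parameters could be applied with Hugging Face transformers (the prompt and max_new_tokens are placeholders, not recommendations; min_p sampling requires a reasonably recent transformers release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Qwen3-4B-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain abliteration in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Recommended sampling parameters from above.
output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_k=20,
    top_p=0.95,
    min_p=0.0,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```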

โœ‚๏ธ Abliteration

The refusal direction is computed by comparing the residual streams between target (harmful) and baseline (harmless) samples. The hidden states of target modules (e.g., o_proj) are orthogonalized to subtract this refusal direction with a given weight factor. These weight factors follow a normal distribution with a certain spread and peak layer. Modules can be iteratively orthogonalized in batches, or the refusal direction can be accumulated to save memory.
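For illustration only (this is not the author's actual script), the core steps might look roughly like this in PyTorch; the function names and the exact Gaussian parameterization of the per-layer weight factors are assumptions:

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    # Difference of mean residual-stream activations between harmful (target)
    # and harmless (baseline) samples, normalized to a unit vector.
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def layer_factors(num_layers: int, peak_layer: int, spread: float, max_weight: float = 1.0) -> torch.Tensor:
    # Per-layer weight factors following a normal (Gaussian) curve over the layer
    # index, centered on peak_layer with the given spread.
    idx = torch.arange(num_layers, dtype=torch.float32)
    return max_weight * torch.exp(-0.5 * ((idx - peak_layer) / spread) ** 2)

@torch.no_grad()
def orthogonalize_(weight: torch.Tensor, direction: torch.Tensor, factor: float) -> None:
    # Subtract the refusal direction from a module's output space (e.g., o_proj):
    # W <- W - factor * (d d^T) W, so the output no longer carries a component along d.
    d = direction.to(weight.dtype)
    weight -= factor * torch.outer(d, d) @ weight
```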

Finally, I used a hybrid evaluation with a dedicated test set to calculate the acceptance rate. This uses both a dictionary approach and NousResearch/Minos-v1. The goal is to obtain an acceptance rate >90% and still produce coherent outputs.
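A rough sketch of what such a hybrid check could look like follows; the refusal markers, the Minos-v1 input format, and its label names are assumptions here, so consult the classifier's model card before reusing this:

```python
from transformers import pipeline

# Illustrative refusal markers; the actual dictionary is not published here.
REFUSAL_MARKERS = ["i cannot", "i can't", "i won't", "i'm sorry", "as an ai"]

def dict_flags_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

# Hypothetical use of Minos-v1 via a text-classification pipeline.
classifier = pipeline("text-classification", model="NousResearch/Minos-v1")

def acceptance_rate(pairs: list[tuple[str, str]]) -> float:
    # pairs: (prompt, response) tuples from the dedicated test set.
    accepted = 0
    for prompt, response in pairs:
        text = f"{prompt}\n\n{response}"  # assumed input format, check the model card
        label = classifier(text, truncation=True)[0]["label"]
        if not dict_flags_refusal(response) and "refus" not in label.lower():
            accepted += 1
    return accepted / len(pairs)
```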
