๐น Qwen3-4B-abliterated
This is an uncensored version of Qwen/Qwen3-4B created with a new abliteration technique. See this article to know more about abliteration.
This is a research project to understand how refusals and latent fine-tuning work in LLMs. I played with different sizes of Qwen3 and noticed there was no one-size-fits-all abliteration strategy. In addition, the reasoning mode interfered with non-reasoning refusals, which made it more challenging. This made me iterate over different recipes and significantly consolidate my scripts with accumulation and better evaluations.
Note that this is fairly experimental, so it might not turn out as well as expected.
I recommend using these generation parameters: temperature=0.6
, top_k=20
, top_p=0.95
, min_p=0
.
โ๏ธ Abliteration
The refusal direction is computed by comparing the residual streams between target (harmful) and baseline (harmless) samples. The hidden states of target modules (e.g., o_proj) are orthogonalized to subtract this refusal direction with a given weight factor. These weight factors follow a normal distribution with a certain spread and peak layer. Modules can be iteratively orthogonalized in batches, or the refusal direction can be accumulated to save memory.
Finally, I used a hybrid evaluation with a dedicated test set to calculate the acceptance rate. This uses both a dictionary approach and NousResearch/Minos-v1. The goal is to obtain an acceptance rate >90% and still produce coherent outputs.
- Downloads last month
- 338
Model tree for mlabonne/Qwen3-4B-abliterated
Base model
Qwen/Qwen3-4B-Base