
Xiaolong-Qwen3-0.6B

Xiaolong is a small, uncensored, reasoning-focused model finetuned using ORPO and QLoRA on top of Qwen3-0.6B-abliterated-TIES.
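The card does not include a usage snippet, so here is a minimal sketch with the standard transformers API. The prompt and generation settings are illustrative, and `enable_thinking` follows the Qwen3 chat-template convention; none of this is taken from the author's own examples.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Xiaolong-Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Why is the sky blue?"}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,  # Qwen3 chat-template switch for reasoning mode
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```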

Finetuning Details

  • Method: ORPO
  • Epochs: 1.3
  • Learning Rate: 5e-6, cosine decay w/ 5% warmup
  • Batch Size: 4 x 8 (32 effective)
  • Max Grad Norm: 0.3
  • LoRA Rank: 64
  • Hardware: 1x NVIDIA RTX A6000
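For reference, these hyperparameters map onto TRL's ORPOTrainer with a PEFT QLoRA config roughly as below. This is a minimal sketch under stated assumptions, not the author's training script: the base-model repo path, `lora_alpha`, the ORPO beta (left at TRL's default), LoRA target modules (left to PEFT defaults), and the preference data file are all assumptions not stated in the card.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Qwen3-0.6B-abliterated-TIES"  # assumed repo path for the base model

# QLoRA: load the base model in 4-bit NF4 with bf16 compute
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)

peft_config = LoraConfig(
    r=64,            # LoRA rank from the card
    lora_alpha=64,   # assumption: alpha is not stated in the card
    task_type="CAUSAL_LM",
)

args = ORPOConfig(
    output_dir="xiaolong-orpo",
    num_train_epochs=1.3,           # epochs from the card
    learning_rate=5e-6,             # LR from the card
    lr_scheduler_type="cosine",     # cosine decay
    warmup_ratio=0.05,              # 5% warmup
    per_device_train_batch_size=4,  # 4 x 8 = 32 effective batch size
    gradient_accumulation_steps=8,
    max_grad_norm=0.3,
    bf16=True,
)

# Placeholder: ORPO expects a preference dataset with
# prompt / chosen / rejected columns
dataset = load_dataset("json", data_files="preferences.json", split="train")

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions use tokenizer= instead
    peft_config=peft_config,
)
trainer.train()
```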

Dataset Composition

~9,100 samples in total, of which roughly 3,000 include Chain of Thought reasoning.


Model size: 596M params · Tensor type: BF16 · Format: Safetensors

Model tree for nbeerbower/Xiaolong-Qwen3-0.6B

  • Finetuned (1): this model
  • Quantizations: 8 models
Datasets used to train nbeerbower/Xiaolong-Qwen3-0.6B