Triangle104/Xiaolong-Qwen3-0.6B-Q5_K_M-GGUF

This model was converted to GGUF format from nbeerbower/Xiaolong-Qwen3-0.6B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


Xiaolong is a small, uncensored, reasoning-focused model finetuned using ORPO and QLoRA on top of Qwen3-0.6B-abliterated-TIES.

Finetuning Details

  • Method: ORPO
  • Epochs: 1.3
  • Learning Rate: 5e-6, cosine decay w/ 5% warmup
  • Batch Size: 4 x 8 (32 effective)
  • Max Grad Norm: 0.3
  • LoRA Rank: 64
  • Hardware: 1x NVIDIA RTX A6000

Dataset Composition

~9,100 samples in total, of which 3,000 use Chain-of-Thought reasoning.

  • nbeerbower/GreatFirewall-DPO
  • nbeerbower/Schule-DPO
  • nbeerbower/Purpura-DPO
  • nbeerbower/Arkhaios-DPO
  • jondurbin/truthy-dpo-v0.1
  • antiven0m/physical-reasoning-dpo
  • flammenai/Date-DPO-NoAsterisks
  • flammenai/Prude-Phi3-DPO
  • Atsunori/HelpSteer2-DPO (1000 samples)
  • jondurbin/gutenberg-dpo-v0.1
  • nbeerbower/gutenberg2-dpo
  • nbeerbower/gutenberg-moderne-dpo

Chain of Thought

  • GeneralReasoning/GeneralThought-430K (1000 samples)
  • nvidia/OpenMathReasoning (1000 samples)
  • nvidia/OpenCodeReasoning (1000 samples)

Use with llama.cpp

Install llama.cpp via Homebrew (works on macOS and Linux):

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Triangle104/Xiaolong-Qwen3-0.6B-Q5_K_M-GGUF --hf-file xiaolong-qwen3-0.6b-q5_k_m.gguf -p "The meaning to life and the universe is"

Server:

llama-server --hf-repo Triangle104/Xiaolong-Qwen3-0.6B-Q5_K_M-GGUF --hf-file xiaolong-qwen3-0.6b-q5_k_m.gguf -c 2048
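
Once the server is running, you can query its OpenAI-compatible chat endpoint. A minimal sketch with curl, assuming llama-server's default host and port (127.0.0.1:8080):

curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is the meaning of life?"}], "max_tokens": 128}'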

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
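
Note that recent llama.cpp releases have replaced the Makefile with CMake, so the make invocation above may fail on current checkouts. A rough equivalent under the CMake build system (an assumption based on current llama.cpp; binaries land in build/bin/):

cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release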

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Triangle104/Xiaolong-Qwen3-0.6B-Q5_K_M-GGUF --hf-file xiaolong-qwen3-0.6b-q5_k_m.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Triangle104/Xiaolong-Qwen3-0.6B-Q5_K_M-GGUF --hf-file xiaolong-qwen3-0.6b-q5_k_m.gguf -c 2048
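
Alternatively, you can download the GGUF file once and pass it with -m instead of fetching it via --hf-repo/--hf-file on every run. A sketch using huggingface-cli (assumes the huggingface_hub package is installed):

huggingface-cli download Triangle104/Xiaolong-Qwen3-0.6B-Q5_K_M-GGUF xiaolong-qwen3-0.6b-q5_k_m.gguf --local-dir .
./llama-cli -m xiaolong-qwen3-0.6b-q5_k_m.gguf -p "The meaning to life and the universe is"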

Model Details

  • Model size: 596M params
  • Architecture: qwen3
  • Quantization: 5-bit (Q5_K_M)