Astral-0.6B-Flash

Astral 0.6B is the smallest model in the Astral family. It was fine-tuned from Qwen/Qwen3-0.6B on LucidityAI/Astral-Post-Training-Dataset.

As with other Qwen3 models, reasoning can be toggled by appending /no_think to the prompt; omit it to keep thinking enabled.

Example Prompt (ChatML format, thinking):

<|im_start|>user
What is the capital of France?
<|im_end|>
<|im_start|>assistant
<think>

Example Prompt (ChatML format, non-thinking):

<|im_start|>user
What is the capital of France? /no_think
<|im_end|>
<|im_start|>assistant
<think>
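
Example usage (Transformers, Python) — a minimal, unofficial sketch of one way to load the model and toggle reasoning via the /no_think switch described above. The ask helper and the generation settings (e.g. max_new_tokens) are illustrative assumptions, not part of the model card.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LucidityAI/Astral-0.6B-Flash"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

def ask(question: str, thinking: bool = True) -> str:
    # Appending /no_think disables the <think> reasoning block (see above).
    content = question if thinking else f"{question} /no_think"
    messages = [{"role": "user", "content": content}]
    # apply_chat_template renders the ChatML prompt shown in the examples.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=512)
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

print(ask("What is the capital of France?"))                   # reasoning enabled
print(ask("What is the capital of France?", thinking=False))   # reasoning disabled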
Downloads last month: 501
Model size: 596M params (Safetensors)
Tensor type: BF16

Model tree for LucidityAI/Astral-0.6B-Flash

Finetuned from: Qwen/Qwen3-0.6B (one of 312 finetunes of the base model)
Quantizations of this model: 2

Dataset used to train LucidityAI/Astral-0.6B-Flash: LucidityAI/Astral-Post-Training-Dataset
