trlm-stage-2-sft-final-2 is the Stage 2 post-training model for the Tiny Reasoning Language Model (trlm) project.
This stage focuses on reasoning tasks, fine-tuned on a curated dataset of 78,000 entries with reasoning tokens (`<think>...</think>`). It introduces `<think>` chain-of-thought representations: the model learns to analyze problems step-by-step, reason with intermediate thoughts, and provide structured answers, emitting `<think>` traces where reasoning helps while avoiding unnecessary `<think>` tokens in simple tasks.

This model was trained on the dataset:
Shekswess/trlm-sft-stage-2-final-2
Dataset summary (78,000 entries with `<think>` annotations):

| Source Dataset | Entries | Percentage |
|---|---|---|
| Llama_Nemotron_Post_Training_Dataset_reasoning_r1 | 40,200 | 51.5% |
| OpenThoughts3_1.2M | 20,000 | 25.6% |
| multi_turn_reasoning_if_think | 10,000 | 12.8% |
| aya_dataset_Qwen3_32B_think | 5,000 | 6.4% |
| smoltalk_everyday_convs_reasoning_Qwen3_32B_think | 2,000 | 2.6% |
| s1k_1.1_think | 800 | 1.0% |
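
The mixture above is published as a single dataset on the Hub, so it can be inspected directly with the `datasets` library. A minimal sketch follows; the `train` split name and the exact column layout are assumptions, not guaranteed by this card.

```python
from datasets import load_dataset

# Load the Stage 2 SFT mixture from the Hub
# (the "train" split name is an assumption for illustration).
sft_data = load_dataset("Shekswess/trlm-sft-stage-2-final-2", split="train")

print(sft_data)      # row count and column names
print(sft_data[0])   # one entry, including its <think> reasoning trace
```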
Example usage:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Shekswess/trlm-stage-2-sft-final-2"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example inference with reasoning
messages = [
    {"role": "user", "content": "If a train travels 60 km in 1 hour and another 90 km in 1.5 hours, what is the average speed?"}
]

# Apply chat template and generate
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
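
Because the model wraps its reasoning in `<think>...</think>` tags, the trace can be split from the final answer after generation. A minimal sketch, assuming the tags appear verbatim in the decoded text (if they are registered as special tokens in this tokenizer, decode with `skip_special_tokens=False` so they are kept):

```python
import re

# Decoded text from the generation above; see the note on skip_special_tokens.
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer) by splitting on the first <think>...</think> block."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning trace emitted (e.g. a simple task): treat everything as the answer.
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

reasoning, answer = split_reasoning(generated_text)
print("Reasoning:", reasoning)
print("Answer:", answer)
```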
Part of the Tiny Reasoning Language Model (trlm) post-training pipeline.
Base model: HuggingFaceTB/SmolLM2-135M