🧠 trlm-stage-2-sft-final-2

trlm-stage-2-sft-final-2 is the Stage 2 post-training model for the Tiny Reasoning Language Model (trlm) project.
This stage focuses on reasoning: the model is fine-tuned on a curated dataset of 78,000 entries annotated with reasoning traces wrapped in <think>...</think> tokens.


📖 Model Description

  • Base Model: Shekswess/trlm-stage-1-sft-final-2
  • Type: Causal Language Model (decoder-only transformer)
  • Stage: Post-training Stage 2 (SFT)
  • Objective: Equip the model with reasoning ability, multi-turn thought structuring, and explicit <think> chain-of-thought representations.

This stage teaches the model to analyze problems step-by-step, reason with intermediate thoughts, and provide structured answers.
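
For example, a well-formed Stage 2 response keeps the intermediate reasoning inside the tags and the final answer outside them. The exchange below illustrates the target format only; it is not captured model output:

User: A book costs $12 and a pen costs $3. How much do two books and one pen cost together?

Assistant:
<think>
Two books cost 2 × 12 = 24, and adding one pen at 3 gives 24 + 3 = 27.
</think>
Two books and one pen cost $27 together.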


🎯 Intended Uses & Limitations

Intended Uses

  • Reasoning-based question answering
  • Step-by-step logical explanations
  • Multi-turn reasoning with <think> traces
  • Precursor to preference optimization (Stage 3)

Limitations

  • May overfit to the reasoning style and hallucinate <think> traces even on simple tasks
  • Still limited in knowledge scope (135M parameters)
  • Trained only on English datasets

📊 Training Data

This model was trained on the dataset:
👉 Shekswess/trlm-sft-stage-2-final-2

Dataset summary:

  • Entries: 78,000
  • Sources: 6 subsets of HuggingFaceTB/smoltalk2
  • Focus: Reasoning tasks with <think> annotations
| Source Dataset | Entries | Percentage |
|---|---|---|
| Llama_Nemotron_Post_Training_Dataset_reasoning_r1 | 40,200 | 51.5% |
| OpenThoughts3_1.2M | 20,000 | 25.6% |
| multi_turn_reasoning_if_think | 10,000 | 12.8% |
| aya_dataset_Qwen3_32B_think | 5,000 | 6.4% |
| smoltalk_everyday_convs_reasoning_Qwen3_32B_think | 2,000 | 2.6% |
| s1k_1.1_think | 800 | 1.0% |
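
To inspect the mixture locally, the dataset can be loaded straight from the Hub with 🤗 Datasets. This is a minimal sketch; the "train" split name and column layout are assumptions, not stated in this card:

from datasets import load_dataset

# Load the Stage 2 SFT mixture (assumes a standard "train" split)
ds = load_dataset("Shekswess/trlm-sft-stage-2-final-2", split="train")

print(ds)      # row count and column names
print(ds[0])   # first entry, including any <think> annotations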

⚙️ Training Procedure

Training Hyperparameters

  • Learning rate: 3e-4
  • Train batch size: 32
  • Eval batch size: 8
  • Gradient accumulation steps: 4
  • Total effective batch size: 128
  • Optimizer: AdamW (betas=(0.9, 0.99), eps=1e-08)
  • LR Scheduler: Cosine with warmup ratio 0.1
  • Epochs: 1
  • Seed: 42
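
The training script itself is not part of this card; as an assumed reconstruction for illustration only, the hyperparameters above map onto Hugging Face TrainingArguments roughly like this (output_dir is a placeholder):

from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; not the original script.
training_args = TrainingArguments(
    output_dir="trlm-stage-2-sft",     # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # 32 x 4 = 128 effective batch size
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-8,
    seed=42,
)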

Framework Versions

  • Transformers: 4.56.2
  • PyTorch: 2.7.1+rocm7.0.0.git698b58a9
  • Datasets: 4.0.0
  • Tokenizers: 0.22.1

🚀 Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Shekswess/trlm-stage-2-sft-final-2"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example inference with reasoning
messages = [
    {"role": "user", "content": "If a train travels 60 km in 1 hour and another 90 km in 1.5 hours, what is the average speed?"}
]

# Apply chat template
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
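
Since the response is expected to contain a <think>...</think> block, it is often useful to split the trace from the final answer. The post-processing below is a minimal sketch; it assumes the literal tags survive decoding (if they are registered as special tokens, keep skip_special_tokens=False as shown):

# Decode only the newly generated tokens, preserving the <think> tags
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
response = tokenizer.decode(new_tokens, skip_special_tokens=False)

if "</think>" in response:
    reasoning, answer = response.split("</think>", 1)
    print("Reasoning:", reasoning.split("<think>", 1)[-1].strip())
    print("Answer:", answer.strip())
else:
    print("Answer:", response.strip())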

📌 Next Steps

  • Stage 3: DPO / preference optimization for reasoning stability

Part of the Tiny Reasoning Language Model (trlm) post-training pipeline.
