Tiny Reasoning Language Model
Collection dedicated to the development of the Tiny Reasoning Language Model (trlm)
The Tiny Reasoning Language Model (trlm-135) is a 135M parameter research prototype designed to study how small models can learn step-by-step reasoning. It was built on top of SmolLM2-135M-Instruct and fine-tuned through a 3-stage pipeline, including supervised fine-tuning on reasoning traces wrapped in `<think>` tags. The code for everything can be found here.
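For illustration, a reasoning-style chat sample in this format might look like the following; the trace contents are hypothetical, and only the `<think>` wrapping comes from the description above:

```python
# Hypothetical reasoning-trace sample; only the <think> ... </think>
# wrapping is taken from the model description, the rest is illustrative.
sample = [
    {"role": "user", "content": "What is 17 + 26?"},
    {
        "role": "assistant",
        "content": "<think>17 + 20 = 37, and 37 + 6 = 43.</think> 17 + 26 = 43.",
    },
]
```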
```bash
pip install -U transformers accelerate
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Shekswess/trlm-135m"
device = "cuda"  # or "cpu"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Example prompt
prompt = "Give me a brief explanation of gravity in simple terms."
messages = [
    {"role": "user", "content": prompt},
]

# Apply chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
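Since the fine-tuning wraps reasoning in `<think>` tags, the generated text can be split into the trace and the final answer. The snippet below is a minimal sketch under that assumption (tag placement and tokenization may differ):

```python
# Separate the chain of thought from the final answer, assuming the
# <think> ... </think> format described above. If the tags are registered
# as special tokens, decode with skip_special_tokens=False to keep them.
continuation = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=False,
)
if "</think>" in continuation:
    reasoning, answer = continuation.split("</think>", 1)
    print("Reasoning:", reasoning.replace("<think>", "").strip())
    print("Answer:", answer.strip())
else:
    print(continuation.strip())
```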
For reasoning-heavy tasks, set `temperature=0.6` and `top_p=0.95`.
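As a sketch, those settings plug into `generate` like this (the token budget is an assumption; sampling must be enabled for them to take effect):

```python
# Suggested sampling settings for reasoning-heavy tasks.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,  # larger budget for the <think> trace (assumption)
    do_sample=True,      # temperature/top_p only apply when sampling
    temperature=0.6,
    top_p=0.95,
)
```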
Special thanks to @HotAisle
The model generates its reasoning inside `<think>` segments. Evaluation was done with lm-eval-harness:
| Benchmark | Tiny Reasoning Language Model (trlm-135M) | SmolLM2-135M-Instruct | Improvement |
|---|---|---|---|
| ARC Challenge | 40.61 (avg) | 37.3 (avg) | +3.31 |
| BBH | 36.80 (3-shot) | 28.2 (3-shot) | +8.6 |
| BoolQ | 62.17 | – | N/A |
| GSM8K | 2.59 (5-shot) | 1.4 (5-shot) | +1.19 |
| IFEval | 35.49 (avg) | 29.9 (avg) | +5.59 |
| MMLU | 34.95 | 29.3 | +5.65 |
| PIQA | 64.91 | 66.3 | –1.39 |
| HellaSwag | – | 40.9 | N/A |
| MT-Bench | – | 19.8 | N/A |
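For reference, a typical lm-eval-harness run looks like the sketch below; the exact task names, few-shot settings (e.g. 3-shot BBH, 5-shot GSM8K), and batch size behind the table are assumptions:

```bash
pip install -U lm-eval

lm_eval \
  --model hf \
  --model_args pretrained=Shekswess/trlm-135m \
  --tasks arc_challenge,bbh,boolq,gsm8k,ifeval,mmlu,piqa \
  --batch_size 8
```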
Base model: HuggingFaceTB/SmolLM2-135M