Self-Training Elicits Concise Reasoning in Large Language Models

This model is fine-tuned with self-training so that it generates more concise reasoning paths on reasoning tasks while maintaining accuracy.
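At a high level, the recipe samples several candidate reasoning paths per training question and fine-tunes on the shortest correct ones (best-of-N selection, the "bon" suffix in the model name). The sketch below illustrates only that selection step; generate_paths and extract_answer are hypothetical placeholders standing in for sampling and answer parsing, not the authors' released code.

# Minimal sketch of best-of-N selection of concise, correct reasoning paths.
# Assumption: generate_paths and extract_answer are hypothetical helpers,
# not the authors' actual implementation.
def select_concise_path(question, gold_answer, generate_paths, extract_answer, n=8):
    paths = generate_paths(question, n)  # sample n candidate reasoning paths
    correct = [p for p in paths if extract_answer(p) == gold_answer]
    return min(correct, key=len) if correct else None  # keep the shortest correct one

# The surviving (question, concise_path) pairs would then serve as the
# supervised fine-tuning set.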

Model Details

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "tergel/llama-3.2-3b-instruct-math-fs-gpt4o-bon"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the tokenizer and the model in bfloat16 on the available device
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map=device, torch_dtype=torch.bfloat16)

question = "Convert the point $(0,3)$ in rectangular coordinates to polar coordinates.  Enter your answer in the form $(r,\\theta),$ where $r > 0$ and $0 \\le \\theta < 2 \\pi.$"

inputs = tokenizer(question, return_tensors="pt").to(device)
input_length = inputs["input_ids"].shape[1]  # prompt length in tokens

outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][input_length:], skip_special_tokens=True)
print(response)
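Since this is an instruct-tuned checkpoint, you may get better-formatted answers by wrapping the question in the model's chat template. This is an optional variant, not necessarily the setup used in the paper's evaluation.

# Optional: format the question with the chat template before generating
messages = [{"role": "user", "content": question}]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

outputs = model.generate(chat_inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][chat_inputs.shape[1]:], skip_special_tokens=True))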

For more detailed information about training methods, evaluation results, limitations, and technical specifications, please refer to our paper.

Citation

@article{munkhbat2025self,
  title={Self-Training Elicits Concise Reasoning in Large Language Models},
  author={Munkhbat, Tergel and Ho, Namgyu and Kim, Seohyun and Yang, Yongjin and Kim, Yujin and Yun, Se-Young},
  journal={arXiv preprint arXiv:2502.20122},
  year={2025}
}