RLT-32B

This repository contains a 32B-parameter student model trained using the Reinforcement-Learned Teachers (RLT) pipeline introduced in our paper, Reinforcement Learning Teachers.

Model Details

Model Description

This 32B RLT student was distilled from a 7B Reinforcement-Learned Teacher that was explicitly trained to produce high-quality reasoning traces optimized for student distillation. The student was trained with supervised fine-tuning, using the same hyperparameters, system prompt, and reasoning tags as Li et al. (2025). Evaluation was conducted with the SkyThought library at commit 4bb8f3e. Please refer to our repository and paper for full details and results.
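
To make the distillation setup concrete, below is a hypothetical sketch of how a teacher-generated trace might be packaged into a supervised fine-tuning example. The SYSTEM_PROMPT text, the tag names, and the to_sft_example helper are illustrative placeholders, not the exact format from Li et al. (2025); see our repository for the actual configuration.

```python
# Hypothetical sketch of packaging a teacher trace into an SFT chat example.
# SYSTEM_PROMPT and the <think>/<solution> tag names are placeholders; the
# actual strings follow Li et al. (2025).

SYSTEM_PROMPT = "Reason step by step inside the thought tags, then give the final solution."

def to_sft_example(question: str, teacher_trace: str, solution: str) -> dict:
    """Wrap a teacher-generated reasoning trace and solution in reasoning tags."""
    assistant_target = (
        f"<think>\n{teacher_trace}\n</think>\n"
        f"<solution>\n{solution}\n</solution>"
    )
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
            {"role": "assistant", "content": assistant_target},
        ]
    }
```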

Usage

This model is provided for research and development purposes only and should be considered an experimental prototype. It is not intended for commercial use or deployment in mission-critical environments. Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed. Sakana AI shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained. Users must fully understand the risks associated with the use of this model and use it at their own discretion.
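
For reference, here is a minimal inference sketch using the standard Hugging Face transformers workflow inherited from the Qwen2.5 base. The training-time system prompt from Li et al. (2025) is omitted here, so the plain chat-template usage below is an assumption, not the exact evaluation setup.

```python
# Minimal inference sketch. A 32B BF16 model needs substantial GPU memory
# (roughly 65 GB for the weights alone), hence device_map="auto" to shard
# across available devices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SakanaAI/RLT-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 checkpoint
    device_map="auto",
)

messages = [
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```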

Specifications

Model size: 32.8B params
Tensor type: BF16
Format: Safetensors

Base model: Qwen/Qwen2.5-32B