SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression

📃 Paper • 📝 Wandb


🔍 Overview

SIRI (Scaling Iterative Reinforcement Learning with Interleaved Compression) is a reinforcement-learning–based framework designed to improve the efficiency and accuracy of Large Reasoning Models (LRMs).

Traditional RL training often causes overthinking and long, redundant reasoning traces. Prior methods that compress outputs (length penalties, pruning, or skipping thought tokens) improve efficiency but hurt accuracy.

SIRI resolves this trade-off by iteratively alternating between compressing and expanding the rollout length budget, controlled by a cosine length scheduler. This dynamically balances concise reasoning with long-horizon exploration.
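The cosine length scheduler can be pictured as an oscillating cap on the maximum rollout length. Below is a minimal sketch in Python; the bounds, period, and function name are illustrative assumptions, not the hyperparameters used to train this model.

```python
import math


def rollout_length_budget(step: int,
                          min_len: int = 2048,
                          max_len: int = 8192,
                          period: int = 200) -> int:
    """Illustrative cosine schedule for the per-step rollout length cap.

    The cap oscillates between `max_len` and `min_len`, alternating
    expansion and compression phases every half period. All values here
    are placeholders, not the paper's settings.
    """
    phase = 2 * math.pi * (step % period) / period
    # cos(phase) sweeps 1 -> -1 -> 1 over one full period
    frac = 0.5 * (1.0 + math.cos(phase))
    return int(min_len + (max_len - min_len) * frac)


# Example budgets over one period:
#   step   0 -> 8192 (expansion)
#   step 100 -> 2048 (compression)
#   step 200 -> 8192 (expansion again)
```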

[Figure: Pareto front of accuracy vs. average token usage]


🚀 Key Features

  • Interleaved Compression–Expansion:
    • Compression phase: forces concise, high-density reasoning by limiting rollout length.
    • Expansion phase: restores longer rollouts to encourage exploration and planning.
  • Token Efficiency without Accuracy Loss: Unlike previous methods, SIRI improves accuracy while reducing average token usage.
  • Iterative RL Training: Built on GRPO with DAPO-style modifications (decoupled clip-high/low ranges, KL penalty removed); a loss sketch follows this list.
  • Generalization Across Model Sizes: Validated on both 1.5B and 7B models.
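
For concreteness, here is a hedged sketch of a GRPO-style loss with decoupled clip bounds and no KL term. The tensor shapes, the one-prompt-per-batch (single group) simplification, and the clip values are assumptions for illustration, not the exact training configuration of this model.

```python
import torch


def grpo_dapo_loss(logprobs: torch.Tensor,      # (G, T) new-policy token log-probs
                   old_logprobs: torch.Tensor,  # (G, T) rollout-policy token log-probs
                   rewards: torch.Tensor,       # (G,)   scalar reward per rollout
                   mask: torch.Tensor,          # (G, T) 1 for generated tokens, else 0
                   eps_low: float = 0.2,
                   eps_high: float = 0.28) -> torch.Tensor:
    """Sketch of a GRPO loss with asymmetric (decoupled) clipping and no KL penalty.

    Assumes all G rollouts answer the same prompt (one GRPO group); the
    eps_low/eps_high values are illustrative, not this model's settings.
    """
    # Group-normalized advantage: (reward - group mean) / group std.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    adv = adv.unsqueeze(-1)                              # broadcast over tokens

    ratio = torch.exp(logprobs - old_logprobs)           # per-token importance ratio
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)

    # Pessimistic PPO-style objective with decoupled clip-low/clip-high,
    # and no KL penalty toward a reference policy.
    per_token = -torch.min(ratio * adv, clipped * adv)
    return (per_token * mask).sum() / mask.sum().clamp(min=1)
```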

📊 Benchmarks

[Figure: benchmark results]
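
To try the checkpoint locally, a minimal inference sketch with Hugging Face transformers is shown below. The model ID is this repository's name; the prompt, dtype handling, and generation length are illustrative choices rather than recommended settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THU-KEG/SIRI-1.5B-high"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Assumes the tokenizer ships a chat template; prompt and max_new_tokens are placeholders.
messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```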


📝 Citation

@misc{wen2025siriscalingiterativereinforcement,
      title={SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression}, 
      author={Haoming Wen and Yushi Bai and Juanzi Li and Jie Tang},
      year={2025},
      eprint={2509.25176},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.25176}, 
}