SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression
🔍 Overview
SIRI (Scaling Iterative Reinforcement Learning with Interleaved Compression) is a reinforcement-learning–based framework designed to improve the efficiency and accuracy of Large Reasoning Models (LRMs).
RL training of reasoning models often induces overthinking, producing long, redundant reasoning traces. Prior methods that compress outputs (length penalties, pruning, or skipping thought tokens) improve efficiency but hurt accuracy.
SIRI solves this trade-off by iteratively alternating between compression and expansion of the reasoning budget, controlled by a cosine length scheduler. This approach dynamically balances concise reasoning with long-horizon exploration.
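The paper describes the cosine length scheduler only at a high level; below is a minimal sketch of what such a scheduler could look like. The function and parameter names (`cosine_length_budget`, `period`, `min_len`, `max_len`) are illustrative assumptions, not names from the SIRI codebase.

```python
import math

def cosine_length_budget(step: int, period: int,
                         min_len: int = 2048, max_len: int = 8192) -> int:
    """Hypothetical cosine scheduler for the rollout-length budget.

    Oscillates between max_len (expansion phase) and min_len
    (compression phase) with a period of `period` training steps.
    """
    phase = math.cos(2 * math.pi * step / period)  # in [-1, 1]
    frac = 0.5 * (phase + 1.0)                     # mapped to [0, 1]
    return int(min_len + frac * (max_len - min_len))
```

Under this schedule the budget starts at `max_len`, bottoms out at `min_len` halfway through each period (forcing concise, high-density traces), then expands again to re-enable long-horizon exploration.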
🚀 Key Features
- Interleaved Compression–Expansion:
  - Compression phase: forces concise, high-density reasoning by limiting rollout length.
  - Expansion phase: restores longer rollouts to encourage exploration and planning.
- Token Efficiency without Accuracy Loss: Unlike previous methods, SIRI improves accuracy while reducing average token usage.
- Iterative RL Training: Built on GRPO with modifications from DAPO (clip-high/low decoupling and KL-penalty removal); a sketch of the resulting objective follows this list.
- Generalization Across Model Sizes: Validated on both 1.5B and 7B models.
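For reference, GRPO with the DAPO modifications listed above amounts to a clipped surrogate objective with decoupled clip ranges and no KL penalty. The sketch below is an illustrative per-token loss, not code from the SIRI repository; the tensor names and epsilon values are assumptions.

```python
import torch

def dapo_style_policy_loss(logp_new: torch.Tensor,
                           logp_old: torch.Tensor,
                           advantages: torch.Tensor,
                           eps_low: float = 0.2,
                           eps_high: float = 0.28) -> torch.Tensor:
    """Clipped surrogate loss with decoupled low/high clip ranges.

    Unlike vanilla PPO/GRPO, the upper clip margin (eps_high) can differ
    from the lower one (eps_low), and no KL penalty term is added.
    """
    ratio = torch.exp(logp_new - logp_old)  # per-token importance ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high) * advantages
    # Pessimistic (min) objective, negated so it can be minimized.
    return -torch.min(unclipped, clipped).mean()
```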
📊 Benchmarks
📝 Citation
```bibtex
@misc{wen2025siriscalingiterativereinforcement,
      title={SIRI: Scaling Iterative Reinforcement Learning with Interleaved Compression},
      author={Haoming Wen and Yushi Bai and Juanzi Li and Jie Tang},
      year={2025},
      eprint={2509.25176},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.25176},
}
```
Model tree for THU-KEG/SIRI-1.5B-high
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
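A minimal inference sketch, assuming the checkpoint loads through the standard transformers API (the prompt and generation settings are illustrative; `device_map="auto"` requires accelerate):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THU-KEG/SIRI-1.5B-high"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```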