Recycling Pretrained Checkpoints: Orthogonal Growth of Mixture-of-Experts for Efficient Large Language Model Pre-Training
Abstract
Recycling pretrained checkpoints through orthogonal growth methods improves large language model performance with reduced computational cost.
The rapidly increasing computational cost of pretraining Large Language Models necessitates more efficient approaches. Substantial compute has already been invested in existing well-trained checkpoints, yet many of them remain underutilized due to engineering constraints or limited model capacity. To efficiently reuse this "sunk" cost, we propose to recycle pretrained checkpoints by expanding their parameter counts and continuing training. We propose an orthogonal growth method well-suited for converged Mixture-of-Experts models: interpositional layer copying for depth growth and expert duplication with injected noise for width growth. To determine the optimal timing for such growth across a checkpoint sequence, we perform comprehensive scaling experiments, which reveal that final accuracy correlates strongly and positively with the amount of sunk cost, indicating that greater prior investment leads to better performance. We scale our approach to models with 70B parameters and over 1T training tokens, achieving a 10.66% accuracy gain over training from scratch under the same additional compute budget. Our checkpoint recycling approach establishes a foundation for economically efficient large language model pretraining.
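To make the two growth operations concrete, below is a minimal illustrative sketch in PyTorch, not the authors' implementation: depth growth by inserting a copy of each layer directly after the original (interpositional copying), and width growth by duplicating every expert with small Gaussian noise and widening the router accordingly. The names (ToyMoEBlock, grow_depth, grow_width, noise_std) and the soft-routing forward pass are simplifying assumptions for readability; real MoE layers use sparse top-k routing.

```python
# Illustrative sketch (PyTorch) of orthogonal MoE growth, under assumptions noted above.
import copy
import torch
import torch.nn as nn


class ToyMoEBlock(nn.Module):
    """Minimal MoE layer: a linear router over a list of feed-forward experts."""

    def __init__(self, d_model: int = 64, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dense (soft) routing for brevity; production MoE layers route sparsely.
        gates = torch.softmax(self.router(x), dim=-1)              # [..., E]
        outs = torch.stack([e(x) for e in self.experts], dim=-1)   # [..., d, E]
        return (outs * gates.unsqueeze(-2)).sum(dim=-1)


def grow_depth(layers: nn.ModuleList) -> nn.ModuleList:
    """Depth growth: place a copy of each layer right after the original,
    so the grown stack reads [L0, copy(L0), L1, copy(L1), ...]."""
    grown = []
    for layer in layers:
        grown.append(layer)
        grown.append(copy.deepcopy(layer))
    return nn.ModuleList(grown)


def grow_width(block: ToyMoEBlock, noise_std: float = 1e-2) -> ToyMoEBlock:
    """Width growth: duplicate every expert, perturb the duplicates with
    Gaussian noise to break symmetry, and widen the router to cover them."""
    d_model = block.router.in_features
    old_e = len(block.experts)
    new_block = ToyMoEBlock(d_model, 2 * old_e)

    for i, expert in enumerate(block.experts):
        new_block.experts[i] = copy.deepcopy(expert)   # keep the original expert
        noisy = copy.deepcopy(expert)                  # noisy duplicate
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(noise_std * torch.randn_like(p))
        new_block.experts[old_e + i] = noisy

    with torch.no_grad():
        # Reuse the old routing weights for both the original and its duplicate.
        new_block.router.weight[:old_e] = block.router.weight
        new_block.router.weight[old_e:] = block.router.weight
        new_block.router.bias[:old_e] = block.router.bias
        new_block.router.bias[old_e:] = block.router.bias
    return new_block


if __name__ == "__main__":
    layers = nn.ModuleList(ToyMoEBlock() for _ in range(2))
    deeper = grow_depth(layers)                             # 2 -> 4 layers
    wider = nn.ModuleList(grow_width(l) for l in deeper)    # 4 -> 8 experts each
    x = torch.randn(3, 64)
    for layer in wider:
        x = layer(x)
    print(x.shape)  # torch.Size([3, 64])
```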
Community
Training large language models from scratch demands enormous computational resources. Modern LLM development pipelines routinely produce smaller pre-trained model checkpoints from processes like hyperparameter tuning or preliminary evaluations. We demonstrate that pre-trained checkpoints, often considered disposable assets, can be effectively "recycled" to create larger and more capable models, thus preserving their significant sunk cost.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LongCat-Flash Technical Report (2025)
- Training Matryoshka Mixture-of-Experts for Elastic Inference-Time Expert Utilization (2025)
- LExI: Layer-Adaptive Active Experts for Efficient MoE Model Inference (2025)
- Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks (2025)
- From Acceleration to Saturation: Scaling Behavior of Bootstrapped Language Model Pretraining (2025)
- InfiR2: A Comprehensive FP8 Training Recipe for Reasoning-Enhanced Language Models (2025)
- Dirichlet-Prior Shaping: Guiding Expert Specialization in Upcycled MoEs (2025)