MiniPLM-Mamba-130M
MiniPLM-Mamba-130M is a 130M-parameter language model with the Mamba architecture, pre-trained from scratch on the Pile with the MiniPLM knowledge distillation framework, using the official Qwen1.5-1.8B as the teacher model. It demonstrates the flexibility of MiniPLM in performing knowledge distillation across model families. The pre-training corpus refined by Difference Sampling in MiniPLM is open-sourced for reproducibility.
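As a rough usage guide, the checkpoint should load through the standard Hugging Face transformers causal-LM interface; the repository id MiniLLM/MiniPLM-Mamba-130M below is an assumption inferred from the model name and may differ from the actual hosting path.

# Minimal loading/generation sketch (assumptions: the checkpoint is hosted
# as "MiniLLM/MiniPLM-Mamba-130M" and works with the standard transformers
# causal-LM API; adjust the repo id if it differs).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniPLM-Mamba-130M"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The Pile is a large-scale corpus for"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))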
MiniPLM: Knowledge Distillation for Pre-Training Language Models
Knowledge distillation (KD) is widely used to train small, high-performing student language models (LMs) with large teacher LMs. While KD is effective for fine-tuning, applying it during pre-training faces challenges in efficiency, flexibility, and effectiveness. Existing methods either incur high computational costs due to online teacher inference, require tokenization matching between teacher and student LMs, or risk losing the difficulty and diversity of the teacher-generated training data. To address these issues, MiniPLM is proposed: a KD framework for pre-training LMs that refines the training data distribution with the teacher's knowledge. For efficiency, MiniPLM performs teacher LM inference offline, enabling KD for multiple student LMs without adding training-time cost. For flexibility, MiniPLM operates solely on the training corpus, enabling KD across model families. For effectiveness, MiniPLM leverages the differences between large and small LMs to enhance the difficulty and diversity of the training data, helping student LMs acquire versatile and sophisticated knowledge.
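As a rough illustration of the data-refinement idea (a simplified sketch, not the authors' exact Difference Sampling recipe), the snippet below scores each document by how much more likely the large teacher LM finds it than a small reference LM from the same family, then keeps the top-scoring documents. The function names, the scoring rule, and the keep ratio are assumptions made for illustration.

# Illustrative difference-based data selection sketch (assumption: documents
# are ranked by the gap between teacher and reference LM log-likelihoods).
import torch

@torch.no_grad()
def sequence_log_prob(model, tokenizer, text, device="cpu"):
    """Total log-probability a causal LM assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    logits = model(ids).logits[:, :-1, :]      # positions predicting tokens 2..N
    targets = ids[:, 1:]
    log_probs = torch.log_softmax(logits, dim=-1)
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

def difference_sample(corpus, teacher, reference, tokenizer, keep_ratio=0.5):
    """Keep documents the large teacher prefers relative to the small reference LM."""
    scores = [
        sequence_log_prob(teacher, tokenizer, doc)
        - sequence_log_prob(reference, tokenizer, doc)
        for doc in corpus
    ]
    ranked = sorted(zip(scores, corpus), key=lambda pair: pair[0], reverse=True)
    n_keep = int(len(corpus) * keep_ratio)
    return [doc for _, doc in ranked[:n_keep]]

Because the teacher and the small reference LM share a tokenizer, this scoring step operates purely on the corpus text; the refined data can then be used to pre-train students with different tokenizers and architectures, such as this Mamba model.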
Evaluation
MiniPLM models achieve better performance than the baselines given the same training compute and scale well across model sizes (see the evaluation results in the paper for details).
Baseline Models
Citation
@article{miniplm,
  title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
  author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
  journal={arXiv preprint arXiv:2410.17215},
  year={2024}
}