- A Controllable Examination for Long-Context Language Models (arXiv:2506.02921, published Jun 3, 2025)
- Demons in the Detail: On Implementing Load Balancing Loss for Training Specialized Mixture-of-Expert Models (arXiv:2501.11873, published Jan 21, 2025)
- A Closer Look into Mixture-of-Experts in Large Language Models (arXiv:2406.18219, published Jun 26, 2024)
- Unlocking Continual Learning Abilities in Language Models (arXiv:2406.17245, published Jun 25, 2024)
- Emergent Mixture-of-Experts: Can Dense Pre-trained Transformers Benefit from Emergent Modular Structures? (arXiv:2310.10908, published Oct 17, 2023)