Linear-MoE: Linear Sequence Modeling Meets Mixture-of-Experts. arXiv:2503.05447, published Mar 7, 2025.
Liger: Linearizing Large Language Models to Gated Recurrent Structures. arXiv:2503.01496, published Mar 3, 2025.
LASP-2: Rethinking Sequence Parallelism for Linear Attention and Its Hybrid. arXiv:2502.07563, published Feb 11, 2025.