mid-training datasets
OctoThinker/MegaMath-Web-Pro-Max • Updated Jul 6, 2025 • 69.2M rows • 3.49k downloads • 36 likes
allenai/dolmino-mix-1124 • Updated Oct 29, 2025 • 170M rows • 25.4k downloads • 90 likes
nvidia/Nemotron-ClimbMix • Updated Oct 21, 2025 • 355M rows • 12.4k downloads • 88 likes
allenai/big-reasoning-traces • Updated Jun 30, 2025 • 677k rows • 157 downloads • 18 likes
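All of these datasets are hosted on the Hugging Face Hub, so they can be pulled with the standard `datasets` library. Below is a minimal sketch of streaming one of them for inspection; the `train` split name is an assumption, and some of the mixes (e.g. allenai/dolmino-mix-1124) may additionally require a specific config/subset name.

```python
from datasets import load_dataset

# Stream the corpus instead of downloading it in full; useful for large
# mid-training mixes with tens of millions of rows.
# NOTE: split="train" is an assumption; check the dataset card for the
# actual split and config names.
ds = load_dataset("OctoThinker/MegaMath-Web-Pro-Max", split="train", streaming=True)

# Peek at the first few records to see the raw fields.
for i, example in enumerate(ds):
    print(example)
    if i >= 2:
        break
```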
papers
DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search Paper • 2408.08152 • Published Aug 15, 2024 • 61
ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition Paper • 2402.15220 • Published Feb 23, 2024 • 20
Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models Paper • 2402.19427 • Published Feb 29, 2024 • 56
Simple linear attention language models balance the recall-throughput tradeoff Paper • 2402.18668 • Published Feb 28, 2024 • 20