Lost in the Noise: How Reasoning Models Fail with Contextual Distractors Paper • 2601.07226 • Published 3 days ago • 25
naver-hyperclovax/HyperCLOVAX-SEED-Think-32B Text Generation • 33B • Updated 9 days ago • 31.3k • 379
The CoT Encyclopedia: Analyzing, Predicting, and Controlling how a Reasoning Model will Think Paper • 2505.10185 • Published May 15, 2025 • 26
Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free Paper • 2410.10814 • Published Oct 14, 2024 • 51
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets Paper • 2307.10928 • Published Jul 20, 2023 • 13