The Valley of Code Reasoning: Scaling Knowledge Distillation of Large Language Models
Abstract
Research on distilling coding skills from large language models into smaller ones reveals a "valley of code reasoning": performance first decreases as distillation data grows, then improves sharply. It also shows that small models benefit more from easier questions during distillation.
Distilling the thinking traces of a Large Language Model (LLM) with reasoning capabilities into a smaller model has proven effective. Yet there is little work on how model performance scales with the quantity of distillation data. In this work, we study the scaling trend of distilling competitive coding skills into two small non-reasoning LLMs. We validate the hypothesis that there is a valley of code reasoning: downstream performance on competitive coding first drops as data quantity increases, then steadily rises in a sharper-than-log-linear fashion. Having identified this trend, we further fine-tune the models at two different distillation stages on the same data to ground conclusions about their respective learning phases. We find that, across stages in the low and medium-low data regimes, small models benefit significantly more from easier coding questions than from harder ones. We also find that, surprisingly, the correctness of outputs in the training data makes no difference to distillation outcomes. Our work is a step toward understanding the training dynamics of code reasoning distillation beyond intuition.
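For concreteness, the sketch below shows the general shape of the setup the abstract describes: supervised fine-tuning of a small student model on a teacher's thinking traces with a standard causal-LM objective. This is not the authors' code; the model name, data fields, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's recipe): fine-tune a small
# non-reasoning student model on teacher thinking traces for coding questions.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-1.5B"  # assumed small student model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Each example pairs a coding question with a teacher's full thinking trace
# plus solution; the trace need not be correct per the paper's finding.
traces = [
    {"question": "Given an array, return the maximum subarray sum.",
     "trace": "<think>Use Kadane's algorithm...</think>\ndef max_subarray(a): ..."},
]

def collate(batch):
    texts = [f"Question: {ex['question']}\nAnswer: {ex['trace']}" for ex in batch]
    enc = tokenizer(texts, padding=True, truncation=True, max_length=4096,
                    return_tensors="pt")
    labels = enc["input_ids"].clone()
    labels[enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(traces, batch_size=1, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for batch in loader:
    loss = model(**batch).loss  # cross-entropy on the teacher's trace tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Scaling the size of `traces` and checkpointing along the way is what would let one trace out the valley-shaped performance curve the paper reports.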
Community
When distilling reasoning into small models, performance doesn't rise smoothly with more data. Instead, it first drops before steadily climbing again. In this "valley", small models learn more from easy problems than from hard ones, and are insensitive to whether the training outputs are correct.
The following similar papers were recommended by the Semantic Scholar API:
- Beyond Scaling Law: A Data-Efficient Distillation Framework for Reasoning (2025)
- MobileLLM-R1: Exploring the Limits of Sub-Billion Language Model Reasoners with Open Training Recipes (2025)
- Influence Functions for Efficient Data Selection in Reasoning (2025)
- Merge-of-Thought Distillation (2025)
- Revealing the Power of Post-Training for Small Language Models via Knowledge Distillation (2025)
- Distilling Reasoning into Student LLMs: Local Naturalness for Selecting Teacher Data (2025)
- Long Chain-of-Thought Reasoning Across Languages (2025)