The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Abstract
Large Reasoning Models (LRMs) exhibit varying performance across task complexities, with limitations in exact computation and inconsistent reasoning, as assessed using controllable puzzle environments.
Recent generations of language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established math and coding benchmarks, emphasizing final-answer accuracy. However, this evaluation paradigm often suffers from contamination and does not provide insights into the reasoning traces. In this work, we systematically investigate these gaps using controllable puzzle environments that allow precise manipulation of complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs think. Through extensive experiments, we show that LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having token budget remaining. By comparing LRMs with their standard LLM counterparts under the same inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models outperform LRMs, (2) medium-complexity tasks where LRMs demonstrate an advantage, and (3) high-complexity tasks where both models face complete collapse. We find that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across scales. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models' computational behavior, shedding light on their strengths and limitations and raising questions about their reasoning capabilities.
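To make the "controllable complexity" idea concrete, here is a minimal sketch (not code from the paper; the specific puzzle choice and function names are illustrative assumptions) of a Tower-of-Hanoi-style environment in which difficulty is set by a single knob, the number of disks, while the rules and logical structure stay fixed, and a model's proposed move sequence can be checked step by step rather than only by its final answer.

```python
# Hypothetical sketch of a controllable puzzle environment: Tower of Hanoi,
# where complexity is controlled by n_disks and correctness is verifiable move by move.
from typing import List, Tuple

Move = Tuple[int, int]  # (source peg, target peg), pegs indexed 0..2

def initial_state(n_disks: int) -> List[List[int]]:
    """All disks start on peg 0, largest disk (n_disks) at the bottom."""
    return [list(range(n_disks, 0, -1)), [], []]

def apply_moves(n_disks: int, moves: List[Move]) -> bool:
    """Return True iff `moves` legally transfers every disk from peg 0 to peg 2."""
    pegs = initial_state(n_disks)
    for src, dst in moves:
        if not pegs[src]:
            return False                      # moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                      # larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n_disks, 0, -1))

def optimal_length(n_disks: int) -> int:
    """Minimum number of moves grows exponentially with complexity: 2^n - 1."""
    return 2 ** n_disks - 1
```

Sweeping `n_disks` changes only the required search depth, not the rules, which is what allows accuracy and reasoning effort to be plotted against problem complexity.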
Community
Scaling compute is helpful, but not enough to close the reasoning gaps.
Our findings challenge assumptions about reasoning model capabilities. Despite sophisticated self-reflection mechanisms learned through RL training, our results suggest that these models can't reliably follow explicit algorithmic steps and, importantly, can't generalize algorithmic reasoning beyond certain complexity thresholds.
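As one way to picture what "following algorithmic steps" means here (a hypothetical illustration, not the paper's evaluation code), a ground-truth recursive solver can be used to score a model's trace move by move, so failures show up as a divergence depth rather than a binary pass/fail.

```python
# Hypothetical reference solver for Tower of Hanoi; comparing a model's emitted
# move list against this output locates the first step where execution breaks down.
from typing import List, Tuple

def hanoi_moves(n: int, src: int = 0, aux: int = 1, dst: int = 2) -> List[Tuple[int, int]]:
    """Ground-truth move list for n disks: move n-1 to aux, largest to dst, n-1 onto it."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))
```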
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Reasoning Model is Stubborn: Diagnosing Instruction Overriding in Reasoning Models (2025)
- Computational Thinking Reasoning in Large Language Models (2025)
- ToTRL: Unlock LLM Tree-of-Thoughts Reasoning Potential through Puzzles Solving (2025)
- Think or Not? Exploring Thinking Efficiency in Large Reasoning Models via an Information-Theoretic Lens (2025)
- Scalable Chain of Thoughts via Elastic Reasoning (2025)
- Thought Manipulation: External Thought Can Be Efficient for Large Reasoning Models (2025)
- Reasoning Models Can Be Effective Without Thinking (2025)