MITS: Enhanced Tree Search Reasoning for LLMs via Pointwise Mutual Information
Abstract
Mutual Information Tree Search (MITS) uses information-theoretic principles to guide and evaluate reasoning paths in large language models, improving performance and efficiency.
Tree search has emerged as a representative framework for test-time reasoning with large language models (LLMs), exemplified by methods such as Tree-of-Thought and Monte Carlo Tree Search that explore multiple reasoning paths. However, it remains difficult to provide instant and reliable quantitative assessments of intermediate reasoning step quality, and extensive path exploration is computationally costly. To address this, we propose Mutual Information Tree Search (MITS), a novel framework that guides reasoning with information-theoretic principles. MITS introduces an effective scoring function based on pointwise mutual information (PMI), which enables step-wise evaluation of reasoning paths and search tree expansion via beam search without expensive look-ahead simulations, achieving superior reasoning performance while maintaining computational efficiency. The framework is complemented by an entropy-based dynamic sampling strategy that adaptively allocates computational resources to uncertain reasoning steps where exploration is most beneficial. For final prediction, MITS employs a weighted voting scheme that combines PMI scores with prediction consensus. Through comprehensive experiments on diverse reasoning benchmarks, MITS consistently surpasses baseline methods, establishing a principled and efficient framework for LLM reasoning.
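To make the PMI-based step scoring concrete, here is a minimal sketch. It assumes the standard PMI identity, PMI(step; context) = log p(step | context) − log p(step), where both log-probabilities would come from summing an LLM's token logprobs; the function and variable names (`pmi_score`, `beam_select`) are illustrative, not the paper's API.

```python
def pmi_score(logp_conditional, logp_unconditional):
    """PMI of a reasoning step with its context.

    PMI(step; context) = log p(step | context) - log p(step).
    Both arguments are total log-probabilities (sums over the step's
    tokens), e.g. obtained from an LLM's token logprobs.
    """
    return logp_conditional - logp_unconditional


def beam_select(candidates, beam_width):
    """Keep the top-`beam_width` candidate steps by PMI score.

    `candidates` is a list of (step_text, logp_conditional,
    logp_unconditional) tuples; no look-ahead simulation is needed,
    only the two log-probabilities already available at this step.
    """
    scored = [(pmi_score(lc, lu), step) for step, lc, lu in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [step for _, step in scored[:beam_width]]
```

A step that is much more likely given the question than on its own gets a high PMI, so the beam keeps steps that are specifically informative about the problem rather than merely fluent.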
Community
We introduce Mutual Information Tree Search (MITS), a new framework that makes large language models (LLMs) reason more effectively and efficiently. Unlike previous tree search methods that explore multiple reasoning paths but struggle with real-time quality assessment, MITS uses information-theoretic principles to evaluate reasoning steps instantly. Our approach employs pointwise mutual information (PMI) to score each step and combines it with entropy-based dynamic sampling that focuses computational resources where they're most needed. Experiments across diverse reasoning datasets show that MITS consistently outperforms baseline methods, offering a principled and computationally efficient solution for LLM reasoning.
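The entropy-based dynamic sampling and weighted voting described above can be sketched as follows. The exact allocation rule and vote combination are assumptions for illustration (a linear budget in normalized entropy, and summing PMI path scores per candidate answer); the paper's precise formulas may differ.

```python
import math
from collections import Counter


def step_entropy(probs):
    """Shannon entropy (natural log) of a candidate-step distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def dynamic_budget(probs, min_samples=2, max_samples=8):
    """Allocate more samples to uncertain steps.

    The budget grows linearly with normalized entropy: 0 for a
    deterministic step distribution, 1 for a uniform one.
    """
    n = len(probs)
    if n <= 1:
        return min_samples
    h = step_entropy(probs) / math.log(n)  # normalize to [0, 1]
    return min_samples + round(h * (max_samples - min_samples))


def weighted_vote(predictions):
    """Pick the final answer by PMI-weighted consensus.

    `predictions` is a list of (answer, pmi_path_score) pairs; summing
    scores per answer lets both score magnitude and the number of
    agreeing paths contribute to the winner.
    """
    totals = Counter()
    for answer, score in predictions:
        totals[answer] += score
    return max(totals, key=totals.get)
```

For example, a near-uniform step distribution receives the full sampling budget, while a confidently peaked one gets the minimum, concentrating computation where exploration helps most.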
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Adaptive Test-Time Reasoning via Reward-Guided Dual-Phase Search (2025)
- Dynamic Experts Search: Enhancing Reasoning in Mixture-of-Experts LLMs at Test Time (2025)
- PiCSAR: Probabilistic Confidence Selection And Ranking for Reasoning Chains (2025)
- Learning from Diverse Reasoning Paths with Routing and Collaboration (2025)
- DeepSearch: Overcome the Bottleneck of Reinforcement Learning with Verifiable Rewards via Monte Carlo Tree Search (2025)
- THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning (2025)
- From Implicit Exploration to Structured Reasoning: Leveraging Guideline and Refinement for LLMs (2025)