Solving Inequality Proofs with Large Language Models
Abstract
An investigation of inequality proving with large language models uncovers significant challenges in constructing rigorous proofs, revealing a gap between finding correct answers and generating valid step-wise solutions.
Inequality proving, crucial across diverse scientific and mathematical fields, tests advanced reasoning skills such as discovering tight bounds and strategic theorem application. This makes it a distinct, demanding frontier for large language models (LLMs), offering insights beyond general mathematical problem-solving. Progress in this area is hampered by existing datasets that are often scarce, synthetic, or rigidly formal. We address this by proposing an informal yet verifiable task formulation, recasting inequality proving into two automatically checkable subtasks: bound estimation and relation prediction. Building on this, we release IneqMath, an expert-curated dataset of Olympiad-level inequalities, including a test set and training corpus enriched with step-wise solutions and theorem annotations. We also develop a novel LLM-as-judge evaluation framework, combining a final-answer judge with four step-wise judges designed to detect common reasoning flaws. A systematic evaluation of 29 leading LLMs on IneqMath reveals a surprising reality: even top models like o1 achieve less than 10% overall accuracy under step-wise scrutiny; this is a drop of up to 65.5% from their accuracy considering only final answer equivalence. This discrepancy exposes fragile deductive chains and a critical gap for current LLMs between merely finding an answer and constructing a rigorous proof. Scaling model size and increasing test-time computation yield limited gains in overall proof correctness. Instead, our findings highlight promising research directions such as theorem-guided reasoning and self-refinement. Code and data are available at https://ineqmath.github.io/.
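To make the "automatically checkable" formulation concrete, below is a minimal sketch of how final answers for the two subtasks could be verified. The answer formats (a relation symbol from a fixed set, a numeric bound constant) and the helper names are illustrative assumptions, not the benchmark's actual grading code.

```python
# Minimal sketch of automatic final-answer checking for the two subtasks
# (bound estimation and relation prediction). Answer formats and helper
# names are assumptions for illustration, not IneqMath's grading code.
from fractions import Fraction

RELATIONS = {"<", "<=", "=", ">=", ">"}  # assumed relation vocabulary

def check_relation(predicted: str, gold: str) -> bool:
    """Relation prediction: did the model name the relation that actually holds?"""
    return predicted.strip() in RELATIONS and predicted.strip() == gold.strip()

def check_bound(predicted: str, gold: str) -> bool:
    """Bound estimation: did the model produce the correct (tight) constant?"""
    try:
        return Fraction(predicted) == Fraction(gold)
    except ValueError:
        return False  # non-numeric answers would need symbolic comparison instead

# Example: for "find the largest C such that a^2 + b^2 >= C*ab for all real a, b",
# the gold answer is C = 2.
print(check_bound("2", "2"))       # True
print(check_relation(">=", ">="))  # True
```

Checks like these grade only the final answer; the step-wise judges described below are what separate answer accuracy from overall proof accuracy.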
Community
Ever wonder if LLMs truly understand math proofs, or just guess answers? Our new 52-page study on IneqMath dives deep into solving Olympiad-level inequality proofs!
We introduce:
1️⃣ A novel "informal yet verifiable" task formulation that makes complex proofs accessible to LLMs while keeping them automatically checkable.
2️⃣ IneqMath: The first expert-curated benchmark of its kind, packed with Olympiad-level inequalities, step-by-step solutions, and theorem annotations.
3️⃣ An innovative LLM-as-judge framework that doesn't just check final answers but meticulously scrutinizes each reasoning step for common flaws (a minimal sketch of how the judges combine follows this list).
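As a rough illustration of how the final-answer judge and the four step-wise judges could be combined, here is a minimal sketch. The individual judge identifiers and the `query_judge` helper are hypothetical placeholders; only the aggregation rule (a proof counts as correct when the final answer is right and no step-wise judge flags a flaw) reflects the framework described in the paper.

```python
# Hypothetical sketch of aggregating one final-answer judge with four
# step-wise judges; judge names and query_judge() are placeholders.
from typing import List

def query_judge(judge_name: str, problem: str, solution: str) -> bool:
    """Placeholder: prompt an LLM with a judge-specific rubric; True = check passed."""
    raise NotImplementedError

# Hypothetical identifiers for the four step-wise judges targeting common flaws.
STEPWISE_JUDGES: List[str] = [
    "stepwise_judge_1",
    "stepwise_judge_2",
    "stepwise_judge_3",
    "stepwise_judge_4",
]

def is_overall_correct(problem: str, solution: str) -> bool:
    """Correct only if the final answer is right AND every step-wise judge passes."""
    if not query_judge("final_answer_judge", problem, solution):
        return False
    return all(query_judge(name, problem, solution) for name in STEPWISE_JUDGES)
```

Requiring every judge to pass is what drives the gap between final-answer accuracy and overall accuracy reported below.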
The Big Finding: Even top LLMs (like o1 and Grok 3 mini) see accuracy plummet by up to 65.5% (to <10% overall!) when we look beyond the final answer. This reveals a crucial gap: LLMs are often good at finding answers, but struggle to construct rigorous, sound proofs.
Our work uncovers these challenges and points towards promising directions such as theorem-guided reasoning and self-refinement.
Paper: https://arxiv.org/abs/2506.07927
Project: https://ineqmath.github.io/
Code: https://github.com/lupantech/ineqmath
Data: https://huggingface.co/datasets/AI4Math/IneqMath
Leaderboard: https://huggingface.co/spaces/AI4Math/IneqMath-Leaderboard
Visualization: https://ineqmath.github.io/#visualization
Librarian Bot (automated message): the following similar papers were recommended by the Semantic Scholar API.
- Can LLMs $\textit{understand}$ Math? -- Exploring the Pitfalls in Mathematical Reasoning (2025)
- DeepTheorem: Advancing LLM Reasoning for Theorem Proving Through Natural Language and Reinforcement Learning (2025)
- Let's Verify Math Questions Step by Step (2025)
- LIMOPro: Reasoning Refinement for Efficient and Effective Test-time Scaling (2025)
- RealMath: A Continuous Benchmark for Evaluating Language Models on Research-Level Mathematics (2025)
- rStar-Coder: Scaling Competitive Code Reasoning with a Large-Scale Verified Dataset (2025)
- Enumerate-Conjecture-Prove: Formally Solving Answer-Construction Problems in Math Competitions (2025)