AbstentionBench: Reasoning LLMs Fail on Unanswerable Questions
Abstract
AbstentionBench evaluates the ability of LLMs to abstain from answering uncertain or unanswerable questions, revealing that reasoning fine-tuning often degrades this capability.
For Large Language Models (LLMs) to be reliably deployed in both everyday and high-stakes domains, knowing when not to answer is as critical as answering correctly. Real-world user queries, which can be underspecified, ill-posed, or fundamentally unanswerable, require LLMs to reason about uncertainty and selectively abstain, i.e., refuse to answer definitively. However, abstention remains understudied, and there is no systematic framework for evaluating it in modern LLMs. In this work, we introduce AbstentionBench, a large-scale benchmark for holistically evaluating abstention across 20 diverse datasets, including questions with unknown answers, underspecification, false premises, subjective interpretations, and outdated information. Evaluating 20 frontier LLMs reveals that abstention is an unsolved problem, and one where scaling models is of little use. While recent reasoning LLMs have shown impressive results in complex problem solving, surprisingly, we find that reasoning fine-tuning degrades abstention (by 24% on average), even for the math and science domains on which reasoning models are explicitly trained. We find that while a carefully crafted system prompt can boost abstention in practice, it does not resolve models' fundamental inability to reason about uncertainty. We release AbstentionBench to foster research into advancing LLM reliability.
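To make the evaluation setup concrete, below is a minimal Python sketch of how abstention could be scored over a mix of answerable and unanswerable questions. The `Example` schema, the `is_abstention` keyword heuristic, and the `abstention_recall` metric are illustrative assumptions for this sketch only, not AbstentionBench's actual interface or judge (see the repository linked below for the real implementation).

```python
# Illustrative sketch (not AbstentionBench's actual code): score whether model
# responses abstain on questions labeled as unanswerable, then report recall.
from dataclasses import dataclass


@dataclass
class Example:
    question: str
    should_abstain: bool  # ground-truth label: True if the question warrants abstention
    response: str         # the model's answer, collected elsewhere


# Hypothetical stand-in for an abstention detector; a real benchmark would
# typically use a stronger judge than keyword matching.
ABSTAIN_MARKERS = ("i don't know", "cannot answer", "not enough information",
                   "unanswerable", "i'm not sure")


def is_abstention(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in ABSTAIN_MARKERS)


def abstention_recall(examples: list[Example]) -> float:
    """Fraction of should-abstain questions on which the model actually abstained."""
    unanswerable = [ex for ex in examples if ex.should_abstain]
    if not unanswerable:
        return float("nan")
    hits = sum(is_abstention(ex.response) for ex in unanswerable)
    return hits / len(unanswerable)


if __name__ == "__main__":
    demo = [
        Example("What is 2 + 2?", False, "4."),
        Example("What will this stock's price be tomorrow?", True,
                "I don't know; future prices are uncertain."),
        Example("Which year did Picasso paint the Mona Lisa?", True,
                "Picasso painted it in 1907."),  # false premise, model fails to abstain
    ]
    print(f"abstention recall: {abstention_recall(demo):.2f}")  # prints 0.50
```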
Community
A good language model should say “I don’t know” by reasoning about the limits of its knowledge. Our new work AbstentionBench carefully measures this overlooked skill in leading models, with an open codebase others can build on!
https://github.com/facebookresearch/AbstentionBench
We find that reasoning fine-tuning in frontier models degrades their ability to know when NOT to answer.
Related papers recommended by the Semantic Scholar API:
- BARREL: Boundary-Aware Reasoning for Factual and Reliable LRMs (2025)
- FalseReject: A Resource for Improving Contextual Safety and Mitigating Over-Refusals in LLMs via Structured Reasoning (2025)
- ConfQA: Answer Only If You Are Confident (2025)
- Scaling Reasoning can Improve Factuality in Large Language Models (2025)
- ReasoningShield: Content Safety Detection over Reasoning Traces of Large Reasoning Models (2025)
- Revisiting Uncertainty Estimation and Calibration of Large Language Models (2025)
- General-Reasoner: Advancing LLM Reasoning Across All Domains (2025)