arxiv:2506.07309

ConfQA: Answer Only If You Are Confident

Published on Jun 8
· Submitted by MaggieHuang on Jun 10
AI-generated summary

The ConfQA fine-tuning strategy reduces factual-statement hallucination in LLMs by 80%, using a dampening prompt and simple factual statements from knowledge graphs to improve confidence calibration and knowledge selection.

Abstract

Can we teach Large Language Models (LLMs) to refrain from hallucinating factual statements? In this paper we present a fine-tuning strategy that we call ConfQA, which can reduce the hallucination rate from 20-40% to under 5% across multiple factuality benchmarks. The core idea is simple: when the LLM answers a question correctly, it is trained to continue with the answer; otherwise, it is trained to admit "I am unsure". But there are two key factors that make the training highly effective. First, we introduce a dampening prompt "answer only if you are confident" to explicitly guide the behavior, without which hallucination remains as high as 15%-25%. Second, we leverage simple factual statements, specifically attribute values from knowledge graphs, to help LLMs calibrate their confidence, resulting in robust generalization across domains and question types. Building on this insight, we propose the Dual Neural Knowledge framework, which seamlessly selects between internally parameterized neural knowledge and externally recorded symbolic knowledge based on ConfQA's confidence. The framework enables potential accuracy to reach beyond 95%, while reducing unnecessary external retrievals by over 30%.
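
The core training recipe described in the abstract (keep the answer when the model is right, train it to abstain when it is wrong, under a dampening prompt) can be illustrated with a small sketch. The snippet below is only one reading of the abstract, not the authors' code: the generate and is_correct helpers, the exact dampener wording, and the abstain string are all assumptions.

# Hedged sketch of ConfQA-style training-data construction, assuming
# hypothetical generate() and is_correct() helpers; not the paper's implementation.

DAMPENER = "Answer only if you are confident; otherwise say you are unsure."
UNSURE = "I am unsure"

def build_confqa_example(question, gold_answer, generate, is_correct):
    """Turn one knowledge-graph QA pair into a ConfQA fine-tuning example."""
    prompt = f"{DAMPENER}\nQuestion: {question}"
    model_answer = generate(prompt)          # probe the base model's own knowledge
    if is_correct(model_answer, gold_answer):
        target = gold_answer                 # model knows it: train it to keep answering
    else:
        target = UNSURE                      # model is wrong: train it to abstain
    return {"prompt": prompt, "completion": target}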

Community

Paper author and submitter

This paper presents a comprehensive study of teaching Large Language Models (LLMs) to refrain from hallucinating factual statements, and also proposes the Dual Neural Knowledge framework, which seamlessly selects between internally parameterized neural knowledge and externally recorded symbolic knowledge based on ConfQA's confidence.

  • Reducing hallucinations from 20-40% to under 5% across multiple benchmarks with the dampening prompt "answer only if you are confident"
  • Maintaining similar accuracy while still reducing hallucinations by up to 10% even without the dampening prompt
  • Transferring well across domains and across short- and long-form answers, without regressing on general benchmarks
  • Triggering RAG efficiently and effectively, with high accuracy gains and low added latency (see the sketch below)
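
One possible reading of the Dual Neural Knowledge framework and the RAG-triggering result is a simple confidence-based router: serve the ConfQA model's answer when it does not abstain, and fall back to retrieval only when it does. The confqa_answer and rag_answer helpers below are hypothetical placeholders, not interfaces from the paper.

# Hedged sketch of confidence-gated routing between parametric knowledge
# and external retrieval; helper functions are illustrative placeholders.

UNSURE = "I am unsure"

def dual_neural_knowledge(question, confqa_answer, rag_answer):
    """Answer from the ConfQA model when confident, otherwise retrieve."""
    answer = confqa_answer(question)     # ConfQA-tuned model answers or abstains
    if UNSURE not in answer:
        return answer                    # confident: skip external retrieval entirely
    return rag_answer(question)          # unsure: consult externally recorded symbolic knowledge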
