arxiv:2503.08890

PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain Language Summaries Generation

Published on Mar 11
· Submitted by uzw on Mar 12

Abstract

Hallucinated outputs from language models pose risks in the medical domain, especially for lay audiences making health-related decisions. Existing factuality evaluation methods, such as entailment-based and question-answering (QA)-based approaches, struggle with plain language summary (PLS) generation due to the elaborative explanation phenomenon, which introduces external content (e.g., definitions, background, examples) absent from the source document to enhance comprehension. To address this, we introduce PlainQAFact, a framework trained on PlainFact, a fine-grained, human-annotated dataset, to evaluate the factuality of both source-simplified and elaboratively explained sentences. PlainQAFact first classifies the factuality type and then assesses factuality using a retrieval-augmented QA-based scoring method. Our approach is lightweight and computationally efficient. Empirical results show that existing factuality metrics fail to effectively evaluate factuality in PLS, especially for elaborative explanations, whereas PlainQAFact achieves state-of-the-art performance. We further analyze its effectiveness across external knowledge sources, answer extraction strategies, overlap measures, and document granularity levels, refining its overall factuality assessment.

Community

PlainQAFact is a retrieval-augmented, question-answering (QA)-based framework for evaluating the factuality of biomedical plain language summaries. PlainFact is a high-quality, human-annotated dataset with fine-grained explanation (i.e., added information) annotations.

🚀 🚀 🚀 To use our proposed metric, simply install it with `pip install plainqafact`.

💻 For more details on how to use our evaluation framework and the benchmark, please refer to the GitHub repo: https://github.com/zhiwenyou103/PlainQAFact
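
The repo's README documents the actual interface; purely as an illustration, a call might look like the sketch below. Every name in it (the `PlainQAFact` class, its `score` method, the keyword arguments) is an assumption, not the confirmed API, so check the repo before relying on it.

```python
# Hypothetical usage sketch only: the class name `PlainQAFact`, the `score`
# method, and its keyword arguments are assumptions for illustration, not
# the package's confirmed API. See the GitHub repo above for the real one.
from plainqafact import PlainQAFact  # assumed entry point after `pip install plainqafact`

scorer = PlainQAFact()  # assumed to load the factuality-type classifier and QA scorer

source = (
    "Aspirin inhibits platelet aggregation, lowering the risk of clot "
    "formation after certain cardiac events."
)
summary = "Aspirin, a common pain reliever, helps stop blood clots from forming."

# Assumed to classify each summary sentence (source-simplified vs. elaborative
# explanation) and return a retrieval-augmented QA-based factuality score.
print(scorer.score(source=source, summary=summary))
```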

📊 PlainFact is available on 🤗 Hugging Face now!
