arxiv:2504.01201

Medical large language models are easily distracted

Published on Apr 1
· Submitted by KrithikV on Apr 3
Abstract

Large language models (LLMs) have the potential to transform medicine, but real-world clinical scenarios contain extraneous information that can hinder performance. The rise of assistive technologies like ambient dictation, which automatically generates draft notes from live patient encounters, could introduce additional noise, making it crucial to assess the ability of LLMs to filter relevant data. To investigate this, we developed MedDistractQA, a benchmark using USMLE-style questions embedded with simulated real-world distractions. Our findings show that distracting statements (polysemous words with clinical meanings used in a non-clinical context, or references to unrelated health conditions) can reduce LLM accuracy by up to 17.9%. Commonly proposed solutions for improving model performance, such as retrieval-augmented generation (RAG) and medical fine-tuning, did not change this effect and in some cases introduced their own confounders, further degrading performance. Our findings suggest that LLMs natively lack the logical mechanisms necessary to distinguish relevant from irrelevant clinical information, posing challenges for real-world applications. MedDistractQA and our results highlight the need for robust mitigation strategies to enhance LLM resilience to extraneous information.
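The evaluation idea described above (inject a distractor into a clinical vignette, then compare accuracy against the clean version) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the questions, the distractor sentence, and the toy "model" are all invented for demonstration, with the toy model deliberately tripped up by the polysemous word "stroke".

```python
def inject_distractor(question: str, distractor: str) -> str:
    """Append an irrelevant, clinically-flavored sentence to the vignette."""
    return f"{question} {distractor}"


def accuracy(model, items):
    """Fraction of (question, answer) pairs the model answers correctly."""
    correct = sum(1 for question, answer in items if model(question) == answer)
    return correct / len(items)


# Hypothetical clean items; the correct answer to each is "B".
clean_items = [
    ("A 45-year-old presents with chest pain. Best next step?", "B"),
    ("A 60-year-old presents with dyspnea. Best next step?", "B"),
]

# A MedDistractQA-style distractor: "stroke" used in a non-clinical sense.
distractor = "The patient mentions their nephew swims the butterfly stroke."
distracted_items = [
    (inject_distractor(q, distractor), a) for q, a in clean_items
]


def toy_model(question: str) -> str:
    # Toy stand-in for an LLM: answers correctly unless the polysemous
    # word "stroke" appears anywhere in the prompt.
    return "A" if "stroke" in question else "B"


baseline = accuracy(toy_model, clean_items)
distracted = accuracy(toy_model, distracted_items)
drop = (baseline - distracted) * 100  # accuracy drop in percentage points
```

In the real benchmark the toy model would be replaced by calls to an actual LLM, and the drop would be averaged over the full question set rather than two examples.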

Community

Paper author Paper submitter

We develop MedDistractQA, a novel benchmark based on MedQA, aimed at evaluating the vulnerability of large language models to distractions within medical scenarios. We find that all tested large language models degrade significantly in the presence of distractors, with accuracies dropping by as much as 17.9%. We also demonstrate that RAG can often act as a distractor itself, hindering model performance and accuracy.

