MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning
Abstract
Large Language Models (LLMs) have shown impressive performance on existing medical question-answering benchmarks. This high performance makes it increasingly difficult to meaningfully evaluate and differentiate advanced methods. We present MedAgentsBench, a benchmark that focuses on challenging medical questions requiring multi-step clinical reasoning, diagnosis formulation, and treatment planning: scenarios where current models still struggle despite their strong performance on standard tests. Drawing from seven established medical datasets, our benchmark addresses three key limitations in existing evaluations: (1) the prevalence of straightforward questions where even base models achieve high performance, (2) inconsistent sampling and evaluation protocols across studies, and (3) lack of systematic analysis of the interplay between performance, cost, and inference time. Through experiments with various base models and reasoning methods, we demonstrate that the latest thinking models, DeepSeek R1 and OpenAI o3, exhibit exceptional performance in complex medical reasoning tasks. Additionally, advanced search-based agent methods offer promising performance-to-cost ratios compared to traditional approaches. Our analysis reveals substantial performance gaps between model families on complex questions and identifies optimal model selections for different computational constraints. Our benchmark and evaluation framework are publicly available at https://github.com/gersteinlab/medagents-benchmark.
Community
This paper addresses a critical gap in medical AI evaluation. While current LLMs perform well on standard medical tests, this work shows they still struggle with complex clinical reasoning. The benchmark specifically targets challenging scenarios requiring multi-step reasoning and offers valuable insights into model performance trade-offs, showing that thinking models such as DeepSeek R1 and OpenAI o3 significantly outperform traditional approaches on complex medical tasks. The analysis of cost-performance-time trade-offs provides practical guidance for researchers and practitioners selecting models for medical applications. This benchmark and methodology would benefit the Hugging Face community by establishing more rigorous standards for evaluating medical AI systems.