LiveCodeBench Pro: How Do Olympiad Medalists Judge LLMs in Competitive Programming?
Abstract
LLMs perform well on implementation-heavy competitive programming problems but struggle with nuanced algorithmic reasoning, as highlighted by LiveCodeBench Pro.
Recent reports claim that large language models (LLMs) now outperform elite humans in competitive programming. Drawing on the expertise of a group of medalists in international algorithmic contests, we revisit this claim, examining how LLMs differ from human experts and where limitations remain. We introduce LiveCodeBench Pro, a benchmark composed of problems from Codeforces, ICPC, and IOI that are continuously updated to reduce the likelihood of data contamination. A team of Olympiad medalists annotates every problem for algorithmic categories and conducts a line-by-line analysis of failed model-generated submissions. Using this new data and benchmark, we find that frontier models still have significant limitations: without external tools, the best model achieves only 53% pass@1 on medium-difficulty problems and 0% on hard problems, domains where expert humans still excel. We also find that LLMs succeed at implementation-heavy problems but struggle with nuanced algorithmic reasoning and complex case analysis, often generating confidently incorrect justifications. High performance appears largely driven by implementation precision and tool augmentation, not superior reasoning. LiveCodeBench Pro thus highlights the significant gap to human grandmaster levels, while offering fine-grained diagnostics to steer future improvements in code-centric LLM reasoning.
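For readers unfamiliar with the metric cited above, the sketch below shows the standard unbiased pass@k estimator introduced by Chen et al. (2021); the paper's headline numbers correspond to k = 1, where the estimator reduces to the fraction of problems solved on the first attempt. The function name and the example values are illustrative, not taken from the paper.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n -- total samples drawn for a problem
    c -- samples that pass all hidden tests
    k -- attempt budget being scored
    """
    if n - c < k:
        # Too few incorrect samples to fill k slots: at least one draw is correct.
        return 1.0
    # Probability that at least one of k drawn samples is correct.
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative only: with a single attempt per problem (k = 1), pass@1 is
# simply whether that attempt passed.
print(pass_at_k(n=1, c=0, k=1))  # 0.0 -> unsolved problem
print(pass_at_k(n=1, c=1, k=1))  # 1.0 -> solved problem
```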
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- rStar-Coder: Scaling Competitive Code Reasoning with a Large-Scale Verified Dataset (2025)
- ICPC-Eval: Probing the Frontiers of LLM Reasoning with Competitive Programming Contests (2025)
- OIBench: Benchmarking Strong Reasoning Models with Olympiad in Informatics (2025)
- LLMSR@XLLM25: An Empirical Study of LLM for Structural Reasoning (2025)
- Can LLMs Generate Reliable Test Case Generators? A Study on Competition-Level Programming Problems (2025)
- HeuriGym: An Agentic Benchmark for LLM-Crafted Heuristics in Combinatorial Optimization (2025)
- Breakpoint: Scalable evaluation of system-level reasoning in LLM code agents (2025)