Papers
arxiv:2511.17803

Pillar-0: A New Frontier for Radiology Foundation Models

Published on Nov 21
· Submitted by Kumar Krishna Agrawal on Nov 25
· Yala Lab
Abstract

Pillar-0, a radiology foundation model pretrained on diverse volumetric CT and MRI datasets, outperforms existing models across a wide range of findings and extends to tasks beyond its pretraining; the companion RATE framework extracts the structured labels used for training and evaluation.

AI-generated summary

Radiology plays an integral role in modern medicine, yet rising imaging volumes have far outpaced workforce growth. Foundation models offer a path toward assisting with the full spectrum of radiology tasks, but existing medical models remain limited: they process volumetric CT and MRI as low-fidelity 2D slices, discard critical grayscale contrast information, and lack evaluation frameworks that reflect real clinical practice. We introduce Pillar-0, a radiology foundation model pretrained on 42,990 abdomen-pelvis CTs, 86,411 chest CTs, 14,348 head CTs, and 11,543 breast MRIs from a large academic center, together with RATE, a scalable framework that extracts structured labels for 366 radiologic findings with near-perfect accuracy using LLMs. Across internal test sets of 14,230 abdomen-pelvis CTs, 10,646 chest CTs, 4,906 head CTs, and 1,585 breast MRIs, Pillar-0 establishes a new performance frontier, achieving mean AUROCs of 86.4, 88.0, 90.1, and 82.9, outperforming MedGemma (Google), MedImageInsight (Microsoft), Lingshu (Alibaba), and Merlin (Stanford) by 7.8-15.8 AUROC points and ranking best in 87.2% (319/366) of tasks. Pillar-0 similarly outperforms all baselines in an external validation on the Stanford Abdominal CT dataset, including Merlin (82.2 vs 80.6 AUROC). Pillar-0 extends to tasks beyond its pretraining, such as long-horizon lung cancer risk prediction, where it improves upon the state-of-the-art Sybil by 3.0 C-index points on NLST, and generalizes with gains of 5.9 (MGH) and 1.9 (CGMH). In brain hemorrhage detection, Pillar-0 achieved >95 AUROC using only 1/20th of the data of the next most sample-efficient baseline. Pillar-0 and RATE together provide an open, clinically rigorous foundation for building high-performance radiology systems, enabling applications that were previously infeasible due to computational, data, and evaluation constraints.
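The headline numbers above are mean AUROCs averaged over per-finding binary classification tasks (366 findings in total). The paper's own evaluation code lives in the rate-evals repository; the snippet below is only an illustrative sketch of that aggregation, assuming one label/score pair per finding, with AUROC computed via the rank-sum (Mann-Whitney U) statistic rather than any library the authors use.

```python
import numpy as np

def auroc(y_true, y_score):
    """AUROC via the Mann-Whitney U statistic, with average ranks for ties."""
    y_true = np.asarray(y_true, dtype=float)
    y_score = np.asarray(y_score, dtype=float)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    # tied scores share the average of their ranks
    for s in np.unique(y_score):
        mask = y_score == s
        ranks[mask] = ranks[mask].mean()
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def mean_auroc(per_finding):
    """Mean AUROC over a dict mapping finding name -> (labels, scores)."""
    return float(np.mean([auroc(y, s) for y, s in per_finding.values()]))
```

For example, a perfectly ranked finding scores 1.0 and a random one about 0.5; the reported 86.4 for abdomen-pelvis CT corresponds to a mean of 0.864 over its findings, scaled by 100.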

Community

Paper submitter

We introduce Pillar-0, a new radiology foundation model trained on over 150,000 volumetric CT and MRI scans, alongside RATE, an automated framework for labeling clinical findings. Pillar-0 significantly outperforms major competitor models (including those from Google, Microsoft, and Stanford) across 366 tasks, demonstrating superior accuracy, generalization, and extreme data efficiency in detecting conditions like brain hemorrhages and predicting lung cancer risk.

[Figure 1: teaser]

📜 ArXiv: https://arxiv.org/abs/2511.17803
🌐 Website: https://yalalab.github.io/pillar-0/
🤗 Model: https://huggingface.co/collections/YalaLab/pillar-0
🏗️ Pillar-Pretrain: https://github.com/YalaLab/pillar-pretrain
🎛️ Pillar-Finetune: https://github.com/YalaLab/pillar-finetune
🩻 Radiology-Vision-Engine: https://github.com/YalaLab/rave
🛠️ RATE: https://github.com/YalaLab/rate
📊 RATE-Evals: https://github.com/YalaLab/rate-evals

