Abstract
Pillar-0, a radiology foundation model pretrained on diverse volumetric imaging datasets, outperforms existing medical foundation models across 366 finding-detection tasks and extends to new applications, using RATE for scalable label extraction.
Radiology plays an integral role in modern medicine, yet rising imaging volumes have far outpaced workforce growth. Foundation models offer a path toward assisting with the full spectrum of radiology tasks, but existing medical models remain limited: they process volumetric CT and MRI as low-fidelity 2D slices, discard critical grayscale contrast information, and lack evaluation frameworks that reflect real clinical practice. We introduce Pillar-0, a radiology foundation model pretrained on 42,990 abdomen-pelvis CTs, 86,411 chest CTs, 14,348 head CTs, and 11,543 breast MRIs from a large academic center, together with RATE, a scalable framework that extracts structured labels for 366 radiologic findings with near-perfect accuracy using LLMs. Across internal test sets of 14,230 abdomen-pelvis CTs, 10,646 chest CTs, 4,906 head CTs, and 1,585 breast MRIs, Pillar-0 establishes a new performance frontier, achieving mean AUROCs of 86.4, 88.0, 90.1, and 82.9, outperforming MedGemma (Google), MedImageInsight (Microsoft), Lingshu (Alibaba), and Merlin (Stanford) by 7.8-15.8 AUROC points and ranking best in 87.2% (319/366) of tasks. Pillar-0 similarly outperforms all baselines in an external validation on the Stanford Abdominal CT dataset, including Merlin (82.2 vs 80.6 AUROC). Pillar-0 extends to tasks beyond its pretraining, such as long-horizon lung cancer risk prediction, where it improves upon the state-of-the-art Sybil by 3.0 C-index points on NLST and generalizes with gains of 5.9 (MGH) and 1.9 (CGMH). In brain hemorrhage detection, Pillar-0 achieved an AUROC above 95 using only 1/20th of the data required by the next most sample-efficient baseline. Together, Pillar-0 and RATE provide an open, clinically rigorous foundation for building high-performance radiology systems, enabling applications that were previously infeasible due to computational, data, and evaluation constraints.
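To make the headline metric concrete, the sketch below shows how a macro-averaged AUROC over many binary findings is typically computed. The finding names, labels, and scores are hypothetical placeholders, not the paper's data; only the metric itself (scikit-learn's `roc_auc_score`, averaged across findings) follows the evaluation described in the abstract.

```python
# Minimal sketch of per-finding AUROC evaluation (placeholder data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
findings = ["pleural_effusion", "pulmonary_nodule", "hepatic_steatosis"]  # hypothetical subset of the 366 tasks
labels = {f: rng.integers(0, 2, size=500) for f in findings}   # hypothetical RATE-extracted ground truth
scores = {f: rng.random(500) for f in findings}                # hypothetical model probabilities

per_finding = {f: roc_auc_score(labels[f], scores[f]) for f in findings}
mean_auroc = float(np.mean(list(per_finding.values())))

for f, auc in per_finding.items():
    print(f"{f}: AUROC = {100 * auc:.1f}")
print(f"Mean AUROC across findings: {100 * mean_auroc:.1f}")
```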
Community
We introduce Pillar-0, a new radiology foundation model trained on over 150,000 volumetric CT and MRI scans, alongside RATE, an automated framework for labeling clinical findings. Pillar-0 significantly outperforms major competitor models (including those from Google, Microsoft, and Stanford) across 366 tasks, demonstrating superior accuracy, generalization, and extreme data efficiency in detecting conditions like brain hemorrhages and predicting lung cancer risk.
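The abstract describes RATE as using LLMs to extract structured finding labels from reports. The following is a minimal sketch of that general idea under stated assumptions: the prompt, the `call_llm` helper, and the finding list are hypothetical illustrations, not RATE's actual implementation (see the RATE repository linked below for the real framework).

```python
# Hypothetical sketch of LLM-based structured label extraction from a report.
import json

FINDINGS = ["intracranial_hemorrhage", "midline_shift"]  # hypothetical subset of RATE's 366 findings

PROMPT_TEMPLATE = """You are labeling a radiology report.
For each finding below, answer "present", "absent", or "uncertain".
Findings: {findings}
Report: {report}
Respond with a JSON object mapping each finding to its answer."""

def extract_labels(report: str, call_llm) -> dict:
    """Ask an LLM for structured labels. `call_llm` is a hypothetical
    text-in/text-out wrapper around whatever LLM backend you use."""
    prompt = PROMPT_TEMPLATE.format(findings=", ".join(FINDINGS), report=report)
    return json.loads(call_llm(prompt))
```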
ArXiv: https://arxiv.org/abs/2511.17803
Website: https://yalalab.github.io/pillar-0/
Model: https://huggingface.co/collections/YalaLab/pillar-0
Pillar-Pretrain: https://github.com/YalaLab/pillar-pretrain
Pillar-Finetune: https://github.com/YalaLab/pillar-finetune
Radiology-Vision-Engine (RAVE): https://github.com/YalaLab/rave
RATE: https://github.com/YalaLab/rate
RATE-Evals: https://github.com/YalaLab/rate-evals
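As a starting point for the artifacts above, checkpoints can be fetched from the Hugging Face Hub with the `huggingface_hub` library. Note that the repository id below is an assumption inferred from the collection URL; check the collection page for the exact model repositories.

```python
# Sketch: download a Pillar-0 checkpoint from the Hugging Face Hub.
# NOTE: "YalaLab/pillar-0" is an assumed repo id inferred from the collection
# link above; substitute the actual model repo listed on the collection page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="YalaLab/pillar-0")
print(f"Model files downloaded to: {local_dir}")
```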
Related papers recommended by the Semantic Scholar API:
- X-WIN: Building Chest Radiograph World Model via Predictive Sensing (2025)
- SpineBench: A Clinically Salient, Level-Aware Benchmark Powered by the SpineMed-450k Corpus (2025)
- LAND: Lung and Nodule Diffusion for 3D Chest CT Synthesis with Anatomical Guidance (2025)
- Adapted Foundation Models for Breast MRI Triaging in Contrast-Enhanced and Non-Contrast Enhanced Protocols (2025)
- A Review of Longitudinal Radiology Report Generation: Dataset Composition, Methods, and Performance Evaluation (2025)
- Navigating Gigapixel Pathology Images with Large Multimodal Models (2025)
- Dolphin v1.0 Technical Report (2025)
