Towards a Unified Copernicus Foundation Model for Earth Vision
Abstract
Advances in Earth observation (EO) foundation models have unlocked the potential of big satellite data to learn generic representations from space, benefiting a wide range of downstream applications crucial to our planet. However, most existing efforts remain limited to fixed spectral sensors, focus solely on the Earth's surface, and overlook valuable metadata beyond imagery. In this work, we take a step towards next-generation EO foundation models with three key components: 1) Copernicus-Pretrain, a massive-scale pretraining dataset that integrates 18.7M aligned images from all major Copernicus Sentinel missions, spanning from the Earth's surface to its atmosphere; 2) Copernicus-FM, a unified foundation model capable of processing any spectral or non-spectral sensor modality using extended dynamic hypernetworks and flexible metadata encoding; and 3) Copernicus-Bench, a systematic evaluation benchmark with 15 hierarchical downstream tasks ranging from preprocessing to specialized applications for each Sentinel mission. Our dataset, model, and benchmark greatly improve the scalability, versatility, and multimodal adaptability of EO foundation models, while also creating new opportunities to connect EO, weather, and climate research. Code, datasets, and models are available at https://github.com/zhu-xlab/Copernicus-FM.
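To make the "dynamic hypernetwork" idea concrete, here is a minimal PyTorch sketch of a wavelength-conditioned patch embedding: a small hypernetwork maps each band's central wavelength and bandwidth (via Fourier features) to that band's patch-embedding kernel, so any combination of spectral bands can be embedded with shared weights. The module name, dimensions, and Fourier parameterization are illustrative assumptions, not the paper's released implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicPatchEmbed(nn.Module):
    """Sketch: generate per-band patch-embedding kernels from spectral metadata."""
    def __init__(self, embed_dim=768, patch_size=16, num_freqs=16):
        super().__init__()
        self.embed_dim = embed_dim
        self.patch_size = patch_size
        self.num_freqs = num_freqs
        # Hypernetwork: Fourier features of (wavelength, bandwidth) -> kernel weights.
        self.hyper = nn.Sequential(
            nn.Linear(4 * num_freqs, 256),
            nn.GELU(),
            nn.Linear(256, embed_dim * patch_size * patch_size),
        )

    def _fourier(self, x):
        # x: (C,) per-band metadata -> (C, 2 * num_freqs) sin/cos features.
        freqs = 2.0 ** torch.arange(self.num_freqs, device=x.device)
        ang = math.pi * x[:, None] * freqs[None, :]
        return torch.cat([ang.sin(), ang.cos()], dim=-1)

    def forward(self, img, wavelengths, bandwidths):
        # img: (B, C, H, W); wavelengths, bandwidths: (C,) spectral metadata.
        cond = torch.cat([self._fourier(wavelengths), self._fourier(bandwidths)], dim=-1)
        w = self.hyper(cond)  # (C, embed_dim * p * p), one kernel per band
        p = self.patch_size
        w = w.view(-1, self.embed_dim, p, p).permute(1, 0, 2, 3)  # (embed_dim, C, p, p)
        x = F.conv2d(img, w, stride=p)       # (B, embed_dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, embed_dim)
```

Because the kernels are generated from band metadata rather than stored per sensor, the same module can, in principle, embed Sentinel-1 SAR, Sentinel-2 multispectral, or atmospheric-composition inputs without sensor-specific stems.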
Community
Towards a Unified Copernicus Foundation Model for Earth Vision:
- Copernicus-Pretrain: A massive-scale pretraining dataset with 18.7M aligned images from all major Copernicus Sentinel missions, spanning from the Earth's surface to its atmosphere.
- Copernicus-FM: A unified foundation model capable of processing any spectral or non-spectral sensor modality using extended dynamic hypernetworks and flexible metadata encoding (a sketch of the metadata encoding follows this list).
- Copernicus-Bench: A systematic evaluation benchmark with 15 hierarchical downstream tasks ranging from preprocessing to specialized applications for each Sentinel mission.
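The "flexible metadata encoding" in the second component can be pictured as follows: a minimal sketch, assuming Fourier-feature encodings of scalar metadata (acquisition time, longitude, etc.) projected into the token dimension. The class name, the per-field encoder design, and the "skip when missing" convention are assumptions for illustration, not the released implementation.

```python
import math
import torch
import torch.nn as nn

class MetadataEncoder(nn.Module):
    """Sketch: Fourier-feature encoding of one scalar metadata field."""
    def __init__(self, embed_dim=768, num_freqs=32):
        super().__init__()
        self.num_freqs = num_freqs
        self.proj = nn.Linear(2 * num_freqs, embed_dim)

    def forward(self, value, period):
        # value: (B,) raw metadata; period: its natural cycle (e.g. 365.25 days).
        freqs = 2.0 ** torch.arange(self.num_freqs, device=value.device)
        ang = 2 * math.pi * (value[:, None] / period) * freqs[None, :]
        feat = torch.cat([ang.sin(), ang.cos()], dim=-1)  # (B, 2 * num_freqs)
        return self.proj(feat)                            # (B, embed_dim)

# Usage: encode acquisition time and longitude; when a field is unavailable,
# its embedding can simply be omitted (or replaced by a learnable "unknown"
# vector in a fuller design).
enc_time, enc_lon = MetadataEncoder(), MetadataEncoder()
t_emb = enc_time(torch.tensor([152.0]), period=365.25)  # day of year
lon_emb = enc_lon(torch.tensor([11.57]), period=360.0)  # degrees east
# tokens = tokens + (t_emb + lon_emb)[:, None, :]       # broadcast over patches
```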
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Panopticon: Advancing Any-Sensor Foundation Models for Earth Observation (2025)
- GeoLangBind: Unifying Earth Observation with Agglomerative Vision-Language Foundation Models (2025)
- GAIR: Improving Multimodal Geo-Foundation Model with Geo-Aligned Implicit Representations (2025)
- Parameter-Efficient Adaptation of Geospatial Foundation Models through Embedding Deflection (2025)
- Towards Scalable Foundation Model for Multi-modal and Hyperspectral Geospatial Data (2025)
- HiRes-FusedMIM: A High-Resolution RGB-DSM Pre-trained Model for Building-Level Remote Sensing Applications (2025)
- Beyond the Visible: Multispectral Vision-Language Learning for Earth Observation (2025)