
TerraMesh
A planetary‑scale, multimodal analysis‑ready dataset for Earth‑Observation foundation models.
TerraMesh merges data from Sentinel‑1 SAR, Sentinel‑2 optical, Copernicus DEM, NDVI and land‑cover sources into more than 9 million co‑registered patches ready for large‑scale representation learning.
Dataset to be released soon.
Samples from the TerraMesh dataset with seven spatiotemporally aligned modalities. Sentinel-2 L2A uses IRRG pseudo-coloring and Sentinel-1 RTC is visualized on a dB scale as VH-VV-VV/VH. The Copernicus DEM is scaled based on the image value range with an additional 10 meter buffer to highlight flat scenes.
Dataset organisation
The archive ships two top‑level splits, train/ and val/, each holding one folder per modality. terramesh.py includes the data loading code; see Usage.
TerraMesh
├── train
│ ├── DEM
│ ├── LULC
│ ├── NDVI
│ ├── S1GRD
│ ├── S1RTC
│ ├── S2L1C
│ ├── S2L2A
│ └── S2RGB
├── val
│ ├── DEM
│ └── ...
└── terramesh.py
Each folder includes up to 889 shard files, each containing up to 10240 samples. Samples from MajorTOM-Core are stored in shards following the pattern majortom_{split}_{id}.tar, while shards with SSL4EO-S12 samples start with ssl4eos12_.
Samples are stored as Zarr Zip files, which can be loaded with zarr (version <= 2.18) or xarray.open_zarr(). Each sample location includes seven modalities that share the same shard and sample name. Note that each sample includes only one Sentinel-1 version (S1GRD or S1RTC) because of different processing versions in the source datasets.
Each Zarr file includes aligned metadata, as demonstrated by this S1GRD example from sample ssl4eos12_val_0080385.zarr.zip:
<xarray.Dataset> Size: 283kB
Dimensions: (band: 2, time: 1, y: 264, x: 264)
Coordinates:
* band (band) <U2 16B "vv" "vh"
sample <U9 36B "0194630_1"
spatial_ref int64 8B 0
* time (time) datetime64[ns] 8B 2020-05-03T02:07:17
* x (x) float64 2kB 6.004e+05 6.004e+05 ... 6.03e+05 6.03e+05
* y (y) float64 2kB 4.275e+06 4.275e+06 ... 4.273e+06 4.273e+06
Data variables:
bands (time, band, y, x) float16 279kB -9.461 -10.77 ... -16.67
center_lat float64 8B 38.61
center_lon float64 8B -121.8
crs int64 8B 32610
file_id (time) <U67 268B "S1A_IW_GRDH_1SDV_20201105T020809_20201105T...
The Sentinel-2 modalities and LULC additionally provide a cloud_mask as metadata.
Description
TerraMesh fuses complementary optical, radar, topographic and thematic layers into pixel‑aligned 10 m cubes, allowing models to learn joint representations of land cover, vegetation dynamics and surface structure at planetary scale. The dataset is globally distributed and covers multiple years.
Performance evaluation
TerraMesh was used to pre-train TerraMind-B. On the six evaluated segmentation tasks from the PANGAEA benchmark, TerraMind‑B reaches an average mIoU of 66.6%, the best overall score with an average rank of 2.33. This amounts to roughly a 3 pp improvement over the next‑best open model (CROMA), underscoring the benefits of pre‑training on TerraMesh. Compared to an ablation model pre-trained only on SSL4EO-S12 locations, TerraMind-B performs about 1 pp better overall, with better global generalization on more remote tasks like CTM-SS. More details can be found in our paper.
Usage
Setup
Install the required packages with:
pip install huggingface_hub webdataset torch numpy albumentations fsspec braceexpand zarr==2.18.0 numcodecs==0.15.1
Important! The dataset was created using zarr==2.18.0 and numcodecs==0.15.1. Unfortunately, Zarr 3.0 has backwards-compatibility issues, and Zarr 2.18 is incompatible with NumCodecs >= 0.16.
Download
You can download the dataset with the Hugging Face CLI tool. Please note that the full dataset requires 16 TB of storage.
huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset --local-dir data/TerraMesh
If you would like to download only a subset of the data, you can specify it with --include.
# Only download val data
huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset --include "val/*" --local-dir data/TerraMesh
# Only download a single modality (e.g., S2L2A)
huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset --include "*/S2L2A/*" --local-dir data/TerraMesh
Data loader
We provide the data loading code in terramesh.py, which is downloaded together with the dataset. If you stream the data instead of downloading it, you can get the file separately via this link or with:
wget https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/terramesh.py
You can use the build_terramesh_dataset function to initialize a dataset, which uses the WebDataset package to load samples from the shard files. You can stream the data from Hugging Face using the URLs, or download the full dataset and pass a local path (e.g., data/TerraMesh/).
from terramesh import build_terramesh_dataset
from torch.utils.data import DataLoader

# If you only pass one modality, the modality is loaded with the "image" key
dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",  # Streaming or local path
    modalities=["S2L2A"],
    split="val",
    batch_size=8,
)
# Batch keys: ["__key__", "__url__", "image"]

# If you pass multiple modalities, the modalities are returned using the modality names as keys
dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",  # Streaming or local path
    modalities=["S2L2A", "S2L1C", "S2RGB", "S1GRD", "S1RTC", "DEM", "NDVI", "LULC"],
    split="val",
    batch_size=8,
)

# Set batch size to None because batching is handled by WebDataset.
dataloader = DataLoader(dataset, batch_size=None, num_workers=4)

# Iterate over the dataloader
for batch in dataloader:
    print("Batch keys:", list(batch.keys()))
    # Batch keys: ["__key__", "__url__", "S2L2A", "S2L1C", "S2RGB", "S1RTC", "DEM", "NDVI", "LULC"]
    # Because S1RTC and S1GRD are not present for all samples, each batch only includes one S1 version.
    print("Data shape:", batch["S2L2A"].shape)
    # Data shape: torch.Size([8, 12, 264, 264])
    # Dimensions: [batch, channel, h, w]. The code removes the time dim from the source data.
    break
Data transform
We provide some additional code for wrapping albumentations transform functions. We recommend albumentations because parameters are shared between all image modalities (e.g., the same random crop). However, it requires some wrapping to bring the data into the expected shape.
import albumentations as A
from albumentations.pytorch import ToTensorV2
from terramesh import build_terramesh_dataset, Transpose, MultimodalTransforms

# Define all image modalities
modalities = ["S2L2A", "S2L1C", "S2RGB", "S1GRD", "S1RTC", "DEM", "NDVI", "LULC"]

# Define a multimodal transform function that converts the data into the shape expected by albumentations
val_transform = MultimodalTransforms(
    transforms=A.Compose(
        [  # We use albumentations because of the shared transform between image modalities
            Transpose([1, 2, 0]),  # Convert data to channel last (expected by albumentations)
            A.CenterCrop(224, 224),  # Use center crop in the val split
            # A.RandomCrop(224, 224),  # Use random crop in the train split
            # A.D4(),  # Optionally, use random flipping and rotation for the train split
            ToTensorV2(),  # Convert to tensor and back to channel first
        ],
        is_check_shapes=False,  # Not needed because of aligned data in TerraMesh
        additional_targets={m: "image" for m in modalities},
    ),
    non_image_modalities=["__key__", "__url__"],  # Additional non-image keys
)

dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",
    modalities=modalities,
    split="val",
    transform=val_transform,
    batch_size=8,
)
If you only use a single modality, you don't need to specify additional_targets.
Returning metadata
You can pass return_metadata=True to build_terramesh_dataset() to load center longitude and latitude, timestamps, and the S2 cloud mask as additional metadata. The resulting batch keys include: ["__key__", "__url__", "S2L2A", "S1RTC", ..., "center_lon", "center_lat", "cloud_mask", "time_S2L2A", "time_S1RTC", ...].
Therefore, you need to update the transform if you use one:
...
    additional_targets={m: "image" for m in modalities + ["cloud_mask"]},
),
non_image_modalities=["__key__", "__url__", "center_lon", "center_lat"] + ["time_" + m for m in modalities],
For a single-modality dataset, "time" does not have a suffix, and the transform requires the following changes:
...
    additional_targets={"cloud_mask": "image"},
),
non_image_modalities=["__key__", "__url__", "center_lon", "center_lat", "time"],
Note that center points are not updated when random crop is used. The cloud mask provides the classes land (0), water (1), snow (2), thin cloud (3), thick cloud (4), cloud shadow (5), and no data (6). DEM does not return a time value, while LULC uses the S2 timestamp because the maps are augmented using the S2 cloud and ice mask. Time values are returned as integers but can be converted back to datetime with:
batch["time_S2L2A"].numpy().astype("datetime64[ns]")
If you have any issues with data loading, please create a discussion in the community tab and tag @blumenstiel.
Citation
If you use TerraMesh, please cite:
@article{blumenstiel2025terramesh,
title={TerraMesh: A planetary mosaic of multimodal Earth observation data},
author={Blumenstiel, Benedikt and Fraccaro, Paolo and Marsocci, Valerio and Jakubik, Johannes and Maurogiovanni, Stefano and Czerkawski, Mikolaj and Sedona, Rocco and Cavallaro, Gabriele and Brunschwiler, Thomas and Bernabe-Moreno, Juan and others},
journal={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
year={2025}
}
License
TerraMesh is released under the Creative Commons Attribution‑ShareAlike 4.0 (CC‑BY‑SA‑4.0) license.
Acknowledgements
TerraMesh is part of the FAST‑EO project funded by the European Space Agency Φ‑Lab (contract #4000143501/23/I‑DT).
The satellite data (S2L1C, S2L2A, S1GRD, S1RTC) is sourced from the SSL4EO‑S12 v1.1 (CC-BY-4.0) and MajorTOM‑Core (CC-BY-SA-4.0) datasets.
The LULC data is provided by ESRI, Impact Observatory, and Microsoft (CC-BY-4.0).
The cloud masks used for augmenting the LULC maps and provided as metadata are produced using the SEnSeIv2 model.
The DEM data is produced using Copernicus WorldDEM-30 © DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018, provided under COPERNICUS by the European Union and ESA; all rights reserved.
