---
license: etalab-2.0
pipeline_tag: image-segmentation
tags:
  - semantic segmentation
  - pytorch
  - landcover
model-index:
  - name: FLAIR-HUB_LPIS-I_swinbase-upernet
    results:
      - task:
          type: semantic-segmentation
        dataset:
          name: IGNF/FLAIR-HUB
          type: earth-observation-dataset
        metrics:
          - type: mIoU
            value: 35.76
            name: mIoU
          - type: OA
            value: 87.189
            name: Overall Accuracy
          - type: IoU
            value: 47.65
            name: IoU grasses
          - type: IoU
            value: 65.72
            name: IoU wheat
          - type: IoU
            value: 45.99
            name: IoU barley
          - type: IoU
            value: 74.46
            name: IoU maize
          - type: IoU
            value: 13.98
            name: IoU other cereals
          - type: IoU
            value: 0.00
            name: IoU rice
          - type: IoU
            value: 56.98
            name: IoU flax/hemp/tobacco
          - type: IoU
            value: 44.07
            name: IoU sunflower
          - type: IoU
            value: 81.60
            name: IoU rapeseed
          - type: IoU
            value: 0.00
            name: IoU other oilseed crops
          - type: IoU
            value: 51.80
            name: IoU soy
          - type: IoU
            value: 8.65
            name: IoU other protein crops
          - type: IoU
            value: 28.25
            name: IoU fodder legumes
          - type: IoU
            value: 75.18
            name: IoU beetroots
          - type: IoU
            value: 7.18
            name: IoU potatoes
          - type: IoU
            value: 22.77
            name: IoU other arable crops
          - type: IoU
            value: 33.02
            name: IoU vineyard
          - type: IoU
            value: 14.16
            name: IoU olive groves
          - type: IoU
            value: 27.82
            name: IoU fruit orchards
          - type: IoU
            value: 29.83
            name: IoU nut orchards
          - type: IoU
            value: 0.27
            name: IoU other permanent crops
          - type: IoU
            value: 5.49
            name: IoU mixed crops
          - type: IoU
            value: 87.62
            name: IoU background
library_name: pytorch
---

🌐 FLAIR-HUB Model Collection

  • Trained on: FLAIR-HUB dataset 🔗
  • Available modalities: Aerial images, SPOT images, Topographic info, Sentinel-2 yearly time-series, Sentinel-1 yearly time-series, Historical aerial images
  • Encoders: ConvNeXTV2, Swin (Tiny, Small, Base, Large)
  • Decoders: UNet, UPerNet
  • Tasks: Land-cover mapping (LC), Crop-type mapping (LPIS)
  • Class nomenclature: 15 classes for LC, 23 classes for LPIS
Models in the collection (🆔 Model ID): LC-A, LC-D, LC-F, LC-G, LC-I, LC-L, LPIS-A, LPIS-F, LPIS-I, LPIS-J. Each model targets either 🗺️ land-cover or 🌾 crop-type mapping and is trained on its own combination of input modalities (🛩️ aerial, ⛰️ elevation, 🛰️ SPOT, 🛰️ S2 time-series, 🛰️ S1 time-series, 🛩️ historical aerial).

🔍 Model: FLAIR-HUB_LPIS-I_swinbase-upernet

  • Encoder: swin_base_patch4_window12_384
  • Decoder: upernet
  • Metrics: mIoU 35.76%, O.A. 87.19%, F-score 46.66%, Precision 52.77%, Recall 44.45%
  • Params.: 97.5 M
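
The encoder name above matches a standard `timm` model identifier. Below is a minimal, hypothetical sketch of instantiating it as a multi-scale feature extractor (the FLAIR-HUB training code and its UPerNet decoder are not reproduced here; a `timm` version with `features_only` support for Swin models is assumed):

```python
import timm
import torch

# Hypothetical sketch, not the FLAIR-HUB code: build the Swin-Base backbone
# named in this card as a multi-scale feature extractor.
encoder = timm.create_model(
    "swin_base_patch4_window12_384",
    pretrained=False,          # True would download ImageNet-pretrained weights
    features_only=True,
)

x = torch.randn(1, 3, 384, 384)        # one dummy 384x384 patch
features = encoder(x)                  # one feature map per Swin stage
print([f.shape for f in features])     # channel dims 128 / 256 / 512 / 1024
                                       # (layout may be NHWC depending on the timm version)

# A UPerNet decoder then fuses these multi-scale maps (pyramid pooling on the
# deepest one plus FPN-style fusion) and predicts one of the 23 LPIS classes per pixel.
```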

General Information


Training Config Hyperparameters

- Model architecture: swin_base_patch4_window12_384-upernet
- Optimizer: AdamW (betas=[0.9, 0.999], weight_decay=0.01)
- Learning rate: 5e-5
- Scheduler: one_cycle_lr (warmup_fraction=0.2)
- Epochs: 150
- Batch size: 5
- Seed: 2025
- Early stopping: patience 20, monitor val_miou (mode=max)
- Class weights:
    - default: 1.0
    - masked classes [clear cut, ligneous, mixed, other]: weight = 0
- Input channels:
    - SPOT_RGBI: [4, 1, 2]
    - SENTINEL2_TS: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    - SENTINEL1-ASC_TS: [1, 2]
    - SENTINEL1-DESC_TS: [1, 2]
- Input normalization (custom):
    - SPOT_RGBI:
        mean: [1137.03, 433.26, 508.75]
        std:  [543.11, 312.76, 284.61]
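
These settings map onto standard PyTorch components; the sketch below is a minimal illustration under that assumption (the placeholder model, loader size, and masked-class indices are illustrative, not taken from the FLAIR-HUB configuration files):

```python
import torch
import torch.nn as nn

# Placeholders so the sketch runs; the real network is the Swin-Base + UPerNet
# model described in this card, and steps_per_epoch would be len(train_loader).
model = nn.Conv2d(3, 23, kernel_size=1)
steps_per_epoch = 100

optimizer = torch.optim.AdamW(
    model.parameters(), lr=5e-5, betas=(0.9, 0.999), weight_decay=0.01
)

# "one_cycle_lr" with warmup_fraction=0.2 corresponds to OneCycleLR's pct_start.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=5e-5, epochs=150,
    steps_per_epoch=steps_per_epoch, pct_start=0.2,
)

# Class weights: 1.0 by default, 0.0 for the masked classes so they do not
# contribute to the loss (the indices used here are purely illustrative).
class_weights = torch.ones(23)
class_weights[[0, 1, 2, 3]] = 0.0
criterion = nn.CrossEntropyLoss(weight=class_weights)

# Custom normalization of the selected SPOT_RGBI channels [4, 1, 2].
spot_mean = torch.tensor([1137.03, 433.26, 508.75]).view(3, 1, 1)
spot_std = torch.tensor([543.11, 312.76, 284.61]).view(3, 1, 1)
spot_patch = torch.rand(3, 512, 512) * 2000.0   # dummy SPOT patch
spot_patch = (spot_patch - spot_mean) / spot_std
```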

Training Data

- Train patches: 152,225
- Validation patches: 38,175
- Test patches: 50,700
Class distribution (figure).

Training Logging

Training logging (figure).

Metrics

| Metric | Value |
|---|---|
| mIoU | 35.76% |
| Overall Accuracy | 87.19% |
| F-score | 46.66% |
| Precision | 52.77% |
| Recall | 44.45% |
| Class | IoU (%) | F-score (%) | Precision (%) | Recall (%) |
|---|---|---|---|---|
| grasses | 47.65 | 64.54 | 68.36 | 61.13 |
| wheat | 65.72 | 79.32 | 76.87 | 81.93 |
| barley | 45.99 | 63.00 | 69.21 | 57.82 |
| maize | 74.46 | 85.36 | 79.16 | 92.61 |
| other cereals | 13.98 | 24.54 | 26.33 | 22.97 |
| rice | 0.00 | 0.00 | 0.00 | 0.00 |
| flax/hemp/tobacco | 56.98 | 72.59 | 85.52 | 63.06 |
| sunflower | 44.07 | 61.17 | 62.25 | 60.14 |
| rapeseed | 81.60 | 89.87 | 86.69 | 93.29 |
| other oilseed crops | 0.00 | 0.00 | 0.00 | 0.00 |
| soy | 51.80 | 68.24 | 75.15 | 62.50 |
| other protein crops | 8.65 | 15.93 | 18.03 | 14.26 |
| fodder legumes | 28.25 | 44.05 | 50.58 | 39.01 |
| beetroots | 75.18 | 85.83 | 91.19 | 81.07 |
| potatoes | 7.18 | 13.41 | 51.09 | 7.71 |
| other arable crops | 22.77 | 37.10 | 32.97 | 42.41 |
| vineyard | 33.02 | 49.64 | 58.03 | 43.37 |
| olive groves | 14.16 | 24.80 | 25.63 | 24.02 |
| fruit orchards | 27.82 | 43.53 | 49.41 | 38.90 |
| nut orchards | 29.83 | 45.95 | 68.55 | 34.56 |
| other permanent crops | 0.27 | 0.53 | 20.92 | 0.27 |
| mixed crops | 5.49 | 10.42 | 25.67 | 6.53 |
| background | 87.62 | 93.40 | 92.01 | 94.84 |
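
For reference, these scores follow the usual confusion-matrix definitions. A short sketch (not the FLAIR-HUB evaluation code) of how per-class IoU, mIoU and overall accuracy are derived:

```python
import numpy as np

def iou_miou_oa(conf_mat: np.ndarray):
    """Per-class IoU, mean IoU and overall accuracy from a confusion matrix
    (rows = ground truth, columns = predictions)."""
    tp = np.diag(conf_mat).astype(float)
    fp = conf_mat.sum(axis=0) - tp           # predicted as the class but wrong
    fn = conf_mat.sum(axis=1) - tp           # belongs to the class but missed
    iou = tp / np.maximum(tp + fp + fn, 1)   # guard against empty classes
    oa = tp.sum() / conf_mat.sum()
    return iou, iou.mean(), oa
```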

Inference

Aerial ROI (image).

Model inference over the same ROI (image).
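
No inference code is bundled with this card; the sketch below shows one plausible way to apply a loaded checkpoint to an ROI larger than the 384-pixel training patch size, by averaging logits over overlapping tiles (the tile size, stride, and model object are assumptions):

```python
import torch

@torch.no_grad()
def sliding_window_predict(model, image, num_classes=23, patch=384, stride=192):
    """Tile a (C, H, W) image, average overlapping logits, return the argmax map.
    Illustrative only; assumes H and W are at least `patch` pixels."""
    model.eval()
    _, H, W = image.shape
    logits = torch.zeros(num_classes, H, W)
    counts = torch.zeros(1, H, W)
    ys = sorted({*range(0, H - patch + 1, stride), H - patch})
    xs = sorted({*range(0, W - patch + 1, stride), W - patch})
    for y in ys:
        for x in xs:
            tile = image[:, y:y + patch, x:x + patch].unsqueeze(0)
            out = model(tile)[0]                      # (num_classes, patch, patch)
            logits[:, y:y + patch, x:x + patch] += out
            counts[:, y:y + patch, x:x + patch] += 1
    return (logits / counts).argmax(dim=0)            # (H, W) class-index map
```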

Cite

BibTeX:

@article{ign2025flairhub,
  doi = {10.48550/arXiv.2506.07080},
  url = {https://arxiv.org/abs/2506.07080},
  author = {Garioud, Anatol and Giordano, Sébastien and David, Nicolas and Gonthier, Nicolas},
  title = {FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping},
  publisher = {arXiv},
  year = {2025}
}

APA:

Garioud, A., Giordano, S., David, N., & Gonthier, N. (2025). FLAIR-HUB: Large-scale Multimodal Dataset for Land Cover and Crop Mapping. arXiv. https://doi.org/10.48550/arXiv.2506.07080