MultiTask ConvLSTM for Precipitation Prediction

This repository contains two MultiTask ConvLSTM models:

  • veg/: Model trained with vegetation input variables
  • noveg/: Model trained without vegetation input variables

Both directories include:

  • convlstm.py: base ConvLSTM layers
  • model.py: MultiTask ConvLSTM model definition
  • example_inference.py: inference script
  • data/: example test .pth files

These scripts are provided for reproducibility of the model architecture and workflow. Exact runtime and performance may vary depending on hardware.
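
The ConvLSTM layers themselves live in convlstm.py, which is not reproduced here. As a rough orientation, a single ConvLSTM cell in the standard formulation looks like the sketch below; the class and argument names are illustrative, not the repository's actual API:

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell sketch: LSTM gates computed with 2D
    convolutions, so hidden and cell states keep their spatial
    (H, W) layout."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2  # preserve spatial size
        # One convolution produces all four gates (input, forget, cell, output).
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=padding)

    def forward(self, x, state):
        h, c = state  # hidden and cell state, each (B, hidden, H, W)
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)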

Example Data

We provide large, preprocessed test .pth files so you can run the inference script immediately, without any preprocessing. These files are already preprocessed and normalized from the ECMWF ERA5 reanalysis data.

Each .pth file loads as a list of batches:

  • X_batch: shape (B, T_in, C_in, H*W)
  • y_batch: shape (B, T_out, C_out, H*W)
  • y_zero_batch: shape (B, T_out, C_out, H*W)

with H=81, W=97. Inside evaluate(...), these are reshaped to (B, T, C, H, W).
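
To inspect a batch yourself, the files can be opened directly with torch.load. The snippet below is a minimal sketch; the file name and the assumed (X, y, y_zero) per-batch layout are illustrative, so adjust them to match the actual contents of data/:

import torch

H, W = 81, 97

# "data/test_batches.pth" is a hypothetical file name; use an actual file from data/.
batches = torch.load("data/test_batches.pth")

# Assumed per-batch layout (X, y, y_zero); adjust if the real structure differs.
X_batch, y_batch, y_zero_batch = batches[0]

# Restore the flattened spatial axis to a 2D grid, as evaluate(...) does internally.
B, T_in, C_in, _ = X_batch.shape
X_grid = X_batch.view(B, T_in, C_in, H, W)
print(X_grid.shape)  # torch.Size([B, T_in, C_in, 81, 97])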


How to Use

Ensure all files are in the correct directory, then run example_inference.py as follows.

1 Get the repo

git clone https://huggingface.co//MultiTaskConvLSTM
cd MultiTaskConvLSTM

2 Install minimal deps

pip install -r requirements.txt

3 Run inference (choose one variant)

python veg/example_inference.py

or

python noveg/example_inference.py
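
The provided scripts handle model construction, checkpoint loading, and evaluation end to end. For orientation, a bare-bones inference pass typically looks like the sketch below; the class name MultiTaskConvLSTM, the checkpoint path, and the batch file name are assumptions, so check model.py and example_inference.py for the real ones:

import torch
from model import MultiTaskConvLSTM  # hypothetical class name; see model.py

H, W = 81, 97
device = "cuda" if torch.cuda.is_available() else "cpu"

# "checkpoint.pth" and the default constructor are assumptions for illustration.
model = MultiTaskConvLSTM().to(device)
model.load_state_dict(torch.load("checkpoint.pth", map_location=device))
model.eval()

batches = torch.load("data/test_batches.pth")  # hypothetical file name
X_batch, _, _ = batches[0]                     # assumed (X, y, y_zero) layout
B, T_in, C_in, _ = X_batch.shape

with torch.no_grad():
    X = X_batch.view(B, T_in, C_in, H, W).to(device)  # restore (B, T, C, H, W)
    y_pred = model(X)
    print(y_pred.shape)  # expected roughly (B, T_out, C_out, 81, 97)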

Citation

If you use this model, please cite:

Lilly Horvath-Makkos (2025). [title]. [journal].

BibTeX:

@article{horvathmakkos2025,
  title={Title},
  author={Horvath-Makkos, Lilly},
  journal={Journal},
  year={2025}
}


Evaluation results

All metrics are self-reported on ERA5-Land Amazon Basin (2021–2023):

  Metric                       Value
  mean_squared_error           0.280
  spearman_correlation         0.870
  pearson_correlation          0.790
  kendall_tau                  0.700
  nash_sutcliffe_efficiency    0.620
  f1                           0.820
  accuracy                     0.900
  precision                    0.900
  ROC-AUC                      0.970
  recall                       0.750