
MAD: Multimodal Actions Dataset (Gated Access)

Summary
MAD is a large-scale video–language dataset created during my PhD research at KAUST (Image and Video Understanding Lab).
It provides curated annotations and pre-computed feature representations designed to support tasks such as video grounding, action understanding, and multimodal retrieval.

Maintainer: @soldelli
Contact: [email protected]

Repository Structure

data/
├── metadata.csv          # general information on dataset items
├── annotations/          # annotations in JSON/CSV format
│   ├── split1/           # example: train/val for split1
│   └── split2/           # example: train/val for split2
└── features/             # pre-computed video features (HDF5)
    ├── video1.h5
    ├── video2.h5
    └── ...
  • annotations/: Contains annotation files (JSON/CSV) for multiple splits.
  • features/: Contains pre-computed features stored in HDF5 format.
  • metadata.csv: A high-level index of dataset items.
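
Once you have a local copy laid out as above, the metadata and features can be inspected directly. The snippet below is a minimal sketch: the file paths follow the layout shown here, but the key names inside each .h5 file are illustrative assumptions and may differ in the actual release.

import h5py
import pandas as pd

# High-level index of dataset items
meta = pd.read_csv("data/metadata.csv")
print(meta.head())

# Inspect one pre-computed feature file; the stored keys depend on the release
with h5py.File("data/features/video1.h5", "r") as f:
    for key in f:
        print(key, f[key])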

Access & Terms of Use

Access to MAD is gated. By requesting access you confirm:

  • You are a bona fide researcher and will use MAD only for non-commercial research unless explicitly permitted otherwise.
  • You (and your institution) agree to the MAD NDA / Terms of Use stated below.
  • You will not redistribute MAD, its features, or annotations in a way that allows reconstruction of the dataset.
  • You will properly cite MAD in any published work using it.

MAD NDA / Terms of Use (binding)

  1. No redistribution of MAD or derivative datasets that allow reconstruction.
  2. Research-only use, unless separately licensed for commercial purposes.
  3. No re-identification attempts or misuse of the data.
  4. Proper citation of MAD in all publications.
  5. Users are responsible for data security and must prevent unauthorized sharing.
  6. Report any misuse or data concerns to the maintainers.
  7. Institutional responsibility applies if accessed via a university or lab account.

⚠️ By checking “I agree” in the access request form, you agree to these conditions.

How to Request Access

  1. On this page, click “Request access”.
  2. Fill in the short form:
    • Full name
    • Institutional email address
    • Affiliation / Lab / Research group
    • Intended use / project description
    • Commercial use? (Yes/No)
    • Confirm your Hugging Face username
    • Checkbox: “I have read and agree to the MAD NDA / Terms of Use”
  3. Once approved, you will be able to download MAD directly.
    • If auto-approve is enabled, access is granted immediately.
    • If manual approval is used, requests will be reviewed periodically.

Download Instructions

After approval, authenticate with your Hugging Face token:

huggingface-cli login
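
If you prefer to authenticate from Python rather than the CLI, the huggingface_hub login helper works as well; it prompts for the same access token:

from huggingface_hub import login

# Equivalent to `huggingface-cli login`; paste your Hugging Face access token when prompted
login()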

Option 1: Programmatic access with datasets

from datasets import load_dataset
ds = load_dataset("soldelli/MAD", split="train")  # requires approval & token
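
Continuing from the snippet above, you can quickly check what the split contains; the exact column names depend on how the release is configured and are not guaranteed here:

print(ds)     # number of rows and column names (depend on the release)
print(ds[0])  # first annotated sample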


Option 2: Direct file download

from huggingface_hub import hf_hub_download

# Example: download one annotation file (repo_type="dataset" is required for dataset repos)
path = hf_hub_download("soldelli/MAD", "data/annotations/split1/train.json", repo_type="dataset")


Option 3: Snapshot the whole repo

from huggingface_hub import snapshot_download

snapshot_download("soldelli/MAD", repo_type="dataset")
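
If you only need part of the repository, snapshot_download also accepts allow_patterns for a filtered download; the patterns below assume the directory layout shown under "Repository Structure":

from huggingface_hub import snapshot_download

# Fetch only the metadata and annotations, skipping the large feature files
snapshot_download(
    "soldelli/MAD",
    repo_type="dataset",
    allow_patterns=["data/metadata.csv", "data/annotations/*"],
)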

Citation

If you use MAD, please cite:

@InProceedings{Soldan_2022_CVPR,
    author    = {Soldan, Mattia and Pardo, Alejandro and Alc\'azar, Juan Le\'on and Caba, Fabian and Zhao, Chen and Giancola, Silvio and Ghanem, Bernard},
    title     = {MAD: A Scalable Dataset for Language Grounding in Videos From Movie Audio Descriptions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {5026-5035}
}