MAD: Multimodal Actions Dataset (Gated Access)
Summary
MAD is a large-scale video–language dataset created during my PhD research at KAUST (Image and Video Understanding Lab).
It provides curated annotations and pre-computed feature representations designed to support tasks such as video grounding, action understanding, and multimodal retrieval.
Maintainer: @soldelli
Contact: [email protected]
Repository Structure
data/
├── metadata.csv # general information on dataset items
├── annotations/ # annotations in JSON/CSV format
│ ├── split1/ # example: train/val for split1
│ └── split2/ # example: train/val for split2
└── features/ # pre-computed video features (HDF5)
├── video1.h5
├── video2.h5
└── ...
annotations/
: Contains annotation files (JSON/CSV) for multiple splits.
features/
: Contains pre-computed features stored in HDF5 format.
metadata.csv
: A high-level index of dataset items.
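Each .h5 file stores the pre-computed features for one video. The internal dataset keys are not documented here, so the sketch below (assuming h5py is installed, and using a hypothetical file path) discovers the layout at runtime rather than hard-coding a key name:

import h5py

# Hypothetical path; substitute any file from data/features/.
with h5py.File("data/features/video1.h5", "r") as f:
    f.visit(print)              # list every group/dataset name in the file
    key = next(iter(f.keys()))  # pick the first top-level entry
    feats = f[key][:]           # assumes that entry is a dataset, not a group
    print(feats.shape, feats.dtype)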
Access & Terms of Use
Access to MAD is gated. By requesting access you confirm:
- You are a bona fide researcher and will use MAD only for non-commercial research unless explicitly permitted otherwise.
- You (and your institution) agree to the MAD NDA / Terms of Use stated below.
- You will not redistribute MAD, its features, or annotations in a way that allows reconstruction of the dataset.
- You will properly cite MAD in any published work using it.
MAD NDA / Terms of Use (binding)
- No redistribution of MAD or derivative datasets that allow reconstruction.
- Research-only use, unless separately licensed for commercial purposes.
- No re-identification attempts or misuse of the data.
- Proper citation of MAD in all publications.
- Users are responsible for data security and must prevent unauthorized sharing.
- Report any misuse or data concerns to the maintainers.
- Institutional responsibility applies if accessed via a university or lab account.
⚠️ By checking “I agree” in the access request form, you agree to these conditions.
How to Request Access
- On this page, click “Request access”.
- Fill in the short form:
- Full name
- Institutional email address
- Affiliation / Lab / Research group
- Intended use / project description
- Commercial use? (Yes/No)
- Confirm your Hugging Face username
- Checkbox: “I have read and agree to the MAD NDA / Terms of Use”
- Once approved, you will be able to download MAD directly.
- If auto-approve is enabled, access is granted immediately.
- If manual approval is used, requests will be reviewed periodically.
Download Instructions
After approval, authenticate with your Hugging Face token:
huggingface-cli login
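If you work in a notebook or script, the same authentication can be done programmatically via huggingface_hub:

from huggingface_hub import login

login()  # prompts for your access token; or pass token="hf_..." directly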
Option 1: Programmatic access with datasets
from datasets import load_dataset
ds = load_dataset("soldelli/MAD", split="train") # requires approval & token
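The result is a standard datasets split object; a quick way to sanity-check access (the exact column names depend on the annotation schema):

print(ds)     # number of rows and column names
print(ds[0])  # first annotation record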
Option 2: Direct file download
from huggingface_hub import hf_hub_download
# Example: download one annotation file
path = hf_hub_download("soldelli/MAD", "data/annotations/split1/train.json", repo_type="dataset")  # repo_type is required for dataset repos
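The returned value is the local cache path of the downloaded file, so it can be opened directly:

import json

with open(path) as f:
    annotations = json.load(f)  # structure depends on the annotation schema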
Option 3: Snapshot the whole repo
from huggingface_hub import snapshot_download
snapshot_download("soldelli/MAD")
Citation
If you use MAD, please cite:
@InProceedings{Soldan_2022_CVPR,
author = {Soldan, Mattia and Pardo, Alejandro and Alc\'azar, Juan Le\'on and Caba, Fabian and Zhao, Chen and Giancola, Silvio and Ghanem, Bernard},
title = {MAD: A Scalable Dataset for Language Grounding in Videos From Movie Audio Descriptions},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {5026-5035}
}