
⭐ FRED: Florence RGB-Event Drone Dataset ⭐

[Demo: HDR sequence]

Official repository for the FRED dataset, a large-scale multimodal dataset specifically designed for drone detection, tracking, and trajectory forecasting, with spatiotemporally synchronized RGB and event data. It includes train and test splits, with one zipped subfolder per sequence.

The dataset can also be downloaded from here. The dataset splits, in .txt format, along with the alternative challenging split, can be found here.

Demos and examples can be found on the official website. Check it out, it's pretty cool! :)


📂 Dataset Structure

FRED/
 ├── train/
 │    ├── 0.zip
 │    ├── 1.zip
 │    └── ...
 ├── test/
 │    ├── 100.zip
 │    ├── 101.zip
 │    └── ...

Each .zip file corresponds to one sequence (RGB frames, event data, and annotations). The event data comprises both pre-extracted event frames and the corresponding .hdf5 file containing the raw event stream.
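
For quick inspection, a sketch like the following can unpack a downloaded sequence and list the contents of its raw event file. The exact HDF5 file name and internal layout are not documented here, so the snippet simply walks whatever datasets it finds:

import zipfile
from pathlib import Path

import h5py  # pip install h5py

# Extract one downloaded sequence (paths are examples).
seq_zip = Path("train/0.zip")
out_dir = Path("train/0")
with zipfile.ZipFile(seq_zip) as zf:
    zf.extractall(out_dir)

# Locate the raw event stream(s) and list their contents.
for h5_path in out_dir.rglob("*.hdf5"):
    print(h5_path)
    with h5py.File(h5_path, "r") as f:
        f.visit(print)  # print every group/dataset name in the file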


📝 Annotation Format

Each sequence includes two .txt annotation files with bounding box and identity information for every frame.
Since the RGB images are padded so that the two modalities share the same coordinate space, the event videos contain additional boxes corresponding to the padded area of the RGB frames. To separate the two cases, the annotations are split into coordinates.txt, which contains the extended boxes, and coordinates_rgb.txt, which contains the boxes excluding the padding.

We recommend using the extended-box version: it simplifies training, and the cases in which the drone falls into the padded area are relatively rare compared to the total number of samples.

The format of the annotations is:

time: x1, y1, x2, y2, id, class
  • time → timestamp of the annotation, relative to the start of the recording, in seconds.microseconds format
  • x1, y1 → top-left corner of the bounding box
  • x2, y2 → bottom-right corner of the bounding box
  • id → unique identifier for the drone, consistent across frames (for tracking)
  • class → drone type label

📌 Example:

1.33332: 490.0, 413.0, 539.0, 448.0, 1, DJI Mini 2
6.33327: 609.0, 280.0, 651.0, 308.0, 2, DarwinFPV cineape20 

This structure is compatible with standard detection and tracking pipelines, while maintaining instance-level identity across time.
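
For reference, a minimal parser for this format could look like the sketch below. parse_annotations is an illustrative helper, not an official tool; it assumes everything after the track id is the class label:

def parse_annotations(path):
    """Parse a coordinates.txt / coordinates_rgb.txt file.

    Each line looks like:
    1.33332: 490.0, 413.0, 539.0, 448.0, 1, DJI Mini 2
    """
    records = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            time_str, rest = line.split(":", 1)
            fields = [s.strip() for s in rest.split(",")]
            x1, y1, x2, y2 = map(float, fields[:4])
            records.append({
                "time": float(time_str),          # seconds.microseconds
                "box": (x1, y1, x2, y2),          # top-left / bottom-right corners
                "id": int(fields[4]),             # track identity
                "class": ", ".join(fields[5:]),   # drone type label
            })
    return records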


📥 Download

Clone the entire dataset

git lfs install
git clone https://huggingface.co/datasets/GabrieleMagrini/FRED

Download specific sequences

wget https://huggingface.co/datasets/GabrieleMagrini/FRED/resolve/main/train/0.zip
wget https://huggingface.co/datasets/GabrieleMagrini/FRED/resolve/main/test/100.zip
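
Alternatively, the huggingface_hub client resolves the download URL and caching for you; a minimal sketch:

from huggingface_hub import hf_hub_download

# Download a single sequence archive from the dataset repo.
path = hf_hub_download(
    repo_id="GabrieleMagrini/FRED",
    repo_type="dataset",
    filename="train/0.zip",
)
print(path)  # local cache path of the downloaded zip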

Use with 🤗 Datasets

from datasets import load_dataset

# Load full dataset
ds = load_dataset("GabrieleMagrini/FRED")

# Load specific split
train_set = load_dataset("GabrieleMagrini/FRED", split="train")
test_set  = load_dataset("GabrieleMagrini/FRED", split="test")

🖼️ Examples

Night

Raining

Indoor


✨ Citation

If you use FRED in your research, please cite:

@inproceedings{magrini2025fred,
  title={FRED: The Florence RGB-Event Drone Dataset},
  author={Magrini, Gabriele and Marini, Niccol{\`o} and Becattini, Federico and Berlincioni, Lorenzo and Biondi, Niccol{\`o} and Pala, Pietro and Del Bimbo, Alberto},
  booktitle={Proceedings of the 33rd ACM International Conference on Multimedia},
  year={2025}
}