
QuadTrack

QuadTrack is a dataset designed for multi-object tracking (MOT) research, with a focus on panoramic and long-span scenarios. It provides challenging tracking sequences that include drastic appearance variations, prolonged occlusions, and wide field-of-view distortions, enabling the development and evaluation of robust MOT algorithms.

Dataset Details

Dataset Description

  • Curated by: HNU CVPU
  • Funded by: National Natural Science Foundation of China (No. 62473139 and No. 12174341), Zhejiang Provincial Natural Science Foundation of China (Grant No. LZ24F050003), and Shanghai SUPREMIND Technology Co., Ltd.
  • Shared by: HNU CVPU
  • License: CC BY-NC 4.0

Uses

Direct Use

QuadTrack is designed for multi-object tracking (MOT) research, particularly in panoramic and long-span scenarios.
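
Since the dataset is hosted on the Hugging Face Hub, it can be loaded with the datasets library. A minimal sketch, assuming a hypothetical repository id (substitute this repository's actual id) and an "image" column:

from datasets import load_dataset

# Hypothetical repo id -- replace with this repository's actual id.
ds = load_dataset("HNU-CVPU/QuadTrack", split="train")
print(ds)               # inspect the available columns
frame = ds[0]["image"]  # assumed column name; decodes to a PIL image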

Dataset Structure

The dataset is organized into two main splits: train and test.

QuadTrack/
├── train/                # Training set
│   ├── img1/             # Training images (video frames)
│   └── gt/               # Ground-truth annotations (bounding boxes, IDs, etc.)
│
└── test/                 # Test set
    └── img1/             # Test images (no ground-truth provided)
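
For local use, the splits can be traversed directly. A minimal sketch, assuming frames under img1/ are zero-padded image files and that gt/ contains a single MOTChallenge-style gt.txt per sequence (both assumptions; verify against the actual files):

import os

root = "QuadTrack"

# Training frames in temporal order (sorted by zero-padded file name).
train_imgs = sorted(os.listdir(os.path.join(root, "train", "img1")))

# Ground-truth annotations; gt.txt is the usual MOTChallenge file name.
gt_path = os.path.join(root, "train", "gt", "gt.txt")  # assumed file name

print(f"{len(train_imgs)} training frames; annotations at {gt_path}")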

Dataset Creation

Curation Rationale

QuadTrack was created to address the limitations of existing multi-object tracking (MOT) datasets, which often focus on narrow field-of-view scenarios and short-term associations. In contrast, panoramic and long-span tracking poses unique challenges such as:

  • Prolonged occlusions leading to identity switches.

  • Wide field-of-view distortions caused by panoramic cameras.

  • Dramatic appearance variations across long sequences.

The dataset aims to provide a benchmark for developing algorithms that achieve long-term identity stability and robust re-identification in real-world panoramic environments.

Source Data

Data Collection and Processing

  • Collection: The video sequences were captured using panoramic and wide-angle cameras in complex real-world environments (e.g., urban traffic, crowded public areas).

  • Annotation:

    • Bounding boxes and unique object IDs were assigned frame-by-frame.

    • Annotations follow the standard MOTChallenge format for compatibility (see the parsing sketch after this list).

  • Processing:

    • Frames were extracted at fixed intervals to balance temporal resolution and storage.

    • Quality checks ensured consistency in ID assignment across long occlusions.

    • Tools used: CVAT (https://www.cvat.ai/)
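
Since the annotations follow the MOTChallenge format, each line of a ground-truth file encodes one bounding box in one frame. A minimal parsing sketch, assuming the standard comma-separated ground-truth layout (frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility):

import csv
from collections import defaultdict

def load_mot_gt(gt_path):
    """Parse a MOTChallenge-style gt.txt into {frame: [(track_id, box), ...]}."""
    per_frame = defaultdict(list)
    with open(gt_path, newline="") as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            box = tuple(float(v) for v in row[2:6])  # (left, top, width, height)
            per_frame[frame].append((track_id, box))
    return per_frame

In the MOTChallenge convention, frame numbers are 1-based and boxes are given in pixel coordinates as (left, top, width, height).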

Who are the source data producers?

The source videos were collected and annotated by the QuadTrack research team.

  • Producers: Internal annotation team trained for MOT labeling tasks.

  • Demographics: Not applicable, as the dataset focuses on object trajectories rather than personal or sensitive identity information.

  • Note: No personally identifiable information (PII) is included. The dataset is curated strictly for research purposes.

Bias, Risks, and Limitations

While QuadTrack provides challenging panoramic multi-object tracking scenarios, several limitations and risks should be noted:

  • Domain bias: The dataset primarily consists of panoramic and wide field-of-view sequences. Models trained on QuadTrack may not generalize well to conventional narrow-angle tracking datasets.

  • Scene diversity: Although collected across different environments, the dataset may not cover all possible real-world scenarios (e.g., extreme weather, night-time, or thermal imagery).

  • Annotation errors: Despite quality control, occasional inaccuracies in bounding boxes or identity switches may exist, especially under heavy occlusion.

  • Ethical risks: As a vision dataset, improper use in surveillance or privacy-intrusive applications could raise ethical concerns.

Recommendations

Users should be made aware of the risks, biases, and limitations described above. In particular, models trained on QuadTrack should be validated on conventional narrow field-of-view footage before being deployed outside panoramic settings, and the dataset should not be used for surveillance or other privacy-intrusive applications.

Citation

BibTeX:

@inproceedings{luo2025omnidirectional,
  title={Omnidirectional Multi-Object Tracking},
  author={Luo, Kai and Shi, Hao and Wu, Sheng and Teng, Fei and Duan, Mengfei and Huang, Chang and Wang, Yuhang and Wang, Kaiwei and Yang, Kailun},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={21959--21969},
  year={2025}
}

APA:

Luo, K., Shi, H., Wu, S., Teng, F., Duan, M., Huang, C., Wang, Y., Wang, K., & Yang, K. (2025). Omnidirectional multi-object tracking. *Proceedings of the Computer Vision and Pattern Recognition Conference*, 21959–21969.

Dataset Card Authors

xifen527

Dataset Card Contact

[email protected], [email protected]
