HardTracksDataset: A Benchmark for Robust Object Tracking under Heavy Occlusion and Challenging Conditions

Computer Vision Lab, ETH Zurich


Introduction

We introduce the HardTracksDataset (HTD), a novel multi-object tracking (MOT) benchmark specifically designed to address two critical limitations prevalent in existing tracking datasets. First, most current MOT benchmarks narrowly focus on restricted scenarios, such as pedestrian movements, dance sequences, or autonomous driving environments, and thus lack the object diversity and scenario complexity representative of real-world conditions. Second, datasets featuring broader vocabularies, such as OVT-B and TAO, typically do not sufficiently emphasize challenging scenarios involving long-term occlusions, abrupt appearance changes, and significant position variations. As a consequence, the majority of tracking instances evaluated are relatively easy, obscuring trackers’ limitations on truly challenging cases. HTD addresses these gaps by curating a challenging subset of scenarios from existing datasets, explicitly combining large vocabulary diversity with severe visual challenges. By emphasizing difficult tracking scenarios, particularly long-term occlusions and substantial appearance shifts, HTD provides a focused benchmark aimed at fostering the development of more robust and reliable tracking algorithms for complex real-world situations.

Results of state-of-the-art trackers on HTD

| Method | TETA (val) | LocA (val) | AssocA (val) | ClsA (val) | TETA (test) | LocA (test) | AssocA (test) | ClsA (test) |
|---|---|---|---|---|---|---|---|---|
| *Motion-based* | | | | | | | | |
| ByteTrack | 34.877 | 54.624 | 19.085 | 30.922 | 37.875 | 56.135 | 19.464 | 38.025 |
| DeepSORT | 33.782 | 57.350 | 15.009 | 28.987 | 37.099 | 58.766 | 15.729 | 36.803 |
| OCSORT | 33.012 | 57.599 | 12.558 | 28.880 | 35.164 | 59.117 | 11.549 | 34.825 |
| *Appearance-based* | | | | | | | | |
| MASA | 42.246 | 60.260 | 34.241 | 32.237 | 43.656 | 60.125 | 31.454 | 39.390 |
| OV-Track | 29.179 | 47.393 | 25.758 | 14.385 | 33.586 | 51.310 | 26.507 | 22.941 |
| *Transformer-based* | | | | | | | | |
| OVTR | 26.585 | 44.031 | 23.724 | 14.138 | 29.771 | 46.338 | 24.974 | 21.643 |
| MASA+ | 42.716 | 60.364 | 35.252 | 32.532 | 44.063 | 60.319 | 32.735 | 39.135 |

Download Instructions

To download the dataset you can use the HuggingFace CLI. First, install the HuggingFace CLI according to the official HuggingFace documentation and log in with your HuggingFace account. Then you can download the dataset with the following command:

huggingface-cli download mscheidl/htd --repo-type dataset
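
Alternatively, the same snapshot can be downloaded from Python with the huggingface_hub package. This is a minimal sketch; the local_dir destination is only an example:

from huggingface_hub import snapshot_download

# Download the full HTD dataset snapshot into a local folder (example path)
snapshot_download(
    repo_id="mscheidl/htd",
    repo_type="dataset",
    local_dir="htd",
)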

The dataset is organized in the following structure:

├── htd
    ├── data
        ├── AnimalTrack
        ├── BDD
        ├── ...
    ├── annotations
        ├── classes.txt
        ├── hard_tracks_dataset_coco_test.json
        ├── hard_tracks_dataset_coco_val.json
        ├── ...
    ├── metadata
        ├── lvis_v1_clip_a+cname.npy
        ├── lvis_v1_train_cat_info.json

The data folder contains the videos, the annotations folder contains the annotations in COCO (TAO) format, and the metadata folder contains the metadata files for running MASA+. If you use HTD independently, you can ignore the metadata folder.

Annotation format for HTD dataset

The annotations folder is structured as follows:

├── annotations
    ├── classes.txt
    ├── hard_tracks_dataset_coco_test.json
    ├── hard_tracks_dataset_coco_val.json
    ├── hard_tracks_dataset_coco.json
    ├── hard_tracks_dataset_coco_class_agnostic.json

Details about the annotations:

  • classes.txt: Contains the list of classes in the dataset. Useful for Open-Vocabulary tracking.
  • hard_tracks_dataset_coco_test.json: Contains the annotations for the test set.
  • hard_tracks_dataset_coco_val.json: Contains the annotations for the validation set.
  • hard_tracks_dataset_coco.json: Contains the annotations for the entire dataset.
  • hard_tracks_dataset_coco_class_agnostic.json: Contains the annotations for the entire dataset in a class-agnostic format, i.e. there is only one category, named "object", and every object in the dataset is assigned to it.
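
For open-vocabulary trackers that take class names as text prompts, the vocabulary can be read directly from classes.txt. This is a minimal sketch, assuming one class name per line and the download layout shown above:

# Read the HTD class vocabulary (assumes one class name per line)
with open("htd/annotations/classes.txt") as f:
    class_names = [line.strip() for line in f if line.strip()]

print(f"{len(class_names)} classes, e.g. {class_names[:5]}")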

The HTD dataset is annotated in COCO (TAO) format. The annotations are stored in JSON files that describe the videos, images, tracks, per-frame annotations, and categories. The format of the annotations is as follows:

{
    "images": [image],
    "videos": [video],
    "tracks": [track],
    "annotations": [annotation],
    "categories": [category]
}

image: {
    "id": int,                            # Unique ID of the image
    "video_id": int,                      # Reference to the parent video
    "file_name": str,                     # Path to the image file
    "width": int,                         # Image width in pixels
    "height": int,                        # Image height in pixels
    "frame_index": int,                   # Index of the frame within the video (starting from 0)
    "frame_id": int,                      # Redundant or external frame ID (optional alignment)
    "video": str,                         # Name of the video
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int]  # Categories not exhaustively labeled in this image (optional)
}

video: {
    "id": int,                            # Unique video ID
    "name": str,                          # Human-readable or path-based name
    "width": int,                         # Frame width
    "height": int,                        # Frame height
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int], # Categories not exhaustively labeled in this video (optional)
    "frame_range": int,                   # Number of frames between annotated frames
    "metadata": dict                      # Metadata for the video
}

track: {
    "id": int,             # Unique track ID
    "category_id": int,    # Object category
    "video_id": int        # Associated video
}
        
category: {
    "id": int,            # Unique category ID
    "name": str           # Human-readable name of the category
}
        
annotation: {
    "id": int,                    # Unique annotation ID
    "image_id": int,              # Image/frame ID
    "video_id": int,              # Video ID
    "track_id": int,              # Associated track ID
    "bbox": [x, y, w, h],         # Bounding box in absolute pixel coordinates
    "area": float,                # Area of the bounding box
    "category_id": int,           # Category of the object
    "iscrowd": int,               # Crowd flag (from COCO)
    "segmentation": [],           # Polygon-based segmentation (if available)
    "instance_id": int,           # Instance index within a video
    "scale_category": str         # Scale type (e.g., 'moving-object')
}
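
Based on the fields above, per-object trajectories can be recovered by grouping annotations on track_id and ordering them by the frame_index of their image. This is a minimal sketch using the validation split as an example and assuming the download layout shown earlier:

import json
from collections import defaultdict

# Load one split (example path)
with open("htd/annotations/hard_tracks_dataset_coco_val.json") as f:
    data = json.load(f)

# Map image id -> frame index so boxes can be ordered in time
frame_index = {img["id"]: img["frame_index"] for img in data["images"]}

# Group annotations by track id to recover each trajectory
trajectories = defaultdict(list)
for ann in data["annotations"]:
    trajectories[ann["track_id"]].append((frame_index[ann["image_id"]], ann["bbox"]))

for boxes in trajectories.values():
    boxes.sort(key=lambda item: item[0])  # sort boxes within a track by frame index

print("number of tracks:", len(trajectories))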