Category statistics (only the first rows are shown):

id (int64) | name (string) | image_count (int64) | instance_count (int64) |
---|---|---|---|
1 | backpack | 14 | 14 |
2 | ball | 127 | 145 |
3 | basket | 12 | 12 |
4 | beanie | 3 | 3 |
5 | bicycle | 256 | 256 |
6 | binder | 17 | 17 |
7 | book | 41 | 41 |
8 | bottle | 3,007 | 3,007 |
9 | bowl | 52 | 52 |
10 | briefcase | 10 | 10 |
11 | calf | 14 | 17 |
12 | can | 6 | 6 |
13 | car_(automobile) | 8,206 | 15,150 |
14 | cat | 61 | 61 |
15 | cellular_telephone | 29 | 29 |
16 | cigarette | 53 | 53 |
17 | clipboard | 14 | 28 |
18 | cup | 39 | 39 |
19 | dog | 17 | 17 |
20 | drumstick | 62 | 77 |
21 | elephant | 224 | 224 |
22 | fish | 34 | 35 |
23 | flag | 41 | 41 |
24 | giraffe | 17 | 17 |
25 | gorilla | 19 | 19 |
26 | grocery_bag | 12 | 12 |
27 | guitar | 2,076 | 2,076 |
28 | gun | 55 | 76 |
29 | hat | 12 | 12 |
30 | helmet | 7 | 7 |
31 | hockey_stick | 92 | 115 |
32 | laptop_computer | 19 | 19 |
33 | paddle | 16 | 16 |
34 | pen | 11 | 11 |
35 | pencil | 43 | 43 |
36 | person | 42,397 | 157,982 |
37 | potato | 11 | 26 |
38 | racket | 211 | 223 |
39 | remote_control | 10 | 10 |
40 | sandwich | 7 | 7 |
41 | scrubbing_brush | 28 | 28 |
42 | shoe | 21 | 21 |
43 | ski_pole | 22 | 22 |
44 | spatula | 33 | 33 |
45 | stool | 25 | 25 |
46 | army_tank | 14 | 14 |
47 | teakettle | 5 | 5 |
48 | tennis_racket | 27 | 27 |
49 | towel | 79 | 79 |
50 | toy | 55 | 55 |
51 | volleyball | 3,610 | 3,631 |
52 | zebra | 641 | 953 |
53 | rider | 106 | 106 |
54 | truck | 1,314 | 1,454 |
55 | bus | 281 | 363 |
56 | motorcycle | 97 | 97 |
57 | australian terrier | 163 | 163 |
58 | goral | 118 | 118 |
59 | football | 106 | 106 |
60 | swamprabbit | 84 | 84 |
61 | rock hyrax | 96 | 96 |
62 | otter shrew | 81 | 81 |
63 | racerunner | 98 | 98 |
64 | hudson bay collared lemming | 88 | 88 |
65 | fall cankerworm | 79 | 79 |
66 | lorry | 469 | 469 |
67 | bear cub | 1,245 | 1,479 |
68 | barge | 296 | 296 |
69 | angora goat | 100 | 100 |
70 | cruise missile | 56 | 56 |
71 | yellow-throated marten | 87 | 87 |
72 | brush-tailed porcupine | 70 | 70 |
73 | sea otter | 93 | 93 |
74 | ctenophore | 89 | 89 |
75 | pine marten | 79 | 79 |
76 | cheetah | 71 | 71 |
77 | hand truck | 86 | 86 |
78 | basketball | 1,792 | 1,792 |
79 | swing | 1,908 | 1,908 |
80 | yoyo | 3,721 | 3,721 |
81 | ant | 194 | 1,003 |
82 | antelope | 172 | 172 |
83 | apple | 67 | 67 |
84 | balloon | 930 | 24,403 |
85 | barcode | 6 | 6 |
86 | bee | 186 | 978 |
87 | bell | 12 | 12 |
88 | billboard | 25 | 44 |
89 | bird | 2,342 | 3,295 |
90 | bolt | 29 | 29 |
91 | bracelet | 19 | 19 |
92 | building_blocks | 30 | 67 |
93 | chicken | 1,714 | 5,302 |
94 | coin | 27 | 27 |
95 | computer_keyboard | 18 | 18 |
96 | correction_fluid | 6 | 6 |
97 | cotton_swab | 8 | 8 |
98 | crate | 5 | 5 |
99 | cushion | 39 | 73 |
100 | dolphin | 1,232 | 1,710 |
HardTracksDataset: A Benchmark for Robust Object Tracking under Heavy Occlusion and Challenging Conditions
Computer Vision Lab, ETH Zurich
Introduction
We introduce the HardTracksDataset (HTD), a novel multi-object tracking (MOT) benchmark specifically designed to address two critical limitations prevalent in existing tracking datasets. First, most current MOT benchmarks narrowly focus on restricted scenarios, such as pedestrian movements, dance sequences, or autonomous driving environments, and thus lack the object diversity and scenario complexity representative of real-world conditions. Second, datasets featuring broader vocabularies, such as OVT-B and TAO, typically do not sufficiently emphasize challenging scenarios involving long-term occlusions, abrupt appearance changes, and significant position variations. As a consequence, the majority of tracking instances evaluated are relatively easy, obscuring trackers' limitations on truly challenging cases. HTD addresses these gaps by curating a challenging subset of scenarios from existing datasets, explicitly combining large vocabulary diversity with severe visual challenges. By emphasizing difficult tracking scenarios, particularly long-term occlusions and substantial appearance shifts, HTD provides a focused benchmark aimed at fostering the development of more robust and reliable tracking algorithms for complex real-world situations.
Results of state-of-the-art trackers on HTD
Method | TETA (Val) | LocA (Val) | AssocA (Val) | ClsA (Val) | TETA (Test) | LocA (Test) | AssocA (Test) | ClsA (Test) |
---|---|---|---|---|---|---|---|---|
Motion-based | | | | | | | | |
ByteTrack | 34.877 | 54.624 | 19.085 | 30.922 | 37.875 | 56.135 | 19.464 | 38.025 |
DeepSORT | 33.782 | 57.350 | 15.009 | 28.987 | 37.099 | 58.766 | 15.729 | 36.803 |
OCSORT | 33.012 | 57.599 | 12.558 | 28.880 | 35.164 | 59.117 | 11.549 | 34.825 |
Appearance-based | | | | | | | | |
MASA | 42.246 | 60.260 | 34.241 | 32.237 | 43.656 | 60.125 | 31.454 | 39.390 |
OV-Track | 29.179 | 47.393 | 25.758 | 14.385 | 33.586 | 51.310 | 26.507 | 22.941 |
Transformer-based | | | | | | | | |
OVTR | 26.585 | 44.031 | 23.724 | 14.138 | 29.771 | 46.338 | 24.974 | 21.643 |
MASA+ | 42.716 | 60.364 | 35.252 | 32.532 | 44.063 | 60.319 | 32.735 | 39.135 |
Download Instructions
To download the dataset, you can use the HuggingFace CLI. First, install the HuggingFace CLI according to the official HuggingFace documentation and log in with your HuggingFace account. Then you can download the dataset with the following command:
huggingface-cli download mscheidl/htd --repo-type dataset --local-dir htd
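Alternatively, the same download can be scripted from Python via the `huggingface_hub` library; below is a minimal sketch that mirrors the CLI call above (it assumes `huggingface_hub` is installed and that you are already logged in):

```python
from huggingface_hub import snapshot_download

# Download the full HTD dataset repository into ./htd
snapshot_download(repo_id="mscheidl/htd", repo_type="dataset", local_dir="htd")
```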
The video folders are provided as zip files. Before usage, please unzip them. You can use the following command to unzip all files in the `data` folder. Please note that the unzipping process can take a while (especially for TAO.zip):
cd htd
for z in data/*.zip; do (unzip -o -q "$z" -d data && echo "Unzipped: $z") & done; wait; echo "✅ Done"
mkdir -p data/zips        # create a folder for the zip files
mv data/*.zip data/zips/  # move the zip files to the zips folder
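If you prefer to stay in Python, the following sketch does the same unzip-and-archive step with the standard library (sequentially rather than in parallel, so it may be slower than the shell loop above):

```python
import zipfile
from pathlib import Path

data_dir = Path("htd/data")
zips_dir = data_dir / "zips"
zips_dir.mkdir(parents=True, exist_ok=True)

for zip_path in sorted(data_dir.glob("*.zip")):
    # Extract each archive into htd/data, then park the zip in htd/data/zips
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(data_dir)
    zip_path.rename(zips_dir / zip_path.name)
    print(f"Unzipped: {zip_path.name}")
```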
The dataset is organized in the following structure:
htd
├── data
│   ├── AnimalTrack
│   ├── BDD
│   └── ...
├── annotations
│   ├── classes.txt
│   ├── hard_tracks_dataset_coco_test.json
│   ├── hard_tracks_dataset_coco_val.json
│   └── ...
└── metadata
    ├── lvis_v1_clip_a+cname.npy
    └── lvis_v1_train_cat_info.json
The `data` folder contains the videos, the `annotations` folder contains the annotations in COCO (TAO) format, and the `metadata` folder contains the metadata files for running MASA+. If you use HTD independently, you can ignore the `metadata` folder.
Annotation format for the HTD dataset
The annotations folder is structured as follows:
annotations
├── classes.txt
├── hard_tracks_dataset_coco_test.json
├── hard_tracks_dataset_coco_val.json
├── hard_tracks_dataset_coco.json
└── hard_tracks_dataset_coco_class_agnostic.json
Details about the annotations:
- `classes.txt`: Contains the list of classes in the dataset. Useful for open-vocabulary tracking (see the loading sketch after this list).
- `hard_tracks_dataset_coco_test.json`: Contains the annotations for the test set.
- `hard_tracks_dataset_coco_val.json`: Contains the annotations for the validation set.
- `hard_tracks_dataset_coco.json`: Contains the annotations for the entire dataset.
- `hard_tracks_dataset_coco_class_agnostic.json`: Contains the annotations for the entire dataset in a class-agnostic format: there is a single category, "object", and every object in the dataset is assigned to it.
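For open-vocabulary tracking it is often handy to load the class list programmatically. A minimal sketch, assuming `classes.txt` holds one class name per line (the exact file layout is an assumption, not documented above):

```python
# Assumption: classes.txt lists one class name per line.
with open("htd/annotations/classes.txt") as f:
    classes = [line.strip() for line in f if line.strip()]

print(f"{len(classes)} classes, e.g. {classes[:5]}")
```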
The HTD dataset is annotated in COCO format. The annotations are stored in JSON files, which contain information about the images, annotations, categories, and other metadata. The format of the annotations is as follows:
{
    "images": [image],
    "videos": [video],
    "tracks": [track],
    "annotations": [annotation],
    "categories": [category]
}
image: {
    "id": int,                            # Unique ID of the image
    "video_id": int,                      # Reference to the parent video
    "file_name": str,                     # Path to the image file
    "width": int,                         # Image width in pixels
    "height": int,                        # Image height in pixels
    "frame_index": int,                   # Index of the frame within the video (starting from 0)
    "frame_id": int,                      # Redundant or external frame ID (optional alignment)
    "video": str,                         # Name of the video
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int]  # Categories not exhaustively labeled in this image (optional)
}
video: {
    "id": int,                            # Unique video ID
    "name": str,                          # Human-readable or path-based name
    "width": int,                         # Frame width
    "height": int,                        # Frame height
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int], # Categories not exhaustively labeled in this video (optional)
    "frame_range": int,                   # Number of frames between annotated frames
    "metadata": dict                      # Metadata for the video
}
track: {
    "id": int,                            # Unique track ID
    "category_id": int,                   # Object category
    "video_id": int                       # Associated video
}

category: {
    "id": int,                            # Unique category ID
    "name": str                           # Human-readable name of the category
}
annotation: {
    "id": int,                            # Unique annotation ID
    "image_id": int,                      # Image/frame ID
    "video_id": int,                      # Video ID
    "track_id": int,                      # Associated track ID
    "bbox": [x, y, w, h],                 # Bounding box in absolute pixel coordinates
    "area": float,                        # Area of the bounding box
    "category_id": int,                   # Category of the object
    "iscrowd": int,                       # Crowd flag (from COCO)
    "segmentation": [],                   # Polygon-based segmentation (if available)
    "instance_id": int,                   # Instance index within a video
    "scale_category": str                 # Scale type (e.g., 'moving-object')
}
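Since the files are plain COCO/TAO-style JSON, they can be inspected with the standard library alone. A minimal sketch that loads the validation split and regroups annotations into per-track trajectories (field names follow the schema above):

```python
import json
from collections import defaultdict

# Load the validation annotations (schema described above)
with open("htd/annotations/hard_tracks_dataset_coco_val.json") as f:
    data = json.load(f)

print({key: len(items) for key, items in data.items()})

# Regroup annotations by track to recover full trajectories
anns_by_track = defaultdict(list)
for ann in data["annotations"]:
    anns_by_track[ann["track_id"]].append(ann)

# Order each trajectory by the frame index of its image
frame_index = {img["id"]: img["frame_index"] for img in data["images"]}
for anns in anns_by_track.values():
    anns.sort(key=lambda a: frame_index[a["image_id"]])
```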