---
license: mit
extra_gated_prompt:
  You agree to not use the dataset to conduct experiments that cause harm to
  human subjects. Please note that the data in this dataset may be subject to
  other agreements. Before using the data, be sure to read the relevant
  agreements carefully to ensure compliant use. Video copyrights belong to the
  original video creators or platforms and are for academic research use only.
task_categories:
- visual-question-answering
- question-answering
extra_gated_fields:
  Name: text
  Company/Organization: text
  Country: text
  E-Mail: text
language:
- en
size_categories:
- 1M<n<10M
configs:
- config_name: dense_video_captioning
  data_files:
  - split: anet
    path: dense_video_captioning/anet.json
  - split: vitt
    path: dense_video_captioning/vitt.json
  - split: youcook2
    path: dense_video_captioning/youcook2.json

- config_name: object_tracking
  data_files:
  - split: got10k_dynamic
    path: object_tracking/got10k_dynamic.json
  - split: lasot_dynamic
    path: object_tracking/lasot_dynamic.json

- config_name: refcoco
  data_files:
  - split: refcoco_50k
    path: refcoco/refcoco_50k.json

- config_name: spatial_temporal_action_localization
  data_files:
  - split: ava
    path: spatial_temporal_action_localization/ava.json

- config_name: step_localization
  data_files:
  - split: coin
    path: step_localization/coin/coin.json
  - split: hirest_step
    path: step_localization/hirest_step/hirest_step.json

- config_name: temporal_grounding
  data_files:
  - split: charades
    path: temporal_grounding/charades.json
  - split: didemo
    path: temporal_grounding/didemo.json
  - split: hirest
    path: temporal_grounding/hirest.json
  - split: queryd
    path: temporal_grounding/queryd.json

- config_name: visual_genome
  data_files:
  - split: vg_86k
    path: visual_genome/vg_86k.json
---

## Overview

This dataset provides a comprehensive collection for **Online Spatial-Temporal Understanding** tasks, covering Dense Video Captioning, Temporal Video Grounding, Step Localization, Spatial-Temporal Action Localization, and Object Tracking.
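
The annotations are plain JSON files, so they can be pulled either through the `datasets` library (using the `config_name` and `split` values declared in the YAML header above) or read directly with `json`. A minimal sketch, assuming a placeholder repo id and that you have already accepted the gated-access terms and logged in:

```python
from datasets import load_dataset

# "your-org/your-dataset" is a placeholder -- substitute the actual repo id of
# this dataset on the Hub. Access is gated, so accept the terms above and log
# in first (e.g. `huggingface-cli login`).
grounding = load_dataset(
    "your-org/your-dataset",   # placeholder repo id
    "temporal_grounding",      # any config_name from the YAML header
    split="charades",          # split names also come from the header
)

sample = grounding[0]
print(sample["video"])         # relative video path, e.g. "116/NLy71UrHElw.mp4"
```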

## Data Formats

* Format 1: Conversational QA (LLaVA-style)

```json
{
    "video": "116/NLy71UrHElw.mp4",
    "conversations": [
        {
            "from": "human",
            "timestamps": 1026.0,        # Video timestamp in seconds
            "value": "<video>\nBased on current observation, list events..."
        },
        {
            "from": "gpt",
            "value": "21.0s - 22.0s (duration: 1.0s), begin to run up..."
        }
    ]
}
```
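
Each record is one video with timestamped conversation turns. A small sketch of how such records could be walked, assuming the annotation file holds a list of Format-1 records and using a path from the `configs` section (adjust it to where the repo is checked out):

```python
import json

# Path taken from the `configs` section above; assumes the file is a JSON list
# of Format-1 records.
with open("dense_video_captioning/anet.json") as f:
    records = json.load(f)

for rec in records[:3]:
    turns = rec["conversations"]
    # Turns alternate human -> gpt; the human turn carries the query timestamp.
    for question, answer in zip(turns[::2], turns[1::2]):
        t = question.get("timestamps")   # seconds into the video
        print(f"[{rec['video']} @ {t}s]")
        print("Q:", question["value"].replace("<video>\n", ""))
        print("A:", answer["value"])
```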

* Format 2: Template-based Tracking

```json
{
    "video": "GOT-10k_Train_006457",
    "fps": 1,                                      # Frame rate
    "all_image_files": ["00000001.jpg", ...],      # Keyframe paths
    "image_bboxes": [                              # Temporal object tracking data
        {
            "timestamp": 0.0,
            "bbox": [0.412, 0.517, 0.452, 0.753]   # [x1, y1, x2, y2]
        },
        ...
    ],
    "query_template": {                            # Randomized temporal insertion
        "from": "human",
        "value": "Track the location of \"person\" at <bbox> over time..."
    }
}
```
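
The example `bbox` values fall in [0, 1], which suggests corner coordinates normalized by the frame size; the card does not state this explicitly, so treat it as an assumption. A small sketch for turning one tracking record into per-keyframe pixel boxes under that assumption (file path, frame resolution, and the one-to-one alignment of keyframes and boxes are all assumptions here):

```python
import json

def to_pixels(bbox, width, height):
    """Convert an assumed-normalized [x1, y1, x2, y2] box to pixel coordinates."""
    x1, y1, x2, y2 = bbox
    return [round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height)]

# Placeholder path and resolution -- adjust to the actual annotation file and
# the resolution of the corresponding keyframes.
with open("object_tracking/got10k_dynamic.json") as f:
    tracks = json.load(f)

rec = tracks[0]
# Assumes keyframes and boxes are aligned one-to-one, as the example suggests.
for frame, ann in zip(rec["all_image_files"], rec["image_bboxes"]):
    print(frame, ann["timestamp"], to_pixels(ann["bbox"], width=1280, height=720))
```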

## Source Data

| Task | Dataset | Source |
|--------------------------------------|-------------------------|------------------------------------------------------------------------------------|
| Dense Video Captioning | `ActivityNet Captions` | [Source](http://activity-net.org/download.html) |
| | `ViTT` | [Source](https://github.com/google-research-datasets/Video-Timeline-Tags-ViTT) |
| | `YouCook2` | [Source](http://youcook2.eecs.umich.edu/) |
| Temporal Video Grounding | `DiDeMo` | [Source](https://github.com/LisaAnne/LocalizingMoments?tab=readme-ov-file#dataset) |
| | `QuerYD` | [Source](https://www.robots.ox.ac.uk/~vgg/data/queryd/) |
| | `HiREST_grounding` | [Source](https://github.com/j-min/HiREST) |
| | `Charades-STA` | [Source](https://github.com/jiyanggao/TALL) |
| Step Localization | `COIN` | [Source](https://github.com/coin-dataset/annotations) |
| | `HiREST_step` | [Source](https://github.com/j-min/HiREST) |
| Spatial-Temporal Action Localization | `AVA` | [Source](https://research.google.com/ava/download.html) |
| Object Tracking | `GOT-10k` | [Source](http://got-10k.aitestunion.com/) |
| | `LaSOT` | [Source](http://vision.cs.stonybrook.edu/~lasot/) |

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{huang2024online,
  title={Online Video Understanding: A Comprehensive Benchmark and Memory-Augmented Method},
  author={Huang, Zhenpeng and Li, Xinhao and Li, Jiaqi and Wang, Jing and Zeng, Xiangyu and Liang, Cheng and Wu, Tao and Chen, Xi and Li, Liang and Wang, Limin},
  journal={arXiv preprint arXiv:2501.00584},
  year={2024}
}
```