# MoviePuzzle Dataset

## Introduction

This dataset is based on [MovieNet](https://movienet.github.io/) and built for the [MoviePuzzle](https://moviepuzzle.github.io/) task.
We use 228 movies to generate 10031 movie clips: 7048 clips for training, 589 for validation, 1178 for in-domain testing, and 1196 for out-of-domain testing.

## Download

You can download the full dataset here: https://movienet.github.io/
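
Since this repository lives on the Hugging Face Hub (the README was uploaded with `huggingface_hub`), a local copy of whatever is hosted here could be fetched as sketched below. The repo id is a placeholder, not the real one, and the MovieNet source material still comes from the link above.

```python
# pip install huggingface_hub
from huggingface_hub import snapshot_download

# "user/MoviePuzzle" is a placeholder; replace it with the actual dataset repo id.
local_path = snapshot_download(repo_id="user/MoviePuzzle", repo_type="dataset")
print(local_path)
```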

## Structure

We label the 10031 movie clips with clip ids ranging from 0 to 10030 and divide the dataset into four subsets: a training set, a validation set, an in-domain test set, and an out-of-domain test set. The out-of-domain test set consists of movies that do not appear in the other sets, while the remaining movies were split among the other sets in roughly equal proportions. The file structure is as follows:

```
.
├── train/
│   ├── 1/
│   │   ├── 1_0.png
│   │   ├── 1_1.png
│   │   ├── ...
│   │   ├── subtitle.json
│   │   ├── info_shuffled.json
│   │   └── info.json
│   ├── 2/
│   │   ├── 2_0.png
│   │   ├── 2_1.png
│   │   ├── ...
│   │   ├── subtitle.json
│   │   ├── info_shuffled.json
│   │   └── info.json
│   └── ...
│
├── split_clip_id.json
├── test_in_domain/
├── test_out_domain/
├── val/
└── README.md
```

The PNG images under each `clip_id/` folder (such as `1/`, `2/`, ...) are the sampled frames of the movie clip in sequential order. The `subtitle.json` file provides the subtitles for each frame. The `info.json` file includes per-frame labels such as shot and scene information, while `info_shuffled.json` contains the same labels after shuffling. `split_clip_id.json` is a dictionary containing all the clip ids present in the dataset splits.
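
Below is a minimal sketch of how one clip might be loaded under this layout. It assumes frames are named `<clip_id>_<frame_index>.png`, as the tree above suggests; the `load_clip` helper and the use of Pillow are illustrative, not part of the released tooling.

```python
import json
from pathlib import Path

from PIL import Image  # pip install pillow


def load_clip(clip_dir):
    """Load the frames, subtitles, and labels of one clip directory."""
    clip_dir = Path(clip_dir)

    # Frames are named <clip_id>_<frame_index>.png; sort them by frame index.
    frame_paths = sorted(clip_dir.glob("*.png"),
                         key=lambda p: int(p.stem.split("_")[1]))
    frames = [Image.open(p) for p in frame_paths]

    subtitles = json.loads((clip_dir / "subtitle.json").read_text())
    info = json.loads((clip_dir / "info.json").read_text())
    info_shuffled = json.loads((clip_dir / "info_shuffled.json").read_text())

    return frames, subtitles, info, info_shuffled


# Example: load clip 1 from the training split.
frames, subtitles, info, info_shuffled = load_clip("train/1")
print(len(frames), info["tt_id"])
```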

## File format

### info.json

```
{
    "tt_id": "tt0047396",
    "img_num": 20,
    "img_id": [0, 1, 2, ...],
    "shot_id": [13, 13, 14, ...],
    "scene_id": [6, 6, 6, ...]
}
```

The `tt_id` indicates which movie the clip belongs to. The `img_num` gives the number of frames in the clip, and `img_id` lists the index of each frame. The `shot_id` gives the position of each frame's shot in the original movie, and the `scene_id` gives the position of each frame's scene.

`info_shuffled.json` has the same keys as `info.json`, but its value lists have been shuffled, with the same permutation applied to every key.
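
As one possible way to use the shuffled labels, the sketch below restores the original frame order by sorting on `img_id`. This assumes `img_id` enumerates frames in their original order, which is an interpretation of the description above rather than something the dataset states explicitly.

```python
import json


def unshuffle(info_shuffled):
    """Reorder the shuffled label lists back into original frame order,
    assuming img_id gives each frame's position in the original clip."""
    order = sorted(range(info_shuffled["img_num"]),
                   key=lambda i: info_shuffled["img_id"][i])
    return {
        key: [values[i] for i in order] if isinstance(values, list) else values
        for key, values in info_shuffled.items()
    }


with open("train/1/info_shuffled.json") as f:  # hypothetical clip path
    info_shuffled = json.load(f)

info_recovered = unshuffle(info_shuffled)
print(info_recovered["img_id"])  # expected: [0, 1, 2, ...]
```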

### subtitle.json

```
[
    ["Men, are you over 40?"],
    ["When you wake up in the morning, do you feel tired and rundown?"],
    ...
]
```
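
For illustration, subtitles can be paired with frames by index, assuming each entry in `subtitle.json` is a list of subtitle lines for the frame at the same position (the clip path below is hypothetical):

```python
import json
from pathlib import Path

clip_dir = Path("train/1")  # hypothetical clip path
frame_paths = sorted(clip_dir.glob("*.png"),
                     key=lambda p: int(p.stem.split("_")[1]))
subtitles = json.loads((clip_dir / "subtitle.json").read_text())

# Print each frame file alongside its subtitle lines.
for frame_path, lines in zip(frame_paths, subtitles):
    print(frame_path.name, "->", " / ".join(lines))
```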

## Criteria

1. More than 80% of the frames in a clip have subtitles.
2. Each clip contains 10 to 20 frames.
3. Only subtitles that appear on a sampled frame are recorded.
4. Clips do not overlap with each other.
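
A rough sketch of how a single clip could be checked against criteria 1 and 2 (criterion 4 cannot be verified from one clip alone); the `check_clip` helper and the clip path are illustrative only.

```python
import json
from pathlib import Path


def check_clip(clip_dir):
    """Check one clip against the construction criteria listed above."""
    clip_dir = Path(clip_dir)
    subtitles = json.loads((clip_dir / "subtitle.json").read_text())
    info = json.loads((clip_dir / "info.json").read_text())

    n_frames = info["img_num"]
    with_subtitle = sum(1 for lines in subtitles if lines)

    assert 10 <= n_frames <= 20, "clip length should be 10 to 20 frames"
    assert with_subtitle / n_frames > 0.8, "over 80% of frames should have subtitles"


check_clip("train/1")  # hypothetical clip path
```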

## Citation

If you find our dataset helpful, please cite us in your research.