Associated with the paper: WikiVideo (https://arxiv.org/abs/2504.00939)

Associated with the GitHub repository (https://github.com/alexmartin1722/wikivideo)
### Download instructions
The dataset can be found on [huggingface](https://huggingface.co/datasets/hltcoe/wikivideo). However, you can't use the `datasets` library to access the videos, because everything is tarred. Instead, you need to download the dataset locally and then untar the videos (and the audios, if you use those).
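
If you'd rather not clone with git at all, one alternative (a sketch, assuming a recent `huggingface_hub` installation that provides the `huggingface-cli download` command) is:
```bash
# download the full dataset repo to a local folder
huggingface-cli download hltcoe/wikivideo --repo-type dataset --local-dir wikivideo
```
Otherwise, follow the git-lfs steps below.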

#### Step 1: Install git-lfs
First, make sure that git-lfs is installed; otherwise you won't be able to pull the video and audio tar files.
```bash
git lfs install
```
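
Note that `git lfs install` only enables the git-lfs hooks for your user; if the `git lfs` command itself is missing, install the binary first. For example (the package names below are the usual ones, but check your platform):
```bash
# Debian/Ubuntu
sudo apt-get install git-lfs
# macOS (Homebrew)
brew install git-lfs
```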

#### Step 2: Clone the dataset
After enabling git-lfs, you can pull the dataset from huggingface.
```bash
git clone https://huggingface.co/datasets/hltcoe/wikivideo
```
I would also run this in tmux, since the clone can take a while.
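
For example, a minimal tmux pattern for this (the session name `wikivideo` is arbitrary):
```bash
# run the clone in a detached session so it survives disconnects
tmux new-session -d -s wikivideo 'git clone https://huggingface.co/datasets/hltcoe/wikivideo'
# reattach later to check on progress
tmux attach -t wikivideo
```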

#### Step 3: Untar the videos
In the `data/` folder, you will see places for `data/audios` and `data/videos`. You need to untar the videos and audios into these folders: the audio archive is `audios.tar.gz` and the video archive is `videos.tar.gz`.
```bash
# make sure the target folders exist (harmless if they already do)
mkdir -p data/videos data/audios
# untar the videos
tar -xvzf videos.tar.gz -C data/videos
# untar the audios
tar -xvzf audios.tar.gz -C data/audios
```
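
As a quick sanity check that the archives extracted where you expect (exact file names and counts will depend on the release):
```bash
# spot-check a few extracted files
ls data/videos | head
ls data/audios | head
# rough file counts
find data/videos -type f | wc -l
find data/audios -type f | wc -l
```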

#### Finish
Now you should be done. You will also see an `annotations` folder in the huggingface repo, but its contents already exist locally in the `data/wikivideo` directory.


### Dataset Format
In the `data/wikivideo` directory, you will find the file `final_data.json`, which contains the articles. This file is formatted as a dictionary:
```json
{
  "Wikipedia Title": {
    "article": "The article text",
    "query_id": "test_pt2_query_XXX",
    "original_article": ["sent1", "sent2", ...],
    "audio_lists": [[true, false, true, ...], ...],
    "video_lists": [[true, false, true, ...], ...],
    "ocr_lists": [[true, false, true, ...], ...],
    "neither_lists": [[true, false, true, ...], ...],
    "video_ocr_lists": [[true, false, true, ...], ...],
    "claims": [["claim1", "claim2", ...], ...],
    "videos": {
      "video_id": {
        "anon_scale_id": "XXX",
        "language": "english",
        "video_type": "Professional | Edited | Diet Raw | Raw",
        "relevance": 3
      }
    }
  },
  ...
}
```
In this JSON, the top-level key is the Wikipedia article title. Each other key is defined as follows (a small inspection example follows the list):
- `article`: This is the human-written article on the topic using the video data.
- `query_id`: This is the query ID for the article from the MultiVENT 2.0 dataset. This will be helpful when doing RAG experiments.
- `original_article`: This is the original Wikipedia article from MegaWika 2.0. It is sentence tokenized. Each sentence corresponds to an index in the `audio_lists`, `video_lists`, `ocr_lists`, `neither_lists`, `video_ocr_lists`, and `claims` lists.
- `claims`: This is a list of lists of claims from the article. Each outer index corresponds to a sentence in the original article, and each index in the sublist corresponds to a human-written claim from that sentence. These claims correspond to the boolean elements in `audio_lists`, `video_lists`, `ocr_lists`, `neither_lists`, and `video_ocr_lists`.
- `audio_lists`: This is a list of lists of booleans. Each outer index corresponds to a sentence in the original article, and each index in the sublist corresponds to a claim. If the boolean is `true`, the claim is supported by the audio; otherwise it is not.
- `video_lists`: Same structure; if the boolean is `true`, the claim is supported by the video.
- `ocr_lists`: Same structure; if the boolean is `true`, the claim is supported by the OCR.
- `neither_lists`: Same structure; if the boolean is `true`, the claim is not supported by any of the modalities.
- `video_ocr_lists`: Same structure; if the boolean is `true`, the claim is supported by the video or the OCR.
- `videos`: This is a dictionary of videos relevant to the query topic. The key is the video ID and the value is a dictionary of video metadata, defined as follows:
  - `anon_scale_id`: This is the anonymous ID used in MultiVENT 2.0 for the video. This will help you deduplicate the test set of the dataset when doing video retrieval in the RAG experiments.
  - `language`: This is the language of the video.
  - `video_type`: This is the type of video.
  - `relevance`: This is the relevance of the video to the article, ranging from 0-3.
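
As a quick way to poke at this structure from the shell, here is a small sketch using `jq` (assuming `jq` is installed; paths follow the layout above):
```bash
# list the Wikipedia article titles (the top-level keys)
jq -r 'keys[]' data/wikivideo/final_data.json | head
# summarize the first entry: query id, sentence count, video count
jq -r 'to_entries[0].value | "\(.query_id): \(.original_article|length) sentences, \(.videos|length) videos"' data/wikivideo/final_data.json
```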

# RAG Data
The videos that we used as the distractor set (these videos are also included in `videos.tar.gz`) can be found in MultiVENT 2.0 (https://huggingface.co/datasets/hltcoe/MultiVENT2.0).
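
For example, a minimal sketch of collecting the `anon_scale_id`s referenced by WikiVideo, so they can be excluded from the MultiVENT 2.0 distractor pool during retrieval (the output path is arbitrary):
```bash
# gather every anon_scale_id mentioned in the annotations
jq -r '.[].videos[].anon_scale_id' data/wikivideo/final_data.json | sort -u > wikivideo_ids.txt
# filter these ids out of your MultiVENT 2.0 candidate list before retrieval
```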