---
license: cc
---
# Google WIT Vietnamese

This repository contains data extracted from [Google WIT](https://github.com/google-research-datasets/wit/blob/main/DATA.md). All of the extracted data is for the Vietnamese language.

Given a data point `x` from the original dataset, whose keys follow the original `field_name` values, the filter criterion is:

```python
criteria = lambda x: x.get("language", "") == "vi" and x.get("caption_reference_description", "")
```

## Text-related details

All `.tsv.gz` files follow the original data files in terms of file names and file structure.

### Train split

`wit_v1.train.*.tsv.gz`
Number of train rows in each file (excluding the header):
```
17690
17756
17810
17724
17619
17494
17624
17696
17777
17562
```
Total: 176752

### Validation split

`wit_v1.val.*.tsv.gz`

Number of validation rows in each file (excluding the header):
```
292
273
275
320
306
```
Total: 1466

### Test split

`wit_v1.test.*.tsv.gz`

Number of test rows in each file (excluding the header):
```
215
202
201
201
229
```
Total: 1048

## Image-related details

### Image URL only

The `*.image_url_list.txt` files are simply lists of the image URLs found in the `*.tsv.gz` files.
Number of image URLs in each file (train, val, test, all):
```
157281
1271
900
159452
```
Google Research has ensured that the splits do not share any images.

### Downloaded Images

⚠ Please, for the love of the gods, read this section carefully.

In `all.index.fmt_id.image_url_list.tsv`, the columns (left to right, no header) are `index`, `fmt_id`, `image_url`. The file maps each `image_url` (in `all.image_url_list.txt`) to a `fmt_id`, and exists for downloading images.

`fmt_id` is:
- used to name images (with proper image extensions) in `images/`
- `index` zero-padded to six digits
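Assuming `fmt_id` is simply the row index zero-padded to six digits (an interpretation of the padding rule above; six digits are wide enough for all 159452 URLs), the mapping can be sketched as:

```python
def to_fmt_id(index: int) -> str:
    # Assumption: fmt_id is the row index zero-padded to six digits,
    # covering indices 0..159451.
    return f"{index:06d}"
```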

Downloading took less than 36 hours with:
- a 90 Mbps connection
- an Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz (1.99 GHz)
- no asynchronous requests (sequential downloads)

In `fail.index.fmt_id.status.image_url_list.tsv`, the columns (left to right, no header) are `index`, `fmt_id`, `status`, `image_url`. The file tracks image URLs that were inaccessible during downloading.

3367 image URLs returned 404 (the `status` value). In other words, 97.88839275% of the images (156085 of 159452) were downloaded successfully.

The `images/` folder takes up:
- 215 GB uncompressed
- 209 GB compressed

We open every downloaded image with Pillow to make sure it is usable, and log all faulty files in `corrupted_image_list.json`. Fewer than 70 image files are affected.

Each item in `corrupted_image_list.json` has the keys `file_name` and `error`. `file_name` is the `fmt_id` with its extension, without the `images/` prefix. The errors are either:
- the file exceeds Pillow's default pixel limit
- the file is truncated

To actually load those files, the following code changes Pillow's behavior:

```python
from PIL import Image, ImageFile

# For very big image files
Image.MAX_IMAGE_PIXELS = None

# For truncated image files
ImageFile.LOAD_TRUNCATED_IMAGES = True
```

To zip the `images/` folder:

```bash
zip -r images.zip images/
zip images.zip --out spanned_images.zip -s 40g
```

https://superuser.com/questions/336219/how-do-i-split-a-zip-file-into-multiple-segments

To unzip the `spanned_images.*` files:

```bash
zip -s 0 spanned_images.zip --out images.zip
unzip images.zip
```

https://unix.stackexchange.com/questions/40480/how-to-unzip-a-multipart-spanned-zip-on-linux
|