Commit 929ee7b (1 parent: 7148c6a) by davidanugraha

Initial commit to VQA
.gitattributes CHANGED
@@ -9,7 +9,6 @@
 *.joblib filter=lfs diff=lfs merge=lfs -text
 *.lfs.* filter=lfs diff=lfs merge=lfs -text
 *.lz4 filter=lfs diff=lfs merge=lfs -text
-*.mds filter=lfs diff=lfs merge=lfs -text
 *.mlmodel filter=lfs diff=lfs merge=lfs -text
 *.model filter=lfs diff=lfs merge=lfs -text
 *.msgpack filter=lfs diff=lfs merge=lfs -text
@@ -57,3 +56,34 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/ar_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/az_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/bn_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/cs_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/en_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/es_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/fr_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/hi_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/id_casual_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/id_formal_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/it_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/ja_casual_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/ja_formal_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/jv_krama_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/jv_ngoko_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/ko_casual_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/ko_formal_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/mr_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/nan_spoken_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/nan_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/ru_casual_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/ru_formal_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/sc_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/si_formal_spoken_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/su_loma_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/th_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/tl_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/yo_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/yue_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+hf_prompt/train_task1/zh_cn_train_task1.jsonl filter=lfs diff=lfs merge=lfs -text
+*.jsonl filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
---
license: cc-by-sa-4.0
language:
- eng
- ind
- zho
- kor
- jpn
- sun
- jav
- ces
- spa
- fra
- ara
- hin
- ben
- mar
- sin
- yor
- yue
- nan
- tgl
- tha
- aze
- rus
- ita
- srd
multilinguality:
- multilingual
language_details: >-
  en, id_formal, id_casual, zh_cn, ko_formal, ko_casual, ja_formal, ja_casual,
  su_loma, jv_krama, jv_ngoko, cs, es, fr, ar, hi, bn, mr, si_formal_spoken, yo,
  yue, nan, nan_spoken, tl, th, az, ru_formal, ru_casual, it, sc
configs:
- config_name: task1
  data_files:
  - split: test_large
    path: hf_prompt/large_eval_task1/*
  - split: test_small
    path: hf_prompt/small_eval_task1/*
  - split: train
    path: hf_prompt/train_task1/*
- config_name: task2
  data_files:
  - split: test_large
    path: hf_prompt/large_eval_task2/*
  - split: test_small
    path: hf_prompt/small_eval_task2/*
  - split: train
    path: hf_prompt/train_task2/*
---

# WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines

![Overview](./images_readme/tasks.png)

WorldCuisines is a massive-scale visual question answering (VQA) benchmark for multilingual and multicultural understanding through global cuisines. The dataset contains text-image pairs across 30 languages and dialects, spanning 9 language families and featuring over 1 million data points, making it the largest multicultural VQA benchmark as of 17 October 2024.
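
The front matter above exposes each task as a dataset config (`task1`, `task2`) with `train`, `test_small`, and `test_large` splits. Below is a minimal loading sketch using the Hugging Face `datasets` library; the repository ID is an assumption, so substitute this dataset's actual path on the Hub:

```python
# Minimal loading sketch; "worldcuisines/vqa" is an assumed repository ID.
from datasets import load_dataset

# Task 1 (dish name prediction): training split.
train = load_dataset("worldcuisines/vqa", "task1", split="train")

# Task 2 (location prediction): small evaluation split.
eval_small = load_dataset("worldcuisines/vqa", "task2", split="test_small")

print(train[0])  # one record: image reference, question text, and answer text
```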

## Overview

We develop both a VQA dataset (**WC-VQA**) and a curated knowledge base of world cuisines (**WC-KB**). **WC-VQA** is constructed from **WC-KB**, which serves as the primary data source. We design two tasks:

#### Task 1: Dish Name Prediction
This task requires predicting the name of a dish from its image, a question, and contextual information. It is divided into three subtasks, each with a distinct query type:
- (a) **No-context question**: Predict the dish name without additional context.
- (b) **Contextualized question**: Predict the dish name given contextual information.
- (c) **Adversarial contextualized question**: Predict the dish name given misleading or adversarial context.

#### Task 2: Location Prediction
This task involves predicting the location where a dish originated or is commonly consumed, based on its image, a question, and contextual information.

**WC-KB** covers 2,414 dishes worldwide, with 6,045 images and accompanying metadata spanning both coarse-grained (e.g., stew) and fine-grained (e.g., beef stew) categories, locations, and regional cuisines. It also features multilingual translations of 90 crowd-sourced prompt templates and 401 parallel data entries covering location and regional cuisine information.

From **WC-KB**, we construct **WC-VQA**, a multilingual parallel VQA dataset of 1 million samples spanning 30 languages and dialects, including multiple varieties and registers (such as formal and casual styles), with high-quality human annotations. The VQA data is designed to evaluate models' ability to understand cultural food names and their origins.

We provide **WC-VQA** evaluation sets in two sizes (12,000 and 60,000 instances) alongside a training set (1,080,000 instances). The table below gives detailed statistics on the number of VQA instances and images in each data split.

![StatisticsTable](./images_readme/stats_table.png)

## Dataset Construction

Our data sources are gathered from [Wikipedia](https://wikipedia.org) and [Wikimedia Commons](https://commons.wikimedia.org) so that they can be easily redistributed under an accepted open-source license. The construction process involves four key steps:
1. Dish selection
2. Metadata annotation
3. Quality assurance
4. Data compilation

### Dish Selection

We compile a comprehensive list of dish names from Wikipedia. We manually review pages that list dishes to determine whether each one is a specialty unique to a specific culture, since we focus on dishes with distinct cultural significance. We exclude generic categories, such as ice cream, that lack a specific cultural association. We also require that each dish has its own dedicated Wikipedia page; dishes without one are excluded. This meticulous approach ensures that our dataset is both culturally relevant and well-documented.

### Metadata Annotation

Given a dish name and a link to its Wikipedia page, annotators manually compile metadata from the provided information. This metadata includes:
- **Visual Representation**: Images sourced from Wikimedia Commons, along with their license information.
- **Categorization**: Both a coarse-grained (e.g., rice, bread) and a fine-grained (e.g., fried rice, flatbread) category.
- **Description**: A description of the dish based on its Wikipedia page, avoiding the dish's name, origin, or any distinctive keywords that uniquely identify it.
- **Cuisine**: The dish's cuisine of origin and any cuisines with which it is strongly associated.
- **Geographic Distribution**: The dish's associated countries, area (city or region), and broader continental region.
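
Put together, a single KB entry might look like the sketch below; every field name here is hypothetical, chosen to mirror the list above rather than the released schema:

```python
# Hypothetical WC-KB entry; field names are illustrative, not the real schema.
kb_record = {
    "name": "Rendang",
    "images": [
        {"source": "https://commons.wikimedia.org/",  # per-image Commons link
         "license": "CC BY-SA 4.0"},
    ],
    "coarse_category": "stew",
    "fine_category": "beef stew",
    "description": "Meat slow-cooked in coconut milk and a ground spice paste.",
    "origin_cuisine": "Minangkabau",
    "associated_cuisines": ["Indonesian", "Malaysian"],
    "countries": ["Indonesia"],
    "area": "West Sumatra",
    "region": "Southeast Asia",
}
```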

### Quality Assurance

Before beginning the quality assurance process, we identify common issues that arise during annotation and develop automated rules to detect easily identifiable errors, such as incorrect string formatting (a sketch of such a check follows this list). Annotators are then asked to correct these errors. To further ensure data quality and validity, we conduct several rounds of quality assurance:

1. **Image Quality**: We remove instances whose images are blurry, dark, or contain distracting elements such as people or other dishes. We also verify image licenses by cross-referencing them with Wikimedia Commons.
2. **Categorization and Descriptions**: We refine dish categorization and descriptions, ensuring consistent category assignments and keeping descriptions free of "information breaches" (e.g., regional details leaking into the description).
3. **Cuisine Names and Geographic Accuracy**: We standardize cuisine names and meticulously review all country and area information for accuracy.

This comprehensive approach guarantees the integrity and reliability of our dataset.
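
A minimal sketch of such an automated check, with assumed rules and field names (matching the hypothetical record above, not the authors' actual validation suite):

```python
# Sketch of automated annotation checks; rules and fields are assumptions.
def find_formatting_errors(record: dict) -> list[str]:
    errors = []
    # String-formatting rule: location names should be trimmed and capitalized.
    for country in record.get("countries", []):
        if country != country.strip() or not country[:1].isupper():
            errors.append(f"badly formatted country name: {country!r}")
    # "Information breach" rule: the description must not leak the dish name.
    if record["name"].lower() in record["description"].lower():
        errors.append("description leaks the dish name")
    return errors

print(find_formatting_errors({"name": "Rendang",
                              "description": "Meat slow-cooked in coconut milk.",
                              "countries": [" indonesia"]}))
# -> ["badly formatted country name: ' indonesia'"]
```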

### Data Compilation

In this phase, we review the quality checks performed by annotators, identifying any inconsistencies missed during quality assurance. We then compile the dataset by collecting the metadata into a single file.

## VQA Generation

In this phase, we generate VQA data by sampling from **WC-KB**. A VQA entry comprises a visual image, question text, and answer text. The process involves four stages:

1. Conducting a similarity search over dish names
2. Constructing questions and contexts
3. Translating these elements into multiple languages
4. Generating the VQA triplets

### Dish Name Similarity Search

To identify similar dishes in our dataset, we follow the approach of Winata et al. (2024) and employ the multilingual model E5$_\text{LARGE}$ Instruct (Wang et al., 2024) to compute text embeddings. Formally, given a dish $x$ with name $x_{\text{name}}$ and text description $x_{\text{desc}}$, we use a multilingual model $\theta$ to compute the embedding vector $v_x = \theta(\{x_\text{name}; x_\text{desc}\})$, and then apply cosine similarity to compute a score $s = \text{similarity}(v_i, v_j)$ between dishes $i$ and $j$. For each dish, we take the top-$k$ most similar dishes as distractors for the multiple-choice questions.

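A sketch of this step, assuming the `intfloat/multilingual-e5-large-instruct` checkpoint loaded through `sentence-transformers` (the authors' exact inference setup is not specified here):

```python
# Distractor selection via embedding similarity; inference setup is assumed.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

dishes = [
    {"name": "Rendang", "desc": "Meat slow-cooked in coconut milk and spices."},
    {"name": "Gulai", "desc": "Curry-like stew with coconut milk and spices."},
    # ... one entry per dish in WC-KB
]

# v_x = theta({x_name; x_desc}); normalized embeddings make the dot product
# equal to cosine similarity.
texts = [f"{d['name']}; {d['desc']}" for d in dishes]
emb = model.encode(texts, normalize_embeddings=True)

def top_k_similar(i: int, k: int = 3) -> list[int]:
    """Indices of the k most similar dishes to dish i (k < number of dishes)."""
    scores = emb @ emb[i]
    scores[i] = -np.inf  # never pick the dish itself
    return np.argsort(-scores)[:k].tolist()

print([dishes[j]["name"] for j in top_k_similar(0, k=1)])
```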

### Question and Context Construction

Dish name prediction (Task 1) has three question variations, depending on the context:

1. **No-context question**: Simply asks for the name of the dish, without any provided context.
2. **Contextualized question**: Provides additional information related to the cuisine or location.
3. **Adversarial contextualized question**: Like the contextualized question, but may include misleading location information to assess the model's robustness to irrelevant details.

For location prediction (Task 2), only a basic question without provided context is used.

### Multiple Language Translation

#### Question and Context

All questions and contexts are initially collected in English and then carefully translated by native speakers into 30 language varieties: 23 languages, 7 of which have two varieties. Translators prioritize naturalness, and aim for diverse translations when duplicates occur.

#### Food Name Alias

Using Wikipedia pages as our primary source, we check whether the English page has counterparts in other languages. This lets us extract dish names in multiple languages and compile them as translations for each dish, using both the Wikipedia page titles in those languages and alias text from the English page. These translations improve cultural relevance and accuracy in multilingual prompts. When no translation is available, we default to the English name. A sketch of this lookup appears below.

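The following sketch mirrors the alias-collection process using the public MediaWiki `langlinks` API; it illustrates the idea and is not the authors' actual pipeline:

```python
# Collect dish-name aliases from Wikipedia language links (illustrative only).
import requests

def dish_aliases(english_title: str) -> dict[str, str]:
    """Map language code -> localized page title for a dish."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "langlinks",
            "titles": english_title,
            "lllimit": "500",
            "format": "json",
        },
        timeout=30,
    )
    pages = resp.json()["query"]["pages"]
    links = next(iter(pages.values())).get("langlinks", [])
    return {link["lang"]: link["*"] for link in links}

# Fall back to the English name when no translation exists.
print(dish_aliases("Rendang").get("id", "Rendang"))
```
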
#### Locations and Cuisines

With over 400 unique locations, including countries, cities, and areas, we first translate the English location names into the other languages using GPT-4, followed by proofreading by native speakers. Regional cuisine names (in English, the adjectival form of the location) are translated in the same manner.

#### Morphological Inflections

In languages with rich inflectional morphology (e.g., Czech or Spanish), words change form to express grammatical categories such as number, gender, or case. We provide a framework that lets human translators use natural inflections in the prompt templates while keeping the number of inflected forms to a minimum.

### Generating VQA Triplets

To ensure no overlap between the training and test subsets, we split the dishes and multilingual questions into two disjoint subsets. Within each subset, we randomly sample dish-question pairs, use the dish's KB entry to select an image, and inject the location into the context where applicable. Answer candidates for multiple-choice questions are selected via the similarity search described above. We repeat this process until the desired number of training or test samples is reached, discarding duplicates. A sampling sketch follows.
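
A sketch of the sampling loop under assumed record schemas; none of these names come from the authors' code, and `similar_dish_names` stands in for the similarity search sketched earlier:

```python
# Triplet-sampling sketch; record fields and helpers are assumptions.
import random

def generate_vqa(dishes, questions, n_samples, similar_dish_names, seed=0):
    rng = random.Random(seed)
    samples, seen = [], set()
    while len(samples) < n_samples:
        dish = rng.choice(dishes)
        q = rng.choice(questions)
        key = (dish["name"], q["id"])
        if key in seen:  # discard duplicate dish-question pairs
            continue
        seen.add(key)
        samples.append({
            "image": rng.choice(dish["images"]),  # from the KB entry
            "question": q["template"].format(location=dish["area"]),  # inject context
            "answer": dish["name"],
            "choices": [dish["name"]] + similar_dish_names(dish),  # distractors
        })
    return samples
```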

## Ethical Considerations

Our research focuses on evaluating VLMs in multilingual and multicultural VQA, a setting with significant implications for diverse multilingual communities. We are committed to conducting our data collection and evaluations with the highest standards of transparency and fairness. To this end, we adopt a crowd-sourcing approach for annotation, inviting volunteers to contribute and become co-authors if they make significant contributions; we follow the ACL guidelines for authorship eligibility (https://www.aclweb.org/adminwiki/index.php/Authorship_Changes_Policy_for_ACL_Conference_Papers). In line with our commitment to openness and collaboration, we release the dataset under an open license, CC BY-SA 4.0.

## Contact

E-mail: [Genta Indra Winata](mailto:genta.winata@capitalone.com) and [Frederikus Hudi](mailto:frederikus.hudi.fe7@is.naist.jp)

## Citation

If you find this dataset useful, please cite the following work:

```bibtex
@article{winata2024worldcuisines,
  title={WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines},
  author={Winata, Genta Indra and Hudi, Frederikus and Irawan, Patrick Amadeus and Anugraha, David and Putri, Rifki Afina and Wang, Yutong and Nohejl, Adam and Prathama, Ubaidillah Ariq and Ousidhoum, Nedjma and Amriani, Afifa and others},
  journal={arXiv preprint arXiv:2410.12705},
  year={2024}
}
```
images_readme/stats_table.png ADDED

Git LFS Details

  • SHA256: a7d565a465d736aef839cbac03facbae8704542d02050e9caefa8ad633257821
  • Pointer size: 130 Bytes
  • Size of remote file: 22.8 kB
images_readme/tasks.png ADDED

Git LFS Details

  • SHA256: 290743912c7a728b70bb0e601aca0dd3bdf85fb81e73c3eccef65cf694bac163
  • Pointer size: 131 Bytes
  • Size of remote file: 600 kB