# CVQA for VLMEvalKit
- The original CVQA dataset, ported to the VLMEvalKit TSV format.
- From the original authors:

  > CVQA is a culturally diverse multilingual VQA benchmark consisting of over 10,000 questions from 39 country-language pairs. The questions in CVQA are written in both the native languages and English, and are categorized into 10 diverse categories.
An example record from the original dataset:

```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2048x1536 at 0x7C3E0EBEEE00>,
 'ID': '5919991144272485961_0',
 'Subset': "('Japanese', 'Japan')",
 'Question': '写真に写っているキャラクターの名前は? ',
 'Translated Question': 'What is the name of the object in the picture? ',
 'Options': ['コスモ星丸', 'ミャクミャク', ' フリービー ', 'ハイバオ'],
 'Translated Options': ['Cosmo Hoshimaru', 'MYAKU-MYAKU', 'Freebie ', 'Haibao'],
 'Label': -1,
 'Category': 'Objects / materials / clothing',
 'Image Type': 'Self',
 'Image Source': 'Self-open',
 'License': 'CC BY-SA'}
```
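A record like the one above can be fetched directly with the `datasets` library. A minimal sketch; the repository ID `afaji/cvqa` and the split name are assumptions, so substitute the actual source dataset:

```python
from datasets import load_dataset

# Repo ID and split are assumptions; adjust to the actual source dataset.
ds = load_dataset("afaji/cvqa", split="test")

record = ds[0]
print(record["Question"], record["Options"])  # native-language fields
print(record["Translated Question"])          # English translation
```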
To support VLMEvalKit, two TSV files were created to represent the two versions of CVQA:
- The localised (LOC) version: questions and answer options are in each subset's native language, for evaluating multilingual models.
- The English (ENG) version: questions and options use the English translations, although the questions still concern non-English-speaking cultures, for evaluating models trained primarily on English.
TSV data columns for the LOC and ENG VLMEvalKit files:
- index (int, based on dataset order; the original CVQA IDs are not used since they are of type str)
- image (base64-encoded)
- question
- A option
- B option
- C option
- D option
- l2-category (the CVQA `Subset`)
- split (always `test`)
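To sanity-check one of the TSVs, it can be loaded with pandas and the image column decoded back into a PIL image. A minimal sketch, assuming a file name of `CVQA_LOC.tsv` and the VLMEvalKit convention that the option columns are named literally `A` through `D`:

```python
import base64
import io

import pandas as pd
from PIL import Image

# File name is an assumption; use the actual TSV in this repo.
df = pd.read_csv("CVQA_LOC.tsv", sep="\t")
row = df.iloc[0]

print(row["index"], row["l2-category"], row["split"])
print(row["question"], row["A"], row["B"], row["C"], row["D"])

# Decode the base64-encoded image column back into a PIL image.
img = Image.open(io.BytesIO(base64.b64decode(row["image"])))
print(img.size)
```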
## Info
- Proposed method of evaluation:
  - Prompt the model to answer only with the correct option letter (one of [A, B, C, D]).
  - Use regex or string search to locate the answer letter in the model's output (see the sketch below).
  - Alternatively, use an LLM-as-a-judge to identify the answer letter, although that is overkill for this task.
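A minimal sketch of the regex-based extraction step above; the matching rule and the sample outputs are illustrative assumptions, not part of this dataset:

```python
import re

def extract_choice(model_output: str) -> str | None:
    """Return the first standalone option letter A-D in the output, if any."""
    match = re.search(r"\b([ABCD])\b", model_output.strip())
    return match.group(1) if match else None

# Illustrative outputs; a real run would collect these from the model under test.
print(extract_choice("B"))                 # -> B
print(extract_choice("The answer is C."))  # -> C
print(extract_choice("I am not sure."))    # -> None
```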
- The original CVQA dataset numbers the options as [0, 1, 2, 3]; this has been changed to [A, B, C, D] to follow the VLMEvalKit standard, as in the conversion sketch below. This should have little effect on performance.
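The index-to-letter conversion is a fixed offset into the alphabet; a minimal sketch:

```python
def label_to_letter(label: int) -> str:
    """Map CVQA's 0-3 option indices to VLMEvalKit's A-D letters."""
    if not 0 <= label <= 3:
        # e.g. the -1 seen in the example record above
        raise ValueError(f"unexpected label: {label}")
    return chr(ord("A") + label)

assert [label_to_letter(i) for i in range(4)] == ["A", "B", "C", "D"]
```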