---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question_id
    dtype: string
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: options
    dtype: string
  - name: difficulty
    dtype: string
  - name: category
    dtype: string
  - name: default_prompt
    dtype: string
  splits:
  - name: test
    num_bytes: 9393666010.68
    num_examples: 2720
  download_size: 630547630
  dataset_size: 9393666010.68
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-sa-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- art
pretty_name: VisualOverload
---
# VisualOverload

## 📂 Load the dataset
The easiest way to load the dataset is to use HuggingFace's `datasets` library.

```python
from datasets import load_dataset

vol_dataset = load_dataset("paulgavrikov/visualoverload")
```
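As a quick sanity check, you can inspect the loaded data. The snippet below is a minimal sketch that only relies on the `test` split and the fields documented on this card.

```python
# Minimal sketch: inspect the test split and the first sample.
test_split = vol_dataset["test"]
print(len(test_split))         # 2720 examples
sample = test_split[0]
print(sample.keys())           # question_id, image, question, ...
print(sample["image"].size)    # the image is decoded as a PIL image
```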
Each sample contains the following fields:

- `question_id`: Unique identifier of each question.
- `image`: A PIL JPEG image. Most of our images match the total pixel count of 4K (3840x2160 px) in different aspect ratios.
- `question`: A question about the image.
- `question_type`: Type of question. Will be one of `choice` (response expected to be "A", "B", "C", or "D"), `counting` (freeform), or `ocr` (freeform). You can use this information to request a suitable output format (see the sketch after this list).
- `options`: The list of options for `question_type=choice` and empty otherwise. Please treat the options as answer options `A, B, C, D` (4 options) or `A, B` (2 options).
- `difficulty`: Metadata about the difficulty of the question. One of `easy`, `medium`, or `hard`.
- `category`: Metadata about the question task. One of `activity`, `attributes`, `counting`, `ocr`, `reasoning`, or `scene`.
- `default_prompt`: You can use this prompt to stay compliant with our results. It is a simple combination of the question and answers, with some additional output format constraints. This should work well for most models.
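If you prefer to build your own prompts instead of using `default_prompt`, one option is to branch on `question_type`. The sketch below is illustrative only: the `build_prompt` helper and the exact instruction wording are assumptions, not part of the dataset.

```python
# Minimal sketch: derive a prompt from the fields above. The helper name and
# instruction wording are illustrative assumptions; using
# sample["default_prompt"] directly is the simpler default.
def build_prompt(sample):
    if sample["question_type"] == "choice":
        # Multiple choice: ask for a single option letter.
        return (
            f"{sample['question']}\nOptions: {sample['options']}\n"
            "Answer with a single letter (A, B, C, or D)."
        )
    # counting / ocr questions expect a short free-form answer.
    return f"{sample['question']}\nAnswer with a short free-form response."

sample = vol_dataset["test"][0]
print(build_prompt(sample))
```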
## 🎯 Evaluate your model
Please see GitHub for an example evaluation script that generates a correct submission JSON.
All of our ground truth labels are private. The only way to score your submission is to use the evaluation server. You will need to sign in with a HuggingFace account.
Your predictions should be a list of dictionaries, each containing a `question_id` field and a `response` field. For multiple-choice questions, the `response` field should contain the predicted option letter (A-D). For open-ended questions, the `response` field should contain the predicted answer. We will apply simple heuristics to clean the responses, but please ensure they are as accurate as possible.
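As a rough illustration, a submission file in this shape could be written as sketched below. The example `question_id` values, responses, and the output file name are placeholders, not real identifiers.

```python
import json

# Minimal sketch: write predictions in the expected shape.
# The ids, responses, and file name below are placeholders.
predictions = [
    {"question_id": "example_0001", "response": "A"},  # choice question
    {"question_id": "example_0002", "response": "7"},  # counting question
]

with open("submission.json", "w") as f:
    json.dump(predictions, f)
```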
## 🏆 Submit to the leaderboard

We welcome all submissions of models or methods (including prompting-based approaches) to our dataset. Please create a GitHub issue following the template and include your predictions as JSON.