---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question_id
    dtype: string
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: options
    dtype: string
  - name: difficulty
    dtype: string
  - name: category
    dtype: string
  - name: default_prompt
    dtype: string
  splits:
  - name: test
    num_bytes: 9393666010.68
    num_examples: 2720
  download_size: 630547630
  dataset_size: 9393666010.68
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-sa-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- art
pretty_name: VisualOverload
---
# VisualOverload
<p align="center">
<img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/logo.jpg?raw=true" width="400">
</p>
Is basic image understanding really solved in state-of-the-art VLMs? We present VisualOverload, a slightly different visual question answering (VQA) benchmark comprising 2,720 question–answer pairs with privately held ground-truth responses. Unlike prior VQA datasets, which typically focus on near-global image understanding, VisualOverload challenges models to perform simple, knowledge-free visual understanding and reasoning about details in densely populated (or, *overloaded*) scenes. Our dataset consists of high-resolution scans of public-domain paintings that are populated with multiple figures, actions, and unfolding subplots set against elaborately detailed backdrops. Questions were handcrafted to probe for a thorough understanding of the scene.

## 📂 Load the dataset

The easiest way to load the dataset is with HuggingFace's `datasets` library.

```python
from datasets import load_dataset

vol_dataset = load_dataset("paulgavrikov/visualoverload")
```
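
The dataset ships a single `test` split, so a quick sanity check after loading might look like this:

```python
test_split = vol_dataset["test"]
print(len(test_split))  # 2720 question-answer pairs
print(test_split[0]["question_id"], test_split[0]["question_type"])
```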

Each sample contains the following fields (a short usage sketch follows the list):

- `question_id`: Unique identifier of each question. 
- `image`: A PIL JPEG image. Most of our images match the total pixel count of 4K (3840×2160 px) at various aspect ratios. 
- `question`: A question about the image.
- `question_type`: Type of question. Will be one of `choice` (response expected to be "A", "B", "C", or "D"), `counting` (freeform), or `ocr` (freeform). You can use this information to request a suitable output format. 
- `options`: The list of options for `question_type=choice`, empty otherwise. Treat the options as answer choices `A, B, C, D` (4 options) or `A, B` (2 options).
- `difficulty`: Meta-data about the difficulty of the question. One of `easy`, `medium`, or `hard`.
- `category`:  Meta-data about the question task. One of `activity`, `attributes`, `counting`, `ocr`, `reasoning`, or `scene`.
- `default_prompt`: You can use this prompt to stay compliant with our results. It is a simple combination of the question and answer options, with some additional output-format constraints. This should work well for most models.
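
To illustrate how these fields fit together, here is a minimal sketch that pulls one sample and picks an expected output format based on `question_type` (the format strings are illustrative, not the official evaluation logic):

```python
sample = vol_dataset["test"][0]

# `default_prompt` already combines the question, options, and
# output-format constraints, so it can be passed to a model as-is.
prompt = sample["default_prompt"]
image = sample["image"]  # PIL image, roughly 4K total pixel count

# The expected response format depends on the question type.
if sample["question_type"] == "choice":
    expected = "a single option letter (A, B, C, or D)"
else:  # `counting` and `ocr` are free-form
    expected = "a free-form answer"

print(f"{sample['question_id']} ({sample['category']}): expects {expected}")
```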

## 🎯 Evaluate your model

Please see [GitHub](https://github.com/paulgavrikov/visualoverload/) for an example evaluation script that generates a correctly formatted submission JSON.

All of our ground truth labels are private. The only way to score your submission is to use the [evaluation server](https://huggingface.co/spaces/paulgavrikov/visualoverload-submit). You will need to sign in with a HuggingFace account.  

Your predictions should be a list of dictionaries, each containing a `question_id` field and a `response` field. For multiple-choice questions, the `response` field should contain the predicted option letter (A-D). For open-ended questions (`counting` and `ocr`), it should contain the free-form answer. We will apply simple heuristics to clean the responses, but please ensure they are as accurate as possible.

Example: 
```json
[
    {"question_id": "28deb79e", "response": "A"}, 
    {"question_id": "73cbabd7", "response": "C"}, 
    ...
]
```
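
As a rough sketch, a submission file could be assembled like this, where `run_model` is a hypothetical placeholder for your own inference code (the example script on GitHub remains the authoritative reference):

```python
import json

from datasets import load_dataset

dataset = load_dataset("paulgavrikov/visualoverload", split="test")

predictions = []
for sample in dataset:
    # `run_model` is a hypothetical stand-in for your own inference;
    # it should return the response string for one sample.
    response = run_model(sample["image"], sample["default_prompt"])
    predictions.append(
        {"question_id": sample["question_id"], "response": response}
    )

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```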
## 🏆 Submit to the leaderboard
We welcome all model *or* method submissions (including prompting-based approaches) to our benchmark. Please create a [GitHub issue](https://github.com/paulgavrikov/visualoverload/issues) following the template and include your predictions as JSON.