Update README.md
README.md CHANGED
@@ -37,3 +37,43 @@ tags:
- art
pretty_name: VisualOverload
---
# VisualOverload
<p align="center">
<img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/logo.jpg?raw=true" width="400"> <br>
</p>

Is basic image understanding really solved in state-of-the-art VLMs? We present VisualOverload, a slightly different visual question answering (VQA) benchmark comprising 2,720 question–answer pairs with privately held ground-truth responses. Unlike prior VQA datasets that typically focus on near-global image understanding, VisualOverload challenges models to perform simple, knowledge-free visual understanding and reasoning about details in densely populated (or, *overloaded*) scenes. Our dataset consists of high-resolution scans of public-domain paintings that are populated with multiple figures, actions, and unfolding subplots set against elaborately detailed backdrops. Questions were handcrafted to probe for a thorough understanding of the scene.
## 📂 Load the dataset

The easiest way to load the dataset is to use HuggingFace's `datasets`.

```python
from datasets import load_dataset

vol_dataset = load_dataset("paulgavrikov/visualoverload")
```

Each sample contains the following fields (a short usage sketch follows the list):

- `question_id`: Unique identifier of each question.
- `image`: A PIL JPEG image. Most of our images match the total pixel count of 4k (3840x2160 px) in different aspect ratios.
- `question`: A question about the image.
- `question_type`: Type of question. Will be one of `choice` (response expected to be "A", "B", "C", or "D"), `counting` (freeform), or `ocr` (freeform). You can use this information to request a suitable output format.
- `options`: The list of options for `question_type=choice`, empty otherwise. Please treat the options as answer options `A, B, C, D` (4 options) or `A, B` (2 options).
- `difficulty`: Meta-data about the difficulty of the question. One of `easy`, `medium`, or `hard`.
- `category`: Meta-data about the question task. One of `activity`, `attributes`, `counting`, `ocr`, `reasoning`, or `scene`.
- `default_prompt`: You can use this prompt to stay compliant with our results. It is a simple combination of the question and answer options, with some additional output format constraints. This should work well for most models.
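
To make the fields above concrete, here is a minimal sketch (not part of the official tooling) that iterates over a few samples and picks an output format per `question_type`. It only uses the fields documented above; the split name is not hard-coded but taken from whatever `load_dataset` returns.

```python
from datasets import load_dataset

vol_dataset = load_dataset("paulgavrikov/visualoverload")

# Take the first available split rather than assuming its name.
split = list(vol_dataset.keys())[0]

for sample in vol_dataset[split].select(range(3)):  # peek at a few samples
    prompt = sample["default_prompt"]  # question (+ options) with output-format hints

    if sample["question_type"] == "choice":
        expected = "a single option letter"  # "A", "B", "C", or "D"
    else:  # "counting" or "ocr"
        expected = "a short free-form answer"

    print(sample["question_id"], sample["image"].size, expected)
    # `prompt` is what you would feed to your VLM alongside `sample["image"]`.
```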

## 🎯 Evaluate your model

Please see [GitHub](https://github.com/paulgavrikov/visualoverload/) for an example evaluation script that generates a correctly formatted submission JSON.

All of our ground-truth labels are private. The only way to score your submission is to use the [evaluation server](https://huggingface.co/spaces/paulgavrikov/visualoverload-submit). You will need to sign in with a HuggingFace account.

Your predictions should be a list of dictionaries, each containing a `question_id` field and a `response` field. For multiple-choice questions, the `response` field should contain the predicted option letter (A-D). For open-ended questions, it should contain the predicted free-form answer. We will apply simple heuristics to clean the responses, but please ensure they are as accurate as possible.
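
As a rough illustration of that format, the snippet below writes such a file. `generate_response` is a hypothetical placeholder for your own model call, and `predictions.json` is an arbitrary filename.

```python
import json

from datasets import load_dataset

vol_dataset = load_dataset("paulgavrikov/visualoverload")
split = list(vol_dataset.keys())[0]  # check the actual split name(s) of the dataset


def generate_response(sample) -> str:
    # Hypothetical placeholder: run your VLM on sample["image"] with
    # sample["default_prompt"] and return an option letter for "choice"
    # questions or a short free-form answer otherwise.
    raise NotImplementedError


predictions = [
    {"question_id": sample["question_id"], "response": generate_response(sample)}
    for sample in vol_dataset[split]
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)
```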

## 🏆 Submit to the leaderboard
We welcome all submissions to our dataset, whether of a model *or* a method (including prompting-based approaches). Please create a [GitHub issue](https://github.com/paulgavrikov/visualoverload/issues) following the template and include your predictions as JSON.