Update README.md
---

# VisualOverload

<p align="center">
  <img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/logo.jpg?raw=true" width="400">
</p>

Is basic image understanding really solved in state-of-the-art VLMs? We present VisualOverload, a slightly different visual question answering (VQA) benchmark comprising 2,720 question–answer pairs with privately held ground-truth responses. Unlike prior VQA datasets, which typically focus on near-global image understanding, VisualOverload challenges models to perform simple, knowledge-free visual understanding and reasoning about details in densely populated (or, *overloaded*) scenes. Our dataset consists of high-resolution scans of public-domain paintings populated with multiple figures, actions, and unfolding subplots set against elaborately detailed backdrops. Questions were handcrafted to probe for a thorough understanding of the scene.

## 📂 Load the dataset

The easiest way to load the dataset is to use HuggingFace's `datasets`.
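A minimal sketch of what that can look like is below; the Hub identifier and split name are assumptions inferred from the GitHub repository name, so check this dataset card's header for the exact values.

```python
from datasets import load_dataset

# Assumed Hub path and split name; verify against the dataset card.
ds = load_dataset("paulgavrikov/visualoverload", split="test")

print(len(ds))  # expected: 2,720 question-answer pairs (ground-truth answers are held privately)
print(ds[0])    # one entry: a high-resolution painting scan plus its handcrafted question
```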