---
language:
- en
---
## 🚀 Overview

This is the repository for **ViDoSeek**, a benchmark designed for retrieval, reasoning, and answering over visually rich documents, and well suited for evaluating RAG over a large document corpus. The paper is available at [https://arxiv.org/abs/2502.18017](https://arxiv.org/abs/2502.18017).

**ViDoSeek** sets itself apart with its heightened difficulty level, attributed to the multi-document context and the intricate nature of its content types, particularly the Layout category. The dataset contains both single-hop and multi-hop queries, presenting a diverse set of challenges.
We have also released **SlideVQA-Refined**, a version of the SlideVQA dataset refined through our pipeline; it is likewise suitable for evaluating retrieval-augmented generation tasks.
## 🔍 Dataset Format

The annotation is in the form of a JSON file.
```json
{
    ...
}
```
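Once you have a local copy of the annotation file, it can be read with the standard `json` module. Below is a minimal sketch of the call pattern; the filename and the `examples`/`uid`/`query` field names are illustrative placeholders, not the dataset's actual schema — check the released file for the real keys.

```python
import json

def load_annotations(path):
    """Parse the annotation JSON file into a Python dict."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Round-trip a toy annotation file to demonstrate the call pattern.
# (Field names here are placeholders, not the dataset's schema.)
sample = {"examples": [{"uid": "demo-0", "query": "What does slide 3 show?"}]}
with open("vidoseek_sample.json", "w", encoding="utf-8") as f:
    json.dump(sample, f)

data = load_annotations("vidoseek_sample.json")
print(len(data["examples"]))  # → 1
```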
## 📝 Citation
If you find this dataset useful, please consider citing our paper:
```bibtex
@misc{wang2025vidoragvisualdocumentretrievalaugmented,