Update README.md

README.md CHANGED

````diff
@@ -1,5 +1,6 @@
 ---
 license: mit
+arxiv: 2503.11557
 task_categories:
 - image-to-text
 language:
@@ -46,6 +47,10 @@ By introducing novel evaluation metrics that go beyond mere accuracy, VERIFY hig
 
 Details of the benchmark can be viewed at the [VERIFY project page](https://proj-verify.jing.vision/).
 
+🔔 **Teaser:**
+This teaser is provided for interested users: simply copy and paste the image to quickly try out an advanced model such as O1 or Gemini.
+
+
 ## Usage
 
 
@@ -64,9 +69,26 @@ print("Reasoning:", example['reasoning'])
 print("Answer:", example['answer'])
 ```
 
+---
+
+## Contact
+
+For any questions or further information, please contact:
+
+- **Jing Bi** – [[email protected]](mailto:[email protected])
+
+---
 
-
+## Citation
 
+If you find this work useful in your research, please consider citing our paper:
 
-
-
+```bibtex
+@misc{bi2025verify,
+      title={VERIFY: A Benchmark of Visual Explanation and Reasoning for Investigating Multimodal Reasoning Fidelity},
+      author={Jing Bi and Junjia Guo and Susan Liang and Guangyu Sun and Luchuan Song and Yunlong Tang and Jinxi He and Jiarui Wu and Ali Vosoughi and Chen Chen and Chenliang Xu},
+      year={2025},
+      eprint={2503.11557},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV}
+}
````
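
For readers trying the dataset, the Usage section touched by the last hunk reads `example['reasoning']` and `example['answer']` from loaded examples. A minimal sketch of that flow with the Hugging Face `datasets` library is shown below; the repository id `jing-bi/VERIFY` and the `train` split are assumptions, so substitute the dataset id and split shown on this page.

```python
from datasets import load_dataset

# Minimal sketch of the Usage flow referenced in the diff above.
# NOTE: the repo id and split are assumptions -- replace them with the
# actual dataset id and split listed on this dataset page.
dataset = load_dataset("jing-bi/VERIFY", split="train")

# Each example carries the fields printed in the README's Usage snippet.
example = dataset[0]
print("Reasoning:", example["reasoning"])
print("Answer:", example["answer"])
```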