VisTW-MCQ is composed of past examination questions from various educational levels.

Our benchmark dataset was constructed using real-world exam papers collected from publicly available sources spanning the years 2013 to 2024. We selected subjects that specifically require visual comprehension, such as medical diagnostics (e.g., interpreting X-ray and ultrasound images), geometry, electronic circuit design, and chemistry. For each question, we also collected human performance data, including average accuracy scores, to better gauge question difficulty and establish a human performance baseline (a minimal usage sketch follows the leaderboard header below).

Official benchmark: [Github TMMMU-Benchmark/evaluation](https://github.com/TMMMU-Benchmark/evaluation)

arXiv: [VisTW: Benchmarking Vision-Language Models for Traditional Chinese in Taiwan](https://arxiv.org/abs/2503.10427v2)

| Model | VisTW-MCQ Accuracy | VisTW-MCQ Rank | VisTW-Dialogue Score | VisTW-Dialogue Rank |
| --- | --- | --- | --- | --- |
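To illustrate how the per-question human accuracy scores can be used, here is a minimal sketch that buckets questions by difficulty and compares a model against the human baseline. All field names (`question_id`, `subject`, `human_accuracy`, `model_correct`) and the sample values are illustrative assumptions, not the dataset's published schema.

```python
from statistics import mean

# Hypothetical MCQ records: field names and values are illustrative
# assumptions, not the dataset's published schema. `human_accuracy` is
# the average accuracy of human test-takers on that question (0.0-1.0).
records = [
    {"question_id": "med-001", "subject": "medical",  "human_accuracy": 0.42, "model_correct": True},
    {"question_id": "geo-017", "subject": "geometry", "human_accuracy": 0.81, "model_correct": True},
    {"question_id": "cir-005", "subject": "circuits", "human_accuracy": 0.35, "model_correct": False},
]

def difficulty(human_accuracy: float) -> str:
    """Bucket a question by how often humans answered it correctly."""
    if human_accuracy < 0.4:
        return "hard"
    if human_accuracy < 0.7:
        return "medium"
    return "easy"

# Overall comparison: mean human accuracy vs. the model's accuracy.
# (bool is an int subclass in Python, so mean() over correctness works.)
human_baseline = mean(r["human_accuracy"] for r in records)
model_accuracy = mean(r["model_correct"] for r in records)
print(f"human baseline: {human_baseline:.2f}  model: {model_accuracy:.2f}")

# Per-difficulty breakdown, using the human scores as the difficulty signal.
for bucket in ("easy", "medium", "hard"):
    subset = [r for r in records if difficulty(r["human_accuracy"]) == bucket]
    if subset:
        acc = mean(r["model_correct"] for r in subset)
        print(f"{bucket}: model accuracy {acc:.2f} over {len(subset)} question(s)")
```

The difficulty thresholds (0.4 and 0.7) are arbitrary choices for the sketch; the point is that the collected human accuracy gives a per-question difficulty signal rather than only a single aggregate baseline.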