Update README.md
README.md CHANGED
@@ -934,43 +934,50 @@ Official benchmark : [Github TMMMU-Benchmark/evaluation](https://github.com/TMMMU-Benchmark/evaluation)
Arxiv : [VisTW: Benchmarking Vision-Language Models for Traditional Chinese in Taiwan](https://arxiv.org/abs/2503.10427v2)

-(previous leaderboard table removed; only its "Model" column header and an InternVL2 entry are recoverable from this view)
+| Model | VisTW-MCQ Accuracy | VisTW-MCQ Rank | VisTW-Dialogue Score | VisTW-Dialogue Rank | Avg Rank |
+| --- | ---: | ---: | ---: | ---: | ---: |
+| quasar-alpha | 0.6782 | 1 | 6.2733 | 5 | 3.0 |
+| gemini-2.0-pro-exp-02-05 | 0.6619 | 2 | 6.7237 | 1 | 1.5 |
+| gemini-2.0-flash-001 | 0.6596 | 3 | 6.6451 | 2 | 2.5 |
+| llama-4-maverick | 0.6529 | 4 | 4.884 | 11 | 7.5 |
+| claude-3-5-sonnet-20241022 | 0.6019 | 5 | 5.9603 | 8 | 6.5 |
+| gpt-4o-2024-11-20 | 0.5755 | 6 | 6.1176 | 6 | 6.0 |
+| qwen2.5-vl-72b-instruct | 0.5504 | 7 | 4.8656 | 12 | 9.5 |
+| llama-4-scout | 0.5292 | 8 | 4.0943 | 19 | 13.5 |
+| gemini-2.0-flash-lite-preview-02-05 | 0.4992 | 9 | 6.4159 | 4 | 6.5 |
+| qwen2.5-vl-32b-instruct | 0.4935 | 10 | 5.5027 | 9 | 9.5 |
+| gemma-3-12b-it | 0.4863 | 11 | 3.9403 | 20 | 15.5 |
+| mistral-small-3.1-24b-instruct-2503 | 0.459 | 12 | 4.3298 | 15 | 13.5 |
+| gemini-1.5-pro | 0.4417 | 13 | 5.0504 | 10 | 11.5 |
+| meta-llama-Llama-3.2-90B-Vision-Instruct-Turbo | 0.4119 | 14 | 3.4443 | 27 | 20.5 |
+| qvq-72b-preview | 0.4094 | 15 | 3.6122 | 24 | 19.5 |
+| gpt-4o-mini-2024-07-18 | 0.4091 | 16 | 4.7405 | 13 | 14.5 |
+| gpt-4o-2024-08-06 | 0.4 | 17 | 5.9756 | 7 | 12.0 |
+| gemini-1.5-flash | 0.3943 | 18 | 4.2611 | 16 | 17.0 |
+| gemini-2.0-flash-thinking-exp-1219 | 0.3764 | 19 | 6.5053 | 3 | 11.0 |
+| Qwen-Qwen2.5-VL-7B-Instruct | 0.3592 | 20 | 4.542 | 14 | 17.0 |
+| OpenGVLab-InternVL2-8B-MPO | 0.3533 | 21 | 3.6778 | 23 | 22.0 |
+| OpenGVLab-InternVL2_5-8B | 0.3447 | 22 | 3.9008 | 21 | 21.5 |
+| OpenGVLab-InternVL2-8B | 0.3431 | 23 | 3.4504 | 26 | 24.5 |
+| nova-lite-v1 | 0.3377 | 24 | 3.2626 | 28 | 26.0 |
+| claude-3-haiku-20240307 | 0.3291 | 25 | 3.6992 | 22 | 23.5 |
+| OpenGVLab-InternVL2_5-4B | 0.3291 | 26 | 3.6031 | 25 | 25.5 |
+| gemini-1.5-flash-8b | 0.328 | 27 | 4.1771 | 18 | 22.5 |
+| meta-llama-Llama-3.2-11B-Vision-Instruct-Turbo | 0.3262 | 28 | 2.5786 | 33 | 30.5 |
+| deepseek-ai-deepseek-vl2-small | 0.3181 | 29 | 0.5084 | 39 | 34.0 |
+| OpenGVLab-InternVL2-4B | 0.3081 | 30 | 2.3069 | 34 | 32.0 |
+| llama3.2-ffm-11b-v-32k-chat | 0.3037 | 31 | 3.115 | 30 | 30.5 |
+| Qwen-Qwen2-VL-7B-Instruct | 0.3004 | 32 | 4.2122 | 17 | 24.5 |
+| MediaTek-Research-Llama-Breeze2-3B-Instruct | 0.2971 | 33 | 2.8992 | 32 | 32.5 |
+| MediaTek-Research-Llama-Breeze2-8B-Instruct | 0.2915 | 34 | 3.1374 | 29 | 31.5 |
+| OpenGVLab-InternVL2-2B | 0.2891 | 35 | 2.2198 | 35 | 35.0 |
+| phi-4-multimodal-instruct | 0.286 | 36 | 1.7863 | 38 | 37.0 |
+| deepseek-ai-deepseek-vl2-tiny | 0.2781 | 37 | 2.0076 | 37 | 37.0 |
+| THUDM-cogvlm2-llama3-chinese-chat-19B | 0.2777 | 38 | 2.9618 | 31 | 34.5 |
+| OpenGVLab-InternVL2-1B | 0.2689 | 39 | 2.1298 | 36 | 37.5 |

+*Models ordered by VisTW-MCQ Rank.*
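
For reference, the Avg Rank column is simply the arithmetic mean of the VisTW-MCQ and VisTW-Dialogue ranks. A minimal sketch of that arithmetic (illustrative Python, not part of the benchmark code; the tuple layout is an assumption):

```python
# Illustrative check of the leaderboard arithmetic (hypothetical snippet,
# not part of the VisTW benchmark repo): "Avg Rank" is the mean of the
# VisTW-MCQ and VisTW-Dialogue ranks.
rows = [
    # (model, mcq_accuracy, mcq_rank, dialogue_score, dialogue_rank)
    ("quasar-alpha", 0.6782, 1, 6.2733, 5),
    ("gemini-2.0-pro-exp-02-05", 0.6619, 2, 6.7237, 1),
    ("gemini-2.0-flash-001", 0.6596, 3, 6.6451, 2),
]

for model, _acc, mcq_rank, _score, dlg_rank in rows:
    avg_rank = (mcq_rank + dlg_rank) / 2  # quasar-alpha: (1 + 5) / 2 = 3.0
    print(f"{model}: Avg Rank = {avg_rank}")

# Rows in the table are sorted by VisTW-MCQ Rank (ascending):
assert [r[2] for r in rows] == sorted(r[2] for r in rows)
```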
## Citation