theblackcat102 committed
Commit 1c45b57 · verified · 1 Parent(s): 22cfdcb

Update README.md

Files changed (1):
  1. README.md +42 -35

README.md CHANGED
@@ -934,43 +934,50 @@ Official benchmark : [Github TMMMU-Benchmark/evaluation](https://github.com/TMMMU-Benchmark/evaluation)
 
 Arxiv : [VisTW: Benchmarking Vision-Language Models for Traditional Chinese in Taiwan](https://arxiv.org/abs/2503.10427v2)
 
- | Model | VisTW-MCQ Accuracy | VisTW-MCQ Rank | VisTW-Dialogue Score | VisTW-Dialogue Rank |
- |------------------------------------|--------------------|----------------|-----------------------|---------------------|
- | Gemini-2.0-pro-exp-02-05 | 0.6619 | 1 | 6.72 | 1 |
- | Gemini-2.0-flash-001 | 0.6596 | 2 | 6.15 | 3 |
- | Claude-3-5-sonnet-20241022 | 0.6019 | 3 | 5.96 | 6 |
- | gpt-4o-2024-11-20 | 0.5755 | 4 | 6.12 | 4 |
- | Qwen2.5-VL-72B-instruct | 0.5413 | 5 | 4.87 | 9 |
- | Gemini-2.0-flash-lite-preview-02-05| 0.4992 | 6 | 5.92 | 7 |
- | Qwen2-VL-72B-instruct | 0.4701 | 7 | 4.21 | 14 |
- | Mistral-Small-3.1-24B | 0.4590 | 8 | 4.33 | 12 |
- | Gemini-1.5-pro | 0.4417 | 9 | 5.05 | 8 |
- | Gemma3-27b-it | 0.4375 | 10 | 3.94 | 17 |
- | Llama-3.2-90B-Vision-Instruct | 0.4119 | 11 | 3.44 | 22 |
- | gpt-4o-mini-2024-07-18 | 0.4091 | 12 | 4.74 | 10 |
- | gpt-4o-2024-08-06 | 0.4000 | 13 | 5.98 | 5 |
- | Gemini-1.5-flash | 0.3943 | 14 | 4.26 | 13 |
- | Gemini-2.0-flash-thinking-exp-1219 | 0.3764 | 15 | 6.51 | 2 |
- | Qwen2.5-VL-7B-Instruct | 0.3592 | 16 | 4.54 | 11 |
- | InternVL2.5-8B | 0.3447 | 17 | 3.90 | 18 |
- | InternVL2-8B | 0.3431 | 18 | 3.45 | 21 |
- | Nova-lite-v1 | 0.3376 | 19 | 3.26 | 23 |
- | Claude-3-haiku-20240307 | 0.3291 | 20 | 3.70 | 19 |
- | InternVL2.5-4B | 0.3291 | 21 | 3.60 | 20 |
- | Gemini-1.5-flash-8B | 0.3280 | 22 | 4.18 | 16 |
- | Llama-3.2-11B-Vision-Instruct | 0.3262 | 23 | 2.58 | 27 |
- | Deepseek-vl2-small | 0.3181 | 24 | 0.51 | 32 |
- | InternVL2-4B | 0.3081 | 25 | 2.31 | 28 |
- | Qwen2-VL-7B-Instruct | 0.3004 | 26 | 4.21 | 14 |
- | Breeze2-3B-Instruct | 0.2971 | 27 | 2.90 | 26 |
- | Breeze2-8B-Instruct | 0.2915 | 28 | 3.14 | 24 |
- | InternVL2-2B | 0.2891 | 29 | 2.22 | 29 |
- | CogVLM2-llama3-chinese-chat | 0.2777 | 30 | 2.96 | 25 |
- | deepseek-vl2-tiny | 0.2781 | 31 | 2.01 | 31 |
- | InternVL2-1B | 0.2689 | 32 | 2.13 | 30 |
-
- *Models ordered by VisTW-MCQ Rank.*
 
+ | Model | VisTW-MCQ Accuracy | VisTW-MCQ Rank | VisTW-Dialogue Score | VisTW-Dialogue Rank | Avg Rank |
+ | --- | ---: | ---: | ---: | ---: | ---: |
+ | quasar-alpha | 0.6782 | 1 | 6.2733 | 5 | 3.0 |
+ | gemini-2.0-pro-exp-02-05 | 0.6619 | 2 | 6.7237 | 1 | 1.5 |
+ | gemini-2.0-flash-001 | 0.6596 | 3 | 6.6451 | 2 | 2.5 |
+ | llama-4-maverick | 0.6529 | 4 | 4.8840 | 11 | 7.5 |
+ | claude-3-5-sonnet-20241022 | 0.6019 | 5 | 5.9603 | 8 | 6.5 |
+ | gpt-4o-2024-11-20 | 0.5755 | 6 | 6.1176 | 6 | 6.0 |
+ | qwen2.5-vl-72b-instruct | 0.5504 | 7 | 4.8656 | 12 | 9.5 |
+ | llama-4-scout | 0.5292 | 8 | 4.0943 | 19 | 13.5 |
+ | gemini-2.0-flash-lite-preview-02-05 | 0.4992 | 9 | 6.4159 | 4 | 6.5 |
+ | qwen2.5-vl-32b-instruct | 0.4935 | 10 | 5.5027 | 9 | 9.5 |
+ | gemma-3-12b-it | 0.4863 | 11 | 3.9403 | 20 | 15.5 |
+ | mistral-small-3.1-24b-instruct-2503 | 0.4590 | 12 | 4.3298 | 15 | 13.5 |
+ | gemini-1.5-pro | 0.4417 | 13 | 5.0504 | 10 | 11.5 |
+ | meta-llama-Llama-3.2-90B-Vision-Instruct-Turbo | 0.4119 | 14 | 3.4443 | 27 | 20.5 |
+ | qvq-72b-preview | 0.4094 | 15 | 3.6122 | 24 | 19.5 |
+ | gpt-4o-mini-2024-07-18 | 0.4091 | 16 | 4.7405 | 13 | 14.5 |
+ | gpt-4o-2024-08-06 | 0.4000 | 17 | 5.9756 | 7 | 12.0 |
+ | gemini-1.5-flash | 0.3943 | 18 | 4.2611 | 16 | 17.0 |
+ | gemini-2.0-flash-thinking-exp-1219 | 0.3764 | 19 | 6.5053 | 3 | 11.0 |
+ | Qwen-Qwen2.5-VL-7B-Instruct | 0.3592 | 20 | 4.5420 | 14 | 17.0 |
+ | OpenGVLab-InternVL2-8B-MPO | 0.3533 | 21 | 3.6778 | 23 | 22.0 |
+ | OpenGVLab-InternVL2_5-8B | 0.3447 | 22 | 3.9008 | 21 | 21.5 |
+ | OpenGVLab-InternVL2-8B | 0.3431 | 23 | 3.4504 | 26 | 24.5 |
+ | nova-lite-v1 | 0.3377 | 24 | 3.2626 | 28 | 26.0 |
+ | claude-3-haiku-20240307 | 0.3291 | 25 | 3.6992 | 22 | 23.5 |
+ | OpenGVLab-InternVL2_5-4B | 0.3291 | 26 | 3.6031 | 25 | 25.5 |
+ | gemini-1.5-flash-8b | 0.3280 | 27 | 4.1771 | 18 | 22.5 |
+ | meta-llama-Llama-3.2-11B-Vision-Instruct-Turbo | 0.3262 | 28 | 2.5786 | 33 | 30.5 |
+ | deepseek-ai-deepseek-vl2-small | 0.3181 | 29 | 0.5084 | 39 | 34.0 |
+ | OpenGVLab-InternVL2-4B | 0.3081 | 30 | 2.3069 | 34 | 32.0 |
+ | llama3.2-ffm-11b-v-32k-chat | 0.3037 | 31 | 3.1150 | 30 | 30.5 |
+ | Qwen-Qwen2-VL-7B-Instruct | 0.3004 | 32 | 4.2122 | 17 | 24.5 |
+ | MediaTek-Research-Llama-Breeze2-3B-Instruct | 0.2971 | 33 | 2.8992 | 32 | 32.5 |
+ | MediaTek-Research-Llama-Breeze2-8B-Instruct | 0.2915 | 34 | 3.1374 | 29 | 31.5 |
+ | OpenGVLab-InternVL2-2B | 0.2891 | 35 | 2.2198 | 35 | 35.0 |
+ | phi-4-multimodal-instruct | 0.2860 | 36 | 1.7863 | 38 | 37.0 |
+ | deepseek-ai-deepseek-vl2-tiny | 0.2781 | 37 | 2.0076 | 37 | 37.0 |
+ | THUDM-cogvlm2-llama3-chinese-chat-19B | 0.2777 | 38 | 2.9618 | 31 | 34.5 |
+ | OpenGVLab-InternVL2-1B | 0.2689 | 39 | 2.1298 | 36 | 37.5 |
+
+ *Models ordered by VisTW-MCQ Rank.*
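
The updated table adds an **Avg Rank** column while keeping rows sorted by VisTW-MCQ rank. As a minimal illustrative sketch (not part of the official VisTW evaluation code; the column names and pandas usage below are this note's own assumptions), Avg Rank can be reproduced as the simple mean of the two per-track ranks:

```python
# Illustrative sketch only: rebuild the "Avg Rank" column from the two
# per-track ranks and keep the leaderboard ordered by VisTW-MCQ rank.
# The rows below copy the first three entries of the table above.
import pandas as pd

rows = [
    # model, MCQ accuracy, MCQ rank, dialogue score, dialogue rank
    ("quasar-alpha",             0.6782, 1, 6.2733, 5),
    ("gemini-2.0-pro-exp-02-05", 0.6619, 2, 6.7237, 1),
    ("gemini-2.0-flash-001",     0.6596, 3, 6.6451, 2),
]
df = pd.DataFrame(rows, columns=["model", "mcq_acc", "mcq_rank", "dlg_score", "dlg_rank"])

# Avg Rank = mean of the two ranks, e.g. (1 + 5) / 2 = 3.0 for quasar-alpha.
df["avg_rank"] = (df["mcq_rank"] + df["dlg_rank"]) / 2

# The published table is ordered by VisTW-MCQ rank, not by Avg Rank.
df = df.sort_values("mcq_rank")
print(df.to_string(index=False))
```
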
  ## Citation