hezheqi committed
Commit · 4e436fb
1 Parent(s): 48dbe22
Update readme
- README.md +9 -71
- hf_cmmu_dataset.py +46 -0
README.md CHANGED
@@ -33,77 +33,15 @@ We currently evaluated 10 models on CMMU. The results are shown in the following
| Qwen-VL-Plus | 26.77 | 26.9 |
| GPT-4V | 30.19 | 30.91 |
-## How to use
-
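A minimal sketch of what loading the dataset might look like (the module path, constructor arguments, and iteration interface below are assumptions inferred from the surrounding text, not the repository's documented API):

```python
# Hypothetical usage sketch: module name, constructor arguments, and the
# iteration interface are assumptions, not the repository's exact API.
from cmmu_dataset import CmmuDataset

dataset = CmmuDataset(data_root="your_path_to_cmmu_dataset", split="val", shift_check=True)
for question in dataset:
    print(question["id"], question["type"], question["question_info"])
```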
-**About fill-in-the-blank questions**
-
-For fill-in-the-blank questions, `CmmuDataset` will generate new questions from the `sub_questions` field. For example:
-
-The original question is:
-```python
-{
-    "type": "fill-in-the-blank",
-    "question_info": "question",
-    "id": "subject_1234",
-    "sub_questions": ["sub_question_0", "sub_question_1"],
-    "answer": ["answer_0", "answer_1"]
-}
-```
-The converted questions are:
-```python
-[
-    {
-        "type": "fill-in-the-blank",
-        "question_info": "question" + "sub_question_0",
-        "id": "subject_1234-0",
-        "answer": "answer_0"
-    },
-    {
-        "type": "fill-in-the-blank",
-        "question_info": "question" + "sub_question_1",
-        "id": "subject_1234-1",
-        "answer": "answer_1"
-    }
-]
-```
-
-**About ShiftCheck**
-
-The parameter `shift_check` is `True` by default; you can find more information about `shift_check` in our technical report.
-
-`CmmuDataset` will generate k new questions via `shift_check`, and their ids are `{original_id}-k`.
-
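As an illustration (the precise rule is defined in the technical report, so the option rotation assumed here is only a guess), a four-option multiple-choice question could be expanded into variants whose ids follow the `{original_id}-k` pattern:

```python
# Illustration only: assumes shift_check rotates the option order so the
# correct answer sits at a different position in each generated variant.
# All field values below are placeholders, not actual dataset records.
original = {
    "type": "multiple-choice",
    "id": "subject_5678",
    "options": ["option_0", "option_1", "option_2", "option_3"],
    "answer": "A",
}
# With shift_check=True, the generated variants would carry ids such as:
generated_ids = ["subject_5678-0", "subject_5678-1", "subject_5678-2", "subject_5678-3"]
```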
-## Evaluate
-
-The output format should be a list of JSON dictionaries; the required keys are as follows:
-```python
-{
-    "question_id": "question id",
-    "answer": "answer"
-}
-```
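A results file in this format can be written with a few lines of Python (the file name and answer values below are placeholders):

```python
import json

# Placeholder predictions; each "question_id" must match a dataset id,
# including the "-k" suffixes produced for sub-questions and shift_check.
predictions = [
    {"question_id": "subject_1234-0", "answer": "answer_0"},
    {"question_id": "subject_1234-1", "answer": "answer_1"},
]

with open("your_pred_file.json", "w", encoding="utf-8") as f:
    json.dump(predictions, f, ensure_ascii=False, indent=2)
```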
-
-The current code calls the GPT-4 API via `AzureOpenAI`; you may need to modify `eval/chat_llm.py` to create your own client. Before running the evaluation, set environment variables such as `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT`.
-
-Run
-```shell
-python eval/evaluate.py --result your_pred_file --data_root your_path_to_cmmu_dataset
-```
-
-**NOTE** We evaluate fill-in-the-blank questions using GPT-4 by default. If you do not have access to GPT-4, you can attempt to use a rule-based method to fill in the blanks. However, be aware that the results might differ from the official ones.
-```shell
-python eval/evaluate.py --result your_pred_file --data_root your_path_to_cmmu_dataset --gpt none
-```
-
-To evaluate a specific type of questions, you can use the `--qtype` parameter, for example:
-```shell
-python eval/evaluate.py --result example/gpt4v_results_val.json --data_root your_path_to_cmmu_dataset --qtype fbq mrq
-```
-
-## Citation
+## Citation
+**BibTeX:**
+```bibtex
+@article{he2024cmmu,
+  title={CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning},
+  author={Zheqi He and Xinya Wu and Pengfei Zhou and Richeng Xuan and Guang Liu and Xi Yang and Qiannan Zhu and Hua Huang},
+  journal={arXiv preprint},
+  year={2024},
+}
```
hf_cmmu_dataset.py ADDED
@@ -0,0 +1,46 @@
+from datasets import DatasetBuilder, Features, Value, Sequence, DatasetInfo
+import json
+import os.path as osp
+
+class HfCmmuDataset(DatasetBuilder):
+    def _info(self):
+        return DatasetInfo(
+            features=Features({
+                "type": Value("string"),
+                "class": Value("string"),
+                "grade_band": Value("string"),
+                "difficulty": Value("string"),
+                "answer": Value("string"),
+                "question_info": Value("string"),
+                "solution_info": Value("string"),
+                "options": Sequence(Value("string")),
+                "sub_questions": Sequence(Value("string")),
+                "id": Value("string"),
+                "images": Sequence(Value("string")),
+                "split": Value("string")
+            }),
+        )
+
+    def _split_generators(self, dl_manager):
+        # Define the splits here if needed
+        return []
+
+    def _generate_examples(self):
+        # Here, you'll need to write code to read and yield each example
+        # from your jsonl files.
+        jsonl_files = [
+            "biology.jsonl",
+            "chemistry.jsonl",
+            "geography.jsonl",
+            "history.jsonl",
+            "math.jsonl",
+            "physics.jsonl",
+            "politics.jsonl"
+        ]
+        for file_path in jsonl_files:  # Add all your jsonl files here
+            with open(osp.join('val', file_path), "r", encoding="utf-8") as f:
+                for idx, line in enumerate(f):
+                    record = json.loads(line)
+                    yield idx, record
+
+dataset = HfCmmuDataset()
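For quick inspection, the same validation files can also be loaded without this builder script via the generic JSON loader of the `datasets` library (a sketch assuming the `val/` layout and file names used above):

```python
from datasets import load_dataset

# Sketch: load the validation jsonl files listed in the builder above.
data_files = {
    "val": [
        "val/biology.jsonl", "val/chemistry.jsonl", "val/geography.jsonl",
        "val/history.jsonl", "val/math.jsonl", "val/physics.jsonl",
        "val/politics.jsonl",
    ]
}
cmmu_val = load_dataset("json", data_files=data_files, split="val")
print(cmmu_val[0]["id"], cmmu_val[0]["type"])
```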