Commit aaae47e (verified) · Parent: 0b43d98
elichen-skymizer committed

Upload folder using huggingface_hub

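The commit message indicates the folder was pushed with the huggingface_hub client. A minimal sketch of that kind of upload — the repo id and local path below are placeholders and assumptions, not values recorded in this commit:

```python
# Sketch of the kind of upload behind this commit. repo_id and folder_path
# are placeholders/assumptions, not values recorded in the diff.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` / HF_TOKEN
api.upload_folder(
    folder_path="./q4_k_m_mmlu_pro_logs",  # assumed local layout
    path_in_repo="q4_k_m_mmlu_pro_logs",   # mirror the folder name in the repo
    repo_id="<user>/<repo>",               # placeholder
    repo_type="dataset",                   # assumption: eval logs live in a dataset repo
    commit_message="Upload folder using huggingface_hub",
)
```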
.gitattributes CHANGED
@@ -43,3 +43,4 @@ Qwen__Qwen3-30B-A3B-Instruct-2507/samples_mmlu_pro_law_2025-09-22T08-06-06.29649
  q8_0_mmlu_pro_logs/Qwen__Qwen3-30B-A3B-Instruct-2507/samples_mmlu_pro_law_2025-09-22T10-31-04.916749.jsonl filter=lfs diff=lfs merge=lfs -text
  q8_0_mmlu_pro_logs/Qwen__Qwen3-30B-A3B-Instruct-2507/samples_mmlu_pro_math_2025-09-22T10-31-04.916749.jsonl filter=lfs diff=lfs merge=lfs -text
  q8_0_mmlu_pro_logs/Qwen__Qwen3-30B-A3B-Instruct-2507/samples_mmlu_pro_physics_2025-09-22T10-31-04.916749.jsonl filter=lfs diff=lfs merge=lfs -text
+ q4_k_m_mmlu_pro_logs/samples_mmlu_pro_law_2025-09-22T08-06-06.296495.jsonl filter=lfs diff=lfs merge=lfs -text
q4_k_m_mmlu_pro_logs/results_2025-09-22T08-06-06.296495.json ADDED
@@ -0,0 +1,1222 @@
+ {
+ "results": {
+ "mmlu_pro": {
+ "exact_match,custom-extract": 0.23528922872340424,
+ "exact_match_stderr,custom-extract": 0.0036094370940861903,
+ "alias": "mmlu_pro"
+ },
+ "mmlu_pro_biology": {
+ "alias": " - biology",
+ "exact_match,custom-extract": 0.18549511854951187,
+ "exact_match_stderr,custom-extract": 0.014526352452146701
+ },
+ "mmlu_pro_business": {
+ "alias": " - business",
+ "exact_match,custom-extract": 0.2991128010139417,
+ "exact_match_stderr,custom-extract": 0.01631091990746807
+ },
+ "mmlu_pro_chemistry": {
+ "alias": " - chemistry",
+ "exact_match,custom-extract": 0.08215547703180212,
+ "exact_match_stderr,custom-extract": 0.008165288212152419
+ },
+ "mmlu_pro_computer_science": {
+ "alias": " - computer_science",
+ "exact_match,custom-extract": 0.0951219512195122,
+ "exact_match_stderr,custom-extract": 0.014506870947377805
+ },
+ "mmlu_pro_economics": {
+ "alias": " - economics",
+ "exact_match,custom-extract": 0.42890995260663506,
+ "exact_match_stderr,custom-extract": 0.017045964139060617
+ },
+ "mmlu_pro_engineering": {
+ "alias": " - engineering",
+ "exact_match,custom-extract": 0.0784313725490196,
+ "exact_match_stderr,custom-extract": 0.008641140565804537
+ },
+ "mmlu_pro_health": {
+ "alias": " - health",
+ "exact_match,custom-extract": 0.45843520782396086,
+ "exact_match_stderr,custom-extract": 0.01743223873836313
+ },
+ "mmlu_pro_history": {
+ "alias": " - history",
+ "exact_match,custom-extract": 0.27034120734908135,
+ "exact_match_stderr,custom-extract": 0.02278369909884345
+ },
+ "mmlu_pro_law": {
+ "alias": " - law",
+ "exact_match,custom-extract": 0.1807447774750227,
+ "exact_match_stderr,custom-extract": 0.011602354889908736
+ },
+ "mmlu_pro_math": {
+ "alias": " - math",
+ "exact_match,custom-extract": 0.06809770540340489,
+ "exact_match_stderr,custom-extract": 0.006856216855671762
+ },
+ "mmlu_pro_other": {
+ "alias": " - other",
+ "exact_match,custom-extract": 0.35064935064935066,
+ "exact_match_stderr,custom-extract": 0.01570635135721108
+ },
+ "mmlu_pro_philosophy": {
+ "alias": " - philosophy",
+ "exact_match,custom-extract": 0.250501002004008,
+ "exact_match_stderr,custom-extract": 0.019416707602848925
+ },
+ "mmlu_pro_physics": {
+ "alias": " - physics",
+ "exact_match,custom-extract": 0.1724403387220939,
+ "exact_match_stderr,custom-extract": 0.010485321323348036
+ },
+ "mmlu_pro_psychology": {
+ "alias": " - psychology",
+ "exact_match,custom-extract": 0.5639097744360902,
+ "exact_match_stderr,custom-extract": 0.017565633891692446
+ }
+ },
+ "groups": {
+ "mmlu_pro": {
+ "exact_match,custom-extract": 0.23528922872340424,
+ "exact_match_stderr,custom-extract": 0.0036094370940861903,
+ "alias": "mmlu_pro"
+ }
+ },
+ "group_subtasks": {
+ "mmlu_pro": [
+ "mmlu_pro_biology",
+ "mmlu_pro_business",
+ "mmlu_pro_chemistry",
+ "mmlu_pro_computer_science",
+ "mmlu_pro_economics",
+ "mmlu_pro_engineering",
+ "mmlu_pro_health",
+ "mmlu_pro_history",
+ "mmlu_pro_law",
+ "mmlu_pro_math",
+ "mmlu_pro_other",
+ "mmlu_pro_philosophy",
+ "mmlu_pro_physics",
+ "mmlu_pro_psychology"
+ ]
+ },
+ "configs": {
+ "mmlu_pro_biology": {
+ "task": "mmlu_pro_biology",
+ "task_alias": "biology",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d0f2e0>, subject='biology')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0da80>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about biology. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0d760>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_business": {
+ "task": "mmlu_pro_business",
+ "task_alias": "business",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d67380>, subject='business')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d66340>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about business. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d665c0>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_chemistry": {
+ "task": "mmlu_pro_chemistry",
+ "task_alias": "chemistry",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d66a20>, subject='chemistry')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d66ca0>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about chemistry. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d66e80>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_computer_science": {
+ "task": "mmlu_pro_computer_science",
+ "task_alias": "computer_science",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d640e0>, subject='computer science')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d659e0>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about computer science. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d64fe0>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_economics": {
+ "task": "mmlu_pro_economics",
+ "task_alias": "economics",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d653a0>, subject='economics')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d65620>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about economics. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d65800>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_engineering": {
+ "task": "mmlu_pro_engineering",
+ "task_alias": "engineering",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d0cb80>, subject='engineering')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0c9a0>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about engineering. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0c900>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_health": {
+ "task": "mmlu_pro_health",
+ "task_alias": "health",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d0fce0>, subject='health')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0ff60>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about health. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d64180>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_history": {
+ "task": "mmlu_pro_history",
+ "task_alias": "history",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d0df80>, subject='history')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0e480>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about history. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0db20>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_law": {
+ "task": "mmlu_pro_law",
+ "task_alias": "law",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d0e660>, subject='law')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0e8e0>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about law. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0eac0>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_math": {
+ "task": "mmlu_pro_math",
+ "task_alias": "math",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b83ebc900>, subject='math')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0d120>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about math. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0d3a0>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_other": {
+ "task": "mmlu_pro_other",
+ "task_alias": "other",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b82d0cfe0>, subject='other')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0d260>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about other. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b82d0d440>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_philosophy": {
+ "task": "mmlu_pro_philosophy",
+ "task_alias": "philosophy",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b83ebef20>, subject='philosophy')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b83ebf060>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about philosophy. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b83ebf240>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_physics": {
+ "task": "mmlu_pro_physics",
+ "task_alias": "physics",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x779b83ebe2a0>, subject='physics')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b83e9fd80>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about physics. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x779b83ebdbc0>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ },
+ "mmlu_pro_psychology": {
+ "task": "mmlu_pro_psychology",
+ "task_alias": "psychology",
+ "dataset_path": "TIGER-Lab/MMLU-Pro",
+ "test_split": "test",
+ "fewshot_split": "validation",
+ "process_docs": "functools.partial(<function process_docs at 0x77992c20f420>, subject='psychology')",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x77992c20f560>, including_answer=False)",
+ "doc_to_target": "answer",
+ "unsafe_code": false,
+ "description": "The following are multiple choice questions (with answers) about psychology. Think step by step and then finish your answer with \"the answer is (X)\" where X is the correct letter choice.\n",
+ "target_delimiter": " ",
+ "fewshot_delimiter": "\n\n",
+ "fewshot_config": {
+ "sampler": "first_n",
+ "doc_to_text": "functools.partial(<function format_cot_example at 0x77992c20f740>, including_answer=True)",
+ "doc_to_target": ""
+ },
+ "num_fewshot": 5,
+ "metric_list": [
+ {
+ "metric": "exact_match",
+ "aggregation": "mean",
+ "higher_is_better": true,
+ "ignore_case": true,
+ "ignore_punctuation": true
+ }
+ ],
+ "output_type": "generate_until",
+ "generation_kwargs": {
+ "until": [
+ "Question:"
+ ],
+ "max_gen_toks": 16384,
+ "do_sample": true,
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8
+ },
+ "repeats": 1,
+ "filter_list": [
+ {
+ "name": "custom-extract",
+ "filter": [
+ {
+ "function": "regex",
+ "regex_pattern": "answer is \\(?([ABCDEFGHIJ])\\)?"
+ },
+ {
+ "function": "take_first"
+ }
+ ]
+ }
+ ],
+ "should_decontaminate": false,
+ "metadata": {
+ "version": 2.1,
+ "model": "./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf",
+ "pretrained": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "tokenizer": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "enforce_eager": true,
+ "dtype": "bfloat16",
+ "gpu_memory_utilization": 0.9
+ }
+ }
+ },
+ "versions": {
+ "mmlu_pro": 2.0,
+ "mmlu_pro_biology": 2.1,
+ "mmlu_pro_business": 2.1,
+ "mmlu_pro_chemistry": 2.1,
+ "mmlu_pro_computer_science": 2.1,
+ "mmlu_pro_economics": 2.1,
+ "mmlu_pro_engineering": 2.1,
+ "mmlu_pro_health": 2.1,
+ "mmlu_pro_history": 2.1,
+ "mmlu_pro_law": 2.1,
+ "mmlu_pro_math": 2.1,
+ "mmlu_pro_other": 2.1,
+ "mmlu_pro_philosophy": 2.1,
+ "mmlu_pro_physics": 2.1,
+ "mmlu_pro_psychology": 2.1
+ },
+ "n-shot": {
+ "mmlu_pro_biology": 5,
+ "mmlu_pro_business": 5,
+ "mmlu_pro_chemistry": 5,
+ "mmlu_pro_computer_science": 5,
+ "mmlu_pro_economics": 5,
+ "mmlu_pro_engineering": 5,
+ "mmlu_pro_health": 5,
+ "mmlu_pro_history": 5,
+ "mmlu_pro_law": 5,
+ "mmlu_pro_math": 5,
+ "mmlu_pro_other": 5,
+ "mmlu_pro_philosophy": 5,
+ "mmlu_pro_physics": 5,
+ "mmlu_pro_psychology": 5
+ },
+ "higher_is_better": {
+ "mmlu_pro": {
+ "exact_match": true
+ },
+ "mmlu_pro_biology": {
+ "exact_match": true
+ },
+ "mmlu_pro_business": {
+ "exact_match": true
+ },
+ "mmlu_pro_chemistry": {
+ "exact_match": true
+ },
+ "mmlu_pro_computer_science": {
+ "exact_match": true
+ },
+ "mmlu_pro_economics": {
+ "exact_match": true
+ },
+ "mmlu_pro_engineering": {
+ "exact_match": true
+ },
+ "mmlu_pro_health": {
+ "exact_match": true
+ },
+ "mmlu_pro_history": {
+ "exact_match": true
+ },
+ "mmlu_pro_law": {
+ "exact_match": true
+ },
+ "mmlu_pro_math": {
+ "exact_match": true
+ },
+ "mmlu_pro_other": {
+ "exact_match": true
+ },
+ "mmlu_pro_philosophy": {
+ "exact_match": true
+ },
+ "mmlu_pro_physics": {
+ "exact_match": true
+ },
+ "mmlu_pro_psychology": {
+ "exact_match": true
+ }
+ },
+ "n-samples": {
+ "mmlu_pro_biology": {
+ "original": 717,
+ "effective": 717
+ },
+ "mmlu_pro_business": {
+ "original": 789,
+ "effective": 789
+ },
+ "mmlu_pro_chemistry": {
+ "original": 1132,
+ "effective": 1132
+ },
+ "mmlu_pro_computer_science": {
+ "original": 410,
+ "effective": 410
+ },
+ "mmlu_pro_economics": {
+ "original": 844,
+ "effective": 844
+ },
+ "mmlu_pro_engineering": {
+ "original": 969,
+ "effective": 969
+ },
+ "mmlu_pro_health": {
+ "original": 818,
+ "effective": 818
+ },
+ "mmlu_pro_history": {
+ "original": 381,
+ "effective": 381
+ },
+ "mmlu_pro_law": {
+ "original": 1101,
+ "effective": 1101
+ },
+ "mmlu_pro_math": {
+ "original": 1351,
+ "effective": 1351
+ },
+ "mmlu_pro_other": {
+ "original": 924,
+ "effective": 924
+ },
+ "mmlu_pro_philosophy": {
+ "original": 499,
+ "effective": 499
+ },
+ "mmlu_pro_physics": {
+ "original": 1299,
+ "effective": 1299
+ },
+ "mmlu_pro_psychology": {
+ "original": 798,
+ "effective": 798
+ }
+ },
+ "config": {
+ "model": "vllm",
+ "model_args": "model=./models/qwen3-30b-a3b-instruct-2507-q4_k_m.gguf,pretrained=Qwen/Qwen3-30B-A3B-Instruct-2507,tokenizer=Qwen/Qwen3-30B-A3B-Instruct-2507,enforce_eager=true,dtype=bfloat16,gpu_memory_utilization=0.9",
+ "batch_size": "auto:4",
+ "batch_sizes": [],
+ "device": null,
+ "use_cache": null,
+ "limit": null,
+ "bootstrap_iters": 100000,
+ "gen_kwargs": {
+ "temperature": 0.7,
+ "top_k": 20,
+ "top_p": 0.8,
+ "do_sample": true,
+ "max_gen_toks": 16384
+ },
+ "random_seed": 1234,
+ "numpy_seed": 1234,
+ "torch_seed": 1234,
+ "fewshot_seed": 1234
+ },
+ "git_hash": "10b0664",
+ "date": 1758522384.7203937,
+ "pretty_env_info": "PyTorch version: 2.8.0+cu128\nIs debug build: False\nCUDA used to build PyTorch: 12.8\nROCM used to build PyTorch: N/A\n\nOS: Ubuntu 22.04.5 LTS (x86_64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: version 3.22.1\nLibc version: glibc-2.35\n\nPython version: 3.13.7 (main, Sep 18 2025, 19:47:49) [Clang 20.1.4 ] (64-bit runtime)\nPython platform: Linux-6.8.0-58-generic-x86_64-with-glibc2.35\nIs CUDA available: True\nCUDA runtime version: 12.8.93\nCUDA_MODULE_LOADING set to: LAZY\nGPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3\nNvidia driver version: 570.133.20\ncuDNN version: Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0\nHIP runtime version: N/A\nMIOpen runtime version: N/A\nIs XNNPACK available: True\n\nCPU:\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8480C\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 56\nSocket(s): 2\nStepping: 8\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4000.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 5.3 MiB (112 instances)\nL1i cache: 3.5 MiB (112 instances)\nL2 cache: 224 MiB (112 instances)\nL3 cache: 210 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-55,112-167\nNUMA node1 CPU(s): 56-111,168-223\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] Could not collect\n[conda] Could not collect",
+ "transformers_version": "4.56.2",
+ "lm_eval_version": "0.4.9.1",
+ "upper_git_hash": null,
+ "tokenizer_pad_token": [
+ "<|endoftext|>",
+ "151643"
+ ],
+ "tokenizer_eos_token": [
+ "<|im_end|>",
+ "151645"
+ ],
+ "tokenizer_bos_token": [
+ null,
+ "None"
+ ],
+ "eot_token_id": 151645,
+ "max_length": 262144,
+ "task_hashes": {
+ "mmlu_pro_biology": "f90be0074b11859515541d0e12c44e2724f495f0399d7bc6199a1c59db6c3014",
+ "mmlu_pro_business": "cc3917e29d9fa7415367c3ef4fbe1ace364c93af6d59150e751f893f3006e9df",
+ "mmlu_pro_chemistry": "825b2c16ff04c95e7f534940fe113151d4b057149a0e46da8642d73194330370",
+ "mmlu_pro_computer_science": "94b0e5dd72e037ea160cd678365f4ce3e6537c6ba90fff6d96ebbd98e88b8e31",
+ "mmlu_pro_economics": "874cb2a0108fec79d6d962bb209c27510dfc6fbb77ab18ac56cb6c1be52b00d3",
+ "mmlu_pro_engineering": "2e57cb5ba0ba99fcb5d40f8e7ccd0510d29dfa1a091e236d9ce1003b6aa7d326",
+ "mmlu_pro_health": "af2f784fb543cb5757a0be1116da37149397492e80e263f5afdb4d190987aaa4",
+ "mmlu_pro_history": "a142960ca573545fb4f9347aadc582eedf2f6dbc5a4e97fe40786c80772db982",
+ "mmlu_pro_law": "dbb995dc09f851078e2d0a445f949e858e5591d2c23d62826feee21491b000b2",
+ "mmlu_pro_math": "b6a29c12149fa28a82e9b1b0a30bd982d4c7e6cf6e928e71d45c7f5f74e957cc",
+ "mmlu_pro_other": "27c6b984cc15097e41662c571ac1b9206d89e621de5ca36ffd3c4698907ba918",
+ "mmlu_pro_philosophy": "0952e2a617b811c9e5bdaf317547f5cec1b8f8acee5d852072232d50554a94f3",
+ "mmlu_pro_physics": "921c3a192bbc5997f717e85dbbeec53057f137ae6230feb2cccc0602c728a01e",
+ "mmlu_pro_psychology": "3a676cd0d16e989b636f0ddbb23918eeec2a8f450c27998501aeb95364eb7a0a"
+ },
+ "model_source": "vllm",
+ "model_name": "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ "model_name_sanitized": "Qwen__Qwen3-30B-A3B-Instruct-2507",
+ "system_instruction": null,
+ "system_instruction_sha": null,
+ "fewshot_as_multiturn": true,
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if message.content is string %}\n {%- set content = message.content %}\n {%- else %}\n {%- set content = '' %}\n {%- endif %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}",
+ "chat_template_sha": "64f85b198065d0fba2a81f37e10ed68161ce2c19a754c7100e67e0ca2ee9c326",
+ "start_time": 12653743.526893998,
+ "end_time": 12659789.728123028,
+ "total_evaluation_time_seconds": "6046.201229030266"
+ }
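The aggregate and per-subject scores in this file all live under the "results" key; a minimal sketch for summarizing them, assuming the JSON above has been downloaded to the same relative path:

```python
# Minimal sketch: print the aggregate and per-subject exact-match scores
# from the results file added above (path assumes a local copy of this repo).
import json

with open("q4_k_m_mmlu_pro_logs/results_2025-09-22T08-06-06.296495.json") as f:
    report = json.load(f)

for task, metrics in sorted(report["results"].items()):
    score = metrics["exact_match,custom-extract"]
    stderr = metrics["exact_match_stderr,custom-extract"]
    print(f"{task:28s} {score:.4f} ± {stderr:.4f}")
```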
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_biology_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_business_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_chemistry_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_computer_science_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_economics_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_engineering_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_health_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_history_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_law_2025-09-22T08-06-06.296495.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:927749a3116cd319560f056befdf475f0d19248fbfc8efa8bee9567d87ab625b
+ size 16542146
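The three lines above are the Git LFS pointer stored in place of the ~16.5 MB samples file; to work with the actual JSONL, resolve it through the Hub rather than reading the pointer. A minimal sketch with huggingface_hub — the repo id is a placeholder, since the diff does not name the repository:

```python
# Sketch: download the LFS-backed JSONL that this pointer stands in for.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="<user>/<repo>",  # placeholder: the repository id is not shown in this diff
    repo_type="dataset",      # assumption: eval logs are stored in a dataset repo
    filename="q4_k_m_mmlu_pro_logs/samples_mmlu_pro_law_2025-09-22T08-06-06.296495.jsonl",
)
print(local_path)  # local cache path; 16,542,146 bytes per the pointer's size field
```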
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_math_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_other_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_philosophy_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_physics_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
q4_k_m_mmlu_pro_logs/samples_mmlu_pro_psychology_2025-09-22T08-06-06.296495.jsonl ADDED
The diff for this file is too large to render. See raw diff