lbourdois committed
Commit 3e2a818 · verified · 1 Parent(s): 3aea550

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the `language` tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.
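For anyone who wants to verify the metadata locally, here is a minimal sketch using the `huggingface_hub` library (an assumption of this example, not part of the PR itself); `ModelCard.load` fetches the README of the repo this PR targets and parses its frontmatter:

```python
# Minimal sketch: inspect the `language` metadata of a model card.
# Assumes `pip install huggingface_hub`; the repo id is the one this PR targets.
from huggingface_hub import ModelCard

card = ModelCard.load("VAGOsolutions/SauerkrautLM-v2-14b-SFT")
print(card.data.language)  # e.g. ['zho', 'eng', ...] once this PR is merged
```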

Files changed (1)
1. README.md +234 -229
README.md CHANGED
@@ -1,229 +1,234 @@
- ---
- language:
- - de
- - en
- - it
- - fr
- - pt
- - nl
- - ar
- - es
- license: apache-2.0
- tags:
- - spectrum
- - sft
- base_model:
- - Qwen/Qwen2.5-14B
- model-index:
- - name: SauerkrautLM-v2-14b-SFT
-   results:
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: IFEval (0-Shot)
-       type: HuggingFaceH4/ifeval
-       args:
-         num_few_shot: 0
-     metrics:
-     - type: inst_level_strict_acc and prompt_level_strict_acc
-       value: 69.64
-       name: strict accuracy
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: BBH (3-Shot)
-       type: BBH
-       args:
-         num_few_shot: 3
-     metrics:
-     - type: acc_norm
-       value: 45.82
-       name: normalized accuracy
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: MATH Lvl 5 (4-Shot)
-       type: hendrycks/competition_math
-       args:
-         num_few_shot: 4
-     metrics:
-     - type: exact_match
-       value: 29.23
-       name: exact match
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: GPQA (0-shot)
-       type: Idavidrein/gpqa
-       args:
-         num_few_shot: 0
-     metrics:
-     - type: acc_norm
-       value: 11.41
-       name: acc_norm
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: MuSR (0-shot)
-       type: TAUR-Lab/MuSR
-       args:
-         num_few_shot: 0
-     metrics:
-     - type: acc_norm
-       value: 11.07
-       name: acc_norm
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: MMLU-PRO (5-shot)
-       type: TIGER-Lab/MMLU-Pro
-       config: main
-       split: test
-       args:
-         num_few_shot: 5
-     metrics:
-     - type: acc
-       value: 46.73
-       name: accuracy
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
-       name: Open LLM Leaderboard
- ---
-
- ![SauerkrautLM-v2-14b-SFT](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-3.png "SauerkrautLM-v2-14b-SFT")
- ## VAGO solutions SauerkrautLM-v2-14b-SFT
-
- **Fine-tuned Model** - *Celebrating one year of SauerkrautLM with our most advanced model yet, showcasing two-phase Spectrum Fine-Tuning*
-
- Introducing **SauerkrautLM-v2-14b-SFT** – our latest Sauerkraut version based on [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B), celebrating the one-year anniversary of SauerkrautLM!
-
- - Two-phase Spectrum Fine-Tuning approach
- - Phase 1: 25% layer targeting with 0.6B tokens
- - Phase 2: 20% layer targeting with 0.6B tokens
- - Enhanced mathematical capabilities, function calling, and multilingual performance
-
- # Table of Contents
- 1. [Overview of all SauerkrautLM-v2-14b models](#all-sauerkrautlm-v2-14b)
- 2. [Model Details](#model-details)
-    - [Training procedure](#training-procedure)
- 3. [Evaluation](#evaluation)
- 4. [Disclaimer](#disclaimer)
- 5. [Contact](#contact)
- 6. [Collaborations](#collaborations)
- 7. [Acknowledgement](#acknowledgement)
-
- ## All SauerkrautLM-v2-14b
-
- | Model | HF | EXL2 | GGUF | AWQ |
- |-------|-------|-------|-------|-------|
- | SauerkrautLM-v2-14b-SFT | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT) | coming soon | coming soon | coming soon |
- | SauerkrautLM-v2-14b-DPO | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO) | coming soon | coming soon | coming soon |
-
- ## Model Details
- **SauerkrautLM-v2-14b-SFT**
- - **Model Type:** SauerkrautLM-v2-14b-SFT is a fine-tuned model based on [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B)
- - **Language(s):** German, English
- - **License:** Apache 2.0
- - **Contact:** [VAGO solutions](https://vago-solutions.ai)
-
- ## Training Procedure
-
- This model represents a significant advancement in our fine-tuning methodology, utilizing a two-phase Spectrum Fine-Tuning approach:
-
- **Phase 1 (25% Layer Targeting)**:
- - Training on 0.6B tokens with four distinct components:
-   1. Mathematics data (curated using a proprietary classifier)
-   2. English performance data (from Sauerkraut-v1)
-   3. High-quality German training data (from Sauerkraut-v1)
-   4. Function calling data (from Sauerkraut-v2)
-
- **Phase 2 (20% Layer Targeting)**:
- - Training on an additional 0.6B tokens with partial overlap:
-   1. New mathematics data (classifier-selected)
-   2. New English performance data (from Sauerkraut-v2)
-   3. New German training data (from Sauerkraut-v2)
-   4. Function calling data (from Sauerkraut-v2)
-
- **Dataset Composition**:
- - Mathematical content carefully curated with a proprietary classification model
- - Premium multilingual data from both Sauerkraut-v1 and Sauerkraut-v2
- - Specialized function calling training data
- - High-quality German-English content across various domains
-
- ## Objective and Results
-
- This release marks the one-year anniversary of SauerkrautLM, showcasing our most advanced training methodology to date. The two-phase Spectrum Fine-Tuning approach allows for more nuanced learning while maintaining efficiency in resource usage. The model demonstrates significant improvements in:
-
- - Mathematical reasoning capabilities
- - Function calling proficiency
- - Multilingual performance
- - Instruction following
- - Common-sense reasoning
-
- ## Evaluation
-
- **AGIEVAL**
- ![SauerkrautLM-v2-14b-SFT-AGIEVAL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-AGIEVAL.png "SauerkrautLM-v2-14b-SFT-AGIEVAL")
-
- **GPT4ALL**
- ![SauerkrautLM-v2-14b-SFT-GPT4ALL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-GPT4ALL.png "SauerkrautLM-v2-14b-SFT-GPT4ALL")
-
- **TRUTHFULQA**
- ![SauerkrautLM-v2-14b-SFT-TRUTHFULQA](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-TRUTHFULQA.png "SauerkrautLM-v2-14b-SFT-TRUTHFULQA")
-
- **OPENLEADERBOARD 2**
- ![SauerkrautLM-v2-14b-SFT-OPENLEADERBOARD](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-OPENLEADERBOARD.png "SauerkrautLM-v2-14b-SFT-OPENLEADERBOARD")
-
- **MMLU 5-shot**
- ![SauerkrautLM-v2-14b-SFT-MMLU-5shot](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-MMLU-5shot.png "SauerkrautLM-v2-14b-SFT-MMLU-5shot")
-
- **Berkeley Function Calling Leaderboard**
- ![SauerkrautLM-v2-14b-SFT-BERKELEY](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-BERKELEY.png "SauerkrautLM-v2-14b-SFT-BERKELEY")
-
- Please note that our benchmark results may differ in absolute numbers from the Hugging Face Leaderboard due to variations in evaluation pipelines; the relative differences, however, remain consistent.
-
- ## Disclaimer
- Despite our best efforts in data cleansing, we cannot entirely rule out that uncensored content slips through, nor can we guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, please inform us through the contact information provided. Please also note that the licensing of these models does not constitute legal advice, and we are not responsible for the actions of third parties who use our models.
-
- ## Contact
- If you are interested in customized LLMs for business applications, please get in touch with us via our website. We are also grateful for your feedback and suggestions.
-
- ## Collaborations
- We are also seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.ai).
-
- ## Acknowledgement
- Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community.
- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-v2-14b-SFT).
-
- | Metric              | Value |
- |---------------------|------:|
- | Avg.                | 35.65 |
- | IFEval (0-Shot)     | 69.64 |
- | BBH (3-Shot)        | 45.82 |
- | MATH Lvl 5 (4-Shot) | 29.23 |
- | GPQA (0-shot)       | 11.41 |
- | MuSR (0-shot)       | 11.07 |
- | MMLU-PRO (5-shot)   | 46.73 |
-
+ ---
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ license: apache-2.0
+ tags:
+ - spectrum
+ - sft
+ base_model:
+ - Qwen/Qwen2.5-14B
+ model-index:
+ - name: SauerkrautLM-v2-14b-SFT
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 69.64
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 45.82
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 29.23
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 11.41
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 11.07
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 46.73
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-v2-14b-SFT
+       name: Open LLM Leaderboard
+ ---
+
+ ![SauerkrautLM-v2-14b-SFT](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-3.png "SauerkrautLM-v2-14b-SFT")
+ ## VAGO solutions SauerkrautLM-v2-14b-SFT
+
+ **Fine-tuned Model** - *Celebrating one year of SauerkrautLM with our most advanced model yet, showcasing two-phase Spectrum Fine-Tuning*
+
+ Introducing **SauerkrautLM-v2-14b-SFT** – our latest Sauerkraut version based on [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B), celebrating the one-year anniversary of SauerkrautLM!
+
+ - Two-phase Spectrum Fine-Tuning approach
+ - Phase 1: 25% layer targeting with 0.6B tokens
+ - Phase 2: 20% layer targeting with 0.6B tokens
+ - Enhanced mathematical capabilities, function calling, and multilingual performance
+
+ # Table of Contents
+ 1. [Overview of all SauerkrautLM-v2-14b models](#all-sauerkrautlm-v2-14b)
+ 2. [Model Details](#model-details)
+    - [Training procedure](#training-procedure)
+ 3. [Evaluation](#evaluation)
+ 4. [Disclaimer](#disclaimer)
+ 5. [Contact](#contact)
+ 6. [Collaborations](#collaborations)
+ 7. [Acknowledgement](#acknowledgement)
+
+ ## All SauerkrautLM-v2-14b
+
+ | Model | HF | EXL2 | GGUF | AWQ |
+ |-------|-------|-------|-------|-------|
+ | SauerkrautLM-v2-14b-SFT | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT) | coming soon | coming soon | coming soon |
+ | SauerkrautLM-v2-14b-DPO | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO) | coming soon | coming soon | coming soon |
+
+ ## Model Details
+ **SauerkrautLM-v2-14b-SFT**
+ - **Model Type:** SauerkrautLM-v2-14b-SFT is a fine-tuned model based on [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B)
+ - **Language(s):** German, English
+ - **License:** Apache 2.0
+ - **Contact:** [VAGO solutions](https://vago-solutions.ai)
+
+ ## Training Procedure
+
+ This model represents a significant advancement in our fine-tuning methodology, utilizing a two-phase Spectrum Fine-Tuning approach:
+
+ **Phase 1 (25% Layer Targeting)**:
+ - Training on 0.6B tokens with four distinct components:
+   1. Mathematics data (curated using a proprietary classifier)
+   2. English performance data (from Sauerkraut-v1)
+   3. High-quality German training data (from Sauerkraut-v1)
+   4. Function calling data (from Sauerkraut-v2)
+
+ **Phase 2 (20% Layer Targeting)**:
+ - Training on an additional 0.6B tokens with partial overlap:
+   1. New mathematics data (classifier-selected)
+   2. New English performance data (from Sauerkraut-v2)
+   3. New German training data (from Sauerkraut-v2)
+   4. Function calling data (from Sauerkraut-v2)
+
+ **Dataset Composition**:
+ - Mathematical content carefully curated with a proprietary classification model
+ - Premium multilingual data from both Sauerkraut-v1 and Sauerkraut-v2
+ - Specialized function calling training data
+ - High-quality German-English content across various domains
+
+ ## Objective and Results
+
+ This release marks the one-year anniversary of SauerkrautLM, showcasing our most advanced training methodology to date. The two-phase Spectrum Fine-Tuning approach allows for more nuanced learning while maintaining efficiency in resource usage. The model demonstrates significant improvements in:
+
+ - Mathematical reasoning capabilities
+ - Function calling proficiency
+ - Multilingual performance
+ - Instruction following
+ - Common-sense reasoning
+
+ ## Evaluation
+
+ **AGIEVAL**
+ ![SauerkrautLM-v2-14b-SFT-AGIEVAL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-AGIEVAL.png "SauerkrautLM-v2-14b-SFT-AGIEVAL")
+
+ **GPT4ALL**
+ ![SauerkrautLM-v2-14b-SFT-GPT4ALL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-GPT4ALL.png "SauerkrautLM-v2-14b-SFT-GPT4ALL")
+
+ **TRUTHFULQA**
+ ![SauerkrautLM-v2-14b-SFT-TRUTHFULQA](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-TRUTHFULQA.png "SauerkrautLM-v2-14b-SFT-TRUTHFULQA")
+
+ **OPENLEADERBOARD 2**
+ ![SauerkrautLM-v2-14b-SFT-OPENLEADERBOARD](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-OPENLEADERBOARD.png "SauerkrautLM-v2-14b-SFT-OPENLEADERBOARD")
+
+ **MMLU 5-shot**
+ ![SauerkrautLM-v2-14b-SFT-MMLU-5shot](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-MMLU-5shot.png "SauerkrautLM-v2-14b-SFT-MMLU-5shot")
+
+ **Berkeley Function Calling Leaderboard**
+ ![SauerkrautLM-v2-14b-SFT-BERKELEY](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-BERKELEY.png "SauerkrautLM-v2-14b-SFT-BERKELEY")
+
+ Please note that our benchmark results may differ in absolute numbers from the Hugging Face Leaderboard due to variations in evaluation pipelines; the relative differences, however, remain consistent.
+
+ ## Disclaimer
+ Despite our best efforts in data cleansing, we cannot entirely rule out that uncensored content slips through, nor can we guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, please inform us through the contact information provided. Please also note that the licensing of these models does not constitute legal advice, and we are not responsible for the actions of third parties who use our models.
+
+ ## Contact
+ If you are interested in customized LLMs for business applications, please get in touch with us via our website. We are also grateful for your feedback and suggestions.
+
+ ## Collaborations
+ We are also seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.ai).
+
+ ## Acknowledgement
+ Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-v2-14b-SFT).
+
+ | Metric              | Value |
+ |---------------------|------:|
+ | Avg.                | 35.65 |
+ | IFEval (0-Shot)     | 69.64 |
+ | BBH (3-Shot)        | 45.82 |
+ | MATH Lvl 5 (4-Shot) | 29.23 |
+ | GPQA (0-shot)       | 11.41 |
+ | MuSR (0-shot)       | 11.07 |
+ | MMLU-PRO (5-shot)   | 46.73 |
+
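
The card above describes two-phase Spectrum fine-tuning (25%, then 20% layer targeting). The sketch below is illustrative only, not the authors' pipeline: it ranks linear modules by a crude signal-to-noise proxy (Spectrum itself scores layers with random-matrix-theory analysis) and unfreezes only the top fraction before supervised fine-tuning. The function name and the proxy are assumptions of this example.

```python
# Illustrative sketch of Spectrum-style selective fine-tuning.
# NOT the authors' pipeline; the SNR proxy below is a simplification.
import torch
from transformers import AutoModelForCausalLM

def spectrum_style_freeze(model: torch.nn.Module, target_fraction: float = 0.25) -> set:
    """Freeze all parameters, then unfreeze the `target_fraction` of linear
    modules with the highest signal-to-noise proxy (mean |w| / std(w))."""
    model.requires_grad_(False)
    scored = []
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear):
            w = module.weight.detach().float()
            snr = (w.abs().mean() / (w.std() + 1e-8)).item()  # crude proxy
            scored.append((snr, name, module))
    scored.sort(key=lambda t: t[0], reverse=True)
    n_keep = max(1, int(len(scored) * target_fraction))
    for _, _, module in scored[:n_keep]:
        module.requires_grad_(True)  # only these modules are trained
    return {name for _, name, _ in scored[:n_keep]}

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B", torch_dtype=torch.bfloat16)
trainable = spectrum_style_freeze(model, target_fraction=0.25)  # Phase 1: 25% targeting
# ... run SFT on the first 0.6B-token mix, then repeat with
# target_fraction=0.20 for Phase 2 on the second mix.
```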