lbourdois committed on
Commit cacef1e · verified · 1 Parent(s): b61134b

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.
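
For context on why the tag matters: the Hub's language filter matches these metadata entries. Below is a minimal discovery sketch using `huggingface_hub`, assuming a recent version in which `list_models` accepts `language` and `pipeline_tag` filters:

```python
from huggingface_hub import HfApi

api = HfApi()

# List text-generation models tagged with French ("fra");
# once this PR is merged, xwen-team/Xwen-7B-Chat should be among the matches.
for model in api.list_models(language="fra", pipeline_tag="text-generation", limit=10):
    print(model.id)
```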

Files changed (1)
README.md +254 -243
README.md CHANGED
@@ -1,243 +1,254 @@
- ---
- license: apache-2.0
- library_name: transformers
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-7B
- language:
- - en
- - zh
- ---
+ ---
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-7B
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---

# Xwen-7B-Chat

> [!IMPORTANT]
> If you enjoy our model, please **give it a like on our Hugging Face repo**. Your support means a lot to us. Thank you!

> [!IMPORTANT]
> You can download the **GGUF files of Xwen-7B-Chat** at [xwen-team/Xwen-7B-Chat-i1-GGUF](https://huggingface.co/xwen-team/Xwen-7B-Chat-i1-GGUF) (weighted/imatrix quants) and [xwen-team/Xwen-7B-Chat-GGUF](https://huggingface.co/xwen-team/Xwen-7B-Chat-GGUF) (static quants).

NEWS:

- Big thanks to @mradermacher for helping us build GGUFs for our Xwen-72B-Chat and Xwen-7B-Chat! The GGUF files have accumulated **over 2k downloads in one day** 🚀 Our official GGUF repos: [**xwen-team/Xwen-7B-Chat-i1-GGUF**](https://huggingface.co/xwen-team/Xwen-7B-Chat-i1-GGUF) (weighted/imatrix quants) and [**xwen-team/Xwen-7B-Chat-GGUF**](https://huggingface.co/xwen-team/Xwen-7B-Chat-GGUF) (static quants).
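
As a rough illustration of fetching one of these quants programmatically (the filename below is a guess; browse the repo's file list for the exact names):

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF quant. "Xwen-7B-Chat.Q4_K_M.gguf" is a hypothetical
# filename -- check the repo to pick the quant you actually want.
gguf_path = hf_hub_download(
    repo_id="xwen-team/Xwen-7B-Chat-GGUF",
    filename="Xwen-7B-Chat.Q4_K_M.gguf",
)
print(gguf_path)  # Local path, ready to pass to a llama.cpp-based runtime
```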

<img src="Xwen-Cartoon.jpg" alt="Xwen-Cartoon" style="zoom:35%;" />

## 1. Introduction

Xwen is a series of open-sourced large language models (currently including **[Xwen-72B-Chat](https://huggingface.co/xwen-team/Xwen-72B-Chat)** and **[Xwen-7B-Chat](https://huggingface.co/xwen-team/Xwen-7B-Chat)**), post-trained from the pre-trained Qwen2.5 models (i.e., [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) and [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)) [1].

**🏆 Top-1 chat performance!** To the best of our knowledge, at the time of the Xwen models' release (February 1, 2025), **[Xwen-72B-Chat](https://huggingface.co/xwen-team/Xwen-72B-Chat) and [Xwen-7B-Chat](https://huggingface.co/xwen-team/Xwen-7B-Chat) exhibit the best chat performance among open-sourced models below 100B and 10B, respectively**, based on evaluation results from widely used benchmarks such as Arena-Hard-Auto [2], MT-Bench [3], and AlignBench [4]. Please see the details in the [Evaluation Results](https://huggingface.co/xwen-team/Xwen-7B-Chat#3-evaluation-results) section.

**🚀 The Xwen technical report is on the way!** During the training of the Xwen models, we accumulated many technical insights and lessons. To promote the democratization of technology, we are documenting these insights and lessons in a technical report, which will be released as soon as possible.

## 2. Usage

> [!CAUTION]
> For optimal performance, we refrain from fine-tuning the model's identity. Thus, inquiries such as "Who are you?" or "Who developed you?" may yield random responses that are not necessarily accurate.

> [!CAUTION]
> This open-source model is provided "as is", without warranties or liabilities; users assume all risks associated with its use. Users are advised to comply with local laws, and the model's outputs do not represent the views or positions of its developers.

The usage of our Xwen-Chat models is similar to that of the Qwen2.5-Instruct models: the tokenizer and chat template are identical to those of the Qwen2.5-Instruct models.

Here we provide a Python script to demonstrate how to deploy our Xwen models and generate responses:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "xwen-team/Xwen-7B-Chat"  # Or "xwen-team/Xwen-72B-Chat" if you want to use the 72B model

# Load the model in its native dtype and spread it across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Xwen, created by Xwen Team. You are a helpful assistant."},  # This system prompt is optional; you may also pass an empty string.
    {"role": "user", "content": prompt}
]
# Render the conversation with the (Qwen2.5-style) chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
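
Because Xwen shares Qwen2.5's architecture and chat template, it should also be servable with higher-throughput engines. A minimal sketch with vLLM follows, assuming your installed vLLM version supports the Qwen2.5 architecture and provides `LLM.chat`; the sampling values are illustrative:

```python
from vllm import LLM, SamplingParams

# Load the model once; vLLM applies the tokenizer's chat template internally.
llm = LLM(model="xwen-team/Xwen-7B-Chat")
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```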

## 3. Evaluation Results

> [!CAUTION]
> Results on other benchmarks will be updated soon! 😊

🔑: Open-sourced

🔒: Proprietary

### 3.1 Arena-Hard-Auto-v0.1

All results below, except those for `Xwen-72B-Chat`, `DeepSeek-V3`, and `DeepSeek-R1`, are sourced from [Arena-Hard-Auto](https://github.com/lmarena/arena-hard-auto) (accessed on February 1, 2025).

The results of `DeepSeek-V3` and `DeepSeek-R1` are taken from their official reports.

#### 3.1.1 No Style Control

**Comparison of Xwen-72B-Chat with other LLMs at a comparable level:**

| Model | Score | 95% CIs |
| --------------------------------- | ------------------------------------ | ----------- |
| **Xwen-72B-Chat** 🔑 | **86.1** (Top-1 among 🔑 below 100B) | (-1.5, 1.7) |
| Qwen2.5-72B-Instruct 🔑 | 78.0 | (-1.8, 1.8) |
| Athene-v2-Chat 🔑 | 85.0 | (-1.4, 1.7) |
| DeepSeek-V3 **(671B >> 72B)** 🔑 | 85.5 | N/A |
| DeepSeek-R1 **(671B >> 72B)** 🔑 | **92.3** (Top-1 among 🔑) | N/A |
| Llama-3.1-Nemotron-70B-Instruct 🔑 | 84.9 | (-1.7, 1.8) |
| Llama-3.1-405B-Instruct-FP8 🔑 | 69.3 | (-2.4, 2.2) |
| Claude-3-5-Sonnet-20241022 🔒 | 85.2 | (-1.4, 1.6) |
| O1-Preview-2024-09-12 🔒 | **92.0** (Top-1 among 🔒) | (-1.2, 1.0) |
| O1-Mini-2024-09-12 🔒 | 90.4 | (-1.1, 1.3) |
| GPT-4-Turbo-2024-04-09 🔒 | 82.6 | (-1.8, 1.5) |
| GPT-4-0125-Preview 🔒 | 78.0 | (-2.1, 2.4) |
| GPT-4o-2024-08-06 🔒 | 77.9 | (-2.0, 2.1) |
| Yi-Lightning 🔒 | 81.5 | (-1.6, 1.6) |
| Yi-Large 🔒 | 63.7 | (-2.6, 2.4) |
| GLM-4-0520 🔒 | 63.8 | (-2.9, 2.8) |

**Comparison of Xwen-7B-Chat with other LLMs at a comparable level:**

| Model | Score | 95% CIs |
| ---------------------------------------------------------- | -------- | ----------- |
| **Xwen-7B-Chat** 🔑 | **59.4** | (-2.4, 2.1) |
| Qwen2.5-7B-Instruct 🔑 | 50.4 | (-2.9, 2.5) |
| Gemma-2-27B-IT 🔑 | 57.5 | (-2.1, 2.4) |
| Llama-3.1-8B-Instruct 🔑 | 21.3 | (-1.9, 2.2) |
| Llama-3-8B-Instruct 🔑 | 20.6 | (-2.0, 1.9) |
| Starling-LM-7B-beta 🔑 | 23.0 | (-1.8, 1.8) |
| DeepSeek-R1-Distill-Qwen-7B (only responses) 🔑 | 17.2 | (-1.4, 1.7) |
| DeepSeek-R1-Distill-Qwen-7B (w/ thoughts and responses) 🔑 | 13.6 | (-1.4, 1.8) |

#### 3.1.2 Style Control

**Comparison of Xwen-72B-Chat with other LLMs at a comparable level:**

| Model | Score | 95% CIs |
| --------------------------------- | ------------------------ | ----------- |
| **Xwen-72B-Chat** 🔑 | **72.4** (Top-1 among 🔑) | (-4.3, 4.1) |
| Qwen2.5-72B-Instruct 🔑 | 63.3 | (-2.5, 2.3) |
| Athene-v2-Chat 🔑 | 72.1 | (-2.5, 2.5) |
| Llama-3.1-Nemotron-70B-Instruct 🔑 | 71.0 | (-2.8, 3.1) |
| Llama-3.1-405B-Instruct-FP8 🔑 | 67.1 | (-2.2, 2.8) |
| Claude-3-5-Sonnet-20241022 🔒 | **86.4** (Top-1 among 🔒) | (-1.3, 1.3) |
| O1-Preview-2024-09-12 🔒 | 81.7 | (-2.2, 2.1) |
| O1-Mini-2024-09-12 🔒 | 79.3 | (-2.8, 2.3) |
| GPT-4-Turbo-2024-04-09 🔒 | 74.3 | (-2.4, 2.4) |
| GPT-4-0125-Preview 🔒 | 73.6 | (-2.0, 2.0) |
| GPT-4o-2024-08-06 🔒 | 71.1 | (-2.5, 2.0) |
| Yi-Lightning 🔒 | 66.9 | (-3.3, 2.7) |
| Yi-Large-Preview 🔒 | 65.1 | (-2.5, 2.5) |
| GLM-4-0520 🔒 | 61.4 | (-2.6, 2.4) |

**Comparison of Xwen-7B-Chat with other LLMs at a comparable level:**

| Model | Score | 95% CIs |
| ---------------------------------------------------------- | -------- | ----------- |
| **Xwen-7B-Chat** 🔑 | **50.3** | (-3.8, 2.8) |
| Qwen2.5-7B-Instruct 🔑 | 46.9 | (-3.1, 2.7) |
| Gemma-2-27B-IT 🔑 | 47.5 | (-2.5, 2.7) |
| Llama-3.1-8B-Instruct 🔑 | 18.3 | (-1.6, 1.6) |
| Llama-3-8B-Instruct 🔑 | 19.8 | (-1.6, 1.9) |
| Starling-LM-7B-beta 🔑 | 26.1 | (-2.6, 2.0) |
| DeepSeek-R1-Distill-Qwen-7B (only responses) 🔑 | 18.5 | (-1.6, 1.8) |
| DeepSeek-R1-Distill-Qwen-7B (w/ thoughts and responses) 🔑 | 11.8 | (-1.6, 1.6) |

### 3.2 AlignBench-v1.1

> [!IMPORTANT]
> We replaced the original judge model in AlignBench, `GPT-4-0613`, with the more powerful `GPT-4o-0513`. For fairness, all the results below are judged by `GPT-4o-0513`; as a result, they may differ from the AlignBench-v1.1 scores reported elsewhere.
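
For readers unfamiliar with the setup, LLM-as-a-judge scoring boils down to prompting the judge model with the question and a candidate answer. A simplified sketch is shown below; the prompt wording is illustrative rather than AlignBench's actual template, and `gpt-4o-2024-05-13` is the API identifier corresponding to GPT-4o-0513:

```python
from openai import OpenAI

client = OpenAI()  # Assumes OPENAI_API_KEY is set in the environment

question = "Give me a short introduction to large language models."
answer = "..."  # The candidate model's response to be scored

judge_prompt = (
    "You are a strict grader. Rate the answer to the question below on a "
    "1-10 scale for helpfulness and correctness. Reply with only the number.\n\n"
    f"Question: {question}\n\nAnswer: {answer}"
)
resp = client.chat.completions.create(
    model="gpt-4o-2024-05-13",
    messages=[{"role": "user", "content": judge_prompt}],
)
print(resp.choices[0].message.content)  # e.g. "7"
```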

**Comparison of Xwen-72B-Chat with other LLMs at a comparable level:**

| Model | Score |
| ----------------------------- | ------------------------ |
| **Xwen-72B-Chat** 🔑 | **7.57** (Top-1 among 🔑) |
| Qwen2.5-72B-Instruct 🔑 | 7.51 |
| DeepSeek-V2.5 🔑 | 7.38 |
| Mistral-Large-Instruct-2407 🔑 | 7.10 |
| Llama-3.1-70B-Instruct 🔑 | 5.81 |
| Llama-3.1-405B-Instruct-FP8 🔑 | 5.56 |
| GPT-4o-0513 🔒 | **7.59** (Top-1 among 🔒) |
| Claude-3.5-Sonnet-20240620 🔒 | 7.17 |
| Yi-Lightning 🔒 | 7.54 |
| Yi-Large-Preview 🔒 | 7.20 |

**Comparison of Xwen-7B-Chat with other LLMs at a comparable level:**

| Model | Score |
| ---------------------- | -------- |
| **Xwen-7B-Chat** 🔑 | **6.88** |
| Qwen2.5-7B-Instruct 🔑 | 6.56 |

### 3.3 MT-Bench

> [!IMPORTANT]
> We replaced the original judge model in MT-Bench, `GPT-4`, with the more powerful `GPT-4o-0513`. For fairness, all the results below are judged by `GPT-4o-0513`; as a result, they may differ from the MT-Bench scores reported elsewhere.

**Comparison of Xwen-72B-Chat with other LLMs at a comparable level:**

| Model | Score |
| ----------------------------- | ------------------------ |
| **Xwen-72B-Chat** 🔑 | **8.64** (Top-1 among 🔑) |
| Qwen2.5-72B-Instruct 🔑 | 8.62 |
| DeepSeek-V2.5 🔑 | 8.43 |
| Mistral-Large-Instruct-2407 🔑 | 8.53 |
| Llama-3.1-70B-Instruct 🔑 | 8.23 |
| Llama-3.1-405B-Instruct-FP8 🔑 | 8.36 |
| GPT-4o-0513 🔒 | 8.59 |
| Claude-3.5-Sonnet-20240620 🔒 | 6.96 |
| Yi-Lightning 🔒 | **8.75** (Top-1 among 🔒) |
| Yi-Large-Preview 🔒 | 8.32 |

**Comparison of Xwen-7B-Chat with other LLMs at a comparable level:**

| Model | Score |
| ---------------------- | -------- |
| **Xwen-7B-Chat** 🔑 | **7.98** |
| Qwen2.5-7B-Instruct 🔑 | 7.71 |

## References

[1] Yang, An, et al. "Qwen2.5 Technical Report." arXiv preprint arXiv:2412.15115 (2024).

[2] Li, Tianle, et al. "From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline." arXiv preprint arXiv:2406.11939 (2024).

[3] Zheng, Lianmin, et al. "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena." Advances in Neural Information Processing Systems 36 (2023).

[4] Liu, Xiao, et al. "AlignBench: Benchmarking Chinese Alignment of Large Language Models." Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (2024).