de-francophones committed
Commit e91b5a4 · verified · 1 Parent(s): 8656056

Add languages tag

Files changed (1)
  1. README.md +176 -57
README.md CHANGED
---
tags:
- unsloth
base_model:
- Qwen/Qwen3-4B-Base
license: apache-2.0
language:
- eng
- fra
- por
- deu
- ron
- swe
- dan
- bul
- rus
- ces
- ell
- ukr
- spa
- nld
- slk
- hrv
- pol
- lit
- nob
- nno
- fas
- slv
- guj
- lav
- ita
- oci
- nep
- mar
- bel
- srp
- ltz
- vec
- asm
- cym
- szl
- ast
- hne
- awa
- mai
- bho
- snd
- gle
- fao
- hin
- pan
- ben
- ori
- tgk
- ydd
- lmo
- lij
- scn
- fur
- srd
- glg
- cat
- isl
- als
- lim
- prs
- afr
- mkd
- sin
- urd
- mag
- bos
- hye
- zho
- yue
- mya
- ara
- ars
- apc
- arz
- ary
- acm
- acq
- aeb
- heb
- mlt
- ind
- zsm
- tgl
- ceb
- jav
- sun
- min
- ban
- bjn
- pag
- ilo
- war
- tam
- tel
- kan
- mal
- tur
- azj
- uzn
- kaz
- bak
- tat
- tha
- lao
- fin
- est
- hun
- vie
- khm
- jpn
- kor
- kat
- eus
- hat
- pap
- kea
- tpi
- swa
---
# Qwen3-4B-Base

## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5:

- **Expanded Higher-Quality Pre-training Corpus:** Qwen3 is pre-trained on 36 trillion tokens across 119 languages, tripling the language coverage of Qwen2.5, with a much richer mix of high-quality data, including coding, STEM, reasoning, book, multilingual, and synthetic data.
- **Training Techniques and Model Architecture:** Qwen3 incorporates a series of training techniques and architectural refinements, including global-batch load balancing loss for MoE models and qk layernorm for all models, leading to improved stability and overall performance.
- **Three-stage Pre-training:** Stage 1 focuses on broad language modeling and general knowledge acquisition, Stage 2 improves reasoning skills like STEM, coding, and logical reasoning, and Stage 3 enhances long-context comprehension by extending training sequence lengths up to 32k tokens.
- **Scaling Law Guided Hyperparameter Tuning:** Through comprehensive scaling law studies across the three-stage pre-training pipeline, Qwen3 systematically tunes critical hyperparameters, such as the learning rate scheduler and batch size, separately for dense and MoE models, resulting in better training dynamics and final performance across different model scales.

## Model Overview

**Qwen3-4B-Base** has the following features (a short sketch for reading them from the model config follows the list):
- Type: Causal Language Models
- Training Stage: Pretraining
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768

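These architecture figures can be cross-checked against the checkpoint's configuration. Below is a minimal sketch, assuming a recent `transformers` install and access to the Hub; the repo id is the base model listed in the card metadata, so substitute whichever checkpoint you are actually loading:

```python
from transformers import AutoConfig

# Illustrative repo id; replace with the checkpoint you are using.
config = AutoConfig.from_pretrained("Qwen/Qwen3-4B-Base")

print(config.num_hidden_layers)        # 36 layers
print(config.num_attention_heads)      # 32 query heads
print(config.num_key_value_heads)      # 8 key/value heads (GQA)
print(config.max_position_embeddings)  # 32,768-token context length
```
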
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Requirements

The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```

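As a quick start, here is a minimal text-completion sketch with `transformers`; the prompt and generation settings are illustrative, and the repo id should be replaced with the checkpoint you are actually loading:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Requires transformers>=4.51.0 (older versions raise KeyError: 'qwen3').
model_id = "Qwen/Qwen3-4B-Base"  # illustrative repo id; substitute your checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; remove to load on CPU
)

# Base (non-instruct) model: plain completion, no chat template.
inputs = tokenizer("The key advantages of mixture-of-experts models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a pretrained base model without post-training, completion-style prompting as above and further fine-tuning are the intended uses rather than direct chat.
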
## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen3/).

### Citation

If you find our work helpful, feel free to cite it.

```
@misc{qwen3,
    title  = {Qwen3},
    url    = {https://qwenlm.github.io/blog/qwen3/},
    author = {Qwen Team},
    month  = {April},
    year   = {2025}
}
```