Files changed (1)
  1. README.md +177 -58
README.md CHANGED
@@ -1,59 +1,178 @@
- ---
- tags:
- - unsloth
- base_model:
- - Qwen/Qwen3-0.6B-Base
- license: apache-2.0
- library_name: transformers
- ---
- # Qwen3-0.6B-Base
-
- ## Qwen3 Highlights
-
- Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
- Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5:
-
- - **Expanded Higher-Quality Pre-training Corpus:** Qwen3 is pre-trained on 36 trillion tokens across 119 languages — tripling the language coverage of Qwen2.5 — with a much richer mix of high-quality data, including coding, STEM, reasoning, book, multilingual, and synthetic data.
- - **Training Techniques and Model Architecture:** Qwen3 incorporates a series of training techiques and architectural refinements, including global-batch load balancing loss for MoE models and qk layernorm for all models, leading to improved stability and overall performance.
- - **Three-stage Pre-training:** Stage 1 focuses on broad language modeling and general knowledge acquisition, Stage 2 improves reasoning skills like STEM, coding, and logical reasoning, and Stage 3 enhances long-context comprehension by extending training sequence lengths up to 32k tokens.
- - **Scaling Law Guided Hyperparameter Tuning:** Through comprehensive scaling law studies across the three-stage pre-training pipeline, Qwen3 systematically tunes critical hyperparameters — such as learning rate scheduler and batch size — separately for dense and MoE models, resulting in better training dynamics and final performance across different model scales.
-
- ## Model Overview
-
- **Qwen3-0.6B-Base** has the following features:
- - Type: Causal Language Models
- - Training Stage: Pretraining
- - Number of Parameters: 0.6B
- - Number of Paramaters (Non-Embedding): 0.44B
- - Number of Layers: 28
- - Number of Attention Heads (GQA): 16 for Q and 8 for KV
- - Context Length: 32,768
-
- For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
-
- ## Requirements
-
- The code of Qwen3 has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`.
-
- With `transformers<4.51.0`, you will encounter the following error:
- ```
- KeyError: 'qwen3'
- ```
-
- ## Evaluation & Performance
-
- Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen3/).
-
- ### Citation
-
- If you find our work helpful, feel free to give us a cite.
-
- ```
- @misc{qwen3,
- title = {Qwen3},
- url = {https://qwenlm.github.io/blog/qwen3/},
- author = {Qwen Team},
- month = {April},
- year = {2025}
- }
  ```
 
+ ---
+ tags:
+ - unsloth
+ base_model:
+ - Qwen/Qwen3-0.6B-Base
+ license: apache-2.0
+ library_name: transformers
+ language:
+ - eng
+ - fra
+ - por
+ - deu
+ - ron
+ - swe
+ - dan
+ - bul
+ - rus
+ - ces
+ - ell
+ - ukr
+ - spa
+ - nld
+ - slk
+ - hrv
+ - pol
+ - lit
+ - nob
+ - nno
+ - fas
+ - slv
+ - guj
+ - lav
+ - ita
+ - oci
+ - nep
+ - mar
+ - bel
+ - srp
+ - ltz
+ - vec
+ - asm
+ - cym
+ - szl
+ - ast
+ - hne
+ - awa
+ - mai
+ - bho
+ - snd
+ - gle
+ - fao
+ - hin
+ - pan
+ - ben
+ - ori
+ - tgk
+ - ydd
+ - lmo
+ - lij
+ - scn
+ - fur
+ - srd
+ - glg
+ - cat
+ - isl
+ - als
+ - lim
+ - prs
+ - afr
+ - mkd
+ - sin
+ - urd
+ - mag
+ - bos
+ - hye
+ - zho
+ - yue
+ - mya
+ - ara
+ - ars
+ - apc
+ - arz
+ - ary
+ - acm
+ - acq
+ - aeb
+ - heb
+ - mlt
+ - ind
+ - zsm
+ - tgl
+ - ceb
+ - jav
+ - sun
+ - min
+ - ban
+ - bjn
+ - pag
+ - ilo
+ - war
+ - tam
+ - tel
+ - kan
+ - mal
+ - tur
+ - azj
+ - uzn
+ - kaz
+ - bak
+ - tat
+ - tha
+ - lao
+ - fin
+ - est
+ - hun
+ - vie
+ - khm
+ - jpn
+ - kor
+ - kat
+ - eus
+ - hat
+ - pap
+ - kea
+ - tpi
+ - swa
+ ---
+ # Qwen3-0.6B-Base
+
+ ## Qwen3 Highlights
+
+ Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
+ Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5:
+
+ - **Expanded Higher-Quality Pre-training Corpus:** Qwen3 is pre-trained on 36 trillion tokens across 119 languages — tripling the language coverage of Qwen2.5 — with a much richer mix of high-quality data, including coding, STEM, reasoning, book, multilingual, and synthetic data.
+ - **Training Techniques and Model Architecture:** Qwen3 incorporates a series of training techniques and architectural refinements, including a global-batch load balancing loss for MoE models and qk layernorm for all models, leading to improved stability and overall performance (a minimal sketch of qk layernorm follows this list).
+ - **Three-stage Pre-training:** Stage 1 focuses on broad language modeling and general knowledge acquisition, Stage 2 improves reasoning skills such as STEM, coding, and logical reasoning, and Stage 3 enhances long-context comprehension by extending training sequence lengths up to 32k tokens.
+ - **Scaling Law Guided Hyperparameter Tuning:** Through comprehensive scaling law studies across the three-stage pre-training pipeline, Qwen3 systematically tunes critical hyperparameters — such as the learning rate scheduler and batch size — separately for dense and MoE models, resulting in better training dynamics and final performance across different model scales.
+
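+ As a rough illustration of the qk layernorm refinement mentioned above, the sketch below applies an RMSNorm over the head dimension of the query and key states before attention. This is a hypothetical, simplified sketch rather than the actual Qwen3 implementation (the real modules live in `modeling_qwen3.py` in `transformers`).
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class RMSNorm(nn.Module):
+     """Root-mean-square layer norm over the last dimension."""
+     def __init__(self, dim: int, eps: float = 1e-6):
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(dim))
+         self.eps = eps
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         variance = x.pow(2).mean(-1, keepdim=True)
+         return self.weight * x * torch.rsqrt(variance + self.eps)
+
+ class QKNormStub(nn.Module):
+     """Only the qk-norm step of attention, for illustration."""
+     def __init__(self, head_dim: int):
+         super().__init__()
+         self.q_norm = RMSNorm(head_dim)
+         self.k_norm = RMSNorm(head_dim)
+
+     def forward(self, q: torch.Tensor, k: torch.Tensor):
+         # q, k: (batch, seq_len, num_heads, head_dim); each head's features are
+         # normalized independently before rotary embeddings and attention scores.
+         return self.q_norm(q), self.k_norm(k)
+ ```
+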
+ ## Model Overview
+
+ **Qwen3-0.6B-Base** has the following features:
+ - Type: Causal Language Models
+ - Training Stage: Pretraining
+ - Number of Parameters: 0.6B
+ - Number of Parameters (Non-Embedding): 0.44B
+ - Number of Layers: 28
+ - Number of Attention Heads (GQA): 16 for Q and 8 for KV
+ - Context Length: 32,768
+
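+ As a quick sanity check, the architecture numbers above can be read back from the model configuration. A minimal sketch, assuming a recent `transformers` install and the standard causal-LM config attribute names:
+
+ ```python
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("Qwen/Qwen3-0.6B-Base")
+ print(config.num_hidden_layers)         # number of layers, expected 28
+ print(config.num_attention_heads)       # query heads, expected 16
+ print(config.num_key_value_heads)       # KV heads (GQA), expected 8
+ print(config.max_position_embeddings)   # context length, expected 32768
+ ```
+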
+ For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
+
+ ## Requirements
+
+ The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
+
+ With `transformers<4.51.0`, you will encounter the following error:
+ ```
+ KeyError: 'qwen3'
+ ```
+
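+ Once a recent enough `transformers` is installed, the base model can be used for plain text completion (it is a pretrained base model, not an instruction-tuned chat model). A minimal usage sketch; the prompt and generation settings below are illustrative:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Qwen/Qwen3-0.6B-Base"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
+
+ # Base models are used for continuation rather than chat.
+ prompt = "The key advantages of grouped-query attention are"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+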
+ ## Evaluation & Performance
+
+ Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen3/).
+
+ ### Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @misc{qwen3,
+ title = {Qwen3},
+ url = {https://qwenlm.github.io/blog/qwen3/},
+ author = {Qwen Team},
+ month = {April},
+ year = {2025}
+ }
  ```