---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: outputs/out
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-7B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
- path: PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
  type: sharegpt
  conversation: chatml
- path: NewEden/Kalo-Opus-Instruct-22k-Refusal-Murdered
  type: sharegpt
  conversation: chatml
- path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
  type: sharegpt
  conversation: chatml
- path: NewEden/Gryphe-Sonnet-3.5-35k-Subset
  type: sharegpt
  conversation: chatml
- path: Nitral-AI/Reasoning-1shot_ShareGPT
  type: sharegpt
  conversation: chatml
- path: Nitral-AI/GU_Instruct-ShareGPT
  type: sharegpt
  conversation: chatml
- path: Nitral-AI/Medical_Instruct-ShareGPT
  type: sharegpt
  conversation: chatml
- path: AquaV/Resistance-Sharegpt
  type: sharegpt
  conversation: chatml
- path: AquaV/US-Army-Survival-Sharegpt
  type: sharegpt
  conversation: chatml
- path: Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
  type: sharegpt
  conversation: chatml

chat_template: chatml

val_set_size: 0.002
output_dir: ./outputs/out

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: qwen7B
wandb_entity:
wandb_watch:
wandb_name: qwen7B
wandb_log_model:

gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2

debug:
deepspeed:
fsdp:
fsdp_config:

special_tokens:
  pad_token: <pad>

```

</details><br>

# outputs/out

This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the datasets listed in the Axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.7923
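
The checkpoint was trained on ChatML-formatted conversations (`chat_template: chatml` in the config above), so prompts should be built with the tokenizer's chat template. Below is a minimal inference sketch; the local path `./outputs/out` is simply the training `output_dir` from the config and should be replaced with wherever the weights are actually stored or published.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: the training run saved to ./outputs/out (see output_dir in the config above).
model_id = "./outputs/out"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me three tips for purifying water in the field."},
]

# apply_chat_template renders the ChatML prompt format the model was trained on,
# assuming the saved tokenizer carries the chat template set in the config.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```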

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128 (see the sketch after this list)
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 46
- num_epochs: 2
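
A quick sanity check on how the derived values above follow from the Axolotl config (a sketch; the total-step figure is approximate and inferred from the step/epoch numbers in the results table below):

```python
# Effective global batch size: per-device micro batch * gradient accumulation * number of GPUs.
micro_batch_size = 1
gradient_accumulation_steps = 32
num_devices = 4
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128, matching total_train_batch_size above

# Warmup steps come from warmup_ratio: 0.1 applied to the total optimizer steps,
# roughly 462 over 2 epochs (~231 steps per epoch per the results table).
approx_total_steps = 462
print(round(0.1 * approx_total_steps))  # 46, matching lr_scheduler_warmup_steps
```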

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0297        | 0.0043 | 1    | 1.1468          |
| 0.8512        | 0.2515 | 58   | 0.8729          |
| 0.8496        | 0.5030 | 116  | 0.8193          |
| 0.8175        | 0.7546 | 174  | 0.8033          |
| 0.7868        | 1.0041 | 232  | 0.7961          |
| 0.8119        | 1.2555 | 290  | 0.7934          |
| 0.799         | 1.5069 | 348  | 0.7926          |
| 0.7891        | 1.7583 | 406  | 0.7923          |


### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1