chansung committed on
Commit
13bda72
·
verified ·
1 Parent(s): 018cd94

Model save

README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ library_name: peft
+ license: llama3.2
+ base_model: meta-llama/Llama-3.2-3B
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ datasets:
+ - generator
+ model-index:
+ - name: llama3-3b-closedqa-gpt4o-100k
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llama3-3b-closedqa-gpt4o-100k
+
+ This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the generator dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.3769
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 256
+ - total_eval_batch_size: 128
+ - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10
+
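The totals in this list follow directly from the per-device settings, and the per-step learning rates logged in trainer_state.json are consistent with linear warmup over the first 10% of steps followed by cosine decay. A minimal sketch (function names are mine, not from the training code):

```python
import math

# Effective batch size implied by the hyperparameters above:
# per-device batch * gradient accumulation steps * number of devices.
def total_train_batch_size(per_device=16, grad_accum=2, num_devices=8):
    return per_device * grad_accum * num_devices

# Linear warmup for the first warmup_ratio fraction of steps, then
# cosine decay to zero (the shape transformers uses for
# lr_scheduler_type="cosine" with a warmup ratio).
def lr_at(step, base_lr=2e-4, max_steps=640, warmup_ratio=0.1):
    warmup_steps = int(max_steps * warmup_ratio)  # 64 of 640 steps
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(total_train_batch_size())  # 256, as reported above
print(lr_at(60))   # matches the step-60 log entry (0.0001875)
print(lr_at(100))  # matches the step-100 log entry (~1.9808e-4)
```

For example, `lr_at(1)` reproduces the `3.125e-06` logged at the very first step.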
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 1.471 | 1.0 | 64 | 2.3854 |
+ | 1.362 | 2.0 | 128 | 2.3754 |
+ | 1.3229 | 3.0 | 192 | 2.3740 |
+ | 1.2996 | 4.0 | 256 | 2.3706 |
+ | 1.2878 | 5.0 | 320 | 2.3755 |
+ | 1.2746 | 6.0 | 384 | 2.3722 |
+ | 1.2617 | 7.0 | 448 | 2.3756 |
+ | 1.2497 | 8.0 | 512 | 2.3754 |
+ | 1.2549 | 9.0 | 576 | 2.3762 |
+ | 1.2494 | 10.0 | 640 | 2.3769 |
+
+
+ ### Framework versions
+
+ - PEFT 0.15.1
+ - Transformers 4.50.3
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.5.0
+ - Tokenizers 0.21.1
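Worth noting when reading the table: the card reports the final-epoch loss (2.3769), but the validation loss actually bottoms out at epoch 4 and drifts up slightly afterwards. A quick check over the table values:

```python
# (epoch, validation_loss) pairs copied from the training-results table.
history = [
    (1, 2.3854), (2, 2.3754), (3, 2.3740), (4, 2.3706), (5, 2.3755),
    (6, 2.3722), (7, 2.3756), (8, 2.3754), (9, 2.3762), (10, 2.3769),
]

# The epoch with the lowest validation loss is not the last one.
best_epoch, best_loss = min(history, key=lambda pair: pair[1])
print(best_epoch, best_loss)  # 4 2.3706
```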
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:05c03b22b19915440a3619513f7ea26129b9ec9e9278ac405bce16e35f9aa306
+ oid sha256:d846c3d7d8d294a4f449243d354efe697a0682a73620ffab23df4c1a459ec1f2
  size 1612749744
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "epoch": 10.0,
+   "total_flos": 2.846679860191953e+18,
+   "train_loss": 1.3252380434423685,
+   "train_runtime": 3360.483,
+   "train_samples": 111440,
+   "train_samples_per_second": 48.669,
+   "train_steps_per_second": 0.19
+ }
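The reported throughput is consistent with the run's step count and wall-clock time; a quick sanity check (640 optimizer steps is the `global_step`/`max_steps` value from trainer_state.json):

```python
global_step = 640          # total optimizer steps for the run
train_runtime = 3360.483   # seconds, from all_results.json

# Steps per second, rounded the way the Trainer reports it.
steps_per_second = global_step / train_runtime
print(round(steps_per_second, 2))  # 0.19, matching train_steps_per_second
```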
runs/Apr01_01-14-14_green-face-echoes-fin-01/events.out.tfevents.1743470256.green-face-echoes-fin-01.27822.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:85d8ea6c05da2b00cc094db1dad6a474dd6bfcee47b7f972a29d25761b3636e4
- size 34127
+ oid sha256:bf2fa52dfa76cb05648f8b68caea459d3d5b0ac69a8096c16d589147c1391d48
+ size 36440
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "epoch": 10.0,
+   "total_flos": 2.846679860191953e+18,
+   "train_loss": 1.3252380434423685,
+   "train_runtime": 3360.483,
+   "train_samples": 111440,
+   "train_samples_per_second": 48.669,
+   "train_steps_per_second": 0.19
+ }
trainer_state.json ADDED
@@ -0,0 +1,1026 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 10.0,
+   "eval_steps": 500,
+   "global_step": 640,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {"epoch": 0.015625, "grad_norm": 0.5563601851463318, "learning_rate": 3.125e-06, "loss": 1.837, "step": 1},
+     {"epoch": 0.078125, "grad_norm": 0.5579909682273865, "learning_rate": 1.5625e-05, "loss": 1.832, "step": 5},
+     {"epoch": 0.15625, "grad_norm": 0.3869417607784271, "learning_rate": 3.125e-05, "loss": 1.8211, "step": 10},
+     {"epoch": 0.234375, "grad_norm": 0.2597704827785492, "learning_rate": 4.6875e-05, "loss": 1.791, "step": 15},
+     {"epoch": 0.3125, "grad_norm": 0.2681010663509369, "learning_rate": 6.25e-05, "loss": 1.7632, "step": 20},
+     {"epoch": 0.390625, "grad_norm": 0.2458464354276657, "learning_rate": 7.8125e-05, "loss": 1.7142, "step": 25},
+     {"epoch": 0.46875, "grad_norm": 0.2320868819952011, "learning_rate": 9.375e-05, "loss": 1.662, "step": 30},
+     {"epoch": 0.546875, "grad_norm": 0.2217356562614441, "learning_rate": 0.000109375, "loss": 1.606, "step": 35},
+     {"epoch": 0.625, "grad_norm": 0.17081762850284576, "learning_rate": 0.000125, "loss": 1.5636, "step": 40},
+     {"epoch": 0.703125, "grad_norm": 0.13257570564746857, "learning_rate": 0.00014062500000000002, "loss": 1.5291, "step": 45},
+     {"epoch": 0.78125, "grad_norm": 0.12237544357776642, "learning_rate": 0.00015625, "loss": 1.5163, "step": 50},
+     {"epoch": 0.859375, "grad_norm": 0.10626661777496338, "learning_rate": 0.00017187500000000002, "loss": 1.4857, "step": 55},
+     {"epoch": 0.9375, "grad_norm": 0.1030210331082344, "learning_rate": 0.0001875, "loss": 1.471, "step": 60},
+     {"epoch": 1.0, "eval_loss": 2.3853657245635986, "eval_runtime": 0.8929, "eval_samples_per_second": 6.72, "eval_steps_per_second": 1.12, "step": 64},
+     {"epoch": 1.015625, "grad_norm": 0.1105547621846199, "learning_rate": 0.00019999851261394218, "loss": 1.4547, "step": 65},
+     {"epoch": 1.09375, "grad_norm": 0.1074226126074791, "learning_rate": 0.00019994645874763658, "loss": 1.4433, "step": 70},
+     {"epoch": 1.171875, "grad_norm": 0.1031486839056015, "learning_rate": 0.00019982007981886847, "loss": 1.4319, "step": 75},
+     {"epoch": 1.25, "grad_norm": 0.09431572258472443, "learning_rate": 0.00019961946980917456, "loss": 1.4231, "step": 80},
+     {"epoch": 1.328125, "grad_norm": 0.10248947143554688, "learning_rate": 0.00019934477790194445, "loss": 1.414, "step": 85},
+     {"epoch": 1.40625, "grad_norm": 0.10952038317918777, "learning_rate": 0.00019899620837148077, "loss": 1.3994, "step": 90},
+     {"epoch": 1.484375, "grad_norm": 0.11935596913099289, "learning_rate": 0.0001985740204310909, "loss": 1.3933, "step": 95},
+     {"epoch": 1.5625, "grad_norm": 0.13777290284633636, "learning_rate": 0.00019807852804032305, "loss": 1.3891, "step": 100},
+     {"epoch": 1.640625, "grad_norm": 0.10964534431695938, "learning_rate": 0.00019751009967149087, "loss": 1.3755, "step": 105},
+     {"epoch": 1.71875, "grad_norm": 0.10717500746250153, "learning_rate": 0.00019686915803565934, "loss": 1.3811, "step": 110},
+     {"epoch": 1.796875, "grad_norm": 0.11473763734102249, "learning_rate": 0.0001961561797682962, "loss": 1.3661, "step": 115},
+     {"epoch": 1.875, "grad_norm": 0.11382485926151276, "learning_rate": 0.0001953716950748227, "loss": 1.3596, "step": 120},
+     {"epoch": 1.953125, "grad_norm": 0.12709280848503113, "learning_rate": 0.0001945162873363268, "loss": 1.362, "step": 125},
+     {"epoch": 2.0, "eval_loss": 2.375438690185547, "eval_runtime": 0.8905, "eval_samples_per_second": 6.738, "eval_steps_per_second": 1.123, "step": 128},
+     {"epoch": 2.03125, "grad_norm": 0.1248418539762497, "learning_rate": 0.0001935905926757326, "loss": 1.3543, "step": 130},
+     {"epoch": 2.109375, "grad_norm": 0.13678818941116333, "learning_rate": 0.00019259529948474833, "loss": 1.3531, "step": 135},
+     {"epoch": 2.1875, "grad_norm": 0.12561410665512085, "learning_rate": 0.00019153114791194473, "loss": 1.3382, "step": 140},
+     {"epoch": 2.265625, "grad_norm": 0.14160117506980896, "learning_rate": 0.00019039892931234435, "loss": 1.3388, "step": 145},
+     {"epoch": 2.34375, "grad_norm": 0.1499018520116806, "learning_rate": 0.00018919948565893142, "loss": 1.3392, "step": 150},
+     {"epoch": 2.421875, "grad_norm": 0.14188511669635773, "learning_rate": 0.00018793370891651972, "loss": 1.3406, "step": 155},
+     {"epoch": 2.5, "grad_norm": 0.12685342133045197, "learning_rate": 0.00018660254037844388, "loss": 1.3312, "step": 160},
+     {"epoch": 2.578125, "grad_norm": 0.1338503062725067, "learning_rate": 0.00018520696996656788, "loss": 1.3386, "step": 165},
+     {"epoch": 2.65625, "grad_norm": 0.1542833298444748, "learning_rate": 0.0001837480354951308, "loss": 1.3339, "step": 170},
+     {"epoch": 2.734375, "grad_norm": 0.1419745534658432, "learning_rate": 0.00018222682189897752, "loss": 1.3256, "step": 175},
+     {"epoch": 2.8125, "grad_norm": 0.12559981644153595, "learning_rate": 0.00018064446042674828, "loss": 1.3187, "step": 180},
+     {"epoch": 2.890625, "grad_norm": 0.1430642306804657, "learning_rate": 0.0001790021277996269, "loss": 1.3193, "step": 185},
+     {"epoch": 2.96875, "grad_norm": 0.12510241568088531, "learning_rate": 0.0001773010453362737, "loss": 1.3229, "step": 190},
+     {"epoch": 3.0, "eval_loss": 2.3739588260650635, "eval_runtime": 0.8887, "eval_samples_per_second": 6.751, "eval_steps_per_second": 1.125, "step": 192},
+     {"epoch": 3.046875, "grad_norm": 0.13525685667991638, "learning_rate": 0.00017554247804459316, "loss": 1.3138, "step": 195},
+     {"epoch": 3.125, "grad_norm": 0.1221759021282196, "learning_rate": 0.0001737277336810124, "loss": 1.3114, "step": 200},
+     {"epoch": 3.203125, "grad_norm": 0.1508011817932129, "learning_rate": 0.0001718581617779698, "loss": 1.3121, "step": 205},
+     {"epoch": 3.28125, "grad_norm": 0.15374642610549927, "learning_rate": 0.00016993515264033672, "loss": 1.3061, "step": 210},
+     {"epoch": 3.359375, "grad_norm": 0.13054175674915314, "learning_rate": 0.00016796013631151897, "loss": 1.3131, "step": 215},
+     {"epoch": 3.4375, "grad_norm": 0.13617338240146637, "learning_rate": 0.00016593458151000688, "loss": 1.3055, "step": 220},
+     {"epoch": 3.515625, "grad_norm": 0.1574951857328415, "learning_rate": 0.00016385999453716454, "loss": 1.3081, "step": 225},
+     {"epoch": 3.59375, "grad_norm": 0.13046763837337494, "learning_rate": 0.00016173791815707051, "loss": 1.2971, "step": 230},
+     {"epoch": 3.671875, "grad_norm": 0.12328305840492249, "learning_rate": 0.00015956993044924334, "loss": 1.3004, "step": 235},
+     {"epoch": 3.75, "grad_norm": 0.1457005739212036, "learning_rate": 0.0001573576436351046, "loss": 1.301, "step": 240},
+     {"epoch": 3.828125, "grad_norm": 0.14775606989860535, "learning_rate": 0.0001551027028790524, "loss": 1.2981, "step": 245},
+     {"epoch": 3.90625, "grad_norm": 0.13573531806468964, "learning_rate": 0.0001528067850650368, "loss": 1.2964, "step": 250},
+     {"epoch": 3.984375, "grad_norm": 0.14599502086639404, "learning_rate": 0.0001504715975495472, "loss": 1.2996, "step": 255},
+     {"epoch": 4.0, "eval_loss": 2.3706400394439697, "eval_runtime": 0.8905, "eval_samples_per_second": 6.738, "eval_steps_per_second": 1.123, "step": 256},
+     {"epoch": 4.0625, "grad_norm": 0.18352623283863068, "learning_rate": 0.00014809887689193877, "loss": 1.2791, "step": 260},
+     {"epoch": 4.140625, "grad_norm": 0.12809208035469055, "learning_rate": 0.00014569038756304207, "loss": 1.2838, "step": 265},
+     {"epoch": 4.21875, "grad_norm": 0.14866453409194946, "learning_rate": 0.00014324792063301662, "loss": 1.2855, "step": 270},
+     {"epoch": 4.296875, "grad_norm": 0.1483401656150818, "learning_rate": 0.00014077329243942369, "loss": 1.2906, "step": 275},
+     {"epoch": 4.375, "grad_norm": 0.1301499307155609, "learning_rate": 0.000138268343236509, "loss": 1.2832, "step": 280},
+     {"epoch": 4.453125, "grad_norm": 0.12229609489440918, "learning_rate": 0.00013573493582670003, "loss": 1.2855, "step": 285},
+     {"epoch": 4.53125, "grad_norm": 0.1406661719083786, "learning_rate": 0.00013317495417533524, "loss": 1.2885, "step": 290},
+     {"epoch": 4.609375, "grad_norm": 0.13113024830818176, "learning_rate": 0.00013059030200965536, "loss": 1.2899, "step": 295},
+     {"epoch": 4.6875, "grad_norm": 0.13431058824062347, "learning_rate": 0.00012798290140309923, "loss": 1.2819, "step": 300},
+     {"epoch": 4.765625, "grad_norm": 0.14978773891925812, "learning_rate": 0.00012535469134595595, "loss": 1.2852, "step": 305},
+     {"epoch": 4.84375, "grad_norm": 0.12423586845397949, "learning_rate": 0.00012270762630343734, "loss": 1.2802, "step": 310},
+     {"epoch": 4.921875, "grad_norm": 0.14971400797367096, "learning_rate": 0.00012004367476224206, "loss": 1.2829, "step": 315},
+     {"epoch": 5.0, "grad_norm": 0.14705318212509155, "learning_rate": 0.00011736481776669306, "loss": 1.2878, "step": 320},
+     {"epoch": 5.0, "eval_loss": 2.3755218982696533, "eval_runtime": 0.8842, "eval_samples_per_second": 6.786, "eval_steps_per_second": 1.131, "step": 320},
+     {"epoch": 5.078125, "grad_norm": 0.17323575913906097, "learning_rate": 0.00011467304744553618, "loss": 1.2791, "step": 325},
+     {"epoch": 5.15625, "grad_norm": 0.15446850657463074, "learning_rate": 0.00011197036553049625, "loss": 1.2663, "step": 330},
+     {"epoch": 5.234375, "grad_norm": 0.1357167363166809, "learning_rate": 0.00010925878186769158, "loss": 1.2707, "step": 335},
+     {"epoch": 5.3125, "grad_norm": 0.13887612521648407, "learning_rate": 0.00010654031292301432, "loss": 1.2713, "step": 340},
+     {"epoch": 5.390625, "grad_norm": 0.18401511013507843, "learning_rate": 0.00010381698028258817, "loss": 1.2708, "step": 345},
+     {"epoch": 5.46875, "grad_norm": 0.14264121651649475, "learning_rate": 0.00010109080914941824, "loss": 1.2716, "step": 350},
+     {"epoch": 5.546875, "grad_norm": 0.1242898479104042, "learning_rate": 9.836382683735132e-05, "loss": 1.2778, "step": 355},
+     {"epoch": 5.625, "grad_norm": 0.11967791616916656, "learning_rate": 9.563806126346642e-05, "loss": 1.2761, "step": 360},
+     {"epoch": 5.703125, "grad_norm": 0.1407729834318161, "learning_rate": 9.29155394400166e-05, "loss": 1.271, "step": 365},
+     {"epoch": 5.78125, "grad_norm": 0.127610981464386, "learning_rate": 9.019828596704394e-05, "loss": 1.2707, "step": 370},
+     {"epoch": 5.859375, "grad_norm": 0.131484255194664, "learning_rate": 8.74883215267881e-05, "loss": 1.2703, "step": 375},
+     {"epoch": 5.9375, "grad_norm": 0.14869672060012817, "learning_rate": 8.478766138100834e-05, "loss": 1.2746, "step": 380},
+     {"epoch": 6.0, "eval_loss": 2.3722288608551025, "eval_runtime": 0.8691, "eval_samples_per_second": 6.904, "eval_steps_per_second": 1.151, "step": 384},
+     {"epoch": 6.015625, "grad_norm": 0.12939053773880005, "learning_rate": 8.209831387233676e-05, "loss": 1.267, "step": 385},
+     {"epoch": 6.09375, "grad_norm": 0.1274917721748352, "learning_rate": 7.942227893077652e-05, "loss": 1.2609, "step": 390},
+     {"epoch": 6.171875, "grad_norm": 0.128373920917511, "learning_rate": 7.676154658645656e-05, "loss": 1.2613, "step": 395},
+     {"epoch": 6.25, "grad_norm": 0.12212257087230682, "learning_rate": 7.411809548974792e-05, "loss": 1.2608, "step": 400},
+     {"epoch": 6.328125, "grad_norm": 0.12020603567361832, "learning_rate": 7.149389143984295e-05, "loss": 1.2648, "step": 405},
+     {"epoch": 6.40625, "grad_norm": 0.13269896805286407, "learning_rate": 6.889088592289093e-05, "loss": 1.263, "step": 410},
+     {"epoch": 6.484375, "grad_norm": 0.12962587177753448, "learning_rate": 6.6311014660778e-05, "loss": 1.2688, "step": 415},
+     {"epoch": 6.5625, "grad_norm": 0.13470596075057983, "learning_rate": 6.375619617162985e-05, "loss": 1.2606, "step": 420},
+     {"epoch": 6.640625, "grad_norm": 0.1306695193052292, "learning_rate": 6.122833034310793e-05, "loss": 1.264, "step": 425},
+     {"epoch": 6.71875, "grad_norm": 0.12205986678600311, "learning_rate": 5.872929701956054e-05, "loss": 1.2624, "step": 430},
+     {"epoch": 6.796875, "grad_norm": 0.12223726511001587, "learning_rate": 5.6260954604078585e-05, "loss": 1.2624, "step": 435},
+     {"epoch": 6.875, "grad_norm": 0.11978649348020554, "learning_rate": 5.382513867649663e-05, "loss": 1.2684, "step": 440},
+     {"epoch": 6.953125, "grad_norm": 0.13118350505828857, "learning_rate": 5.142366062836599e-05, "loss": 1.2617, "step": 445},
+     {"epoch": 7.0, "eval_loss": 2.3756062984466553, "eval_runtime": 0.8698, "eval_samples_per_second": 6.898, "eval_steps_per_second": 1.15, "step": 448},
+     {"epoch": 7.03125, "grad_norm": 0.12501190602779388, "learning_rate": 4.9058306315915826e-05, "loss": 1.2686, "step": 450},
+     {"epoch": 7.109375, "grad_norm": 0.11916686594486237, "learning_rate": 4.6730834732003104e-05, "loss": 1.2589, "step": 455},
+     {"epoch": 7.1875, "grad_norm": 0.1307828575372696, "learning_rate": 4.444297669803981e-05, "loss": 1.2641, "step": 460},
+     {"epoch": 7.265625, "grad_norm": 0.1322871744632721, "learning_rate": 4.219643357686967e-05, "loss": 1.2637, "step": 465},
+     {"epoch": 7.34375, "grad_norm": 0.12456091493368149, "learning_rate": 3.999287600755192e-05, "loss": 1.2596, "step": 470},
+     {"epoch": 7.421875, "grad_norm": 0.12441947311162949, "learning_rate": 3.783394266299228e-05, "loss": 1.2609, "step": 475},
+     {"epoch": 7.5, "grad_norm": 0.1233881339430809, "learning_rate": 3.5721239031346066e-05, "loss": 1.258, "step": 480},
+     {"epoch": 7.578125, "grad_norm": 0.11959797888994217, "learning_rate": 3.365633622209891e-05, "loss": 1.2535, "step": 485},
+     {"epoch": 7.65625, "grad_norm": 0.13552451133728027, "learning_rate": 3.164076979771287e-05, "loss": 1.2567, "step": 490},
+     {"epoch": 7.734375, "grad_norm": 0.1312321126461029, "learning_rate": 2.9676038631707593e-05, "loss": 1.2546, "step": 495},
+     {"epoch": 7.8125, "grad_norm": 0.12341451644897461, "learning_rate": 2.776360379402445e-05, "loss": 1.2589, "step": 500},
+     {"epoch": 7.890625, "grad_norm": 0.11580604314804077, "learning_rate": 2.5904887464504114e-05, "loss": 1.2514, "step": 505},
+     {"epoch": 7.96875, "grad_norm": 0.1164306178689003, "learning_rate": 2.4101271875283817e-05, "loss": 1.2497, "step": 510},
+     {"epoch": 8.0, "eval_loss": 2.3754327297210693, "eval_runtime": 0.8715, "eval_samples_per_second": 6.885, "eval_steps_per_second": 1.147, "step": 512},
+     {"epoch": 8.046875, "grad_norm": 0.11353053152561188, "learning_rate": 2.2354098282902446e-05, "loss": 1.2515, "step": 515},
+     {"epoch": 8.125, "grad_norm": 0.11808107793331146, "learning_rate": 2.0664665970876496e-05, "loss": 1.2568, "step": 520},
+     {"epoch": 8.203125, "grad_norm": 0.12227106839418411, "learning_rate": 1.903423128348959e-05, "loss": 1.2526, "step": 525},
+     {"epoch": 8.28125, "grad_norm": 0.11858697980642319, "learning_rate": 1.7464006691513623e-05, "loss": 1.2546, "step": 530},
+     {"epoch": 8.359375, "grad_norm": 0.11320216953754425, "learning_rate": 1.595515989055618e-05, "loss": 1.2513, "step": 535},
+     {"epoch": 8.4375, "grad_norm": 0.11540042608976364, "learning_rate": 1.4508812932705363e-05, "loss": 1.2539, "step": 540},
+     {"epoch": 8.515625, "grad_norm": 0.11582642793655396, "learning_rate": 1.3126041392116772e-05, "loss": 1.2552, "step": 545},
+     {"epoch": 8.59375, "grad_norm": 0.11406353861093521, "learning_rate": 1.1807873565164506e-05, "loss": 1.2451, "step": 550},
+     {"epoch": 8.671875, "grad_norm": 0.11722878366708755, "learning_rate": 1.0555289705749483e-05, "loss": 1.2555, "step": 555},
+     {"epoch": 8.75, "grad_norm": 0.11788377165794373, "learning_rate": 9.369221296335006e-06, "loss": 1.2644, "step": 560},
+     {"epoch": 8.828125, "grad_norm": 0.12228409945964813, "learning_rate": 8.250550355250875e-06, "loss": 1.2554, "step": 565},
+     {"epoch": 8.90625, "grad_norm": 0.11590871214866638, "learning_rate": 7.200108780781556e-06, "loss": 1.2581, "step": 570},
+     {"epoch": 8.984375, "grad_norm": 0.11986227333545685, "learning_rate": 6.218677732526035e-06, "loss": 1.2549, "step": 575},
+     {"epoch": 9.0, "eval_loss": 2.3762052059173584, "eval_runtime": 0.8701, "eval_samples_per_second": 6.896, "eval_steps_per_second": 1.149, "step": 576},
+     {"epoch": 9.0625, "grad_norm": 0.11538632959127426, "learning_rate": 5.306987050489442e-06, "loss": 1.2553, "step": 580},
+     {"epoch": 9.140625, "grad_norm": 0.11359097808599472, "learning_rate": 4.465714712338398e-06, "loss": 1.2438, "step": 585},
+     {"epoch": 9.21875, "grad_norm": 0.1167021319270134, "learning_rate": 3.6954863292237297e-06, "loss": 1.2508, "step": 590},
+     {"epoch": 9.296875, "grad_norm": 0.11401596665382385, "learning_rate": 2.996874680545603e-06, "loss": 1.2555, "step": 595},
+     {"epoch": 9.375, "grad_norm": 0.11593978106975555, "learning_rate": 2.3703992880066638e-06, "loss": 1.2491, "step": 600},
+     {"epoch": 9.453125, "grad_norm": 0.11369957774877548, "learning_rate": 1.8165260292704711e-06, "loss": 1.2556, "step": 605},
+     {"epoch": 9.53125, "grad_norm": 0.11447325348854065, "learning_rate": 1.3356667915121025e-06, "loss": 1.2559, "step": 610},
+     {"epoch": 9.609375, "grad_norm": 0.11217474192380905, "learning_rate": 9.281791651187366e-07, "loss": 1.2635, "step": 615},
+     {"epoch": 9.6875, "grad_norm": 0.11821179836988449, "learning_rate": 5.943661777680354e-07, "loss": 1.2521, "step": 620},
+     {"epoch": 9.765625, "grad_norm": 0.113133005797863, "learning_rate": 3.3447606908196817e-07, "loss": 1.2539, "step": 625},
+     {"epoch": 9.84375, "grad_norm": 0.11367449164390564, "learning_rate": 1.487021060236904e-07, "loss": 1.2528, "step": 630},
+     {"epoch": 9.921875, "grad_norm": 0.11390256881713867, "learning_rate": 3.7182439174832106e-08, "loss": 1.2581, "step": 635},
+     {"epoch": 10.0, "grad_norm": 0.11366365104913712, "learning_rate": 0.0, "loss": 1.2494, "step": 640},
+     {"epoch": 10.0, "eval_loss": 2.376882791519165, "eval_runtime": 0.8974, "eval_samples_per_second": 6.686, "eval_steps_per_second": 1.114, "step": 640},
+     {"epoch": 10.0, "step": 640, "total_flos": 2.846679860191953e+18, "train_loss": 1.3252380434423685, "train_runtime": 3360.483, "train_samples_per_second": 48.669, "train_steps_per_second": 0.19}
+   ],
+   "logging_steps": 5,
+   "max_steps": 640,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 10,
+   "save_steps": 100,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": true
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 2.846679860191953e+18,
+   "train_batch_size": 16,
+   "trial_name": null,
+   "trial_params": null
+ }
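The per-epoch evaluation metrics in the log are mutually consistent and imply a very small evaluation split, roughly 6 samples, i.e. a single eval step at the total eval batch size of 128. A quick check against the final epoch-10 eval entry (my inference, not stated anywhere in the card):

```python
eval_runtime = 0.8974            # seconds, from the epoch-10 eval entry
eval_samples_per_second = 6.686  # as logged in the same entry

# samples/sec * runtime recovers the evaluation-set size.
n_eval_samples = eval_samples_per_second * eval_runtime
print(round(n_eval_samples))  # 6
```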