Commit a3b7a64 (verified) by yuriivoievidka · Parent(s): 37e05dd

Upload folder using huggingface_hub
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
 - generated_from_trainer
 - dataset_size:10635
 - loss:MultipleNegativesSymmetricRankingLoss
-base_model: microsoft/mpnet-base
+base_model: sentence-transformers/all-mpnet-base-v2
 widget:
 - source_sentence: '12 Rules For Life: An Antidote to Chaos by Jordan B. Peterson'
   sentences:
@@ -40,16 +40,16 @@ pipeline_tag: sentence-similarity
 library_name: sentence-transformers
 ---
 
-# SentenceTransformer based on microsoft/mpnet-base
+# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
 
-This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
 
 ## Model Details
 
 ### Model Description
 - **Model Type:** Sentence Transformer
-- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
-- **Maximum Sequence Length:** 512 tokens
+- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 12e86a3c702fc3c50205a8db88f0ec7c0b6b94a0 -->
+- **Maximum Sequence Length:** 384 tokens
 - **Output Dimensionality:** 768 dimensions
 - **Similarity Function:** Cosine Similarity
 - **Training Dataset:**
@@ -67,8 +67,9 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [m
 
 ```
 SentenceTransformer(
-  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
+  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+  (2): Normalize()
 )
 ```
 
@@ -201,7 +202,7 @@ You can finetune this model on your own dataset.
 - `per_device_train_batch_size`: 16
 - `per_device_eval_batch_size`: 16
 - `learning_rate`: 2e-05
-- `num_train_epochs`: 10
+- `num_train_epochs`: 7
 - `warmup_ratio`: 0.1
 
 #### All Hyperparameters
@@ -224,7 +225,7 @@ You can finetune this model on your own dataset.
 - `adam_beta2`: 0.999
 - `adam_epsilon`: 1e-08
 - `max_grad_norm`: 1.0
-- `num_train_epochs`: 10
+- `num_train_epochs`: 7
 - `max_steps`: -1
 - `lr_scheduler_type`: linear
 - `lr_scheduler_kwargs`: {}
@@ -326,49 +327,27 @@ You can finetune this model on your own dataset.
 ### Training Logs
 | Epoch | Step | Training Loss | train loss |
 |:------:|:----:|:-------------:|:----------:|
-| 0.3008 | 200 | 2.8113 | 2.0799 |
-| 0.6015 | 400 | 2.0877 | 1.9239 |
-| 0.9023 | 600 | 1.9258 | 1.8882 |
-| 1.2030 | 800 | 1.7382 | 1.8684 |
-| 1.5038 | 1000 | 1.7232 | 1.8226 |
-| 1.8045 | 1200 | 1.6814 | 1.8167 |
-| 2.1053 | 1400 | 1.5764 | 1.8133 |
-| 2.4060 | 1600 | 1.5333 | 1.7898 |
-| 2.7068 | 1800 | 1.5216 | 1.7782 |
-| 3.0075 | 2000 | 1.4966 | 1.7663 |
-| 3.3083 | 2200 | 1.4325 | 1.7642 |
-| 3.6090 | 2400 | 1.4043 | 1.7956 |
-| 3.9098 | 2600 | 1.4212 | 1.7609 |
-| 4.2105 | 2800 | 1.3808 | 1.7611 |
-| 4.5113 | 3000 | 1.35 | 1.7671 |
-| 4.8120 | 3200 | 1.3644 | 1.7517 |
-| 5.1128 | 3400 | 1.304 | 1.7712 |
-| 5.4135 | 3600 | 1.288 | 1.7820 |
-| 5.7143 | 3800 | 1.3051 | 1.7699 |
-| 6.0150 | 4000 | 1.2803 | 1.7678 |
-| 6.3158 | 4200 | 1.2026 | 1.7812 |
-| 6.6165 | 4400 | 1.2602 | 1.7846 |
-| 6.9173 | 4600 | 1.2392 | 1.7733 |
-| 7.2180 | 4800 | 1.2088 | 1.7745 |
-| 7.5188 | 5000 | 1.1791 | 1.7867 |
-| 7.8195 | 5200 | 1.1946 | 1.7779 |
-| 8.1203 | 5400 | 1.1617 | 1.7931 |
-| 8.4211 | 5600 | 1.1495 | 1.7911 |
-| 8.7218 | 5800 | 1.1635 | 1.7949 |
-| 9.0226 | 6000 | 1.1324 | 1.7962 |
-| 9.3233 | 6200 | 1.1304 | 1.8035 |
-| 9.6241 | 6400 | 1.1126 | 1.8056 |
-| 9.9248 | 6600 | 1.0986 | 1.8062 |
+| 0.6006 | 200 | 2.5755 | 2.4113 |
+| 1.2012 | 400 | 2.2395 | 2.3553 |
+| 1.8018 | 600 | 2.0813 | 2.3290 |
+| 2.4024 | 800 | 1.9813 | 2.3169 |
+| 3.0030 | 1000 | 1.9233 | 2.3081 |
+| 3.6036 | 1200 | 1.8338 | 2.3076 |
+| 4.2042 | 1400 | 1.8029 | 2.3380 |
+| 4.8048 | 1600 | 1.7766 | 2.3005 |
+| 5.4054 | 1800 | 1.722 | 2.3254 |
+| 6.0060 | 2000 | 1.7217 | 2.3215 |
+| 6.6066 | 2200 | 1.6759 | 2.3322 |
 
 
 ### Framework Versions
 - Python: 3.10.12
 - Sentence Transformers: 4.1.0
 - Transformers: 4.52.4
-- PyTorch: 2.6.0+cu124
+- PyTorch: 2.5.1+cu124
 - Accelerate: 1.8.1
 - Datasets: 3.6.0
-- Tokenizers: 0.21.1
+- Tokenizers: 0.21.2
 
 ## Citation
 
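The module stack in the updated card (Transformer → mean Pooling → the newly appended Normalize) can be sketched in pure Python. The toy 4-dimensional vectors below stand in for MPNet's 768-dimensional token embeddings and are illustrative only, not model output:

```python
import math

def mean_pool(token_embeddings, attention_mask):
    # Average token vectors over non-padding positions, as in the card's
    # Pooling module with pooling_mode_mean_tokens=True.
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, keep in zip(token_embeddings, attention_mask):
        if keep:
            count += 1
            for i, v in enumerate(vec):
                total[i] += v
    return [v / max(count, 1) for v in total]

def l2_normalize(vec):
    # The Normalize() module added in this commit: rescale to unit L2 norm.
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

# Toy "token embeddings" for one sentence; the last position is padding.
tokens = [[1.0, 2.0, 0.0, 2.0], [3.0, 0.0, 0.0, 4.0], [9.0, 9.0, 9.0, 9.0]]
mask = [1, 1, 0]
emb = l2_normalize(mean_pool(tokens, mask))  # the stored sentence embedding
```

Here `mean_pool` returns [2.0, 1.0, 0.0, 3.0] (the padded position is ignored), and the final embedding has unit length, which is what makes cosine similarity cheap at query time.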
config_sentence_transformers.json CHANGED
@@ -2,7 +2,7 @@
   "__version__": {
     "sentence_transformers": "4.1.0",
     "transformers": "4.52.4",
-    "pytorch": "2.6.0+cu124"
+    "pytorch": "2.5.1+cu124"
   },
   "prompts": {},
   "default_prompt_name": null,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:070b8e311a59229e3d1911753c8912809c4b6c99f9cf43c46f1c8ac5dfe915e0
+oid sha256:2ddbdb6468cca74299465849d3460bd26cc567441655d3084af35bb3acd7144e
 size 437967672
modules.json CHANGED
@@ -10,5 +10,11 @@
     "name": "1",
     "path": "1_Pooling",
     "type": "sentence_transformers.models.Pooling"
+  },
+  {
+    "idx": 2,
+    "name": "2",
+    "path": "2_Normalize",
+    "type": "sentence_transformers.models.Normalize"
   }
 ]
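Because modules.json now ends with a Normalize step, stored embeddings have unit L2 norm, so the card's cosine similarity reduces to a plain dot product. A small self-contained check with toy vectors (not model output):

```python
import math

def l2_normalize(vec):
    # Same operation as sentence_transformers.models.Normalize.
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [3.0, 4.0], [6.0, 8.0]
na, nb = l2_normalize(a), l2_normalize(b)

# After normalization, the dot product of stored embeddings already equals
# the cosine similarity of the raw vectors.
assert abs(dot(na, nb) - cosine(a, b)) < 1e-12
```

This is why adding the Normalize module is a practical win for retrieval backends that only support inner-product search.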
optimizer.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:636d83f75ef3b6379d2de1380140e9e6db0862aa30c2f61df168fa48a5f11f94
-size 871331770
+oid sha256:43f6626a50f5d4a1fe140af797071be81b4523615afa4e4b6d5795ee9ef59320
+size 876058170
rng_state.pth CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:784b875c2b86372c41eaa4d7d8efaa50c3c0a99edec1ace8f8b943345f97b54f
+oid sha256:e9887d9179089333ff9b4030c7aa932e0435c5243b5cc42026e85559ac64ae3e
 size 14244
scheduler.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:45b9b94e19b7c7a2fcd96ee21ab65fa9d6c05333276c875f55a40f3bff2d6f6f
+oid sha256:190fc72819eea0b8f2844c8816cd0625c6bce70b27c2d3b3ce154d7ea3cae54a
 size 1064
sentence_bert_config.json CHANGED
@@ -1,4 +1,4 @@
 {
-  "max_seq_length": 512,
+  "max_seq_length": 384,
   "do_lower_case": false
 }
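Dropping max_seq_length from 512 to 384 means longer inputs are cut off before encoding; per tokenizer.json below, `"direction": "Right"` keeps the leading tokens and discards the tail. A minimal sketch of that behavior (ignoring the special tokens a real tokenizer reserves):

```python
def truncate_right(token_ids, max_seq_length=384):
    # Keep the first max_seq_length token ids; drop everything after
    # (right-side truncation, as configured in tokenizer.json).
    return token_ids[:max_seq_length]

ids = list(range(500))        # pretend a 500-token input
out = truncate_right(ids)     # 384 tokens survive: ids 0..383
```

Anything past token 383 in a long paragraph therefore never reaches the model.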
special_tokens_map.json CHANGED
@@ -9,7 +9,7 @@
   "cls_token": {
     "content": "<s>",
     "lstrip": false,
-    "normalized": true,
+    "normalized": false,
     "rstrip": false,
     "single_word": false
   },
@@ -37,7 +37,7 @@
   "sep_token": {
     "content": "</s>",
     "lstrip": false,
-    "normalized": true,
+    "normalized": false,
     "rstrip": false,
     "single_word": false
   },
tokenizer.json CHANGED
@@ -2,7 +2,7 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 512,
+    "max_length": 384,
     "strategy": "LongestFirst",
     "stride": 0
   },
tokenizer_config.json CHANGED
@@ -56,11 +56,18 @@
   "eos_token": "</s>",
   "extra_special_tokens": {},
   "mask_token": "<mask>",
-  "model_max_length": 512,
+  "max_length": 128,
+  "model_max_length": 384,
+  "pad_to_multiple_of": null,
   "pad_token": "<pad>",
+  "pad_token_type_id": 0,
+  "padding_side": "right",
   "sep_token": "</s>",
+  "stride": 0,
   "strip_accents": null,
   "tokenize_chinese_chars": true,
   "tokenizer_class": "MPNetTokenizer",
+  "truncation_side": "right",
+  "truncation_strategy": "longest_first",
   "unk_token": "[UNK]"
 }
trainer_state.json CHANGED
@@ -2,513 +2,183 @@
   "best_global_step": null,
   "best_metric": null,
   "best_model_checkpoint": null,
-  "epoch": 10.0,
+  "epoch": 7.0,
   "eval_steps": 200,
-  "global_step": 6650,
+  "global_step": 2331,
   "is_hyper_param_search": false,
   "is_local_process_zero": true,
   "is_world_process_zero": true,
   "log_history": [
     {
-      "epoch": 0.3007518796992481,
-      "grad_norm": 23.491586685180664,
-      "learning_rate": 5.984962406015038e-06,
-      "loss": 2.8113,
+      "epoch": 0.6006006006006006,
+      "grad_norm": 9.713151931762695,
+      "learning_rate": 1.700854700854701e-05,
+      "loss": 2.5755,
       "step": 200
     },
     {
-      "epoch": 0.3007518796992481,
-      "eval_train_loss": 2.0799365043640137,
-      "eval_train_runtime": 5.0075,
-      "eval_train_samples_per_second": 1070.204,
-      "eval_train_steps_per_second": 66.9,
+      "epoch": 0.6006006006006006,
+      "eval_train_loss": 2.4112653732299805,
+      "eval_train_runtime": 16.1652,
+      "eval_train_samples_per_second": 331.515,
+      "eval_train_steps_per_second": 10.393,
       "step": 200
     },
     {
-      "epoch": 0.6015037593984962,
-      "grad_norm": 34.29732131958008,
-      "learning_rate": 1.2e-05,
-      "loss": 2.0877,
+      "epoch": 1.2012012012012012,
+      "grad_norm": 8.402304649353027,
+      "learning_rate": 1.8426323319027183e-05,
+      "loss": 2.2395,
       "step": 400
     },
     {
-      "epoch": 0.6015037593984962,
-      "eval_train_loss": 1.923947811126709,
-      "eval_train_runtime": 5.0362,
-      "eval_train_samples_per_second": 1064.098,
-      "eval_train_steps_per_second": 66.519,
+      "epoch": 1.2012012012012012,
+      "eval_train_loss": 2.3553037643432617,
+      "eval_train_runtime": 16.1973,
+      "eval_train_samples_per_second": 330.857,
+      "eval_train_steps_per_second": 10.372,
       "step": 400
     },
     {
-      "epoch": 0.9022556390977443,
-      "grad_norm": 38.32964324951172,
-      "learning_rate": 1.8015037593984962e-05,
-      "loss": 1.9258,
+      "epoch": 1.8018018018018018,
+      "grad_norm": 8.638038635253906,
+      "learning_rate": 1.6518836432999526e-05,
+      "loss": 2.0813,
       "step": 600
     },
     {
-      "epoch": 0.9022556390977443,
-      "eval_train_loss": 1.888200283050537,
-      "eval_train_runtime": 4.9999,
-      "eval_train_samples_per_second": 1071.817,
-      "eval_train_steps_per_second": 67.001,
+      "epoch": 1.8018018018018018,
+      "eval_train_loss": 2.3290350437164307,
+      "eval_train_runtime": 15.8942,
+      "eval_train_samples_per_second": 337.167,
+      "eval_train_steps_per_second": 10.57,
       "step": 600
     },
     {
-      "epoch": 1.2030075187969924,
-      "grad_norm": 15.528878211975098,
-      "learning_rate": 1.9552213868003343e-05,
-      "loss": 1.7382,
+      "epoch": 2.4024024024024024,
+      "grad_norm": 8.17331314086914,
+      "learning_rate": 1.4611349546971865e-05,
+      "loss": 1.9813,
       "step": 800
     },
     {
-      "epoch": 1.2030075187969924,
-      "eval_train_loss": 1.8683608770370483,
-      "eval_train_runtime": 5.0138,
-      "eval_train_samples_per_second": 1068.861,
-      "eval_train_steps_per_second": 66.816,
+      "epoch": 2.4024024024024024,
+      "eval_train_loss": 2.316850423812866,
+      "eval_train_runtime": 16.4597,
+      "eval_train_samples_per_second": 325.584,
+      "eval_train_steps_per_second": 10.207,
       "step": 800
     },
     {
-      "epoch": 1.5037593984962405,
-      "grad_norm": 13.27901554107666,
-      "learning_rate": 1.8883876357560568e-05,
-      "loss": 1.7232,
+      "epoch": 3.003003003003003,
+      "grad_norm": 8.972688674926758,
+      "learning_rate": 1.2703862660944206e-05,
+      "loss": 1.9233,
       "step": 1000
     },
     {
-      "epoch": 1.5037593984962405,
-      "eval_train_loss": 1.8225561380386353,
-      "eval_train_runtime": 5.0119,
-      "eval_train_samples_per_second": 1069.261,
-      "eval_train_steps_per_second": 66.841,
+      "epoch": 3.003003003003003,
+      "eval_train_loss": 2.3080697059631348,
+      "eval_train_runtime": 16.059,
+      "eval_train_samples_per_second": 333.707,
+      "eval_train_steps_per_second": 10.461,
       "step": 1000
     },
     {
-      "epoch": 1.8045112781954886,
-      "grad_norm": 11.53130054473877,
-      "learning_rate": 1.8215538847117796e-05,
-      "loss": 1.6814,
+      "epoch": 3.6036036036036037,
+      "grad_norm": 8.94318962097168,
+      "learning_rate": 1.0796375774916547e-05,
+      "loss": 1.8338,
       "step": 1200
     },
     {
-      "epoch": 1.8045112781954886,
-      "eval_train_loss": 1.8166730403900146,
-      "eval_train_runtime": 5.0134,
-      "eval_train_samples_per_second": 1068.945,
-      "eval_train_steps_per_second": 66.822,
+      "epoch": 3.6036036036036037,
+      "eval_train_loss": 2.3076283931732178,
+      "eval_train_runtime": 16.1704,
+      "eval_train_samples_per_second": 331.408,
+      "eval_train_steps_per_second": 10.389,
       "step": 1200
     },
     {
-      "epoch": 2.1052631578947367,
-      "grad_norm": 14.417011260986328,
-      "learning_rate": 1.754720133667502e-05,
-      "loss": 1.5764,
+      "epoch": 4.2042042042042045,
+      "grad_norm": 10.612234115600586,
+      "learning_rate": 8.888888888888888e-06,
+      "loss": 1.8029,
       "step": 1400
     },
     {
-      "epoch": 2.1052631578947367,
-      "eval_train_loss": 1.8132838010787964,
-      "eval_train_runtime": 5.0144,
-      "eval_train_samples_per_second": 1068.73,
-      "eval_train_steps_per_second": 66.808,
+      "epoch": 4.2042042042042045,
+      "eval_train_loss": 2.337951183319092,
+      "eval_train_runtime": 16.2105,
+      "eval_train_samples_per_second": 330.588,
+      "eval_train_steps_per_second": 10.364,
       "step": 1400
     },
     {
-      "epoch": 2.406015037593985,
-      "grad_norm": 11.700883865356445,
-      "learning_rate": 1.6878863826232248e-05,
-      "loss": 1.5333,
+      "epoch": 4.804804804804805,
+      "grad_norm": 8.080140113830566,
+      "learning_rate": 6.981402002861231e-06,
+      "loss": 1.7766,
       "step": 1600
     },
     {
-      "epoch": 2.406015037593985,
-      "eval_train_loss": 1.7898207902908325,
-      "eval_train_runtime": 5.0228,
-      "eval_train_samples_per_second": 1066.927,
-      "eval_train_steps_per_second": 66.695,
+      "epoch": 4.804804804804805,
+      "eval_train_loss": 2.300466775894165,
+      "eval_train_runtime": 16.2606,
+      "eval_train_samples_per_second": 329.569,
+      "eval_train_steps_per_second": 10.332,
       "step": 1600
     },
     {
-      "epoch": 2.706766917293233,
-      "grad_norm": 13.112250328063965,
-      "learning_rate": 1.6210526315789473e-05,
-      "loss": 1.5216,
+      "epoch": 5.405405405405405,
+      "grad_norm": 8.161681175231934,
+      "learning_rate": 5.073915116833572e-06,
+      "loss": 1.722,
       "step": 1800
     },
     {
-      "epoch": 2.706766917293233,
-      "eval_train_loss": 1.7781648635864258,
-      "eval_train_runtime": 5.0052,
-      "eval_train_samples_per_second": 1070.687,
-      "eval_train_steps_per_second": 66.93,
+      "epoch": 5.405405405405405,
+      "eval_train_loss": 2.325410842895508,
+      "eval_train_runtime": 16.2864,
+      "eval_train_samples_per_second": 329.047,
+      "eval_train_steps_per_second": 10.315,
       "step": 1800
     },
     {
-      "epoch": 3.007518796992481,
-      "grad_norm": 11.557127952575684,
-      "learning_rate": 1.55421888053467e-05,
-      "loss": 1.4966,
+      "epoch": 6.006006006006006,
+      "grad_norm": 9.505444526672363,
+      "learning_rate": 3.1664282308059137e-06,
+      "loss": 1.7217,
       "step": 2000
     },
     {
-      "epoch": 3.007518796992481,
-      "eval_train_loss": 1.7662715911865234,
-      "eval_train_runtime": 5.0354,
-      "eval_train_samples_per_second": 1064.268,
-      "eval_train_steps_per_second": 66.529,
+      "epoch": 6.006006006006006,
+      "eval_train_loss": 2.3215274810791016,
+      "eval_train_runtime": 15.9019,
+      "eval_train_samples_per_second": 337.003,
+      "eval_train_steps_per_second": 10.565,
       "step": 2000
     },
     {
-      "epoch": 3.308270676691729,
-      "grad_norm": 10.65110969543457,
-      "learning_rate": 1.4873851294903927e-05,
-      "loss": 1.4325,
+      "epoch": 6.606606606606607,
+      "grad_norm": 11.631622314453125,
+      "learning_rate": 1.2589413447782547e-06,
+      "loss": 1.6759,
       "step": 2200
     },
     {
-      "epoch": 3.308270676691729,
-      "eval_train_loss": 1.764186143875122,
-      "eval_train_runtime": 5.0269,
-      "eval_train_samples_per_second": 1066.066,
-      "eval_train_steps_per_second": 66.642,
+      "epoch": 6.606606606606607,
+      "eval_train_loss": 2.3322482109069824,
+      "eval_train_runtime": 15.9374,
+      "eval_train_samples_per_second": 336.253,
+      "eval_train_steps_per_second": 10.541,
       "step": 2200
-    },
-    {
-      "epoch": 3.6090225563909772,
-      "grad_norm": 11.466296195983887,
-      "learning_rate": 1.4205513784461153e-05,
-      "loss": 1.4043,
-      "step": 2400
-    },
-    {
-      "epoch": 3.6090225563909772,
-      "eval_train_loss": 1.7955785989761353,
-      "eval_train_runtime": 5.06,
-      "eval_train_samples_per_second": 1059.097,
-      "eval_train_steps_per_second": 66.206,
-      "step": 2400
-    },
-    {
-      "epoch": 3.909774436090226,
-      "grad_norm": 9.564383506774902,
-      "learning_rate": 1.353717627401838e-05,
-      "loss": 1.4212,
-      "step": 2600
-    },
-    {
-      "epoch": 3.909774436090226,
-      "eval_train_loss": 1.7609018087387085,
-      "eval_train_runtime": 5.0402,
-      "eval_train_samples_per_second": 1063.247,
-      "eval_train_steps_per_second": 66.465,
-      "step": 2600
-    },
-    {
-      "epoch": 4.2105263157894735,
-      "grad_norm": 12.078660011291504,
-      "learning_rate": 1.2868838763575606e-05,
-      "loss": 1.3808,
-      "step": 2800
-    },
-    {
-      "epoch": 4.2105263157894735,
-      "eval_train_loss": 1.7610782384872437,
-      "eval_train_runtime": 5.0859,
-      "eval_train_samples_per_second": 1053.692,
-      "eval_train_steps_per_second": 65.868,
-      "step": 2800
-    },
-    {
-      "epoch": 4.511278195488722,
-      "grad_norm": 10.561222076416016,
-      "learning_rate": 1.2200501253132832e-05,
-      "loss": 1.35,
-      "step": 3000
-    },
-    {
-      "epoch": 4.511278195488722,
-      "eval_train_loss": 1.7670680284500122,
-      "eval_train_runtime": 5.0558,
-      "eval_train_samples_per_second": 1059.976,
-      "eval_train_steps_per_second": 66.261,
-      "step": 3000
-    },
-    {
-      "epoch": 4.81203007518797,
-      "grad_norm": 14.785975456237793,
-      "learning_rate": 1.1532163742690059e-05,
-      "loss": 1.3644,
-      "step": 3200
-    },
-    {
-      "epoch": 4.81203007518797,
-      "eval_train_loss": 1.751652479171753,
-      "eval_train_runtime": 5.0835,
-      "eval_train_samples_per_second": 1054.196,
-      "eval_train_steps_per_second": 65.9,
-      "step": 3200
-    },
-    {
-      "epoch": 5.112781954887218,
-      "grad_norm": 10.927189826965332,
-      "learning_rate": 1.0863826232247285e-05,
-      "loss": 1.304,
-      "step": 3400
-    },
-    {
-      "epoch": 5.112781954887218,
-      "eval_train_loss": 1.7712498903274536,
-      "eval_train_runtime": 5.0673,
-      "eval_train_samples_per_second": 1057.559,
-      "eval_train_steps_per_second": 66.11,
-      "step": 3400
-    },
-    {
-      "epoch": 5.413533834586466,
-      "grad_norm": 14.33267879486084,
-      "learning_rate": 1.0195488721804511e-05,
-      "loss": 1.288,
-      "step": 3600
-    },
-    {
-      "epoch": 5.413533834586466,
-      "eval_train_loss": 1.7820113897323608,
-      "eval_train_runtime": 5.086,
-      "eval_train_samples_per_second": 1053.672,
-      "eval_train_steps_per_second": 65.867,
-      "step": 3600
-    },
-    {
-      "epoch": 5.714285714285714,
-      "grad_norm": 11.89034366607666,
-      "learning_rate": 9.527151211361737e-06,
-      "loss": 1.3051,
-      "step": 3800
-    },
-    {
-      "epoch": 5.714285714285714,
-      "eval_train_loss": 1.7699248790740967,
-      "eval_train_runtime": 5.1253,
-      "eval_train_samples_per_second": 1045.605,
-      "eval_train_steps_per_second": 65.363,
-      "step": 3800
-    },
-    {
-      "epoch": 6.015037593984962,
-      "grad_norm": 10.595609664916992,
-      "learning_rate": 8.858813700918964e-06,
-      "loss": 1.2803,
-      "step": 4000
-    },
-    {
-      "epoch": 6.015037593984962,
-      "eval_train_loss": 1.7678076028823853,
-      "eval_train_runtime": 5.1035,
-      "eval_train_samples_per_second": 1050.07,
-      "eval_train_steps_per_second": 65.642,
-      "step": 4000
-    },
-    {
-      "epoch": 6.315789473684211,
-      "grad_norm": 14.781892776489258,
-      "learning_rate": 8.190476190476192e-06,
-      "loss": 1.2026,
-      "step": 4200
-    },
-    {
-      "epoch": 6.315789473684211,
-      "eval_train_loss": 1.7812011241912842,
-      "eval_train_runtime": 5.1217,
-      "eval_train_samples_per_second": 1046.331,
-      "eval_train_steps_per_second": 65.408,
-      "step": 4200
-    },
-    {
-      "epoch": 6.616541353383458,
-      "grad_norm": 11.254812240600586,
-      "learning_rate": 7.522138680033417e-06,
-      "loss": 1.2602,
-      "step": 4400
-    },
-    {
-      "epoch": 6.616541353383458,
-      "eval_train_loss": 1.7846208810806274,
-      "eval_train_runtime": 5.1259,
-      "eval_train_samples_per_second": 1045.481,
-      "eval_train_steps_per_second": 65.355,
-      "step": 4400
-    },
-    {
-      "epoch": 6.917293233082707,
-      "grad_norm": 9.643959999084473,
-      "learning_rate": 6.8538011695906435e-06,
-      "loss": 1.2392,
-      "step": 4600
-    },
-    {
-      "epoch": 6.917293233082707,
-      "eval_train_loss": 1.7733409404754639,
-      "eval_train_runtime": 5.1326,
-      "eval_train_samples_per_second": 1044.114,
-      "eval_train_steps_per_second": 65.269,
-      "step": 4600
-    },
-    {
-      "epoch": 7.2180451127819545,
-      "grad_norm": 12.258922576904297,
-      "learning_rate": 6.18546365914787e-06,
-      "loss": 1.2088,
-      "step": 4800
-    },
-    {
-      "epoch": 7.2180451127819545,
-      "eval_train_loss": 1.7745392322540283,
-      "eval_train_runtime": 5.1493,
-      "eval_train_samples_per_second": 1040.714,
-      "eval_train_steps_per_second": 65.057,
-      "step": 4800
-    },
-    {
-      "epoch": 7.518796992481203,
-      "grad_norm": 12.351716041564941,
-      "learning_rate": 5.517126148705096e-06,
-      "loss": 1.1791,
-      "step": 5000
-    },
-    {
-      "epoch": 7.518796992481203,
-      "eval_train_loss": 1.7866636514663696,
-      "eval_train_runtime": 5.144,
-      "eval_train_samples_per_second": 1041.787,
-      "eval_train_steps_per_second": 65.124,
-      "step": 5000
-    },
-    {
-      "epoch": 7.819548872180452,
-      "grad_norm": 15.052789688110352,
-      "learning_rate": 4.8487886382623224e-06,
-      "loss": 1.1946,
-      "step": 5200
-    },
-    {
-      "epoch": 7.819548872180452,
-      "eval_train_loss": 1.7778518199920654,
-      "eval_train_runtime": 5.1357,
-      "eval_train_samples_per_second": 1043.481,
-      "eval_train_steps_per_second": 65.23,
-      "step": 5200
-    },
-    {
-      "epoch": 8.1203007518797,
-      "grad_norm": 8.957300186157227,
-      "learning_rate": 4.18045112781955e-06,
-      "loss": 1.1617,
-      "step": 5400
-    },
-    {
-      "epoch": 8.1203007518797,
-      "eval_train_loss": 1.7931042909622192,
-      "eval_train_runtime": 5.1877,
-      "eval_train_samples_per_second": 1033.016,
-      "eval_train_steps_per_second": 64.576,
-      "step": 5400
-    },
-    {
-      "epoch": 8.421052631578947,
-      "grad_norm": 13.89137077331543,
-      "learning_rate": 3.5121136173767755e-06,
-      "loss": 1.1495,
-      "step": 5600
-    },
-    {
-      "epoch": 8.421052631578947,
-      "eval_train_loss": 1.791070818901062,
-      "eval_train_runtime": 5.1363,
-      "eval_train_samples_per_second": 1043.352,
-      "eval_train_steps_per_second": 65.222,
-      "step": 5600
-    },
-    {
-      "epoch": 8.721804511278195,
-      "grad_norm": 11.32971477508545,
-      "learning_rate": 2.8437761069340018e-06,
-      "loss": 1.1635,
-      "step": 5800
-    },
-    {
-      "epoch": 8.721804511278195,
-      "eval_train_loss": 1.794918417930603,
-      "eval_train_runtime": 5.1728,
-      "eval_train_samples_per_second": 1035.991,
-      "eval_train_steps_per_second": 64.762,
-      "step": 5800
-    },
-    {
-      "epoch": 9.022556390977444,
-      "grad_norm": 13.075417518615723,
-      "learning_rate": 2.1754385964912285e-06,
-      "loss": 1.1324,
-      "step": 6000
-    },
-    {
-      "epoch": 9.022556390977444,
-      "eval_train_loss": 1.7962439060211182,
-      "eval_train_runtime": 5.1942,
-      "eval_train_samples_per_second": 1031.737,
-      "eval_train_steps_per_second": 64.496,
-      "step": 6000
-    },
-    {
-      "epoch": 9.323308270676693,
-      "grad_norm": 12.90481948852539,
-      "learning_rate": 1.5071010860484548e-06,
-      "loss": 1.1304,
-      "step": 6200
-    },
-    {
-      "epoch": 9.323308270676693,
-      "eval_train_loss": 1.8035305738449097,
-      "eval_train_runtime": 5.1397,
-      "eval_train_samples_per_second": 1042.671,
-      "eval_train_steps_per_second": 65.179,
-      "step": 6200
-    },
-    {
-      "epoch": 9.62406015037594,
-      "grad_norm": 12.501527786254883,
-      "learning_rate": 8.38763575605681e-07,
-      "loss": 1.1126,
-      "step": 6400
-    },
-    {
-      "epoch": 9.62406015037594,
-      "eval_train_loss": 1.8056447505950928,
-      "eval_train_runtime": 5.1771,
-      "eval_train_samples_per_second": 1035.144,
-      "eval_train_steps_per_second": 64.709,
-      "step": 6400
-    },
-    {
-      "epoch": 9.924812030075188,
-      "grad_norm": 10.318084716796875,
-      "learning_rate": 1.704260651629073e-07,
-      "loss": 1.0986,
-      "step": 6600
-    },
-    {
-      "epoch": 9.924812030075188,
-      "eval_train_loss": 1.806175947189331,
-      "eval_train_runtime": 5.1696,
-      "eval_train_samples_per_second": 1036.634,
-      "eval_train_steps_per_second": 64.802,
-      "step": 6600
     }
   ],
   "logging_steps": 200,
-  "max_steps": 6650,
+  "max_steps": 2331,
   "num_input_tokens_seen": 0,
-  "num_train_epochs": 10,
+  "num_train_epochs": 7,
   "save_steps": 3000,
   "stateful_callbacks": {
     "TrainerControl": {
@@ -523,7 +193,7 @@
     }
   },
   "total_flos": 0.0,
-  "train_batch_size": 16,
+  "train_batch_size": 32,
   "trial_name": null,
   "trial_params": null
 }
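The old and new trainer states can be cross-checked arithmetically against the card's dataset_size of 10635: ceil(10635 / 32) = 333 optimizer steps per epoch times 7 epochs gives the new max_steps of 2331, and ceil(10635 / 16) = 665 times 10 gives the removed 6650. (The README still lists per_device_train_batch_size 16 while train_batch_size here is 32; an effective batch of 32 per step, e.g. from two devices or accumulation, is an assumption, not stated in the diff.) With warmup_ratio 0.1, the linear scheduler warms up for about 233 steps, which is why the logged learning rate still rises between steps 200 and 400 before decaying:

```python
import math

dataset_size = 10635  # from the card's dataset_size tag

# New run (this commit): effective batch 32, 7 epochs
steps_per_epoch = math.ceil(dataset_size / 32)   # 333
assert steps_per_epoch * 7 == 2331               # matches "max_steps": 2331

# Old run (previous revision): effective batch 16, 10 epochs
assert math.ceil(dataset_size / 16) * 10 == 6650  # matches the removed "max_steps": 6650

# warmup_ratio 0.1 over 2331 total steps ≈ 233 warmup steps, so the
# learning rate peaks near 2e-05 shortly after step 200.
print(int(0.1 * 2331))  # 233
```

The same arithmetic explains the eval cadence: with eval_steps 200, the 2331-step run logs 11 checkpoints, exactly the rows in the new Training Logs table.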
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c5426350813d7892767af2be085b90ee8f4228e448896c2b7304612735ddb7b6
+oid sha256:20f9cc1bfbf387326ed07dacc8a11b82a6ff607c0cb073f258fc1350a90ff02a
 size 5496