EhDa24 committed (verified)
Commit eeff85c · Parent(s): 1d6709d

End of training

Files changed (1): README.md (+7 -6)
README.md CHANGED
@@ -34,14 +34,15 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 4
-- eval_batch_size: 4
+- train_batch_size: 3
+- eval_batch_size: 3
 - seed: 42
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 32
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 12
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 4
+- lr_scheduler_warmup_steps: 200
+- num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -53,4 +54,4 @@ The following hyperparameters were used during training:
 - Transformers 4.51.3
 - Pytorch 2.5.1+cu124
 - Datasets 3.2.0
-- Tokenizers 0.21.0
+- Tokenizers 0.21.1
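
For context, the updated hyperparameters are internally consistent: total_train_batch_size = train_batch_size × gradient_accumulation_steps = 3 × 4 = 12, which implies a single training device. Below is a minimal sketch of how this configuration could be expressed with the Transformers `TrainingArguments` API; the training script itself is not part of this commit, so the `output_dir` value and the surrounding setup are hypothetical, and only the hyperparameter values come from the README diff above.

```python
# Minimal sketch, assuming a single-GPU Hugging Face Trainer run.
# Only the hyperparameter values are taken from the README diff above;
# output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",            # hypothetical placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=3,   # train_batch_size: 3
    per_device_eval_batch_size=3,    # eval_batch_size: 3
    seed=42,
    gradient_accumulation_steps=4,   # effective batch: 3 * 4 = 12 on one device
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=200,                # lr_scheduler_warmup_steps: 200
    num_train_epochs=2,
    fp16=True,                       # mixed_precision_training: Native AMP
)
```

Relative to the previous revision, this commit reduces the effective batch size from 32 to 12, adds a 200-step warmup to the linear schedule, and halves the number of epochs from 4 to 2.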