gbennani committed
Commit c7773b3 · verified · 1 parent: ad0de5b

End of training

Files changed (2):
  1. README.md +4 -5
  2. generation_config.json +1 -1
README.md CHANGED
@@ -35,13 +35,12 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
 - train_batch_size: 1
-- eval_batch_size: 8
+- eval_batch_size: 1
 - seed: 42
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 3
+- mixed_precision_training: Native AMP
 
 ### Training results
 
@@ -49,7 +48,7 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.52.3
+- Transformers 4.52.4
 - Pytorch 2.5.1+cu124
-- Datasets 3.6.0
+- Datasets 2.21.0
 - Tokenizers 0.21.0
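Dropping gradient_accumulation_steps from the README is why total_train_batch_size disappears alongside it: Transformers reports the total batch size as per-device batch size × gradient accumulation steps × device count. A minimal sketch of that relationship (the helper name is ours, not from this repo):

```python
# Total train batch size, as the Transformers trainer reports it:
# per-device batch size x gradient accumulation steps x number of devices.
def total_train_batch_size(per_device: int, grad_accum: int = 1, n_devices: int = 1) -> int:
    return per_device * grad_accum * n_devices

# Before this commit: train_batch_size=1 with gradient_accumulation_steps=8
assert total_train_batch_size(1, 8) == 8
# After: accumulation removed, so the effective batch size falls to 1
assert total_train_batch_size(1) == 1
```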
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "bos_token_id": 151643,
   "eos_token_id": 151643,
   "max_new_tokens": 2048,
-  "transformers_version": "4.52.3"
+  "transformers_version": "4.52.4"
 }
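The only change in generation_config.json is the transformers_version stamp, which Transformers rewrites when the config is regenerated; the decoding settings themselves are untouched. A quick check of that claim against the two revisions (the JSON literals below are transcribed from the diff):

```python
import json

# generation_config.json before and after this commit, per the diff above
old = json.loads('{"bos_token_id": 151643, "eos_token_id": 151643, '
                 '"max_new_tokens": 2048, "transformers_version": "4.52.3"}')
new = json.loads('{"bos_token_id": 151643, "eos_token_id": 151643, '
                 '"max_new_tokens": 2048, "transformers_version": "4.52.4"}')

# Keys whose values differ between the two revisions
changed = {k for k in old if old[k] != new[k]}
assert changed == {"transformers_version"}
```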