bomolopuu committed on
Commit 9445e2f · verified · 1 Parent(s): cb86c05

End of training

Files changed (2)
  1. README.md +5 -11
  2. adapter.bam.safetensors +1 -1
README.md CHANGED
@@ -4,8 +4,6 @@ license: cc-by-nc-4.0
 base_model: facebook/mms-1b-all
 tags:
 - generated_from_trainer
-metrics:
-- wer
 model-index:
 - name: wav2vec2-large-mms-1b-ngn-on-bam-colab
   results: []
@@ -17,9 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
 # wav2vec2-large-mms-1b-ngn-on-bam-colab
 
 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 1.4665
-- Wer: 0.6688
 
 ## Model description
 
@@ -38,21 +33,20 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.001
-- train_batch_size: 8
+- learning_rate: 0.0005
+- train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 64
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 100
+- lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:------:|:----:|:---------------:|:------:|
-| 10.9406 | 1.2048 | 100 | 1.4665 | 0.6688 |
 
 
 ### Framework versions
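
The updated hyperparameter block corresponds to a conventional `transformers` Trainer configuration. The sketch below is only an illustration of how those values might be expressed as `TrainingArguments` (the actual training script is not part of this commit; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Hypothetical mapping of the model-card hyperparameters to TrainingArguments;
# not the script used for this commit.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-mms-1b-ngn-on-bam-colab",  # placeholder
    learning_rate=5e-4,                # learning_rate: 0.0005
    per_device_train_batch_size=16,    # train_batch_size: 16
    per_device_eval_batch_size=8,      # eval_batch_size: 8
    gradient_accumulation_steps=4,     # 16 * 4 = total_train_batch_size 64
    optim="adamw_torch",               # AdamW defaults: betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                  # lr_scheduler_warmup_ratio: 0.1
    num_train_epochs=2,
    seed=42,
    fp16=True,                         # mixed_precision_training: Native AMP
)
```
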
adapter.bam.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9fad7e10b2d11b6dda268269da66b8e5518ef278ce8aefa547d50a02429966cd
+oid sha256:709c7e0527d51b012c99ed9191ce1eb2ed35bfc56ba50855aa2b5be48e4554b3
 size 8813904
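
Only the adapter weights changed here: `adapter.bam.safetensors` keeps the same size but gets a new SHA-256, i.e. the Bambara ("bam") language adapter was retrained. A hedged sketch of loading the updated adapter via the MMS adapter mechanism in `transformers` follows; the repo id is assumed from the commit author and model name, and the audio input is a placeholder:

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Assumed repo id, built from the commit author and the model name on this page.
model_id = "bomolopuu/wav2vec2-large-mms-1b-ngn-on-bam-colab"

processor = AutoProcessor.from_pretrained(model_id)
# target_lang="bam" selects adapter.bam.safetensors; ignore_mismatched_sizes is
# commonly needed because the CTC head is resized to the Bambara vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    model_id, target_lang="bam", ignore_mismatched_sizes=True
)

audio = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```
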