bomolopuu committed (verified)
Commit 6411009 · 1 Parent(s): 90e0b1a

End of training

Files changed (2):
  1. README.md +12 -4
  2. adapter.bam.safetensors +1 -1
README.md CHANGED
@@ -4,6 +4,8 @@ license: cc-by-nc-4.0
 base_model: facebook/mms-1b-all
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: wav2vec2-large-mms-1b-ngn-on-bam-colab
   results: []
@@ -15,6 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 # wav2vec2-large-mms-1b-ngn-on-bam-colab
 
 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.0502
+- Wer: 0.65
 
 ## Model description
 
@@ -37,16 +42,19 @@ The following hyperparameters were used during training:
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 32
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 64
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
-- lr_scheduler_type: cosine
+- lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 4
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
+| Training Loss | Epoch  | Step | Validation Loss | Wer  |
+|:-------------:|:------:|:----:|:---------------:|:----:|
+| 5.8957        | 9.0952 | 100  | 1.0502          | 0.65 |
 
 
 ### Framework versions
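For reference, here is a minimal sketch of how the updated hyperparameters above could be expressed as `transformers` `TrainingArguments`. The training script is not part of this commit, so the output directory, the learning rate, and anything else not listed in the diff are assumptions.

```python
from transformers import TrainingArguments

# Sketch only: maps the card's updated hyperparameters onto TrainingArguments.
# output_dir is assumed from the model name; learning_rate is not shown in the
# diff above and is therefore omitted here.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-mms-1b-ngn-on-bam-colab",  # assumed
    per_device_train_batch_size=16,  # train_batch_size: 16
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=4,   # 16 x 4 = total_train_batch_size: 64
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio: 0.1
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), eps=1e-08
    seed=42,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```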
adapter.bam.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:112505b001809d887d445f175e602cafc0be6243a8249f24ac52e7b3ad4371cd
+oid sha256:bf345f6f3de304d0ce085f8d63c848b6772698af15a3d01eb8e50415402f2d8e
 size 8813904
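adapter.bam.safetensors is the per-language ("bam") adapter weight file that MMS checkpoints swap in at load time; only its LFS hash changes in this commit. Below is a hedged inference sketch using the documented MMS adapter API in `transformers`. The repo id is assumed from the committer and model name (it is not stated in this commit), and the sketch assumes the repo also contains the processor/tokenizer files.

```python
import torch
from transformers import Wav2Vec2ForCTC, AutoProcessor

# Assumed repo id (committer + model name); adjust if the model lives elsewhere.
model_id = "bomolopuu/wav2vec2-large-mms-1b-ngn-on-bam-colab"

processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(
    model_id,
    target_lang="bam",             # selects adapter.bam.safetensors
    ignore_mismatched_sizes=True,  # CTC head size differs from the base checkpoint
)

# audio: 1-D float array sampled at 16 kHz
# inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
# with torch.no_grad():
#     logits = model(**inputs).logits
# text = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
```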