djdhyun-gglabs committed
Commit 15926e4 · verified · 1 Parent(s): 5ad0692

End of training

Files changed (1)
1. README.md +1 -13
README.md CHANGED
@@ -6,8 +6,6 @@ license: mit
 base_model: openai/whisper-large-v3-turbo
 tags:
 - generated_from_trainer
-metrics:
-- wer
 model-index:
 - name: Whisper Small ko
   results: []
@@ -19,9 +17,6 @@ should probably proofread and complete it, then remove this comment. -->
 # Whisper Small ko
 
 This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the custom dataset.
-It achieves the following results on the evaluation set:
-- Loss: 1.6056
-- Wer: 52.7174
 
 ## Model description
 
@@ -47,18 +42,11 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 500
+- training_steps: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer     |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.0           | 50.0  | 100  | 0.9917          | 36.4130 |
-| 0.0           | 100.0 | 200  | 1.1163          | 39.6739 |
-| 0.0           | 150.0 | 300  | 1.2701          | 47.2826 |
-| 0.0           | 200.0 | 400  | 1.4354          | 50.0    |
-| 0.0           | 250.0 | 500  | 1.6056          | 52.7174 |
 
 
 ### Framework versions
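
For reference, the hyperparameters listed in the diff map directly onto `transformers`' `Seq2SeqTrainingArguments`. Below is a minimal sketch of that mapping, not the author's training script; the output directory is a placeholder, and values the card does not list (learning rate, batch sizes) are left at their defaults.

```python
# Sketch: the card's listed hyperparameters expressed as
# Seq2SeqTrainingArguments. output_dir is a placeholder, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ko",  # placeholder
    optim="adamw_torch",              # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,                   # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=500,                 # lr_scheduler_warmup_steps: 500
    max_steps=10,                     # training_steps: 10 (was 500)
    fp16=True,                        # mixed_precision_training: Native AMP
)
```

This mirrors the one substantive change in the commit: `training_steps` drops from 500 to 10.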
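The removed `wer` metadata entry is the word error rate reported on the evaluation set (52.7174 at step 500 before this commit). A minimal sketch of how WER is typically computed for `generated_from_trainer` cards, using the `evaluate` library; the transcripts below are illustrative placeholders, not samples from the custom dataset.

```python
# Sketch: computing word error rate (WER) with the evaluate library.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["hello world this is a test"]   # hypothetical model output
references = ["hello world this was a test"]   # hypothetical ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # reported in percent, e.g. 52.7174 on this card
```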
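Since the card describes a fine-tuned openai/whisper-large-v3-turbo checkpoint, it can be loaded with the standard `transformers` ASR pipeline. A minimal usage sketch; the repo id below is a hypothetical placeholder, not the actual repository name.

```python
# Sketch: transcribing audio with a fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-namespace/whisper-small-ko",  # hypothetical repo id
)
print(asr("sample.wav")["text"])  # path to a local audio file
```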