ASR_Whisper_Stroke

This model is a fine-tuned version of openai/whisper-small on the ASR_Preprocess_Stroke_Dataset dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2559
  • CER: 14.5144
  • WER: 19.5868

Model description

More information needed

Intended uses & limitations

More information needed
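
Pending fuller documentation from the author, here is a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as yoona-J/ASR_Whisper_Stroke and that the input is an audio file the pipeline can decode (e.g. a 16 kHz WAV); the file path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="yoona-J/ASR_Whisper_Stroke",
)

# Transcribe a local audio file; "sample.wav" is a placeholder path.
result = asr("sample.wav")
print(result["text"])
```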

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 700
  • training_steps: 7000
  • mixed_precision_training: Native AMP
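
For reference, a minimal sketch of how these hyperparameters map onto transformers' Seq2SeqTrainingArguments; the output directory and any setting not listed above (logging and evaluation cadence, for example) are assumptions rather than values taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./ASR_Whisper_Stroke",  # assumed; not stated in the card
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,      # effective train batch size: 8 * 2 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=700,
    max_steps=7000,
    optim="adamw_torch",                # AdamW with betas=(0.9, 0.999), eps=1e-08 (the defaults)
    fp16=True,                          # "Native AMP" mixed-precision training
)
```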

Training results

Training Loss    Epoch     Step    Validation Loss    CER        WER
0.2408           1.1390    1000    0.3222             61.9392    59.4850
0.0986           2.2779    2000    0.2687             40.5623    49.2289
0.0566           3.4169    3000    0.2586             39.9808    45.3462
0.0300           4.5558    4000    0.2521             21.4272    28.0574
0.0157           5.6948    5000    0.2559             15.1623    19.7908
0.0072           6.8337    6000    0.2517             16.4368    23.1233
0.0048           7.9727    7000    0.2559             14.5144    19.5868
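
The CER and WER values appear to be percentages (they exceed 1). A minimal sketch of how such scores can be computed with the evaluate library; the prediction and reference lists below are placeholders, not data from this card.

```python
import evaluate

# Load word- and character-error-rate metrics from the evaluate library.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder transcripts; a real evaluation would decode the model's
# output on the evaluation split of the dataset.
predictions = ["the transcribed hypothesis"]
references = ["the reference transcript"]

# Both metrics return a fraction; scaling by 100 matches the card's
# percentage-style numbers (e.g. a final WER of 19.5868).
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```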

Framework versions

  • Transformers 4.53.0.dev0
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1

Model size

  • 242M params (Safetensors, tensor type F32)
