
whisper-large-v3-atco2-asr

This model is a fine-tuned version of openai/whisper-large-v3 on the ATCO2-ASR dataset of air traffic control speech. It achieves the following results on the evaluation set:

  • Loss: 0.7695
  • WER: 17.0374
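
WER here is the word error rate, reported in percent. For reference, a minimal sketch (not part of the original card) of how this metric is typically computed with the evaluate library; the prediction and reference strings below are illustrative placeholders:

```python
# Hedged sketch: computing WER with the `evaluate` library (requires jiwer).
# Predictions and references are illustrative placeholders, not card data.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["cleared to land runway two seven"]
references = ["cleared to land runway two seven right"]

# evaluate's "wer" returns a fraction; multiply by 100 to get percent,
# matching the scale of the 17.0374 figure reported above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```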

Model description

This is openai/whisper-large-v3 fine-tuned for automatic speech recognition of air traffic control (ATC) radio communications, the domain covered by the ATCO2 corpus. Further details have not been provided by the author.

Intended uses & limitations

The model is intended for transcribing English ATC communications. As with other domain-specific Whisper fine-tunes, accuracy on general-domain audio may be worse than the base model's; no evaluation beyond the WER above has been documented.
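
As a usage illustration, here is a minimal sketch (not part of the original card) that loads the checkpoint with the transformers ASR pipeline; the audio file name is a placeholder:

```python
# Hedged usage sketch: load the fine-tuned checkpoint with the transformers
# automatic-speech-recognition pipeline and transcribe a local recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jlvdoorn/whisper-large-v3-atco2-asr",
    chunk_length_s=30,  # Whisper processes audio in 30 s windows
)

result = asr("atc_recording.wav")  # placeholder path; pipeline resamples input
print(result["text"])
```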

Training and evaluation data

Presumably the ATCO2 corpus of air traffic control speech, per the model name; the exact training and evaluation splits have not been documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 2800
  • mixed_precision_training: Native AMP
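
For illustration, the configuration above can be expressed as Seq2SeqTrainingArguments, assuming the standard transformers Seq2Seq fine-tuning setup was used; the output_dir and the 100-step evaluation cadence (inferred from the results table below) are assumptions, not from the card:

```python
# Hedged sketch: Seq2SeqTrainingArguments mirroring the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-atco2-asr",  # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,               # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=2800,
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="steps",
    eval_steps=100,               # inferred from the 100-step results cadence
    predict_with_generate=True,   # generate text so WER can be computed
)
# "distributed_type: multi-GPU" corresponds to launching with torchrun or
# accelerate, not to a TrainingArguments field.
```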

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER (%) |
|---------------|--------|------|-----------------|---------|
| 0.1388        | 3.57   | 100  | 0.5488          | 20.1957 |
| 0.0313        | 7.14   | 200  | 0.5830          | 17.5712 |
| 0.0173        | 10.71  | 300  | 0.5898          | 20.4181 |
| 0.004         | 14.29  | 400  | 0.6201          | 16.3256 |
| 0.001         | 17.86  | 500  | 0.6543          | 18.4164 |
| 0.002         | 21.43  | 600  | 0.6499          | 17.8381 |
| 0.0003        | 25.0   | 700  | 0.6724          | 17.1263 |
| 0.0002        | 28.57  | 800  | 0.6890          | 16.9929 |
| 0.0002        | 32.14  | 900  | 0.7012          | 16.8594 |
| 0.0001        | 35.71  | 1000 | 0.7104          | 16.9484 |
| 0.0001        | 39.29  | 1100 | 0.7178          | 16.9039 |
| 0.0001        | 42.86  | 1200 | 0.7241          | 17.4377 |
| 0.0001        | 46.43  | 1300 | 0.7305          | 17.3488 |
| 0.0001        | 50.0   | 1400 | 0.7358          | 17.3043 |
| 0.0001        | 53.57  | 1500 | 0.7407          | 17.3043 |
| 0.0001        | 57.14  | 1600 | 0.7451          | 17.1263 |
| 0.0001        | 60.71  | 1700 | 0.7495          | 17.2598 |
| 0.0001        | 64.29  | 1800 | 0.7529          | 17.2153 |
| 0.0001        | 67.86  | 1900 | 0.7563          | 17.2598 |
| 0.0001        | 71.43  | 2000 | 0.7593          | 17.4377 |
| 0.0001        | 75.0   | 2100 | 0.7612          | 17.3932 |
| 0.0001        | 78.57  | 2200 | 0.7632          | 17.2598 |
| 0.0          | 82.14  | 2300 | 0.7651          | 17.1263 |
| 0.0          | 85.71  | 2400 | 0.7666          | 17.0819 |
| 0.0          | 89.29  | 2500 | 0.7681          | 17.0374 |
| 0.0          | 92.86  | 2600 | 0.7686          | 17.0374 |
| 0.0          | 96.43  | 2700 | 0.7695          | 17.1263 |
| 0.0          | 100.0  | 2800 | 0.7695          | 17.0374 |

Framework versions

  • Transformers 4.35.0
  • PyTorch 2.0.1+cu117
  • Datasets 2.12.0
  • Tokenizers 0.14.1