Distill Whisper Call Center Tforge Dev lr8

This model is a fine-tuned version of distil-whisper/distil-large-v3 on the www_call_center_merged_en_corrected dataset. It achieves the following results on the evaluation set (an inference sketch follows the list):

  • Loss: 1.3423
  • WER: 48.5786
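
For reference, a minimal transcription sketch using the Transformers `pipeline` API; the repo id `Luandrie/_Whisper_Call_Center_en_lr8` is taken from this card, and the audio path is a placeholder:

```python
# Minimal inference sketch (assumes the checkpoint is published on the Hub
# as Luandrie/_Whisper_Call_Center_en_lr8 and that an audio file exists at
# the placeholder path below).
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"

asr = pipeline(
    "automatic-speech-recognition",
    model="Luandrie/_Whisper_Call_Center_en_lr8",
    device=device,
)

# Long-form call-center audio can be transcribed in 30 s chunks.
result = asr("call_sample.wav", chunk_length_s=30)
print(result["text"])
```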

Model description

A 756M-parameter (F32) fine-tune of distil-whisper/distil-large-v3 for English call-center speech recognition; more information needed.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-08
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
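
As a rough translation of the list above into code, a hedged `Seq2SeqTrainingArguments` sketch; `output_dir` is a placeholder, and `fp16=True` is an assumption standing in for "Native AMP":

```python
# Hedged reconstruction of the reported hyperparameters using
# transformers.Seq2SeqTrainingArguments. Adam betas=(0.9, 0.999) and
# epsilon=1e-8 are the Trainer defaults, so they are not set explicitly.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./distil-whisper-call-center-lr8",  # placeholder
    learning_rate=1e-8,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # assumption: "Native AMP" mixed precision
)
```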

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.8384        | 3.0722  | 1000 | 1.3904          | 49.8263 |
| 0.6597        | 6.1444  | 2000 | 1.3512          | 48.8471 |
| 0.6763        | 9.2166  | 3000 | 1.3436          | 48.2628 |
| 0.6504        | 12.2888 | 4000 | 1.3423          | 48.5786 |
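
The WER figures above are percentages. As a hedged sketch of how such a score can be computed with the `evaluate` library (the reference and prediction strings here are made up for illustration):

```python
# Minimal WER computation sketch using the `evaluate` library; the
# strings below are illustrative, not taken from the dataset.
import evaluate

wer_metric = evaluate.load("wer")

references = ["thank you for calling how may i help you"]
predictions = ["thank you for calling how can i help you"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.4f}%")  # compute() returns a fraction; the card reports percent
```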

Framework versions

  • Transformers 4.45.2
  • PyTorch 2.7.0+cu126
  • Datasets 3.6.0
  • Tokenizers 0.20.3