
whisper-large-v3-turbo-sandi-train-1-rich-transcript-32

This model is a fine-tuned version of openai/whisper-large-v3-turbo on the ntnu-smil/sandi2025-ds dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7920
  • WER: 23.2484
  • CER: 16.5856
  • Decode Runtime: 203.4514
  • WER Runtime: 0.1639
  • CER Runtime: 0.3157
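
To run inference with this model, the sketch below loads the base checkpoint and applies the adapter weights on top. This is a minimal illustration, assuming the repo hosts a standard PEFT adapter (consistent with the PEFT framework version listed below); the zero-filled waveform is a placeholder for real 16 kHz mono audio.

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

BASE = "openai/whisper-large-v3-turbo"
ADAPTER = "ntnu-smil/whisper-large-v3-turbo-sandi-train-1-rich-transcript-32"

processor = WhisperProcessor.from_pretrained(BASE)
model = WhisperForConditionalGeneration.from_pretrained(BASE)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the fine-tuned adapter
model.eval()

# Placeholder: replace with a real 16 kHz mono waveform (e.g. from sandi2025-ds).
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```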

Model description

This repository provides a PEFT adapter for openai/whisper-large-v3-turbo, fine-tuned for speech recognition on the ntnu-smil/sandi2025-ds dataset; only the adapter weights are trained, leaving the base model unchanged. See the loading sketch above.

Intended uses & limitations

More information needed

Training and evaluation data

Training and evaluation used the ntnu-smil/sandi2025-ds dataset; per-epoch evaluation results are listed under Training results below.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 7e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.98) and epsilon=1e-06; no additional optimizer arguments
  • lr_scheduler_type: linear
  • training_steps: 732
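
Expressed as transformers Seq2SeqTrainingArguments, those values map roughly onto the sketch below. It is an assumption that the standard Trainer API was used; output_dir and any option not named in the list above is illustrative.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; output_dir and
# any argument not in the list above is an illustrative assumption.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-turbo-sandi-train-1-rich-transcript-32",
    learning_rate=7e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    max_steps=732,
)
```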

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER      | CER     | Decode Runtime | WER Runtime | CER Runtime |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------:|:--------------:|:-----------:|:-----------:|
| 0.9653        | 0.1667 | 122  | 0.8334          | 103.3411 | 64.3972 | 245.8729       | 0.1967      | 0.3575      |
| 1.1313        | 1.1667 | 244  | 0.8073          | 53.0086  | 33.9467 | 210.6469       | 0.1851      | 0.3293      |
| 0.54          | 2.1667 | 366  | 0.7915          | 25.4142  | 18.3008 | 196.4910       | 0.1906      | 0.3139      |
| 0.3761        | 3.1667 | 488  | 0.7882          | 24.2463  | 17.3425 | 196.9004       | 0.1675      | 0.3169      |
| 0.8462        | 4.1667 | 610  | 0.7921          | 23.4051  | 16.7178 | 197.5723       | 0.1661      | 0.3141      |
| 0.9957        | 5.1667 | 732  | 0.7920          | 23.2484  | 16.5856 | 203.4514       | 0.1639      | 0.3157      |
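
The WER and CER columns are percentages; the runtime columns are presumably seconds. As a reference point, a minimal sketch of computing such scores with the evaluate library (the strings are illustrative; the real references come from the ntnu-smil/sandi2025-ds evaluation split):

```python
import evaluate  # requires `pip install evaluate jiwer`

# Illustrative strings only; actual references come from the eval split.
predictions = ["the quick brown fox"]
references = ["the quick brown foxes"]

wer = evaluate.load("wer").compute(predictions=predictions, references=references)
cer = evaluate.load("cer").compute(predictions=predictions, references=references)
print(f"WER: {wer * 100:.4f}  CER: {cer * 100:.4f}")
```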

Framework versions

  • PEFT 0.15.2
  • Transformers 4.52.2
  • PyTorch 2.8.0.dev20250319+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1