
Whisper Large Ru ORD 0.9 PEFT 4-bit Q DoRA - Mizoru

This model is a fine-tuned version of openai/whisper-large-v2 on the ORD_0.9synth dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9568
  • WER: 40.6051
  • CER: 24.1709
  • Clean WER: 30.6324
  • Clean CER: 18.3212
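The card itself ships no usage code, so the following is a minimal inference sketch, assuming the adapter is loaded on top of a 4-bit quantized base model (as the "4-bit Q" in the title suggests). The file `sample.wav` is a hypothetical 16 kHz mono recording, not part of this repository.

```python
import soundfile as sf
import torch
from peft import PeftModel
from transformers import (
    BitsAndBytesConfig,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

base_id = "openai/whisper-large-v2"
adapter_id = "mizoru/whisper-large-ru-ORD_0.9_peft_wth_synth_0.1"

# Load the frozen base model in 4-bit, matching the "4-bit Q" in the card title.
base = WhisperForConditionalGeneration.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Attach the DoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

processor = WhisperProcessor.from_pretrained(base_id)

# "sample.wav" is a hypothetical 16 kHz mono recording; substitute your own audio.
audio, sampling_rate = sf.read("sample.wav")
inputs = processor(audio, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(
        input_features=inputs.input_features.to(model.device),
        language="russian",
        task="transcribe",
    )

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```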

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 4
  • mixed_precision_training: Native AMP
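For reference, these hyperparameters map directly onto `Seq2SeqTrainingArguments` from transformers; the sketch below is an assumption about how the run was configured, with `output_dir` as a placeholder and the dataset/Trainer wiring omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from the list above; Adam with betas=(0.9, 0.999) and
# epsilon=1e-08 is the Trainer's default optimizer configuration.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-ru-ord-peft",  # hypothetical
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=4,
    fp16=True,  # Native AMP mixed precision
)
```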

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     | CER     | Clean WER | Clean CER |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:---------:|:---------:|
| 1.2498        | 1.0   | 1162 | 1.0305          | 44.9736 | 26.2819 | 34.8983   | 20.4239   |
| 1.1376        | 2.0   | 2324 | 0.9974          | 45.1705 | 26.6383 | 33.6258   | 20.2044   |
| 1.0073        | 3.0   | 3486 | 0.9692          | 41.3758 | 24.8708 | 31.3579   | 19.0546   |
| 0.9389        | 4.0   | 4648 | 0.9568          | 40.6051 | 24.1709 | 30.6324   | 18.3212   |
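The card does not state which implementation produced the WER/CER numbers above; a common choice is the jiwer library, sketched here on a toy Russian pair (the exact text normalization used for the "Clean" variants is unknown).

```python
import jiwer

# Toy reference/hypothesis pair; one extra word and its characters inflate both rates.
reference = "добрый день как дела"
hypothesis = "добрый день как дела дела"

print(f"WER: {jiwer.wer(reference, hypothesis) * 100:.4f}")  # word error rate, %
print(f"CER: {jiwer.cer(reference, hypothesis) * 100:.4f}")  # character error rate, %
```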

Framework versions

  • PEFT 0.12.0
  • Transformers 4.41.0.dev0
  • Pytorch 2.3.1
  • Datasets 3.2.0
  • Tokenizers 0.19.1