
Whisper Large Ru ORD 0.9 PEFT 4-bit Q DoRA - Mizoru

This model is a PEFT adapter fine-tuned from openai/whisper-large-v2 on the ORD_0.9 dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the metrics):

  • Loss: 2.8845
  • WER: 74.8477
  • CER: 38.7512
  • Clean WER: 36.9662
  • Clean CER: 22.6459
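
The following is a minimal loading sketch, assuming standard PEFT adapter usage on top of openai/whisper-large-v2; the adapter repo id and the audio input are illustrative placeholders, not values taken from this card:

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

ADAPTER_ID = "mizoru/<this-adapter-repo>"  # hypothetical placeholder for this repo's id

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the adapter weights
model.eval()

# Replace with a real 16 kHz mono waveform; zeros are just a runnable stand-in.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    ids = model.generate(
        input_features=inputs.input_features,
        language="russian",  # assumed from the "Ru" in the model name
        task="transcribe",
    )
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```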

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 4
  • mixed_precision_training: Native AMP
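
A minimal training-setup sketch mirroring the hyperparameters above, assuming a 4-bit quantized base model with a DoRA adapter as suggested by the model name. The rank, alpha, and target modules are illustrative assumptions; the actual adapter_config.json values are not shown on this card:

```python
import torch
from transformers import (
    BitsAndBytesConfig,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit (the "4-bit Q" in the model name).
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", quantization_config=bnb, device_map="auto"
)
base = prepare_model_for_kbit_training(base)

peft_config = LoraConfig(
    r=32,                                 # assumed rank
    lora_alpha=64,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    use_dora=True,                        # DoRA, per the model name
    task_type="SEQ_2_SEQ_LM",             # passed as a string
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()

# Arguments mirroring the hyperparameter list above.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-ru-ord-dora",  # illustrative
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=4,
    fp16=True,  # native AMP mixed precision
)
```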

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     | CER     | Clean WER | Clean CER |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:---------:|:---------:|
| 0.4253        | 1.0   | 500  | 2.7477          | 74.1292 | 39.0216 | 37.6289   | 23.1469   |
| 0.3839        | 2.0   | 1000 | 3.0590          | 74.9874 | 39.2363 | 38.2604   | 23.1286   |
| 0.3321        | 3.0   | 1500 | 2.8845          | 74.8477 | 38.7512 | 36.9662   | 22.6459   |
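
For reference, a minimal sketch of how WER/CER scores like these are typically computed with the Hugging Face `evaluate` library; the example strings are illustrative, and scores are scaled to percentages to match the table (an assumption about the reported units):

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["привет как дела"]    # hypothetical decoded transcripts
references = ["привет, как дела?"]   # hypothetical ground-truth text

# compute() returns a fraction; multiply by 100 for percentage scores.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```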

Framework versions

  • PEFT 0.12.0
  • Transformers 4.41.0.dev0
  • PyTorch 2.3.1
  • Datasets 3.2.0
  • Tokenizers 0.19.1