
-Notebook: [fast-whisper-finetuning/Whisper_w_PEFT](https://github.com/Vaibhavs10/fast-whisper-finetuning/blob/main/Whisper_w_PEFT.ipynb)

-GPU: 1× RTX 4090D

-Dataset: TingChen-ppmc/whisper-small-Shanghai

-Training: `trainer.train()` with `per_device_train_batch_size=4` raised `OutOfMemoryError: CUDA out of memory. Tried to allocate 60.00 MiB.`

-Train result (100/100 steps, 03:36, epoch 0.15):

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 100  | 1.971000      | 1.035924        |

```
TrainOutput(global_step=100, training_loss=1.9710490417480468, metrics={'train_runtime': 217.3877, 'train_samples_per_second': 1.84, 'train_steps_per_second': 0.46, 'total_flos': 8.5832810496e+17, 'train_loss': 1.9710490417480468, 'epoch': 0.15060240963855423})
```
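If the OOM at batch size 4 recurs, a common workaround is to halve the per-device batch size and compensate with gradient accumulation so the effective batch size is unchanged. The sketch below assumes the LoRA/PEFT setup from the referenced notebook; all parameter values here are illustrative, not this run's actual configuration:

```python
from peft import LoraConfig, get_peft_model
from transformers import Seq2SeqTrainingArguments, WhisperForConditionalGeneration

# Illustrative values only -- not the actual configuration of this run.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
lora = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-shanghai-lora",
    per_device_train_batch_size=2,   # halved from 4 to fit GPU memory
    gradient_accumulation_steps=2,   # effective batch size stays at 4
    learning_rate=1e-3,
    max_steps=100,
    fp16=True,
    remove_unused_columns=False,     # required when training with PEFT collators
)
```

Gradient checkpointing (`gradient_checkpointing=True`) is another memory lever, trading compute for activation storage.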

-Eval result: `{'eval/wer': 98.68189806678383, 'eval/normalized_wer': 103.27573794096472}`
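A normalized WER above 100% is possible because WER divides the edit distance by the reference length, and insertions can make the distance exceed it. A minimal pure-Python sketch of the standard Levenshtein-based WER (the actual eval likely used a library such as `evaluate`/`jiwer`) illustrates this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length, as a percentage."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 100.0 * dp[len(r)][len(h)] / len(r)

# Two insertions against a one-word reference -> WER of 200%:
print(wer("hello", "oh hello there"))  # 200.0
```

Here, heavy insertion errors in the hypotheses would explain the normalized WER of 103%.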
