This model is a fine-tune of OpenAI's Whisper Large v3 Turbo (https://huggingface.co/openai/whisper-large-v3-turbo) on the following Korean datasets:
- https://huggingface.co/datasets/Junhoee/STT_Korean_Dataset_80000
- https://huggingface.co/datasets/Bingsu/zeroth-korean

Combined, they contain roughly 102k sentences.
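If you want to rebuild the training mix yourself, a minimal sketch with 🤗 Datasets could look like the following. The column names `"audio"` and `"text"` are assumptions; check each dataset card for the actual schema.

```python
# Minimal sketch of assembling the combined Korean corpus (~102k sentences).
# Column names ("audio", "text") are assumptions; adjust to each dataset's schema.
from datasets import Audio, concatenate_datasets, load_dataset

stt_korean = load_dataset("Junhoee/STT_Korean_Dataset_80000", split="train")
zeroth = load_dataset("Bingsu/zeroth-korean", split="train")

# Keep only the shared columns and resample to Whisper's expected 16 kHz.
stt_korean = stt_korean.select_columns(["audio", "text"]).cast_column("audio", Audio(sampling_rate=16_000))
zeroth = zeroth.select_columns(["audio", "text"]).cast_column("audio", Audio(sampling_rate=16_000))

combined = concatenate_datasets([stt_korean, zeroth])
print(len(combined))  # roughly 102k sentences in total
```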
This is the final checkpoint, which reached ~16 WER (down from ~24 WER). Training ran for 10,000 iterations.
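The full training recipe is not published here beyond the step count, but a typical 🤗 Transformers setup for this kind of Whisper fine-tune looks roughly like the sketch below. Everything except the 10,000-step budget and the base model is illustrative, not the values actually used.

```python
# Hedged sketch of a Whisper fine-tuning setup; hyperparameters other than
# max_steps=10_000 and the base checkpoint are illustrative assumptions.
import evaluate
from transformers import (
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3-turbo")
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v3-turbo", language="korean", task="transcribe"
)

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-turbo-korean",
    max_steps=10_000,                # training budget reported above
    per_device_train_batch_size=16,  # illustrative
    learning_rate=1e-5,              # illustrative
    fp16=True,                       # requires a GPU
    predict_with_generate=True,
    evaluation_strategy="steps",
    eval_steps=1_000,
)

# WER (the ~16 figure above) is computed on decoded predictions vs. references.
wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    label_ids = pred.label_ids
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred.predictions, skip_special_tokens=True)
    label_str = processor.batch_decode(label_ids, skip_special_tokens=True)
    return {"wer": 100 * wer_metric.compute(predictions=pred_str, references=label_str)}
```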
Model tree for royshilkrot/whisper-large-v3-turbo-korean-ggml
- Base model: openai/whisper-large-v3
- Finetuned from: openai/whisper-large-v3-turbo
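Since the weights in this repo are in GGML format, the natural runtime is whisper.cpp rather than 🤗 Transformers. The sketch below downloads the weights and shells out to a locally built whisper.cpp; the filename `ggml-model.bin` and the binary path `./main` are assumptions (check the repo's file listing and your whisper.cpp build).

```python
# Hedged usage sketch: fetch the GGML weights and transcribe Korean audio
# with a locally built whisper.cpp. Filename and binary path are assumptions.
import subprocess
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="royshilkrot/whisper-large-v3-turbo-korean-ggml",
    filename="ggml-model.bin",  # hypothetical filename; check the repo files
)

# Assumes whisper.cpp has been built and its CLI example is ./main
# (newer builds name it whisper-cli). "-l ko" selects Korean.
subprocess.run(
    ["./main", "-m", model_path, "-l", "ko", "-f", "samples/korean.wav"],
    check=True,
)
```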