(lo)whipa-models
Full and PEFT LoRA (LoWhIPA) fine-tuned Whisper-base and Whisper-large-v2 models for language-agnostic IPA transcription of speech.
This Whisper-for-IPA (WhIPA) model adapter is a PEFT LoRA fine-tuned version of openai/whisper-large-v2, trained on a subset of the datasets described in the repository linked below.
For deployment details and a full description, please refer to https://github.com/jshrdt/whipa. The snippet below loads the LoRA adapter on top of the base model:
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor, WhisperTokenizer
from peft import PeftModel

# Register the custom "<|ip|>" language token; re-passing the existing special
# tokens keeps them in the tokenizer's special-token list.
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-large-v2", task="transcribe")
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|ip|>"] + tokenizer.all_special_tokens}
)

# Load the base model, map the new token in the generation config, and resize
# the embeddings to cover the enlarged vocabulary.
base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
base_model.generation_config.lang_to_id["<|ip|>"] = tokenizer.convert_tokens_to_ids(["<|ip|>"])[0]
base_model.resize_token_embeddings(len(tokenizer))

# Attach the LoRA adapter and default generation to IPA transcription.
whipa_model = PeftModel.from_pretrained(base_model, "jshrdt/lowhipa-large-comb")
whipa_model.generation_config.language = "<|ip|>"
whipa_model.generation_config.task = "transcribe"

whipa_processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2", task="transcribe")
```
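With the adapter loaded, transcription follows the standard Whisper pipeline. Below is a minimal inference sketch; the `audio` array, the `sample.wav` filename, and the use of librosa for loading are illustrative assumptions, not part of the model card:

```python
import librosa
import torch

# Hypothetical input file; Whisper's feature extractor expects 16 kHz mono audio.
audio, sr = librosa.load("sample.wav", sr=16000)

features = whipa_processor(audio, sampling_rate=sr, return_tensors="pt").input_features

with torch.no_grad():
    # language/task default to the values set on generation_config above.
    pred_ids = whipa_model.generate(input_features=features)

ipa = whipa_processor.batch_decode(pred_ids, skip_special_tokens=True)[0]
print(ipa)
```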
Training results:

| Training Loss | Epoch | Validation Loss |
|:---:|:---:|:---:|
| 0.7537 | 2.0323 | 0.5797 |
| 0.2638 | 4.0645 | 0.4017 |
| 0.1532 | 6.0968 | 0.4054 |
| 0.0909 | 8.1290 | 0.4511 |
| 0.0535 | 10.1613 | 0.4732 |
Base model: openai/whisper-large-v2