# CTranslate2 Conversion of whisper-large-v3-turbo-latam (INT8 Quantization)
This model was converted from marianbasti/whisper-large-v3-turbo-latam to the CTranslate2 format using INT8 quantization, primarily for use with faster-whisper.
## Model Details
For details about the finetuned model itself, see the original model card for marianbasti/whisper-large-v3-turbo-latam.
## Conversion Details
The original model was converted using the following command:
```shell
ct2-transformers-converter --model marianbasti/whisper-large-v3-turbo-latam \
  --copy_files tokenizer.json preprocessor_config.json \
  --output_dir faster-whisper-large-v3-turbo-latam-int8-ct2 \
  --quantization int8
```
For more information on model conversion, see the CTranslate2 documentation.
See Zoont/faster-whisper-large-v3-turbo-int8-ct2 for an INT8-quantized version of the original whisper-large-v3-turbo.
## Model Tree

- Base model: openai/whisper-large-v3
- Finetuned: openai/whisper-large-v3-turbo
- Finetuned: marianbasti/whisper-large-v3-turbo-latam
- This model: nekusu/faster-whisper-large-v3-turbo-latam-int8-ct2