# CTranslate2 Conversion of whisper-large-v3-turbo-latam (INT8 Quantization)

This model is converted from marianbasti/whisper-large-v3-turbo-latam to the CTranslate2 format using INT8 quantization, primarily for use with faster-whisper.

## Model Details

For details about the fine-tuned model, see the original model card for marianbasti/whisper-large-v3-turbo-latam.

## Conversion Details

The original model was converted with the following command:

```shell
ct2-transformers-converter --model marianbasti/whisper-large-v3-turbo-latam \
    --copy_files tokenizer.json preprocessor_config.json \
    --output_dir faster-whisper-large-v3-turbo-latam-int8-ct2 \
    --quantization int8
```
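The `--quantization int8` flag stores model weights as 8-bit integers with a scale factor, shrinking the model and speeding up inference at a small accuracy cost. Below is a minimal, illustrative sketch of symmetric per-tensor INT8 quantization; it is not CTranslate2's exact implementation, just the general idea:

```python
# Illustrative sketch of symmetric INT8 quantization (not CTranslate2's
# exact scheme): floats are mapped to integers in [-127, 127] using a
# single per-tensor scale derived from the largest absolute weight.

def quantize_int8(weights):
    """Quantize a list of floats to int8-range integers plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize_int8(quantized, scale):
    """Recover approximate float weights from the integers and scale."""
    return [q * scale for q in quantized]

weights = [0.12, -0.5, 0.33, 1.0, -0.98]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, round(max_err, 4))
```

Dequantization only approximates the original weights; the rounding error is bounded by half the scale, which is why INT8 models are smaller and faster but can be marginally less accurate.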

For more information, see the CTranslate2 documentation on model conversion.

See Zoont/faster-whisper-large-v3-turbo-int8-ct2 for an INT8-quantized version of the original whisper-large-v3-turbo.
