Distil-Large-v3.5 quants

This is a repository of GGML quants for distil-large-v3.5 (a Whisper-based transcription model), for use with whisper.cpp.

If you are looking for a program to run this model with, I would recommend EasyWhisper UI: it is user-friendly, has a GUI, and automates a lot of the hard stuff for you.

List of Quants

Clicking on a link will download the corresponding quant instantly.

| Link | Quant | Size | Notes |
|------|-------|------|-------|
| GGML | F32 | 3.03 GB | Likely overkill. |
| GGML | F16 | 1.52 GB | Performs better than Q8_0 on noisy audio and music. |
| GGML | Q8_0 | 818 MB | Sweet spot; superficial quality loss at nearly double the speed. |
| GGML | Q6_K | 637 MB | |
| GGML | Q5_K | 538 MB | |
| GGML | Q5_1 | 585 MB | |
| GGML | Q5_0 | 538 MB | Last "good" quant; anything below loses quality rapidly. |
| GGML | Q4_K | 444 MB | Might not have lost too much quality, but I am not sure. |
| GGML | Q4_1 | 491 MB | |
| GGML | Q4_0 | 444 MB | |
| GGML | Q3_K | 345 MB | |
| GGML | Q2_K | 269 MB | Completely nonsensical output. |

The F16 quant was taken from distil-whisper/distil-large-v3.5-ggml/ggml-model.bin.
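
If you would rather use whisper.cpp directly, invoking a downloaded quant looks roughly like this (a minimal sketch assuming a whisper.cpp v1.7.x build, where the CLI binary is whisper-cli; the model and audio file names here are placeholders):

```sh
# Transcribe a WAV file with the Q8_0 quant (example file names).
./build/bin/whisper-cli \
  -m ggml-distil-large-v3.5-q8_0.bin \
  -f audio.wav
```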

Questions you may have

Why do the "K-quants" not work for me?

My guess is that your GPU is too old to support them; I have gotten the same error on my GTX 1080. If you would like to run them regardless, you can try switching to CPU inference, as shown below.
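
Building on the earlier example, whisper.cpp's CLI accepts a -ng/--no-gpu flag that forces CPU inference (file names are placeholders again):

```sh
# Skip the GPU backend entirely and run the K-quant on the CPU.
./build/bin/whisper-cli -ng \
  -m ggml-distil-large-v3.5-q5_k.bin \
  -f audio.wav
```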

Are the K-quants "S", "M", or "L"?

The quantization tool I used did not specify, so I do not know either.

What program did you use to make these quants?

I used whisper.cpp v1.7.6 on Windows x64 with CUDA 12.4.0. For the F32 quant, I converted the original Hugging Face (H5) format model to GGML using the models/convert-h5-to-ggml.py script.
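
For reference, the overall process looks roughly like this (a sketch following the whisper.cpp models README; the local paths are placeholders, and exact arguments may differ between versions):

```sh
# Convert the Hugging Face (H5) model to GGML
# (args: model dir, path to the openai/whisper repo, output dir).
python models/convert-h5-to-ggml.py ./distil-large-v3.5 ./whisper .

# Derive a lower-precision quant from the converted model.
./build/bin/quantize ggml-model.bin ggml-model-q8_0.bin q8_0
```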

One or more of the quants are not working for me.

Open a new discussion in the community tab about this, and I will look into the issue.
