Distil-Small.en quants
This is a repository of GGML quants for distil-small.en (a Whisper-based transcription model), for use with whisper.cpp.
If you are looking for a program to run this model with, I would recommend EasyWhisper UI, as it is user-friendly, has a GUI, and automates much of the setup for you.
List of Quants
Clicking on a link will download the corresponding quant instantly.
Link | Quant | Size | Notes |
---|---|---|---|
GGML | F32 | 665 MB | Likely overkill. |
GGML | F16 | 336 MB | Performs better than Q8_0 for noisy audio and music. |
GGML | Q8_0 | 188 MB | Sweet spot; superficial quality loss at nearly double the speed. |
GGML | Q6_K | 148 MB | |
GGML | Q5_K | 127 MB | |
GGML | Q5_1 | 137 MB | |
GGML | Q5_0 | 127 MB | Last "good" quant; anything below loses quality rapidly. |
GGML | Q4_K | 106 MB | May still retain acceptable quality, but this is untested. |
GGML | Q4_1 | 117 MB | |
GGML | Q4_0 | 106 MB | |
GGML | Q3_K | 84.9 MB | |
GGML | Q2_K | 68.4 MB | Produces completely nonsensical output. |
The F32 quant was taken from distil-whisper/distil-small.en/ggml-distil-small.en.fp32.bin, and the F16 quant was taken from distil-whisper/distil-small.en/ggml-distil-small.en.bin.
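Once downloaded, a quant can be used with the whisper.cpp command-line tool directly. A minimal sketch (the file paths and quant filename here are illustrative; the binary is named `whisper-cli` in recent whisper.cpp builds and `main` in older ones):

```shell
# Transcribe a 16-kHz mono WAV file using the Q8_0 quant (filenames are examples)
./build/bin/whisper-cli \
  -m models/ggml-distil-small.en-q8_0.bin \
  -f samples/audio.wav \
  -otxt   # also write the transcript to samples/audio.wav.txt
```

Note that whisper.cpp expects 16 kHz mono WAV input; other formats can be converted first, e.g. with `ffmpeg -i input.mp3 -ar 16000 -ac 1 audio.wav`.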
Questions you may have
Why do the "K-quants" not work for me?
My guess is that your GPU is too old to support them, considering that I get the same error on my GTX 1080. If you would like to run them regardless, you can try switching to CPU inference.
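As a sketch of the CPU fallback: recent whisper.cpp builds accept a no-GPU flag that forces CPU inference (the flag name is assumed from whisper.cpp's CLI; check `--help` on your version):

```shell
# Force CPU inference so K-quants can run even if the GPU rejects them
# (filenames are examples)
./build/bin/whisper-cli -ng \
  -m models/ggml-distil-small.en-q5_k.bin \
  -f samples/audio.wav
```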
Are the K-quants "S", "M", or "L"?
The quantization tool I used did not specify, so I do not know either.
What program did you use to make these quants?
I used whisper.cpp v1.7.6 on Windows x64, with CUDA 12.4.0.
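If you want to reproduce quants like these yourself, whisper.cpp ships a `quantize` tool that converts an F16/F32 GGML model into a given quant type. A sketch of the invocation (filenames are examples; the tool's location may vary by build setup):

```shell
# Quantize the F16 GGML model down to Q5_0
./build/bin/quantize \
  models/ggml-distil-small.en.bin \
  models/ggml-distil-small.en-q5_0.bin \
  q5_0
```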
Model tree for Pomni/distil-small.en-ggml-allquants
Base model
distil-whisper/distil-small.en