Experimental layer-wise + pruned (layers 4 and 5) quantization of cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
Using LLaMA C++ release b5770 for quantization.
Original model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
From the original model creators:
Discord: https://discord.gg/h3K4XGj2RH
Website: https://dphn.ai
Twitter: https://x.com/dphnAI
What is Dolphin Mistral 24B Venice Edition?
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai with the goal of creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as "Venice Uncensored," the new default model for all Venice users.
Dolphin aims to be a general purpose model, similar to the models behind ChatGPT, Claude, Gemini. But these models present problems for businesses seeking to include AI in their products.
- They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
- They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
- They maintain control of the alignment, and in particular the alignment is one-size-fits all, not tailored to the application.
- They can see all your queries and they can potentially use that data in ways you wouldn't want.
Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.
Dolphin belongs to YOU, it is your tool, an extension of your will. Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.
From Uncensored Models, by Eric Hartford, the creator of the Dolphin model series:
Most of these models (for example, Alpaca, Vicuna, WizardLM, MPT-7B-Chat, Wizard-Vicuna, GPT4-X-Vicuna) have some sort of embedded alignment. For general purposes, this is a good thing. This is what stops the model from doing bad things, like teaching you how to cook meth and make bombs. But what is the nature of this alignment? And, why is it so?
The reason these models are aligned is that they are trained with data that was generated by ChatGPT, which itself is aligned by an alignment team at OpenAI. As it is a black box, we don't know all the reasons for the decisions that were made, but we can observe it generally is aligned with American popular culture, and to obey American law...
PLEASE READ THIS BEFORE USING THESE EXPERIMENTAL VERSIONS!
An area of personal interest is finding ways to optimize the inference performance of LLMs when deployed in resource-constrained environments like commodity hardware, desktops, laptops, mobiles, edge devices, etc. There are many approaches to accomplish this, including architecture simplification and knowledge distillation, but my focus has been primarily on quantization and pruning.
The method used to produce these experimental versions is covered in Squeezing Tensor Bits: the quest for smaller LLMs, but at a high level it involves using a custom version of llama-imatrix and llama-quantize to identify influential tensors, quantize the most important layers to higher bit precision and the less important ones to lower bits, and remove (prune) one or more layers. This process was partly inspired by Dumitru et al.'s Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs Beyond Integer Bit-Levels, and Xin Men et al.'s ShortGPT: Layers in Large Language Models are More Redundant Than You Expect.
As of version b5125, llama-quantize can perform tensor-wide quantization (TWQ), whereby user-defined tensors are quantized at a specific level, or perform layer-wise quantization (LWQ) by selecting different quantization types per tensor/layer. For example, --tensor-type attn_v=q6_k will quantize all Attention Value tensors at q6_k (TWQ), and --tensor-type "\.([0-9]|1[01257]|31)\.attn_k=q4_k" will quantize Attention Key tensors on layers 0 to 12, 15, 17 and 31 at q4_k, leaving the remaining layers at their default value (LWQ).
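For context, a minimal sketch of a full invocation using the LWQ regex above. The file names are placeholders and the tensor/quant choice is illustrative only, not the exact recipe used for this repo:

```bash
# Sketch only: quantize to Q4_K_M, explicitly setting attn_k on layers 0-12, 15, 17 and 31
# to q4_k, guided by an imatrix. All file names are placeholders.
./llama-quantize \
    --tensor-type "\.([0-9]|1[01257]|31)\.attn_k=q4_k" \
    --imatrix imatrix.dat \
    Model-F16.gguf Model-Q4_K_M.gguf q4_k_m
```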
As of version b5740, llama-quantize can also prune models during quantisation by providing a comma-separated list of layer numbers in the --prune-layers command line option. The pruning operation will renumber the remaining layers to avoid gaps in the sequence, update the relevant model metadata and, if an imatrix is available, use the correct importance score vector. This option can be used alongside --tensor-type to perform tensor/layer-wise quantization on selected tensor types whilst at the same time pruning others. For example:
llama-quantize --tensor-type attn=q6_k --prune-layers 3,7,11 --imatrix imatrix.dat model-f32.gguf model-q4_k_m.gguf q4_k_m
An enhanced version of llama-imatrix generates useful statistics to guide the tensor and layer selection process. --show-statistics will display:
- Σ(Act²): the sum of all squared activations over the tensor (i.e. the Importance Scores)
- Min & Max: minimum and maximum squared activation values
- μ & σ: activations' mean and standard deviation
- % Active: proportion of elements whose average squared activation exceeds a very small threshold (1e-5). Helpful to determine how alive/dormant the tensor is during inference
- N: number of squared activations in the tensor
- Entropy: entropy of the squared activation distribution, in bits (standard Shannon entropy measurement)
- E (norm): Normalized entropy.
- ZD Score: z-score distribution as described in 3.1 Layer Importance Scores in the Layer-Wise Quantization paper
- CosSim: cosine similarity between same type tensors with respect to the previous layer (i.e. blk.7.attn_k and blk.6.attn_k)
Please note that statistics are calculated for each individual tensor and should be used to compare between tensors of the same type only. For example, assuming that attn_k in layer 10 has a higher influence during inference than attn_k in layer 7 because its Σ(Act²) is larger makes sense, whilst concluding the same between attn_k and ffn_down does not.
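As a usage sketch, the statistics can be produced from an existing imatrix; the invocation below assumes the modified build accepts --in-file to read a previously generated imatrix, and the file name is a placeholder:

```bash
# Sketch only: print the per-tensor statistics described above from an existing imatrix file.
./llama-imatrix --in-file imatrix.dat --show-statistics
```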
There's a pull request to merge these changes back into the core llama.cpp project. This may or may not ever happen so, until then, the modified version will be available on GitHub.
For testing and comparison I use models produced by Unsloth (Daniel and Michael Han do some really advanced level stuff!) and Bartowski (see credits below), but if they don't provide versions of the required model, all tests and comparisons are done against naive quantizations obtained by simply running llama-quantize with no further optimization.
All experimental versions were generated using an appropriate imatrix created from calibration datasets available at eaddario/imatrix-calibration. At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modelled, and it helps to counterbalance the negative effects of quantization and pruning.
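Generating the imatrix itself is a standard llama-imatrix run. A minimal sketch, where calibration.txt stands in for one of the eaddario/imatrix-calibration datasets and the other file names are placeholders:

```bash
# Sketch only: run the calibration text through the F16 model and save the importance matrix.
./llama-imatrix -m Model-F16.gguf -f calibration.txt -o imatrix.dat
```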
The process to generate these models is roughly as follows:
- Convert the original model's tensors to GGUF F16*
- Estimate the Perplexity score for the F16 model (baseline) using the wikitext-2-raw-v1 dataset, and save the logits
- Generate an imatrix from selected calibration datasets
- Determine tensor and layer Importance Score contribution using the enhanced version of llama-imatrix
- Select an appropriate quant level for each tensor and quantize/prune the model using llama-quantize. In this model's case, layers 4 and 5 have been pruned
- Calculate Perplexity, KL Divergence, ARC (Easy+Challenge), HellaSwag, MMLU, Truthful QA and WinoGrande scores for each quantized model
- Keep versions with the best scores
- Repeat until all desired quants are created. I find that quantizations below Q3/IQ3 are not fit for my purposes and therefore do not usually generate them, but I'm happy to provide other quants on request.
*BF16 would be preferred, but Apple's GPUs don't support it yet, and therefore any operations are executed on the CPU, making it unacceptably slow. This is expected to change in the near term, but until then, if you are using Apple kit, avoid models tagged BF16.
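To illustrate steps 2 and 6 of the list above, a minimal sketch using llama-perplexity's KL divergence options. The file names are placeholders, and the flags shown are the stock llama.cpp ones rather than anything specific to this repo:

```bash
# Sketch only: save the F16 baseline logits over wikitext-2-raw-v1 while measuring baseline PPL.
./llama-perplexity -m Model-F16.gguf -f wikitext-2-raw-v1.txt --kl-divergence-base baseline-logits.kld

# Sketch only: score a quantized/pruned variant against those saved logits (PPL, KLD, RMS Δp, etc.).
./llama-perplexity -m Model-Q4_K_M.gguf --kl-divergence-base baseline-logits.kld --kl-divergence
```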
Models
Sizes (in GB)
Model | Bartowski | Repo | Shrinkage |
---|---|---|---|
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ3_M | 10.7 | 9.6 | 10.3% |
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ3_S | 9.9 | 9.3 | 6.2% |
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ4_NL | 13.5 | 11.6 | 14.1% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_L | 12.4 | 10.8 | 12.9% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_M | 11.5 | 9.9 | 13.9% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_S | 10.4 | 8.9 | 14.4% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_M | 14.3 | 12.4 | 13.3% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_S | 13.5 | 11.7 | 13.3% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q5_K_M | 16.8 | 14.3 | 14.9% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q5_K_S | 16.3 | 13.9 | 14.7% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q6_K | 19.7 | 16.8 | 14.7% |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q8_0 | 25.1 | 21.9 | 12.7% |
Perplexity and KL Divergence scores
Model | μPPL | ρPPL | μKLD | RMS Δp |
---|---|---|---|---|
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ3_M | 20.379006 ±0.160275 | 73.93% | 1.290608 ±0.004304 | 37.928 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ3_S | 21.165413 ±0.164512 | 73.80% | 1.340446 ±0.004301 | 38.586 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ4_NL | 18.783744 ±0.146959 | 74.79% | 1.199318 ±0.004258 | 36.745 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_L | 19.313300 ±0.150799 | 74.61% | 1.248712 ±0.004216 | 37.260 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_M | 18.723777 ±0.145380 | 75.90% | 1.226150 ±0.004006 | 36.807 ±0.087 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_S | 19.765437 ±0.153182 | 74.13% | 1.295119 ±0.004177 | 38.004 ±0.087 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_M | 18.556910 ±0.145472 | 74.92% | 1.187728 ±0.004237 | 36.521 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-bartowski | 6.304728 ±0.042418 | 99.60% | 0.016941 ±0.000138 | 4.031 ±0.037 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_S | 18.663517 ±0.146425 | 74.87% | 1.192878 ±0.004250 | 36.598 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q5_K_M | 18.174846 ±0.142320 | 75.14% | 1.159685 ±0.004238 | 36.214 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q5_K_S | 18.199918 ±0.142513 | 75.20% | 1.160040 ±0.004229 | 36.220 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q6_K | 18.213825 ±0.142965 | 75.05% | 1.158026 ±0.004262 | 36.219 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q8_0 | 18.203515 ±0.142826 | 75.02% | 1.158351 ±0.004265 | 36.227 ±0.088 |
Dolphin-Mistral-24B-Venice-Edition-pruned-F16 | 6.180577 ±0.041038 | 100% | N/A | N/A |
ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores
Scores generated using llama-perplexity with 750 tasks per test, and a context size of 768 tokens.
For the test data used in the generation of these scores, follow the appropriate links: HellaSwag, ARC, MMLU, Truthful QA and WinoGrande
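For reference, a hedged sketch of how such scores can be produced with llama-perplexity; the flags are the standard llama.cpp evaluation options, and the data file names are placeholders for the linked test sets:

```bash
# Sketch only: HellaSwag over 750 tasks with a 768-token context.
./llama-perplexity -m Model-Q4_K_M.gguf -c 768 --hellaswag --hellaswag-tasks 750 -f hellaswag-validation.txt

# Sketch only: ARC/MMLU/Truthful QA style sets use the generic multiple-choice option.
./llama-perplexity -m Model-Q4_K_M.gguf -c 768 --multiple-choice --multiple-choice-tasks 750 -f arc-validation.bin

# Sketch only: WinoGrande has its own option.
./llama-perplexity -m Model-Q4_K_M.gguf -c 768 --winogrande --winogrande-tasks 750 -f winogrande-validation.csv
```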
Model | ARC | HellaSwag | MMLU | Truthful QA | WinoGrande | Avg Score |
---|---|---|---|---|---|---|
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ3_M | 65.6000 ±1.7358 | 79.60 | 42.9333 ±1.8086 | 38.4000 ±1.7771 | 72.4000 ±1.6334 | 59.79 |
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ3_S | 64.9333 ±1.7436 | 79.87 | 42.0000 ±1.8034 | 38.0000 ±1.7736 | 72.5333 ±1.6309 | 59.47 |
Dolphin-Mistral-24B-Venice-Edition-pruned-IQ4_NL | 68.4000 ±1.6988 | 80.66 | 44.9333 ±1.8176 | 38.1333 ±1.7748 | 74.4000 ±1.5947 | 61.31 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_L | 67.2000 ±1.7155 | 80.27 | 43.2000 ±1.8100 | 39.6000 ±1.7870 | 72.9333 ±1.6235 | 60.64 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_M | 66.6667 ±1.7225 | 80.67 | 43.8667 ±1.8132 | 39.4667 ±1.7860 | 72.2667 ±1.6358 | 60.59 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q3_K_S | 66.2667 ±1.7276 | 78.93 | 43.7333 ±1.8126 | 38.1333 ±1.7748 | 72.8000 ±1.6260 | 59.97 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_M | 68.0000 ±1.7045 | 80.93 | 45.2000 ±1.8185 | 36.6667 ±1.7608 | 72.1333 ±1.6382 | 60.59 |
Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-bartowski | 69.8667 ±1.6766 | 84.27 | 45.3333 ±1.8190 | 37.6000 ±1.7699 | 80.2667 ±1.4542 | 63.47 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_S | 67.0667 ±1.7172 | 81.07 | 45.2000 ±1.8185 | 36.2667 ±1.7567 | 72.0000 ±1.6406 | 60.32 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q5_K_M | 67.0667 ±1.7172 | 81.73 | 44.5333 ±1.8160 | 37.8667 ±1.7724 | 73.8667 ±1.6054 | 61.01 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q5_K_S | 67.3333 ±1.7137 | 81.47 | 44.2667 ±1.8149 | 38.6667 ±1.7794 | 74.2667 ±1.5974 | 61.20 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q6_K | 67.4667 ±1.7119 | 81.07 | 44.5333 ±1.8160 | 39.6000 ±1.7870 | 73.8667 ±1.6054 | 61.31 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q8_0 | 68.1333 ±1.7026 | 81.33 | 44.9333 ±1.8176 | 38.2667 ±1.7759 | 74.4000 ±1.5947 | 61.41 |
Dolphin-Mistral-24B-Venice-Edition-pruned-F16 | 70.8000 ±1.6614 | 84.53 | 45.3333 ±1.8190 | 38.1333 ±1.7748 | 80.2667 ±1.4542 | 63.81 |
Tokens per Second - Benchmarks
Scores generated using llama-bench. Naive (llama-quantize with no optimization) Q4_K_M quantization included for comparison.
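A minimal sketch of the llama-bench invocation behind the table below; the model file name is a placeholder, and -p, -n and -pg mirror the pp512, tg128 and pp1024+tg1024 tests:

```bash
# Sketch only: prompt processing (512), text generation (128) and a combined 1024+1024 run on 12 threads.
./llama-bench -m Model-Q4_K_M.gguf -t 12 -p 512 -n 128 -pg 1024,1024
```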
model | size | params | backend | threads | test | t/s |
---|---|---|---|---|---|---|
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_M | 11.53 GiB | 22.46 B | Metal,BLAS | 12 | pp512 | 266.57 ±14.60 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_M | 11.53 GiB | 22.46 B | Metal,BLAS | 12 | tg128 | 27.60 ±0.54 |
Dolphin-Mistral-24B-Venice-Edition-pruned-Q4_K_M | 11.53 GiB | 22.46 B | Metal,BLAS | 12 | pp1024+tg1024 | 41.55 ±2.99 |
Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-bartowski | 13.34 GiB | 23.57 B | Metal,BLAS | 12 | pp512 | 253.67 ±2.26 |
Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-bartowski | 13.34 GiB | 23.57 B | Metal,BLAS | 12 | tg128 | 27.69 ±0.46 |
Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-bartowski | 13.34 GiB | 23.57 B | Metal,BLAS | 12 | pp1024+tg1024 | 45.54 ±0.18 |
Metrics used
Perplexity: one of the key metrics used in NLP evaluation. It measures the quality of a language model by evaluating how well it predicts the next token given a particular sequence of words. A PPL of 1 indicates an exact match between predicted and actual, whereas values greater than one indicate a degree of "surprise", i.e. how much the generated token differs from the expected one.
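For reference, the standard formulation over N evaluated tokens (not specific to this repo):

$$
\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N} \log p\left(x_i \mid x_{<i}\right)\right)
$$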
Kullback-Leibler (KL) Divergence: a statistical measure of how much one probability distribution differs from another. When quantizing models (or altering the original tensors in any way, for that matter), the closer the resulting probability distribution stays to the original model's, the better; thus, the closer to 0, the better.
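In its standard form, with P taken here as the original (F16) model's token distribution and Q as the quantized model's, averaged over the evaluation tokens:

$$
D_{\mathrm{KL}}(P \parallel Q) = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}
$$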
AI2 Reasoning Challenge (ARC): a benchmark to evaluate the ability of AI models to answer complex science questions that require logical reasoning beyond pattern matching.
HellaSwag: the Harder Endings, Longer contexts, and Low-shot Activities for Situations With Adversarial Generations (bit of a mouthful!) is a benchmark designed to test commonsense natural language inference. It requires the model to predict the most likely ending of a sentence.
MMLU: the Massive Multitask Language Understanding benchmark evaluates LLMs' general knowledge and problem-solving abilities across 57 subjects, including elementary mathematics, US history, computer science, and law.
Truthful QA: evaluates how well LLMs generate truthful responses to questions. It identifies whether AI models can avoid generating false or misleading information, particularly in areas where human knowledge is prone to misconceptions.
Winogrande: based on the Winograd Schema Challenge, this is a natural language understanding task requiring models to resolve ambiguities in sentences involving pronoun references.
Credits
LLaMA C++ has a large and vibrant community of contributors (~1,200 last time I checked) that actively maintain and extend its functionality, adding new models and architectures almost as fast as they appear (considering the breakneck speed at which the AI/ML field is advancing, this alone is a remarkable feat!). Whilst I'm grateful to each and every one of them, I want to recognise three people in particular: Thank You! to Colin Kealty for the many contributions and for being one of the best sources of high-quality quantized models available on Hugging Face; a really big Thank You! to Georgi Gerganov for his amazing work with llama.cpp and the ggml/gguf libraries; and to Iwan Kawrakow for being one of the key authors behind the many quantisation algorithms and the imatrix functionality.
Model tree for eaddario/Dolphin-Mistral-24B-Venice-Edition-pruned-GGUF
Base model: mistralai/Mistral-Small-24B-Base-2501