---
quantized_by: bartowski
pipeline_tag: text-generation
---

## Llamacpp imatrix Quantizations of Dolphin-Mistral-24B-Venice-Edition by cognitivecomputations

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5317">b5317</a> for quantization.

Original model: https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition

All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)

Run them in [LM Studio](https://lmstudio.ai/)

Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
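
One such llama.cpp based project is the `llama-cpp-python` package. As a rough sketch only (it assumes you have `llama-cpp-python` installed and that you swap in the path of the quant you actually downloaded), chat completion looks something like this, with the chat template stored in the GGUF applied for you:

```
# Minimal sketch using llama-cpp-python (one llama.cpp based project among many).
# The model path below is an example; point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf",
    n_ctx=4096,        # context window; adjust to fit your RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if possible, 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about dolphins."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```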

## Prompt format

```
<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{prompt}[/INST]
```
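
If you are sending raw completions yourself (rather than using a chat API that applies the template for you), you can fill the template in manually. A minimal sketch, assuming `system_prompt` and `prompt` are plain strings you supply:

```
def build_prompt(system_prompt: str, prompt: str) -> str:
    # Mirrors the prompt format shown above. Note that the <s> BOS token is often
    # added automatically by the loader/tokenizer, in which case you should not
    # prepend it yourself.
    return f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{prompt}[/INST]"

print(build_prompt("You are a helpful assistant.", "Hello, who are you?"))
```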

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Dolphin-Mistral-24B-Venice-Edition-bf16.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-bf16.gguf) | bf16 | 47.15GB | false | Full BF16 weights. |
| [Dolphin-Mistral-24B-Venice-Edition-Q8_0.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q8_0.gguf) | Q8_0 | 25.05GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Dolphin-Mistral-24B-Venice-Edition-Q6_K_L.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q6_K_L.gguf) | Q6_K_L | 19.67GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q6_K.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q6_K.gguf) | Q6_K | 19.35GB | false | Very high quality, near perfect, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q5_K_L.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q5_K_L.gguf) | Q5_K_L | 17.18GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q5_K_M.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q5_K_M.gguf) | Q5_K_M | 16.76GB | false | High quality, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q5_K_S.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q5_K_S.gguf) | Q5_K_S | 16.30GB | false | High quality, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q4_1.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_1.gguf) | Q4_1 | 14.87GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Dolphin-Mistral-24B-Venice-Edition-Q4_K_L.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_L.gguf) | Q4_K_L | 14.83GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf) | Q4_K_M | 14.33GB | false | Good quality, default size for most use cases, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q4_K_S.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_S.gguf) | Q4_K_S | 13.55GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q4_0.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_0.gguf) | Q4_0 | 13.49GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Dolphin-Mistral-24B-Venice-Edition-IQ4_NL.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-IQ4_NL.gguf) | IQ4_NL | 13.47GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Dolphin-Mistral-24B-Venice-Edition-Q3_K_XL.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q3_K_XL.gguf) | Q3_K_XL | 12.99GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Dolphin-Mistral-24B-Venice-Edition-IQ4_XS.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-IQ4_XS.gguf) | IQ4_XS | 12.76GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Dolphin-Mistral-24B-Venice-Edition-Q3_K_L.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q3_K_L.gguf) | Q3_K_L | 12.40GB | false | Lower quality but usable, good for low RAM availability. |
| [Dolphin-Mistral-24B-Venice-Edition-Q3_K_M.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q3_K_M.gguf) | Q3_K_M | 11.47GB | false | Low quality. |
| [Dolphin-Mistral-24B-Venice-Edition-IQ3_M.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-IQ3_M.gguf) | IQ3_M | 10.65GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Dolphin-Mistral-24B-Venice-Edition-Q3_K_S.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q3_K_S.gguf) | Q3_K_S | 10.40GB | false | Low quality, not recommended. |
| [Dolphin-Mistral-24B-Venice-Edition-IQ3_XS.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-IQ3_XS.gguf) | IQ3_XS | 9.91GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Dolphin-Mistral-24B-Venice-Edition-Q2_K_L.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q2_K_L.gguf) | Q2_K_L | 9.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Dolphin-Mistral-24B-Venice-Edition-IQ3_XXS.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-IQ3_XXS.gguf) | IQ3_XXS | 9.28GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Dolphin-Mistral-24B-Venice-Edition-Q2_K.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q2_K.gguf) | Q2_K | 8.89GB | false | Very low quality but surprisingly usable. |
| [Dolphin-Mistral-24B-Venice-Edition-IQ2_M.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-IQ2_M.gguf) | IQ2_M | 8.11GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Dolphin-Mistral-24B-Venice-Edition-IQ2_S.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-IQ2_S.gguf) | IQ2_S | 7.48GB | false | Low quality, uses SOTA techniques to be usable. |
| [Dolphin-Mistral-24B-Venice-Edition-IQ2_XS.gguf](https://huggingface.co/bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF/blob/main/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-IQ2_XS.gguf) | IQ2_XS | 7.21GB | false | Low quality, uses SOTA techniques to be usable. |

## Embed/output weights

Some of these quants (Q3_K_XL, Q4_K_L etc.) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.

## Downloading using huggingface-cli

<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF --include "cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF --include "cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q8_0/*" --local-dir ./
```

You can either specify a new local-dir (cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q8_0) or download them all in place (./)

</details>
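
If you prefer to download from Python rather than the CLI, the same `huggingface_hub` package exposes equivalent functions. A minimal sketch (the filename below is just an example; swap in whichever quant you want):

```
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single quant file
hf_hub_download(
    repo_id="bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF",
    filename="cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf",
    local_dir="./",
)

# Or grab every part of a split quant by pattern
snapshot_download(
    repo_id="bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF",
    allow_patterns=["cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q8_0/*"],
    local_dir="./",
)
```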

## ARM/AVX information

Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.

Now, however, there is something called "online repacking" for weights. Details in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.

As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.

Additionally, if you want to get slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541) which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.

<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>

I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.

<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>

| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |

Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation

</details>

</details>

## Which file should I choose?

<details>
<summary>Click here for details</summary>

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

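As a concrete illustration of that rule of thumb (the sizes below are copied from the quant table above; the 2GB margin is only a rough allowance for context and overhead, not an exact figure):

```
# Rough sizing illustration; sizes in GB are the file sizes from the table above.
quant_sizes_gb = {
    "Q8_0": 25.05, "Q6_K": 19.35, "Q5_K_M": 16.76, "Q4_K_M": 14.33,
    "IQ4_XS": 12.76, "Q3_K_M": 11.47, "IQ3_M": 10.65, "Q2_K": 8.89,
}

vram_gb = 24             # example: a 24GB GPU
budget_gb = vram_gb - 2  # leave ~1-2GB of headroom

largest_fit = max(
    (name for name, size in quant_sizes_gb.items() if size <= budget_gb),
    key=quant_sizes_gb.get,
)
print(largest_fit)  # -> Q6_K (19.35GB, the largest of these under the 22GB budget)
```
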
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

</details>

## Credits

Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.

Thank you ZeroWw for the inspiration to experiment with embed/output.

Thank you to LM Studio for sponsoring my work.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski