Upload README.md
README.md
CHANGED
@@ -108,15 +108,11 @@ Refer to the Provided Files table below to see what files use which methods, and how.
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | [mixtral-8x7b-moe-rp-story.Q2_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes |
-| [mixtral-8x7b-moe-rp-story.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q3_K_S.gguf) | Q3_K_S | 3 | 20.29 GB| 22.79 GB | very small, high quality loss |
 | [mixtral-8x7b-moe-rp-story.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss |
-| [mixtral-8x7b-moe-rp-story.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q3_K_L.gguf) | Q3_K_L | 3 | 20.43 GB| 22.93 GB | small, substantial quality loss |
 | [mixtral-8x7b-moe-rp-story.Q4_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
 | [mixtral-8x7b-moe-rp-story.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended |
-| [mixtral-8x7b-moe-rp-story.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q4_K_S.gguf) | Q4_K_S | 4 | 26.44 GB| 28.94 GB | small, greater quality loss |
 | [mixtral-8x7b-moe-rp-story.Q5_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
 | [mixtral-8x7b-moe-rp-story.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended |
-| [mixtral-8x7b-moe-rp-story.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q5_K_S.gguf) | Q5_K_S | 5 | 32.23 GB| 34.73 GB | large, low quality loss - recommended |
 | [mixtral-8x7b-moe-rp-story.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss |
 | [mixtral-8x7b-moe-rp-story.Q8_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF/blob/main/mixtral-8x7b-moe-rp-story.Q8_0.gguf) | Q8_0 | 8 | 49.62 GB| 52.12 GB | very large, extremely low quality loss - not recommended |
 
@@ -326,6 +322,8 @@ The DPO chat model is here to help get more human reply.
 
 This is my first try at doing this, so don't hesitate to give feedback!
 
+WARNING: ALL THE "K" GGUF QUANT OF MIXTRAL MODELS SEEMS TO BE [BROKEN](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/TvjEP14ps7ZUgJ-0-mhIX.png), PREFER Q4_0, Q5_0 or Q8_0!
+
 <!-- description start -->
 ## Description
 
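For anyone picking a file from the table above, the snippet below is a minimal sketch of fetching one of the non-K quants that the warning recommends, using the `huggingface_hub` Python client. The repo ID and filename are copied verbatim from the Provided Files table; the choice of Q5_0, and the assumptions that `huggingface_hub` is installed and that you have roughly 35 GB of free disk, are illustrative additions rather than part of the uploaded README.

```python
# Minimal sketch (assumptions noted above): download a single GGUF quant file
# from the repo listed in the table. hf_hub_download returns the local cache path.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF",
    filename="mixtral-8x7b-moe-rp-story.Q5_0.gguf",  # one of the Q*_0 files the warning prefers
)
print(gguf_path)
```

Any other filename from the table can be substituted; only the download size and the "Max RAM required" figure change.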