auto-patch README.md
README.md
CHANGED
@@ -48,7 +48,7 @@ static quants of https://huggingface.co/DavidAU/Qwen3-Code-Reasoning-Instruct-6B
 
 ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF).***
 
-weighted/imatrix quants
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-i1-GGUF
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -64,9 +64,15 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q2_K.gguf) | Q2_K | 2.5 | |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q3_K_S.gguf) | Q3_K_S | 2.8 | |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q3_K_M.gguf) | Q3_K_M | 3.1 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q3_K_L.gguf) | Q3_K_L | 3.4 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.IQ4_XS.gguf) | IQ4_XS | 3.4 | |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q4_K_S.gguf) | Q4_K_S | 3.6 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q4_K_M.gguf) | Q4_K_M | 3.7 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q5_K_S.gguf) | Q5_K_S | 4.2 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q5_K_M.gguf) | Q5_K_M | 4.3 | |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q6_K.gguf) | Q6_K | 5.0 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q8_0.gguf) | Q8_0 | 6.4 | fast, best quality |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF/resolve/main/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.f16.gguf) | f16 | 12.0 | 16 bpw, overkill |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
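The rows added in this patch each point at a direct download URL. As a minimal sketch of fetching one of them programmatically (assuming the `huggingface_hub` Python package, which the README itself does not mention; the `repo_id` and `filename` values come straight from the table above):

```python
# Minimal sketch: download one of the quants added in this patch.
# Assumes the huggingface_hub package is installed; any GGUF-capable
# runtime (e.g. llama.cpp) can then load the resulting file.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x-GGUF",
    filename="Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x.Q4_K_M.gguf",
)
print(path)  # local cache path of the ~3.7 GB Q4_K_M file
```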