---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
datasets:
- agentlans/common-crawl-sample
- bigcode/the-stack-smol-xl
- open-thoughts/OpenThoughts-Unverified-173k
- cognitivecomputations/dolphin-r1
tags:
- draft
- speculative-decoding
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

![russian dolls.webp](https://cdn-uploads.huggingface.co/production/uploads/65995c45539c808e84c38bf1/hAb6qi-c0wt4wA5pl4Qup.webp)

A `0.5B` parameter draft (speculative decoding) model for use with [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1).

**NOTE**: This is a draft model for the **full-sized** `DeepSeek-R1` model and not the smaller "distilled" models!

See [jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0) for the non-GGUF version, and a detailed explanation of how the model was created.
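
To run these GGUFs as a draft model in `llama.cpp`, pass one of them via `--model-draft`. A minimal sketch (the model paths are placeholders, and the draft-related flag names vary between `llama.cpp` builds, so check `llama-server --help` for yours):

```bash
# Serve the full-sized target model with this 0.5B model drafting for it.
# --draft-max / --draft-min are the names in recent builds; older ones
# used a single --draft option. Paths below are placeholders.
./llama-server \
    --model DeepSeek-R1-Q4_K_M.gguf \
    --model-draft DeepSeek-R1-DRAFT-0.5B-IQ4_XS.gguf \
    --ctx-size 16384 \
    --draft-max 16 \
    --draft-min 1
```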

---

# Without `imatrix`

Link | Type | PPL | PPL vs BF16
-----|------|-----|------------
[DeepSeek-R1-DRAFT-0.5B-BF16.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-BF16.gguf) | BF16 | 11.0267 ± 0.08658 | ---
[DeepSeek-R1-DRAFT-0.5B-F16.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-F16.gguf) | F16 | 11.0294 ± 0.08660 | +0.02%
[DeepSeek-R1-DRAFT-0.5B-Q8_0.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q8_0.gguf) | Q8_0 | 11.0450 ± 0.08675 | +0.17%
[DeepSeek-R1-DRAFT-0.5B-Q6_K.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q6_K.gguf) | Q6_K | 11.1231 ± 0.08732 | +0.87%
[DeepSeek-R1-DRAFT-0.5B-Q5_K_M.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q5_K_M.gguf) | Q5_K_M | 11.2727 ± 0.08902 | +2.23%
[DeepSeek-R1-DRAFT-0.5B-Q5_K_S.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q5_K_S.gguf) | Q5_K_S | 11.2803 ± 0.08888 | +2.30%
[DeepSeek-R1-DRAFT-0.5B-Q4_K_M.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q4_K_M.gguf) | Q4_K_M | 11.8171 ± 0.09319 | +7.17%
[DeepSeek-R1-DRAFT-0.5B-Q4_K_S.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q4_K_S.gguf) | Q4_K_S | 11.9379 ± 0.09380 | +8.26%
[DeepSeek-R1-DRAFT-0.5B-IQ4_NL.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-IQ4_NL.gguf) | IQ4_NL | 11.8497 ± 0.09445 | +7.46%
[DeepSeek-R1-DRAFT-0.5B-IQ4_XS.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-IQ4_XS.gguf) | IQ4_XS | 11.8600 ± 0.09464 | +7.56%
[DeepSeek-R1-DRAFT-0.5B-Q5_1.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q5_1.gguf) | Q5_1 | 11.3624 ± 0.08926 | +3.05%
[DeepSeek-R1-DRAFT-0.5B-Q5_0.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q5_0.gguf) | Q5_0 | 11.5217 ± 0.09124 | +4.49%
[DeepSeek-R1-DRAFT-0.5B-Q4_1.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q4_1.gguf) | Q4_1 | 12.3107 ± 0.09765 | +11.64%
[DeepSeek-R1-DRAFT-0.5B-Q4_0.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/no-imatrix/DeepSeek-R1-DRAFT-0.5B-Q4_0.gguf) | Q4_0 | 12.6168 ± 0.10021 | +14.42%
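
In both tables, the `PPL vs BF16` column is the relative increase in perplexity over the `BF16` baseline, e.g. for `Q8_0`: 11.0450 / 11.0267 − 1 ≈ +0.17%.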

# With `imatrix`

Link | Type | PPL | PPL vs BF16
-----|------|-----|------------
[DeepSeek-R1-DRAFT-0.5B-iQ6_K.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ6_K.gguf) | Q6_K | 11.0940 ± 0.08714 | +0.61%
[DeepSeek-R1-DRAFT-0.5B-iQ5_K_M.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ5_K_M.gguf) | Q5_K_M | 11.2333 ± 0.08819 | +1.87%
[DeepSeek-R1-DRAFT-0.5B-iQ5_K_S.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ5_K_S.gguf) | Q5_K_S | 11.2238 ± 0.08798 | +1.79%
[DeepSeek-R1-DRAFT-0.5B-iQ4_K_M.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ4_K_M.gguf) | Q4_K_M | 11.6273 ± 0.09165 | +5.45%
[DeepSeek-R1-DRAFT-0.5B-iQ4_K_S.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ4_K_S.gguf) | Q4_K_S | 11.7004 ± 0.09225 | +6.11%
[DeepSeek-R1-DRAFT-0.5B-iIQ4_NL.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iIQ4_NL.gguf) | IQ4_NL | 11.6495 ± 0.09192 | +5.65%
[DeepSeek-R1-DRAFT-0.5B-iIQ4_XS.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iIQ4_XS.gguf) | IQ4_XS | 11.6924 ± 0.09246 | +6.04%
[DeepSeek-R1-DRAFT-0.5B-iQ5_1.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ5_1.gguf) | Q5_1 | 11.2001 ± 0.08792 | +1.57%
[DeepSeek-R1-DRAFT-0.5B-iQ5_0.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ5_0.gguf) | Q5_0 | 11.3579 ± 0.08961 | +3.00%
[DeepSeek-R1-DRAFT-0.5B-iQ4_1.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ4_1.gguf) | Q4_1 | 11.7469 ± 0.09250 | +6.53%
[DeepSeek-R1-DRAFT-0.5B-iQ4_0.gguf](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/with-imatrix/DeepSeek-R1-DRAFT-0.5B-iQ4_0.gguf) | Q4_0 | 12.1546 ± 0.09619 | +10.23%

- Based on these results, my suggestion is to use `IQ4_XS` unless you have a good reason not to (e.g., use `Q4_K_S` if `IQ4_XS` runs slowly on your hardware, or `Q4_0` if running on CPU).
- I am not sure whether the versions created using the `imatrix` file are actually better or worse in practice (more thorough testing is needed; PPL might not be a good predictor of actual draft acceptance rates!). The sketch after this list shows one way to measure acceptance rates directly.
- Both `deepseek-r1` and `qwen-2.5` use [YaRN](https://arxiv.org/abs/2309.00071) as their context-window extension method. For the best output quality, use smaller contexts (e.g., `16k`) when you can. Because of the way `YaRN` is implemented in `llama.cpp`, simply setting the context to a massive value **will** degrade both the draft and target models' outputs!
- Do not use quants below 4-bit for speculative decoding: the large drop in quality will not be offset by the reduced memory bandwidth!
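
Since PPL is only a proxy, `llama.cpp`'s bundled `llama-speculative` example can be used to compare candidate quants by what actually matters: it reports drafted vs. accepted token counts after a run. A rough sketch, assuming placeholder paths and a recent build (flag names may differ in yours):

```bash
# Compare draft acceptance between the plain and imatrix IQ4_XS quants;
# acceptance statistics are printed at the end of each run.
for DRAFT in DeepSeek-R1-DRAFT-0.5B-IQ4_XS.gguf DeepSeek-R1-DRAFT-0.5B-iIQ4_XS.gguf; do
    ./llama-speculative \
        --model DeepSeek-R1-Q4_K_M.gguf \
        --model-draft "$DRAFT" \
        --prompt "Write a merge sort in Python." \
        --n-predict 512
done
```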

---

I have included the [imatrix file](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/DeepSeek-R1-DRAFT-0.5B-BF16.imatrix) used to generate the `Q4_0`-`Q6_K` quants, along with the [1MB sample of the fine-tuning data](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/DeepSeek-R1-DRAFT-imatrix-data.txt) used to create it.
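
If you want to reproduce or tweak the `imatrix` quants yourself, the usual two-step `llama.cpp` recipe is sketched below (filenames follow this repo; check your build's `--help` output for exact flag spellings):

```bash
# 1. Compute the importance matrix from the calibration sample.
./llama-imatrix \
    --model DeepSeek-R1-DRAFT-0.5B-BF16.gguf \
    --file DeepSeek-R1-DRAFT-imatrix-data.txt \
    -o DeepSeek-R1-DRAFT-0.5B-BF16.imatrix

# 2. Quantize with the importance matrix applied.
./llama-quantize \
    --imatrix DeepSeek-R1-DRAFT-0.5B-BF16.imatrix \
    DeepSeek-R1-DRAFT-0.5B-BF16.gguf \
    DeepSeek-R1-DRAFT-0.5B-iIQ4_XS.gguf \
    IQ4_XS
```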

I have also included the [1MB sample of the fine-tuning data](https://huggingface.co/jukofyork/DeepSeek-R1-DRAFT-0.5B-v1.0-GGUF/blob/main/DeepSeek-R1-DRAFT-perplexity-test-data.txt) used to calculate the PPL figures above with `llama-perplexity`'s default settings.
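
The PPL figures should be reproducible with something along these lines (a sketch using `llama-perplexity`'s defaults; swap in whichever quant you want to check):

```bash
# Perplexity over the bundled 1MB test sample, using default settings.
./llama-perplexity \
    --model DeepSeek-R1-DRAFT-0.5B-IQ4_XS.gguf \
    --file DeepSeek-R1-DRAFT-perplexity-test-data.txt
```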