morriszms committed on
Commit
2897888
·
verified ·
1 Parent(s): 38e7985

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ DeepSeek-R1-Distill-Llama-3B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
DeepSeek-R1-Distill-Llama-3B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:213c0478997e4574d1946d85d95e54656cef02c129a03dc74eb101e7c8109020
+ size 1363932736
DeepSeek-R1-Distill-Llama-3B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0bc9232f5f78949d1a22b47783028a14080b1b8ec29b29fbaff3d689b5b1ca2b
+ size 1815344704
DeepSeek-R1-Distill-Llama-3B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3dcc904ef956274aba1cd1d2e28607dfd93d7b265b09ce7bac9abc09fb1fc03
+ size 1687156288
DeepSeek-R1-Distill-Llama-3B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54b2b54669afb134b453348faf7597aff89ba010ebd87ddf858d634094e5ab13
+ size 1542846016
DeepSeek-R1-Distill-Llama-3B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32fd77c178731ccee89f7fa240ba626a96f6f6721a0bedae323cc25065d500c3
+ size 1917187648
DeepSeek-R1-Distill-Llama-3B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:859a46d8995ed079432fb45bf9abc7d12c44ab0b56f891de5a5049aa552dc4cc
+ size 2019374656
DeepSeek-R1-Distill-Llama-3B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59dc612de6e21bdc59d15c60d2cebfdcc537589f1d3ea710a52bd3197e69f86e
+ size 1928197696
DeepSeek-R1-Distill-Llama-3B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b44b98434aba6c71ead83b97e2c9b4ccd734fb18a2255ea408530fdbfe0be0b
+ size 2269509184
DeepSeek-R1-Distill-Llama-3B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ee1dbc06d28091a8d6735c2b5874f37184af17fa258b83444bee7811bd9bf67
+ size 2322150976
DeepSeek-R1-Distill-Llama-3B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d58867e5957ffb3574a3f6c21e83b37ce8ed5806bbff8944784d1b8864cd1d3
+ size 2269509184
DeepSeek-R1-Distill-Llama-3B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74fabf7dec352ae2a33d8d3331a00498c8f8830fcade8435b87a8720018d8ce6
+ size 2643850816
DeepSeek-R1-Distill-Llama-3B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c18570b116510cac780b9b9664480aeb6beee2124292f88fca6c4f66204e0ec6
+ size 3421896256
README.md ADDED
@@ -0,0 +1,184 @@
+ ---
+ language:
+ - en
+ license: mit
+ library_name: transformers
+ tags:
+ - reasoning
+ - axolotl
+ - r1
+ - TensorBlock
+ - GGUF
+ base_model: suayptalha/DeepSeek-R1-Distill-Llama-3B
+ datasets:
+ - ServiceNow-AI/R1-Distill-SFT
+ pipeline_tag: text-generation
+ model-index:
+ - name: DeepSeek-R1-Distill-Llama-3B
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 70.93
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/DeepSeek-R1-Distill-Llama-3B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 21.45
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/DeepSeek-R1-Distill-Llama-3B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 20.92
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/DeepSeek-R1-Distill-Llama-3B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 1.45
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/DeepSeek-R1-Distill-Llama-3B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 2.91
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/DeepSeek-R1-Distill-Llama-3B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 21.98
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/DeepSeek-R1-Distill-Llama-3B
+       name: Open LLM Leaderboard
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+   <div style="display: flex; flex-direction: column; align-items: flex-start;">
+     <p style="margin-top: 0.5em; margin-bottom: 0em;">
+       Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+     </p>
+   </div>
+ </div>
+
+ ## suayptalha/DeepSeek-R1-Distill-Llama-3B - GGUF
+
+ This repo contains GGUF format model files for [suayptalha/DeepSeek-R1-Distill-Llama-3B](https://huggingface.co/suayptalha/DeepSeek-R1-Distill-Llama-3B).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
+
+ <div style="text-align: left; margin: 20px 0;">
+   <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+     Run them on the TensorBlock client using your local machine ↗
+   </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+ {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+ ```
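When calling the model through a raw completion API (rather than a chat endpoint that applies the template for you), the template above has to be filled in by hand. A minimal Python sketch; `build_prompt` is a hypothetical helper name, not part of any library:

```python
# Llama 3 chat template used by this model, as shown above.
PROMPT_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
)


def build_prompt(system_prompt: str, prompt: str) -> str:
    # Substitute the two placeholders; the trailing assistant header is
    # left open so the model generates its reply after it.
    return PROMPT_TEMPLATE.format(system_prompt=system_prompt, prompt=prompt)


print(build_prompt("You are a helpful assistant.", "What is the capital of France?"))
```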
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [DeepSeek-R1-Distill-Llama-3B-Q2_K.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q2_K.gguf) | Q2_K | 1.364 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [DeepSeek-R1-Distill-Llama-3B-Q3_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q3_K_S.gguf) | Q3_K_S | 1.543 GB | very small, high quality loss |
+ | [DeepSeek-R1-Distill-Llama-3B-Q3_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q3_K_M.gguf) | Q3_K_M | 1.687 GB | very small, high quality loss |
+ | [DeepSeek-R1-Distill-Llama-3B-Q3_K_L.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q3_K_L.gguf) | Q3_K_L | 1.815 GB | small, substantial quality loss |
+ | [DeepSeek-R1-Distill-Llama-3B-Q4_0.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q4_0.gguf) | Q4_0 | 1.917 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [DeepSeek-R1-Distill-Llama-3B-Q4_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q4_K_S.gguf) | Q4_K_S | 1.928 GB | small, greater quality loss |
+ | [DeepSeek-R1-Distill-Llama-3B-Q4_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q4_K_M.gguf) | Q4_K_M | 2.019 GB | medium, balanced quality - recommended |
+ | [DeepSeek-R1-Distill-Llama-3B-Q5_0.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q5_0.gguf) | Q5_0 | 2.270 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [DeepSeek-R1-Distill-Llama-3B-Q5_K_S.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q5_K_S.gguf) | Q5_K_S | 2.270 GB | large, low quality loss - recommended |
+ | [DeepSeek-R1-Distill-Llama-3B-Q5_K_M.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q5_K_M.gguf) | Q5_K_M | 2.322 GB | large, very low quality loss - recommended |
+ | [DeepSeek-R1-Distill-Llama-3B-Q6_K.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q6_K.gguf) | Q6_K | 2.644 GB | very large, extremely low quality loss |
+ | [DeepSeek-R1-Distill-Llama-3B-Q8_0.gguf](https://huggingface.co/tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF/blob/main/DeepSeek-R1-Distill-Llama-3B-Q8_0.gguf) | Q8_0 | 3.422 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Download instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF --include "DeepSeek-R1-Distill-Llama-3B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
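For scripted downloads without the CLI, individual files can also be fetched over plain HTTPS: Hugging Face serves raw repository files under the `resolve` URL pattern. A minimal sketch; `gguf_download_url` is a hypothetical helper name:

```python
def gguf_download_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct download URL for a file in a Hugging Face repo.

    Hugging Face serves raw repo files at
    https://huggingface.co/<repo_id>/resolve/<revision>/<filename>.
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"


# Direct URL for the recommended Q4_K_M quant in this repo.
url = gguf_download_url(
    "tensorblock/DeepSeek-R1-Distill-Llama-3B-GGUF",
    "DeepSeek-R1-Distill-Llama-3B-Q4_K_M.gguf",
)
print(url)
```

The resulting URL can be passed to `wget`, `curl -L`, or any HTTP client; LFS-backed files like these GGUFs are redirected to the actual storage backend.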