---
base_model: mattshumer/Reflection-Llama-3.1-70B
inference: false
library_name: gguf
license: llama3.1
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 32bit
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# Reflection-Llama-3.1-70B-IMat-GGUF
_Llama.cpp imatrix quantization of mattshumer/Reflection-Llama-3.1-70B_

Original Model: [mattshumer/Reflection-Llama-3.1-70B](https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B)
Original dtype: `FP32` (`float32`)
Quantized by: llama.cpp [b3671](https://github.com/ggerganov/llama.cpp/releases/tag/b3671)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
  - [IMatrix](#imatrix)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Simple chat template](#simple-chat-template)
  - [Chat template with system prompt](#chat-template-with-system-prompt)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/Reflection-Llama-3.1-70B-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Reflection-Llama-3.1-70B.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Reflection-Llama-3.1-70B.F32 | F32 | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| Reflection-Llama-3.1-70B.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Reflection-Llama-3.1-70B.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Reflection-Llama-3.1-70B-IMat-GGUF --include "Reflection-Llama-3.1-70B.Q8_0.gguf" --local-dir ./
```
If the model file is large, it has been split into multiple files. To download them all into a local folder, run:
```
huggingface-cli download legraphista/Reflection-Llama-3.1-70B-IMat-GGUF --include "Reflection-Llama-3.1-70B.Q8_0/*" --local-dir ./
# see the FAQ for merging GGUFs
```
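
Alternatively, the same download can be scripted from Python via the `huggingface_hub` library. A minimal sketch (the Q8_0 pattern is just an example; it matches both the single-file quant and a split chunks folder):
```
# pip install -U huggingface_hub
from huggingface_hub import snapshot_download

# Fetch only the Q8_0 artifacts (single .gguf file or split chunks)
# into the current directory.
snapshot_download(
    repo_id="legraphista/Reflection-Llama-3.1-70B-IMat-GGUF",
    allow_patterns=["Reflection-Llama-3.1-70B.Q8_0*"],
    local_dir=".",
)
```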

---

## Inference

### Simple chat template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024

<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>

{next_user_prompt}<|eot_id|>
```

### Chat template with system prompt
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024

You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>

{next_user_prompt}<|eot_id|>
```
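
If you build prompts by hand (e.g. to pass via `-p`), the placeholders above are plain string substitutions. A minimal Python sketch for a single user turn; passing an empty `system_prompt` reproduces the simple template:
```
SYSTEM_PROMPT = (
    "You are a world-class AI system, capable of complex reasoning and reflection. "
    "Reason through the query inside <thinking> tags, and then provide your final "
    "response inside <output> tags. If you detect that you made a mistake in your "
    "reasoning at any point, correct yourself inside <reflection> tags."
)

def build_prompt(user_prompt, system_prompt=SYSTEM_PROMPT):
    # Mirrors the template above for one user turn; the model's reply
    # is generated after the trailing assistant header.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        "Cutting Knowledge Date: December 2023\n"
        "Today Date: 26 Jul 2024\n\n"
        + system_prompt
        + "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        + user_prompt
        + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("What is 2 + 2?"))
```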

### Llama.cpp
```
llama.cpp/main -m Reflection-Llama-3.1-70B.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
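
Note that recent llama.cpp builds prefix the binaries (`llama-cli`, `llama-server`). `llama-server` exposes an OpenAI-compatible API and applies the chat template for you; a sketch of querying it from Python (assumes the default port 8080 and the third-party `requests` package):
```
# Start the server first, e.g.:
#   llama-server -m Reflection-Llama-3.1-70B.Q8_0.gguf
import requests

response = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What is 2 + 2?"}],
        "temperature": 0.7,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```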

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Reflection-Llama-3.1-70B.Q8_0`)
3. Run `gguf-split --merge Reflection-Llama-3.1-70B.Q8_0/Reflection-Llama-3.1-70B.Q8_0-00001-of-XXXXX.gguf Reflection-Llama-3.1-70B.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!