shenzhi-wang and mradermacher committed
Commit d95c5ab · verified · 0 parents

Duplicate from mradermacher/Xwen-7B-Chat-GGUF

Co-authored-by: team mradermacher <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,47 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ Xwen-7B-Chat.f16.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ base_model: xwen-team/Xwen-7B-Chat
+ language:
+ - en
+ - zh
+ library_name: transformers
+ license: apache-2.0
+ quantized_by: mradermacher
+ ---
+ ## About
+
+ <!-- ### quantize_version: 2 -->
+ <!-- ### output_tensor_quantised: 1 -->
+ <!-- ### convert_type: hf -->
+ <!-- ### vocab_type: -->
+ <!-- ### tags: -->
+ Static quants of https://huggingface.co/xwen-team/Xwen-7B-Chat
+
+ <!-- provided-files -->
+ Weighted/imatrix quants are available at https://huggingface.co/mradermacher/Xwen-7B-Chat-i1-GGUF
+
+ ## Usage
+
+ If you are unsure how to use GGUF files, refer to one of [TheBloke's
+ READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+ more details, including how to concatenate multi-part files.
+
+ ## Provided Quants
+
+ (sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
+
+ | Link | Type | Size/GB | Notes |
+ |:-----|:-----|--------:|:------|
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q2_K.gguf) | Q2_K | 3.1 | |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
+ | [GGUF](https://huggingface.co/mradermacher/Xwen-7B-Chat-GGUF/resolve/main/Xwen-7B-Chat.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
+
+ Here is a handy graph by ikawrakow comparing some lower-quality quant
+ types (lower is better):
+
+ ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+ And here are Artefact2's thoughts on the matter:
+ https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+ ## FAQ / Model Request
+
+ See https://huggingface.co/mradermacher/model_requests for answers to
+ questions you might have and/or if you want some other model quantized.
+
+ ## Thanks
+
+ I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+ me use its servers and for providing upgrades to my workstation, enabling
+ this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+
+ <!-- end -->
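The README's Size/GB column can be sanity-checked against the LFS pointer sizes below by estimating effective bits per weight. This is a minimal sketch; the ~7.62B parameter count is an assumption for a 7B-class Qwen-based model, not a figure stated in this repo:

```python
# Estimate effective bits per weight (bpw) of a quantized GGUF file
# from its on-disk size. N_PARAMS is an assumed parameter count for a
# 7B-class model, not an official figure for Xwen-7B-Chat.
N_PARAMS = 7.62e9

def bits_per_weight(size_bytes: int, n_params: float = N_PARAMS) -> float:
    # 8 bits per byte, divided across all weights.
    return size_bytes * 8 / n_params

# Sizes taken from the LFS pointers in this commit:
print(round(bits_per_weight(15237853920), 1))  # f16 -> ~16.0
print(round(bits_per_weight(4683074272), 1))   # Q4_K_M -> ~4.9
```

The f16 estimate matching the table's "16 bpw" note suggests the assumed parameter count is roughly right; quant files also carry some non-quantized tensors and metadata, so the per-quant figures are approximate.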
Xwen-7B-Chat.IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5462bd2a57f9317c0d30324b8a927d4cabf792bc18eb826b0df56175bedb2faf
+ size 4250299104
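Each of the `.gguf` entries in this commit is a Git LFS pointer in the format shown above (version, oid, size). A minimal sketch of parsing such a pointer and verifying a downloaded blob against it — the helper names are hypothetical, not part of any repo tooling:

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; the oid carries a "sha256:" prefix.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].split(":", 1)[1],
        "size": int(fields["size"]),
    }

def verify_blob(path: str, pointer: dict) -> bool:
    # Stream in 1 MiB chunks so multi-GB GGUF blobs need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == pointer["sha256"]

ptr = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:5462bd2a57f9317c0d30324b8a927d4cabf792bc18eb826b0df56175bedb2faf\n"
    "size 4250299104\n"
)
# ptr["size"] == 4250299104
```

Checking the sha256 after downloading from the `resolve/main/...` URLs in the README is a cheap way to catch truncated or corrupted downloads.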
Xwen-7B-Chat.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3a79f086ef167d2755ddf0a3acb18f36802b348ec3d067e7cd994ec955230c8
+ size 3015940832
Xwen-7B-Chat.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:baabe15d2cb6addfd96f3617bca96f6239f64a7157d8900997354a09c521cc03
+ size 4088460000
Xwen-7B-Chat.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a304c96b39e939b09546071c997d3935670b57823a8a89946af95479cedd1222
+ size 3808391904
Xwen-7B-Chat.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:daeabc1efca6c67503d857313a1d19da062dd112d6c5cc831c7fee6274f2dd70
+ size 3492369120
Xwen-7B-Chat.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8912f1656f9cfeb98b4197910cfeda2faf8ba15f0e58559c54500058f13fe5d
+ size 4683074272
Xwen-7B-Chat.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2835c409187fe844c92b9bdea5975971751e26285f77a567f0ba9068efaa0be7
+ size 4457769696
Xwen-7B-Chat.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:534c045642a690d3e8aa725bbda91db0e87106ab0e6af9dfcb8787d865539ef3
+ size 5444831968
Xwen-7B-Chat.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d9e433ab00d8552f5b53f6cb7d11563a63a768fa583e253d3c4e0251c9a80e0
+ size 5315177184
Xwen-7B-Chat.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4994db16c0937e13becac4f1e16d4dcfb70426abb28c084cf9331e4f7e09a74d
+ size 6254199520
Xwen-7B-Chat.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35386b21707998ecfb4b67bf1108dd1e12128728519522910af8f415da07eaff
+ size 8098525920
Xwen-7B-Chat.f16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:433833431a03604c1194559a69ba6df93f382f6c82c845457aeb26ec81ca9170
+ size 15237853920