Upload folder using huggingface_hub (#1)
- ae5675443901ac31d2a9117efb0835790989ff26bf3548e797e5354937103b33 (2a01a3b2f57c2e47d0a73541c45d556e906e3513)
- ae23788462f9fffa3c5608cdfb16e0cd6a16323fdf3c19e1605a262bf03a2bc4 (93df8057256ca33d1d31afecafd9e7293be819dd)
- 2576ba9774b8a07cc716b68f27ea73f3dbfaf2b2e52ab6db5c3e6abcb3dc5e0e (c27924e1fff6df91529dee594b4107de7e9f8a4e)
- 589388fd36a26b96ac0293cfbb7cf8e6367fb2bb07bafbbc0066059b2fbd5a31 (fe11f961aae8a1ccb2fbf855cd4424cab8654601)
- 20ed0256f3245ff79c74ee6c9e6649679502978155ae6eed552d51a14197c01e (5f4c08690fd092580b6fa8014cc5077490e4c3e9)
- df0e1efe652cdef8bd227fc1bf2e6ff356eb369a99cc361f93b6dec68d5d11dd (ebf7741701cc5dcf998172d48c608fb449dfccd9)
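For reference, an upload like this can be reproduced with the `upload_folder` API from `huggingface_hub`. The sketch below is a minimal illustration, not the exact command used for this commit; the local folder name and the `allow_patterns` filter are assumptions.

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` or the HF_TOKEN env var

# Push the GGUF files plus the model card in a single commit, mirroring the
# "Upload folder using huggingface_hub" commit message above.
api.upload_folder(
    folder_path="hush-qwen2.5-7b-gguf",  # hypothetical local output directory
    repo_id="MaziyarPanahi/Hush-Qwen2.5-7B-RP-v1.4-1M-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
    allow_patterns=["*.gguf", "README.md", ".gitattributes"],
)
```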
.gitattributes
CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Hush-Qwen2.5-7B-RP-v1.4-1M.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Hush-Qwen2.5-7B-RP-v1.4-1M.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Hush-Qwen2.5-7B-RP-v1.4-1M.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Hush-Qwen2.5-7B-RP-v1.4-1M.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Hush-Qwen2.5-7B-RP-v1.4-1M.fp16.gguf filter=lfs diff=lfs merge=lfs -text
Hush-Qwen2.5-7B-RP-v1.4-1M.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f978f66a8604d3dbe09cbf9852ca0624c7dba94ad7390e142666c38bfa2164c
+size 5442666400
Hush-Qwen2.5-7B-RP-v1.4-1M.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5bed8dcd1652b99413a0aba2618681875877d5c0251ddc2d2937e62132c850db
+size 5313011616
Hush-Qwen2.5-7B-RP-v1.4-1M.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f615126e3aaeb54629739893f500d43865c36c5f908cf99e3ce333acfb717e6
+size 6251844032
Hush-Qwen2.5-7B-RP-v1.4-1M.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d98e618575775ef59eb30ef43db29b1ef47916c407b8b9a25176a98d4ec362d
+size 8095477760
Hush-Qwen2.5-7B-RP-v1.4-1M.fp16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d38e8fbe01b29cf6eaf0675b88f5c7ef987491520327b792fcf2d824cba12ab
+size 15232124480
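Each of the entries above is a Git LFS pointer: the repository itself stores only the `version`, `oid sha256`, and `size` fields, while the actual GGUF blob lives in LFS storage. One way to confirm that a downloaded file matches its pointer is to recompute the size and hash locally. This is a minimal sketch using the Q5_K_M values shown above; it assumes the file has already been downloaded into the working directory.

```python
import hashlib
import os

# Values copied from the Q5_K_M LFS pointer above.
EXPECTED_SHA256 = "5f978f66a8604d3dbe09cbf9852ca0624c7dba94ad7390e142666c38bfa2164c"
EXPECTED_SIZE = 5442666400

path = "Hush-Qwen2.5-7B-RP-v1.4-1M.Q5_K_M.gguf"  # assumed local download path

# Size check is cheap, so do it first.
assert os.path.getsize(path) == EXPECTED_SIZE, "size does not match the LFS pointer"

# Stream the file in 1 MiB chunks to avoid loading ~5 GB into memory.
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == EXPECTED_SHA256, "sha256 does not match the LFS pointer oid"
print("checksum and size match the Git LFS pointer")
```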
README.md
ADDED
@@ -0,0 +1,45 @@
---
base_model: marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.4-1M
inference: false
model_creator: marcuscedricridia
model_name: Hush-Qwen2.5-7B-RP-v1.4-1M-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Hush-Qwen2.5-7B-RP-v1.4-1M-GGUF](https://huggingface.co/MaziyarPanahi/Hush-Qwen2.5-7B-RP-v1.4-1M-GGUF)
- Model creator: [marcuscedricridia](https://huggingface.co/marcuscedricridia)
- Original model: [marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.4-1M](https://huggingface.co/marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.4-1M)

## Description
[MaziyarPanahi/Hush-Qwen2.5-7B-RP-v1.4-1M-GGUF](https://huggingface.co/MaziyarPanahi/Hush-Qwen2.5-7B-RP-v1.4-1M-GGUF) contains GGUF format model files for [marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.4-1M](https://huggingface.co/marcuscedricridia/Hush-Qwen2.5-7B-RP-v1.4-1M).
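A single quant can be pulled from this repo with `hf_hub_download` from `huggingface_hub`. The sketch below fetches the Q5_K_M file listed in this commit; the choice of quant is illustrative.

```python
from huggingface_hub import hf_hub_download

# Downloads into the local Hugging Face cache and returns the resolved path.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Hush-Qwen2.5-7B-RP-v1.4-1M-GGUF",
    filename="Hush-Qwen2.5-7B-RP-v1.4-1M.Q5_K_M.gguf",
)
print(model_path)
```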
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source, locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of this writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

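As a concrete example of the list above, a minimal chat call with llama-cpp-python could look like the following sketch; the model path, context size, and GPU offload setting are illustrative assumptions rather than recommendations from the model author.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Hush-Qwen2.5-7B-RP-v1.4-1M.Q5_K_M.gguf",  # e.g. the file fetched above
    n_ctx=4096,        # context window for this session, not the model's maximum
    n_gpu_layers=-1,   # offload all layers when a GPU-enabled build is installed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```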
## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.