MaziyarPanahi committed
Commit ff60af6 · verified · 1 parent: 0a801cb

Upload folder using huggingface_hub (#1)

- adc4af4b9982992e34e28f39086e44dc04668471c944c711c5589d549beffbe0 (9c4e9784c645715de45d6eb7e7fd596d6f555411)
- 9d30d5f2eda9ff333edbe95820a68269d2f37fb0f693fa6ff66dc55268c5c455 (5faf0a2903eeae9675640391df092712ed12578f)
- 10170aa4cf1944171ce630a6d1133da0821e4947189b0ea8d36405e1b2435733 (6b5c0c236be5a47120fa629c4048d1320c8e9052)
- 312d09f6671dfe0e136d19eaf5f9a64f6b9f1fc71b9cfa6dff31687c511aaf20 (10f5b5b4a9a47ebc292fd996e2f409544c550c61)
- faedc8d203c9a8e82195b993d1b99422a626aba829aa5d52ba37fe654cac8923 (1fed80bb682312482a6b4b9e1279b0b224304831)
- f3ca250cacc97e92621d64dfa63a4ec4617993455ac7fc53930d3f7729b89668 (eeb48c310e52d6b1da75c773cc0d659731e1892f)

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-abliterated-v2.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-abliterated-v2.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-abliterated-v2.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-abliterated-v2.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-abliterated-v2.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+Qwen2.5-7B-Instruct-abliterated-v2-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
Qwen2.5-7B-Instruct-abliterated-v2-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d32c5a3696a347b1a521db0d29499a39d859f17aafde103b8bbc83992b2aa48
+size 4536654
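Each of the large files in this commit is stored as a Git LFS pointer with the fixed three-line format shown above (`version`, `oid`, `size`). As a minimal sketch of reading that format, here `parse_lfs_pointer` is an illustrative helper, not part of git-lfs or this repository's tooling:

```python
# Sketch: parse a Git LFS pointer file (the three-line format shown above).
# `parse_lfs_pointer` is an illustrative helper, not part of git-lfs itself.

def parse_lfs_pointer(text: str) -> dict:
    """Return {"version", "oid", "size"} from LFS pointer text."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4d32c5a3696a347b1a521db0d29499a39d859f17aafde103b8bbc83992b2aa48
size 4536654
"""
info = parse_lfs_pointer(pointer)
print(info["oid"][:12], info["size"])  # -> 4d32c5a3696a 4536654
```

The `size` field is the byte count of the real object, which is how the file sizes quoted below (5.4 GB for Q5_K_M, 15.2 GB for fp16, etc.) can be read straight off the pointers.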
Qwen2.5-7B-Instruct-abliterated-v2.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d99098bdfd801a61179575364e665aaeb780bb315704cc2af0b68f2557bd3c5
+size 5444832064
Qwen2.5-7B-Instruct-abliterated-v2.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ecf186a9c9a4a2cd5f03720f93fceb405722b7b6d10f6febf4a2db05e808a16
+size 5315177280
Qwen2.5-7B-Instruct-abliterated-v2.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15b65a0263c3fdda75d032d3749c27685a036449c02f5c9a01ede3fd6eb48082
+size 6254199616
Qwen2.5-7B-Instruct-abliterated-v2.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db7f7cd4a0fe5f08803e3d314222318c97217b8406f0593c827ec4225aee55f0
+size 8098526016
Qwen2.5-7B-Instruct-abliterated-v2.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae36771d64b4cc80a5aeb4325ef1dffe56804d1dac29578d1eae3ae812332632
+size 15237853760
README.md ADDED
@@ -0,0 +1,46 @@
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Qwen2.5-7B-Instruct-abliterated-v2-GGUF
base_model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
inference: false
model_creator: huihui-ai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Qwen2.5-7B-Instruct-abliterated-v2-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-7B-Instruct-abliterated-v2-GGUF)
- Model creator: [huihui-ai](https://huggingface.co/huihui-ai)
- Original model: [huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2)

## Description
[MaziyarPanahi/Qwen2.5-7B-Instruct-abliterated-v2-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-7B-Instruct-abliterated-v2-GGUF) contains GGUF format model files for [huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source local GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
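Since llama-cpp-python appears in the list above, here is a minimal sketch of loading one of these quantized files with it. The repo and file names come from this card; the context size, GPU offload, and prompt values are illustrative, and the download is several gigabytes:

```python
# Minimal sketch, assuming `pip install llama-cpp-python huggingface_hub`.
# Repo and file names are from this card; other values are illustrative.
from llama_cpp import Llama

# Downloads the Q5_K_M quant from the Hub on first use, then loads it.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Qwen2.5-7B-Instruct-abliterated-v2-GGUF",
    filename="Qwen2.5-7B-Instruct-abliterated-v2.Q5_K_M.gguf",
    n_ctx=4096,       # context window; raise if you need longer prompts
    n_gpu_layers=-1,  # offload all layers to GPU; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

The other quants in this repo trade size for quality: swap in the Q5_K_S filename for a slightly smaller file, or Q8_0 / fp16 for higher fidelity at a larger download.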
## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.