---
base_model: Qwen/Qwen3-30B-A3B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---

*Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)*

*Follow Antigma Labs on X: [https://x.com/antigma_labs](https://x.com/antigma_labs)*

*Antigma's GitHub homepage: [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*

## llama.cpp quantization
Quantized using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a>.
Original model: https://huggingface.co/Qwen/Qwen3-30B-A3B
Run the quantized files directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
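For example, a minimal interactive run with llama-cli (a sketch, assuming llama.cpp is built from the release above and the Q4_K_M file sits in the working directory; point -m at whichever file you chose):

```
# -cnv starts conversation mode; -p supplies the system prompt
./llama-cli -m ./qwen3-30b-a3b-q4_k_m.gguf -cnv -p "You are a helpful assistant."
```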
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
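You normally do not have to assemble this by hand: in conversation mode llama.cpp picks the chat template up from the GGUF metadata, and llama-server exposes an OpenAI-compatible endpoint that applies it as well. A minimal sketch (the port and file name here are assumptions):

```
./llama-server -m ./qwen3-30b-a3b-q4_k_m.gguf --port 8080
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```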
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen3-30b-a3b-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-30B-A3B-GGUF/blob/main/qwen3-30b-a3b-q4_k_m.gguf) | Q4_K_M | 17.28 GB | False |
| [qwen3-30b-a3b-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-30B-A3B-GGUF/blob/main/qwen3-30b-a3b-q4_0.gguf) | Q4_0 | 16.12 GB | False |
| [qwen3-30b-a3b-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-30B-A3B-GGUF/blob/main/qwen3-30b-a3b-q4_k_s.gguf) | Q4_K_S | 16.26 GB | False |

## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have the huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download Antigma/Qwen3-30B-A3B-GGUF --include "qwen3-30b-a3b-q4_k_m.gguf" --local-dir ./
```
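Downloads can optionally be accelerated with the hf_transfer backend (an extra dependency; a sketch, assuming you are willing to install it):

```
pip install -U "huggingface_hub[hf_transfer]"
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download Antigma/Qwen3-30B-A3B-GGUF --include "qwen3-30b-a3b-q4_k_m.gguf" --local-dir ./
```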

If the model is bigger than 50 GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download Antigma/Qwen3-30B-A3B-GGUF --include "qwen3-30b-a3b-q4_k_m.gguf/*" --local-dir ./
```

You can either specify a new local-dir (e.g. Qwen3-30B-A3B-GGUF) or download everything in place (./).

</details>