Update README.md
README.md
CHANGED
@@ -15,7 +15,7 @@ tags:
 *Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*
 
 ## llama.cpp quantization
-Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">
+Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a> for quantization.
 Original model: https://huggingface.co/Qwen/Qwen3-0.6B-Base
 Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp based project
 ## Prompt format
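The README text in this hunk says the quantized files can be run with llama.cpp or any llama.cpp based project. As a minimal sketch of the latter, the example below uses llama-cpp-python, which is an assumption on our part (the commit does not name it), and a hypothetical quantized file name; substitute whichever GGUF quant you actually download.

```python
# Sketch: load a quantized GGUF of Qwen3-0.6B-Base via llama-cpp-python
# (one example of a "llama.cpp based project"; not prescribed by this commit).
from llama_cpp import Llama

# Hypothetical file name for illustration only.
llm = Llama(model_path="./Qwen3-0.6B-Base-Q4_K_M.gguf", n_ctx=2048)

# Base model, so plain text completion rather than a chat template.
out = llm("The capital of France is", max_tokens=32)
print(out["choices"][0]["text"])
```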