---
base_model: Qwen/Qwen3-0.6B-Base
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---

*Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)*

*Follow Antigma Labs on X: [https://x.com/antigma_labs](https://x.com/antigma_labs)*

*Antigma's GitHub homepage: [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*

## llama.cpp quantization

Quantized using llama.cpp release b5215.

Original model: https://huggingface.co/Qwen/Qwen3-0.6B-Base

Run the files directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.

## Prompt format

This is a base (pretrained) model, so it has no chat template; prompt it with plain text and let the model complete it:

```
{prompt}
```

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen3-0.6b-base-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-0.6B-Base-GGUF/blob/main/qwen3-0.6b-base-q4_k_m.gguf) | Q4_K_M | 0.37 GB | False |
| [qwen3-0.6b-base-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-0.6B-Base-GGUF/blob/main/qwen3-0.6b-base-q4_0.gguf) | Q4_0 | 0.36 GB | False |
| [qwen3-0.6b-base-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-0.6B-Base-GGUF/blob/main/qwen3-0.6b-base-q4_k_s.gguf) | Q4_K_S | 0.36 GB | False |

## Downloading using huggingface-cli
First, make sure you have the huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download Antigma/Qwen3-0.6B-Base-GGUF --include "qwen3-0.6b-base-q4_k_m.gguf" --local-dir ./
```

If the model is bigger than 50 GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download Antigma/Qwen3-0.6B-Base-GGUF --include "qwen3-0.6b-base-q4_k_m.gguf/*" --local-dir ./
```

You can either specify a new `--local-dir` (e.g. `Antigma_Qwen3-0.6B-Base-GGUF`) or download everything in place (`./`).
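If you script your downloads, the per-file commands can be generated from the file table. A minimal sketch (the `download_command` helper and the `QUANTS` mapping are illustrative, not part of any library; filenames come from the table above):

```python
# Repo and quant filenames from the download table above.
REPO_ID = "Antigma/Qwen3-0.6B-Base-GGUF"

QUANTS = {
    "Q4_K_M": "qwen3-0.6b-base-q4_k_m.gguf",
    "Q4_0": "qwen3-0.6b-base-q4_0.gguf",
    "Q4_K_S": "qwen3-0.6b-base-q4_k_s.gguf",
}

def download_command(quant: str, local_dir: str = "./") -> str:
    """Build the huggingface-cli invocation for a single quant file."""
    filename = QUANTS[quant]
    return (
        f"huggingface-cli download {REPO_ID} "
        f'--include "{filename}" --local-dir {local_dir}'
    )

print(download_command("Q4_K_M"))
```

The helper only builds the command string; run it through your shell (or `subprocess.run`) to perform the actual download.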