hassenhamdi committed on
Commit 23c5a61 · verified · 1 Parent(s): f6a9bd1

Update README.md

specify quant type

Files changed (1): README.md +3 -0
README.md CHANGED
@@ -19,6 +19,9 @@ tags:
  This model was converted to GGUF format from [`suayptalha/DeepSeek-R1-Distill-Llama-3B`](https://huggingface.co/suayptalha/DeepSeek-R1-Distill-Llama-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/suayptalha/DeepSeek-R1-Distill-Llama-3B) for more details on the model.
 
+ This is a Q4_K_M quantization.
+ You can use the model in LM Studio or any other text-generation UI of your choice capable of running the GGUF format.
+
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
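The steps referenced in the diff (install llama.cpp through brew, then run the GGUF file) can be sketched as a quick start. The `.gguf` filename below is hypothetical, since the actual filename in this repository is not shown in the diff; substitute the file you download:

```shell
# Install llama.cpp via Homebrew (works on macOS and Linux); this provides the llama-cli binary
brew install llama.cpp

# Run the Q4_K_M quantized model interactively.
# NOTE: the model filename is an assumption -- replace it with the actual
# GGUF file downloaded from this repository.
llama-cli -m deepseek-r1-distill-llama-3b-q4_k_m.gguf -p "Hello" -n 64
```

`-m` selects the model file, `-p` supplies the prompt, and `-n` caps the number of tokens generated.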