qingy2024 committed e907966 (verified) · 1 Parent(s): ccff65b

Upload README.md with huggingface_hub

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 - gguf-my-repo
 ---
 
-# qingy2024/Utility-19B-MoE-Q3_K_M-GGUF
+# qingy2024/Utility-19B-MoE-Q4_K_S-GGUF
 This model was converted to GGUF format from [`qingy2024/Utility-19B-MoE`](https://huggingface.co/qingy2024/Utility-19B-MoE) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/qingy2024/Utility-19B-MoE) for more details on the model.
 
@@ -22,12 +22,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo qingy2024/Utility-19B-MoE-Q3_K_M-GGUF --hf-file utility-19b-moe-q3_k_m.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo qingy2024/Utility-19B-MoE-Q4_K_S-GGUF --hf-file utility-19b-moe-q4_k_s.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo qingy2024/Utility-19B-MoE-Q3_K_M-GGUF --hf-file utility-19b-moe-q3_k_m.gguf -c 2048
+llama-server --hf-repo qingy2024/Utility-19B-MoE-Q4_K_S-GGUF --hf-file utility-19b-moe-q4_k_s.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
@@ -44,9 +44,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo qingy2024/Utility-19B-MoE-Q3_K_M-GGUF --hf-file utility-19b-moe-q3_k_m.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo qingy2024/Utility-19B-MoE-Q4_K_S-GGUF --hf-file utility-19b-moe-q4_k_s.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo qingy2024/Utility-19B-MoE-Q3_K_M-GGUF --hf-file utility-19b-moe-q3_k_m.gguf -c 2048
+./llama-server --hf-repo qingy2024/Utility-19B-MoE-Q4_K_S-GGUF --hf-file utility-19b-moe-q4_k_s.gguf -c 2048
 ```
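
Note: once the `llama-server` command above is running, you can sanity-check it over HTTP. A minimal sketch, assuming the server's default bind address of 127.0.0.1:8080 (override with `--host`/`--port`) and llama.cpp's built-in `/completion` endpoint:

```bash
# Query a running llama-server instance; assumes default host/port
# 127.0.0.1:8080 and asks for up to 128 generated tokens.
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'
```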
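
If you would rather not rely on the `--hf-repo`/`--hf-file` auto-download, a sketch of fetching the quantized file first and pointing llama.cpp at the local copy (assumes the `huggingface_hub` CLI is installed):

```bash
# Download the GGUF file once, then run against the local path with -m
# instead of --hf-repo/--hf-file.
pip install -U huggingface_hub
huggingface-cli download qingy2024/Utility-19B-MoE-Q4_K_S-GGUF \
  utility-19b-moe-q4_k_s.gguf --local-dir .
./llama-cli -m utility-19b-moe-q4_k_s.gguf -p "The meaning to life and the universe is"
```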