im0qianqian committed
Commit 1dc5508 · verified · 1 Parent(s): 555af54

Update README.md

Files changed (1):
  1. README.md +19 -3
README.md CHANGED
@@ -12,10 +12,18 @@ For model inference, please download our release package from this url https://g
 
 
 
- Let's look forward to the following PR being merged:
-
- - [#16063 model : add BailingMoeV2 support](https://github.com/ggml-org/llama.cpp/pull/16063)
- - [#16028 Add support for Ling v2](https://github.com/ggml-org/llama.cpp/pull/16028)
+ ## Quick start
+
+ ```
+ # Use a local model file
+ llama-cli -m my_model.gguf
+
+ # Or download and run a model directly from Hugging Face
+ llama-cli -hf inclusionAI/Ling-mini-2.0-GGUF
+
+ # Launch OpenAI-compatible API server
+ llama-server -m my_model.gguf
+ ```
 
 
  ## Demo
@@ -23,3 +31,11 @@ Let's look forward to the following PR being merged:
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/MC4h9G33YjvpboRA4LPfO.png)
 
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fde26773930f07f74aaf912/0YcuTJFLs6k9K4Sgzd-UD.png)
+
+
+ ## PR
+
+ Let's look forward to the following PR being merged:
+
+ - [#16063 model : add BailingMoeV2 support](https://github.com/ggml-org/llama.cpp/pull/16063)
+ - [#16028 Add support for Ling v2](https://github.com/ggml-org/llama.cpp/pull/16028)
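
The new Quick start ends with `llama-server -m my_model.gguf`, which starts llama.cpp's OpenAI-compatible HTTP server. A minimal sketch of a chat request against that server, assuming the default listen address `http://localhost:8080` and the `/v1/chat/completions` route (not part of this commit, just an illustration of the added command):

```
# Assumes llama-server is already running with the defaults shown above
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello, who are you?"}
        ]
      }'
```

Because the API follows the OpenAI schema, standard OpenAI client libraries should also work when pointed at the local base URL.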