Update README.md
README.md (changed)
````diff
@@ -12,10 +12,18 @@ For model inference, please download our release package from this url https://g
 
 
 
-
+## Quick start
 
-
-
+```
+# Use a local model file
+llama-cli -m my_model.gguf
+
+# Or download and run a model directly from Hugging Face
+llama-cli -hf inclusionAI/Ling-mini-2.0-GGUF
+
+# Launch OpenAI-compatible API server
+llama-server -m my_model.gguf
+```
 
 
 ## Demo
@@ -23,3 +31,11 @@ Let's look forward to the following PR being merged:
 
 
 
+
+
+## PR
+
+Let's look forward to the following PR being merged:
+
+- [#16063 model : add BailingMoeV2 support](https://github.com/ggml-org/llama.cpp/pull/16063)
+- [#16028 Add support for Ling v2](https://github.com/ggml-org/llama.cpp/pull/16028)
````
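The `llama-server` command added in the Quick start exposes an OpenAI-compatible HTTP API. As a minimal sketch of what a client request looks like, assuming the server's default address `http://127.0.0.1:8080` (the endpoint path `/v1/chat/completions` is part of the OpenAI-compatible API; the model name in the payload is a placeholder, since llama-server serves the model passed via `-m`):

```python
import json
import urllib.request

# OpenAI-compatible chat-completions request body for a local llama-server.
payload = {
    "model": "my_model",  # placeholder; the server uses the model loaded via -m
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once llama-server is running locally:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])

print(req.full_url)
```

The same request can be sent with any OpenAI client library by pointing its base URL at the local server.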