---
base_model: Tower-Babel/Babel-9B-Chat
language:
- en
- zh
- hi
- es
- fr
- ar
- bn
- ru
- pt
- id
- ur
- de
- ja
- sw
- ta
- tr
- ko
- vi
- jv
- it
- ha
- th
- fa
- tl
- my
license: other
license_name: seallm
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
tags:
- multilingual
- babel
- llama-cpp
- gguf-my-repo
---

# Triangle104/Babel-9B-Chat-Q4_K_S-GGUF
This model was converted to GGUF format from [`Tower-Babel/Babel-9B-Chat`](https://huggingface.co/Tower-Babel/Babel-9B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Tower-Babel/Babel-9B-Chat) for more details on the model.
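
If you prefer to fetch the quantized file yourself rather than letting llama.cpp pull it on demand, the `huggingface-cli` tool from the `huggingface_hub` package can download it directly. A minimal sketch, assuming `huggingface_hub` is installed via pip:

```bash
# Sketch: download the quantized weights into the current directory
huggingface-cli download Triangle104/Babel-9B-Chat-Q4_K_S-GGUF \
  babel-9b-chat-q4_k_s.gguf --local-dir .
```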

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
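
To confirm the install worked, you can check that both binaries landed on your PATH (assuming a build recent enough to support `--version`):

```bash
# Sanity check: print the llama.cpp build info for each binary
llama-cli --version
llama-server --version
```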
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Babel-9B-Chat-Q4_K_S-GGUF --hf-file babel-9b-chat-q4_k_s.gguf -p "The meaning to life and the universe is"
```
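
The `-p` flag runs a single one-shot completion. Recent llama.cpp builds also offer an interactive chat mode that applies the model's chat template; the flag below is an assumption that holds for current versions and may differ in older ones:

```bash
# Sketch: interactive chat mode (-cnv / --conversation in recent llama.cpp builds)
llama-cli --hf-repo Triangle104/Babel-9B-Chat-Q4_K_S-GGUF --hf-file babel-9b-chat-q4_k_s.gguf -cnv
```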

### Server:
```bash
llama-server --hf-repo Triangle104/Babel-9B-Chat-Q4_K_S-GGUF --hf-file babel-9b-chat-q4_k_s.gguf -c 2048
```
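
Once the server is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch, assuming the default listen address of `localhost:8080`:

```bash
# Sketch: query the server's OpenAI-compatible chat endpoint (default port 8080)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 128
  }'
```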

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
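
For example, a GPU-enabled build on a Linux machine with an Nvidia card might look like the sketch below; the exact flag set depends on your llama.cpp version (newer releases build with CMake rather than the Makefile), so treat this as illustrative:

```bash
# Sketch: parallel Makefile build with CUDA enabled (older llama.cpp versions)
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```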

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Babel-9B-Chat-Q4_K_S-GGUF --hf-file babel-9b-chat-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Babel-9B-Chat-Q4_K_S-GGUF --hf-file babel-9b-chat-q4_k_s.gguf -c 2048
```
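
Both binaries accept the usual llama.cpp generation options. For instance, a sketch of the same one-shot run with an explicit token budget and sampling temperature (flag names as in current llama.cpp; check `--help` on your build):

```bash
# Sketch: cap generation at 128 tokens and lower the sampling temperature
./llama-cli --hf-repo Triangle104/Babel-9B-Chat-Q4_K_S-GGUF --hf-file babel-9b-chat-q4_k_s.gguf \
  -p "The meaning to life and the universe is" -n 128 --temp 0.7
```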