waltervix committed (verified) · Commit 8a54626 · Parent: 842f827

Update README.md

Files changed (1): README.md (+38 −39)
README.md CHANGED
@@ -13,42 +13,41 @@ base_model: ruliad/deepthought-8b-llama-v0.01-alpha
  This model was converted to GGUF format from [`ruliad/deepthought-8b-llama-v0.01-alpha`](https://huggingface.co/ruliad/deepthought-8b-llama-v0.01-alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/ruliad/deepthought-8b-llama-v0.01-alpha) for more details on the model.
 
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):
-
- ```bash
- brew install llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo waltervix/deepthought-8b-llama-v0.01-alpha-Q4_K_M-GGUF --hf-file deepthought-8b-llama-v0.01-alpha-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo waltervix/deepthought-8b-llama-v0.01-alpha-Q4_K_M-GGUF --hf-file deepthought-8b-llama-v0.01-alpha-q4_k_m.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo waltervix/deepthought-8b-llama-v0.01-alpha-Q4_K_M-GGUF --hf-file deepthought-8b-llama-v0.01-alpha-q4_k_m.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo waltervix/deepthought-8b-llama-v0.01-alpha-Q4_K_M-GGUF --hf-file deepthought-8b-llama-v0.01-alpha-q4_k_m.gguf -c 2048
- ```
 
16
+ ## Run locally with Samantha Interface Assistant
17
+
18
+ <!-- header start -->
19
+ <!-- 200823 -->
20
+ <div style="width: auto; margin-left: auto; margin-right: auto">
21
+ <img src="https://i.ibb.co/5WP8Sbh/samantha-ia.png" alt="Samantha_IA" style="width: 70%; min-width: 400px; display: block; margin: auto;">
22
+ </div>
23
+ <!-- header end -->
24
+
25
+ **Github project:** https://github.com/controlecidadao/samantha_ia/blob/main/README.md
26
+
27
+ <br>
28
+
29
+
30
+ ## 📺 Video: Intelligence Challenge with Samantha - Microsoft Phi 3.5 vs Google Gemma 2
31
+
32
+ **Video:** https://www.youtube.com/watch?v=KgicCGMSygU
33
+
34
+ <br>
35
+
36
+ ## 👟 Testing a Model in 5 Steps with Samantha
37
+
38
+ Samantha needs just a `.gguf` model file to generate text. Follow these steps to perform a simple model test:
39
+
40
+ **1)** Open Windows Task Management by pressing `CTRL + SHIFT + ESC` and check available memory. Close some programs if necessary to free memory.
41
+
42
+ **2)** Visit [Hugging Face](https://huggingface.co/models?library=gguf&sort=trending&search=gguf) repository and click on the card to open the corresponding page. Locate the _Files and versions_ tab and choose a `.gguf` model that fits in your available memory.
43
+
44
+ **3)** Right click over the model download link icon and copy its URL.
45
+
46
+ **4)** Paste the model URL into Samantha's _Download models for testing_ field.
47
+
48
+ **5)** Insert a prompt into _User prompt_ field and press `Enter`. Keep the `$$$` sign at the end of your prompt. The model will be downloaded and the response will be generated using the default deterministic settings. You can track this process via Windows Task Management.
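The memory check in step 1 can be approximated programmatically. This is a minimal sketch, not part of Samantha itself; the 20% headroom factor is an assumption to leave room for the KV cache and runtime overhead:

```python
def fits_in_memory(model_bytes: int, available_bytes: int, headroom: float = 1.2) -> bool:
    """Rough rule of thumb: the .gguf file should fit in available RAM
    with ~20% headroom for the KV cache and runtime overhead."""
    return model_bytes * headroom <= available_bytes

# A Q4_K_M quant of an 8B model is roughly 4.9 GB on disk.
model_size = int(4.9 * 1024**3)
print(fits_in_memory(model_size, 8 * 1024**3))  # with 8 GB free
print(fits_in_memory(model_size, 4 * 1024**3))  # with 4 GB free
```

If the check fails, pick a smaller quantization (e.g. Q3 instead of Q4) from the same _Files and versions_ tab.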
+
+ Every new model downloaded via this copy-and-paste procedure replaces the previous one to save disk space. The downloaded model is saved as `MODEL_FOR_TESTING.gguf` in your _Downloads_ folder.
+
+ You can also download a model and save it permanently to your computer. For more details, visit Samantha's project on GitHub.
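For the permanent-download route, Hugging Face serves each repo file at a predictable `resolve` URL, so a plain `curl` also works. A sketch using this repo's filenames (verify the exact filename on the _Files and versions_ tab before downloading):

```shell
# Build the direct-download URL for a file hosted in a Hugging Face repo.
REPO="waltervix/deepthought-8b-llama-v0.01-alpha-Q4_K_M-GGUF"
FILE="deepthought-8b-llama-v0.01-alpha-q4_k_m.gguf"
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"
echo "$URL"

# Uncomment to download (the file is several GB):
# curl -L -o "$HOME/Downloads/$FILE" "$URL"
```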