---
tags:
- trl
- sft
- llama-cpp
- gguf-my-repo
---
|
|
|
15 |
- trl
|
16 |
- sft
|
17 |
- llama-cpp
|
18 |
+
- gguf-my-repo
|
19 |
---

# matrixportal/Turkish-Llama-3-8B-function-calling-GGUF
This model was converted to GGUF format from [`atasoglu/Turkish-Llama-3-8B-function-calling`](https://huggingface.co/atasoglu/Turkish-Llama-3-8B-function-calling) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/atasoglu/Turkish-Llama-3-8B-function-calling) for more details on the model.
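
The `--hf-repo`/`--hf-file` flags used below make llama.cpp download the quant on first use; if you'd rather fetch the file yourself first, here is a minimal sketch using the `huggingface-cli` tool that ships with `huggingface_hub` (an addition, not part of the original card):

```bash
# Download the Q5_K_S quant into the current directory
huggingface-cli download matrixportal/Turkish-Llama-3-8B-function-calling-GGUF \
  turkish-llama-3-8b-function-calling-q5_k_s.gguf --local-dir .
```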

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
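
As a quick sanity check that brew put the binaries on your PATH (an optional extra, not from the original card):

```bash
# Both commands should resolve to paths under the brew prefix
which llama-cli llama-server
```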

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF --hf-file turkish-llama-3-8b-function-calling-q5_k_s.gguf -p "The meaning to life and the universe is"
```
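
Because the base model is tuned for chat and function calling, interactive conversation mode may behave better than one-shot completion. A sketch using standard llama-cli flags (`-cnv` for conversation mode, `-c` for context size, `--temp` for temperature); flag names can drift between llama.cpp releases, so check `llama-cli --help` on yours:

```bash
# Chat interactively with the model instead of a single completion
llama-cli --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF \
  --hf-file turkish-llama-3-8b-function-calling-q5_k_s.gguf \
  -cnv -c 4096 --temp 0.7
```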

### Server:
```bash
llama-server --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF --hf-file turkish-llama-3-8b-function-calling-q5_k_s.gguf -c 2048
```
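
Once the server is running you can test it over HTTP. A sketch assuming the default `localhost:8080` bind and llama-server's OpenAI-compatible chat endpoint:

```bash
# Send a single chat turn to the running server
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Merhaba! Kendini kısaca tanıtır mısın?"}]}'
```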

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
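
If you only need to build rather than develop, a shallow clone of the repo is faster (an optional variant of the step above):

```bash
# Fetch just the latest revision
git clone --depth 1 https://github.com/ggerganov/llama.cpp
```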

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
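
Note that recent llama.cpp revisions replaced the Makefile build with CMake, so `make` may fail on a fresh checkout. A roughly equivalent CMake invocation (flag names as of recent releases, worth double-checking against the repo's README):

```bash
# Configure with CURL support (add -DGGML_CUDA=ON for NVIDIA GPUs), then build;
# the binaries end up under build/bin/
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```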

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF --hf-file turkish-llama-3-8b-function-calling-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF --hf-file turkish-llama-3-8b-function-calling-q5_k_s.gguf -c 2048
```
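
If you downloaded the GGUF manually (as sketched near the top), either binary can load the local file with `-m` instead of re-fetching it from the Hub:

```bash
# Point llama-cli at the locally downloaded quant
./llama-cli -m turkish-llama-3-8b-function-calling-q5_k_s.gguf -p "The meaning to life and the universe is"
```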