matrixportal committed
Commit 1d01269 · verified · 1 Parent(s): 1dbc9d5

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +41 -67
README.md CHANGED
@@ -15,75 +15,49 @@ tags:
  - trl
  - sft
  - llama-cpp
- - matrixportal
  ---
 
  # matrixportal/Turkish-Llama-3-8B-function-calling-GGUF
- This model was converted to GGUF format from [`atasoglu/Turkish-Llama-3-8B-function-calling`](https://huggingface.co/atasoglu/Turkish-Llama-3-8B-function-calling) using llama.cpp via the ggml.ai's [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where) space.
  Refer to the [original model card](https://huggingface.co/atasoglu/Turkish-Llama-3-8B-function-calling) for more details on the model.
 
- ## ✅ Quantized Models Download List
-
- ### 🔍 Recommended Quantizations
- - **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q4_k_m.gguf) (Best balance of speed/quality)
- - **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q4_0.gguf) (Optimized for ARM CPUs)
- - **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q8_0.gguf) (Near-original quality)
-
- ### 📦 Full Quantization Options
- | 🚀 Download | 🔒 Type | 📝 Notes |
- |:---------|:-----|:------|
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q2_k.gguf) | ![Q2_K](https://img.shields.io/badge/Q2_K-1A73E8) | Basic quantization |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q3_k_s.gguf) | ![Q3_K_S](https://img.shields.io/badge/Q3_K_S-34A853) | Small size |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q3_k_m.gguf) | ![Q3_K_M](https://img.shields.io/badge/Q3_K_M-FBBC05) | Balanced quality |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q3_k_l.gguf) | ![Q3_K_L](https://img.shields.io/badge/Q3_K_L-4285F4) | Better quality |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q4_0.gguf) | ![Q4_0](https://img.shields.io/badge/Q4_0-EA4335) | Fast on ARM |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q4_k_s.gguf) | ![Q4_K_S](https://img.shields.io/badge/Q4_K_S-673AB7) | Fast, recommended |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q4_k_m.gguf) | ![Q4_K_M](https://img.shields.io/badge/Q4_K_M-673AB7) ⭐ | Best balance |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q5_0.gguf) | ![Q5_0](https://img.shields.io/badge/Q5_0-FF6D01) | Good quality |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q5_k_s.gguf) | ![Q5_K_S](https://img.shields.io/badge/Q5_K_S-0F9D58) | Balanced |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q5_k_m.gguf) | ![Q5_K_M](https://img.shields.io/badge/Q5_K_M-0F9D58) | High quality |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q6_k.gguf) | ![Q6_K](https://img.shields.io/badge/Q6_K-4285F4) 🏆 | Very good quality |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-q8_0.gguf) | ![Q8_0](https://img.shields.io/badge/Q8_0-EA4335) ⚡ | Fast, best quality |
- | [Download](https://huggingface.co/matrixportal/Turkish-Llama-3-8B-function-calling-GGUF/resolve/main/turkish-llama-3-8b-function-calling-f16.gguf) | ![F16](https://img.shields.io/badge/F16-000000) | Maximum accuracy |
-
- 💡 **Tip:** Use `F16` for maximum precision when quality is critical
-
-
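As a concrete example of fetching one of the files above, a single quant can be pulled from the command line with the Hugging Face CLI. This is a minimal sketch; it assumes `huggingface_hub` is installed and uses the Q4_K_M file recommended above:

```bash
# Install the Hugging Face CLI if needed
pip install -U "huggingface_hub[cli]"

# Download the recommended Q4_K_M quant into the current directory
huggingface-cli download matrixportal/Turkish-Llama-3-8B-function-calling-GGUF \
  turkish-llama-3-8b-function-calling-q4_k_m.gguf --local-dir .
```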
- ---
- # 🚀 Applications and Tools for Locally Quantized LLMs
- ## 🖥️ Desktop Applications
-
- | Application | Description | Download Link |
- |-----------------|--------------------------------------------------------------------------------------------|---------------------------------------------------------------------|
- | **Llama.cpp** | A fast and efficient inference engine for GGUF models. | [GitHub Repository](https://github.com/ggml-org/llama.cpp) |
- | **Ollama** | A streamlined solution for running LLMs locally. | [Website](https://ollama.com/) |
- | **AnythingLLM** | An AI-powered knowledge management tool. | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm) |
- | **Open WebUI** | A user-friendly web interface for running local LLMs. | [GitHub Repository](https://github.com/open-webui/open-webui) |
- | **GPT4All** | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
- | **LM Studio** | A desktop application designed to run and manage local LLMs, supporting GGUF format. | [Website](https://lmstudio.ai/) |
- | **GPT4All Chat**| A chat application compatible with GGUF models for local, offline interactions. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
-
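As an illustration of the table above, a downloaded GGUF can be registered with Ollama through a minimal Modelfile. This is only a sketch; the file name and model name below are assumptions (it expects the Q4_K_M quant to already be in the current directory):

```bash
# Sketch: point a Modelfile at the locally downloaded GGUF file
cat > Modelfile <<'EOF'
FROM ./turkish-llama-3-8b-function-calling-q4_k_m.gguf
EOF

# Register the model with Ollama, then chat with it
ollama create turkish-llama-3-fc -f Modelfile
ollama run turkish-llama-3-fc "Merhaba! Kendini tanıtır mısın?"
```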
- ---
-
- ## 📱 Mobile Applications
-
- | Application | Description | Download Link |
- |-------------------|----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
- | **ChatterUI** | A simple and lightweight LLM app for mobile devices. | [GitHub Repository](https://github.com/Vali-98/ChatterUI) |
- | **Maid** | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid) |
- | **PocketPal AI** | A mobile AI assistant powered by local models. | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai) |
- | **Layla** | A flexible platform for running various AI models on mobile devices. | [Website](https://www.layla-network.ai/) |
-
- ---
-
- ## 🎨 Image Generation Applications
-
- | Application | Description | Download Link |
- |-------------------------------------|------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
- | **Stable Diffusion** | An open-source AI model for generating images from text. | [GitHub Repository](https://github.com/CompVis/stable-diffusion) |
- | **Stable Diffusion WebUI** | A web application providing access to Stable Diffusion models via a browser interface. | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) |
- | **Local Dream** | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | [GitHub Repository](https://github.com/xororz/local-dream) |
- | **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation. | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android) |
-
- ---
-
 
  - trl
  - sft
  - llama-cpp
+ - gguf-my-repo
  ---
 
  # matrixportal/Turkish-Llama-3-8B-function-calling-GGUF
+ This model was converted to GGUF format from [`atasoglu/Turkish-Llama-3-8B-function-calling`](https://huggingface.co/atasoglu/Turkish-Llama-3-8B-function-calling) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/atasoglu/Turkish-Llama-3-8B-function-calling) for more details on the model.
 
+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on Mac and Linux).
+
+ ```bash
+ brew install llama.cpp
+ ```
+
+ Invoke the llama.cpp server or the CLI.
+
+ ### CLI:
+ ```bash
+ llama-cli --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF --hf-file turkish-llama-3-8b-function-calling-q8_0.gguf -p "The meaning to life and the universe is"
+ ```
+
+ ### Server:
+ ```bash
+ llama-server --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF --hf-file turkish-llama-3-8b-function-calling-q8_0.gguf -c 2048
+ ```
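Once the server is running, it exposes an OpenAI-compatible HTTP API that any client can query. A minimal sketch with curl, assuming the server is listening on its default port 8080:

```bash
# Sketch: send a chat completion request to the local llama-server
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Merhaba, bana kisa bir hikaye anlat."}
    ]
  }'
```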
+
+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
+
+ Step 1: Clone llama.cpp from GitHub.
+ ```bash
+ git clone https://github.com/ggerganov/llama.cpp
+ ```
+
+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
+ ```bash
+ cd llama.cpp && LLAMA_CURL=1 make
+ ```
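For instance, a CUDA-enabled build for an Nvidia GPU on Linux would combine the flags mentioned in Step 2. This is a sketch; it assumes the CUDA toolkit is installed on the machine:

```bash
# Sketch: build with CURL support and CUDA acceleration enabled
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```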
55
+
56
+ Step 3: Run inference through the main binary.
57
+ ```
58
+ ./llama-cli --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF --hf-file turkish-llama-3-8b-function-calling-q8_0.gguf -p "The meaning to life and the universe is"
59
+ ```
60
+ or
61
+ ```
62
+ ./llama-server --hf-repo matrixportal/Turkish-Llama-3-8B-function-calling-GGUF --hf-file turkish-llama-3-8b-function-calling-q8_0.gguf -c 2048
63
+ ```