hardlyworking committed on Commit 6abd259 · verified · 1 Parent(s): 1c2226e

Upload README.md with huggingface_hub

---
base_model: GreenerPastures/Golden-Curry-12B
datasets:
- Mielikki/Erebus-87k
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- NewEden/Kalo-Opus-Instruct-22k-Refusal-Murdered
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- NewEden/Gryphe-Sonnet-3.5-35k-Subset
- Nitral-AI/GU_Instruct-ShareGPT
- Nitral-AI/Medical_Instruct-ShareGPT
- AquaV/Resistance-Sharegpt
- AquaV/US-Army-Survival-Sharegpt
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- ResplendentAI/bluemoon
- hardlyworking/openerotica-freedomrp-sharegpt-system
- MinervaAI/Aesir-Preview
- anthracite-core/c2_logs_32k_v1.1
- Nitral-AI/Creative_Writing-ShareGPT
- PJMixers/lodrick-the-lafted_OpusStories-Story2Prompt-ShareGPT
- NewEden/Opus-accepted-hermes-rejected-shuffled
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---

# hardlyworking/Golden-Curry-12B-Q3_K_M-GGUF
This model was converted to GGUF format from [`GreenerPastures/Golden-Curry-12B`](https://huggingface.co/GreenerPastures/Golden-Curry-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/GreenerPastures/Golden-Curry-12B) for more details on the model.
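
The `--hf-repo`/`--hf-file` flags used below let llama.cpp fetch the quantized file on demand, but you can also grab it up front with the Hugging Face CLI (a minimal sketch; the target directory is just an example):

```bash
# Download the single Q3_K_M GGUF file into the current directory
huggingface-cli download hardlyworking/Golden-Curry-12B-Q3_K_M-GGUF golden-curry-12b-q3_k_m.gguf --local-dir .
```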

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
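You can confirm the install worked (a quick sanity check; exact output varies by build):

```bash
# Print the installed llama.cpp build version
llama-cli --version
```
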
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo hardlyworking/Golden-Curry-12B-Q3_K_M-GGUF --hf-file golden-curry-12b-q3_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo hardlyworking/Golden-Curry-12B-Q3_K_M-GGUF --hf-file golden-curry-12b-q3_k_m.gguf -c 2048
```
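
Once the server is running, one way to query it is through its OpenAI-compatible HTTP API (a minimal sketch, assuming the default `localhost:8080` address):

```bash
# Send a chat completion request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Write a one-line greeting."}],
        "temperature": 0.7
      }'
```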

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
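
Recent llama.cpp checkouts have replaced the Makefile build with CMake, so if `make` fails on a current clone, a rough equivalent looks like this (a sketch; option names can differ between versions):
```
# CMake build with CURL enabled; add -DGGML_CUDA=ON for Nvidia GPUs
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```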

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hardlyworking/Golden-Curry-12B-Q3_K_M-GGUF --hf-file golden-curry-12b-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hardlyworking/Golden-Curry-12B-Q3_K_M-GGUF --hf-file golden-curry-12b-q3_k_m.gguf -c 2048
```
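
If you built with GPU support, model layers can be offloaded to the GPU with `-ngl` (a sketch; tune the layer count and the `-c` context size to your hardware):
```
# Offload all layers to the GPU (99 is effectively "as many as fit")
./llama-server --hf-repo hardlyworking/Golden-Curry-12B-Q3_K_M-GGUF --hf-file golden-curry-12b-q3_k_m.gguf -c 2048 -ngl 99
```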