tommytracx committed
Commit 6d85f70 · verified · 1 Parent(s): 680035a

Update README.md

Files changed (1):
  1. README.md +29 -24

README.md CHANGED
@@ -1,5 +1,5 @@
 ---
- base_model: mistralai/Mistral-Small-24B-Instruct-2501
 language:
 - en
 - fr
@@ -17,50 +17,55 @@ tags:
 - transformers
 - llama-cpp
 inference: false
- extra_gated_description: If you want to learn more about how we process your personal
- data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
 ---

- # tommytracx/Mistral-Small-24B-Instruct-2501-Q6_K-GGUF
- This model was converted to GGUF format from [`mistralai/Mistral-Small-24B-Instruct-2501`](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) using llama.cpp.
- Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) for more details on the model.

 ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux)

 ```bash
 brew install llama.cpp
-
 ```
- Invoke the llama.cpp server or the CLI.

- ### CLI:
 ```bash
- llama-cli --hf-repo tommytracx/Mistral-Small-24B-Instruct-2501-Q6_K-GGUF --hf-file mistral-small-24b-instruct-2501-q6_k.gguf -p "The meaning to life and the universe is"
 ```

- ### Server:
 ```bash
- llama-server --hf-repo tommytracx/Mistral-Small-24B-Instruct-2501-Q6_K-GGUF --hf-file mistral-small-24b-instruct-2501-q6_k.gguf -c 2048
 ```

- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

- Step 1: Clone llama.cpp from GitHub.
- ```
 git clone https://github.com/ggerganov/llama.cpp
 ```

- Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
- ```
 cd llama.cpp && LLAMA_CURL=1 make
 ```

- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo tommytracx/Mistral-Small-24B-Instruct-2501-Q6_K-GGUF --hf-file mistral-small-24b-instruct-2501-q6_k.gguf -p "The meaning to life and the universe is"
- ```
- or
 ```
- ./llama-server --hf-repo tommytracx/Mistral-Small-24B-Instruct-2501-Q6_K-GGUF --hf-file mistral-small-24b-instruct-2501-q6_k.gguf -c 2048
 ```

 ---
+ base_model: GainEnergy/OGAI-24B
 language:
 - en
 - fr

 - transformers
 - llama-cpp
 inference: false
+ extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://gain.energy/terms/">Privacy Policy</a>.
 ---

+ # GainEnergy/OGAI-24B-Q6_K-GGUF
+
+ This model was converted to GGUF format from GainEnergy/OGAI-24B using llama.cpp.
+ Refer to the original model card for more details.

 ## Use with llama.cpp

+ ### Install llama.cpp through Homebrew (macOS/Linux):
 ```bash
 brew install llama.cpp
 ```
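+ As a quick sanity check that the install worked (a minimal sketch; --version is available in recent llama.cpp builds):
+ ```bash
+ llama-cli --version
+ ```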
 
+ ### Invoke the llama.cpp server or the CLI.
+
+ #### CLI:
 ```bash
+ llama-cli --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -p "Explain the principles of reservoir simulation in oil and gas engineering."
 ```

+ #### Server:
 ```bash
+ llama-server --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -c 2048
 ```
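+ Once the server is running, you can query it over HTTP. A minimal sketch, assuming the default port 8080 and the OpenAI-compatible chat endpoint that recent llama-server builds expose:
+ ```bash
+ curl http://localhost:8080/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{"messages": [{"role": "user", "content": "Explain the principles of reservoir simulation."}]}'
+ ```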
46
 
47
+ You can also follow the standard usage steps in the llama.cpp repository.
48
 
49
+ ## Manual Installation and Execution
50
+
51
+ ### Step 1: Clone llama.cpp
52
+ ```bash
53
  git clone https://github.com/ggerganov/llama.cpp
54
  ```
55
 
56
+ ### Step 2: Build llama.cpp with LLAMA_CURL=1 and optional GPU flags
57
+ ```bash
58
  cd llama.cpp && LLAMA_CURL=1 make
59
  ```
60
+ For Nvidia GPUs on Linux, add LLAMA_CUDA=1.
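+ For example (a sketch, assuming the CUDA toolkit is installed):
+ ```bash
+ cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
+ ```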

+ ### Step 3: Run inference
+ ```bash
+ ./llama-cli --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -p "Explain the impact of wellbore stability on drilling efficiency."
 ```
+ or
+ ```bash
+ ./llama-server --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -c 2048
 ```
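+ If the GGUF file is already downloaded, you can point at the local file with -m instead of the --hf-repo/--hf-file download flags (a sketch; adjust the path to wherever you saved it):
+ ```bash
+ ./llama-cli -m ./ogai-24b-q6_k.gguf -p "Explain the impact of wellbore stability on drilling efficiency."
+ ```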
70
+
71
+ This model is optimized for oil and gas engineering applications, featuring domain-specific knowledge in drilling, completions, reservoir management, and production optimization.