Update README.md
---
base_model: GainEnergy/OGAI-24B
language:
- en
- fr
tags:
- transformers
- llama-cpp
inference: false
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://gain.energy/terms/">Privacy Policy</a>.
---

# GainEnergy/OGAI-24B-Q6_K-GGUF

This model was converted to GGUF format from GainEnergy/OGAI-24B using llama.cpp.
Refer to the original model card for more details.
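
If you want to reproduce the conversion yourself, llama.cpp ships a conversion script and a quantization tool. A minimal sketch, assuming a local llama.cpp checkout and the Hugging Face CLI (script names have changed across llama.cpp releases, so check your checkout):

```bash
# Fetch the original weights, convert to an f16 GGUF, then quantize to Q6_K.
huggingface-cli download GainEnergy/OGAI-24B --local-dir OGAI-24B
python convert_hf_to_gguf.py OGAI-24B --outfile ogai-24b-f16.gguf --outtype f16
./llama-quantize ogai-24b-f16.gguf ogai-24b-q6_k.gguf Q6_K
```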

## Use with llama.cpp

### Install llama.cpp through Homebrew (macOS/Linux):
```bash
brew install llama.cpp
```
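
To confirm the install, check that the binary is on your PATH (a quick sanity check; exact output varies by release):

```bash
llama-cli --version
```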

### Invoke the llama.cpp server or the CLI.

#### CLI:
```bash
llama-cli --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -p "Explain the principles of reservoir simulation in oil and gas engineering."
```
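
For an interactive chat-style session rather than a one-shot prompt, llama-cli also offers a conversation mode (a sketch; the -cnv flag is available in recent llama.cpp releases):

```bash
llama-cli --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -cnv
```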

#### Server:
```bash
llama-server --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -c 2048
```
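
Once running, the server exposes an OpenAI-compatible HTTP API, by default on port 8080. A minimal request, assuming the default host and port:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "List the key inputs to a reservoir simulation model."}]}'
```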

You can also follow the standard usage steps in the llama.cpp repository.

## Manual Installation and Execution

### Step 1: Clone llama.cpp
```bash
git clone https://github.com/ggerganov/llama.cpp
```

### Step 2: Build llama.cpp with LLAMA_CURL=1 and optional GPU flags
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

For Nvidia GPUs on Linux, add LLAMA_CUDA=1.
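For example, assuming the Makefile build shown above (newer llama.cpp releases have moved to CMake, where the equivalent option is -DGGML_CUDA=ON):

```bash
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```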

### Step 3: Run inference
```bash
./llama-cli --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -p "Explain the impact of wellbore stability on drilling efficiency."
```
or
```bash
./llama-server --hf-repo GainEnergy/OGAI-24B-Q6_K-GGUF --hf-file ogai-24b-q6_k.gguf -c 2048
```
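
If you would rather download the weights once and run them locally, instead of letting --hf-repo fetch them on first use, a sketch using the Hugging Face CLI:

```bash
# Download just the quantized file, then point llama-cli at it directly.
huggingface-cli download GainEnergy/OGAI-24B-Q6_K-GGUF ogai-24b-q6_k.gguf --local-dir .
./llama-cli -m ogai-24b-q6_k.gguf -p "Explain the impact of wellbore stability on drilling efficiency."
```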

This model is optimized for oil and gas engineering applications, featuring domain-specific knowledge in drilling, completions, reservoir management, and production optimization.