---
datasets:
- Sweaterdog/Andy-4-base
- Sweaterdog/Andy-4-ft
- Sweaterdog/Andy-base-2
language:
- en
base_model:
- unsloth/DeepSeek-R1-Distill-Llama-8B-bnb-4bit
tags:
- gaming
- minecraft
- mindcraft
---

# 🧠 Andy‑4 ⛏️


![image/png](https://cdn-uploads.huggingface.co/production/uploads/66960602f0ffd8e3a381106a/raWYEDo2An1biTLXd5PfN.png)
**Andy‑4** is an 8 billion‑parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework.  Trained on a single RTX 3090 over **three weeks**, Andy‑4 delivers advanced reasoning, multi‑step planning, and robust in‑game decision‑making.
**The current version of Andy‑4 is** `Andy-4-0516`, named for the date training finished.

> ⚠️ **Certification:**  
> Andy‑4 is **not yet certified** by the Mindcraft developers. Use in production at your own discretion.

---
# This is the general model repo; other Andy-4 models are listed below:

### Andy-4 models:

*(Good all-around model for anyone with less than 16 GB of VRAM)*

* [This Repo](https://huggingface.co/Sweaterdog/Andy-4)

### Andy-4-micro models:

*(Great model for laptops or low-end PCs)*

* [Andy-4-micro *(Latest Version)*](https://huggingface.co/Sweaterdog/Andy-4-micro)
* [Andy-4-micro-0427](https://huggingface.co/Sweaterdog/Andy-4-micro-0427)

### Andy-4-tiny models:

*(Generally not recommended due to low performance, but great for edge-case scenarios like phones)*

* [Andy-4-tiny *(Not released)*](https://huggingface.co/Sweaterdog/Andy-4-tiny)

  Andy-4-tiny has yet to be released, but is in training

---

## If you are downloading from Hugging Face, follow these directions!
## DO NOT use the `Use This Model` feature on Hugging Face!
<details>
  <summary>Andy-4 Huggingface Install Directions</summary>

  Method One:
  
  1. Select the model you would like to use
  
  2. Download the Modelfile

  3. Once downloaded, open the Modelfile in a text editor and change the `FROM` parameter from `YOUR/PATH/HERE` to the download location of the `.gguf` file; the path has to be exact!
     
  4. When changed, save the file and open a command terminal
  
  5. *(Optional if the terminal wasn't opened via the file explorer)* Navigate to the correct directory using `cd`
  
  6. Run the command `ollama create sweaterdog/Andy-4 -f Modelfile`. If you want multiple models, include a tag afterwards, e.g. `sweaterdog/Andy-4:micro-fp16` or `sweaterdog/Andy-4:q2_k` *(see the example after these steps)*
  
  7. Go to a profile in MindCraft
  8. Change the model to `sweaterdog/Andy-4` *(or whatever you named your model)*
  
  9. Ensure you have the embedding field set to Ollama, like below
  ```
  {
    "name": "andy-4",

    "model": "Sweaterdog/Andy-4",

    "embedding": "ollama"

  }
  ```
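
  For example, registering the model under a quantization tag might look like this (the tag and file name are illustrative; any name works as long as your profile matches it):

  ```bash
  # Create a tagged copy of Andy-4 from the edited Modelfile
  ollama create sweaterdog/Andy-4:q4_k_m -f Modelfile
  ```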

  Method Two:
  
  1. Download the Modelfile

  2. Once downloaded, open the Modelfile in a text editor and change the `FROM` parameter from `YOUR/PATH/HERE` to one of the models listed in the `Use This Model` tab under Ollama; the options are:
    ```
    hf.co/Sweaterdog/Andy-4:Q2_K
    hf.co/Sweaterdog/Andy-4:Q3_K_M
    hf.co/Sweaterdog/Andy-4:Q4_K_M
    hf.co/Sweaterdog/Andy-4:Q5_K_M
    hf.co/Sweaterdog/Andy-4:Q8_0
    hf.co/Sweaterdog/Andy-4:F16
    ```

  3. When changed, save the file and open a command terminal
  
  4. *(Optional if the terminal wasn't opened via the file explorer)* Navigate to the correct directory using `cd`
  
  5. Run the command `ollama create sweaterdog/Andy-4 -f Modelfile`. If you want multiple models, include a tag afterwards, e.g. `sweaterdog/Andy-4:micro-fp16` or `sweaterdog/Andy-4:q2_k`
  
  6. Go to a profile in MindCraft

  7. Change the model to `sweaterdog/Andy-4` *(or whatever you named your model)*
  
  8. Ensure you have the embedding field set to Ollama, like below
  ```
  {
    "name": "andy-4",

    "model": "Sweaterdog/Andy-4",

    "embedding": "ollama"

  }
  ```
</details>

## DO NOT SKIP THIS SECTION IF YOU INTEND TO INSTALL FROM HUGGING FACE

---

## 🔍 Model Specifications

- **Parameters:** 8 B  
- **Training Hardware:** 1 × NVIDIA RTX 3090  
- **Duration:** ~3 weeks total  
- **Data Volumes:**  
  - **Messages:** 179,384  
  - **Tokens:** 425,535,198  
  - **Conversations:** 62,149  

- **Base Architecture:** DeepSeek-R1-Distill-Llama-8B
- **License:** [Andy 1.0 License](LICENSE)
- **Repository:** https://huggingface.co/Sweaterdog/Andy-4

---

## 📊 Training Regimen

1. **Andy‑4‑base‑1** dataset  
   - **Epochs:** 2  
   - **Learning Rate:**   4e-5
   - **Dataset Size:** 47.4k

2. **Andy‑4‑base-2** dataset  
   - **Epochs:** 2.5  
   - **Learning Rate:**   7e-5
   - **Dataset Size:** 49.2k

3. **Fine‑tune (FT) dataset**  
   - **Epochs:** 1  
   - **Learning Rate:** 2e-5
   - **Dataset Size:** 4.12k

- **Optimizer:** AdamW_8bit with cosine decay  
- **Quantization:** 4‑bit (`bnb-4bit`) for inference
- **Warm Up Steps:** 0.1% of each dataset

---

## 🚀 Installation

First, choose a quantization. The chart below assumes a context window of `8192` tokens.

| Quantization | VRAM Required |
|--------------|---------------|
| F16          | 20 GB+        |
| Q8_0         | 12 GB        |
| Q5_K_M       | 8 GB+         |
| Q4_K_M       | 6–8 GB        |
| Q3_K_M       | 6 GB (low)    |
| Q2_K         | 4–6 GB (ultra low)|

### 1. Installation directly on Ollama

1. Visit [Andy-4 on Ollama](https://ollama.com/Sweaterdog/Andy-4)
2. Copy the command after choosing model type / quantization
3. Run the command in the terminal
4. Set the profile's model to be what you installed, such as `ollama/sweaterdog/andy-4:latest`
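
For example, pulling the model from the Ollama registry might look like this (the tag is illustrative; use whatever command the Ollama page gives you for your chosen quantization):

```bash
# Download the chosen quantization from ollama.com (tag is an example)
ollama pull sweaterdog/andy-4:latest
```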

### 2. Manual Download & Modelfile

1. **Download**  
   - From the HF **Files** tab, grab your chosen `.GGUF` quant weights (e.g. `Andy-4.Q4_K_M.gguf`).  
   - Download the provided `Modelfile`.


2. **Edit**

   Change
   ```text
   FROM YOUR/PATH/HERE
   ```
   to
   ```text
   FROM /path/to/Andy-4.Q4_K_M.gguf
   ```
   *Optional:*
   Increase the `num_ctx` parameter for longer conversations if you:

   **A.** Have extra VRAM

   **B.** Quantized the context window

   **C.** Can use a smaller model

   (A sample Modelfile is shown after these steps.)

3. **Create**  
   ```bash
   ollama create andy-4 -f Modelfile
   ```

This registers the **Andy‑4** model locally.
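
For reference, a fully edited Modelfile might look like this minimal sketch (the path and `num_ctx` value are illustrative; only the `FROM` line is required):

```text
FROM /path/to/Andy-4.Q4_K_M.gguf

# Optional: raise the context window for longer conversations
PARAMETER num_ctx 16384
```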

---

If you lack a GPU, check the [Mindcraft Discord guide](https://ptb.discord.com/channels/1303399789995626667/1347027684768878644/1347027684768878644) for free cloud setups.


## 🔧 Context‑Window Quantization

To lower VRAM use for context windows:

#### **Windows**

1. Close Ollama.  
2. In **System Properties → Environment Variables**, add:  
   ```text
   OLLAMA_FLASH_ATTENTION=1  
   OLLAMA_KV_CACHE_TYPE=q8_0   # or q4_0 for extra savings, but far more unstable
   ```  
3. Restart Ollama.
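
Alternatively, the same variables can be set persistently from a Command Prompt with `setx` instead of the dialog above (a sketch of an equivalent route; restart Ollama afterwards for it to take effect):

```text
setx OLLAMA_FLASH_ATTENTION 1
setx OLLAMA_KV_CACHE_TYPE q8_0
```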

#### **Linux/macOS**

```bash
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0"   # or "q4_0", but far more unstable
ollama serve
```

---

## 📌 Acknowledgments

<details>
<summary>Click to expand</summary>

- **Data & Models by:** @Sweaterdog  
- **Framework:** Mindcraft (https://github.com/kolbytn/mindcraft)  
- **LoRA Weights:** https://huggingface.co/Sweaterdog/Andy-4-LoRA
- Explicit credit is not given to Meta, since this model was trained from a modified architecture, [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B).
</details>

---

## ⚖️ License

See [Andy 1.0 License](LICENSE).

*This work uses data and models created by @Sweaterdog.*