Sweaterdog committed on
Commit 29cdd03 · verified · 1 Parent(s): f9fee50

Update README.md

Files changed (1)
  1. README.md +161 -3
README.md CHANGED
@@ -1,3 +1,161 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - Sweaterdog/Andy-3.5-MASSIVE
+ - Sweaterdog/Andy-3.5
+ - Sweaterdog/Andy-3.5-reasoning
+ language:
+ - en
+ base_model:
+ - unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit
+ tags:
+ - minecraft
+ - Mindcraft
+ - Minecraft
+ - MindCraft
+ ---
+
+ # 🚀 Welcome to Next Generation Minecraft with Andy 3.6 🚀
+
+ ## Andy 3.6 is a **LOCAL** model beating Andy-3.5 in performance
+ *Andy 3.6 is designed to be used with MindCraft, and is neither designed nor intended to be used for any other application*
+
+
+ > [!WARNING]
+ > # Please note!
+ >
+ > Andy-3.6 was trained on older data, not on the newest versions of Mindcraft.
+ >
+ > I **cannot** guarantee that Andy-3.6 will work on future versions, as the model was tuned to play MindCraft with a specific version!
+ >
+ > For the rest of the Andy-3.6 generation, this model will **ONLY** be supported on the version of Mindcraft in [this GitHub repo!](https://github.com/Sweaterdog/Mindcraft-for-Andy-3.5)
+ >
+ > For more info, as well as the supported version of Mindcraft, please follow [this link to GitHub](https://github.com/Sweaterdog/Mindcraft-for-Andy-3.5)
+
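+ If you need that exact version, a plain git clone of the repo linked above is enough; this is a generic sketch rather than a step from the original instructions:
+ ```
+ # Clone the pinned Mindcraft version referenced in the warning above
+ git clone https://github.com/Sweaterdog/Mindcraft-for-Andy-3.5
+ ```
+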
+ # How to Install / Setup
+
+ 1. Select the model you would like to use *(the Regular and Small models are recommended)*
+ 2. Download the Modelfile
+ 3. Once downloaded, open the Modelfile in a text editor and change the path to the download location of the GGUF file *(see the sketch just after this list)*
+ 4. When changed, save the file and open a command terminal
+ 5. *(Optional; skip if the terminal was opened in the Modelfile's folder via the file explorer)* Navigate to the correct directory using "cd"
+ 6. Run the command ```ollama create sweaterdog/Andy-3.5 -f Modelfile```. If you want multiple models, include a tag afterwards. Example: sweaterdog/Andy-3.5:mini-fp16 or sweaterdog/Andy-3.5:q2_k
+ 7. Go to a profile in MindCraft
+ 8. Change the model to be ```sweaterdog/Andy-3.5``` *or whatever you named your model*
+ 9. Ensure you have the embedding field set to ollama, like below
+ ```
+ {
+   "name": "andy-3.5",
+   "model": "Sweaterdog/Andy-3.5",
+   "embedding": "ollama"
+ }
+ ```
+
+ 10. Enjoy playing with an AI that you are hosting!
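+
+ For step 3, the path you change lives on the Modelfile's FROM line. The sketch below is only illustrative; the path and file name are placeholders, not the actual names of the GGUF files in this repo:
+ ```
+ # Placeholder path: point this at wherever you saved the downloaded GGUF
+ FROM C:/Users/you/Downloads/Andy-3.6.Q4_K_M.gguf
+ ```
+ After saving that change, the ```ollama create``` command in step 6 reads the Modelfile and registers the model under the name you chose.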
+
+
+ # How was the model trained?
+
+ The model was trained on the [MindCraft dataset](https://huggingface.co/datasets/Sweaterdog/Andy-3.5-MASSIVE) for Andy-3.6, a curated dataset for Q&A, reasoning, and playing, which includes ~22,000 prompts.
+
+ # What are the capabilities and limitations?
+
+ Andy-3.6 was trained on EVERYTHING regarding Minecraft and MindCraft, so it knows how to use commands natively, without a system prompt.
+ Andy-3.6 also knows how to build and how to use !newAction to perform commands; it was trained on lots of building, as well as on using !newAction for tasks like manually making something or strip mining.
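+
+ Purely as an illustration of that command style, a reply from the model might look like the line below; the wording and arguments are an invented example, not taken from the dataset:
+ ```
+ Sure! I'll get us started on a base. !newAction("Build a small 5x5 oak plank shelter with a door next to my current position")
+ ```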
+
+ # What models can I choose?
+
+ There are going to be multiple model sizes available: Regular, Small, and Mini.
+ * Regular is a 7B parameter model, tuned from [Deepseek-R1 Distilled](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
+ * Small is a 3B parameter model, tuned from [Qwen2.5 3B](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
+
+ Both models will have **case-by-case reasoning** baked **into** the model, meaning that when a hard task comes up, the model will reason through it.
+
+ You can also *prompt* Andy-3.6 to reason for better performance.
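+
+ As a purely hypothetical example of such a prompt (the wording is mine, and how you address the bot depends on your profile name), an in-game chat message could look like:
+ ```
+ Andy, think through the steps before you pick a command, then go make a stone pickaxe.
+ ```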
+
+ # Safety and FAQ
+
+ Q: Is this model safe to use?
+
+ A: Yes, this model is non-volatile and cannot generate malicious content.
+
+ Q: Can this model be used on a server?
+
+ A: Yes. In theory and in practice, the model is only capable of building and performing manual tasks via newAction.
+
+ Q: Who is responsible if this model does generate malicious content?
+
+ A: You are responsible. Even though the model was never trained to be able to make malicious content, there is a ***very very slight chance*** it still generates malicious code.
+
+ Q: If I make media based on this model, like photos / videos, do I have to mention the creator?
+
+ A: No. If you are making a post about MindCraft and using this model, you only have to mention the creator if you mention the model being used.
+
+ # 🔥UPDATE🔥
+
+ ## **Andy-3.6 Release!**
+ Andy-3.6 is our Next Generation model, featuring more capabilities and stronger performance than ANY other local LLM in Mindcraft!
+
+ > [!NOTE]
+ > # I want to thank all supporters!
+ > I would love to thank everyone who supported this project; there is a list of supporters in the files section.
+ >
+ > You can find all of the supporters [here](https://huggingface.co/Sweaterdog/Andy-3.5/blob/main/Supporters.txt)
+
+ # Performance Metrics
+
+ These benchmarks are atypical, since most standard benchmarks don't apply to Minecraft.
+
+ The benchmarks below include cheap API models as well as other fine-tuned local models.
+
+ ## Zero info Prompting
+ *How fast can a model collect 16 oak logs and convert them all into sticks?*
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66960602f0ffd8e3a381106a/IEw1Gydg943qVSNGAL3RW.png)
+
+
+ As shown, the only models capable of playing without extra information are Andy-3.6 and the Andy-3.5 models.
+
+ You can test this demo out for yourself using [this profile](https://huggingface.co/Sweaterdog/Andy-3.5/blob/main/local_demo.json)
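+
+ For context, that task roughly boils down to a command sequence like the one below. This is only a sketch: the exact counts are illustrative, and the command names are the ones that appear in the benchmark notes further down:
+ ```
+ !collectBlocks("oak_log", 16)
+ !craftRecipe("oak_planks", 16)
+ !craftRecipe("stick", 32)
+ ```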
+
+
+ ## Time to get a stone pickaxe
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66960602f0ffd8e3a381106a/frrT9IcJsNeUOLhszFrOq.png)
+
+ *For Andy-3.5-mini, I used the FP16 model, since I had enough VRAM to do so*
+
+ *For Andy-3.5, I used the Q4_K_M quantization*
+
+ *For Andy-3.5-small, I used the Q8_0 quantization*
+
+ *Andy-3.5-reasoning-small was the most efficient model, producing the fewest messages, but took a whopping 34.5 minutes to get a stone pickaxe.*
+
+ *For Andy-3.5-Teensy, I used the FP16 quantization*
+
+ *For Mineslayerv1 and Mineslayerv2, I used the default (and only) quantization, Q4_K_M*
+
+ ## Notes about the benchmarks
+
+ **Zero Info Prompting**
+
+ Andy-3.5-Mini collected 32 oak_log instead of 16 oak_log.
+
+ Andy-3.5-small: *No notes*
+
+ Andy-3.5 attempted to continue playing and make a wooden_pickaxe after the goal was done.
+
+ Both Mineslayerv1 and Mineslayerv2 hallucinated commands, like !chop or !grab.
+
+ **Time to get a stone pickaxe**
+
+ Andy-3.5-Mini was unable to make itself a stone pickaxe; it collected enough wood but then got stuck converting logs to planks, repeatedly trying !craftRecipe("wooden_planks", 6) instead of oak_planks.
+
+ Andy-3.5-small kept trying to make a stone_pickaxe first.
+
+ Andy-3.5 made a stone pickaxe the fastest out of all models, including GPT-4o-mini and Claude-3.5-Haiku.
+
+ Mineslayerv1 was unable to use !collectBlocks, and instead kept trying !collectBlock.
+
+ Mineslayerv2 was unable to play; it kept hallucinating on the first command.