# 🤏 Andy‑4‑micro 🧠
Andy‑4‑micro is a lightweight Minecraft-tuned AI model derived from the Andy‑4 architecture. Built for responsiveness and portability, it’s ideal for local testing, light inference, and experimentation within the Mindcraft framework.
The current version of Andy-4-micro is Andy-4-micro-0417. All previous versions of Andy-4-micro can still be found on my Hugging Face page.
💡 Trained on a single RTX 3070 over four days, Andy‑4‑micro maintains strong performance while staying efficient.
⚠️ Certification:
Andy‑4‑micro is not yet certified by the Mindcraft developers. Use in production at your own discretion.
## 📊 Model Overview
- Base Architecture: Qwen 2.5
- Parameter Count: 1.5 B
- Training Duration: ~4 days
- Training GPU: 1 × NVIDIA RTX 3070
- Total Tokens Used: ~42M
- License: Andy 1.1 License
- Repository: https://huggingface.co/Sweaterdog/Andy-4-micro
## 🚀 Installation
First, choose your quantization. The chart below assumes a context window of 8192 tokens.
| Quantization | VRAM Required |
|---|---|
| F16 | 6 GB+ |
| Q5_K_M | 4 GB+ |
| Q4_K_M | 4 GB |
| Q3_K_M | 1.5 GB or CPU |
NOTE: GPUs made before 2017 will be significantly slower than newer GPUs, and CPU inference will be extremely slow.
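If you are unsure how much VRAM your GPU has, `nvidia-smi` (NVIDIA GPUs only) can report it:

```bash
# Show total and currently free VRAM per GPU (NVIDIA only)
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```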
### 1. Installation directly on Ollama
- Visit Andy-4 on Ollama.
- Copy the command after choosing a model type / quantization.
- Run the command in your terminal.
- Set your profile's model to the tag you installed, such as `ollama/sweaterdog/andy-4:latest` (a sketch of a profile follows below).
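For reference, here is a minimal sketch of what that profile setting can look like. The file path, the `name` value, and the exact profile schema are assumptions; check the Mindcraft docs for the authoritative format:

```bash
# Minimal sketch, NOT the authoritative Mindcraft schema:
# writes a profile JSON whose "model" field points at the installed Ollama tag
cat > profiles/andy-4-micro.json <<'EOF'
{
  "name": "andy-4-micro",
  "model": "ollama/sweaterdog/andy-4:latest"
}
EOF
```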
### 2. Manual Download & Setup
#### Download
- Visit the Hugging Face Files tab.
- Download the `.GGUF` quantization weights (e.g. `Andy-4-micro.Q4_K_M.gguf`).
- Grab the provided `Modelfile` (or fetch both files via the command-line sketch below).
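As a command-line alternative, the `huggingface-cli download` command from the `huggingface_hub` package can fetch individual files. The quant filename here assumes Q4_K_M, and `Modelfile` assumes the file is stored under that exact name in the repo:

```bash
# Requires: pip install -U huggingface_hub
# Download one quant and the Modelfile into the current directory
huggingface-cli download Sweaterdog/Andy-4-micro \
  Andy-4-micro.Q4_K_M.gguf Modelfile \
  --local-dir .
```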
#### Edit the `Modelfile`
Change the path placeholder:

```
FROM YOUR/PATH/HERE
```

to:

```
FROM /path/to/Andy-4-micro.Q4_K_M.gguf
```

Optional: adjust `num_ctx` for longer context windows if your system supports it.
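Put together, a complete `Modelfile` can be as small as this. `PARAMETER num_ctx` is standard Ollama Modelfile syntax for setting the context window, and 8192 matches the context window the VRAM chart above assumes:

```
# Example Modelfile (the .gguf path is a placeholder)
FROM /path/to/Andy-4-micro.Q4_K_M.gguf
PARAMETER num_ctx 8192
```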
#### Create Model

```bash
ollama create andy-4-micro -f Modelfile
```

This registers Andy‑4‑micro locally with Ollama.
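To verify the registration, the standard Ollama CLI can list installed models and run a quick prompt:

```bash
ollama list                     # andy-4-micro should appear in this list
ollama run andy-4-micro "Hi!"   # one-off prompt as a quick smoke test
```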
If you lack a GPU, check the Mindcraft Discord guide for free cloud setups.
## 🔧 Context‑Window Quantization
To reduce the VRAM used by the context window (the KV cache):
### Windows
- Close Ollama.
- In System Properties → Environment Variables, add:
  ```
  OLLAMA_FLASH_ATTENTION=1
  OLLAMA_KV_CACHE_TYPE=q8_0   # or q4_0 for extra savings, though far less stable
  ```
- Restart Ollama.
Linux/macOS
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0" # or "q4_0", but far more unstable
ollama serve
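Those `export` lines only apply to the current shell session. One way to persist them (assuming a bash setup; adapt for zsh or a systemd-managed Ollama service) is to append them to your shell profile:

```bash
# Persist the settings across sessions (~/.bashrc assumed)
echo 'export OLLAMA_FLASH_ATTENTION=1' >> ~/.bashrc
echo 'export OLLAMA_KV_CACHE_TYPE="q8_0"' >> ~/.bashrc
```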
## 📌 Acknowledgments
- Data & Model by: @Sweaterdog
- Framework: Mindcraft (https://github.com/kolbytn/mindcraft)
- LoRA Weights: https://huggingface.co/Sweaterdog/Andy-4-micro-LoRA
## ⚖️ License
See Andy 1.1 License.
This work uses data and models created by @Sweaterdog.