🤏 Andy‑4‑micro 🧠

Andy‑4‑micro is a lightweight Minecraft-tuned AI model derived from the Andy‑4 architecture. Built for responsiveness and portability, it’s ideal for local testing, light inference, and experimentation within the Mindcraft framework.

The current version of Andy-4-micro is Andy-4-micro-0427; all previous versions of Andy-4-micro can still be found on my Hugging Face page.

💡 Trained on a single RTX 3070 over four days, Andy‑4‑micro maintains strong performance while staying efficient.

⚠️ Certification:
Andy‑4‑micro is not yet certified by the Mindcraft developers. Use in production at your own discretion.


📊 Model Overview

  • Parameters: 1.78B
  • Architecture: Qwen2
  • Format: GGUF (F16, Q5_K_M, Q4_K_M, and Q3_K_M quantizations)

🚀 Installation

First, choose a quantization. The table below assumes the default context window of 8192 tokens.

| Quantization | VRAM Required |
|--------------|---------------|
| F16          | 6 GB+         |
| Q5_K_M       | 4 GB+         |
| Q4_K_M       | 4 GB          |
| Q3_K_M       | 1.5 GB or CPU |

NOTE: GPUs made before 2017 will run significantly slower than newer GPUs, and CPU inference will be extremely slow.

1. Installation directly on Ollama

  1. Visit Andy-4 on Ollama
  2. Choose a model type / quantization and copy the command shown
  3. Run the command in your terminal
  4. Set the profile's model to the tag you installed, such as ollama/sweaterdog/andy-4:latest (see the example profile snippet below)
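
As a rough sketch, a Mindcraft profile pointing at the installed tag might look like the following; the exact fields your profile uses (such as "name") may differ:

{
  "name": "andy-4-micro",
  "model": "ollama/sweaterdog/andy-4:latest"
}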

2. Manual Download & Setup

  1. Download

    • Visit the Hugging Face Files tab.
    • Download the .GGUF quantization weights (e.g. Andy-4-micro.Q4_K_M.gguf).
    • Grab the provided Modelfile.
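
If you prefer the command line, huggingface-cli can fetch both files into the current directory; the repo ID and filenames below assume the Q4_K_M build of Andy-4-micro-0427 and that the Modelfile is stored under that exact name:

huggingface-cli download Sweaterdog/Andy-4-micro-0427 Andy-4-micro.Q4_K_M.gguf Modelfile --local-dir .
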
  2. Edit Modelfile

Change the path placeholder:

FROM YOUR/PATH/HERE

to:

FROM /path/to/Andy-4-micro.Q4_K_M.gguf

Optional: Adjust num_ctx for longer context windows if your system supports it.
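
Put together, a complete Modelfile can be as small as this (the path and context length are illustrative):

FROM /path/to/Andy-4-micro.Q4_K_M.gguf
PARAMETER num_ctx 8192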

  3. Create Model
ollama create andy-4-micro -f Modelfile

This registers Andy‑4‑micro locally with Ollama.
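
To confirm the registration, list your local models and optionally open an interactive session:

ollama list
ollama run andy-4-micro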


If you lack a GPU, check the Mindcraft Discord guide for free cloud setups.

🔧 Context‑Window Quantization

To lower the VRAM used by the context window (Ollama's KV cache):

Windows

  1. Close Ollama.
  2. In System Properties → Environment Variables, add:
    OLLAMA_FLASH_ATTENTION=1  
    OLLAMA_KV_CACHE_TYPE=q8_0   # or q4_0 for extra savings, but far more unstable
    
  3. Restart Ollama.

Linux/macOS

export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0"   # or "q4_0", but far more unstable
ollama serve
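
Separately, if your hardware can handle a longer context for a single request, Ollama's HTTP API accepts a per-call num_ctx override; the prompt below is just a placeholder:

curl http://localhost:11434/api/generate -d '{
  "model": "andy-4-micro",
  "prompt": "Hello",
  "options": { "num_ctx": 16384 }
}'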

📌 Acknowledgments


⚖️ License

See Andy 1.1 License.

This work uses data and models created by @Sweaterdog.
