Zen Eco 4B Agent - MLX

MLX-quantized version of zenlm/zen-eco-4b-agent, optimized for Apple Silicon Macs.

Usage

# Requires the mlx-lm package (pip install mlx-lm)
from mlx_lm import load, generate

# Load the MLX-quantized weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("zenlm/zen-eco-4b-agent-mlx")

# Generate a short completion from a plain prompt
response = generate(model, tokenizer, prompt="Who are you?", max_tokens=50)
print(response)
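
Because the underlying model is instruction-tuned, prompts generally work better when wrapped in the tokenizer's chat template. A minimal sketch, assuming the tokenizer loaded by mlx-lm exposes the standard Hugging Face apply_chat_template method:

from mlx_lm import load, generate

model, tokenizer = load("zenlm/zen-eco-4b-agent-mlx")

# Format the user message with the model's chat template
# (assumes the standard apply_chat_template interface is available).
messages = [{"role": "user", "content": "Who are you?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=100)
print(response)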

Original Model

  • Base: Qwen3-4B
  • Parameters: 4B
  • Training: Fine-tuned for the Zen identity and tool-calling capabilities (see the tool-calling sketch after this list)
  • Developer: Hanzo AI
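
For tool calling, tool specifications can be passed through the chat template so the model emits a structured call. The sketch below is an illustration only: the get_weather schema is hypothetical, and it assumes the Qwen3-style chat template accepts a tools argument (check the template shipped with the tokenizer).

from mlx_lm import load, generate

model, tokenizer = load("zenlm/zen-eco-4b-agent-mlx")

# Hypothetical tool schema for illustration; the exact format expected
# depends on the chat template bundled with the tokenizer.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)

# The response is expected to contain a structured tool call
# (e.g. a <tool_call> block) rather than a plain-text answer.
response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)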