# Zen Guard Stream v1.0.1
## Available Formats

This model is available in multiple formats for different platforms:
### SafeTensors (Base Format)

- Standard HuggingFace format
- Compatible with the Transformers library
- Use for training and fine-tuning
### MLX Format (Apple Silicon Optimized)

- `/mlx/`: full-precision MLX weights
- `/mlx-4bit/`: 4-bit quantized weights (fastest on Mac); see the download sketch below
### GGUF Format (Coming Soon)

- Will be added for llama.cpp compatibility
- CPU-optimized for all platforms
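
If you only need one variant, you can fetch a single subfolder with `huggingface_hub` (a minimal sketch; the `mlx-4bit/` folder name follows the layout listed above, adjust if the repo differs):

```python
from huggingface_hub import snapshot_download

# Download only the 4-bit MLX variant; other files in the repo are skipped.
local_dir = snapshot_download(
    repo_id="zenlm/zen-guard-stream-4b",
    allow_patterns=["mlx-4bit/*"],
)
print(local_dir)  # snapshot root; the MLX weights are under mlx-4bit/
```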
## Quick Start

### Using Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("zenlm/zen-guard-stream-4b")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-guard-stream-4b")
```
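
Once the model and tokenizer are loaded, a minimal generation call looks like this (the prompt and `max_new_tokens` value are placeholders, not recommended settings):

```python
import torch

# Tokenize a placeholder prompt and generate a short completion.
inputs = tokenizer("Your prompt", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```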
### Using MLX (Apple Silicon)
```python
from mlx_lm import load, generate

# Load the model with MLX. To use the 4-bit weights in the mlx-4bit/
# subfolder, download that folder locally (see the sketch above) and pass
# the local path to load() instead of the repo id.
model, tokenizer = load("zenlm/zen-guard-stream-4b")

# Generate
response = generate(model, tokenizer, prompt="Your prompt", max_tokens=256)
print(response)
```
### Using llama.cpp (GGUF - Coming Soon)

```bash
llama-cli -m gguf/zen-guard-stream-q4_k_m.gguf -p "Your prompt"
```
## Training with Zoo-Gym

```bash
pip install zoo-gym
zoo-gym train --model zenlm/zen-guard-stream-4b --data your_data.jsonl
```
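
The record schema zoo-gym expects isn't documented in this card; as a rough sketch only, the snippet below writes instruction-style records to `your_data.jsonl` with assumed field names (`instruction`, `input`, `output`). Check the zoo-gym documentation for the actual format.

```python
import json

# Hypothetical instruction-tuning records; the field names are an assumption,
# not the documented zoo-gym schema.
records = [
    {"instruction": "Classify whether this message is safe.",
     "input": "How do I reset my password?",
     "output": "safe"},
]
with open("your_data.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```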
## Model Details
- Architecture: Based on Qwen 2.5
- Training: Zoo-Gym with RAIS (Recursive AI Self-Improvement System)
- License: Apache 2.0
- Partnership: Hanzo AI x Zoo Labs Foundation
## Citation

```bibtex
@misc{zen_zen_guard_stream_2025,
  title={Zen Guard Stream v1.0.1},
  author={Hanzo AI and Zoo Labs Foundation},
  year={2025},
  version={1.0.1}
}
```