Maintainer / Publisher: Susant Achary
This repository provides an Apple-Silicon MLX build of IBM Granite-4.0-H-Tiny quantized to 6-bit.
Among MLX quant variants, 6-bit offers the highest fidelity while still fitting comfortably on modern M-series Macs. If your workload involves precise extraction, structured outputs, or long contexts, 6-bit is usually the best on-device choice.
Use this table as a practical guide for a ~7B hybrid MoE LM on Apple Silicon. (Figures vary by device and context length; a quick way to measure your own peak memory follows the table.)
| Variant | Typical Peak RAM | Relative Speed | Typical Behavior | When to Choose |
|---|---|---|---|---|
| 2-bit | ~3–4 GB | 🔥🔥🔥🔥 | Smallest, most lossy | Minimal-RAM devices; smoke tests |
| 3-bit | ~5–6 GB | 🔥🔥🔥🔥 | Direct, concise | Great default on M1/M2/M3/M4 |
| 4-bit | ~6–7.5 GB | 🔥🔥🔥 | Better detail retention | If 3-bit misses details |
| 5-bit | ~8–9 GB | 🔥🔥☆ | Higher fidelity | Heavier docs/structured outputs |
| 6-bit (this repo) | ~9.5–11 GB | 🔥🔥 | Highest MLX fidelity | Best quality on-device if RAM permits |
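To see where your own machine lands in the RAM column, here is a minimal sketch using the mlx_lm Python API. Assumptions: `mlx` and `mlx-lm` are installed, `<this-repo-id>` is a placeholder for this repository's Hugging Face id, and `mx.metal.get_peak_memory()` may be exposed as `mx.get_peak_memory()` in newer mlx releases.

```python
import mlx.core as mx
from mlx_lm import load, generate

# Placeholder id: substitute this repository's Hugging Face id.
model, tokenizer = load("<this-repo-id>")

# Run a short generation so weights and KV cache are actually allocated.
text = generate(model, tokenizer, prompt="Hello from MLX.", max_tokens=64)
print(text)

# Peak Metal memory in GB (the API name varies slightly across mlx versions).
print(f"Peak memory: {mx.metal.get_peak_memory() / 1e9:.1f} GB")
```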
Tips
This card documents the MLX 6-bit conversion. For lower-RAM devices, see the 2/3/4/5-bit guidance in the table above.
Repository contents
- `config.json` (MLX)
- `mlx_model*.safetensors` (6-bit shards)
- `tokenizer.json`, `tokenizer_config.json`
- `model_index.json`

This build targets macOS on Apple Silicon (M-series) using Metal/MPS.
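If you want to confirm the Metal backend is active (for example, after a fresh mlx install), a minimal check using standard mlx calls:

```python
import mlx.core as mx

# MLX selects the Metal GPU automatically on Apple Silicon; this just verifies it.
print(mx.metal.is_available())  # True on a working M-series setup
print(mx.default_device())      # Device(gpu, 0)
```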
Deterministic generation
```bash
# MLX runs on Metal automatically; no device flag is needed.
python -m mlx_lm.generate \
  --model <this-repo-id> \
  --prompt "Summarize the following meeting notes in 5 bullet points:\n<your text>" \
  --max-tokens 256 \
  --temp 0.0 \
  --seed 0
```
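The same run from Python, as a hedged sketch: recent mlx_lm versions decode greedily by default, which matches `--temp 0.0`, and `mx.random.seed` pins any remaining sampling randomness. The repo id is again a placeholder.

```python
import mlx.core as mx
from mlx_lm import load, generate

mx.random.seed(0)  # mirrors --seed 0 above

model, tokenizer = load("<this-repo-id>")  # placeholder repo id

prompt = "Summarize the following meeting notes in 5 bullet points:\n<your text>"
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```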
Base model: `ibm-granite/granite-4.0-h-tiny`