sjug/GLM-4.5-MLX-9bit

This model, sjug/GLM-4.5-MLX-9bit, was converted to MLX format from zai-org/GLM-4.5 using mlx-lm version 0.26.3.
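
For reference, a conversion like this can be reproduced with mlx-lm's convert utility. The call below is a minimal sketch assuming the Python convert() API available in recent mlx-lm releases; the quantization arguments are illustrative only, since the exact recipe behind this repo's ~9-bit average is not documented in the card.

# Minimal sketch, not the exact command used for this repo.
# Assumptions: mlx_lm exposes convert(); the q_bits / q_group_size
# values shown are illustrative defaults, not the 9-bit mixed recipe.
from mlx_lm import convert

convert(
    "zai-org/GLM-4.5",        # source Hugging Face repository
    mlx_path="GLM-4.5-MLX",   # hypothetical local output directory
    quantize=True,            # quantize weights during conversion
    q_bits=8,                 # illustrative bit width
    q_group_size=64,          # mlx-lm's default quantization group size
)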

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hub.
model, tokenizer = load("sjug/GLM-4.5-MLX-9bit")

prompt = "hello"

# Apply the model's chat template if one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
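
For longer outputs you can also stream the response as it is generated. The snippet below is a sketch assuming mlx-lm's stream_generate helper, which in recent releases yields response objects exposing a .text segment.

# Sketch: stream the reply token by token instead of waiting for the
# full generation. Assumes stream_generate yields objects with .text.
from mlx_lm import load, stream_generate

model, tokenizer = load("sjug/GLM-4.5-MLX-9bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
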
Model size: 353B params (Safetensors). Tensor types: BF16, U32, F32.

Model tree for sjug/GLM-4.5-MLX-9bit

Base model: zai-org/GLM-4.5 (this repo is a quantized conversion of it)