gpt-oss-20B-mlx-metal-int4
This is an INT4-quantized build of openai/gpt-oss-20b for the Apple MLX framework. It runs on Apple Silicon devices (M1, M2, M3, M4).
Note: This is an unofficial conversion, intended for testing and development only.
Installation
pip install -U mlx-lm
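To confirm the install picked up a recent release, you can print the package version (a quick check, assuming mlx_lm exposes __version__ as current releases do):
python -c "import mlx_lm; print(mlx_lm.__version__)"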
Conversion
python -m mlx_lm.convert --hf-path openai/gpt-oss-20b -q
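The -q flag quantizes the weights during conversion. As a sketch, you can also set the bit width and output directory explicitly; the exact flag names (--q-bits, --mlx-path) may vary between mlx-lm releases, so check python -m mlx_lm.convert --help for your version:
python -m mlx_lm.convert --hf-path openai/gpt-oss-20b -q --q-bits 4 --mlx-path ./gpt-oss-20B-mlx-metal-int4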
Samples
from mlx_lm import load, generate

# Load the quantized model and tokenizer from your local conversion path
model, tokenizer = load("Your gpt-oss-20B-mlx-metal-int4 Path")

# Build a chat prompt using the model's chat template
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "can you introduce yourself"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Generate a response (verbose=True streams tokens to stdout)
response = generate(model, tokenizer, prompt=prompt, max_tokens=1024, verbose=True)
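For a quick test without writing Python, mlx-lm also provides a command-line generator. A minimal sketch, assuming the path placeholder below points at your converted model:
python -m mlx_lm.generate --model "Your gpt-oss-20B-mlx-metal-int4 Path" --prompt "can you introduce yourself" --max-tokens 1024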