# cogbuji/OpenHermes-2.5-Mistral-7B-mlx-4bit
This model was converted to MLX format from [`teknium/OpenHermes-2.5-Mistral-7B`](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and quantized to 4 bits. Refer to the original model card for more details on the model.
It was converted and quantized with mlx 0.7.0 and mlx_lm 0.3.0 and should be used with those versions; later releases of these libraries may drop support for this model.
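For reference, MLX conversions like this one are typically produced with mlx_lm's convert utility. The sketch below is illustrative rather than the exact invocation used for this repo; it assumes the `convert()` helper exported by mlx_lm (the CLI form `python -m mlx_lm.convert` is equivalent), and parameter names may vary between mlx_lm versions.

```python
# Illustrative sketch only, assuming mlx_lm exports convert();
# not necessarily the exact command used to build this repo.
from mlx_lm import convert

convert(
    "teknium/OpenHermes-2.5-Mistral-7B",            # source Hugging Face repo
    mlx_path="OpenHermes-2.5-Mistral-7B-mlx-4bit",  # local output directory
    quantize=True,                                  # quantize during conversion
    q_bits=4,                                       # 4-bit weights, matching this repo's name
)
```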
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("cogbuji/OpenHermes-2.5-Mistral-7B-mlx-4bit")

# Generate a completion for a raw prompt
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
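OpenHermes 2.5 is a chat model trained on ChatML-formatted conversations, so for conversational use the prompt should normally be run through the tokenizer's chat template rather than passed raw. A minimal sketch, assuming the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template` method:

```python
from mlx_lm import load, generate

model, tokenizer = load("cogbuji/OpenHermes-2.5-Mistral-7B-mlx-4bit")

# Wrap the user message in the model's ChatML template before generating
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```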