---
thumbnail: >-
https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/jg2NWmCUfPyzizm2USjMt.jpeg
datasets:
- NewEden/Orion-LIT
- NewEden/Orion-Asstr-Stories-16K
- Mielikki/Erebus-87k
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- Nitral-AI/ARES-ShareGPT
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k
- NewEden/Claude-Instruct-2.7K
- NewEden/Claude-Instruct-5K
base_model: Delta-Vector/Hamanasu-15B-Instruct
tags:
- phi
- roleplay
- finetune
- storywriting
- mlx
- mlx-my-repo
---

# aimeri/Hamanasu-15B-Instruct-6bit
This model was converted to MLX format from [Delta-Vector/Hamanasu-15B-Instruct](https://huggingface.co/Delta-Vector/Hamanasu-15B-Instruct) using mlx-lm version **0.21.5**.
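For reference, a quantized repo like this one is typically produced with the `mlx_lm.convert` entry point. This is a hedged sketch, not the exact command used here; flag names may vary between mlx-lm releases, and `--upload-repo` assumes you are logged in to the Hub:

```shell
# Sketch: convert the base model to MLX with 6-bit quantization
# and push the result to a Hub repo (names are illustrative).
mlx_lm.convert \
  --hf-path Delta-Vector/Hamanasu-15B-Instruct \
  -q --q-bits 6 \
  --upload-repo aimeri/Hamanasu-15B-Instruct-6bit
```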
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer from the Hub
model, tokenizer = load("aimeri/Hamanasu-15B-Instruct-6bit")

prompt = "hello"

# Apply the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
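For quick one-off generation without writing any Python, mlx-lm also installs a command-line entry point. A minimal sketch, assuming a recent mlx-lm (the `--max-tokens` value is illustrative):

```shell
# Generate a short completion directly from the terminal
mlx_lm.generate \
  --model aimeri/Hamanasu-15B-Instruct-6bit \
  --prompt "hello" \
  --max-tokens 256
```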