---
language:
  - en
license: llama3
tags:
  - Llama-3.1
  - unsloth
  - instruct
  - finetune
  - reasoning
  - hybrid-mode
  - chatml
  - function calling
  - tool use
  - json mode
  - structured outputs
  - atropos
  - dataforge
  - long context
  - roleplaying
  - chat
  - mlx
  - mlx-my-repo
base_model: unsloth/Hermes-4-405B
library_name: transformers
widget:
  - example_title: Hermes 4
    messages:
      - role: system
        content: >-
          You are Hermes 4, a capable, neutrally-aligned assistant. Prefer
          concise, correct answers.
      - role: user
        content: Explain the difference between BFS and DFS to a new CS student.
model-index:
  - name: Hermes-4-Llama-3.1-405B
    results: []
---

# mrtoots/unsloth-Hermes-4-405B-mlx-3Bit

The model mrtoots/unsloth-Hermes-4-405B-mlx-3Bit was converted to MLX format from unsloth/Hermes-4-405B using mlx-lm version 0.26.4.
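
If you want to reproduce a similar 3-bit conversion yourself, mlx-lm ships a `convert` helper. The snippet below is a minimal sketch, assuming the `convert()` function and its `quantize`/`q_bits` arguments as found in mlx-lm 0.26.x; the output directory name is illustrative, so check your installed version's documentation for the exact options.

```python
# Minimal sketch of a 3-bit MLX conversion (assumes mlx-lm's convert() API;
# argument names may differ across mlx-lm versions).
from mlx_lm import convert

convert(
    "unsloth/Hermes-4-405B",            # source Hugging Face repo
    mlx_path="Hermes-4-405B-mlx-3Bit",  # illustrative local output directory
    quantize=True,                      # enable weight quantization
    q_bits=3,                           # 3-bit weights
)
```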

## Toots' Note:

This model was converted and quantized using unsloth's version of Hermes-4-405B, which should include their chat template fixes.

Please follow and support unsloth's work if you like it!

🦛 If you want a free consulting session, fill out this form to get in touch! 🤗

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mrtoots/unsloth-Hermes-4-405B-mlx-3Bit")

prompt = "hello"

# Wrap the prompt with the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
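
For longer replies you may prefer streaming output. The sketch below assumes mlx-lm's `stream_generate` API, which in recent versions (including 0.26.x) yields incremental response chunks exposing a `.text` attribute; the prompt content and `max_tokens` value are illustrative.

```python
# Streaming sketch, assuming mlx_lm.stream_generate yields chunks with a
# `.text` attribute (mlx-lm 0.26.x behavior).
from mlx_lm import load, stream_generate

model, tokenizer = load("mrtoots/unsloth-Hermes-4-405B-mlx-3Bit")

messages = [{"role": "user", "content": "Explain BFS vs DFS briefly."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Print tokens as they are generated instead of waiting for the full response
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```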