---
language:
- en
license: apache-2.0
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- mlx
base_model: NousResearch/Meta-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro
  messages:
  - role: system
    content: You are a sentient, superintelligent artificial general intelligence,
      here to teach and assist me.
  - role: user
    content: Write a short story about Goku discovering kirby has teamed up with
      Majin Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-8B
  results: []
---

# batmac/Hermes-2-Pro-Llama-3-8B-mlx-4bit
This model was converted to MLX format from [`NousResearch/Hermes-2-Pro-Llama-3-8B`](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) using mlx-lm version **0.12.1**.

Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```
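To smoke-test the converted weights from the shell, mlx-lm also ships a small generation CLI; a minimal sketch (flag names as in recent mlx-lm releases):

```bash
python -m mlx_lm.generate --model batmac/Hermes-2-Pro-Llama-3-8B-mlx-4bit --prompt "hello"
```

The same call is available from Python: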
```python
from mlx_lm import load, generate

model, tokenizer = load("batmac/Hermes-2-Pro-Llama-3-8B-mlx-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
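Hermes 2 Pro is fine-tuned on ChatML conversations, so a bare string prompt like `"hello"` works but usually underperforms a properly formatted chat. Below is a minimal sketch of chat-formatted generation, assuming the bundled tokenizer exposes the standard Hugging Face `apply_chat_template` method (mlx-lm's tokenizer wrapper forwards it); the message contents are just placeholders:

```python
from mlx_lm import load, generate

model, tokenizer = load("batmac/Hermes-2-Pro-Llama-3-8B-mlx-4bit")

# Render the conversation into the ChatML prompt format the model was
# fine-tuned on, ending with an open assistant turn for the model to fill.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about quantization."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```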