---
base_model:
- mistralai/Mistral-Small-3.2-24B-Instruct-2506
library_name: transformers
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/g6hHxcdrD8r-HSUAz9b89.png
tags:
- axolotl
- unsloth
- roleplay
- conversational
- mlx
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- allenai/href
---

# MLX format for Angel 24b

Get 'em while they're hot.

This model was converted to MLX format from [`allura-org/MS3.2-24b-Angel`](https://huggingface.co/allura-org/MS3.2-24b-Angel) using mlx-vlm version **0.3.1**. This conversion is at Q4 quality. The vision stack appears to be horribly mangled, so treat this quant as text-only.
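
To actually run this quant, use mlx-vlm (the tool it was converted with). A minimal sketch, assuming a recent mlx-vlm CLI; flag names shift a little between versions, and the model placeholder below stands in for this repo's id or a local path:

```bash
pip install -U mlx-vlm

# Text-only generation; skip any --image input, since this quant's vision stack is mangled.
python -m mlx_vlm.generate \
  --model <path-or-repo-id-of-this-quant> \
  --prompt "Write the opening scene of a noir story." \
  --max-tokens 256
```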

# Angel 24b

![image/png](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/S9cHD_nZ75E9qLnQhjLcf.png)

***Better to reign in Hell than serve in Heaven.***

# Overview

MS3.2-24b-Angel is a model finetuned from Mistral Small 3.2 for roleplay, storywriting, and differently-flavored general instruct use cases.

Testing revealed strong prose and character portrayal for its class, rivalling the preferred 72B models of some testers.

# Quantizations

EXL3:
- [Official EXL3 quants](https://huggingface.co/allura-quants/allura-org_MS3.2-24b-Angel-EXL3) (thanks artus <3)

GGUF:
- [Official GGUF imatrix quants w/ mmproj](https://hf.co/allura-quants/allura-org_MS3.2-24b-Angel-GGUF) (thanks artus, again <3)

MLX:
- Right here, baby (this repo)
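
If you'd rather grab a GGUF, a quick sketch with the Hugging Face CLI; the `--include` pattern is a guess at a typical quant filename, so check the repo's file list:

```bash
# Pull a single imatrix quant from the official GGUF repo.
huggingface-cli download allura-quants/allura-org_MS3.2-24b-Angel-GGUF \
  --include "*Q4_K_M*" --local-dir ./angel-24b-gguf
```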

# Usage

- Use the Mistral v7 Tekken chat template.
- It is **highly recommended** (if your framework supports it) to use the official Mistral tokenization code instead of Hugging Face's. This is possible in vLLM with `--tokenizer-mode mistral`; see the sketch after this list.
- Recommended samplers (from CURSE and corroborated by me, Fizz) are temperature 1.2, min_p 0.1, and repetition penalty 1.05.
- We recommend using *a* system prompt, but its contents only faintly matter (I accidentally had an assistant system prompt the entire time I was testing).
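
Putting those recommendations together, here is a minimal sketch of serving the upstream (unquantized) weights with vLLM and querying them with the suggested samplers; the prompt contents are purely illustrative:

```bash
# Serve with Mistral's official tokenization code, as recommended above.
vllm serve allura-org/MS3.2-24b-Angel --tokenizer-mode mistral

# Query with the recommended samplers: temperature 1.2, min_p 0.1, repetition penalty 1.05.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "allura-org/MS3.2-24b-Angel",
    "messages": [
      {"role": "system", "content": "You are a roleplay partner."},
      {"role": "user", "content": "Set the scene: a rain-slicked street at midnight."}
    ],
    "temperature": 1.2,
    "min_p": 0.1,
    "repetition_penalty": 1.05
  }'
```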

# Training Process

1. [The original model had its vision adapter removed](https://huggingface.co/anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only) for better optimization and easier use in training frameworks.
2. The model was then put through an SFT process (using Axolotl) on various sources of general instruct, storytelling, and RP data, which resulted in [allura-forge/ms32-sft-merged](https://hf.co/allura-forge/ms32-sft-merged).
3. Afterwards, the model was put through a KTO process (using Unsloth) on more focused storywriting and anti-slop data, as well as general instruction-following and human-preference data, which resulted in the final checkpoints at [allura-forge/ms32-final-TEXTONLY](https://hf.co/allura-forge/ms32-final-TEXTONLY).
4. Finally, the vision tower was manually added back to the weights to continue to support multimodality.

# Credits

- Fizz - training and data wrangling
- Artus (by proxy) & Bot - help with funding
- CURSE - testing
- Mango - testing, data, help with KTO configs
- DoctorShotgun - making the original text-only model
- Axolotl & Unsloth - creating the training frameworks used for parts of this finetune
- Everyone in Allura - moral support, being cool
- Vivziepop and co - Angel Dust

<3 love you all