|
--- |
|
license: apache-2.0 |
|
base_model: |
|
- h34v7/DXP-Zero-V1.0-24b-Small-Instruct |
|
base_model_relation: quantized |
|
pipeline_tag: text-generation |
|
tags: |
|
- roleplay |
|
- storywriting |
|
- mistral |
|
- erp |
|
- gguf |
|
- imatrix |
|
- creative |
|
- creative writing |
|
- story |
|
- writing |
|
- roleplaying |
|
- role play |
|
- sillytavern |
|
- rp |
|
language: |
|
- en |
|
- ru |
|
--- |
|
|
|
|
|
# DXP-Zero-V1.0-24b-Small-Instruct GGUF (imatrix quants)
|
|
|
Imatrix GGUF Quants for: [DXP-Zero-V1.0-24b-Small-Instruct](https://huggingface.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct#dxp-zero-v10-24b-small-instruct). |
|
|
|
|
|
### Recommended Settings |
|
``` |
|
"temperature": 0.8, (Mistral Small 3.1 is sensitive to higher temperatures) |
|
"top_p": 0.95/1, |
|
"min_p": 0.025/0.03, |
|
"repeat_penalty": 1.05/1.1, |
|
``` |
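For llama.cpp users, the settings above map onto `llama-cli` sampling flags. A minimal sketch; the GGUF filename and context size are assumptions, so substitute the quant you actually downloaded:

```shell
# Hypothetical invocation -- adjust the model path to your downloaded quant.
llama-cli -m DXP-Zero-V1.0-24b-Small-Instruct-IQ4_XS.gguf \
  --temp 0.8 --top-p 0.95 --min-p 0.03 --repeat-penalty 1.05 \
  -c 16384 -cnv
```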
|
|
|
- **IQ2_M**: usable; good for 10-16 GB RAM/VRAM

- **IQ3_XXS**: very usable; good for 12-20 GB RAM/VRAM

- **IQ3_M**: solid; good for 14-18 GB RAM/VRAM

- **IQ4_XS**: all you need if you have 16+ GB RAM/VRAM
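As a rough sanity check on these RAM/VRAM ranges, a quant's weight file is approximately parameter count times bits per weight divided by 8. The bits-per-weight figures below are approximate llama.cpp values (an assumption), and real memory use adds KV cache and runtime overhead on top:

```python
# Rough weight-file size estimate: params * bits-per-weight / 8 bytes.
# Bits-per-weight values are approximate llama.cpp figures (assumption).
BPW = {"IQ2_M": 2.7, "IQ3_XXS": 3.06, "IQ3_M": 3.66, "IQ4_XS": 4.25}

def weight_gb(params: float, quant: str) -> float:
    """Approximate weight size in GB (1 GB = 1e9 bytes)."""
    return params * BPW[quant] / 8 / 1e9

for quant in BPW:
    print(f"{quant}: ~{weight_gb(24e9, quant):.2f} GB weights")
```

For the 24B model this puts IQ2_M weights around 8 GB and IQ4_XS around 13 GB, which is consistent with the hardware ranges listed above once context and overhead are included.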
|
|
|
|
|
The model may lack the necessary evil for twisty stories or dark adventures, but it makes up for it by producing coherent stories over long contexts.
|
|
|
Perfect for romance, adventure, sci-fi, and even general-purpose use.
|
|
|
I was browsing for a Mistral finetune and found this base model by ZeroAgency, and oh boy... it was perfect!
|
|
|
Here are a few notable improvements I observed. Pros:
|
|
|
- Increased output length for storytelling and roleplay.

- Dynamic output length: shorter prompts get shorter responses, and longer prompts get longer ones.

- Less repetitive (though this depends on your prompt and settings).

- Tested up to 49444/65536 tokens with no degradation; in fact, it learns the context better as the chat grows, and that strongly shapes the output. (The downside: it picks up patterns from previous turns too quickly and treats them as the new standard.)
|
|
|
This model was merged using the TIES merge method, with ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf as the base. Models merged:
|
|
|
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b

- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
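A TIES merge like the one described above is typically expressed as a mergekit config. This is only a sketch: the `weight` and `density` values are illustrative assumptions, not the settings the original author used:

```yaml
# Hypothetical mergekit config; weight/density values are assumptions.
models:
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      weight: 0.5
      density: 0.5
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
    parameters:
      weight: 0.5
      density: 0.5
merge_method: ties
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
dtype: bfloat16
```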