---
license: apache-2.0
base_model:
- h34v7/DXP-Zero-V1.0-24b-Small-Instruct
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- roleplay
- storywriting
- mistral
- erp
- gguf
- imatrix
- creative
- creative writing
- story
- writing
- roleplaying
- role play
- sillytavern
- rp
language:
- en
- ru
---
# DXP-Zero-V1.0-24b-Small-Instruct (Imatrix GGUF)
Imatrix GGUF Quants for: [DXP-Zero-V1.0-24b-Small-Instruct](https://huggingface.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct#dxp-zero-v10-24b-small-instruct).
### Recommended Settings
```
"temperature": 0.8, (Mistral Small 3.1 is sensitive to higher temperatures)
"top_p": 0.95/1,
"min_p": 0.025/0.03,
"repeat_penalty": 1.05/1.1,
```
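These settings map directly onto the sampler parameters of most GGUF loaders. A minimal sketch, assuming llama-cpp-python as the loader (the GGUF filename is a hypothetical example):
```
# Minimal sketch with llama-cpp-python; any GGUF-capable loader works similarly.
from llama_cpp import Llama

llm = Llama(
    model_path="DXP-Zero-V1.0-24b-Small-Instruct-IQ4_XS.gguf",  # hypothetical filename
    n_ctx=16384,  # raise toward 65536 if you have the memory for it
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Open a slow-burn sci-fi scene aboard a rainy space station."}],
    temperature=0.8,      # Mistral Small 3.1 is sensitive to higher temperatures
    top_p=0.95,
    min_p=0.025,
    repeat_penalty=1.05,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```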
- IQ2_M: Usable; good for 10-16 GB RAM/VRAM
- IQ3_XXS: Very usable; good for 12-20 GB RAM/VRAM
- IQ3_M: Solid; good for 14-18 GB RAM/VRAM
- IQ4_XS: All you need, if you have 16+ GB RAM/VRAM
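To grab a single quant without cloning the whole repo, huggingface_hub works. A sketch; the repo id and filename are placeholders to replace with this repo's actual values:
```
# Sketch: fetch one quant file via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="<this-repo-id>",         # placeholder: the id of this quant repo
    filename="<model>-IQ4_XS.gguf",   # placeholder: pick the quant that fits your RAM/VRAM
)
print(path)
```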
The model might lack the necessary evil for twisty stories or dark adventures, but it makes up for it by producing coherent stories over long contexts.
Perfect for romance, adventure, sci-fi, and even general-purpose use.
So I was browsing for a Mistral finetune and found this base model by ZeroAgency, and oh boy... it was perfect!
Here are a few notable improvements I observed. Pros:
- Increased output length for storytelling and roleplay.
- Dynamic output length: shorter prompts get shorter outputs, and longer prompts get longer outputs.
- Less repetitive (though this depends on your own prompt and settings).
I have tested it at 49,444 of 65,536 context tokens with no degradation, though I noticed it actually learns the context better over time, and that strongly shapes its output. (What I don't like: it picks up on previous turns too quickly and treats them as the new standard.)
The base model was merged using the TIES merge method with ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf as the base. Models merged:
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
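For reference, TIES merges like this are usually driven by a mergekit YAML config. A sketch of what that recipe could look like; the density and weight values here are assumptions, since the original recipe isn't given in this card:
```
# Hypothetical mergekit config; density/weight values are assumptions.
merge_method: ties
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
models:
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.5
      weight: 0.5
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
    parameters:
      density: 0.5
      weight: 0.5
dtype: bfloat16
```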