Wingless offender, birthed from sin and mischief,
She smells degeneracy—and gives it a sniff.
No flight, just crawling through the gloom,
Producing weird noises that are filling your room.
Fetid breath exhaling her design,
She is not winged anymore—
But it suits her just fine.
No feathers, no grace,
just raw power's malign
"I may have lost my soul—
but yours is now mine".
She sinned too much, even for her kind,
Her impish mind—
Is something that is quite hard to find.
No wings could contain—
Such unbridled raw spite,
Just pure, unfiltered—
Weaponized blight.
Wingless_Imp_8B is available in the following quantizations (a GGUF loading sketch follows the list):
- Original: FP16
- GGUF: Static Quants | iMatrix_GGUF
- EXL2: 3.5 bpw | 4.0 bpw | 5.0 bpw | 6.0 bpw | 7.0 bpw | 8.0 bpw
- Specialized: FP8
- Mobile (ARM): Q4_0
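As a rough illustration, here is a minimal sketch of loading one of the GGUF quants with llama-cpp-python; the file name below is hypothetical and depends on which static or iMatrix quant you download.

```python
# Minimal sketch: loading a GGUF quant with llama-cpp-python.
# The file name is hypothetical; use the quant file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Wingless_Imp_8B.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if available
)

out = llm("Hello!", max_tokens=64)
print(out["choices"][0]["text"])
```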
TL;DR
- High IFEval for an 8B model that is not too censored: 74.30.
- Strong roleplay; internet RP format lovers will appreciate it. Medium-sized paragraphs (as requested by some people).
- Very coherent in long context thanks to the Llama 3.1 models.
- Lots of knowledge from all the merged models.
- Very good writing thanks to lots of book data and creative writing in the late SFT stage.
- Feels smart: the combination of a high IFEval score and the knowledge from the merged models shows up.
- Unique feel due to the merged models; no SFT was done to alter it, because I liked it as it is.
Important: Make sure to use the correct settings!
Model Details
Intended use: Role-Play, Creative Writing, General Tasks.
Censorship level: Medium - Low
5.5 / 10 (10 = completely uncensored)
UGI score: Waiting for UGI results
This model was trained on lots of weird data in various stages, and then merged with my best models. Llama 3 and 3.1 architectures were merged together, and then trained on some more weird data.
The following models were used in various stages of the model creation process:
Recommended settings for assistant mode
Full generation settings: Debug Deterministic.
Full generation settings: min_p.
Recommended settings for Roleplay mode
Roleplay settings:
A good repetition_penalty range is 1.12 - 1.15; feel free to experiment. With these settings, each output message should be neatly displayed in 1 - 3 paragraphs, with 1 - 2 being the most common. A single paragraph will be output in response to a simple message ("What was your name again?").
min_p for RP works too, but it is more likely to put everything into one large paragraph instead of a neatly formatted short one. Feel free to switch between them. A code sketch applying the roleplay settings follows the parameter list below.
temperature: 0.8
top_p: 0.95
top_k: 25
typical_p: 1
min_p: 0
repetition_penalty: 1.12
repetition_penalty_range: 1024
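A minimal sketch of applying these roleplay settings with Hugging Face transformers follows; the repo id is an assumption, min_p requires a recent transformers release, and repetition_penalty_range is a text-generation-webui knob with no direct transformers equivalent, so it is omitted.

```python
# Minimal sketch: the recommended Roleplay sampler settings applied via
# transformers. The repo id is assumed; repetition_penalty_range is omitted
# because it is specific to text-generation-webui.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Wingless_Imp_8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a roleplay partner."},
    {"role": "user", "content": "*looks up* What was your name again?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=25,
    typical_p=1.0,
    min_p=0.0,
    repetition_penalty=1.12,
    max_new_tokens=512,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```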
Other recommended generation presets (a dict version of all three follows the last preset):
Midnight Enigma
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
Divine Intellect
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
simple-1
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
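For convenience, here is a minimal sketch collecting the three presets above as plain dictionaries, assuming the parameter names map one-to-one onto transformers' generate() keyword arguments.

```python
# Minimal sketch: the three presets above as dicts that can be unpacked
# into model.generate(). Naming assumes transformers' GenerationConfig keys.
PRESETS = {
    "Midnight Enigma": dict(max_new_tokens=512, temperature=0.98, top_p=0.37,
                            top_k=100, typical_p=1.0, min_p=0.0,
                            repetition_penalty=1.18, do_sample=True),
    "Divine Intellect": dict(max_new_tokens=512, temperature=1.31, top_p=0.14,
                             top_k=49, typical_p=1.0, min_p=0.0,
                             repetition_penalty=1.17, do_sample=True),
    "simple-1": dict(max_new_tokens=512, temperature=0.7, top_p=0.9,
                     top_k=20, typical_p=1.0, min_p=0.0,
                     repetition_penalty=1.15, do_sample=True),
}

# Usage, with the model and input_ids from the earlier sketch:
# output_ids = model.generate(input_ids, **PRESETS["simple-1"])
```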
Roleplay format: Classic Internet RP
*action* speech *narration*
Model instruction template: Llama-3-Instruct
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
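Below is a minimal sketch of filling this template by hand for a single turn; the double newline after <|end_header_id|> follows the standard Llama-3 chat format (an assumption here), and tokenizer.apply_chat_template produces the same string more reliably.

```python
# Minimal sketch: filling the Llama-3-Instruct template by hand. The double
# newline after <|end_header_id|> follows the standard Llama-3 format
# (an assumption); prefer tokenizer.apply_chat_template in practice.
LLAMA3_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

prompt = LLAMA3_TEMPLATE.format(
    system_prompt="You are a roleplay partner.",
    input="What was your name again?",
)
# The model generates the {output} slot and terminates it with <|eot_id|>.
```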
Your support = more models
My Ko-fi page (Click here)
Benchmarks
| Metric | Value |
|---|---|
| Avg. | 26.94 |
| IFEval (0-Shot) | 74.30 |
| BBH (3-Shot) | 30.59 |
| MATH Lvl 5 (4-Shot) | 12.16 |
| GPQA (0-shot) | 4.36 |
| MuSR (0-shot) | 10.89 |
| MMLU-PRO (5-shot) | 29.32 |
Other stuff
- SLOP_Detector: Nuke GPTisms with the SLOP detector.
- LLAMA-3_8B_Unaligned: The grand project that started it all.
- Blog and updates (Archived): Some updates, some rambles; a mix between a diary and a blog.