<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Misc/ugi.png" alt="UGI Score" style="width: 70%; min-width: 500px; display: block; margin: auto;">

---
This model is the result of training on a fraction (16M tokens) of the testing data intended for [LLAMA-3_8B_Unaligned's](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) upcoming beta.

The base model is a merge of merges made by [Invisietch](https://huggingface.co/invisietch), named [EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B). This model's name reflects the base used for the finetune while hinting at the darker, more uncensored aspects associated with the nature of the [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) project.

As a result of the unique data added, this model shows exceptional adherence to instructions about paragraph length and to the story-writing prompt. I would like to emphasize that **no ChatGPT / Claude** output was used for any of the additional data in this finetune. The goal is to eventually have a model with a **minimal amount of slop**; this cannot be reliably achieved by relying on API models, which pollute datasets with their bias and repetitive words.

---

## Available quantizations: