Text Generation
GGUF
English
mixture of experts
Mixture of Experts
8x3B
Llama 3.2 MOE
128k context
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
swearing
rp
horror
mergekit
conversational
Update README.md
README.md CHANGED

@@ -49,7 +49,11 @@ And it is fast: 50+ t/s (2 experts) on a low end 16GB card, IQ4XS.
 
 Double this speed for standard/mid-range video cards.
 
-
+<B>NEW: Version 2, with Brainstorm 5x infused on all 8 models (creating an 8X4B MOE), is located here:</B>
+
+[ https://huggingface.co/DavidAU/Llama-3.2-8X4B-MOE-V2-Dark-Champion-Instruct-uncensored-abliterated-21B-GGUF ]
+
+This model (as well as version 2) can also be used for all genres (examples below showing this).
 
 It is for any writing, fiction or roleplay activity.
 