Tags: Text Generation, GGUF, English, NEO Imatrix, MAX Quants, uncensored, reasoning, thinking, r1, cot, reka-flash, deepseek, Qwen2.5, Hermes, DeepHermes, DeepSeek, DeepSeek-R1-Distill, 128k context, instruct, all use cases, maxed quants, Neo Imatrix, finetune, chatml, gpt4, synthetic data, distillation, function calling, roleplaying, chat, Uncensored, creative, general usage, problem solving, brainstorming, solve riddles, fiction writing, plot generation, sub-plot generation, story generation, scene continue, storytelling, fiction story, story, writing, fiction, swearing, horror, imatrix, conversational
Update README.md
README.md CHANGED
@@ -103,12 +103,12 @@ Please see the original model's repo for more details, benchmarks and methods of
 
 ---
 
-"MAXED"
+<b>"MAXED"</B>
 
 This means output tensor is set at "BF16" (full precision) for all quants.
 This enhances quality, depth and general performance at the cost of a slightly larger quant.
 
-"NEO IMATRIX"
+<b>"NEO IMATRIX"</B>
 
 A strong, in house built, imatrix dataset built by David_AU which results in better overall function,
 instruction following, output quality and stronger connections to ideas, concepts and the world in general.
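
For context on what the two bolded terms describe, below is a minimal sketch of how a quant matching them could be produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. This is an assumption about tooling for illustration only, not the repo's documented build pipeline; the model, calibration-text, and output file names are placeholders.

```python
# Illustrative only: assumes llama.cpp's `llama-imatrix` and `llama-quantize`
# binaries are on PATH; every file name below is a placeholder.
import subprocess

# 1) Build an importance matrix from a calibration corpus
#    (standing in for the "NEO IMATRIX" dataset, which is not included here).
subprocess.run(
    ["llama-imatrix",
     "-m", "model-f16.gguf",        # full-precision source GGUF
     "-f", "calibration-text.txt",  # calibration corpus
     "-o", "imatrix.dat"],
    check=True,
)

# 2) Quantize using that imatrix, forcing the output tensor to BF16
#    (the "MAXED" setting described above).
subprocess.run(
    ["llama-quantize",
     "--imatrix", "imatrix.dat",
     "--output-tensor-type", "bf16",
     "model-f16.gguf",
     "model-Q4_K_M-MAX-NEO.gguf",   # placeholder output name
     "Q4_K_M"],
    check=True,
)
```

The same two-step recipe applies to any quant level; only the final type argument (Q4_K_M here) changes.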