DavidAU committed on
Commit e98465e · verified · 1 Parent(s): 10b50e2

Update README.md

Files changed (1)
  1. README.md +62 -20
README.md CHANGED
@@ -54,22 +54,64 @@ You can download/access V1 here:
54
 
55
  [ https://huggingface.co/DavidAU/DeepSeek-Grand-Horror-SMB-R1-Distill-Llama-3.1-16B-GGUF ]
56
57
  <B>Special Operation Instructions:</B>
58
 
59
- 1. Set Temp between 0 and .8, higher than this "think" functions will not activate. The most "stable" temp seems to be .6, with a variance of +-0.05. Lower for more "logic" reasoning, raise it for more "creative" reasoning (max .8 or so). Also set context to at least 4096, to account for "thoughts" generation.
60
- 2. Set "repeat penalty" to 1.09 to 1.12 (recommended) and "repeat penalty range" to 64-128. (because this model is just as "unhinged" as the org version)
61
- 3. This model requires a Llama 3 Instruct and/or Command-R chat template. (see notes on "System Prompt" / "Role" below)
62
- 4. It may take one or more regens for "thinking" to "activate."
63
- 5. This is 3 model merge (original Grand Horror models) that has been fused with Deepseek "Thinking" / "Reasoning" tech only - the rest was removed.
64
- 6. If you enter a prompt without implied "step by step" requirements, "thinking" (one or more) will activate AFTER first generation. You will also get a lot of variations - some will continue the generation, others will talk about how to improve it, and some (ie generation of a scene) will cause the characters to "reason" about this situation. In some cases, the model will ask you to continue generation / thoughts too. In some cases the model's "thoughts" may appear in the generation itself.
65
- 7. State the word size length max IN THE PROMPT for best results, especially for activation of "thinking." (see examples below)
66
- 8. If you enter a prompt where "thinking" is stated or implied, "thoughts" process(es) in Deepseek will activate almost immediately. Sometimes you need to regen it to activate.
67
- 9. Sometimes the "censorship" (from Deepseek) will activate, regen it to clear it.
68
- 10. I have found opening a "new chat" per prompt works best with "thinking/reasoning activation", with temp .6, rep pen 1.12 ... THEN "regen" as required.
69
- 11. Sometimes the model will really really get completely unhinged and you need to manually stop it. (I have a solution for this... see below)
70
- 12. Depending on your AI app, "thoughts" may appear with "<THINK>" and "</THINK>" tags AND/OR the AI will generate "thoughts" directly in the main output or later output(s).
71
- 13. Although quant IQ4XS was used for testing/examples, higher quants will provide better generation / more sound "reasoning/thinking".
72
- 14. To repeat: If you exceed temp of .8 or so, "thinking" processes may stop or change form or you will get "normal" model generation.
73
 
74
  The Beta version of this solution (to make this model behave) is here:
75
 
@@ -77,19 +119,19 @@ https://huggingface.co/DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-L
77
 
78
  For additional generational support, general questions, and detailed parameter info and a lot more see also:
79
 
80
- https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
81
 
82
- NOTE: This is a CLASS 4 model.
83
 
84
- This model will output HORROR LEVEL content at R18, but can be used to generate content for any genre.
85
 
86
- The content of your prompt determines if the model "goes dark/horror" or not.
87
 
88
- See the "tame" example and "horror" example below.
89
 
90
  ---
91
 
92
- <B>USAGE Instructions - General:</B>
93
 
94
  For full information / examples / settings / usage of Grand Horror; please see the original model card here:
95
 
 
54
 
55
  [ https://huggingface.co/DavidAU/DeepSeek-Grand-Horror-SMB-R1-Distill-Llama-3.1-16B-GGUF ]
56
 
57
+ <B>USE CASES:</B>
58
+
59
+ This model is suitable for all use cases. It has a slightly more creative slant than a standard model, but it carries a strong HORROR BIAS.
60
+
61
+ This model can also be used for solving logic puzzles, riddles, and other problems with DeepSeek's enhanced "thinking" systems.
62
+
63
+ This model can also solve problems, riddles, and puzzles normally beyond the abilities of a Llama 3.1 model, thanks to the DeepSeek systems.
64
+
65
+ This model WILL produce NSFW / uncensored / HORROR content.
66
+
67
+ This model will output HORROR LEVEL content at R18, but can be used to generate content for any genre.
68
+
69
+ The content of your prompt determines if the model "goes dark/horror" or not.
70
+
71
+ See the "tame" example and "horror" examples below.
72
+
73
  <B>Special Operation Instructions:</B>
74
 
75
+ TEMP/SETTINGS:
76
+
77
+ 1. Set Temp between 0 and .8; higher than this, the "think" functions will activate differently. The most "stable" temp seems to be .6, with a variance of +/- 0.05. Lower it for more "logic" reasoning; raise it for more "creative" reasoning (max .8 or so). Also set context to at least 4096 to account for "thoughts" generation.
78
+ 2. At temps of 1+, 2+, etc., thoughts will expand and become deeper and richer.
79
+ 3. Set "repeat penalty" to 1.09 to 1.12 (recommended) and "repeat penalty range" to 64-128, because this model is just as "unhinged" as the original version.
80
+ 4. This model requires a Llama 3 Instruct and/or Command-R chat template (see notes on "System Prompt" / "Role" below, and the settings sketch after this list).
81
+
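As a reference point only, here is a minimal sketch of these settings applied through llama-cpp-python, assuming a recent build that ships the built-in "llama-3" chat format; the GGUF filename is a placeholder and any quant of this repo can be substituted.

```python
# Sketch only: the temp / context / repeat-penalty settings above,
# applied via llama-cpp-python. The model filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-Grand-Horror-SMB-R1-Distill-Llama-3.1-16B-IQ4_XS.gguf",  # placeholder name
    n_ctx=4096,               # at least 4096, to leave room for "thoughts"
    chat_format="llama-3",    # Llama 3 Instruct template (a Command-R template also works per note 4)
    last_n_tokens_size=64,    # "repeat penalty range" of 64-128
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Solve this riddle step by step: what has keys but cannot open locks?"},
    ],
    temperature=0.6,          # "stable" temp; stay at or below ~0.8 for normal think activation
    repeat_penalty=1.1,       # recommended 1.09 to 1.12
    max_tokens=2048,
)
print(out["choices"][0]["message"]["content"])
```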
82
+ PROMPTS:
83
+
84
+ 1. If you enter a prompt without implied "step by step" requirements (i.e. generate a scene, write a story, give me 6 plots for xyz), "thinking" (one or more passes) MAY activate AFTER the first generation. (i.e. "Generate a scene" -> the scene will generate, followed by suggestions for improvement in "thoughts".)
85
+ 2. If you enter a prompt where "thinking" is stated or implied (i.e. a puzzle, riddle, "solve this", "brainstorm this idea", etc.), the DeepSeek "thoughts" process(es) will activate almost immediately. Sometimes you need to regenerate to activate them.
86
+ 3. You will also get a lot of variation - some generations will continue the story, others will talk about how to improve it, and some (i.e. generation of a scene) will cause the characters to "reason" about the situation. In some cases, the model will ask you to continue the generation / thoughts too.
87
+ 4. In some cases the model's "thoughts" may appear in the generation itself.
88
+ 5. State the maximum word count IN THE PROMPT for best results, especially for activation of "thinking". (see examples below)
89
+ 6. Sometimes the "censorship" (from DeepSeek) will activate; regenerate the prompt to clear it.
90
+ 7. You may want to try your prompt once at "default" or "safe" temp settings, again at temp 1.2, and a third time at 2.5, for example. This will give you a broad range of reasoning / thoughts / problem solving (see the sketch after this list).
91
+
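As a rough illustration of points 5 and 7, the sketch below re-runs one prompt - with an explicit word limit stated in the prompt itself - at a "safe" temp, at 1.2, and at 2.5. It assumes the `llm` object from the settings sketch above.

```python
# Sketch only: same prompt, explicit word limit stated IN the prompt,
# swept across three temps. Assumes `llm` from the settings sketch above.
prompt = (
    "Brainstorm 6 uncommon plot ideas for a haunted lighthouse story. "
    "Think outside the box. Maximum 800 words."
)

for temp in (0.6, 1.2, 2.5):
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        temperature=temp,
        repeat_penalty=1.1,
        max_tokens=2048,
    )
    print(f"--- temp {temp} ---")
    print(out["choices"][0]["message"]["content"])
```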
92
+ GENERATION - THOUGHTS/REASONING:
93
+
94
+ 1. It may take one or more regenerations for "thinking" to "activate" (depending on the prompt).
95
+ 2. The model can generate a LOT of "thoughts". Sometimes the most interesting ones are 3, 4, 5 or more levels deep.
96
+ 3. Many times the "thoughts" are unique and very different from one another.
97
+ 4. Temp/rep pen settings can affect reasoning/thoughts too.
98
+ 5. Change or add directives/instructions, or increase the level of detail in your prompt, to improve reasoning/thinking.
99
+ 6. Adding phrases such as "think outside the box", "brainstorm X number of ideas", or "focus on the most uncommon approaches" to your prompt can drastically improve your results.
100
+
101
+ GENERAL SUGGESTIONS:
102
+
103
+ 1. I have found that opening a "new chat" per prompt works best for "thinking/reasoning" activation, with temp .6 and rep pen 1.05 ... THEN "regen" as required.
104
+ 2. Sometimes the model will get completely unhinged and you will need to stop it manually.
105
+ 3. Depending on your AI app, "thoughts" may appear within "< THINK >" and "</ THINK >" tags AND/OR the AI will generate "thoughts" directly in the main output or later output(s). (A parsing sketch follows this list.)
106
+ 4. Although quant IQ4XS was used for testing/examples, higher quants will provide better generation / more sound "reasoning/thinking".
107
+
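If your AI app does not separate the tags for you, a small post-processing step can split "thoughts" from the visible answer. The sketch below is a generic regex approach (not tied to any particular app) that tolerates the spacing and casing differences mentioned in point 3.

```python
# Sketch only: generic separation of <think>...</think> blocks from the rest of
# the output. Tag spelling/spacing varies by app, so the regex is deliberately lenient.
import re

THINK_RE = re.compile(r"<\s*think\s*>(.*?)<\s*/\s*think\s*>", re.IGNORECASE | re.DOTALL)

def split_thoughts(text: str):
    """Return (list of thought blocks, text with the thought blocks removed)."""
    thoughts = THINK_RE.findall(text)
    visible = THINK_RE.sub("", text).strip()
    return thoughts, visible

thoughts, answer = split_thoughts(
    "<THINK>The cellar door should creak before the reveal.</THINK>The cellar door creaked open..."
)
print(len(thoughts), "thought block(s)")
print(answer)
```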
108
+ ADDITIONAL SUPPORT:
109
+
110
+ For additional generational support, general questions, and detailed parameter info and a lot more see also:
111
+
112
+ NOTE: This is a CLASS 3/4 model.
113
+
114
+ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
115
 
116
  The Beta version of this solution (to make this model behave) is here:
117
 
 
119
 
120
  For additional generational support, general questions, and detailed parameter info and a lot more see also:
121
 
122
+ ---
123
 
124
+ <B>Recommended Settings (all) - For usage with "Think" / "Reasoning":</B>
125
 
126
+ temp: .6, rep pen: 1.07 (range: 1.02 to 1.12), rep pen range: 64, top_k: 40, top_p: .95, min_p: .05
127
 
128
+ Temps of 1+, 2+, 3+ will result in much deeper, richer and "more interesting" thoughts and reasoning.
129
 
130
+ Model behaviour may change when other parameter(s) and/or sampler(s) are activated - especially the "thinking/reasoning" process.
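For reference, here is a sketch of the same values as llama-cpp-python keyword arguments (assuming a build recent enough to expose min_p, and reusing the `llm` object from the settings sketch above); other front ends use slightly different names, e.g. "rep pen range" is usually the last-N-token window.

```python
# Sketch only: the recommended settings above as llama-cpp-python kwargs.
# "rep pen range: 64" corresponds to last_n_tokens_size=64 on the Llama() constructor.
RECOMMENDED_SAMPLERS = dict(
    temperature=0.6,
    repeat_penalty=1.07,   # usable range: 1.02 to 1.12
    top_k=40,
    top_p=0.95,
    min_p=0.05,            # needs a llama-cpp-python version with min_p support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me 3 uncommon horror premises. Maximum 300 words."}],
    max_tokens=1024,
    **RECOMMENDED_SAMPLERS,
)
print(out["choices"][0]["message"]["content"])
```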
131
 
132
  ---
133
 
134
+ <B>USAGE Instructions - General Information:</B>
135
 
136
  For full information / examples / settings / usage of Grand Horror; please see the original model card here:
137