DavidAU committed
Commit 83c6b96 · verified · 1 Parent(s): dd4c367

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -78,7 +78,7 @@ TEMP/SETTINGS:
 1. Set Temp between 0 and .8, higher than this "think" functions will activate differently. The most "stable" temp seems to be .6, with a variance of +-0.05. Lower for more "logic" reasoning, raise it for more "creative" reasoning (max .8 or so). Also set context to at least 4096, to account for "thoughts" generation.
 2. For temps 1+,2+ etc etc, thought(s) will expand, and become deeper and richer.
 3. Set "repeat penalty" to 1.09 to 1.12 (recommended) and "repeat penalty range" to 64-128. (because this model is just as "unhinged" as the org version)
-4. This model requires a Llama 3 Instruct and/or Command-R chat template. (see notes on "System Prompt" / "Role" below)
+4. This model requires a Llama 3 Instruct and/or Command-R chat template. (see notes on "System Prompt" / "Role" below) OR standard "Jinja Autoloaded Template" (this is contained in the quant and will autoload)
 
 PROMPTS:
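The recommended settings in the hunk above (temp, repeat penalty, repeat penalty range, minimum context) can be collected into a small sketch. This is purely illustrative: the dictionary keys echo llama.cpp-style sampler names, but the `RECOMMENDED` dict and the `in_recommended_band` helper are hypothetical, not part of any real API.

```python
# Illustrative collection of the README's recommended sampler settings.
# Key names mirror llama.cpp-style parameters; the structure is hypothetical.
RECOMMENDED = {
    "temperature": 0.6,     # "stable" temp; README suggests 0 to .8, max ~.8
    "repeat_penalty": 1.1,  # README recommends 1.09 to 1.12
    "repeat_last_n": 128,   # "repeat penalty range" of 64-128
    "n_ctx": 4096,          # at least 4096 to leave room for "thoughts"
}

def in_recommended_band(temperature: float) -> bool:
    """True if temperature sits in the README's 'stable' band of .6 +- 0.05."""
    return abs(temperature - 0.6) <= 0.05
```

Lowering the temperature within the band biases toward "logic" reasoning; raising it (up to roughly .8) biases toward "creative" reasoning, per the notes above.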