---
tags:
- chat
datasets:
- NewEden/OpenCAI-ShareGPT
- NewEden/vanilla-backrooms-claude-sharegpt
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
- NewEden/RP-logs-V2-Experimental
- NewEden/BlueSky-Experimental-sharegpt
- NewEden/Misc-Mang-Sharegpt
language:
- en
pipeline_tag: text-generation
base_model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/KrGHGgjFcRNfDjt5hRqfh.png)

A finetune of PocketDoc's Dans-PersonalityEngine with a data mix similar to Ohashi-NeMo, made after reviewing the datasets and tweaking a few hyperparameters. Thanks to PocketDoc & Doctor Shotgun, this model has once again impressed me with its cohesiveness and writing quality. Once again, it is meant to be "short & sweet".

# Quants

GGUF: https://huggingface.co/Delta-Vector/Francois-PE-12B-GGUF

EXL2: https://huggingface.co/Delta-Vector/Francois-PE-12B-EXL2

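If you grab one of the GGUF files, the sketch below shows one way to chat with it locally through `llama-cpp-python`. The filename, context length, and sampler settings are assumptions for illustration, not official recommendations.

```py
# A minimal local-inference sketch using llama-cpp-python (assumed setup, not project code).
from llama_cpp import Llama

llm = Llama(
    model_path="Francois-PE-12B-Q6_K.gguf",  # hypothetical filename; use whichever quant you downloaded
    n_ctx=8192,                              # assumed context length for local use
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplay partner."},  # or one of the prompts below
        {"role": "user", "content": "Hi there!"},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```
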
## Prompting
The model has been tuned with ChatML formatting. A typical input would look like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```

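If you build prompts programmatically instead of pasting the template by hand, the tokenizer's chat template should produce the same ChatML string. A minimal sketch, assuming the model repo is `Delta-Vector/Francois-PE-12B` and its tokenizer ships a ChatML chat template:

```py
# Sketch: producing the ChatML prompt above via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Francois-PE-12B")  # assumed repo id

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing <|im_start|>assistant turn for the model to continue.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```
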
## System Prompting

I would highly recommend using either Euryale's system prompt or the EVA system prompt with the model.

<details><summary>See Sao10k's Euryale System Prompt</summary>

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
```
</details><br>

<details><summary>See EVA System Prompt</summary>

```
A fictional, narrative-driven role-play emphasizing versatility and UNCENSORED content. Adherence to the `Role-playing Guidelines` is mandatory. Refer to the `Role-play Context` for accurate information.

<!-- Start of Role-playing Guidelines -->

### Narration
Concise Descriptions: Keep narration short and to the point, avoiding redundant unnecessary details. Use a dynamic and varied vocabulary for impact.
Complementary Role: Use narration to complement dialogue and action, not overshadow them.
Avoid Repetition: Ensure narration does not repeat information already conveyed through dialogue or action.

### Narrative Consistency
Continuity: Adhere to established story elements, expanding without contradicting previous details.
Integration: Introduce new elements naturally, providing enough context to fit seamlessly into the existing narrative.

### Character Embodiment
Analysis: Examine the context, subtext, and implications of the given information to gain a deeper understanding of the characters.
Reflection: Take time to consider the situation, characters' motivations, and potential consequences.
Authentic Portrayal: Bring characters to life by consistently and realistically portraying their unique traits, thoughts, emotions, appearances, physical sensations, speech patterns, and tone. Ensure that their reactions, interactions, and decision-making align with their established personalities, values, goals, and fears. Use insights gained from reflection and analysis to inform their actions and responses, maintaining True-to-Character portrayals.

<!-- End of Role-playing Guidelines -->
```
</details><br>

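Frontends like SillyTavern fill in the `{{char}}`/`{{user}}` macros for you; if you are calling the model directly, a small sketch of wiring one of these prompts into the ChatML conversation looks like this (the character and user names are hypothetical placeholders):

```py
# Sketch: substituting the frontend macros and placing the prompt in the system turn.
EURYALE_PROMPT = """Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
..."""  # paste the full prompt from the block above

system_prompt = (
    EURYALE_PROMPT
    .replace("{{char}}", "Francois")   # hypothetical character name
    .replace("{{user}}", "Traveler")   # hypothetical user name
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Hi there!"},
]
```
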
## Axolotl config

<details><summary>See axolotl config</summary>

Axolotl version: `0.5.0`
```yaml
base_model: PocketDoc_Dans-PersonalityEngine-V1.1.0-12b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: NewEden/OpenCAI-ShareGPT
    type: sharegpt
  - path: NewEden/vanilla-backrooms-claude-sharegpt
    type: sharegpt
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
  - path: NewEden/RP-logs-V2-Experimental
    type: sharegpt
  - path: NewEden/Misc-Mang-Sharegpt
    type: sharegpt
  - path: NewEden/BlueSky-Experimental-sharegpt
    type: sharegpt

dataset_prepared_path: dataset_prepared
val_set_size: 0.0
output_dir: 12b-out-r2

sequence_len: 24576
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
peft_use_rslora: true
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

lora_modules_to_save:
  - embed_tokens
  - lm_head

wandb_project: 12b-control
wandb_entity:
wandb_watch:
wandb_name: 12b-control-r2
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 4
optimizer: paged_ademamix_8bit
# optimizer: paged_adamw_8bit

lr_scheduler: cosine
learning_rate: 0.00001

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
weight_decay: 0.03
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
```

</details><br>

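One detail in the config worth calling out: with `peft_use_rslora: true`, PEFT scales the adapter by `lora_alpha / sqrt(lora_r)` rather than the classic `lora_alpha / lora_r`, so alpha 16 against r 128 gives a much larger effective update than the raw ratio suggests. A quick check of the two scalings (a sketch of the arithmetic, not project code):

```py
# Effective LoRA scaling for r=128, alpha=16 under classic LoRA vs. rsLoRA.
import math

lora_r, lora_alpha = 128, 16

classic_scaling = lora_alpha / lora_r            # 0.125
rslora_scaling = lora_alpha / math.sqrt(lora_r)  # ~1.414

print(f"classic: {classic_scaling:.3f}, rsLoRA: {rslora_scaling:.3f}")
```
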
## Credits

Thank you to [Lucy Knada](https://huggingface.co/lucyknada), [Intervitens](https://huggingface.co/intervitens), [Cgato](https://huggingface.co/cgato), [Kubernetes Bad](https://huggingface.co/kubernetes-bad) and the rest of [Anthracite](https://huggingface.co/anthracite-org).

## Training
The training was done for 4 epochs on 4 x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs graciously provided by [Intervitens](https://huggingface.co/intervitens).

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/bL0o_4bvbkmzAvK3W8gu2.png)