---
license: gemma
base_model: IntervitensInc/gemma-2-9b-chatml
model-index:
- name: magnum-v3-9b-chatml
  results: []
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/magnum-v3-9b-chatml-GGUF
This is a quantized version of [anthracite-org/magnum-v3-9b-chatml](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml) created using llama.cpp.
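
To fetch one of the quantized files locally, `huggingface_hub` can download it directly. A minimal sketch; the quant filename below is a hypothetical example, so check the repository's file list for the variants actually published:

```py
# Minimal sketch: download one GGUF quant from this repo.
# NOTE: the filename is an assumption; check the repo's file list
# for the quant variants that were actually uploaded.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="QuantFactory/magnum-v3-9b-chatml-GGUF",
    filename="magnum-v3-9b-chatml.Q4_K_M.gguf",  # hypothetical quant name
)
print(model_path)
```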

# Original Model Card

![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9ZBUlmzDCnNmQEdUUbyEL.png)

This is the 11th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

This model is fine-tuned on top of [IntervitensInc/gemma-2-9b-chatml](https://huggingface.co/IntervitensInc/gemma-2-9b-chatml) (a ChatML-ified gemma-2-9b).

## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input would look like this:

```py
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
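
For local inference on one of the GGUF quants, the llama-cpp-python bindings can apply this template automatically via their built-in `chatml` chat format. A minimal sketch, assuming llama-cpp-python is installed; the model path is a placeholder:

```py
# Minimal sketch, assuming the llama-cpp-python bindings; the built-in
# "chatml" chat format applies the <|im_start|>/<|im_end|> template above.
from llama_cpp import Llama

llm = Llama(
    model_path="./magnum-v3-9b-chatml.Q4_K_M.gguf",  # placeholder path
    chat_format="chatml",
    n_ctx=8192,  # matches the sequence_len used in training
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "system prompt"},
        {"role": "user", "content": "Hi there!"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```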

## SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

<details><summary>context template</summary>

```json
{
    "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "allow_jailbreak": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Magnum ChatML"
}
```

</details><br>
<details><summary>instruct template</summary>

```json
{
    "system_prompt": "You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.",
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "last_output_sequence": "",
    "system_sequence": "<|im_start|>system\n",
    "stop_sequence": "<|im_end|>",
    "wrap": false,
    "macro": true,
    "names": true,
    "names_force_groups": true,
    "activation_regex": "",
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "skip_examples": false,
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_suffix": "<|im_end|>\n",
    "user_alignment_message": "",
    "system_same_as_user": false,
    "last_system_sequence": "",
    "name": "Magnum ChatML"
}
```

</details><br>

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
base_model: IntervitensInc/gemma-2-9b-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

#trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anthracite-org/stheno-filtered-v1.1
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml

shuffle_merged_datasets: true
default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: magnum-v3-9b-data-chatml
val_set_size: 0.0
output_dir: ./magnum-v3-9b-chatml

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len:

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: magnum-9b
wandb_entity:
wandb_watch:
wandb_name: attempt-04-chatml
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000006

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
eager_attention: true

warmup_steps: 50
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
```

</details><br>
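
Every dataset above is declared with `type: sharegpt` and `conversation: chatml`, meaning Axolotl expects ShareGPT-style conversation records and renders them into the ChatML template during preprocessing. A minimal sketch of the record shape that mapping assumes; the message contents are invented for illustration:

```py
# Shape of a ShareGPT record as consumed by the `type: sharegpt`
# dataset entries above; the message strings are invented examples.
example_record = {
    "conversations": [
        {"from": "system", "value": "You are an assistant that responds to the user."},
        {"from": "human", "value": "Can I ask a question?"},
        {"from": "gpt", "value": "Of course, go ahead!"},
    ]
}
# With conversation: chatml, this renders to:
# <|im_start|>system\n...<|im_end|>\n<|im_start|>user\n...<|im_end|>\n<|im_start|>assistant\n...<|im_end|>
```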

## Credits
We'd like to thank Recursal / Featherless for sponsoring the training compute required for this model. Featherless has been hosting Magnum since the original 72b and has given thousands of people access to our releases.

We would also like to thank all members of Anthracite who made this finetune possible.

- [anthracite-org/stheno-filtered-v1.1](https://huggingface.co/datasets/anthracite-org/stheno-filtered-v1.1)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)

## Training
The training was done for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
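
For reference, the effective global batch size follows directly from the per-device settings in the config above, assuming one data-parallel rank per H100:

```py
# Effective global batch size implied by the Axolotl config above.
# Assumes one data-parallel rank per GPU (DeepSpeed ZeRO-3, 8x H100).
micro_batch_size = 1   # micro_batch_size
grad_accum_steps = 8   # gradient_accumulation_steps
world_size = 8         # number of GPUs
effective_batch = micro_batch_size * grad_accum_steps * world_size
print(effective_batch)  # 64 packed sequences of up to 8192 tokens per optimizer step
```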

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety
...