lbourdois committed on
Commit 3b7c9a9 · verified · 1 Parent(s): 91f1e1e

Improve language tag


Hi! As the model is multilingual, this is a PR to add languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
  1. README.md +101 -87
README.md CHANGED
@@ -1,88 +1,102 @@
---
base_model:
- Qwen/Qwen2.5-32B
- Qwen/QwQ-32B
- trashpanda-org/QwQ-32B-Snowdrop-v0
library_name: transformers
tags:
- mergekit
- merge
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
---
<h1 align="center">
<span style="color: #ADD8E6; font-weight: bold;">SnowDr</span><span style="color: #00FF00; font-weight: bold; font-style: italic;">ogito</span><span style="color: #FFFFFF; font-weight: bold;">-</span><span style="color: #FF9999; font-weight: bold;">RpR</span>-32B
</h1>

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/633e3b4136e87ddc64ad584d/XriPrqbrwSAju1XrNoxLK.png" alt="SnowDrogito-RpR-32B Banner" width="600"/>
</p>

<p align="center">
<a href="https://huggingface.co/skatardude10/SnowDrogito-RpR-32B_IQ4-XS" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">Download IQ4_XS IMATRIX GGUF</a>
</p>

## <span style="color: #CCFFCC;">Overview</span>
SnowDrogito-RpR-32B is a QwQ RP reasoning merge that adds smarts to the popular <span style="color: #ADD8E6;">Snowdrop</span> roleplay model by blending in a little <span style="color: #FF9999;">ArliAI RpR</span> and <span style="color: #00FF00;">Deepcogito</span>. Built with the TIES merge method, it attempts to combine the strengths of multiple fine-tuned QwQ-32B models. Uploading because perplexity came out lower and responses have been more varied, longer, and more creative, though it may lack some of Snowdrop's contextual awareness.

## <span style="color: #CCFFCC;">Setup for Reasoning and ChatML</span>
- **ChatML Formatting**: Use ChatML with `<|im_start|>role\ncontent<|im_end|>\n` (e.g., `<|im_start|>user\nHello!<|im_end|>\n`).
- **Reasoning Settings**: Set "include names" to "never." Start the reply with `<think>\n` to enable reasoning.
- **Sampler Settings**: From Snowdrop: try temperature 0.9, min_p 0.05, top_a 0.3, TFS 0.75, repetition_penalty 1.03, and DRY if available.
- **My Settings**:
  - Response (tokens): 2048
  - Context (tokens): 40960
  - Temperature: 3.25
  - Top P: 0.98
  - Min P: 0.04
  - Top N-Sigma: 2.5
  - Repetition Penalty: 1.03
  - XTC Threshold: 0.3
  - XTC Probability: 0.3
  - DRY Multiplier: 0.8
  - DRY Base: 1.75
  - DRY Allowed Length: 4
  - DRY Penalty Range: 1024

For more details, see the setup guides and SillyTavern master import for <a href="https://huggingface.co/trashpanda-org/QwQ-32B-Snowdrop-v0" style="color: #ADD8E6; text-decoration: none;" onmouseover="this.style.color='#E6E6FA'" onmouseout="this.style.color='#ADD8E6'">Snowdrop</a>, and additional info on <a href="https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v1" style="color: #FF9999; text-decoration: none;" onmouseover="this.style.color='#E6E6FA'" onmouseout="this.style.color='#FF9999'">ArliAI RpR</a>.
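
To make the formatting concrete, here is a minimal Python sketch that builds the ChatML prompt with the `<think>\n` reply prefill and samples with the Snowdrop-style settings above. It is a sketch only: the full-weight repo id is an assumption, `min_p` requires a recent transformers release, and top_a/TFS are not available in stock transformers.

```
# Minimal ChatML + reasoning-prefill sketch (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "skatardude10/SnowDrogito-RpR-32B"  # assumed repo id for full weights
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# ChatML turns; the assistant turn is prefilled with "<think>\n" to trigger reasoning.
prompt = (
    "<|im_start|>system\nYou are a creative roleplay partner.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n<think>\n"
)
inputs = tok(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.9,        # Snowdrop-style sampler values from the list above
    min_p=0.05,
    repetition_penalty=1.03,
)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=False))
```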

## <span style="color: #CCFFCC;">Performance</span>
- Perplexity under identical conditions (IQ4_XS, 40,960 context, Q8_0 KV cache, on a 150K-token chat dataset), SnowDrogito-RpR-32B vs <span style="color: #ADD8E6;">QwQ-32B-Snowdrop-v0</span>:
```
SnowDrogito-RpR-32B: 4.5597 ± 0.02554
QwQ-32B-Snowdrop-v0: 4.6779 ± 0.02671
```
- The IQ4_XS quant fits 40,960 tokens of context in 24 GB of VRAM with Q8_0 KV cache and full GPU offload.
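
The figures above come from llama.cpp's perplexity tool on the IQ4_XS quant. For orientation only, a rough chunked-perplexity sketch in plain transformers follows; it will not reproduce the llama.cpp numbers exactly, and the full-weight repo id is again an assumption.

```
# Rough chunked perplexity: mean token NLL over fixed windows, then exp().
# Coarser than llama.cpp's evaluation, so expect somewhat different values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "skatardude10/SnowDrogito-RpR-32B"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto").eval()

def chunked_ppl(text: str, window: int = 4096) -> float:
    ids = tok(text, return_tensors="pt").input_ids.to(model.device)
    total_nll, total_tokens = 0.0, 0
    for start in range(0, ids.size(1) - 1, window):
        chunk = ids[:, start : start + window]
        if chunk.size(1) < 2:  # need at least one next-token prediction
            break
        with torch.no_grad():
            loss = model(chunk, labels=chunk).loss  # mean next-token cross-entropy
        total_nll += loss.item() * (chunk.size(1) - 1)
        total_tokens += chunk.size(1) - 1
    return float(torch.exp(torch.tensor(total_nll / total_tokens)))
```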
67
+
68
+ ## <span style="color: #CCFFCC;">Model Details</span>
69
+ - Base Model: <a href="https://huggingface.co/Qwen/Qwen2.5-32B" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">Qwen/Qwen2.5-32B</a>
70
+ - Architecture: Qwen 2.5 (32B parameters)
71
+ - Context Length: 40,960 tokens
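
As a quick sanity check, the advertised context window can be read straight from the model config (repo id assumed, as above):

```
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("skatardude10/SnowDrogito-RpR-32B")  # assumed repo id
print(cfg.max_position_embeddings)  # expected: 40960
```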

## <span style="color: #CCFFCC;">Merge Configuration</span>
This model was created using mergekit with the following TIES merge configuration:
```
models:
  - model: trashpanda-org/QwQ-32B-Snowdrop-v0
    parameters:
      weight: 0.75
      density: 0.5
  - model: deepcogito/cogito-v1-preview-qwen-32B
    parameters:
      weight: 0.15
      density: 0.5
  - model: ArliAI/QwQ-32B-ArliAI-RpR-v1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: ties
base_model: Qwen/Qwen2.5-32B
parameters:
  weight: 0.9
  density: 0.9
  normalize: true
  int8_mask: true
tokenizer_source: Qwen/Qwen2.5-32B-Instruct
dtype: bfloat16
```
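
To reproduce the merge, the config above can be saved as `config.yml` and run with mergekit, either via its `mergekit-yaml` command-line tool or through its Python API. A sketch of the latter, assuming a recent mergekit release:

```
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the TIES configuration shown above (saved locally as config.yml).
with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Download the source models, merge, and write the merged weights to disk.
run_merge(
    merge_config,
    out_path="./SnowDrogito-RpR-32B",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```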

## <span style="color: #CCFFCC;">Acknowledgments</span>
- <a href="https://github.com/arcee-ai/mergekit" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">mergekit</a> for merging.
- <a href="https://github.com/ggerganov/llama.cpp" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">llama.cpp</a> for quantization.
- Original model creators: <a href="https://huggingface.co/Qwen" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">Qwen</a>, <a href="https://huggingface.co/trashpanda-org" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">trashpanda-org</a>, <a href="https://huggingface.co/deepcogito" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">deepcogito</a>, <a href="https://huggingface.co/ArliAI" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">ArliAI</a>.