lbourdois committed · Commit af64eaf · verified · 1 parent: c04321e

Improve language tag
Hi! Since the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.

Files changed (1):
  1. README.md (+92 -79)

README.md:
---
base_model:
- ZeusLabs/Chronos-Platinum-72B
- EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
- m8than/banana-2-b-72b
- abacusai/Dracarys2-72B-Instruct
- rombodawg/Rombos-LLM-V2.5-Qwen-72b
- Qwen/Qwen2.5-72B
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# EurobeatVARemix-Qwen2.5-72b

[![image/png](https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/UqQ-TJ8ZgHk02zvO7Oy11.png)](https://www.youtube.com/watch?v=1gW1uHRPChc)

Updated EVA to 0.1. That's all folks!

...It didn't feel right calling it LLENN anymore, so I'm changing the name. ["Pray I don't alter it any further."](https://www.youtube.com/watch?v=WpE_xMRiCLE)

**Please do not ask for quants; contact others instead.**

*All models are ready for testing on [featherless.ai](https://featherless.ai) as soon as it goes live.*

## Merge Details
### Merge Method

This model was merged with the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) as the base.

### Prompt Format

ChatML works for the most part.
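For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch of building such a prompt by hand (the helper name is mine, not from this repo; in practice the tokenizer's chat template does this for you):

```python
def chatml_prompt(messages):
    """Render role/content messages in ChatML and open an assistant turn."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```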
### Sampler Settings

Personally I use the following:

```
Temp: 1.2
Min P: 0.07
Rep Pen: 1.1
```

Others have suggested the following:

```
Temp: 1.1
Top P: 0.98
Min P: 0.05
```
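Min P here is a relative cutoff: tokens whose probability falls below Min P times the top token's probability are dropped before sampling. A minimal sketch of that filtering step (an illustration of the idea, not any backend's actual implementation):

```python
def min_p_filter(probs, min_p):
    """Drop tokens below min_p * max probability, then renormalize."""
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# With Min P = 0.07, only tokens at >= 7% of the top token's
# probability survive; "zx" at 1% is filtered out.
filtered = min_p_filter({"the": 0.60, "a": 0.30, "zx": 0.01}, min_p=0.07)
```

Unlike a fixed Top P cutoff, this threshold scales with the model's confidence: a flat distribution keeps many candidates, a peaked one keeps few.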
### Models Merged

The following models were included in the merge:
* [ZeusLabs/Chronos-Platinum-72B](https://huggingface.co/ZeusLabs/Chronos-Platinum-72B)
* [EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1)
* [m8than/banana-2-b-72b](https://huggingface.co/m8than/banana-2-b-72b)
* [abacusai/Dracarys2-72B-Instruct](https://huggingface.co/abacusai/Dracarys2-72B-Instruct)
* [rombodawg/Rombos-LLM-V2.5-Qwen-72b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-72b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
  - model: ZeusLabs/Chronos-Platinum-72B
  - model: abacusai/Dracarys2-72B-Instruct
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-72b
  - model: m8than/banana-2-b-72b

merge_method: model_stock
base_model: Qwen/Qwen2.5-72B
parameters:
  normalize: true
dtype: bfloat16
```
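To reproduce a merge like this, the YAML above is the input to mergekit's CLI. Something along these lines should work (file and output paths are placeholders of mine; a 72B Model Stock merge needs substantial disk space and memory):

```shell
pip install mergekit
# Save the YAML above as merge-config.yaml, then:
mergekit-yaml merge-config.yaml ./merged-model
```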