nthehai01 and lbourdois committed
Commit e1de08e · verified · 1 Parent(s): cdc7a25

Improve language tag (#1)


- Improve language tag (42b2c37b715cfc6c21481715fbfef380c5f70d31)


Co-authored-by: Loïck BOURDOIS <[email protected]>

Files changed (1)
README.md +79 -66
README.md CHANGED
@@ -1,66 +1,79 @@
- ---
- base_model:
- - Qwen/Qwen2.5-7B-Instruct
- - Qwen/Qwen2.5-Coder-7B
- - Qwen/Qwen2.5-Math-7B
- - Qwen/Qwen2.5-7B
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # nthehai01/Qwen2.5-7B-Instruct-Math-Code-dare-linear
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Performance
- | Metric |Value|
- |---------------------------------|----:|
- |GSM8k (zero-shot) |87.79|
- |HellaSwag (zero-shot) |34.29|
- |MBPP (zero-shot) |60.41|
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- * [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B)
- * [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- base_model: Qwen/Qwen2.5-7B
- dtype: bfloat16
- merge_method: dare_linear
- parameters:
-   lambda: 0.690661354021995
-   normalize: 1.0
- slices:
- - sources:
-   - layer_range: [0, 28]
-     model: Qwen/Qwen2.5-7B
-   - layer_range: [0, 28]
-     model: Qwen/Qwen2.5-Math-7B
-     parameters:
-       density: 0.9593725853706829
-       weight: 0.11472446469404357
-   - layer_range: [0, 28]
-     model: Qwen/Qwen2.5-Coder-7B
-     parameters:
-       density: 0.768281938201547
-       weight: 0.11350094855547865
-   - layer_range: [0, 28]
-     model: Qwen/Qwen2.5-7B-Instruct
-     parameters:
-       density: 0.48528478746069637
-       weight: 0.6453505470133651
- ```
+ ---
+ base_model:
+ - Qwen/Qwen2.5-7B-Instruct
+ - Qwen/Qwen2.5-Coder-7B
+ - Qwen/Qwen2.5-Math-7B
+ - Qwen/Qwen2.5-7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # nthehai01/Qwen2.5-7B-Instruct-Math-Code-dare-linear
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Performance
+ | Metric |Value|
+ |---------------------------------|----:|
+ |GSM8k (zero-shot) |87.79|
+ |HellaSwag (zero-shot) |34.29|
+ |MBPP (zero-shot) |60.41|
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
+ * [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B)
+ * [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ base_model: Qwen/Qwen2.5-7B
+ dtype: bfloat16
+ merge_method: dare_linear
+ parameters:
+   lambda: 0.690661354021995
+   normalize: 1.0
+ slices:
+ - sources:
+   - layer_range: [0, 28]
+     model: Qwen/Qwen2.5-7B
+   - layer_range: [0, 28]
+     model: Qwen/Qwen2.5-Math-7B
+     parameters:
+       density: 0.9593725853706829
+       weight: 0.11472446469404357
+   - layer_range: [0, 28]
+     model: Qwen/Qwen2.5-Coder-7B
+     parameters:
+       density: 0.768281938201547
+       weight: 0.11350094855547865
+   - layer_range: [0, 28]
+     model: Qwen/Qwen2.5-7B-Instruct
+     parameters:
+       density: 0.48528478746069637
+       weight: 0.6453505470133651
+ ```
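
For reference, a configuration like the one in the card is typically executed with mergekit's Python API or the `mergekit-yaml` CLI. The sketch below is illustrative rather than taken from the commit: it assumes `pip install mergekit`, that the YAML above is saved as `config.yaml`, and an output path of your choosing; option names may vary across mergekit versions.

```python
# Illustrative sketch: run the DARE linear merge described by config.yaml.
# Assumptions (not from the card): config.yaml holds the YAML shown above,
# and the output directory name is arbitrary. Requires `pip install mergekit`.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Qwen2.5-7B-Instruct-Math-Code-dare-linear",  # hypothetical output path
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # copy the base tokenizer into the output
    ),
)
```

The same merge can be run from the command line with `mergekit-yaml config.yaml ./output-dir --cuda`, and the resulting directory loads directly with `transformers.AutoModelForCausalLM.from_pretrained`.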