Columns (name: dtype, observed range or number of distinct values):

model_type: stringclasses (5 values)
model: stringlengths (12 to 62)
AVG: float64 (0.03 to 0.7)
CG: float64 (0 to 0.68)
EL: float64 (0 to 0.77)
FA: float64 (0 to 0.62)
HE: float64 (0 to 0.83)
MC: float64 (0 to 0.95)
MR: float64 (0 to 0.95)
MT: float64 (0.19 to 0.86)
NLI: float64 (0 to 0.97)
QA: float64 (0 to 0.77)
RC: float64 (0 to 0.94)
SUM: float64 (0 to 0.29)
aio_char_f1: float64 (0 to 0.9)
alt-e-to-j_bert_score_ja_f1: float64 (0 to 0.88)
alt-e-to-j_bleu_ja: float64 (0 to 16)
alt-e-to-j_comet_wmt22: float64 (0.2 to 0.92)
alt-j-to-e_bert_score_en_f1: float64 (0 to 0.96)
alt-j-to-e_bleu_en: float64 (0 to 20.1)
alt-j-to-e_comet_wmt22: float64 (0.17 to 0.89)
chabsa_set_f1: float64 (0 to 0.77)
commonsensemoralja_exact_match: float64 (0 to 0.94)
jamp_exact_match: float64 (0 to 1)
janli_exact_match: float64 (0 to 1)
jcommonsenseqa_exact_match: float64 (0 to 0.98)
jemhopqa_char_f1: float64 (0 to 0.71)
jmmlu_exact_match: float64 (0 to 0.81)
jnli_exact_match: float64 (0 to 0.94)
jsem_exact_match: float64 (0 to 0.96)
jsick_exact_match: float64 (0 to 0.93)
jsquad_char_f1: float64 (0 to 0.94)
jsts_pearson: float64 (-0.35 to 0.94)
jsts_spearman: float64 (-0.6 to 0.91)
kuci_exact_match: float64 (0 to 0.93)
mawps_exact_match: float64 (0 to 0.95)
mbpp_code_exec: float64 (0 to 0.68)
mbpp_pylint_check: float64 (0 to 0.99)
mmlu_en_exact_match: float64 (0 to 0.86)
niilc_char_f1: float64 (0 to 0.7)
wiki_coreference_set_f1: float64 (0 to 0.4)
wiki_dependency_set_f1: float64 (0 to 0.88)
wiki_ner_set_f1: float64 (0 to 0.33)
wiki_pas_set_f1: float64 (0 to 0.57)
wiki_reading_char_f1: float64 (0 to 0.94)
wikicorpus-e-to-j_bert_score_ja_f1: float64 (0 to 0.88)
wikicorpus-e-to-j_bleu_ja: float64 (0 to 24)
wikicorpus-e-to-j_comet_wmt22: float64 (0.18 to 0.87)
wikicorpus-j-to-e_bert_score_en_f1: float64 (0 to 0.93)
wikicorpus-j-to-e_bleu_en: float64 (0 to 15.9)
wikicorpus-j-to-e_comet_wmt22: float64 (0.17 to 0.79)
xlsum_ja_bert_score_ja_f1: float64 (0 to 0.79)
xlsum_ja_bleu_ja: float64 (0 to 10.2)
xlsum_ja_rouge1: float64 (0 to 52.8)
xlsum_ja_rouge2: float64 (0 to 29.2)
xlsum_ja_rouge2_scaling: float64 (0 to 0.29)
xlsum_ja_rougeLsum: float64 (0 to 44.9)
architecture: stringclasses (12 values)
precision: stringclasses (3 values)
license: stringclasses (14 values)
params: float64 (0 to 70.6)
likes: int64 (0 to 6.19k)
revision: stringclasses (1 value)
num_few_shot: int64 (0 to 4)
add_special_tokens: stringclasses (2 values)
llm_jp_eval_version: stringclasses (1 value)
vllm_version: stringclasses (1 value)
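Each record below lists one value per line in the column order given above; every model appears twice, once with num_few_shot = 0 and once with num_few_shot = 4. As a minimal sketch of how such a table can be queried (assuming the rows have been exported to a CSV file, here a hypothetical leaderboard.csv, with the column names above as the header), pandas can filter and aggregate the records:

```python
import pandas as pd

# Minimal sketch, not part of the original dump: "leaderboard.csv" is a
# hypothetical export of the records below, using the schema above as header.
df = pd.read_csv("leaderboard.csv")

# Keep the 4-shot runs and rank them by the overall average score (AVG).
few_shot = df[df["num_few_shot"] == 4].sort_values("AVG", ascending=False)

# Category-level score columns as listed in the schema.
categories = ["AVG", "CG", "EL", "FA", "HE", "MC", "MR", "MT", "NLI", "QA", "RC", "SUM"]

# Top 10 models with their size and category averages.
print(few_shot[["model", "model_type", "params"] + categories].head(10).to_string(index=False))

# Mean category scores per model type (merges, instruction-tuned, RL-tuned, fine-tuned).
print(few_shot.groupby("model_type")[categories].mean().round(3))
```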
🤝 : base merges and moerges
wanlige/li-14b-v0.4-slerp0.1
0.5296
0.3655
0.264
0.1166
0.6476
0.8526
0.824
0.6369
0.7463
0.3942
0.8843
0.0938
0.4107
0.7561
10.755
0.7115
0.8412
15.4733
0.5468
0.264
0.8948
0.6121
0.8222
0.9276
0.3667
0.6707
0.7781
0.7247
0.7944
0.8843
0.8856
0.8559
0.7354
0.824
0.3655
0.6847
0.6244
0.4054
0.0135
0.0047
0
0.0008
0.5637
0.7301
8.1748
0.6923
0.8392
8.8619
0.5972
0.6856
2.884
24.1801
9.3772
0.0938
21.3495
Qwen2ForCausalLM
bfloat16
14.766
6
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
wanlige/li-14b-v0.4-slerp0.1
0.6179
0.3655
0.5847
0.2215
0.7419
0.8846
0.892
0.7806
0.7774
0.5443
0.9108
0.0938
0.5381
0.7909
12.1159
0.7766
0.9541
17.0533
0.8846
0.5847
0.9056
0.6667
0.8222
0.9562
0.5483
0.7128
0.8595
0.7816
0.7571
0.9108
0.905
0.8782
0.7921
0.892
0.3655
0.6847
0.771
0.5465
0.0478
0.3701
0.0088
0.0803
0.6003
0.7448
10.4635
0.7005
0.905
11.0215
0.7607
0.6856
2.884
24.1801
9.3772
0.0938
21.3495
Qwen2ForCausalLM
bfloat16
14.766
6
main
4
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
TIGER-Lab/Qwen2.5-32B-Instruct-CFT
0.6551
0.5301
0.5896
0.271
0.7754
0.8964
0.944
0.8478
0.8109
0.5398
0.9051
0.0955
0.5539
0.8642
13.319
0.9083
0.9554
17.68
0.8861
0.5896
0.8975
0.6724
0.8444
0.958
0.5693
0.7506
0.8969
0.7822
0.8587
0.9051
0.8899
0.8778
0.8336
0.944
0.5301
0.7651
0.8001
0.4962
0.0534
0.3708
0
0.1115
0.8196
0.8291
11.1464
0.8385
0.9044
11.1091
0.7582
0.6914
2.7236
25.5208
9.5416
0.0955
22.2206
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
5
main
4
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
TIGER-Lab/Qwen2.5-32B-Instruct-CFT
0.5453
0.5301
0.1079
0.1455
0.5738
0.8739
0.794
0.8385
0.7644
0.3888
0.8865
0.0955
0.4438
0.8489
11.3164
0.901
0.9511
15.7066
0.8796
0.1079
0.9031
0.6494
0.7931
0.9312
0.2681
0.5614
0.8184
0.7986
0.7625
0.8865
0.8949
0.876
0.7874
0.794
0.5301
0.7651
0.5861
0.4545
0.0281
0.0078
0.0354
0.0065
0.6495
0.8004
8.7498
0.8269
0.8968
9.5317
0.7467
0.6914
2.7236
25.5208
9.5416
0.0955
22.2206
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
5
main
0
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
TIGER-Lab/Qwen2.5-Math-7B-CFT
0.434
0
0.3597
0.0969
0.5189
0.5742
0.822
0.6971
0.6218
0.2407
0.8074
0.035
0.1146
0.7595
4.5895
0.6944
0.9276
12.2069
0.83
0.3597
0.7265
0.4856
0.6111
0.5675
0.4224
0.4759
0.6672
0.7197
0.6253
0.8074
0.6537
0.5905
0.4286
0.822
0
0
0.562
0.1851
0.0161
0.1415
0
0.0141
0.3128
0.6928
4.5142
0.597
0.8715
7.4828
0.6668
0.623
1.132
12.083
3.5072
0.035
10.2855
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
8
main
4
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
TIGER-Lab/Qwen2.5-Math-7B-CFT
0.2318
0
0.0021
0.0403
0.0999
0.3071
0.174
0.6655
0.5192
0.1131
0.5942
0.035
0.0549
0.7241
2.8312
0.6517
0.9183
9.7658
0.8094
0.0021
0.488
0.3448
0.4736
0.2368
0.1939
0.0215
0.5485
0.6105
0.6186
0.5942
0.6296
0.615
0.1966
0.174
0
0
0.1783
0.0905
0.0009
0
0
0
0.2006
0.6588
3.0223
0.5576
0.8577
6.3064
0.6433
0.623
1.132
12.083
3.5072
0.035
10.2855
Qwen2ForCausalLM
bfloat16
apache-2.0
7.616
8
main
0
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
qihoo360/TinyR1-32B-Preview
0.5665
0
0.5626
0.231
0.7446
0.8601
0.862
0.8318
0.7501
0.4913
0.8981
0
0.4666
0.849
11.6365
0.8927
0.9501
16.2483
0.8777
0.5626
0.8577
0.6408
0.7847
0.9383
0.5535
0.7275
0.8242
0.7948
0.7061
0.8981
0.8827
0.8464
0.7844
0.862
0
0
0.7618
0.4536
0.0133
0.3166
0.115
0.0365
0.6735
0.8099
9.653
0.8083
0.8999
10.2179
0.7488
0.5475
0.0211
0.0074
0
0
0.0074
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
325
main
4
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
qihoo360/TinyR1-32B-Preview
0.1237
0
0.1276
0.0774
0.001
0.522
0
0.5262
0.0116
0.0461
0.0486
0
0.047
0.7428
3.7697
0.6875
0.8084
3.091
0.4235
0.1276
0.8176
0
0
0.1206
0.064
0.002
0.0025
0.0549
0.0004
0.0486
-0.215
-0.5991
0.6276
0
0
0
0.0001
0.0272
0.0071
0.0173
0.0435
0.0025
0.3167
0.689
3.1838
0.6039
0.7704
3.4735
0.3898
0.5475
0.0211
0.0074
0
0
0.0074
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
325
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Sakalti/oxyge1-33B
0.5055
0.2831
0.1063
0.1213
0.5663
0.8747
0.794
0.6899
0.7654
0.3803
0.8844
0.0946
0.4306
0.7644
10.9448
0.7332
0.8698
15.2391
0.661
0.1063
0.9033
0.6523
0.7944
0.9339
0.2681
0.5521
0.8184
0.7973
0.7646
0.8844
0.8949
0.8758
0.7868
0.794
0.2831
0.4739
0.5805
0.4421
0.0281
0.0069
0
0.0054
0.5661
0.7345
8.5113
0.7025
0.8634
9.1806
0.6628
0.6881
2.7576
25.2368
9.4588
0.0946
21.9712
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
2
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Sakalti/oxyge1-33B
0.6314
0.2831
0.5842
0.2733
0.7761
0.8963
0.942
0.8428
0.8097
0.5387
0.9041
0.0946
0.5532
0.8639
13.2869
0.9074
0.955
17.794
0.8852
0.5842
0.8973
0.6724
0.8417
0.958
0.5672
0.7515
0.8969
0.7803
0.8573
0.9041
0.8882
0.8755
0.8335
0.942
0.2831
0.4739
0.8007
0.4958
0.0527
0.3809
0
0.1191
0.8136
0.8228
11.0997
0.828
0.9019
11.0915
0.7507
0.6881
2.7576
25.2368
9.4588
0.0946
21.9712
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
2
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
rsh345/llama3.1-8b-swallow-openmath-dare_ties-d5w5_d5w5
0.0715
0
0
0.003
0.0012
0.2901
0
0.3406
0.0477
0.0327
0.0713
0.0004
0.0062
0.58
0.3146
0.3564
0.7674
5.0764
0.3511
0
0.4654
0.0316
0.0542
0.1912
0.082
0.0023
0.0403
0
0.1124
0.0713
0.0327
0.0328
0.2136
0
0
0.0161
0.0001
0.0098
0
0
0
0
0.0149
0.5752
0.3131
0.3517
0.7512
4.219
0.3032
0.4842
0.0483
0.256
0.0388
0.0004
0.2422
LlamaForCausalLM
float16
8.03
0
main
0
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
rsh345/llama3.1-8b-swallow-openmath-dare_ties-d5w5_d5w5
0.2384
0
0.14
0.004
0.2884
0.3261
0.418
0.402
0.5082
0.1191
0.4164
0.0004
0.0376
0.6445
0.3864
0.4314
0.8601
7.7196
0.6057
0.14
0.4692
0.3966
0.5
0.2511
0.2471
0.2807
0.5534
0.6723
0.4185
0.4164
-0.2409
-0.3426
0.2579
0.418
0
0.0161
0.2961
0.0727
0.002
0
0
0
0.0179
0.5303
0.1744
0.2938
0.7294
4.1137
0.2772
0.4842
0.0483
0.256
0.0388
0.0004
0.2422
LlamaForCausalLM
float16
8.03
0
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
nvidia/OpenMath2-Llama3.1-8B
0.0396
0
0
0.0085
0
0.0443
0
0.3238
0
0.0311
0.0276
0.0001
0.0042
0.6166
0.0922
0.3589
0.807
5.7372
0.3528
0
0
0
0
0.0009
0.0644
0
0
0
0
0.0276
0
0
0.1322
0
0
0
0
0.0247
0
0
0
0
0.0425
0.557
0.0439
0.2645
0.7964
4.7115
0.319
0.5711
0.0317
0.3003
0.0085
0.0001
0.3002
LlamaForCausalLM
bfloat16
llama3.1
8.03
28
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
nvidia/OpenMath2-Llama3.1-8B
0.2268
0
0.064
0.0098
0.3163
0.3148
0.498
0.4558
0.5125
0.0769
0.2468
0.0001
0.0089
0.681
0.4733
0.4924
0.8863
8.599
0.6799
0.064
0.4724
0.3448
0.5
0.2207
0.1859
0.277
0.5534
0.6717
0.4928
0.2468
-0.0135
-0.0185
0.2513
0.498
0
0
0.3555
0.0358
0
0
0
0
0.049
0.5863
0.0469
0.3226
0.7818
4.8211
0.3284
0.5711
0.0317
0.3003
0.0085
0.0001
0.3002
LlamaForCausalLM
bfloat16
llama3.1
8.03
28
main
4
False
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
Qwen/QwQ-32B
0.5856
0.3996
0.5617
0.2634
0.6509
0.8863
0.844
0.7916
0.7259
0.4758
0.7399
0.1023
0.4327
0.8153
10.6998
0.8323
0.9359
14.9658
0.85
0.5617
0.9018
0.6494
0.7944
0.9446
0.5407
0.6814
0.7506
0.798
0.6369
0.7399
0.8734
0.8478
0.8125
0.844
0.3996
0.6787
0.6204
0.4539
0.0177
0.3748
0.177
0.075
0.6727
0.8039
8.9104
0.8028
0.8759
7.7521
0.6811
0.6974
2.2513
29.6185
10.2373
0.1023
25.1995
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
2,616
main
4
True
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
Qwen/QwQ-32B
0.3284
0.3996
0.1331
0.0997
0.082
0.7296
0.048
0.7489
0.4949
0.1939
0.5805
0.1023
0.1767
0.7857
9.3475
0.8029
0.9234
12.0282
0.8197
0.1331
0.8675
0.3592
0.7347
0.8677
0.2517
0.0573
0.3414
0.5202
0.5188
0.5805
0.329
0.7893
0.4535
0.048
0.3996
0.6787
0.1066
0.1533
0.0009
0.0072
0.0177
0.0026
0.4702
0.7276
5.7043
0.703
0.8656
7.0633
0.6698
0.6974
2.2513
29.6185
10.2373
0.1023
25.1995
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
2,616
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Cran-May/tempmotacilla-cinerea-0308
0.6465
0.5823
0.569
0.2644
0.7412
0.8835
0.896
0.8496
0.7769
0.5481
0.9146
0.0863
0.5445
0.8647
12.8035
0.9101
0.9545
17.1006
0.8855
0.569
0.9046
0.6609
0.8181
0.9526
0.5633
0.7139
0.8509
0.7822
0.7723
0.9146
0.8951
0.8738
0.7932
0.896
0.5823
0.9157
0.7686
0.5366
0.0704
0.3562
0.0088
0.0899
0.7967
0.827
10.9534
0.8419
0.9055
11.0889
0.761
0.6843
2.9274
22.1803
8.6305
0.0863
19.6554
Qwen2ForCausalLM
bfloat16
14.766
1
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Cran-May/tempmotacilla-cinerea-0308
0.5671
0.5843
0.2222
0.1391
0.6317
0.8575
0.84
0.8358
0.7477
0.4183
0.8747
0.0863
0.4379
0.8492
10.902
0.9034
0.949
15.8739
0.8737
0.2222
0.8968
0.6207
0.8292
0.933
0.415
0.6704
0.7609
0.7222
0.8058
0.8747
0.8878
0.8547
0.7427
0.84
0.5843
0.9157
0.5929
0.4022
0.0208
0.0039
0
0.0013
0.6698
0.7977
8.4769
0.8257
0.8921
9.1572
0.7406
0.6843
2.9274
22.1803
8.6305
0.0863
19.6554
Qwen2ForCausalLM
bfloat16
14.766
1
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
JungZoona/T3Q-qwen2.5-14b-v1.0-e3
0.6431
0.5904
0.5524
0.255
0.7391
0.8816
0.876
0.8434
0.7718
0.5518
0.9146
0.0975
0.5332
0.8605
12.5607
0.9068
0.9519
16.1318
0.879
0.5524
0.8998
0.6695
0.8069
0.9562
0.5887
0.7038
0.8505
0.7835
0.7485
0.9146
0.9041
0.8765
0.7888
0.876
0.5904
0.8996
0.7745
0.5335
0.0271
0.3938
0.0088
0.0634
0.7817
0.8134
10.3823
0.8358
0.9051
11.1879
0.7522
0.6893
2.5875
26.5648
9.7674
0.0975
23.0259
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
15
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
JungZoona/T3Q-qwen2.5-14b-v1.0-e3
0.5695
0.5904
0.2897
0.1441
0.6824
0.8494
0.818
0.8417
0.7348
0.3467
0.8702
0.0975
0.314
0.8474
11.6296
0.9056
0.9536
16.1262
0.8847
0.2897
0.8978
0.6063
0.7903
0.933
0.3518
0.6151
0.7169
0.7481
0.8123
0.8702
0.8508
0.8631
0.7174
0.818
0.5904
0.8996
0.7498
0.3744
0.0233
0.0044
0.0088
0
0.6839
0.7885
7.9874
0.8242
0.8993
9.8812
0.7523
0.6893
2.5875
26.5648
9.7674
0.0975
23.0259
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
15
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
JungZoona/T3Q-qwen2.5-14b-v1.2-e2
0.6467
0.6165
0.5652
0.2565
0.7344
0.8779
0.89
0.8447
0.7862
0.5315
0.9111
0.0997
0.5331
0.8628
13.0086
0.9082
0.9528
16.6127
0.8804
0.5652
0.8973
0.6925
0.8028
0.9544
0.5417
0.7001
0.8468
0.7835
0.8056
0.9111
0.9012
0.8701
0.7819
0.89
0.6165
0.9157
0.7687
0.5198
0.0303
0.3914
0.0088
0.0778
0.7743
0.8161
10.304
0.8368
0.9062
11.3567
0.7534
0.6948
2.5929
27.5891
9.9732
0.0997
23.9209
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
5
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
JungZoona/T3Q-qwen2.5-14b-v1.2-e2
0.5648
0.6165
0.2519
0.1401
0.7089
0.8513
0.808
0.8412
0.7496
0.285
0.8609
0.0997
0.3393
0.845
12.0082
0.9038
0.9531
15.5971
0.8845
0.2519
0.8985
0.6695
0.7847
0.9339
0.262
0.6693
0.6853
0.7532
0.8551
0.8609
0.8645
0.8651
0.7215
0.808
0.6165
0.9157
0.7485
0.2538
0.0133
0.0073
0.0088
0
0.6712
0.7893
8.1916
0.8242
0.8992
9.8162
0.7524
0.6948
2.5929
27.5891
9.9732
0.0997
23.9209
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
5
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
JungZoona/T3Q-qwen2.5-32b-v1.2-e2
0.5791
0.6727
0.0893
0.1543
0.6903
0.8779
0.868
0.8327
0.7961
0.3929
0.8865
0.1096
0.4166
0.8495
12.3906
0.9034
0.938
16.8244
0.8445
0.0893
0.9103
0.7184
0.8375
0.9392
0.3277
0.7187
0.8348
0.8043
0.7857
0.8865
0.8883
0.8733
0.7841
0.868
0.6727
0.9679
0.6619
0.4346
0.01
0.006
0.0088
0.0032
0.7436
0.8012
9.2148
0.8321
0.8992
10.1922
0.751
0.703
2.7529
28.7047
10.9627
0.1096
24.9853
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
1
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
JungZoona/T3Q-qwen2.5-32b-v1.2-e2
0.6734
0.6727
0.5859
0.2667
0.7849
0.8989
0.942
0.8481
0.8198
0.5648
0.9141
0.1096
0.5803
0.8612
13.5611
0.9014
0.9553
17.4587
0.8847
0.5859
0.9046
0.7098
0.8736
0.9634
0.5911
0.7588
0.8599
0.7898
0.8658
0.9141
0.8832
0.87
0.8288
0.942
0.6727
0.9679
0.811
0.523
0.0278
0.3629
0.0059
0.1118
0.8251
0.8358
11.7295
0.8432
0.9087
11.6244
0.7629
0.703
2.7529
28.7047
10.9627
0.1096
24.9853
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
1
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
prithivMLmods/Galactic-Qwen-14B-Exp2
0.6556
0.6586
0.5546
0.2595
0.7521
0.8848
0.904
0.8474
0.7764
0.5459
0.9179
0.1102
0.5389
0.8647
12.9944
0.9092
0.9536
17.1361
0.883
0.5546
0.9036
0.6638
0.8181
0.9526
0.5753
0.7258
0.8525
0.7898
0.7581
0.9179
0.9036
0.8735
0.7983
0.904
0.6586
0.9639
0.7784
0.5236
0.0564
0.3814
0.0177
0.0596
0.7825
0.8237
10.8463
0.8401
0.906
11.4467
0.7572
0.7012
2.7087
29.708
11.0175
0.1102
25.7031
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
5
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
prithivMLmods/Galactic-Qwen-14B-Exp2
0.549
0.6586
0.195
0.1469
0.4286
0.8626
0.776
0.8424
0.7405
0.3967
0.8819
0.1102
0.4116
0.8506
11.556
0.9056
0.9542
16.4825
0.8854
0.195
0.9058
0.6178
0.7972
0.9437
0.4128
0.3584
0.7375
0.7475
0.8027
0.8819
0.8775
0.8635
0.7384
0.776
0.6586
0.9639
0.4988
0.3657
0.0292
0.0087
0.0088
0.0006
0.6871
0.794
8.3713
0.8253
0.8987
9.6329
0.7533
0.7012
2.7087
29.708
11.0175
0.1102
25.7031
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
5
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Sakalti/sakalinear-1-1.0
0.2194
0
0.2015
0.1145
0.1155
0.4912
0.006
0.6432
0.2306
0.1389
0.3624
0.1091
0.0957
0.7637
7.5029
0.7793
0.8203
12.5698
0.5875
0.2015
0.0005
0.4885
0
0.8159
0.1437
0.133
0.3217
0.0006
0.3424
0.3624
0.8887
0.8505
0.6571
0.006
0
0
0.0981
0.1773
0.0123
0.0123
0
0.0018
0.5463
0.6895
6.2676
0.6655
0.8049
7.2805
0.5407
0.7049
2.9128
33.4218
10.9144
0.1091
27.233
Qwen2ForCausalLM
float16
7.613
0
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Sakalti/sakalinear-1-1.0
0.5252
0
0.4444
0.2022
0.6495
0.8288
0.786
0.7083
0.7464
0.4137
0.8883
0.1091
0.37
0.7795
11.2921
0.7082
0.944
15.6825
0.864
0.4444
0.8469
0.6092
0.7458
0.9035
0.5035
0.6043
0.8361
0.7191
0.8216
0.8883
0.8634
0.8461
0.7361
0.786
0
0
0.6947
0.3676
0.0509
0.3297
0.0531
0.0794
0.498
0.7419
8.2359
0.6399
0.8518
8.8304
0.621
0.7049
2.9128
33.4218
10.9144
0.1091
27.233
Qwen2ForCausalLM
float16
7.613
0
main
4
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-ties-knfxihf
0.5729
0.6406
0.2325
0.1539
0.6993
0.8583
0.816
0.8379
0.7361
0.3622
0.8589
0.106
0.4028
0.8447
10.4032
0.8986
0.9533
15.9587
0.8836
0.2325
0.8983
0.6264
0.8111
0.9357
0.352
0.6651
0.7769
0.6446
0.8216
0.8589
0.8898
0.857
0.741
0.816
0.6406
0.9819
0.7335
0.3317
0.0239
0.0114
0.0442
0
0.6897
0.7959
8.0867
0.8188
0.8966
9.293
0.7508
0.7009
2.8273
27.9811
10.6038
0.106
24.4541
Qwen2ForCausalLM
float16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-ties-knfxihf
0.651
0.6406
0.571
0.2718
0.7389
0.8825
0.896
0.8461
0.766
0.5378
0.9048
0.106
0.52
0.862
12.5494
0.9073
0.9535
16.5637
0.8834
0.571
0.9016
0.6034
0.8
0.9508
0.5969
0.7108
0.8611
0.7727
0.7928
0.9048
0.8862
0.861
0.7952
0.896
0.6406
0.9819
0.7671
0.4964
0.0822
0.3752
0.0531
0.0779
0.7709
0.8262
10.7399
0.8378
0.9037
10.7909
0.7559
0.7009
2.8273
27.9811
10.6038
0.106
24.4541
Qwen2ForCausalLM
float16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
qihoo360/Light-R1-14B-DS
0.0566
0
0.1234
0.0247
0
0.0002
0
0.3803
0
0.055
0.021
0.0181
0.0245
0.5836
0.2552
0.3607
0.7852
3.3407
0.41
0.1234
0.0003
0
0
0
0.0941
0
0
0
0
0.021
0.0946
0.0931
0.0005
0
0
0
0
0.0465
0.0025
0.0037
0.028
0
0.0891
0.5582
0.2487
0.3757
0.7674
2.6071
0.3749
0.6171
0.2832
10.0329
1.7969
0.0181
7.9065
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
34
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
qihoo360/Light-R1-14B-DS
0.5312
0
0.5636
0.2061
0.6449
0.8339
0.69
0.8206
0.744
0.4292
0.8925
0.0181
0.4252
0.8395
9.9861
0.8837
0.9459
14.5833
0.8704
0.5636
0.8545
0.6063
0.7736
0.9348
0.4472
0.6176
0.7239
0.774
0.8423
0.8925
0.862
0.8335
0.7126
0.69
0
0
0.6721
0.4151
0.0256
0.2781
0.0796
0.0351
0.6122
0.7926
8.771
0.7915
0.8929
9.8171
0.7368
0.6171
0.2832
10.0329
1.7969
0.0181
7.9065
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
34
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
qihoo360/Light-R1-32B
0.6472
0.5281
0.5679
0.2745
0.7579
0.8954
0.918
0.8435
0.7998
0.5434
0.9102
0.08
0.5514
0.8555
12.5021
0.9
0.9537
16.9516
0.883
0.5679
0.9151
0.681
0.8319
0.9634
0.566
0.7323
0.8804
0.7936
0.8121
0.9102
0.8817
0.8686
0.8079
0.918
0.5281
0.8233
0.7834
0.5128
0.021
0.3901
0.1416
0.0756
0.7442
0.8272
11.2441
0.8348
0.9008
10.7905
0.7564
0.6811
2.6471
24.9683
7.9879
0.08
21.2223
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
84
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
qihoo360/Light-R1-32B
0.5197
0.5281
0.1792
0.1342
0.6242
0.8659
0.648
0.7917
0.7465
0.2411
0.8781
0.08
0.1728
0.7943
10.0045
0.8012
0.9428
13.9928
0.8577
0.1792
0.8953
0.6379
0.7611
0.9348
0.2436
0.6043
0.6775
0.7866
0.8695
0.8781
0.8901
0.8687
0.7676
0.648
0.5281
0.8233
0.6441
0.3071
0.0092
0.0112
0.0044
0.0036
0.6425
0.7776
8.5686
0.7799
0.8875
8.9069
0.7278
0.6811
2.6471
24.9683
7.9879
0.08
21.2223
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
84
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
prithivMLmods/Alpha-UMa-Qwen-14B
0.6406
0.5582
0.5426
0.2654
0.751
0.8891
0.884
0.8442
0.754
0.5569
0.9101
0.0907
0.5574
0.8644
12.9476
0.9089
0.952
16.5603
0.88
0.5426
0.9023
0.6121
0.7806
0.9607
0.5621
0.7218
0.8373
0.7891
0.7508
0.9101
0.9137
0.8822
0.8043
0.884
0.5582
0.8353
0.7801
0.5511
0.0492
0.4055
0
0.0705
0.8019
0.8254
11.5501
0.8357
0.9054
11.4357
0.752
0.6744
2.908
25.0175
9.0731
0.0907
21.4943
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
4
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
prithivMLmods/Alpha-UMa-Qwen-14B
0.4224
0.5582
0.1381
0.1441
0.1643
0.8535
0.06
0.8406
0.7421
0.2525
0.802
0.0907
0.3583
0.8509
11.1281
0.9059
0.9538
16.5995
0.8844
0.1381
0.894
0.6236
0.7722
0.9366
0.2918
0.2299
0.7325
0.7405
0.8415
0.802
0.8742
0.8681
0.73
0.06
0.5582
0.8353
0.0988
0.1074
0
0.002
0
0
0.7186
0.7952
8.5391
0.8253
0.8968
9.6236
0.7469
0.6744
2.908
25.0175
9.0731
0.0907
21.4943
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
4
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
CjangCjengh/GaLLM-multi-14B-v0.1
0.1588
0.0442
0.0757
0.0602
0.033
0.4395
0
0.5537
0.1043
0.1061
0.2794
0.0511
0.0794
0.645
7.3135
0.631
0.7847
5.3111
0.5367
0.0757
0.004
0.1552
0
0.7962
0.1749
0.0271
0.1459
0
0.2206
0.2794
-0.0772
-0.2268
0.5183
0
0.0442
0.0783
0.0389
0.0639
0.0019
0.0012
0
0.0002
0.2975
0.6013
5.946
0.5713
0.7658
3.5741
0.476
0.6235
2.6406
15.3563
5.1222
0.0511
12.725
Qwen2ForCausalLM
bfloat16
cc-by-nc-sa-4.0
14.77
3
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
CjangCjengh/GaLLM-multi-14B-v0.1
0.2084
0.0442
0.195
0.0582
0.002
0.5178
0.002
0.6133
0.4009
0.1142
0.2938
0.0511
0.0874
0.6422
6.7974
0.6204
0.8578
9.532
0.6741
0.195
0.0098
0.4856
0
0.9178
0.2053
0
0.4006
0.7487
0.3696
0.2938
0.8143
0.8069
0.6259
0.002
0.0442
0.0783
0.0039
0.0497
0.0115
0.0502
0
0.0284
0.2009
0.6184
7.37
0.5564
0.8331
6.9912
0.6023
0.6235
2.6406
15.3563
5.1222
0.0511
12.725
Qwen2ForCausalLM
bfloat16
cc-by-nc-sa-4.0
14.77
3
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
CjangCjengh/GaLLM-14B-v0.2
0.1462
0.0622
0.0286
0.0388
0.0199
0.2519
0.002
0.5792
0.1671
0.0979
0.2944
0.066
0.0777
0.6874
6.2065
0.6842
0.7992
4.728
0.5999
0.0286
0.0035
0.1925
0.0458
0.3789
0.1578
0
0.1787
0.1389
0.2795
0.2944
-0.3535
-0.3097
0.3734
0.002
0.0622
0.1888
0.0399
0.058
0
0
0
0
0.1942
0.6152
4.525
0.5814
0.7509
1.831
0.4515
0.6577
2.2357
23.6713
6.5929
0.066
19.921
Qwen2ForCausalLM
bfloat16
cc-by-nc-sa-4.0
14.77
0
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
CjangCjengh/GaLLM-14B-v0.2
0.1919
0.0622
0.2219
0.1167
0
0.4119
0
0.6282
0.2244
0.0964
0.2836
0.066
0.0822
0.7241
6.748
0.7389
0.8577
8.4705
0.6643
0.2219
0.0013
0.3276
0
0.8052
0.1311
0
0.4178
0.0164
0.3601
0.2836
0.7553
0.7516
0.4293
0
0.0622
0.1888
0
0.0761
0.0155
0.0416
0.0619
0.0305
0.4339
0.6442
5.175
0.6189
0.79
4.9482
0.4904
0.6577
2.2357
23.6713
6.5929
0.066
19.921
Qwen2ForCausalLM
bfloat16
cc-by-nc-sa-4.0
14.77
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-slerp-ryfxivm
0.5714
0.6446
0.2585
0.1561
0.7022
0.8559
0.828
0.837
0.7377
0.3388
0.8211
0.1051
0.3923
0.8446
10.1118
0.8985
0.9525
15.7333
0.883
0.2585
0.9013
0.6264
0.8069
0.9303
0.2928
0.6687
0.779
0.6692
0.8068
0.8211
0.8875
0.8578
0.7361
0.828
0.6446
0.9679
0.7357
0.3313
0.0241
0.0156
0.0531
0
0.6877
0.7967
8.0992
0.8178
0.8965
9.2296
0.7487
0.6998
2.8566
27.5278
10.5125
0.1051
23.9857
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-slerp-ryfxivm
0.6504
0.6446
0.5779
0.2777
0.7358
0.879
0.896
0.8456
0.7657
0.5287
0.8985
0.1051
0.5103
0.8606
12.091
0.9063
0.9531
16.498
0.8829
0.5779
0.8993
0.6063
0.7931
0.9482
0.5917
0.7052
0.8591
0.7759
0.7942
0.8985
0.8835
0.8603
0.7896
0.896
0.6446
0.9679
0.7665
0.4841
0.086
0.3808
0.0619
0.0782
0.7817
0.8242
10.5418
0.8358
0.9042
10.8193
0.7574
0.6998
2.8566
27.5278
10.5125
0.1051
23.9857
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-lkrglxd
0.544
0.6044
0.2061
0.1434
0.4789
0.7537
0.826
0.8421
0.745
0.4006
0.8848
0.0986
0.4433
0.8516
11.3221
0.9046
0.954
16.2531
0.8851
0.2061
0.5819
0.6178
0.8056
0.9357
0.3847
0.6459
0.7395
0.7405
0.8214
0.8848
0.8864
0.8624
0.7435
0.826
0.6044
0.9317
0.3119
0.3737
0.0196
0.006
0
0
0.6915
0.7976
8.6399
0.8264
0.8974
9.6261
0.752
0.6905
2.8019
26.0896
9.8648
0.0986
22.9115
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-lkrglxd
0.6515
0.6044
0.5713
0.2573
0.7491
0.8897
0.904
0.8475
0.7849
0.5431
0.917
0.0986
0.5504
0.8657
12.9227
0.9086
0.9545
17.4483
0.8843
0.5713
0.9098
0.6897
0.8083
0.9553
0.5447
0.7213
0.8652
0.7847
0.7765
0.917
0.9019
0.874
0.804
0.904
0.6044
0.9317
0.7769
0.5342
0.0573
0.3656
0
0.0709
0.7927
0.8275
11.2552
0.8407
0.9052
11.351
0.7563
0.6905
2.8019
26.0896
9.8648
0.0986
22.9115
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-della-efwskwi
0.5725
0.6386
0.2366
0.1544
0.6999
0.856
0.812
0.8376
0.7385
0.3608
0.8561
0.1069
0.4025
0.8444
10.3412
0.8978
0.9532
15.9556
0.8838
0.2366
0.897
0.6351
0.8125
0.9321
0.3496
0.6651
0.7773
0.649
0.8188
0.8561
0.8891
0.8576
0.7388
0.812
0.6386
0.9839
0.7348
0.3302
0.0232
0.0127
0.0442
0.0005
0.6914
0.7963
8.0445
0.8187
0.8966
9.2755
0.7503
0.7013
2.8071
28.0731
10.6952
0.1069
24.5026
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-della-efwskwi
0.6507
0.6386
0.5719
0.2746
0.7378
0.8821
0.896
0.8466
0.7651
0.5347
0.9036
0.1069
0.5173
0.862
12.3545
0.9071
0.9535
16.8023
0.8836
0.5719
0.9013
0.6006
0.7986
0.9508
0.5894
0.7091
0.8587
0.7765
0.7914
0.9036
0.8862
0.8603
0.7943
0.896
0.6386
0.9839
0.7666
0.4973
0.0833
0.3849
0.0531
0.0783
0.7736
0.8257
10.7213
0.8377
0.9042
10.8191
0.7581
0.7013
2.8071
28.0731
10.6952
0.1069
24.5026
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
tanliboy/lambda-qwen2.5-14b-dpo-test
0.624
0.4398
0.5586
0.2491
0.7308
0.8827
0.886
0.8381
0.7499
0.5288
0.9031
0.0967
0.5091
0.8622
12.5346
0.9069
0.9397
16.8055
0.8533
0.5586
0.9018
0.6178
0.8125
0.9544
0.5666
0.7001
0.8131
0.7639
0.7422
0.9031
0.8926
0.8655
0.7918
0.886
0.4398
0.7028
0.7615
0.5107
0.0685
0.3425
0.0236
0.0826
0.7285
0.82
9.8716
0.8352
0.9016
10.6094
0.7571
0.6932
2.7105
25.2152
9.674
0.0967
22.1909
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
9
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
tanliboy/lambda-qwen2.5-14b-dpo-test
0.5463
0.4398
0.2247
0.1322
0.6866
0.8503
0.796
0.7885
0.7403
0.4039
0.8507
0.0967
0.3946
0.8407
9.0889
0.8949
0.8958
15.0443
0.7519
0.2247
0.8853
0.6178
0.8111
0.9258
0.3633
0.6628
0.7658
0.7159
0.7909
0.8507
0.8889
0.856
0.7399
0.796
0.4398
0.7028
0.7103
0.4538
0.0262
0.0114
0.0088
0.0009
0.6137
0.7915
7.3908
0.8136
0.8692
8.8078
0.6935
0.6932
2.7105
25.2152
9.674
0.0967
22.1909
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
9
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-ties-dvjxpbu
0.573
0.6446
0.2331
0.1516
0.6997
0.8575
0.814
0.8377
0.7351
0.3648
0.8598
0.1054
0.4032
0.8446
10.4737
0.8982
0.9533
15.9252
0.8837
0.2331
0.8983
0.6236
0.8097
0.9348
0.3544
0.6659
0.7769
0.6446
0.8206
0.8598
0.8893
0.8572
0.7396
0.814
0.6446
0.9819
0.7336
0.3367
0.0241
0.0109
0.0442
0
0.6787
0.7957
8.1386
0.8181
0.8967
9.2722
0.7508
0.7014
2.7651
27.9924
10.5553
0.1054
24.4495
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-ties-dvjxpbu
0.6511
0.6446
0.5695
0.2696
0.7385
0.8828
0.898
0.8466
0.7662
0.5373
0.904
0.1054
0.5195
0.8615
12.4333
0.9068
0.9536
16.6964
0.8837
0.5695
0.9018
0.6034
0.8014
0.9508
0.5894
0.71
0.8587
0.7746
0.793
0.904
0.887
0.8614
0.7956
0.898
0.6446
0.9819
0.767
0.5031
0.0783
0.3791
0.0442
0.0749
0.7713
0.8262
10.6991
0.8377
0.9042
10.8084
0.7582
0.7014
2.7651
27.9924
10.5553
0.1054
24.4495
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-dmnvnmc
0.51
0.5582
0.2447
0.147
0.5178
0.5989
0.664
0.8388
0.6622
0.3961
0.8784
0.1043
0.4378
0.8455
11.0193
0.8959
0.9534
15.8596
0.8842
0.2447
0.1215
0.6006
0.7972
0.9374
0.3886
0.662
0.5822
0.6957
0.6353
0.8784
0.8865
0.8612
0.7377
0.664
0.5582
0.8474
0.3736
0.3618
0.0159
0.0091
0
0.0023
0.7075
0.7982
8.5253
0.8257
0.8965
9.4687
0.7493
0.698
2.7864
27.2241
10.4365
0.1043
23.8834
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-dmnvnmc
0.6485
0.5582
0.573
0.2579
0.7484
0.8913
0.904
0.8465
0.7842
0.5496
0.9156
0.1043
0.552
0.8651
13.0231
0.9078
0.9539
16.8228
0.8835
0.573
0.9098
0.6897
0.8014
0.9589
0.5567
0.7213
0.8726
0.7866
0.7709
0.9156
0.9032
0.8737
0.8053
0.904
0.5582
0.8474
0.7755
0.54
0.0632
0.3818
0
0.0614
0.783
0.8292
11.3447
0.8408
0.9044
11.1753
0.7541
0.698
2.7864
27.2241
10.4365
0.1043
23.8834
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-rmxmzvo
0.6503
0.5924
0.566
0.2633
0.7498
0.8871
0.9
0.8483
0.7826
0.5508
0.9159
0.0972
0.5462
0.8657
13.0801
0.9095
0.9544
17.5299
0.8842
0.566
0.9051
0.6782
0.8083
0.9553
0.5637
0.7244
0.8624
0.7816
0.7824
0.9159
0.9014
0.8734
0.8009
0.9
0.5924
0.9197
0.7753
0.5424
0.0665
0.3834
0
0.0762
0.7904
0.8275
11.2457
0.8417
0.9056
11.2391
0.7578
0.6897
2.8712
25.7118
9.7299
0.0972
22.5888
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-rmxmzvo
0.5603
0.5924
0.2329
0.1404
0.5597
0.8512
0.82
0.8419
0.7472
0.399
0.8819
0.0972
0.4471
0.8527
11.3692
0.9064
0.9535
16.2389
0.8837
0.2329
0.8685
0.6207
0.8097
0.9401
0.3777
0.671
0.7371
0.7456
0.8228
0.8819
0.8842
0.862
0.7451
0.82
0.5924
0.9197
0.4484
0.3722
0.0202
0.0054
0
0
0.6765
0.7978
8.6419
0.8267
0.8972
9.5614
0.751
0.6897
2.8712
25.7118
9.7299
0.0972
22.5888
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
spow12/ChatWaifu_32B_reasoning
0.6604
0.4739
0.5963
0.3142
0.782
0.8988
0.942
0.8538
0.7914
0.5801
0.9216
0.1107
0.5995
0.8693
13.5403
0.9131
0.9569
17.9617
0.8872
0.5963
0.8953
0.6868
0.8028
0.966
0.6055
0.7518
0.9006
0.81
0.7566
0.9216
0.8946
0.8734
0.8352
0.942
0.4739
0.8213
0.8122
0.5354
0.0399
0.4445
0.115
0.1156
0.8559
0.8435
12.6588
0.8488
0.9108
12.0015
0.7662
0.7006
3.3039
28.8442
11.0684
0.1107
25.026
Qwen2ForCausalLM
bfloat16
cc-by-nc-4.0
32.76
2
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
spow12/ChatWaifu_32B_reasoning
0.578
0.4739
0.2642
0.1522
0.7494
0.8781
0.858
0.8445
0.7369
0.3838
0.9059
0.1107
0.336
0.853
13.0923
0.9054
0.9545
16.8319
0.8853
0.2642
0.9023
0.6149
0.7792
0.9383
0.3953
0.7201
0.8316
0.7973
0.6617
0.9059
0.8848
0.8709
0.7937
0.858
0.4739
0.8213
0.7786
0.4203
0
0.0103
0.0088
0.0025
0.7392
0.814
10.0586
0.8397
0.8955
10.3739
0.7477
0.7006
3.3039
28.8442
11.0684
0.1107
25.026
Qwen2ForCausalLM
bfloat16
cc-by-nc-4.0
32.76
2
main
0
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-fuvlumz
0.5537
0.6084
0.2075
0.142
0.5322
0.8
0.828
0.8417
0.7468
0.4048
0.884
0.0956
0.4435
0.8518
11.5022
0.9051
0.9537
16.4214
0.884
0.2075
0.7179
0.6178
0.8097
0.9366
0.3907
0.6625
0.7379
0.7456
0.823
0.884
0.8823
0.86
0.7455
0.828
0.6084
0.9217
0.4018
0.3804
0.0188
0.0056
0
0.0003
0.685
0.7972
8.729
0.826
0.8974
9.5884
0.7518
0.6881
2.7957
25.1364
9.5613
0.0956
22.1617
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-fuvlumz
0.6505
0.6084
0.5616
0.2607
0.7496
0.8888
0.9
0.8479
0.7826
0.5436
0.9167
0.0956
0.5475
0.8659
12.8952
0.9092
0.9546
17.4067
0.8844
0.5616
0.9073
0.6782
0.8097
0.9562
0.5463
0.7232
0.8628
0.7816
0.781
0.9167
0.9013
0.8737
0.8029
0.9
0.6084
0.9217
0.776
0.5369
0.0592
0.3753
0
0.0805
0.7885
0.8271
11.1664
0.841
0.9054
11.2533
0.7572
0.6881
2.7957
25.1364
9.5613
0.0956
22.1617
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-skvbpno
0.6461
0.5382
0.5719
0.2552
0.75
0.8909
0.904
0.8464
0.7846
0.5456
0.9157
0.1045
0.55
0.8647
12.8989
0.9077
0.9538
16.6015
0.8834
0.5719
0.9093
0.6897
0.8028
0.958
0.5558
0.723
0.8718
0.7879
0.7707
0.9157
0.904
0.8739
0.8055
0.904
0.5382
0.8353
0.777
0.5309
0.0572
0.382
0
0.0579
0.7788
0.8293
11.3306
0.8409
0.9043
11.1646
0.7536
0.6977
2.7563
27.3017
10.451
0.1045
23.8652
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-skvbpno
0.5062
0.5382
0.2567
0.1449
0.5129
0.5904
0.652
0.8388
0.6463
0.4041
0.8798
0.1045
0.4364
0.8456
11.288
0.8961
0.9531
15.8321
0.8836
0.2567
0.0974
0.5948
0.7875
0.9366
0.4085
0.6608
0.5624
0.685
0.6016
0.8798
0.8889
0.8614
0.7372
0.652
0.5382
0.8353
0.3649
0.3673
0.0162
0.0101
0
0.0034
0.6951
0.7982
8.5815
0.8257
0.8964
9.5817
0.7496
0.6977
2.7563
27.3017
10.451
0.1045
23.8652
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-batsnqf
0.5297
0.5703
0.1931
0.1471
0.3858
0.7778
0.82
0.8417
0.7418
0.3675
0.8828
0.0994
0.4203
0.8509
11.3528
0.9038
0.954
16.0153
0.8851
0.1931
0.6538
0.6207
0.7986
0.9366
0.3737
0.5335
0.735
0.7393
0.8155
0.8828
0.8843
0.8628
0.743
0.82
0.5703
0.8675
0.2381
0.3084
0.0182
0.0047
0
0.0006
0.7118
0.7963
8.7358
0.8252
0.8979
9.6534
0.7526
0.6905
2.8207
26.5802
9.9516
0.0994
23.2808
Qwen2ForCausalLM
bfloat16
14.766
0
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
mergekit-community/mergekit-model_stock-batsnqf
0.6467
0.5703
0.5659
0.259
0.7492
0.8887
0.894
0.8467
0.7826
0.5407
0.9169
0.0994
0.5489
0.8654
13.1463
0.9084
0.9543
17.3906
0.8839
0.5659
0.9068
0.6782
0.8111
0.9571
0.5462
0.7215
0.8657
0.7866
0.7715
0.9169
0.8981
0.8679
0.8022
0.894
0.5703
0.8675
0.7768
0.5269
0.0534
0.3757
0
0.0713
0.7944
0.8271
11.4294
0.8409
0.9049
11.299
0.7534
0.6905
2.8207
26.5802
9.9516
0.0994
23.2808
Qwen2ForCausalLM
bfloat16
14.766
0
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
tokyo-electron-device-ai/llama3-tedllm-8b-v0-annealing
0.5309
0.1707
0.4323
0.2432
0.514
0.7756
0.714
0.8211
0.6193
0.5883
0.8949
0.0661
0.717
0.8561
11.792
0.8947
0.9491
15.2136
0.8753
0.4323
0.8632
0.5259
0.625
0.8615
0.5019
0.4657
0.59
0.7525
0.6032
0.8949
0.8122
0.7801
0.6021
0.714
0.1707
0.4237
0.5623
0.546
0.0046
0.2841
0.0442
0.0633
0.8199
0.8012
9.9405
0.7921
0.8965
10.8849
0.7223
0.6459
1.9877
16.1163
6.6118
0.0661
14.3753
LlamaForCausalLM
float32
llama3
8.135
0
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
tokyo-electron-device-ai/llama3-tedllm-8b-v0-annealing
0.3592
0.1707
0.0135
0.0934
0.3224
0.5434
0.384
0.7853
0.4444
0.3181
0.8099
0.0661
0.4924
0.8385
10.2666
0.8739
0.9406
12.7295
0.8597
0.0135
0.7861
0.4167
0.5639
0.4593
0.1272
0.2991
0.2769
0.4842
0.4802
0.8099
0.5941
0.6235
0.3849
0.384
0.1707
0.4237
0.3457
0.3349
0.0058
0.0003
0
0
0.4611
0.7422
6.8119
0.734
0.8479
8.2216
0.6737
0.6459
1.9877
16.1163
6.6118
0.0661
14.3753
LlamaForCausalLM
float32
llama3
8.135
0
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
allenai/Llama-3.1-Tulu-3-70B
0.684
0.6064
0.5798
0.3184
0.7371
0.8976
0.928
0.8605
0.7892
0.6605
0.9258
0.2203
0.7401
0.8733
13.7339
0.9145
0.9583
19.1642
0.8893
0.5798
0.9141
0.6523
0.8861
0.9517
0.6366
0.7128
0.8657
0.8018
0.74
0.9258
0.9015
0.8777
0.8269
0.928
0.6064
0.992
0.7615
0.6047
0.0999
0.4635
0.0885
0.0802
0.8597
0.8566
15.381
0.8586
0.9188
13.0986
0.7796
0.7674
5.7644
47.4995
21.9881
0.2203
39.7019
LlamaForCausalLM
bfloat16
llama3.1
70.554
55
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
allenai/Llama-3.1-Tulu-3-70B
0.612
0.6064
0.195
0.1838
0.7022
0.8587
0.872
0.8532
0.7464
0.5832
0.9111
0.2203
0.6851
0.864
13.5015
0.9103
0.9575
17.9217
0.8878
0.195
0.8935
0.6954
0.7472
0.9303
0.4752
0.6758
0.7954
0.7866
0.7071
0.9111
0.8983
0.8721
0.7523
0.872
0.6064
0.992
0.7287
0.5894
0
0.056
0.0531
0.0131
0.7966
0.8393
12.2999
0.8509
0.9078
11.6042
0.7639
0.7674
5.7644
47.4995
21.9881
0.2203
39.7019
LlamaForCausalLM
bfloat16
llama3.1
70.554
55
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Sakalti/deneb-v1-7b
0.5403
0.006
0.4826
0.2074
0.6395
0.8448
0.778
0.8314
0.7458
0.3989
0.9096
0.0992
0.382
0.8405
11.3141
0.8953
0.9492
15.6681
0.8772
0.4826
0.8662
0.6552
0.7222
0.9321
0.4346
0.6024
0.8205
0.7588
0.7723
0.9096
0.8745
0.8532
0.7362
0.778
0.006
0.006
0.6765
0.38
0.0131
0.3028
0
0.0918
0.6293
0.7844
7.8711
0.8126
0.8922
9.4271
0.7405
0.6958
2.1445
32.5547
9.9368
0.0992
26.4666
Qwen2ForCausalLM
float16
7.616
1
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Sakalti/deneb-v1-7b
0.4349
0.006
0.2145
0.1032
0.5876
0.7522
0.41
0.8134
0.7106
0.2119
0.875
0.0992
0.2823
0.8246
8.4727
0.8801
0.9431
11.9981
0.8682
0.2145
0.859
0.5747
0.6917
0.7784
0.2037
0.5383
0.7165
0.7481
0.8222
0.875
0.864
0.8569
0.6194
0.41
0.006
0.006
0.6369
0.1499
0
0.0088
0
0.0011
0.5059
0.7617
6.2972
0.785
0.8829
7.9987
0.7203
0.6958
2.1445
32.5547
9.9368
0.0992
26.4666
Qwen2ForCausalLM
float16
7.616
1
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-qwen-14B
0.508
0.508
0.2962
0.1249
0.4981
0.6296
0.8
0.77
0.6811
0.3662
0.8481
0.066
0.4011
0.8182
11.109
0.8486
0.8996
14.862
0.7363
0.2962
0.2147
0.6236
0.7847
0.9374
0.304
0.6089
0.6142
0.6989
0.6842
0.8481
0.888
0.863
0.7368
0.8
0.508
0.7631
0.3874
0.3934
0.0073
0.0099
0
0.0011
0.606
0.7721
8.7913
0.787
0.8796
9.225
0.7082
0.6634
2.8233
18.5883
6.6039
0.066
16.0344
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
75
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-qwen-14B
0.6449
0.508
0.5971
0.2684
0.792
0.8899
0.886
0.8277
0.7912
0.5587
0.9085
0.066
0.56
0.8601
12.6075
0.9022
0.9552
17.5366
0.8873
0.5971
0.9096
0.6437
0.8278
0.9598
0.5611
0.7574
0.8837
0.7816
0.8192
0.9085
0.908
0.8806
0.8003
0.886
0.508
0.7631
0.8265
0.5548
0.0381
0.3842
0.115
0.0782
0.7265
0.7748
10.965
0.7705
0.9007
10.6218
0.7509
0.6634
2.8233
18.5883
6.6039
0.066
16.0344
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
75
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-qwen-32B
0.5636
0.5442
0.2056
0.1285
0.8077
0.8843
0.844
0.6299
0.7561
0.4102
0.8988
0.09
0.4499
0.7362
11.6904
0.6872
0.8363
15.6136
0.546
0.2056
0.9141
0.6264
0.8208
0.9437
0.3745
0.7811
0.8406
0.8018
0.6909
0.8988
0.9015
0.8772
0.7951
0.844
0.5442
0.8213
0.8342
0.4061
0.005
0.0108
0
0.0095
0.617
0.7182
9.233
0.6886
0.8386
9.5302
0.5978
0.6807
3.0211
25.1315
8.9886
0.09
21.8319
Qwen2ForCausalLM
bfloat16
apache-2.0
32.76
104
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-qwen-32B
0.6573
0.5442
0.5712
0.2659
0.8335
0.9046
0.95
0.7791
0.8065
0.5744
0.9109
0.09
0.5945
0.8359
12.9027
0.8177
0.9537
17.8795
0.8819
0.5712
0.9113
0.7328
0.8583
0.9687
0.5728
0.8051
0.8932
0.8043
0.7441
0.9109
0.9003
0.8801
0.8337
0.95
0.5442
0.8213
0.8619
0.5558
0.0512
0.3503
0.115
0.1082
0.705
0.764
10.9008
0.6718
0.8979
10.9055
0.7452
0.6807
3.0211
25.1315
8.9886
0.09
21.8319
Qwen2ForCausalLM
bfloat16
apache-2.0
32.76
104
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-llama-3B
0.2448
0.1325
0.0021
0.0457
0.1204
0.4181
0.008
0.7406
0.3806
0.0942
0.6816
0.0686
0.1199
0.8001
7.3976
0.8313
0.9229
10.9794
0.8282
0.0021
0.5341
0.3333
0.5097
0.454
0.0054
0.0144
0.1656
0.6711
0.2233
0.6816
0.0007
-0.0318
0.2662
0.008
0.1325
0.3976
0.2265
0.1573
0
0
0
0
0.2283
0.7107
4.7835
0.6868
0.8464
5.8504
0.6159
0.6737
2.3271
24.74
6.8676
0.0686
20.6089
LlamaForCausalLM
bfloat16
llama3.2
3.607
92
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-llama-3B
0.4605
0.1325
0.4069
0.1788
0.4956
0.6505
0.616
0.7897
0.5489
0.3185
0.8599
0.0686
0.2994
0.8313
9.0253
0.868
0.9395
13.2868
0.8569
0.4069
0.7395
0.5057
0.6403
0.7775
0.351
0.4256
0.6504
0.6427
0.3057
0.8599
0.7621
0.7054
0.4347
0.616
0.1325
0.3976
0.5656
0.305
0.0221
0.231
0.0619
0.0577
0.5209
0.7529
7.7698
0.7322
0.8818
8.7608
0.7017
0.6737
2.3271
24.74
6.8676
0.0686
20.6089
LlamaForCausalLM
bfloat16
llama3.2
3.607
92
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-llama-8B
0.218
0.002
0.0097
0.048
0.2552
0.2673
0.044
0.4877
0.3878
0.1167
0.7184
0.0615
0.1372
0.6592
10.0076
0.5843
0.7557
12.6427
0.3845
0.0097
0
0.5115
0.5903
0.4138
0.1537
0.039
0.4519
0
0.3854
0.7184
0.8271
0.8018
0.3882
0.044
0.002
0.002
0.4714
0.0592
0
0.0023
0.002
0
0.2358
0.6435
7.6176
0.576
0.7653
7.6352
0.4058
0.6611
2.3741
18.3344
6.152
0.0615
15.4374
LlamaForCausalLM
bfloat16
llama3.1
8.03
40
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-llama-8B
0.5274
0.002
0.4987
0.2279
0.6421
0.7621
0.73
0.8341
0.7167
0.4234
0.9024
0.0615
0.4358
0.8535
12.1316
0.9005
0.9497
15.3182
0.8759
0.4987
0.7986
0.546
0.7819
0.8811
0.4128
0.5688
0.7658
0.7367
0.7532
0.9024
0.8661
0.8389
0.6065
0.73
0.002
0.002
0.7154
0.4216
0.017
0.2669
0.1062
0.0676
0.6816
0.8101
10.0612
0.8167
0.8982
10.335
0.7432
0.6611
2.3741
18.3344
6.152
0.0615
15.4374
LlamaForCausalLM
bfloat16
llama3.1
8.03
40
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-llama-70B
0.6339
0.2189
0.5944
0.2935
0.8255
0.8827
0.938
0.7874
0.8303
0.6196
0.9067
0.0762
0.716
0.871
13.591
0.9144
0.9582
18.0392
0.8912
0.5944
0.9051
0.7443
0.9236
0.941
0.5513
0.7888
0.8665
0.7904
0.8267
0.9067
0.905
0.8754
0.8022
0.938
0.2189
0.3695
0.8622
0.5914
0.0808
0.3397
0.115
0.085
0.8471
0.7698
12.3823
0.6971
0.8557
11.3216
0.6471
0.6705
2.5632
22.387
7.6298
0.0762
19.0424
LlamaForCausalLM
bfloat16
llama3.1
70.554
71
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
deepcogito/cogito-v1-preview-llama-70B
0.5002
0.2189
0.325
0.1601
0.7603
0.8423
0.296
0.851
0.749
0.3425
0.8806
0.0762
0.1884
0.8562
12.2053
0.9104
0.9549
16.3648
0.8864
0.325
0.8888
0.6868
0.7639
0.9062
0.3907
0.7173
0.712
0.7689
0.8133
0.8806
0.8843
0.8465
0.7321
0.296
0.2189
0.3695
0.8033
0.4483
0.0025
0.0004
0
0.0081
0.7893
0.8138
10.3604
0.8421
0.9065
10.9555
0.7649
0.6705
2.5632
22.387
7.6298
0.0762
19.0424
LlamaForCausalLM
bfloat16
llama3.1
70.554
71
main
0
False
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
agentica-org/DeepCoder-14B-Preview
0.2154
0.0964
0.211
0.058
0.0038
0.3239
0
0.7394
0.1861
0.1511
0.4941
0.1058
0.1139
0.7851
7.2287
0.7965
0.906
10.0484
0.8017
0.211
0.012
0.3908
0
0.4433
0.1502
0
0.2638
0
0.2758
0.4941
0.7966
0.8124
0.5165
0
0.0964
0.1928
0.0077
0.1893
0.005
0.0139
0.0177
0
0.2535
0.7299
6.2307
0.6966
0.8559
6.7413
0.663
0.697
2.363
29.7711
10.5962
0.1058
21.78
Qwen2ForCausalLM
float32
mit
14.77
607
main
0
True
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
agentica-org/DeepCoder-14B-Preview
0.5697
0.0964
0.574
0.2186
0.6736
0.8371
0.81
0.8297
0.7676
0.4569
0.8973
0.1058
0.4381
0.8458
10.3416
0.8932
0.9483
15.1478
0.8759
0.574
0.8419
0.6264
0.7847
0.9339
0.5114
0.6357
0.8209
0.7696
0.8362
0.8973
0.8732
0.8508
0.7356
0.81
0.0964
0.1928
0.7115
0.4212
0.0297
0.3021
0.0973
0.0353
0.6287
0.7975
8.83
0.8063
0.8959
9.8074
0.7434
0.697
2.363
29.7711
10.5962
0.1058
21.78
Qwen2ForCausalLM
float32
mit
14.77
607
main
4
True
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
agentica-org/DeepCoder-1.5B-Preview
0.0521
0
0
0.015
0.065
0.0002
0
0.3633
0.0006
0.034
0.0921
0.0029
0.0274
0.6501
0.2639
0.454
0.7963
2.3522
0.3617
0
0.0005
0
0.0028
0
0.0234
0.011
0
0
0
0.0921
0
0
0
0
0
0
0.1189
0.0511
0
0
0
0
0.0751
0.5785
0.3889
0.3372
0.7656
1.4679
0.3002
0.5399
0.1099
2.4376
0.2942
0.0029
1.9999
Qwen2ForCausalLM
float32
mit
1.777
62
main
0
True
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
agentica-org/DeepCoder-1.5B-Preview
0.263
0
0.2455
0.0174
0.2744
0.3392
0.498
0.48
0.3685
0.159
0.5084
0.0029
0.0779
0.6411
1.886
0.4167
0.8766
8.0923
0.6541
0.2455
0.4679
0.3448
0.4986
0.2887
0.2863
0.2587
0.3122
0.5196
0.1674
0.5084
0.0848
0.0849
0.2611
0.498
0
0
0.2902
0.1128
0.0025
0.0128
0.0088
0.0025
0.0602
0.5774
2.5082
0.3638
0.8138
5.3901
0.4855
0.5399
0.1099
2.4376
0.2942
0.0029
1.9999
Qwen2ForCausalLM
float32
mit
1.777
62
main
4
True
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
agentica-org/DeepScaleR-1.5B-Preview
0.2634
0
0.2463
0.0243
0.2729
0.3456
0.488
0.4773
0.3817
0.1567
0.502
0.0025
0.0741
0.6366
1.9005
0.4148
0.8736
8.2729
0.6468
0.2463
0.4677
0.3477
0.4903
0.3029
0.2835
0.2553
0.3159
0.5884
0.166
0.502
0.2678
0.2641
0.2662
0.488
0
0
0.2905
0.1124
0
0.0116
0.0088
0.0027
0.0982
0.5759
2.4808
0.3621
0.8133
5.3965
0.4853
0.539
0.1239
1.913
0.257
0.0025
1.5706
Qwen2ForCausalLM
float32
mit
1.777
548
main
4
True
v1.4.1
v0.6.3.post1
🟦 : RL-tuned (Preference optimization)
agentica-org/DeepScaleR-1.5B-Preview
0.0625
0
0
0.0184
0.0728
0.0231
0
0.426
0.0106
0.0369
0.0969
0.0025
0.0304
0.6705
0.3206
0.4892
0.8153
4.3309
0.4694
0
0.0629
0
0.0528
0.0063
0.0249
0.0062
0
0
0
0.0969
-0.0238
-0.0248
0.0003
0
0
0
0.1393
0.0553
0
0
0
0
0.0921
0.6017
0.4542
0.3723
0.7757
2.6801
0.373
0.539
0.1239
1.913
0.257
0.0025
1.5706
Qwen2ForCausalLM
float32
mit
1.777
548
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Gen-Verse/ReasonFlux-F1-7B
0.1356
0.002
0.0006
0.0286
0.028
0.2893
0.004
0.574
0.0645
0.0717
0.4071
0.022
0.0397
0.6853
2.2929
0.5239
0.8875
7.0131
0.7157
0.0006
0.4299
0.0345
0.0611
0.1877
0.1139
0.0071
0.0776
0.0044
0.1447
0.4071
0.0072
0.0147
0.2503
0.004
0.002
0.002
0.049
0.0614
0
0
0
0
0.1431
0.631
2.5403
0.4779
0.8345
4.6942
0.5785
0.5816
0.5013
11.0933
2.1971
0.022
8.5995
Qwen2ForCausalLM
bfloat16
other
7.616
1
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Gen-Verse/ReasonFlux-F1-7B
0.3846
0.002
0.2984
0.0527
0.4479
0.5659
0.712
0.6457
0.5558
0.1932
0.7354
0.022
0.109
0.7401
4.1208
0.614
0.9148
10.757
0.7928
0.2984
0.6743
0.4138
0.5944
0.5907
0.3305
0.4084
0.5
0.6919
0.5789
0.7354
0.6845
0.6184
0.4325
0.712
0.002
0.002
0.4874
0.1403
0.0115
0.0677
0.0354
0.0097
0.1391
0.6668
4.0435
0.5243
0.8635
7.2584
0.6518
0.5816
0.5013
11.0933
2.1971
0.022
8.5995
Qwen2ForCausalLM
bfloat16
other
7.616
1
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Gen-Verse/ReasonFlux-F1
0.5937
0.0422
0.5801
0.2561
0.7492
0.879
0.918
0.8413
0.7733
0.5043
0.881
0.1064
0.5178
0.856
12.0468
0.903
0.9523
16.515
0.8817
0.5801
0.8953
0.6695
0.7667
0.9437
0.4982
0.7244
0.857
0.791
0.7822
0.881
0.8987
0.8747
0.7979
0.918
0.0422
0.1426
0.774
0.4969
0.0136
0.3404
0.1416
0.0302
0.7547
0.8184
10.3952
0.8228
0.9026
10.607
0.7577
0.6975
2.2342
30.9631
10.6453
0.1064
22.5886
Qwen2ForCausalLM
bfloat16
other
32.764
8
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Gen-Verse/ReasonFlux-F1
0.29
0.0422
0.191
0.1052
0
0.8317
0
0.774
0.4441
0.1559
0.5397
0.1064
0.1456
0.8093
9.7847
0.8392
0.9295
13.8286
0.819
0.191
0.8657
0.592
0.0361
0.9008
0.1461
0
0.6919
0.3718
0.5289
0.5397
0.8965
0.8753
0.7285
0
0.0422
0.1426
0
0.1762
0.0222
0.0083
0.0431
0.0052
0.4471
0.7613
7.89
0.7516
0.879
8.4693
0.6862
0.6975
2.2342
30.9631
10.6453
0.1064
22.5886
Qwen2ForCausalLM
bfloat16
other
32.764
8
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Gen-Verse/ReasonFlux-F1-14B
0.2134
0.0442
0.2168
0.0633
0
0.2802
0
0.7504
0.2233
0.1535
0.5054
0.1107
0.1202
0.7878
7.0266
0.805
0.9228
11.4509
0.8127
0.2168
0.0018
0.477
0
0.3003
0.1583
0
0.3389
0
0.3004
0.5054
0.8432
0.8268
0.5386
0
0.0442
0.0924
0.0001
0.182
0.0043
0.015
0.0246
0
0.2725
0.7359
6.2567
0.7064
0.8718
7.7009
0.6775
0.703
2.2964
32.9093
11.0674
0.1107
23.4231
Qwen2ForCausalLM
bfloat16
other
14.77
2
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Gen-Verse/ReasonFlux-F1-14B
0.5617
0.0442
0.5657
0.2154
0.6648
0.8354
0.8
0.8265
0.7677
0.4482
0.9
0.1107
0.4312
0.8436
10.5291
0.8893
0.9471
14.9971
0.8728
0.5657
0.8525
0.6178
0.7944
0.9339
0.4885
0.6289
0.8279
0.7746
0.8238
0.9
0.8593
0.8361
0.7198
0.8
0.0442
0.0924
0.7007
0.4248
0.0263
0.2861
0.0973
0.0366
0.6308
0.7977
8.9574
0.8029
0.8953
9.7113
0.741
0.703
2.2964
32.9093
11.0674
0.1107
23.4231
Qwen2ForCausalLM
bfloat16
other
14.77
2
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
netease-youdao/Confucius-o1-14B
0.6272
0.4257
0.5561
0.2465
0.7373
0.88
0.876
0.8376
0.7824
0.5254
0.9168
0.1151
0.5073
0.8517
11.4517
0.8973
0.9519
16.4666
0.8801
0.5561
0.8963
0.6667
0.8333
0.9553
0.5791
0.7091
0.871
0.7689
0.7721
0.9168
0.8951
0.8677
0.7883
0.876
0.4257
0.6667
0.7654
0.4899
0.0695
0.3194
0.0619
0.0837
0.6979
0.8113
9.9454
0.8244
0.8972
10.5627
0.7485
0.7043
2.6929
31.556
11.5101
0.1151
26.6866
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
39
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
netease-youdao/Confucius-o1-14B
0.5027
0.4257
0.1759
0.1119
0.6979
0.846
0.51
0.7566
0.7174
0.2844
0.8883
0.1151
0.3008
0.8159
8.4722
0.8569
0.8845
14.6671
0.7239
0.1759
0.9031
0.6063
0.8236
0.9214
0.2087
0.6823
0.8188
0.6313
0.7071
0.8883
0.8844
0.8646
0.7136
0.51
0.4257
0.6667
0.7134
0.3437
0.0019
0
0.0088
0.0004
0.5482
0.7657
7.2229
0.7724
0.863
8.8013
0.6731
0.7043
2.6929
31.556
11.5101
0.1151
26.6866
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
39
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
sometimesanotion/Lamarck-14B-v0.7-Fusion
0.6491
0.6325
0.5636
0.2567
0.7439
0.8868
0.898
0.8468
0.7723
0.5195
0.9107
0.1091
0.5347
0.8623
12.6579
0.9077
0.953
16.3768
0.8826
0.5636
0.9048
0.6437
0.8014
0.9544
0.5666
0.717
0.8636
0.7803
0.7727
0.9107
0.8995
0.8725
0.8012
0.898
0.6325
0.99
0.7707
0.4573
0.0628
0.3638
0.0177
0.0603
0.7792
0.828
11.1553
0.8381
0.9052
11.1905
0.7589
0.7012
2.9203
28.7088
10.911
0.1091
25.0263
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
8
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
sometimesanotion/Lamarck-14B-v0.7-Fusion
0.5529
0.6325
0.2214
0.1477
0.5172
0.8589
0.806
0.8394
0.7456
0.335
0.8686
0.1091
0.4098
0.8479
10.6678
0.9005
0.9536
15.8744
0.8846
0.2214
0.8975
0.6293
0.8097
0.9366
0.3949
0.6583
0.7716
0.7102
0.8072
0.8686
0.8914
0.8575
0.7425
0.806
0.6325
0.99
0.3762
0.2004
0.0186
0.0057
0
0.0016
0.7128
0.8
8.3644
0.8215
0.8966
9.4676
0.751
0.7012
2.9203
28.7088
10.911
0.1091
25.0263
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
8
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
rubenroy/Zurich-1.5B-GCv2-5m
0.3978
0
0.3934
0.0619
0.4635
0.6637
0.54
0.6599
0.5021
0.2433
0.8129
0.0354
0.2089
0.7701
7.6118
0.7705
0.8914
11.4563
0.7328
0.3934
0.7069
0.3851
0.5736
0.7909
0.2773
0.4098
0.675
0.6282
0.2488
0.8129
0.6884
0.6559
0.4932
0.54
0
0
0.5173
0.2435
0
0.1053
0.0088
0.0072
0.1882
0.6733
6.1185
0.5905
0.8247
7.0968
0.5458
0.6081
0.8315
10.9192
3.5307
0.0354
9.441
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
2
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
rubenroy/Zurich-1.5B-GCv2-5m
0.151
0
0
0.0214
0
0.3453
0.008
0.603
0.2234
0.0589
0.3653
0.0354
0.0479
0.5959
3.7554
0.5571
0.9158
7.814
0.7924
0
0.2916
0.0575
0.0333
0.5103
0.0989
0
0.2802
0.161
0.5849
0.3653
0.1342
0.1408
0.2342
0.008
0
0
0
0.0299
0
0
0
0
0.1067
0.5609
2.391
0.4837
0.8375
5.0725
0.5787
0.6081
0.8315
10.9192
3.5307
0.0354
9.441
Qwen2ForCausalLM
bfloat16
apache-2.0
1.544
2
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
rubenroy/Geneva-12B-GCv2-5m
0.1897
0.4558
0.0135
0.0231
0.0257
0.377
0
0.4335
0.335
0.0607
0.3015
0.0612
0.0542
0.4086
5.3513
0.5799
0.0205
0.4848
0.3487
0.0135
0.1005
0.319
0.2139
0.6309
0.0519
0.0443
0.5399
0.1143
0.4879
0.3015
0.1728
0.1628
0.3997
0
0.4558
0.9418
0.0071
0.0759
0.0033
0
0
0.005
0.1074
0.3219
3.4635
0.4648
0.0401
0.5549
0.3407
0.5061
1.7498
15.8627
6.1197
0.0612
13.8795
MistralForCausalLM
bfloat16
apache-2.0
12.248
12
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
rubenroy/Geneva-12B-GCv2-5m
0.5651
0.4558
0.5246
0.2278
0.5597
0.83
0.74
0.8245
0.6313
0.4698
0.8919
0.0612
0.5014
0.8527
12.1124
0.8943
0.948
14.8919
0.8722
0.5246
0.8805
0.3764
0.7361
0.9196
0.4806
0.5075
0.5703
0.7595
0.714
0.8919
0.8642
0.8384
0.6899
0.74
0.4558
0.9418
0.6119
0.4274
0.016
0.3142
0.0354
0.0784
0.6951
0.7987
9.7046
0.8038
0.8915
9.7459
0.7277
0.5061
1.7498
15.8627
6.1197
0.0612
13.8795
MistralForCausalLM
bfloat16
apache-2.0
12.248
12
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
rubenroy/Zurich-14B-GCv2-5m
0.4401
0.5783
0.0388
0.133
0.0621
0.8218
0.626
0.8097
0.5892
0.2624
0.8324
0.0875
0.2951
0.8348
9.0998
0.891
0.9452
10.3479
0.8586
0.0388
0.8665
0.5374
0.7597
0.9133
0.3179
0.124
0.6574
0.1338
0.8579
0.8324
0.8845
0.8479
0.6855
0.626
0.5783
0.9538
0.0002
0.1741
0.0035
0.0084
0
0
0.6529
0.7767
6.9515
0.7946
0.8846
7.0046
0.6946
0.6841
2.388
22.9382
8.7553
0.0875
20.2441
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
12
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
rubenroy/Zurich-14B-GCv2-5m
0.6368
0.5783
0.5299
0.2596
0.7241
0.8736
0.888
0.8422
0.7736
0.534
0.9139
0.0875
0.5308
0.8616
12.3457
0.9063
0.9538
17.5219
0.8832
0.5299
0.8895
0.6494
0.8014
0.9535
0.548
0.6899
0.8632
0.7734
0.7808
0.9139
0.8943
0.8537
0.7778
0.888
0.5783
0.9538
0.7584
0.5233
0.0443
0.3607
0.0796
0.0609
0.7526
0.8251
11.2246
0.8329
0.9013
10.9388
0.7462
0.6841
2.388
22.9382
8.7553
0.0875
20.2441
Qwen2ForCausalLM
bfloat16
apache-2.0
14.77
12
main
4
True
v1.4.1
v0.6.3.post1