Column schema (name, dtype, and observed min/max or number of distinct values):

| column | dtype | min / values | max |
|---|---|---|---|
| model_type | stringclasses | 5 values | |
| model | stringlengths | 12 | 62 |
| AVG | float64 | 0.03 | 0.7 |
| CG | float64 | 0 | 0.68 |
| EL | float64 | 0 | 0.77 |
| FA | float64 | 0 | 0.62 |
| HE | float64 | 0 | 0.83 |
| MC | float64 | 0 | 0.95 |
| MR | float64 | 0 | 0.95 |
| MT | float64 | 0.19 | 0.86 |
| NLI | float64 | 0 | 0.97 |
| QA | float64 | 0 | 0.77 |
| RC | float64 | 0 | 0.94 |
| SUM | float64 | 0 | 0.29 |
| aio_char_f1 | float64 | 0 | 0.9 |
| alt-e-to-j_bert_score_ja_f1 | float64 | 0 | 0.88 |
| alt-e-to-j_bleu_ja | float64 | 0 | 16 |
| alt-e-to-j_comet_wmt22 | float64 | 0.2 | 0.92 |
| alt-j-to-e_bert_score_en_f1 | float64 | 0 | 0.96 |
| alt-j-to-e_bleu_en | float64 | 0 | 20.1 |
| alt-j-to-e_comet_wmt22 | float64 | 0.17 | 0.89 |
| chabsa_set_f1 | float64 | 0 | 0.77 |
| commonsensemoralja_exact_match | float64 | 0 | 0.94 |
| jamp_exact_match | float64 | 0 | 1 |
| janli_exact_match | float64 | 0 | 1 |
| jcommonsenseqa_exact_match | float64 | 0 | 0.98 |
| jemhopqa_char_f1 | float64 | 0 | 0.71 |
| jmmlu_exact_match | float64 | 0 | 0.81 |
| jnli_exact_match | float64 | 0 | 0.94 |
| jsem_exact_match | float64 | 0 | 0.96 |
| jsick_exact_match | float64 | 0 | 0.93 |
| jsquad_char_f1 | float64 | 0 | 0.94 |
| jsts_pearson | float64 | -0.35 | 0.94 |
| jsts_spearman | float64 | -0.6 | 0.91 |
| kuci_exact_match | float64 | 0 | 0.93 |
| mawps_exact_match | float64 | 0 | 0.95 |
| mbpp_code_exec | float64 | 0 | 0.68 |
| mbpp_pylint_check | float64 | 0 | 0.99 |
| mmlu_en_exact_match | float64 | 0 | 0.86 |
| niilc_char_f1 | float64 | 0 | 0.7 |
| wiki_coreference_set_f1 | float64 | 0 | 0.4 |
| wiki_dependency_set_f1 | float64 | 0 | 0.88 |
| wiki_ner_set_f1 | float64 | 0 | 0.33 |
| wiki_pas_set_f1 | float64 | 0 | 0.57 |
| wiki_reading_char_f1 | float64 | 0 | 0.94 |
| wikicorpus-e-to-j_bert_score_ja_f1 | float64 | 0 | 0.88 |
| wikicorpus-e-to-j_bleu_ja | float64 | 0 | 24 |
| wikicorpus-e-to-j_comet_wmt22 | float64 | 0.18 | 0.87 |
| wikicorpus-j-to-e_bert_score_en_f1 | float64 | 0 | 0.93 |
| wikicorpus-j-to-e_bleu_en | float64 | 0 | 15.9 |
| wikicorpus-j-to-e_comet_wmt22 | float64 | 0.17 | 0.79 |
| xlsum_ja_bert_score_ja_f1 | float64 | 0 | 0.79 |
| xlsum_ja_bleu_ja | float64 | 0 | 10.2 |
| xlsum_ja_rouge1 | float64 | 0 | 52.8 |
| xlsum_ja_rouge2 | float64 | 0 | 29.2 |
| xlsum_ja_rouge2_scaling | float64 | 0 | 0.29 |
| xlsum_ja_rougeLsum | float64 | 0 | 44.9 |
| architecture | stringclasses | 12 values | |
| precision | stringclasses | 3 values | |
| license | stringclasses | 14 values | |
| params | float64 | 0 | 70.6 |
| likes | int64 | 0 | 6.19k |
| revision | stringclasses | 1 value | |
| num_few_shot | int64 | 0 | 4 |
| add_special_tokens | stringclasses | 2 values | |
| llm_jp_eval_version | stringclasses | 1 value | |
| vllm_version | stringclasses | 1 value | |

Each record below gives its values in the column order above, one record per model and num_few_shot setting.
🔶 : fine-tuned,rubenroy/Zurich-7B-GCv2-5m,0.5242,0,0.4068,0.2089,0.6576,0.8253,0.764,0.8251,0.6945,0.436,0.8785,0.0696,0.3794,0.8484,11.6003,0.8925,0.9488,16.0949,0.8728,0.4068,0.8314,0.5489,0.7333,0.9088,0.5293,0.623,0.7925,0.6881,0.7098,0.8785,0.8648,0.8362,0.7358,0.764,0,0,0.6921,0.3994,0.0295,0.3073,0,0.0589,0.6486,0.7958,8.7196,0.7991,0.8953,9.8567,0.7359,0.6684,2.2049,17.8987,6.971,0.0696,15.8858,Qwen2ForCausalLM,float16,apache-2.0,0,9,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,rubenroy/Zurich-7B-GCv2-5m,0.2831,0,0.1413,0.0999,0.3678,0.6762,0.002,0.8056,0.3296,0.1399,0.4817,0.0696,0.0817,0.8188,7.1819,0.8719,0.947,14.4011,0.872,0.1413,0.6142,0.4224,0.075,0.7882,0.2564,0.1937,0.3176,0.4886,0.3442,0.4817,0.801,0.7985,0.6263,0.002,0,0,0.5419,0.0814,0.0074,0.005,0,0,0.4869,0.7458,6.3715,0.7624,0.8849,8.2792,0.7162,0.6684,2.2049,17.8987,6.971,0.0696,15.8858,Qwen2ForCausalLM,float16,apache-2.0,0,9,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Azure99/Blossom-V6-14B,0.4999,0,0.3378,0.149,0.6661,0.8382,0.688,0.8313,0.5878,0.4186,0.8689,0.1134,0.4484,0.8451,9.8241,0.8969,0.9498,14.2158,0.8775,0.3378,0.8833,0.5287,0.7236,0.9249,0.3599,0.6332,0.4926,0.767,0.4268,0.8689,0.8426,0.827,0.7063,0.688,0,0,0.6991,0.4475,0.0044,0.0127,0.0354,0,0.6923,0.7863,7.7664,0.8114,0.8936,8.9187,0.7393,0.7056,2.4663,29.6552,11.3425,0.1134,25.5974,Qwen2ForCausalLM,bfloat16,apache-2.0,14.77,4,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Azure99/Blossom-V6-14B,0.578,0,0.562,0.2773,0.702,0.8774,0.868,0.8462,0.6784,0.5217,0.9117,0.1134,0.5325,0.8632,12.7414,0.9066,0.9535,16.3503,0.8843,0.562,0.8965,0.5833,0.7375,0.9562,0.534,0.6687,0.6775,0.7771,0.6164,0.9117,0.8949,0.8694,0.7795,0.868,0,0,0.7353,0.4986,0.0557,0.3852,0.0973,0.0757,0.7724,0.8187,10.0543,0.8374,0.9029,10.3033,0.7567,0.7056,2.4663,29.6552,11.3425,0.1134,25.5974,Qwen2ForCausalLM,bfloat16,apache-2.0,14.77,4,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Azure99/Blossom-V6-7B,0.3698,0,0.1633,0.0802,0.5652,0.6857,0.246,0.7997,0.4615,0.2518,0.705,0.11,0.2403,0.8221,7.7095,0.8759,0.9384,12.7178,0.8479,0.1633,0.774,0.4885,0.6375,0.7096,0.262,0.5284,0.5333,0.1755,0.4729,0.705,0.8188,0.8007,0.5735,0.246,0,0,0.602,0.2529,0.0085,0.0069,0.01,0,0.3755,0.762,6.1816,0.782,0.8744,7.5075,0.6929,0.7039,2.6555,29.961,11.0063,0.11,25.595,Qwen2ForCausalLM,bfloat16,apache-2.0,7.616,3,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Azure99/Blossom-V6-7B,0.5344,0,0.4591,0.2266,0.6443,0.8341,0.766,0.8315,0.6804,0.4222,0.9043,0.11,0.3892,0.8498,11.1366,0.8955,0.9491,14.9734,0.877,0.4591,0.8547,0.546,0.6847,0.9249,0.483,0.6117,0.7638,0.7355,0.672,0.9043,0.8779,0.8407,0.7227,0.766,0,0,0.677,0.3942,0.0022,0.3393,0.0619,0.051,0.6784,0.7985,8.2228,0.8128,0.8947,9.138,0.7408,0.7039,2.6555,29.961,11.0063,0.11,25.595,Qwen2ForCausalLM,bfloat16,apache-2.0,7.616,3,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Azure99/Blossom-V6-32B,0.6022,0,0.5804,0.3008,0.7427,0.8965,0.924,0.8503,0.7457,0.5515,0.9214,0.1104,0.5618,0.8662,12.969,0.9099,0.9558,17.3941,0.8875,0.5804,0.9111,0.6695,0.7667,0.9651,0.5618,0.7207,0.8287,0.8062,0.6574,0.9214,0.906,0.8862,0.8134,0.924,0,0,0.7646,0.5309,0.0133,0.4291,0.1593,0.0906,0.8118,0.8253,10.3734,0.8429,0.9046,10.7222,0.7609,0.7025,2.5017,30.148,11.0511,0.1104,25.8464,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,0,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Azure99/Blossom-V6-32B,0.4394,0,0.1792,0.1541,0.6863,0.8568,0.4,0.8342,0.5273,0.2545,0.8301,0.1104,0.1629,0.8501,11.0994,0.9022,0.95,15.0313,0.8744,0.1792,0.879,0.5891,0.7014,0.9383,0.3272,0.684,0.6191,0.1465,0.5803,0.8301,0.8978,0.8669,0.7531,0.4,0,0,0.6886,0.2734,0.0033,0.0004,0.0265,0,0.7404,0.7918,8.467,0.8194,0.8948,8.6601,0.7407,0.7025,2.5017,30.148,11.0511,0.1104,25.8464,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,0,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,prithivMLmods/Messier-Opus-14B-Elite7,0.6507,0.6345,0.5715,0.2696,0.7362,0.8828,0.9,0.8464,0.7703,0.5348,0.905,0.1061,0.5207,0.8619,12.4479,0.9071,0.9536,16.6927,0.8836,0.5715,0.9023,0.6207,0.8028,0.9508,0.5877,0.7063,0.8591,0.7759,0.7932,0.905,0.8867,0.8596,0.7954,0.9,0.6345,0.9819,0.7661,0.4959,0.0799,0.3778,0.0442,0.0766,0.7694,0.8257,10.6493,0.8374,0.904,10.7965,0.7573,0.7014,2.8979,28.1433,10.613,0.1061,24.5471,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,2,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,prithivMLmods/Messier-Opus-14B-Elite7,0.5732,0.6345,0.2396,0.1538,0.6997,0.8583,0.816,0.8379,0.7361,0.3639,0.8591,0.1061,0.4036,0.8444,10.4714,0.8983,0.9532,15.9376,0.8836,0.2396,0.8983,0.6293,0.8083,0.9357,0.358,0.6653,0.7786,0.6433,0.821,0.8591,0.8901,0.858,0.7409,0.816,0.6345,0.9819,0.734,0.3302,0.0229,0.0118,0.0442,0,0.6901,0.796,8.0932,0.8186,0.8967,9.2799,0.7512,0.7014,2.8979,28.1433,10.613,0.1061,24.5471,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,2,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,prithivMLmods/Equuleus-Opus-14B-Exp,0.6497,0.6225,0.569,0.2699,0.7351,0.8806,0.892,0.8468,0.7867,0.5406,0.9017,0.1015,0.5245,0.8623,12.5404,0.907,0.9534,16.5971,0.8837,0.569,0.8978,0.6351,0.8167,0.9553,0.5794,0.7026,0.8632,0.7753,0.8431,0.9017,0.9005,0.8708,0.7887,0.892,0.6225,0.9799,0.7675,0.5179,0.0806,0.3738,0.0531,0.0741,0.7681,0.8263,10.8727,0.8381,0.904,10.6808,0.7584,0.697,2.8057,26.5198,10.1641,0.1015,23.2268,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,2,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,prithivMLmods/Equuleus-Opus-14B-Exp,0.5613,0.6225,0.1699,0.1465,0.698,0.8556,0.782,0.8373,0.7267,0.3799,0.8545,0.1015,0.417,0.8465,9.9701,0.8988,0.9527,15.9437,0.8835,0.1699,0.8995,0.6379,0.8167,0.9312,0.3854,0.6645,0.7514,0.5947,0.8328,0.8545,0.8787,0.8565,0.736,0.782,0.6225,0.9799,0.7316,0.3373,0.0176,0.0095,0.0265,0,0.6789,0.7976,8.2888,0.8193,0.8961,9.186,0.7478,0.697,2.8057,26.5198,10.1641,0.1015,23.2268,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,2,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,SicariusSicariiStuff/Impish_QWEN_14B-1M,0.498,0.5723,0.233,0.1408,0.4099,0.836,0.52,0.8339,0.7338,0.292,0.8195,0.0863,0.3321,0.835,10.4669,0.895,0.9502,14.8106,0.8792,0.233,0.854,0.6178,0.7903,0.9205,0.2764,0.5662,0.6984,0.738,0.8246,0.8195,0.8785,0.8607,0.7335,0.52,0.5723,0.9177,0.2535,0.2675,0.0252,0.0085,0,0.0019,0.6685,0.7868,7.8035,0.8136,0.8958,9.7805,0.7477,0.6832,2.6374,23.1044,8.6411,0.0863,20.2858,Qwen2ForCausalLM,float16,apache-2.0,14.77,16,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,SicariusSicariiStuff/Impish_QWEN_14B-1M,0.6321,0.5723,0.566,0.2512,0.7146,0.8669,0.866,0.8332,0.768,0.521,0.9078,0.0863,0.4997,0.8489,10.8752,0.8982,0.9481,15.2433,0.8704,0.566,0.8735,0.6121,0.8125,0.9482,0.5703,0.6846,0.8377,0.7784,0.7991,0.9078,0.8778,0.8582,0.779,0.866,0.5723,0.9177,0.7446,0.4929,0.0667,0.3712,0.0088,0.0612,0.748,0.8076,10.4801,0.8172,0.9006,10.7189,0.7471,0.6832,2.6374,23.1044,8.6411,0.0863,20.2858,Qwen2ForCausalLM,float16,apache-2.0,14.77,16,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Zhihu-ai/Zhi-writing-dsr1-14b,0.2177,0.0422,0.1481,0.0547,0.0008,0.4454,0,0.7074,0.2581,0.1418,0.4989,0.0975,0.1162,0.7717,6.5311,0.7732,0.8979,9.4522,0.7443,0.1481,0,0.5057,0,0.7373,0.1301,0.0017,0.4252,0,0.3594,0.4989,0.8295,0.7937,0.599,0,0.0422,0.1185,0,0.179,0.0007,0.0008,0.0177,0,0.2546,0.7272,6.2891,0.6824,0.8516,6.3808,0.6295,0.6917,2.0468,28.5802,9.7494,0.0975,20.4886,Qwen2ForCausalLM,bfloat16,apache-2.0,14.77,16,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Zhihu-ai/Zhi-writing-dsr1-14b,0.574,0.0422,0.5764,0.2385,0.6932,0.8565,0.832,0.8312,0.7677,0.4798,0.8987,0.0975,0.4734,0.8482,10.6265,0.8982,0.9493,15.2187,0.8771,0.5764,0.8545,0.5891,0.8042,0.9491,0.5004,0.656,0.8426,0.7771,0.8255,0.8987,0.8771,0.849,0.7661,0.832,0.0422,0.1185,0.7305,0.4655,0.0258,0.3118,0.1327,0.0367,0.6856,0.808,9.5629,0.8141,0.8932,9.8451,0.7354,0.6917,2.0468,28.5802,9.7494,0.0975,20.4886,Qwen2ForCausalLM,bfloat16,apache-2.0,14.77,16,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,soob3123/amoral-cogito-14b,0.5508,0.5643,0.3128,0.0604,0.7032,0.8494,0.772,0.7819,0.7294,0.3556,0.869,0.0603,0.3916,0.8154,10.8466,0.8714,0.901,14.6011,0.7671,0.3128,0.9006,0.6121,0.7681,0.9294,0.3,0.693,0.7301,0.7323,0.8043,0.869,0.8879,0.8686,0.7184,0.772,0.5643,0.9217,0.7133,0.3752,0,0.0034,0,0.0006,0.2983,0.7657,8.5109,0.7969,0.8695,9.3955,0.6921,0.6656,2.545,15.1197,6.037,0.0603,13.596,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,0,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,soob3123/amoral-cogito-14b,0.6389,0.5643,0.5813,0.1899,0.7765,0.8863,0.886,0.8334,0.7885,0.5468,0.9146,0.0603,0.546,0.8577,12.0324,0.8987,0.9549,17.2865,0.8864,0.5813,0.8998,0.6523,0.8333,0.9651,0.5463,0.7419,0.8578,0.7784,0.8204,0.9146,0.9057,0.878,0.7939,0.886,0.5643,0.9217,0.8111,0.5481,0.0219,0.3144,0.1062,0.0878,0.4192,0.8029,10.5031,0.8051,0.8978,10.46,0.7435,0.6656,2.545,15.1197,6.037,0.0603,13.596,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,0,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,aixonlab/Zara-14b-v1.2,0.6446,0.6064,0.5362,0.2508,0.7306,0.8754,0.896,0.8462,0.7945,0.5496,0.9087,0.0962,0.5261,0.8603,12.5115,0.9078,0.9522,16.4674,0.8817,0.5362,0.8805,0.6925,0.8236,0.9634,0.5855,0.6973,0.8652,0.762,0.8293,0.9087,0.9029,0.8727,0.7824,0.896,0.6064,0.988,0.764,0.5373,0.0686,0.3417,0.0354,0.0536,0.7547,0.8268,11.0871,0.8382,0.9045,11.0548,0.7572,0.6925,2.878,25.4737,9.6173,0.0962,22.4172,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,4,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,aixonlab/Zara-14b-v1.2,0.5483,0.6064,0.1677,0.1414,0.5963,0.8492,0.818,0.8405,0.7256,0.3412,0.8494,0.0962,0.4184,0.8451,10.4522,0.9023,0.9526,16.1706,0.8839,0.1677,0.8953,0.6149,0.7972,0.9312,0.3691,0.6504,0.7436,0.6641,0.808,0.8494,0.8695,0.8421,0.7211,0.818,0.6064,0.988,0.5422,0.236,0.0159,0.0095,0.0177,0.004,0.6601,0.7996,8.8361,0.8259,0.8959,9.3006,0.7498,0.6925,2.878,25.4737,9.6173,0.0962,22.4172,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,4,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Ttimofeyka/Tissint-14B-v1.2-128k-RP,0.635,0.5703,0.5444,0.2696,0.7308,0.8687,0.872,0.8415,0.7798,0.5302,0.9068,0.0711,0.5337,0.8589,12.1319,0.9046,0.9521,16.4994,0.8806,0.5444,0.8758,0.6609,0.8,0.9544,0.5258,0.6987,0.8644,0.7721,0.8017,0.9068,0.8991,0.8666,0.776,0.872,0.5703,0.9016,0.7629,0.5309,0.0397,0.3261,0.1504,0.0734,0.7585,0.8273,11.7911,0.8332,0.9031,11.3799,0.7475,0.6622,2.5375,18.2334,7.1163,0.0711,15.9869,Qwen2ForCausalLM,bfloat16,apache-2.0,14.77,4,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Ttimofeyka/Tissint-14B-v1.2-128k-RP,0.4988,0.5703,0.0953,0.1399,0.3135,0.8118,0.764,0.8273,0.6908,0.3591,0.8443,0.0711,0.4098,0.8331,9.3821,0.8871,0.9512,14.8943,0.8805,0.0953,0.8825,0.6149,0.7597,0.9062,0.3523,0.5707,0.7798,0.5612,0.7384,0.8443,0.8588,0.8378,0.6466,0.764,0.5703,0.9016,0.0562,0.3152,0.0031,0.0135,0.0088,0,0.6738,0.7961,8.4729,0.8103,0.8903,9.2394,0.7313,0.6622,2.5375,18.2334,7.1163,0.0711,15.9869,Qwen2ForCausalLM,bfloat16,apache-2.0,14.77,4,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Sao10K/14B-Qwen2.5-Freya-x1,0.6485,0.6124,0.5684,0.2591,0.7399,0.8836,0.9,0.845,0.7795,0.5505,0.9157,0.0797,0.5493,0.8637,12.8508,0.9086,0.9528,16.9401,0.8817,0.5684,0.9031,0.6609,0.8208,0.9544,0.5848,0.7105,0.8615,0.774,0.7802,0.9157,0.8971,0.8733,0.7932,0.9,0.6124,0.9498,0.7693,0.5172,0.0702,0.3536,0,0.0693,0.8023,0.8276,11.2268,0.8369,0.9046,11.4434,0.7529,0.6746,2.8361,20.3929,7.9624,0.0797,18.0861,Qwen2ForCausalLM,bfloat16,other,14.77,17,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Sao10K/14B-Qwen2.5-Freya-x1,0.5385,0.6124,0.1761,0.1332,0.4655,0.8479,0.828,0.838,0.7341,0.3439,0.8651,0.0797,0.4445,0.8486,10.6394,0.9022,0.9521,16.101,0.88,0.1761,0.886,0.6149,0.825,0.9339,0.3607,0.6569,0.7354,0.6888,0.8064,0.8651,0.8898,0.8594,0.7238,0.828,0.6124,0.9498,0.274,0.2266,0.0171,0.0104,0,0.0014,0.6373,0.7989,8.3847,0.8233,0.8955,9.3717,0.7464,0.6746,2.8361,20.3929,7.9624,0.0797,18.0861,Qwen2ForCausalLM,bfloat16,other,14.77,17,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Sao10K/14B-Qwen2.5-Kunou-v1,0.6406,0.6325,0.5479,0.2779,0.7348,0.8705,0.878,0.8395,0.7609,0.5072,0.8913,0.1064,0.4881,0.8541,11.8434,0.9013,0.9505,15.8431,0.8783,0.5479,0.887,0.6121,0.8097,0.9455,0.5835,0.7026,0.8398,0.7721,0.7709,0.8913,0.8835,0.8559,0.779,0.878,0.6325,0.9739,0.7671,0.45,0.0872,0.3716,0.1239,0.055,0.7518,0.8164,9.7946,0.8244,0.9019,10.5614,0.7541,0.6984,2.766,29.0841,10.6212,0.1064,25.1146,Qwen2ForCausalLM,bfloat16,other,14.77,26,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Sao10K/14B-Qwen2.5-Kunou-v1,0.5605,0.6325,0.206,0.1424,0.6993,0.8493,0.82,0.8267,0.735,0.3491,0.7992,0.1064,0.373,0.8318,7.9806,0.8863,0.9495,14.7862,0.8787,0.206,0.8935,0.6379,0.8153,0.9276,0.3595,0.6645,0.744,0.6604,0.8175,0.7992,0.878,0.8514,0.7268,0.82,0.6325,0.9739,0.7342,0.3147,0.0215,0.0106,0.0442,0.0037,0.6318,0.7858,7.1288,0.8006,0.8927,9.0866,0.7411,0.6984,2.766,29.0841,10.6212,0.1064,25.1146,Qwen2ForCausalLM,bfloat16,other,14.77,26,main,0,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-zelysxr,0.5402,0.5261,0.1969,0.1465,0.6654,0.7068,0.768,0.8367,0.7385,0.3859,0.8696,0.1017,0.4424,0.8451,10.679,0.8935,0.9521,15.7157,0.8813,0.1969,0.4394,0.6034,0.8167,0.9366,0.3688,0.6783,0.7773,0.6951,0.8001,0.8696,0.8915,0.8598,0.7445,0.768,0.5261,0.8213,0.6525,0.3466,0.0187,0.0088,0,0.0027,0.7023,0.7973,8.5303,0.8216,0.8969,9.4658,0.7505,0.6951,2.7989,26.2431,10.1707,0.1017,23.0785,Qwen2ForCausalLM,bfloat16,,14.766,0,main,0,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-zelysxr,0.6445,0.5261,0.5784,0.2647,0.7495,0.8879,0.89,0.8487,0.7795,0.5518,0.9113,0.1017,0.5479,0.865,13.0381,0.9091,0.954,17.0038,0.8844,0.5784,0.9068,0.6638,0.8125,0.958,0.5799,0.7218,0.8595,0.7841,0.7776,0.9113,0.9039,0.8777,0.7988,0.89,0.5261,0.8213,0.7772,0.5274,0.0709,0.3774,0,0.0733,0.8017,0.828,10.9848,0.8414,0.9052,11.1218,0.7598,0.6951,2.7989,26.2431,10.1707,0.1017,23.0785,Qwen2ForCausalLM,bfloat16,,14.766,0,main,4,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-odyqbix,0.6534,0.6044,0.5876,0.2652,0.7499,0.8895,0.894,0.8491,0.783,0.5516,0.9125,0.1006,0.5502,0.8646,13.2172,0.9092,0.9542,17.1767,0.8847,0.5876,0.9091,0.658,0.8139,0.9598,0.5761,0.7224,0.8644,0.7841,0.7944,0.9125,0.9036,0.8753,0.7996,0.894,0.6044,0.9217,0.7775,0.5286,0.0701,0.3763,0,0.0813,0.7984,0.8282,11.0528,0.8421,0.9057,11.1529,0.7603,0.6942,2.8262,26.1507,10.0569,0.1006,23.0128,Qwen2ForCausalLM,bfloat16,,14.766,0,main,4,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-odyqbix,0.5603,0.6044,0.2069,0.1497,0.6774,0.7771,0.802,0.8427,0.7402,0.3882,0.8746,0.1006,0.4356,0.8513,11.039,0.9059,0.953,15.7775,0.8838,0.2069,0.6483,0.6063,0.8153,0.9366,0.3798,0.6783,0.765,0.7045,0.81,0.8746,0.89,0.8604,0.7464,0.802,0.6044,0.9217,0.6765,0.349,0.0222,0.0072,0,0.003,0.7161,0.8002,8.55,0.8284,0.8975,9.4302,0.7526,0.6942,2.8262,26.1507,10.0569,0.1006,23.0128,Qwen2ForCausalLM,bfloat16,,14.766,0,main,0,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-jlodpmg,0.6501,0.5783,0.5813,0.265,0.7515,0.8867,0.886,0.8495,0.7806,0.5608,0.9112,0.0998,0.5474,0.8656,12.6976,0.9099,0.9548,17.3341,0.8856,0.5813,0.9063,0.6466,0.8236,0.9571,0.6061,0.7241,0.8587,0.786,0.7881,0.9112,0.9038,0.8786,0.7967,0.886,0.5783,0.8976,0.7789,0.5288,0.0779,0.3655,0,0.0799,0.8018,0.8272,10.8848,0.8414,0.9054,11.0724,0.761,0.6935,2.7831,25.7865,9.9774,0.0998,22.7565,Qwen2ForCausalLM,bfloat16,,14.766,0,main,4,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-jlodpmg,0.5643,0.5783,0.2136,0.1436,0.6921,0.8484,0.816,0.8106,0.7432,0.3915,0.8701,0.0998,0.4459,0.8167,10.8901,0.8373,0.9508,15.7627,0.8776,0.2136,0.8625,0.6149,0.8236,0.9366,0.3682,0.6829,0.7691,0.7008,0.8074,0.8701,0.8915,0.8614,0.7462,0.816,0.5783,0.8976,0.7013,0.3605,0.0165,0.0071,0,0.0009,0.6935,0.7745,8.3511,0.7772,0.8967,9.4295,0.7503,0.6935,2.7831,25.7865,9.9774,0.0998,22.7565,Qwen2ForCausalLM,bfloat16,,14.766,0,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,sthenno/tempesthenno-sft-0314-stage1-ckpt50,0.6472,0.5803,0.5788,0.2504,0.7383,0.8792,0.896,0.8488,0.7818,0.5499,0.9155,0.0999,0.5468,0.8629,12.8788,0.9091,0.9536,16.62,0.8841,0.5788,0.9018,0.6695,0.8111,0.9526,0.5515,0.7074,0.8661,0.7771,0.7853,0.9155,0.8979,0.8709,0.7833,0.896,0.5803,0.9237,0.7692,0.5514,0.0726,0.3098,0.0088,0.0896,0.7714,0.8279,11.1971,0.8418,0.9054,11.0568,0.7601,0.6932,2.9905,25.7716,9.9919,0.0999,22.6676,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,3,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,sthenno/tempesthenno-sft-0314-stage1-ckpt50,0.5732,0.5803,0.262,0.1082,0.6707,0.853,0.846,0.8362,0.7409,0.4301,0.878,0.0999,0.4589,0.8498,10.3064,0.9052,0.9475,15.6024,0.8719,0.262,0.8958,0.6322,0.8069,0.9339,0.4155,0.6668,0.7445,0.714,0.807,0.878,0.8828,0.8533,0.7293,0.846,0.5803,0.9237,0.6747,0.4161,0.0172,0.0082,0,0.0015,0.5141,0.8041,8.6678,0.8311,0.8899,9.1219,0.7365,0.6932,2.9905,25.7716,9.9919,0.0999,22.6676,Qwen2ForCausalLM,bfloat16,apache-2.0,14.766,3,main,0,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-jwxwwmz,0.5633,0.5582,0.2035,0.143,0.6793,0.8618,0.828,0.8116,0.7461,0.4002,0.8703,0.0943,0.4493,0.8221,10.9347,0.8498,0.9502,15.9679,0.8758,0.2035,0.9023,0.6207,0.825,0.9374,0.3762,0.6814,0.7601,0.709,0.8157,0.8703,0.89,0.8637,0.7456,0.828,0.5582,0.8956,0.6771,0.3752,0.0172,0.0071,0,0.0013,0.6895,0.7722,8.2572,0.7725,0.8959,9.3469,0.7482,0.6895,2.8143,24.6494,9.4409,0.0943,21.8396,Qwen2ForCausalLM,bfloat16,,14.766,0,main,0,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-jwxwwmz,0.6479,0.5602,0.5795,0.2674,0.7512,0.886,0.894,0.8496,0.7845,0.5482,0.9113,0.0943,0.5407,0.866,12.9361,0.9107,0.9546,17.2405,0.8858,0.5795,0.9058,0.658,0.8361,0.9553,0.5667,0.7241,0.8525,0.7828,0.7932,0.9113,0.901,0.8769,0.797,0.894,0.5602,0.8956,0.7784,0.5372,0.0841,0.3678,0,0.0786,0.8063,0.8265,10.9246,0.8409,0.905,11.0172,0.7612,0.6895,2.8143,24.6494,9.4409,0.0943,21.8396,Qwen2ForCausalLM,bfloat16,,14.766,0,main,4,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-grrwjhl,0.5386,0.4779,0.201,0.1172,0.6792,0.8563,0.81,0.6805,0.7459,0.4004,0.8577,0.0987,0.424,0.7666,9.7402,0.7346,0.8608,15.1145,0.6207,0.201,0.8963,0.6264,0.8264,0.9303,0.3609,0.6687,0.7716,0.6919,0.8133,0.8577,0.8855,0.8565,0.7422,0.81,0.4779,0.761,0.6896,0.4164,0.0226,0.0082,0,0.0008,0.5546,0.7363,7.9654,0.6978,0.8661,8.8758,0.6689,0.693,2.8624,25.5558,9.8691,0.0987,22.5649,Qwen2ForCausalLM,bfloat16,,14.766,0,main,0,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-grrwjhl,0.6353,0.4779,0.5645,0.2615,0.7408,0.8835,0.894,0.8411,0.77,0.5518,0.9042,0.0987,0.5246,0.8634,12.6456,0.9091,0.952,17.011,0.8798,0.5645,0.9036,0.6494,0.8278,0.9544,0.5972,0.7108,0.8299,0.7727,0.77,0.9042,0.8927,0.868,0.7925,0.894,0.4779,0.761,0.7708,0.5336,0.0992,0.3278,0.0088,0.0716,0.8002,0.8126,10.5302,0.8158,0.9039,10.6967,0.7596,0.693,2.8624,25.5558,9.8691,0.0987,22.5649,Qwen2ForCausalLM,bfloat16,,14.766,0,main,4,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-qndyhny,0.6514,0.5803,0.5806,0.268,0.7527,0.8881,0.896,0.8494,0.7837,0.5558,0.9125,0.0979,0.5461,0.8652,13.0474,0.9096,0.9545,17.1803,0.885,0.5806,0.9071,0.6494,0.8222,0.9589,0.5894,0.7266,0.864,0.7847,0.7981,0.9125,0.9034,0.878,0.7985,0.896,0.5803,0.9076,0.7787,0.5318,0.0755,0.3823,0.0088,0.0746,0.7987,0.8277,11.0137,0.842,0.9053,11.2041,0.7608,0.692,2.825,25.1948,9.8012,0.0979,22.2221,Qwen2ForCausalLM,bfloat16,,14.766,0,main,4,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,mergekit-community/mergekit-model_stock-qndyhny,0.5648,0.5803,0.2145,0.1461,0.6813,0.8372,0.812,0.8397,0.7425,0.3877,0.8734,0.0979,0.445,0.8489,11.1106,0.9018,0.9523,15.8968,0.8815,0.2145,0.8289,0.6092,0.8181,0.9366,0.3626,0.6823,0.7642,0.7083,0.8127,0.8734,0.8915,0.8609,0.7461,0.812,0.5803,0.9076,0.6802,0.3556,0.0139,0.0071,0,0,0.7096,0.7982,8.4516,0.8246,0.8969,9.4688,0.7507,0.692,2.825,25.1948,9.8012,0.0979,22.2221,Qwen2ForCausalLM,bfloat16,,14.766,0,main,0,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,Sorawiz/Qwen2.5-14B-Instinct-RP,0.5266,0.6145,0.1709,0.1493,0.3187,0.8484,0.8,0.8413,0.7392,0.3578,0.8668,0.0856,0.447,0.8473,11.0869,0.905,0.9537,15.8782,0.8851,0.1709,0.895,0.6466,0.7806,0.9303,0.3968,0.6041,0.7346,0.738,0.7962,0.8668,0.8688,0.8466,0.7198,0.8,0.6145,0.9558,0.0334,0.2297,0.0149,0.0076,0.0088,0.0046,0.7106,0.8013,8.7547,0.8275,0.8965,9.7556,0.7476,0.679,2.7125,22.331,8.548,0.0856,19.7706,Qwen2ForCausalLM,bfloat16,,14.766,1,main,0,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,Sorawiz/Qwen2.5-14B-Instinct-RP,0.6513,0.6145,0.5707,0.2775,0.7474,0.8857,0.884,0.8438,0.7935,0.5455,0.916,0.0856,0.5528,0.8626,12.5748,0.9082,0.953,16.8867,0.8817,0.5707,0.9023,0.681,0.825,0.9562,0.5432,0.7184,0.8648,0.7898,0.807,0.916,0.9024,0.8756,0.7985,0.884,0.6145,0.9558,0.7765,0.5407,0.0562,0.3545,0.1062,0.076,0.7948,0.8281,11.916,0.8362,0.904,11.5781,0.749,0.679,2.7125,22.331,8.548,0.0856,19.7706,Qwen2ForCausalLM,bfloat16,,14.766,1,main,4,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,Sorawiz/Qwen2.5-14B-Instinct,0.6549,0.6305,0.5739,0.283,0.7536,0.8882,0.886,0.8445,0.7908,0.5497,0.9176,0.086,0.5533,0.8623,12.6227,0.9064,0.9536,17.062,0.8828,0.5739,0.9043,0.6667,0.8208,0.9607,0.5537,0.7249,0.8685,0.7885,0.8096,0.9176,0.9029,0.8758,0.7997,0.886,0.6305,0.9558,0.7822,0.542,0.0631,0.361,0.115,0.0749,0.8008,0.8294,11.8386,0.8378,0.9042,11.6211,0.7508,0.6783,2.7509,22.4969,8.5903,0.086,19.909,Qwen2ForCausalLM,bfloat16,,14.766,0,main,4,True,v1.4.1,v0.6.3.post1
🤝 : base merges and moerges,Sorawiz/Qwen2.5-14B-Instinct,0.5314,0.6305,0.1711,0.1516,0.3517,0.8452,0.8,0.8414,0.738,0.36,0.8702,0.086,0.4596,0.8484,10.9122,0.9038,0.9541,16.3631,0.8857,0.1711,0.8973,0.6379,0.7986,0.9205,0.3871,0.6459,0.7214,0.726,0.8062,0.8702,0.8783,0.8526,0.7179,0.8,0.6305,0.9558,0.0576,0.2334,0.0168,0.0068,0,0.003,0.7313,0.8013,8.5551,0.8273,0.8965,9.5247,0.7488,0.6783,2.7509,22.4969,8.5903,0.086,19.909,Qwen2ForCausalLM,bfloat16,,14.766,0,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Sao10K/32B-Qwen2.5-Kunou-v1,0.5168,0.5944,0.1425,0.1291,0.1283,0.8753,0.832,0.8432,0.7554,0.4065,0.8979,0.0803,0.4472,0.8566,12.5423,0.9061,0.9542,16.5213,0.8843,0.1425,0.9041,0.6207,0.7833,0.9464,0.3982,0.2499,0.8131,0.8018,0.7581,0.8979,0.8932,0.8741,0.7753,0.832,0.5944,0.8775,0.0068,0.3739,0,0.0062,0,0.0028,0.6368,0.8103,9.9715,0.8303,0.9002,10.4816,0.7519,0.6764,3.2759,19.0859,8.0118,0.0803,16.9762,Qwen2ForCausalLM,bfloat16,other,32.764,34,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,Sao10K/32B-Qwen2.5-Kunou-v1,0.6626,0.5944,0.5793,0.2723,0.7821,0.9033,0.94,0.846,0.8006,0.5754,0.9149,0.0803,0.5928,0.8664,13.2904,0.9108,0.9553,17.1684,0.8848,0.5793,0.9106,0.6954,0.8278,0.9651,0.6092,0.7602,0.8583,0.7891,0.8324,0.9149,0.8892,0.8727,0.8341,0.94,0.5944,0.8775,0.8039,0.5242,0.0375,0.3685,0,0.1068,0.8486,0.8329,12.7109,0.8327,0.9083,12.0642,0.7559,0.6764,3.2759,19.0859,8.0118,0.0803,16.9762,Qwen2ForCausalLM,bfloat16,other,32.764,34,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,fluently-lm/FluentlyLM-Prinum,0.641,0.3373,0.5784,0.3011,0.7868,0.8996,0.934,0.8274,0.813,0.5627,0.9059,0.1045,0.5707,0.8466,13.3835,0.8768,0.9561,17.6757,0.887,0.5784,0.899,0.7011,0.8444,0.9651,0.6025,0.7681,0.8989,0.7955,0.8252,0.9059,0.8875,0.8788,0.8345,0.934,0.3373,0.6024,0.8055,0.515,0.0644,0.4011,0.0973,0.1175,0.8249,0.7974,11.8705,0.7821,0.9079,11.4828,0.7638,0.6902,2.8941,26.87,10.4524,0.1045,23.4764,Qwen2ForCausalLM,bfloat16,mit,32.764,28,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,fluently-lm/FluentlyLM-Prinum,0.5227,0.3373,0.182,0.1359,0.7417,0.8776,0.614,0.6565,0.7657,0.4347,0.8995,0.1045,0.4539,0.7574,10.9995,0.7115,0.8565,15.7768,0.591,0.182,0.9066,0.658,0.7958,0.9374,0.4113,0.7167,0.8389,0.8037,0.7319,0.8995,0.8935,0.874,0.7887,0.614,0.3373,0.6024,0.7666,0.439,0.0183,0.006,0,0.0051,0.6503,0.7314,8.872,0.6883,0.8537,8.9716,0.6351,0.6902,2.8941,26.87,10.4524,0.1045,23.4764,Qwen2ForCausalLM,bfloat16,mit,32.764,28,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,NovaSky-AI/Sky-T1-32B-Preview,0.662,0.5863,0.5808,0.2794,0.7703,0.8961,0.938,0.8469,0.8031,0.5509,0.9185,0.1114,0.5541,0.861,12.8865,0.9043,0.9544,16.8698,0.8851,0.5808,0.9083,0.6983,0.7972,0.958,0.6028,0.7439,0.8858,0.7967,0.8376,0.9185,0.8738,0.88,0.8221,0.938,0.5863,0.8715,0.7967,0.4959,0.0407,0.413,0.0708,0.0782,0.7943,0.8262,11.1348,0.8346,0.9055,11.029,0.7634,0.7035,2.9549,30.1857,11.1317,0.1114,25.7682,Qwen2ForCausalLM,float16,apache-2.0,32.764,544,main,4,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,NovaSky-AI/Sky-T1-32B-Preview,0.5464,0.5863,0.1678,0.1488,0.4983,0.8669,0.85,0.8345,0.7314,0.3205,0.8943,0.1114,0.2452,0.843,11.5121,0.8939,0.9524,16.1382,0.8828,0.1678,0.9011,0.6063,0.7639,0.9312,0.348,0.5784,0.7436,0.7948,0.7485,0.8943,0.8861,0.8721,0.7683,0.85,0.5863,0.8715,0.4182,0.3682,0,0.0131,0.0354,0.0045,0.6912,0.793,8.736,0.8134,0.8964,9.5862,0.748,0.7035,2.9549,30.1857,11.1317,0.1114,25.7682,Qwen2ForCausalLM,float16,apache-2.0,32.764,544,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,NovaSky-AI/Sky-T1-32B-Flash,0.5463,0.6225,0.1731,0.15,0.4522,0.8663,0.85,0.8344,0.7328,0.3222,0.8949,0.1107,0.2556,0.8433,11.549,0.8946,0.9523,16.1198,0.8823,0.1731,0.8998,0.6121,0.7639,0.9312,0.3308,0.5518,0.7428,0.7986,0.7467,0.8949,0.8855,0.8721,0.768,0.85,0.6225,0.9197,0.3526,0.3801,0,0.0125,0.0354,0.0044,0.6977,0.793,8.7385,0.8128,0.8964,9.5713,0.748,0.7029,2.9101,29.9646,11.0686,0.1107,25.7046,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,64,main,0,True,v1.4.1,v0.6.3.post1
🔶 : fine-tuned,NovaSky-AI/Sky-T1-32B-Flash,0.665,0.6225,0.5791,0.2773,0.7716,0.8961,0.936,0.8465,0.8045,0.5522,0.9182,0.1107,0.5549,0.8608,12.7186,0.9031,0.9544,17.0616,0.8852,0.5791,0.9086,0.704,0.8,0.958,0.5996,0.7478,0.887,0.7967,0.8346,0.9182,0.8714,0.879,0.8219,0.936,0.6225,0.9197,0.7955,0.5022,0.0399,0.4097,0.0619,0.0835,0.7915,0.8267,11.2319,0.8344,0.9055,11.0557,0.7631,0.7029,2.9101,29.9646,11.0686,0.1107,25.7046,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,64,main,4,True,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned,shisa-ai/shisa-v2-qwen2.5-7b,0.5541,0,0.4511,0.2166,0.6681,0.8562,0.814,0.8413,0.781,0.4612,0.9046,0.1008,0.4261,0.8683,13.1155,0.9087,0.9524,16.3949,0.8806,0.4511,0.88,0.681,0.8181,0.9392,0.496,0.6377,0.8685,0.786,0.7512,0.9046,0.884,0.8613,0.7494,0.814,0,0.002,0.6985,0.4615,0.0241,0.3537,0.0354,0.0508,0.619,0.815,9.4698,0.8286,0.9001,10.1521,0.7475,0.6975,2.658,26.2831,10.0892,0.1008,22.6994,Qwen2ForCausalLM,bfloat16,apache-2.0,7.616,4,main,4,False,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned,shisa-ai/shisa-v2-qwen2.5-7b,0.4888,0,0.3132,0.1156,0.6385,0.83,0.704,0.7567,0.7068,0.3238,0.8875,0.1008,0.3545,0.8419,12.4324,0.8827,0.8797,15.2873,0.6937,0.3132,0.8532,0.6638,0.7708,0.9249,0.2453,0.6027,0.7395,0.5385,0.8212,0.8875,0.9002,0.8666,0.712,0.704,0,0.002,0.6743,0.3715,0,0.0128,0,0.0011,0.5641,0.7814,8.4037,0.7988,0.8552,8.7703,0.6516,0.6975,2.658,26.2831,10.0892,0.1008,22.6994,Qwen2ForCausalLM,bfloat16,apache-2.0,7.616,4,main,0,False,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned,shisa-ai/shisa-v2-llama3.1-8b,0.5361,0.008,0.5307,0.2237,0.5846,0.8046,0.746,0.8131,0.7457,0.4617,0.9069,0.0721,0.4697,0.8689,14.0218,0.9062,0.942,16.8746,0.8526,0.5307,0.8575,0.6667,0.7861,0.9044,0.4589,0.5315,0.8328,0.7652,0.6777,0.9069,0.8743,0.8421,0.652,0.746,0.008,0.0462,0.6377,0.4565,0,0.3001,0.0619,0.0646,0.6916,0.815,10.7058,0.8248,0.8716,9.9827,0.669,0.6777,3.1347,16.681,7.21,0.0721,15.0889,LlamaForCausalLM,bfloat16,llama3.1,8.03,1,main,4,False,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned,shisa-ai/shisa-v2-llama3.1-8b,0.319,0.008,0.0971,0.0963,0.1221,0.7076,0.05,0.7699,0.7387,0.0943,0.753,0.0721,0.0838,0.854,12.7979,0.8988,0.8964,7.5275,0.7689,0.0971,0.6165,0.6695,0.7639,0.8919,0.1585,0.0068,0.7108,0.7803,0.769,0.753,0.8872,0.8453,0.6145,0.05,0.008,0.0462,0.2374,0.0406,0,0.0032,0,0.0008,0.4778,0.7725,8.3271,0.7919,0.8343,6.8691,0.6199,0.6777,3.1347,16.681,7.21,0.0721,15.0889,LlamaForCausalLM,bfloat16,llama3.1,8.03,1,main,0,False,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned,shisa-ai/shisa-v2-mistral-nemo-12b,0.5312,0.2731,0.3784,0.138,0.5473,0.8083,0.708,0.8436,0.7622,0.3905,0.8918,0.1016,0.4555,0.861,13.2315,0.9114,0.9516,15.2979,0.8816,0.3784,0.8309,0.6839,0.7764,0.916,0.2558,0.5069,0.7486,0.7753,0.8269,0.8918,0.8864,0.8475,0.678,0.708,0.2731,0.5462,0.5876,0.4603,0,0,0.0265,0.002,0.6614,0.8041,8.9319,0.831,0.8984,9.6436,0.7504,0.6985,2.9274,26.2285,10.1687,0.1016,22.8773,MistralForCausalLM,bfloat16,apache-2.0,12.248,3,main,0,False,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned,shisa-ai/shisa-v2-mistral-nemo-12b,0.5775,0.2731,0.5268,0.2444,0.5964,0.8524,0.752,0.8489,0.7435,0.5035,0.9104,0.1016,0.5365,0.872,14.0721,0.9135,0.9535,16.8044,0.8837,0.5268,0.8988,0.6293,0.7792,0.9205,0.4656,0.5546,0.8291,0.7765,0.7035,0.9104,0.8667,0.813,0.7378,0.752,0.2731,0.5462,0.6381,0.5084,0.0185,0.3189,0.0442,0.0717,0.7685,0.8254,10.4961,0.842,0.9025,10.4132,0.7566,0.6985,2.9274,26.2285,10.1687,0.1016,22.8773,MistralForCausalLM,bfloat16,apache-2.0,12.248,3,main,4,False,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned,shisa-ai/shisa-v2-qwen2.5-32b,0.6561,0.4659,0.6013,0.2835,0.7775,0.9021,0.932,0.852,0.8267,0.5832,0.915,0.0785,0.5974,0.8689,14.401,0.9101,0.9573,18.5751,0.8889,0.6013,0.9033,0.75,0.8625,0.9651,0.5989,0.7585,0.894,0.8005,0.8263,0.915,0.9083,0.887,0.8377,0.932,0.4659,0.7108,0.7964,0.5534,0.0693,0.3902,0.0088,0.109,0.8402,0.8377,12.3081,0.845,0.9088,11.2778,0.7641,0.6817,3.0535,18.9282,7.8581,0.0785,16.9487,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,1,main,4,False,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned,shisa-ai/shisa-v2-qwen2.5-32b,0.5606,0.4659,0.1732,0.1615,0.6441,0.879,0.844,0.8441,0.763,0.422,0.8914,0.0785,0.464,0.8626,12.6117,0.9092,0.9541,17.5532,0.883,0.1732,0.9013,0.6868,0.7778,0.9482,0.3387,0.7094,0.7346,0.7961,0.8196,0.8914,0.9012,0.8682,0.7875,0.844,0.4659,0.7108,0.5788,0.4633,0.0258,0.0072,0,0.0081,0.7664,0.8151,9.7607,0.8357,0.8979,10.2168,0.7488,0.6817,3.0535,18.9282,7.8581,0.0785,16.9487,Qwen2ForCausalLM,bfloat16,apache-2.0,32.764,1,main,0,False,v1.4.1,v0.6.3.post1
⭕ : instruction-tuned
shisa-ai/shisa-v2-llama3.3-70b
0.5291
0.0281
0.346
0.1497
0.671
0.847
0.734
0.8508
0.7187
0.467
0.9048
0.1032
0.5023
0.8627
13.5308
0.9123
0.9565
17.6453
0.8877
0.346
0.8963
0.6523
0.7264
0.9169
0.4524
0.6659
0.5945
0.7696
0.8508
0.9048
0.8901
0.8549
0.7278
0.734
0.0281
0.0402
0.6761
0.4462
0.0096
0
0.0088
0.005
0.7248
0.8211
10.4283
0.8454
0.903
10.561
0.7579
0.6995
2.9535
25.9622
10.317
0.1032
22.8281
LlamaForCausalLM
bfloat16
llama3.3
70.554
2
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
shisa-ai/shisa-v2-llama3.3-70b
0.6058
0.0281
0.5451
0.2541
0.7455
0.8792
0.93
0.856
0.7845
0.6195
0.9189
0.1032
0.6917
0.8728
14.8915
0.915
0.9585
19.1442
0.8903
0.5451
0.9081
0.6839
0.7889
0.9473
0.5948
0.699
0.8155
0.7917
0.8427
0.9189
0.8941
0.8674
0.7822
0.93
0.0281
0.0402
0.7921
0.5721
0.0373
0.3267
0.0177
0.075
0.8138
0.8371
12.1205
0.8536
0.909
11.2276
0.7651
0.6995
2.9535
25.9622
10.317
0.1032
22.8281
LlamaForCausalLM
bfloat16
llama3.3
70.554
2
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
shisa-ai/shisa-v2-mistral-small-24b
0.6134
0.247
0.5163
0.278
0.7287
0.8833
0.882
0.8451
0.7938
0.5649
0.9217
0.0868
0.6199
0.8703
14.9622
0.9098
0.9489
18.7172
0.8654
0.5163
0.9088
0.7213
0.8292
0.9535
0.5144
0.6879
0.8291
0.791
0.7983
0.9217
0.8982
0.88
0.7877
0.882
0.247
0.4297
0.7695
0.5604
0.0203
0.4049
0.0265
0.0866
0.8518
0.8312
12.1585
0.8454
0.9077
11.3817
0.7599
0.6866
3.444
21.1231
8.6948
0.0868
18.845
MistralForCausalLM
bfloat16
apache-2.0
23.572
1
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
shisa-ai/shisa-v2-mistral-small-24b
0.4563
0.247
0.1864
0.165
0.4927
0.8638
0.368
0.8262
0.7612
0.1963
0.8259
0.0868
0.1544
0.8554
15.0277
0.8956
0.9486
18.0557
0.8677
0.1864
0.894
0.681
0.775
0.9455
0.3262
0.4171
0.7231
0.7904
0.8364
0.8259
0.9016
0.8708
0.7519
0.368
0.247
0.4297
0.5682
0.1083
0.0113
0.0021
0
0.0009
0.8107
0.7928
9.6942
0.8153
0.8905
10.2762
0.7261
0.6866
3.444
21.1231
8.6948
0.0868
18.845
MistralForCausalLM
bfloat16
apache-2.0
23.572
1
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
shisa-ai/shisa-v1-llama3-70b
0.2769
0.004
0.0112
0.1013
0
0.5489
0
0.8272
0.2177
0.4475
0.7707
0.1176
0.4897
0.8338
13.1504
0.8944
0.9474
16.1619
0.8706
0.0112
0
0.4856
0
0.9303
0.417
0
0.3578
0
0.245
0.7707
0.909
0.8702
0.7165
0
0.004
0.1546
0
0.4358
0
0
0
0
0.5064
0.7767
9.0003
0.8112
0.8933
9.6949
0.7327
0.7071
3.038
30.8188
11.7642
0.1176
26.5965
LlamaForCausalLM
bfloat16
llama3
70.554
3
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
shisa-ai/shisa-v1-llama3-70b
0.5516
0.004
0.4699
0.2095
0.4536
0.8186
0.88
0.8447
0.9202
0.5702
0.7796
0.1176
0.6439
0.8734
14.4194
0.9124
0.9522
16.7374
0.879
0.4699
0.745
0.9224
0.9944
0.95
0.5597
0.381
0.9063
0.887
0.8906
0.7796
0.9238
0.8938
0.7608
0.88
0.004
0.1546
0.5261
0.5069
0.0185
0.2854
0.0088
0.0444
0.6904
0.812
10.8948
0.8371
0.9027
10.8993
0.7502
0.7071
3.038
30.8188
11.7642
0.1176
26.5965
LlamaForCausalLM
bfloat16
llama3
70.554
3
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
augmxnt/shisa-7b-v1
0.3569
0.01
0
0.0112
0.2478
0.6346
0.142
0.6105
0.8767
0.4034
0.9126
0.0766
0.3618
0.8037
8.702
0.8092
0.827
5.3748
0.609
0
0.4817
0.8247
0.975
0.8597
0.4833
0.2225
0.9026
0.7936
0.8876
0.9126
0.9204
0.8859
0.5625
0.142
0.01
0.8574
0.273
0.365
0
0
0
0
0.056
0.6948
7.0288
0.5908
0.785
3.0739
0.4329
0.6817
2.6202
27.2012
7.6666
0.0766
21.6621
MistralForCausalLM
bfloat16
apache-2.0
7.964
28
main
0
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
augmxnt/shisa-7b-v1
0.4594
0.01
0.3844
0.0938
0.418
0.6492
0.486
0.7806
0.8349
0.4062
0.9135
0.0766
0.3787
0.8481
11.979
0.8881
0.9386
14.372
0.8526
0.3844
0.5038
0.7557
0.9583
0.8543
0.4601
0.3434
0.8743
0.7973
0.7887
0.9135
0.869
0.8553
0.5895
0.486
0.01
0.8574
0.4927
0.3797
0.005
0.0679
0.0619
0.0137
0.3206
0.7371
6.8199
0.7012
0.8798
9.4703
0.6807
0.6817
2.6202
27.2012
7.6666
0.0766
21.6621
MistralForCausalLM
bfloat16
apache-2.0
7.964
28
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Casual-Autopsy/Llama-3-VNTL-Yollisa-8B
0.2339
0.1586
0
0.0461
0.0269
0.3213
0.046
0.6912
0.5243
0.2083
0.5481
0.0016
0.2379
0.7598
4.4654
0.703
0.9418
13.839
0.8548
0
0.5068
0.3448
0.5
0.2055
0.1772
0.0387
0.491
0.6667
0.619
0.5481
0.1085
0.1005
0.2517
0.046
0.1586
0.4779
0.015
0.2099
0
0
0
0
0.2304
0.6854
2.0294
0.5294
0.8813
8.4269
0.6777
0.5869
0.0515
0.961
0.1622
0.0016
0.8496
LlamaForCausalLM
float32
8.03
1
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Casual-Autopsy/Llama-3-VNTL-Yollisa-8B
0.5069
0.1586
0.4173
0.224
0.4656
0.7655
0.698
0.8419
0.5608
0.5575
0.8849
0.0016
0.6626
0.8594
12.237
0.9051
0.9537
16.7834
0.8847
0.4173
0.8547
0.4397
0.6
0.8999
0.5001
0.4213
0.6931
0.6629
0.4082
0.8849
0.7566
0.7494
0.5418
0.698
0.1586
0.4779
0.5098
0.5097
0.0011
0.1836
0.1239
0.0482
0.7632
0.8107
9.2785
0.8128
0.9095
11.5415
0.7649
0.5869
0.0515
0.961
0.1622
0.0016
0.8496
LlamaForCausalLM
float32
8.03
1
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Sorawiz/Qwen2.5-14B-Instinct-Talk
0.6522
0.6185
0.5659
0.2852
0.7474
0.8851
0.886
0.8444
0.7913
0.5489
0.9172
0.0849
0.5475
0.8614
12.7099
0.9065
0.9534
17.1545
0.8825
0.5659
0.9008
0.6609
0.8319
0.9562
0.5662
0.7167
0.8698
0.7917
0.8021
0.9172
0.9015
0.8766
0.7983
0.886
0.6185
0.9518
0.778
0.533
0.0573
0.3661
0.1416
0.0749
0.7863
0.8296
11.777
0.8376
0.9041
11.5986
0.7508
0.6782
2.6963
22.1774
8.483
0.0849
19.6408
Qwen2ForCausalLM
bfloat16
14.766
1
main
4
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Sorawiz/Qwen2.5-14B-Instinct-Talk
0.5283
0.6185
0.2024
0.1509
0.3081
0.8499
0.792
0.8419
0.7382
0.354
0.871
0.0849
0.4505
0.8469
11.131
0.905
0.9538
15.7058
0.8849
0.2024
0.894
0.6437
0.7792
0.9303
0.3817
0.5761
0.7153
0.7443
0.8086
0.871
0.8708
0.8489
0.7254
0.792
0.6185
0.9518
0.04
0.2299
0.0129
0.0077
0.0088
0.002
0.7228
0.7997
8.8547
0.8276
0.8969
9.6847
0.75
0.6782
2.6963
22.1774
8.483
0.0849
19.6408
Qwen2ForCausalLM
bfloat16
14.766
1
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-0.5B
0.3139
0.0241
0.308
0.0459
0.3484
0.3864
0.314
0.6414
0.4532
0.2238
0.6332
0.0747
0.1319
0.7562
5.7707
0.7085
0.9018
9.3313
0.7579
0.308
0.5253
0.3333
0.5042
0.3691
0.3972
0.3005
0.5185
0.6496
0.2602
0.6332
0.0039
0.0163
0.2648
0.314
0.0241
0.0843
0.3963
0.1424
0.0076
0.0241
0.0088
0.0116
0.1772
0.6599
4.5559
0.5434
0.8349
6.1615
0.5559
0.6754
1.8932
20.7644
7.4776
0.0747
17.8396
Qwen2ForCausalLM
bfloat16
apache-2.0
0.494
271
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
Qwen/Qwen2.5-0.5B
0.1756
0.0241
0
0.0469
0.0185
0.2883
0.034
0.5883
0.3708
0.1392
0.3471
0.0747
0.0604
0.7041
3.291
0.6245
0.8076
3.3941
0.6981
0
0.4682
0.3276
0.5
0.1448
0.2518
0.0113
0.145
0.6604
0.2208
0.3471
0
0
0.252
0.034
0.0241
0.0843
0.0257
0.1054
0
0
0
0
0.2343
0.6504
2.9894
0.5076
0.7656
1.6117
0.5231
0.6754
1.8932
20.7644
7.4776
0.0747
17.8396
Qwen2ForCausalLM
bfloat16
apache-2.0
0.494
271
main
0
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
tiiuae/Falcon3-1B-base
0.2122
0
0.1768
0.0446
0.3341
0.3331
0.048
0.4391
0.5416
0.1295
0.2785
0.0088
0.086
0.6246
2.2558
0.3651
0.8603
6.3247
0.5707
0.1768
0.5321
0.3563
0.5056
0.2163
0.2383
0.2691
0.5534
0.6717
0.6209
0.2785
0.0535
0.0532
0.2509
0.048
0
0
0.3992
0.0643
0.005
0.0506
0
0.0119
0.1552
0.5972
1.8499
0.3849
0.8001
4.6456
0.4357
0.5894
0.1442
3.9909
0.8837
0.0088
3.401
LlamaForCausalLM
bfloat16
other
1.669
23
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
tiiuae/Falcon3-1B-base
0.0536
0
0
0.0148
0
0
0
0.4109
0
0.0711
0.0836
0.0088
0.042
0.6218
0.1086
0.4429
0.7647
1.5444
0.3887
0
0
0
0
0
0.1118
0
0
0
0
0.0836
0.0179
0.0216
0
0
0
0
0
0.0593
0
0
0
0
0.0738
0.5966
0.2564
0.4357
0.7676
2.1691
0.3764
0.5894
0.1442
3.9909
0.8837
0.0088
3.401
LlamaForCausalLM
bfloat16
other
1.669
23
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Elizezen/Antler-7B
0.3985
0.0341
0.3788
0.142
0.3476
0.6184
0.368
0.7777
0.4004
0.4217
0.7994
0.0956
0.4621
0.8338
9.5307
0.8756
0.9313
11.8381
0.8396
0.3788
0.7776
0.4138
0.5
0.6702
0.409
0.3482
0.2469
0.6206
0.2208
0.7994
0.2751
0.2773
0.4073
0.368
0.0341
0.3815
0.347
0.394
0
0.0706
0.0265
0.0287
0.5843
0.749
5.9275
0.7357
0.8713
8.2373
0.66
0.6763
2.459
27.0502
9.5531
0.0956
22.8163
MistralForCausalLM
float16
7.242
8
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
Elizezen/Antler-7B
0.229
0.0341
0.0019
0.0574
0.2572
0.3152
0.036
0.7072
0.2445
0.2529
0.5174
0.0956
0.3089
0.8043
6.2192
0.8447
0.8285
3.9582
0.705
0.0019
0.4987
0.3276
0.4986
0.1957
0.2258
0.2228
0.145
0.0303
0.2208
0.5174
0.0339
0.033
0.2511
0.036
0.0341
0.3815
0.2915
0.224
0
0
0
0
0.2869
0.7148
4.3629
0.6999
0.8274
4.5111
0.5791
0.6763
2.459
27.0502
9.5531
0.0956
22.8163
MistralForCausalLM
float16
7.242
8
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
0.5125
0
0.4682
0.1953
0.6266
0.8282
0.588
0.8047
0.717
0.4138
0.8956
0.1005
0.3651
0.8367
10.4877
0.8722
0.9425
13.8136
0.8613
0.4682
0.8647
0.5776
0.7069
0.9223
0.4825
0.5755
0.8151
0.7595
0.7258
0.8956
0.8934
0.8602
0.6976
0.588
0
0
0.6776
0.3939
0.027
0.2668
0.1062
0.023
0.5537
0.7751
8.18
0.7555
0.8918
9.4665
0.7297
0.693
2.2146
28.389
10.0695
0.1005
21.0006
Qwen2ForCausalLM
bfloat16
14.77
129
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
0.1605
0
0.0998
0.052
0
0.2275
0
0.6854
0.0048
0.1255
0.4705
0.1005
0.0781
0.7611
6.0601
0.7197
0.9069
10.1046
0.782
0.0998
0
0.0172
0
0.2985
0.1363
0
0.0066
0
0.0002
0.4705
0.4363
0.4157
0.3841
0
0
0
0
0.1622
0.0024
0.0023
0.0064
0
0.2489
0.7054
4.8844
0.6169
0.8467
5.3966
0.6229
0.693
2.2146
28.389
10.0695
0.1005
21.0006
Qwen2ForCausalLM
bfloat16
14.77
129
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
nbeerbower/DeepSeek-R1-Qwen-lorablated-32B
0.3295
0.2912
0.2587
0.1045
0.0111
0.8249
0
0.7893
0.5499
0.1599
0.5311
0.104
0.156
0.8147
10.4526
0.8483
0.9318
14.1674
0.8314
0.2587
0.8372
0.5115
0.2944
0.9106
0.1474
0.0023
0.5312
0.7191
0.6933
0.5311
0.7862
0.8125
0.7268
0
0.2912
0.5482
0.0199
0.1762
0.0081
0.0106
0.0177
0.0016
0.4846
0.7626
7.6841
0.7656
0.8844
8.7265
0.712
0.6933
2.3278
27.5105
10.3982
0.104
22.0419
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
7
main
0
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
nbeerbower/DeepSeek-R1-Qwen-lorablated-32B
0.6171
0.2912
0.5677
0.259
0.7377
0.8761
0.912
0.8402
0.7706
0.5228
0.907
0.104
0.5094
0.8588
11.8553
0.9015
0.9525
16.7543
0.8818
0.5677
0.8785
0.6552
0.7306
0.9508
0.5614
0.7066
0.8652
0.7955
0.8068
0.907
0.8985
0.8693
0.7989
0.912
0.2912
0.5482
0.7689
0.4977
0.0211
0.3619
0.1239
0.0331
0.7548
0.8164
10.3205
0.8254
0.9028
11.0062
0.7521
0.6933
2.3278
27.5105
10.3982
0.104
22.0419
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
7
main
4
False
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
nbeerbower/DeepSeek-R1-Qwen-lorablated-32B
0.32
0.2369
0.2561
0.1057
0.0029
0.8279
0
0.7856
0.5131
0.1578
0.532
0.1017
0.1545
0.8111
10.1852
0.8409
0.9302
13.8373
0.8284
0.2561
0.8409
0.5086
0.1806
0.9142
0.1456
0.0017
0.5082
0.6963
0.6718
0.532
0.786
0.8126
0.7287
0
0.2369
0.4257
0.0041
0.1732
0.0109
0.0081
0.0177
0.0023
0.4895
0.7615
7.6467
0.7616
0.8845
8.7855
0.7113
0.6924
2.3025
26.8232
10.1766
0.1017
21.2201
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
7
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
nbeerbower/DeepSeek-R1-Qwen-lorablated-32B
0.6133
0.2369
0.5686
0.2636
0.7399
0.8749
0.914
0.8398
0.7713
0.5245
0.9111
0.1017
0.5113
0.8589
11.9443
0.9012
0.9525
16.9022
0.8814
0.5686
0.8785
0.6552
0.7278
0.9482
0.5614
0.7108
0.8681
0.7942
0.8112
0.9111
0.8999
0.8706
0.7981
0.914
0.2369
0.4257
0.769
0.5007
0.0204
0.369
0.1327
0.0361
0.7599
0.8165
10.2802
0.8248
0.9029
10.8589
0.7518
0.6924
2.3025
26.8232
10.1766
0.1017
21.2201
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
7
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Triangle104/DeepSky-T100
0.5445
0.6064
0.1544
0.1516
0.4536
0.868
0.85
0.8392
0.7434
0.31
0.8972
0.1161
0.2019
0.8468
11.8103
0.9005
0.953
16.3377
0.8843
0.1544
0.8995
0.6264
0.7653
0.9312
0.3724
0.4922
0.7654
0.7885
0.7715
0.8972
0.8867
0.8761
0.7734
0.85
0.6064
0.8675
0.4149
0.3558
0.0083
0.0108
0.0354
0.0039
0.6994
0.7967
8.9773
0.8223
0.8974
9.7636
0.7497
0.7069
3.1084
30.7099
11.6186
0.1161
26.3257
Qwen2ForCausalLM
float16
apache-2.0
32.764
1
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Triangle104/DeepSky-T100
0.6654
0.6064
0.5708
0.2892
0.7749
0.8961
0.94
0.849
0.8022
0.5555
0.9188
0.1161
0.5606
0.8633
13.2477
0.9075
0.9552
17.1443
0.8856
0.5708
0.9061
0.6925
0.7944
0.9589
0.5904
0.7495
0.8862
0.7999
0.8378
0.9188
0.8857
0.8845
0.8234
0.94
0.6064
0.8675
0.8002
0.5156
0.0319
0.4129
0.115
0.0812
0.805
0.8289
11.3559
0.839
0.9067
11.1619
0.7639
0.7069
3.1084
30.7099
11.6186
0.1161
26.3257
Qwen2ForCausalLM
float16
apache-2.0
32.764
1
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1
0.4289
0.004
0.394
0.1169
0.4538
0.6108
0.63
0.7853
0.5937
0.2353
0.8182
0.0763
0.2311
0.8245
8.8367
0.8594
0.939
13.5663
0.8534
0.394
0.7457
0.4914
0.5139
0.714
0.1958
0.3815
0.6976
0.7121
0.5533
0.8182
0.6956
0.6917
0.3727
0.63
0.004
0.006
0.526
0.2792
0.0043
0.1999
0.0354
0.0472
0.2979
0.7489
7.7371
0.7136
0.8879
8.983
0.715
0.6761
2.0641
20.4584
7.6364
0.0763
17.5578
LlamaForCausalLM
bfloat16
mit
8.03
10
main
4
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1
0.1968
0.004
0.0031
0.0403
0.0603
0.3906
0
0.717
0.2714
0.1085
0.4927
0.0763
0.0567
0.7528
6.3463
0.7396
0.9288
9.6932
0.8316
0.0031
0.7154
0.319
0.5
0.2029
0.1761
0.009
0.2268
0.0341
0.2772
0.4927
0.0963
0.0916
0.2534
0
0.004
0.006
0.1116
0.0928
0
0
0
0
0.2013
0.7088
4.9579
0.6611
0.8612
5.8148
0.6359
0.6761
2.0641
20.4584
7.6364
0.0763
17.5578
LlamaForCausalLM
bfloat16
mit
8.03
10
main
0
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
0.5519
0.5944
0.0725
0.1275
0.5723
0.8687
0.85
0.8364
0.7683
0.393
0.8863
0.1011
0.4224
0.8435
10.8408
0.8994
0.9493
14.6143
0.8762
0.0725
0.9051
0.6466
0.7694
0.9231
0.3049
0.5021
0.8155
0.7973
0.8125
0.8863
0.8695
0.8651
0.7779
0.85
0.5944
0.9257
0.6425
0.4518
0.0098
0.0056
0
0.0102
0.6119
0.7965
8.5812
0.825
0.8953
9.343
0.7449
0.6956
2.7616
27.0298
10.1225
0.1011
23.6235
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
15
main
0
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
0.6561
0.5944
0.5592
0.2636
0.7741
0.8957
0.924
0.8478
0.8138
0.5357
0.9076
0.1011
0.5477
0.8647
13.6036
0.9077
0.9548
17.3217
0.8848
0.5592
0.9006
0.7155
0.8153
0.9571
0.5658
0.752
0.8948
0.7778
0.8654
0.9076
0.8526
0.8646
0.8295
0.924
0.5944
0.9257
0.7962
0.4937
0.0383
0.3486
0.0177
0.098
0.8152
0.8278
10.9455
0.8394
0.9038
11.0421
0.7591
0.6956
2.7616
27.0298
10.1225
0.1011
23.6235
Qwen2ForCausalLM
bfloat16
apache-2.0
32.764
15
main
4
True
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4
0.6263
0
0.5838
0.3282
0.7456
0.9077
0.942
0.8599
0.7804
0.7274
0.9129
0.1011
0.8925
0.8744
14.9404
0.9161
0.9593
18.8572
0.8923
0.5838
0.9168
0.681
0.8
0.9786
0.6158
0.7151
0.8541
0.7942
0.7727
0.9129
0.907
0.879
0.8278
0.942
0
0
0.7762
0.6739
0.0614
0.4815
0.0796
0.1164
0.902
0.8538
13.5185
0.8596
0.9147
12.3744
0.7715
0.6897
3.77
24.947
10.1003
0.1011
22.0284
LlamaForCausalLM
bfloat16
llama3.3;gemma
70.554
5
main
4
False
v1.4.1
v0.6.3.post1
⭕ : instruction-tuned
tokyotech-llm/Llama-3.3-Swallow-70B-Instruct-v0.4
0.3868
0
0.3231
0.1775
0.1628
0.8538
0.002
0.8398
0.6971
0.3832
0.7147
0.1011
0.542
0.8582
13.5448
0.9077
0.9559
17.3186
0.8867
0.3231
0.8747
0.6236
0.7028
0.9508
0.2698
0.1186
0.6109
0.7367
0.8117
0.7147
0.8859
0.8672
0.7359
0.002
0
0
0.207
0.3379
0.0174
0.0131
0.0088
0.0084
0.8399
0.8126
10.3546
0.8355
0.8942
11.0328
0.7292
0.6897
3.77
24.947
10.1003
0.1011
22.0284
LlamaForCausalLM
bfloat16
llama3.3;gemma
70.554
5
main
0
False
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
tokyo-electron-device-ai/llama3-tedllm-8b-v0-annealing
0.3594
0.1767
0.0135
0.0952
0.3198
0.5254
0.396
0.7853
0.4406
0.3242
0.8115
0.0647
0.4907
0.8392
10.2221
0.8751
0.9409
12.6459
0.8599
0.0135
0.7457
0.4167
0.5597
0.4477
0.1358
0.2945
0.2777
0.4811
0.4678
0.8115
0.574
0.6089
0.3829
0.396
0.1767
0.4659
0.345
0.346
0.0075
0.0008
0
0
0.4677
0.7425
6.7642
0.7323
0.8498
8.2575
0.674
0.6432
2.0367
15.634
6.4841
0.0647
13.9413
LlamaForCausalLM
bfloat16
llama3
8.135
0
main
0
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
tokyo-electron-device-ai/llama3-tedllm-8b-v0-annealing
0.5317
0.1767
0.4365
0.2453
0.5085
0.7762
0.714
0.82
0.6155
0.5948
0.896
0.0647
0.718
0.8565
11.572
0.8951
0.9491
15.4152
0.875
0.4365
0.8637
0.5287
0.6153
0.8642
0.5202
0.4558
0.592
0.7544
0.587
0.896
0.8032
0.7736
0.6007
0.714
0.1767
0.4659
0.5612
0.5463
0.0104
0.281
0.0442
0.0661
0.8246
0.8007
10.0399
0.7902
0.8961
10.871
0.7198
0.6432
2.0367
15.634
6.4841
0.0647
13.9413
LlamaForCausalLM
bfloat16
llama3
8.135
0
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
prithivMLmods/Sombrero-Opus-14B-Elite6
0.6515
0.6446
0.5714
0.2715
0.7382
0.8827
0.894
0.8466
0.7674
0.5384
0.9049
0.1069
0.5224
0.862
12.2798
0.9072
0.9534
16.6782
0.8834
0.5714
0.9021
0.6063
0.8014
0.9508
0.5879
0.7111
0.8615
0.7753
0.7924
0.9049
0.8868
0.8594
0.7951
0.894
0.6446
0.9819
0.7653
0.5051
0.0868
0.3756
0.0442
0.0791
0.7718
0.8259
10.698
0.8377
0.9042
10.7626
0.7582
0.7017
2.7809
28.2526
10.6992
0.1069
24.6508
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
2
main
4
True
v1.4.1
v0.6.3.post1
🔶 : fine-tuned
prithivMLmods/Sombrero-Opus-14B-Elite6
0.574
0.6446
0.245
0.1536
0.6993
0.8585
0.812
0.838
0.7353
0.3626
0.8581
0.1069
0.4055
0.8448
10.3969
0.8986
0.9534
15.7958
0.8838
0.245
0.8988
0.6236
0.8097
0.9366
0.353
0.6642
0.7769
0.6458
0.8206
0.8581
0.8913
0.8591
0.7403
0.812
0.6446
0.9819
0.7343
0.3294
0.0232
0.0123
0.0442
0.0021
0.6859
0.7962
8.131
0.8187
0.8966
9.2831
0.7508
0.7017
2.7809
28.2526
10.6992
0.1069
24.6508
Qwen2ForCausalLM
bfloat16
apache-2.0
14.766
2
main
0
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Sorawiz/Qwen2.5-14B-Instinct-Talk
0.6522
0.6185
0.5659
0.2852
0.7474
0.8851
0.886
0.8444
0.7913
0.5489
0.9172
0.0849
0.5475
0.8614
12.7099
0.9065
0.9534
17.1545
0.8825
0.5659
0.9008
0.6609
0.8319
0.9562
0.5662
0.7167
0.8698
0.7917
0.8021
0.9172
0.9015
0.8766
0.7983
0.886
0.6185
0.9518
0.778
0.533
0.0573
0.3661
0.1416
0.0749
0.7863
0.8296
11.777
0.8376
0.9041
11.5986
0.7508
0.6782
2.6963
22.1774
8.483
0.0849
19.6408
Qwen2ForCausalLM
bfloat16
14.766
1
main
4
True
v1.4.1
v0.6.3.post1
🤝 : base merges and moerges
Sorawiz/Qwen2.5-14B-Instinct-Talk
0.5283
0.6185
0.2024
0.1509
0.3081
0.8499
0.792
0.8419
0.7382
0.354
0.871
0.0849
0.4505
0.8469
11.131
0.905
0.9538
15.7058
0.8849
0.2024
0.894
0.6437
0.7792
0.9303
0.3817
0.5761
0.7153
0.7443
0.8086
0.871
0.8708
0.8489
0.7254
0.792
0.6185
0.9518
0.04
0.2299
0.0129
0.0077
0.0088
0.002
0.7228
0.7997
8.8547
0.8276
0.8969
9.6847
0.75
0.6782
2.6963
22.1774
8.483
0.0849
19.6408
Qwen2ForCausalLM
bfloat16
14.766
1
main
0
True
v1.4.1
v0.6.3.post1
🟢 : pretrained
meta-llama/Llama-3.1-8B
0.4896
0.012
0.4126
0.2357
0.5315
0.6845
0.712
0.8244
0.6016
0.4354
0.8937
0.0425
0.4407
0.8519
11.8018
0.8869
0.9498
15.9382
0.8741
0.4126
0.6816
0.4799
0.5736
0.8374
0.4577
0.4648
0.6191
0.726
0.6095
0.8937
0.7814
0.7524
0.5344
0.712
0.012
0.0201
0.5981
0.4077
0.0138
0.3127
0.0619
0.0513
0.7388
0.8118
11.091
0.8035
0.8997
10.8934
0.7333
0.6238
1.0888
10.3823
4.2447
0.0425
9.2067
LlamaForCausalLM
bfloat16
llama3.1
8.03
1626
main
4
False
v1.4.1
v0.6.3.post1
🟢 : pretrained
meta-llama/Llama-3.1-8B
0.3288
0.012
0.0217
0.063
0.2956
0.4751
0.426
0.7933
0.4383
0.268
0.7814
0.0425
0.2629
0.8306
9.5688
0.8679
0.9432
13.7085
0.8647
0.0217
0.7112
0.4023
0.5
0.4245
0.305
0.2897
0.2366
0.666
0.3864
0.7814
0.5967
0.6033
0.2896
0.426
0.012
0.0201
0.3015
0.236
0
0.0039
0
0
0.3111
0.7627
7.2546
0.7471
0.8787
8.5829
0.6935
0.6238
1.0888
10.3823
4.2447
0.0425
9.2067
LlamaForCausalLM
bfloat16
llama3.1
8.03
1626
main
0
False
v1.4.1
v0.6.3.post1