Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 1 new columns ({'categories'}) and 49 missing columns ({'spanish_xnli_es_spanish_bench', 'portuguese_score', 'translation_flores_ita_spa', 'spanish_teleia', 'spanish_escola', 'structured_extraction_hallucination_rate', 'translation_flores_ita_por', 'spanish_teleia_cervantes_ave', 'translation_flores_cmn_spa', 'teleia_teleia_siele', 'translation_flores_fra_por', 'translation_flores_por_spa', 'structured_extraction_field_f1_partial', 'spanish_openbookqa_es', 'structured_extraction_schema_validity', 'spanish_score', 'teleia_teleia_cervantes_ave', 'translation_flores_plus_bidirectional', 'translation_flores_eng_spa', 'translation_opus_100_en-es', 'portuguese_faquad_nli', 'teleia_score', 'translation_flores_hin_spa', 'translation_flores_arb_spa', 'spanish_mgsm_direct_es_spanish_bench', 'spanish_wnli_es', 'portuguese_bluex', 'teleia_teleia_pce', 'structured_extraction_extraction_quality_score', 'spanish_copa_es', 'portuguese_assin2_rte', 'translation_flores_deu_spa', 'structured_extraction_composite_score', 'portuguese_oab_exams', 'spanish_teleia_pce', 'translation_flores_fra_spa', 'translation_opus_100_en-pt', 'translation_flores_hin_por', 'spanish_teleia_siele', 'translation_opus', 'structured_extraction_score', 'translation_flores_cmn_por', 'translation_score', 'translation_flores_eng_por', 'translation_flores_deu_por', 'spanish_paws_es_spanish_bench', 'portuguese_enem_challenge', 'translation_flores_arb_por', 'spanish_spanish'}).
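The mismatch above can be sketched with two toy records (made up for the example, not the real files): the flat summary files expose per-task columns at the top level, while one file nests everything under a single `categories` column, so the column sets differ.

```python
# Toy illustration of the schema mismatch reported above: one flat
# summary record vs. one nested record with a `categories` struct.
# Both records are fabricated for the example.
flat_record = {"model_name": "AFM-4.5B", "overall_latam_score": 0.5892,
               "spanish_score": 0.528}
nested_record = {"model_name": "AFM-4.5B", "overall_latam_score": 0.5892,
                 "categories": {"spanish": {"overall_score": 0.528}}}

new_columns = set(nested_record) - set(flat_record)      # columns only in the nested file
missing_columns = set(flat_record) - set(nested_record)  # flat columns the nested file lacks
print(new_columns, missing_columns)  # → {'categories'} {'spanish_score'}
```

The real error is the same comparison at scale: 1 new column (`categories`) and 49 missing flat score columns.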

This happened while the json dataset builder was generating data using

hf://datasets/LatamBoard/leaderboard-results/summaries/AFM-4.5B_summary.json (at revision 07f02222e95e34a4c8d46524b72ad8650d37525a)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
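The second option (separate configurations) is declared in the dataset README's YAML front matter. A minimal sketch, assuming the repo keeps the nested summary files under `summaries/` and the flat per-task files under `results/` (paths and config names here are illustrative, not taken from the repo):

```yaml
# README.md front matter (illustrative paths and config names)
configs:
  - config_name: results
    data_files: "results/*.json"
  - config_name: summaries
    data_files: "summaries/*.json"
```

With each configuration covering only files that share one schema, the viewer can build each split without the cast error.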
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              model_name: string
              publisher: string
              full_model_name: string
              categories: struct<spanish: struct<task_scores: struct<spanish: struct<score: double, stderr: double, metric: st (... 5227 chars omitted)
                child 0, spanish: struct<task_scores: struct<spanish: struct<score: double, stderr: double, metric: string, alias: str (... 1572 chars omitted)
                    child 0, task_scores: struct<spanish: struct<score: double, stderr: double, metric: string, alias: string>, copa_es: struc (... 928 chars omitted)
                        child 0, spanish: struct<score: double, stderr: double, metric: string, alias: string>
                            child 0, score: double
                            child 1, stderr: double
                            child 2, metric: string
                            child 3, alias: string
                        child 1, copa_es: struct<score: double, stderr: double, metric: string, alias: string>
                            child 0, score: double
                            child 1, stderr: double
                            child 2, metric: string
                            child 3, alias: string
                        child 2, escola: struct<score: double, stderr: double, metric: string, alias: string>
                            child 0, score: double
                            child 1, stderr: double
                            child 2, metric: string
                            child 3, alias: string
                        child 3, mgsm_direct_es_spanish_bench: struct<score: double, stderr: double, metric: string, alias: string>
                            child 0, score: double
                            child 1, stderr: double
                            child 2, metric: string
                            child 3, alias: s
              ...
              ld 1, stderr: double
                            child 2, metric: string
                            child 3, alias: string
                        child 3, field_f1_partial: struct<score: double, stderr: double, metric: string, alias: string>
                            child 0, score: double
                            child 1, stderr: double
                            child 2, metric: string
                            child 3, alias: string
                        child 4, hallucination_rate: struct<score: double, stderr: double, metric: string, alias: string>
                            child 0, score: double
                            child 1, stderr: double
                            child 2, metric: string
                            child 3, alias: string
                    child 2, category_scores: struct<structured_extraction: struct<score: double, stderr: double, metric: string, alias: string>>
                        child 0, structured_extraction: struct<score: double, stderr: double, metric: string, alias: string>
                            child 0, score: double
                            child 1, stderr: double
                            child 2, metric: string
                            child 3, alias: string
                    child 3, top_level_scores: struct<structured_extraction: struct<score: double, stderr: double, metric: string, alias: string>>
                        child 0, structured_extraction: struct<score: double, stderr: double, metric: string, alias: string>
                            child 0, score: double
                            child 1, stderr: double
                            child 2, metric: string
                            child 3, alias: string
                    child 4, overall_score: double
                    child 5, evaluation_time: string
              overall_latam_score: double
              to
              {'model_name': Value('string'), 'publisher': Value('string'), 'full_model_name': Value('string'), 'overall_latam_score': Value('float64'), 'spanish_score': Value('float64'), 'spanish_spanish': Value('float64'), 'spanish_copa_es': Value('float64'), 'spanish_escola': Value('float64'), 'spanish_mgsm_direct_es_spanish_bench': Value('float64'), 'spanish_openbookqa_es': Value('float64'), 'spanish_paws_es_spanish_bench': Value('float64'), 'spanish_teleia': Value('float64'), 'spanish_teleia_cervantes_ave': Value('float64'), 'spanish_teleia_pce': Value('float64'), 'spanish_teleia_siele': Value('float64'), 'spanish_wnli_es': Value('float64'), 'spanish_xnli_es_spanish_bench': Value('float64'), 'teleia_score': Value('float64'), 'teleia_teleia_cervantes_ave': Value('float64'), 'teleia_teleia_pce': Value('float64'), 'teleia_teleia_siele': Value('float64'), 'portuguese_score': Value('float64'), 'portuguese_assin2_rte': Value('float64'), 'portuguese_bluex': Value('float64'), 'portuguese_enem_challenge': Value('float64'), 'portuguese_faquad_nli': Value('float64'), 'portuguese_oab_exams': Value('float64'), 'translation_score': Value('float64'), 'translation_flores_plus_bidirectional': Value('float64'), 'translation_flores_arb_por': Value('float64'), 'translation_flores_arb_spa': Value('float64'), 'translation_flores_cmn_por': Value('float64'), 'translation_flores_cmn_spa': Value('float64'), 'translation_flores_deu_por': Value('float64'), 'translation_flores_deu_spa': Value('float64'), 'translation_flores_eng_por': Value('float64'), 'translation_flores_eng_spa': Value('float64'), 'translation_flores_fra_por': Value('float64'), 'translation_flores_fra_spa': Value('float64'), 'translation_flores_hin_por': Value('float64'), 'translation_flores_hin_spa': Value('float64'), 'translation_flores_ita_por': Value('float64'), 'translation_flores_ita_spa': Value('float64'), 'translation_flores_por_spa': Value('float64'), 'translation_opus': Value('float64'), 
'translation_opus_100_en-es': Value('float64'), 'translation_opus_100_en-pt': Value('float64'), 'structured_extraction_score': Value('float64'), 'structured_extraction_extraction_quality_score': Value('float64'), 'structured_extraction_composite_score': Value('float64'), 'structured_extraction_schema_validity': Value('float64'), 'structured_extraction_field_f1_partial': Value('float64'), 'structured_extraction_hallucination_rate': Value('float64')}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1455, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1054, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 894, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 970, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              


Preview schema (53 columns):

  • model_name: string
  • publisher: string
  • full_model_name: string
  • The remaining 50 columns are float64 scores: overall_latam_score, spanish_score, spanish_spanish, spanish_copa_es, spanish_escola, spanish_mgsm_direct_es_spanish_bench, spanish_openbookqa_es, spanish_paws_es_spanish_bench, spanish_teleia, spanish_teleia_cervantes_ave, spanish_teleia_pce, spanish_teleia_siele, spanish_wnli_es, spanish_xnli_es_spanish_bench, teleia_score, teleia_teleia_cervantes_ave, teleia_teleia_pce, teleia_teleia_siele, portuguese_score, portuguese_assin2_rte, portuguese_bluex, portuguese_enem_challenge, portuguese_faquad_nli, portuguese_oab_exams, translation_score, translation_flores_plus_bidirectional, translation_flores_arb_por, translation_flores_arb_spa, translation_flores_cmn_por, translation_flores_cmn_spa, translation_flores_deu_por, translation_flores_deu_spa, translation_flores_eng_por, translation_flores_eng_spa, translation_flores_fra_por, translation_flores_fra_spa, translation_flores_hin_por, translation_flores_hin_spa, translation_flores_ita_por, translation_flores_ita_spa, translation_flores_por_spa, translation_opus, translation_opus_100_en-es, translation_opus_100_en-pt, structured_extraction_score, structured_extraction_extraction_quality_score, structured_extraction_composite_score, structured_extraction_schema_validity, structured_extraction_field_f1_partial, structured_extraction_hallucination_rate
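One way to reconcile the two layouts is to flatten a nested summary into the flat `<category>_<task>` columns above before upload. A minimal sketch, assuming the nested files follow the struct dump in the error (each task_scores entry carrying a `score` field); the field names come from that dump and the example record is made up:

```python
# Flatten a nested summary (with a `categories` struct) into the flat
# column layout; field names are assumptions based on the error's
# struct dump, and the example record is fabricated.
def flatten_summary(summary):
    flat = {k: v for k, v in summary.items() if k != "categories"}
    for cat, payload in summary.get("categories", {}).items():
        if "overall_score" in payload:
            flat[f"{cat}_score"] = payload["overall_score"]
        for task, cell in payload.get("task_scores", {}).items():
            flat[f"{cat}_{task}"] = cell["score"]
    return flat

example = {
    "model_name": "AFM-4.5B",
    "publisher": "arcee-ai",
    "full_model_name": "arcee-ai/AFM-4.5B",
    "overall_latam_score": 0.5892,
    "categories": {
        "spanish": {
            "overall_score": 0.528,
            "task_scores": {"copa_es": {"score": 0.7}},
        }
    },
}
print(flatten_summary(example))
```

Applied to every summary file, this would give all files the same flat columns and let the viewer cast them to one schema.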
Preview rows (comma-separated, in the column order above):

Yi-1.5-6B-Chat, 01-ai, 01-ai/Yi-1.5-6B-Chat, 0.5457, 0.5376, 0.5356, 0.684, 0.661, 0.096, 0.28, 0.615, 0.619, 0.6667, 0.4286, 0.5, 0.5634, 0.4398, 0.619, 0.6667, 0.4286, 0.5, 0.7682, 0.8145, 0.4798, 0.5136, 0.6463, 0.4027, 0.1376, 0.1227, 0.1305, 0.1314, 0.3193, 0.2975, 0.4509, 0.4009, 0.6228, 0.5396, 0.5406, 0.4726, 0.2383, 0.2276, 0.4602, 0.4234, 0.4684, 0.2492, 0.5351, 0.4817, 0.7392, 0.7392, 0.881, 0.8887, 0.635, 0.1858
zephyr-7b-beta, HuggingFaceH4, HuggingFaceH4/zephyr-7b-beta, 0.6545, 0.5687, 0.5441, 0.768, 0.6695, 0.06, 0.356, 0.602, 0.6667, 0.3333, 0.5714, 0.75, 0.6338, 0.4345, 0.6667, 0.3333, 0.5714, 0.75, 0.8272, 0.8956, 0.4757, 0.5612, 0.6206, 0.4082, 0.4208, 0.4087, 0.1324, 0.1432, 0.3066, 0.3014, 0.5099, 0.4735, 0.6544, 0.5476, 0.5743, 0.5139, 0.2389, 0.2456, 0.502, 0.4799, 0.5076, 0.5117, 0.5293, 0.494, 0.8011, 0.8011, 0.8868, 0.9769, 0.7115, 0.1258
gemma-3n-E2B-it, google, google/gemma-3n-E2B-it, 0.5946, 0.58, 0.5283, 0.786, 0.699, 0.064, 0.336, 0.5695, 0.619, 0.5, 0.4286, 0.875, 0.6197, 0.4064, 0.619, 0.5, 0.4286, 0.875, 0.8557, 0.924, 0.5591, 0.6788, 0.7034, 0.4715, 0.0965, 0.0747, 0.164, 0.1685, 0.1841, 0.172, 0.2627, 0.2383, 0.6554, 0.5507, 0.2776, 0.2688, 0.1546, 0.1358, 0.2703, 0.2701, 0.2768, 0.2603, 0.5713, 0.5307, 0.846, 0.846, 0.9029, 0.8908, 0.7643, 0.0558
Ministral-8B-Instruct-2410, mistralai, mistralai/Ministral-8B-Instruct-2410, 0.6725, 0.6166, 0.569, 0.84, 0.6087, 0.068, 0.388, 0.62, 0.619, 0.6667, 0.4286, 0.75, 0.7606, 0.4871, 0.619, 0.6667, 0.4286, 0.75, 0.8604, 0.9206, 0.5981, 0.7096, 0.7147, 0.5148, 0.3728, 0.3507, 0.1636, 0.1637, 0.187, 0.2015, 0.3833, 0.399, 0.6973, 0.5687, 0.4688, 0.4539, 0.159, 0.1849, 0.3787, 0.4227, 0.4279, 0.5385, 0.5737, 0.5033, 0.8402, 0.8402, 0.8937, 0.9916, 0.7622, 0.085
DeepSeek-R1-Distill-Qwen-7B, deepseek-ai, deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, 0.5035, 0.4438, 0.4625, 0.58, 0.3742, 0.084, 0.24, 0.5675, 0.4286, 0.5, 0.4286, 0.375, 0.493, 0.4361, 0.4286, 0.5, 0.4286, 0.375, 0.7526, 0.7817, 0.484, 0.499, 0.4392, 0.3645, 0.048, 0.0411, 0.1444, 0.1448, 0.1455, 0.144, 0.2252, 0.2215, 0.5401, 0.4741, 0.2479, 0.244, 0.1138, 0.116, 0.2479, 0.2455, 0.2447, 0.0996, 0.3642, 0.3408, 0.7698, 0.7698, 0.8774, 0.9223, 0.6734, 0.1579
Llama-3.1-8B-Instruct, meta-llama, meta-llama/Llama-3.1-8B-Instruct, 0.6179, 0.6278, 0.5916, 0.818, 0.6923, 0.008, 0.352, 0.6485, 0.6667, 0.5, 0.5714, 0.875, 0.6901, 0.5024, 0.6667, 0.5, 0.5714, 0.875, 0.8611, 0.9074, 0.6036, 0.7131, 0.6145, 0.5185, 0.1702, 0.1623, 0.1727, 0.1812, 0.3163, 0.3071, 0.5493, 0.492, 0.663, 0.552, 0.5967, 0.5225, 0.3926, 0.3337, 0.5307, 0.5017, 0.5174, 0.229, 0.5714, 0.4922, 0.8127, 0.8127, 0.9047, 0.9853, 0.7297, 0.1236
Qwen3-4B-Instruct-2507, Qwen, Qwen/Qwen3-4B-Instruct-2507, 0.6269, 0.6112, 0.5533, 0.738, 0.6714, 0, 0.298, 0.6145, 0.6667, 0.6667, 0.4286, 0.875, 0.7465, 0.4618, 0.6667, 0.6667, 0.4286, 0.875, 0.8839, 0.9264, 0.6732, 0.7537, 0.8119, 0.5139, 0.1656, 0.1516, 0.1147, 0.1142, 0.3625, 0.3409, 0.5467, 0.492, 0.6452, 0.5359, 0.5801, 0.5024, 0.4084, 0.3769, 0.5161, 0.4781, 0.5037, 0.2702, 0.5614, 0.5193, 0.8468, 0.8468, 0.898, 0.9937, 0.7668, 0.066
DeepSeek-R1-Distill-Llama-8B, deepseek-ai, deepseek-ai/DeepSeek-R1-Distill-Llama-8B, 0.5433, 0.4791, 0.5424, 0.73, 0.6486, 0.004, 0.292, 0.6095, 0.3333, 0, 0.4286, 0.5, 0.6479, 0.455, 0.3333, 0, 0.4286, 0.5, 0.8072, 0.8531, 0.4924, 0.5689, 0.6801, 0.4337, 0.078, 0.0571, 0.1604, 0.1649, 0.1943, 0.1641, 0.2622, 0.2287, 0.6314, 0.5264, 0.2572, 0.2436, 0.139, 0.1231, 0.254, 0.2453, 0.2424, 0.2345, 0.522, 0.4845, 0.8088, 0.8088, 0.8747, 0.9895, 0.7227, 0.1227
Llama-3.2-3B-Instruct, meta-llama, meta-llama/Llama-3.2-3B-Instruct, 0.5532, 0.5145, 0.4975, 0.742, 0.4435, 0.004, 0.274, 0.5785, 0.5238, 0.3333, 0.4286, 0.75, 0.6338, 0.447, 0.5238, 0.3333, 0.4286, 0.75, 0.7993, 0.8339, 0.4951, 0.5962, 0.6762, 0.4333, 0.1514, 0.1353, 0.1596, 0.1745, 0.3519, 0.3343, 0.4637, 0.3775, 0.659, 0.542, 0.4855, 0.3511, 0.3778, 0.3148, 0.45, 0.3999, 0.3936, 0.2722, 0.5576, 0.5112, 0.7473, 0.7473, 0.8966, 0.9811, 0.6497, 0.2082
aya-expanse-8b, CohereLabs, CohereLabs/aya-expanse-8b, 0.6864, 0.6226, 0.569, 0.808, 0.7066, 0.08, 0.366, 0.609, 0.6667, 0.6667, 0.4286, 0.875, 0.6761, 0.4675, 0.6667, 0.6667, 0.4286, 0.875, 0.8534, 0.9056, 0.5466, 0.6515, 0.7644, 0.4765, 0.4824, 0.4697, 0.1538, 0.1547, 0.4261, 0.3948, 0.5825, 0.4866, 0.7041, 0.5557, 0.6046, 0.5307, 0.4967, 0.4428, 0.541, 0.4929, 0.4779, 0.5775, 0.6032, 0.5518, 0.7871, 0.7871, 0.8816, 0.9958, 0.6972, 0.1552
Yi-1.5-9B-Chat, 01-ai, 01-ai/Yi-1.5-9B-Chat, 0.5754, 0.5355, 0.5465, 0.722, 0.7104, 0.188, 0.266, 0.63, 0.7143, 0.3333, 0.4286, 0.625, 0.6761, 0.4281, 0.7143, 0.3333, 0.4286, 0.625, 0.8356, 0.8911, 0.3574, 0.6396, 0.7618, 0.4624, 0.1622, 0.152, 0.1231, 0.1267, 0.3423, 0.3306, 0.4771, 0.4216, 0.6326, 0.5273, 0.5472, 0.4758, 0.291, 0.2751, 0.4725, 0.4396, 0.485, 0.2387, 0.5194, 0.4749, 0.7683, 0.7683, 0.8786, 0.9055, 0.6696, 0.1517
Hermes-3-Llama-3.1-8B, NousResearch, NousResearch/Hermes-3-Llama-3.1-8B, 0.6987, 0.6176, 0.5742, 0.834, 0.7085, 0, 0.372, 0.6035, 0.619, 0.5, 0.4286, 0.875, 0.7606, 0.4767, 0.619, 0.5, 0.4286, 0.875, 0.8622, 0.9133, 0.573, 0.6739, 0.7108, 0.4902, 0.4655, 0.4594, 0.1916, 0.1876, 0.3648, 0.3476, 0.544, 0.4953, 0.6799, 0.5588, 0.5886, 0.5248, 0.4484, 0.4051, 0.5315, 0.5014, 0.5212, 0.5112, 0.5352, 0.4872, 0.8493, 0.8493, 0.8922, 0.9937, 0.7711, 0.0663
Hunyuan-MT-7B, tencent, tencent/Hunyuan-MT-7B, 0.5378, 0.4844, 0.5035, 0.678, 0.5983, 0.024, 0.304, 0.5795, 0.4286, 0.3333, 0.2857, 0.625, 0.5493, 0.4068, 0.4286, 0.3333, 0.2857, 0.625, 0.833, 0.906, 0.3616, 0.4864, 0.4696, 0.3718, 0.1036, 0.1027, 0.0475, 0.0693, 0.3956, 0.385, 0.3729, 0.3377, 0.2069, 0.189, 0.3558, 0.3496, 0.3279, 0.2951, 0.3966, 0.3677, 0.3262, 0.1105, 0.3849, 0.2255, 0.7301, 0.7301, 0.8805, 0.9706, 0.6247, 0.2167
Apertus-8B-Instruct-2509, swiss-ai, swiss-ai/Apertus-8B-Instruct-2509, 0.6838, 0.6358, 0.5898, 0.842, 0.7066, 0.012, 0.394, 0.612, 0.7143, 0.6667, 0.4286, 0.875, 0.6901, 0.5076, 0.7143, 0.6667, 0.4286, 0.875, 0.8675, 0.9175, 0.5355, 0.6697, 0.7526, 0.4888, 0.4799, 0.4676, 0.1815, 0.1798, 0.4324, 0.3865, 0.5834, 0.4744, 0.6915, 0.5263, 0.6142, 0.4901, 0.5079, 0.4413, 0.5396, 0.4682, 0.4969, 0.5724, 0.5987, 0.5461, 0.752, 0.752, 0.8848, 0.979, 0.6575, 0.208
Seed-X-PPO-7B, ByteDance-Seed, ByteDance-Seed/Seed-X-PPO-7B, 0.4226, 0.571, 0.5293, 0.822, 0.5907, 0.044, 0.358, 0.6, 0.6667, 0.6667, 0.2857, 0.75, 0.6479, 0.4185, 0.6667, 0.6667, 0.2857, 0.75, 0.779, 0.8227, 0.4353, 0.5514, 0.4397, 0.3681, 0.088, 0.057, 0.1687, 0.1724, 0.1897, 0.1909, 0.2328, 0.2284, 0.5919, 0.5596, 0.2512, 0.2442, 0.1275, 0.127, 0.2485, 0.2447, 0.2415, 0.3206, 0.5791, 0.5413, 0.2525, 0.2525, 0.932, 0.1996, 0.064, 0.1855
gemma-3n-E4B-it, google, google/gemma-3n-E4B-it, 0.6611, 0.6412, 0.5665, 0.838, 0.7189, 0.124, 0.388, 0.6115, 0.7143, 0.6667, 0.4286, 1, 0.6761, 0.443, 0.7143, 0.6667, 0.4286, 1, 0.8654, 0.926, 0.605, 0.7152, 0.7119, 0.5198, 0.3023, 0.2677, 0.178, 0.1792, 0.1806, 0.1705, 0.2381, 0.2338, 0.7052, 0.5722, 0.2628, 0.2625, 0.1334, 0.1277, 0.2602, 0.256, 0.2552, 0.5616, 0.5862, 0.5371, 0.8354, 0.8354, 0.8947, 0.9811, 0.7503, 0.0727
Phi-4-mini-instruct, microsoft, microsoft/Phi-4-mini-instruct, 0.5669, 0.5082, 0.5266, 0.736, 0.529, 0.144, 0.332, 0.582, 0.5238, 0.3333, 0.2857, 0.625, 0.6761, 0.4747, 0.5238, 0.3333, 0.2857, 0.625, 0.8227, 0.8871, 0.5188, 0.6081, 0.6872, 0.4323, 0.1871, 0.1775, 0.1477, 0.1552, 0.3596, 0.3515, 0.5061, 0.4685, 0.6646, 0.5613, 0.5763, 0.5065, 0.3016, 0.2835, 0.5025, 0.4747, 0.4955, 0.2586, 0.5453, 0.4907, 0.7495, 0.7495, 0.8956, 0.9916, 0.6539, 0.2124
AFM-4.5B, arcee-ai, arcee-ai/AFM-4.5B, 0.5892, 0.528, 0.5387, 0.7, 0.7132, 0.052, 0.304, 0.6035, 0.4286, 0.3333, 0.4286, 0.5, 0.7465, 0.4225, 0.4286, 0.3333, 0.4286, 0.5, 0.82, 0.8719, 0.4979, 0.5598, 0.7549, 0.3977, 0.262, 0.2542, 0.128, 0.1236, 0.234, 0.1958, 0.251, 0.2365, 0.485, 0.4211, 0.2698, 0.274, 0.1849, 0.1844, 0.269, 0.2709, 0.2845, 0.3204, 0.355, 0.2858, 0.7468, 0.7468, 0.8804, 0.9727, 0.6453, 0.196
Hormoz-8B, mann-e, mann-e/Hormoz-8B, 0.6179, 0.6096, 0.5709, 0.814, 0.7094, 0.092, 0.358, 0.619, 0.6667, 0.6667, 0.4286, 0.75, 0.6761, 0.4643, 0.6667, 0.6667, 0.4286, 0.75, 0.8565, 0.909, 0.5299, 0.6431, 0.7663, 0.4811, 0.2175, 0.2023, 0.154, 0.1551, 0.4285, 0.3945, 0.5789, 0.4586, 0.7038, 0.559, 0.5862, 0.4954, 0.4985, 0.4409, 0.5245, 0.4555, 0.426, 0.3313, 0.6041, 0.5498, 0.7881, 0.7881, 0.8792, 0.9958, 0.6985, 0.1545
Seed-X-Instruct-7B, ByteDance-Seed, ByteDance-Seed/Seed-X-Instruct-7B, 0.4279, 0.5872, 0.526, 0.822, 0.5954, 0.032, 0.352, 0.5985, 0.619, 0.6667, 0.4286, 0.75, 0.662, 0.4092, 0.619, 0.6667, 0.4286, 0.75, 0.7772, 0.825, 0.3755, 0.5248, 0.4397, 0.354, 0.0856, 0.0561, 0.1621, 0.1644, 0.1679, 0.1663, 0.2317, 0.2263, 0.5909, 0.56, 0.2493, 0.2419, 0.1253, 0.1245, 0.2467, 0.2422, 0.2399, 0.3072, 0.5809, 0.5129, 0.2615, 0.2615, 1, 0.0273, 0.0769, 0.0252
Additional preview rows (from the per-model summary files) carry only the first four columns; every other column is null:

AFM-4.5B, arcee-ai, arcee-ai/AFM-4.5B, 0.589192
Apertus-8B-Instruct-2509, swiss-ai, swiss-ai/Apertus-8B-Instruct-2509, 0.683834
DeepSeek-R1-Distill-Llama-8B, deepseek-ai, deepseek-ai/DeepSeek-R1-Distill-Llama-8B, 0.543268
DeepSeek-R1-Distill-Qwen-7B, deepseek-ai, deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, 0.503546
Hermes-3-Llama-3.1-8B, NousResearch, NousResearch/Hermes-3-Llama-3.1-8B, 0.69866
Hormoz-8B, mann-e, mann-e/Hormoz-8B, 0.617899
Hunyuan-MT-7B, tencent, tencent/Hunyuan-MT-7B, 0.537802
Llama-3.1-8B-Instruct, meta-llama, meta-llama/Llama-3.1-8B-Instruct, 0.617931
Llama-3.2-3B-Instruct, meta-llama, meta-llama/Llama-3.2-3B-Instruct, 0.55315
Ministral-8B-Instruct-2410, mistralai, mistralai/Ministral-8B-Instruct-2410, 0.672497
Phi-4-mini-instruct, microsoft, microsoft/Phi-4-mini-instruct, 0.566865
Qwen3-4B-Instruct-2507, Qwen, Qwen/Qwen3-4B-Instruct-2507, 0.626855
Seed-X-Instruct-7B, ByteDance-Seed, ByteDance-Seed/Seed-X-Instruct-7B, 0.427887
Seed-X-PPO-7B, ByteDance-Seed, ByteDance-Seed/Seed-X-PPO-7B, 0.422619
Yi-1.5-6B-Chat, 01-ai, 01-ai/Yi-1.5-6B-Chat, 0.545657
Yi-1.5-9B-Chat, 01-ai, 01-ai/Yi-1.5-9B-Chat, 0.575384
(one row entirely null)
aya-expanse-8b, CohereLabs, CohereLabs/aya-expanse-8b, 0.686363
gemma-3n-E2B-it, google, google/gemma-3n-E2B-it, 0.594583
gemma-3n-E4B-it, google, google/gemma-3n-E4B-it, 0.661055
zephyr-7b-beta, HuggingFaceH4, HuggingFaceH4/zephyr-7b-beta, 0.654483
(two rows entirely null)

mauroibz/leaderboard-results

Results from model evaluations on the leaderboard

This dataset contains evaluation results from the leaderboard system.

Structure

  • Each JSON file contains results for a specific model evaluation
  • Files are organized by organization/model structure
  • Each result file includes:
    • Model configuration
    • Evaluation results across different benchmarks
    • Metadata about the evaluation run
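Based only on the bullets above and the struct dump in the error, a single result file might look roughly like this (a sketch; the field values below are placeholders and the exact layout is an assumption):

```python
# Hypothetical shape of one result file, inferred from the bullets
# above and the schema shown earlier; not the actual file format.
result = {
    # model configuration
    "model_name": "AFM-4.5B",
    "publisher": "arcee-ai",
    "full_model_name": "arcee-ai/AFM-4.5B",
    # evaluation results across different benchmarks
    "overall_latam_score": 0.589192,
    "categories": {"spanish": {"overall_score": 0.528}},
    # metadata about the evaluation run (placeholder value)
    "evaluation_time": "unknown",
}
print(result["full_model_name"])
```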

Usage

These results are used by the leaderboard system to display model performance.
