leaderboard-pr-bot
committed on
Adding Evaluation Results
This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr
The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.
If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
README.md
CHANGED
```diff
@@ -1,11 +1,114 @@
 ---
-license: apache-2.0
-datasets:
-- togethercomputer/RedPajama-Data-1T-Sample
 language:
 - en
+license: apache-2.0
 tags:
 - llama
+datasets:
+- togethercomputer/RedPajama-Data-1T-Sample
+model-index:
+- name: SmolLlamix-8x101M
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 22.7
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=chargoddard/SmolLlamix-8x101M
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 28.5
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=chargoddard/SmolLlamix-8x101M
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 24.69
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=chargoddard/SmolLlamix-8x101M
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 46.09
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=chargoddard/SmolLlamix-8x101M
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 51.3
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=chargoddard/SmolLlamix-8x101M
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 0.61
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=chargoddard/SmolLlamix-8x101M
+      name: Open LLM Leaderboard
 ---
 
 This is eight copies of [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) ensembled into a Mixtral model, then trained very briefly on a small subset of RedPajama. Mostly just an experiment to demonstrate that training it works at all.
@@ -13,4 +116,17 @@ This is eight copies of [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface
 It's very, very smart. Probably the smartest model ever made. Better than GPT-5. See its thoughts on the internet:
 
 > In a world where the internet is so much more than a web browser, it's also very important to have a good understanding of how the internet works.
-> The first thing we need to do is to understand what the internet looks like and what the future looks like. We can use the internet to look at the internet's history, but we don't want to go into detail about the history of the internet. The internet was created by the internet's history, which is often called the history of the internet. It was originally developed as a way for people to learn about the internet, but it wasn't until the 1960s that the internet became a place to work. Today, the internet is used in many ways, from the internet's history to the internet itself.
+> The first thing we need to do is to understand what the internet looks like and what the future looks like. We can use the internet to look at the internet's history, but we don't want to go into detail about the history of the internet. The internet was created by the internet's history, which is often called the history of the internet. It was originally developed as a way for people to learn about the internet, but it wasn't until the 1960s that the internet became a place to work. Today, the internet is used in many ways, from the internet's history to the internet itself.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__SmolLlamix-8x101M)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |28.98|
+|AI2 Reasoning Challenge (25-Shot)|22.70|
+|HellaSwag (10-Shot)              |28.50|
+|MMLU (5-Shot)                    |24.69|
+|TruthfulQA (0-shot)              |46.09|
+|Winogrande (5-shot)              |51.30|
+|GSM8k (5-shot)                   | 0.61|
+
```
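The card says the model is eight copies of the base checkpoint ensembled into a Mixtral-style mixture-of-experts, but not how the ensemble was assembled. One common way to build such a model is mergekit's `mergekit-moe` tool; a configuration along these lines would produce the same shape. This is a hypothetical sketch under that assumption, not the author's actual recipe:

```yaml
# Hypothetical mergekit-moe config: eight identical copies of the base
# model as experts. gate_mode: random initializes the router weights
# randomly, which suits a "train it briefly afterwards" experiment.
base_model: BEE-spoke-data/smol_llama-101M-GQA
gate_mode: random
dtype: bfloat16
experts:
  - source_model: BEE-spoke-data/smol_llama-101M-GQA
    positive_prompts: []
  - source_model: BEE-spoke-data/smol_llama-101M-GQA
    positive_prompts: []
  # ...repeated for a total of eight expert entries
```

With random gating the `positive_prompts` lists are unused; they matter only for the hidden-state gate modes.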
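For reference, the leaderboard's `Avg.` row is simply the unweighted mean of the six benchmark scores in the table above:

```python
# Reproduce the leaderboard "Avg." from the six per-benchmark scores.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 22.70,
    "HellaSwag (10-Shot)": 28.50,
    "MMLU (5-Shot)": 24.69,
    "TruthfulQA (0-shot)": 46.09,
    "Winogrande (5-shot)": 51.30,
    "GSM8k (5-shot)": 0.61,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 28.98
```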