Update README.md

README.md CHANGED

@@ -145,35 +145,31 @@ Evaluations on Serbian LLM eval suite (or rather, performance and knowledge of Serbian):
|-----------|-------|-------|-----------|-------|------------|------------|-------|---------|----------|-------|
| [Zamfir-7B](https://huggingface.co/Stopwolf/Zamfir-7B-slerp) | 51.85 | 32.25 | 46.03 | 75.59 | 62.59 | 26.00 | 66.81 | 16.09 | 36.11 | 45.92 |
| [Mustra-7B](https://huggingface.co/Stopwolf/Mustra-7B-Instruct-v0.1) | 52.95 | 33.70 | 45.89 | **77.55** | 64.17 | **30.60** | 67.25 | 15.40 | 34.84 | 46.93 |
- | [Tito-7B](https://huggingface.co/Stopwolf/Tito-7B) | 55.43 | **34.73** | 48.19 | 77.37 | **65.27** | 30.00 | 67.30 | **16.7** | 35.38 | **47.82** |
+ | [Tito-7B](https://huggingface.co/Stopwolf/Tito-7B-slerp) | 55.43 | **34.73** | 48.19 | 77.37 | **65.27** | 30.00 | 67.30 | **16.7** | 35.38 | **47.82** |
| [YugoGPT](https://huggingface.co/gordicaleksa/YugoGPT) | **57.79** | **34.73** | **49.89** | 69.45 | 64.56 | 28.20 | **72.03** | 15.82 | **36.14** | 47.62 |

Here, all benchmarks were run 0-shot, with the exception of NQ Open and TriviaQA, which were run 5-shot in order to be comparable to the Mistral paper.

- Evaluations on the Open LLM Leaderboard (or rather, performance and knowledge of English):
- | | ARC | Hellaswag | Winogrande | MMLU | GSM8k | TruthfulQA | Avg. |
- |---------|-------|-----------|------------|------|-------|------------|-------|
- | Tito-7B | 68.08 | 86.37 | 81.69 | 64.01 | 63.61 | 57.01 | 70.13 |
- | YugoGPT | 58.10 | 81.44 | 76.56 | 60.68 | 30.70 | 36.60 | 57.34 |

- Here, Winogrande, GSM8k, and MMLU were run 5-shot, HellaSwag 10-shot, and ARC 25-shot.

- If we try to replicate these approaches on available Serbian datasets (running the appropriate number of shots instead of 0), we get:
- | | ARC | Hellaswag | Winogrande | Avg. |
- |---------|-------|-----------|------------|-------|
- | Tito-7B | 47.27 | - | 69.93 | - |
- | YugoGPT | 44.03 | - | 70.64 | - |
+ If we try to replicate the Open LLM Leaderboard results on available Serbian datasets (running the appropriate number of shots instead of 0), we get:
+ | | ARC | Hellaswag | Winogrande | TruthfulQA | Avg. |
+ |---------|-------|-----------|------------|------------|-------|
+ | Tito-7B | 47.27 | - | 69.93 | - | - |
+ | YugoGPT | 44.03 | - | 70.64 | - | - |
+ | [Perucac-7B](https://huggingface.co/Stopwolf/Perucac-7B-slerp) | 49.74 | - | 71.98 | 56.03 | - |
+ | Llama3-8B | 42.24 | - | 61.25 | - | - |

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Stopwolf__Tito-7B-slerp)

- | Metric |Value|
- |---------------------------------|----:|
- |Avg. |70.13|
- |AI2 Reasoning Challenge (25-Shot)|68.09|
- |HellaSwag (10-Shot) |86.38|
- |MMLU (5-Shot) |64.01|
- |TruthfulQA (0-shot) |57.01|
- |Winogrande (5-shot) |81.69|
- |GSM8k (5-shot) |63.61|
+ | Metric |Tito | YugoGPT |
+ |---------------------------------|----:|--------:|
+ |Avg. |70.13| 57.34 |
+ |AI2 Reasoning Challenge (25-Shot)|68.09| 58.10 |
+ |HellaSwag (10-Shot) |86.38| 81.44 |
+ |MMLU (5-Shot) |64.01| 60.68 |
+ |TruthfulQA (0-shot) |57.01| 36.60 |
+ |Winogrande (5-shot) |81.69| 76.56 |
+ |GSM8k (5-shot) |63.61| 30.70 |
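
The shot counts in the Metric table above (ARC 25-shot, HellaSwag 10-shot, MMLU/Winogrande/GSM8k 5-shot, TruthfulQA 0-shot) can be scripted. Below is a minimal sketch, assuming EleutherAI's lm-evaluation-harness (`pip install lm-eval`, v0.4+); the card does not say which harness produced these numbers, and `truthfulqa_mc2` as the leaderboard's TruthfulQA task plus the `dtype` setting are likewise assumptions:

```python
# Sketch of the Open LLM Leaderboard shot configuration from the Metric table
# above. Assumes lm-evaluation-harness >= 0.4; the model card does not confirm
# which harness was used. truthfulqa_mc2 for the leaderboard's TruthfulQA
# metric is an assumption.
from lm_eval import simple_evaluate

# Task names follow the lm-evaluation-harness registry; shot counts follow
# the parentheticals in the Metric table.
LEADERBOARD_SHOTS = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "mmlu": 5,
    "truthfulqa_mc2": 0,
    "winogrande": 5,
    "gsm8k": 5,
}

for task, shots in LEADERBOARD_SHOTS.items():
    out = simple_evaluate(
        model="hf",
        model_args="pretrained=Stopwolf/Tito-7B-slerp,dtype=bfloat16",
        tasks=[task],
        num_fewshot=shots,
        batch_size=8,
    )
    print(task, out["results"][task])
```

Note that `mmlu` is a group task aggregating many subtasks, so a full run is slow; evaluating one task at a time, as here, makes it easy to match each leaderboard row.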
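
The Serbian-suite protocol at the top of the diff (everything 0-shot, with NQ Open and TriviaQA run 5-shot to match the Mistral paper) follows the same pattern, only with different task/shot pairs. A sketch under the same harness assumption; the Serbian task names below are hypothetical placeholders, since the card does not say how those tasks are registered:

```python
# Sketch of the Serbian eval-suite protocol described above: every task run
# 0-shot except NQ Open and TriviaQA at 5-shot (matching the Mistral paper).
# Assumes lm-evaluation-harness; the task names below are HYPOTHETICAL
# placeholders, not registered harness tasks. Substitute the names your local
# task registry actually uses.
from lm_eval import simple_evaluate

SERBIAN_TASK_SHOTS = [
    ("arc_easy_srb", 0),    # hypothetical name
    ("hellaswag_srb", 0),   # hypothetical name
    ("boolq_srb", 0),       # hypothetical name
    ("winogrande_srb", 0),  # hypothetical name
    ("nq_open_srb", 5),     # hypothetical name; 5-shot per the Mistral paper
    ("triviaqa_srb", 5),    # hypothetical name; 5-shot per the Mistral paper
]

for task, shots in SERBIAN_TASK_SHOTS:
    out = simple_evaluate(
        model="hf",
        model_args="pretrained=Stopwolf/Tito-7B-slerp,dtype=bfloat16",
        tasks=[task],
        num_fewshot=shots,
        batch_size=8,
    )
    print(task, out["results"][task])
```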