Update README.md
## Results
Evaluations on the Serbian LLM eval suite (that is, performance and knowledge of Serbian):

| | ARC-E | ARC-C | Hellaswag | BoolQ | Winogrande | OpenbookQA | PiQA | NQ Open | TriviaQA | Avg. |
|-----------|-------|-------|-----------|-------|------------|------------|-------|---------|----------|-------|
| [Zamfir-7B](https://huggingface.co/Stopwolf/Zamfir-7B-slerp) | 51.85 | 32.25 | 46.03 | 75.59 | 62.59 | 26.00 | 66.81 | 16.09 | 36.11 | 45.92 |
| [Mustra-7B](https://huggingface.co/Stopwolf/Mustra-7B-Instruct-v0.1) | 52.95 | 33.70 | 45.89 | **77.55** | 64.17 | **30.60** | 67.25 | 15.40 | 34.84 | 46.93 |
| [Tito-7B](https://huggingface.co/Stopwolf/Tito-7B) | 55.43 | **34.73** | 48.19 | 77.37 | **65.27** | 30.00 | 67.30 | **16.70** | 35.38 | **47.82** |
| [YugoGPT](https://huggingface.co/gordicaleksa/YugoGPT) | **57.79** | **34.73** | **49.89** | 69.45 | 64.56 | 28.20 | **72.03** | 15.82 | **36.14** | 47.62 |

Here, all benchmarks were done 0-shot, with the exception of NQ Open and TriviaQA, which were done in a 5-shot manner.
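
Results like these are typically produced with something like EleutherAI's lm-evaluation-harness. Below is a minimal sketch of the shot configuration described above, assuming that harness's Python API (v0.4+) and placeholder Serbian task names (`arc_easy_sr`, `nq_open_sr`, etc. are illustrative; the exact identifiers come from the Serbian eval suite and may differ):

```python
import lm_eval  # EleutherAI lm-evaluation-harness (v0.4+)

MODEL_ARGS = "pretrained=Stopwolf/Tito-7B,dtype=bfloat16"

# Placeholder task names -- substitute the actual Serbian eval suite identifiers.
ZERO_SHOT_TASKS = ["arc_easy_sr", "arc_challenge_sr", "hellaswag_sr",
                   "boolq_sr", "winogrande_sr", "openbookqa_sr", "piqa_sr"]
FIVE_SHOT_TASKS = ["nq_open_sr", "triviaqa_sr"]

results = {}
# 0-shot pass over the multiple-choice benchmarks
results.update(lm_eval.simple_evaluate(
    model="hf", model_args=MODEL_ARGS,
    tasks=ZERO_SHOT_TASKS, num_fewshot=0,
)["results"])
# 5-shot pass over the open-domain QA benchmarks
results.update(lm_eval.simple_evaluate(
    model="hf", model_args=MODEL_ARGS,
    tasks=FIVE_SHOT_TASKS, num_fewshot=5,
)["results"])
print(results)
```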
Evaluations on the Open LLM Leaderboard (that is, performance and knowledge of English):

| | ARC | Hellaswag | Winogrande | MMLU | GSM8k | TruthfulQA | Avg. |
|---------|-------|-----------|------------|------|-------|-------------|-------|
| Tito-7B | 63.65 | 66.48 | 81.69 | | 63.61 | 57.01 | 64.33 |
| YugoGPT | 53.07 | 61.45 | 76.56 | | 30.70 | 36.60 | 60.34 |

Here, Winogrande, GSM8k, and MMLU were done in a 5-shot manner, Hellaswag in a 10-shot manner, and ARC in a 25-shot manner.
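
For the English numbers, here is a comparable sketch of the leaderboard-style shot settings listed above, again assuming lm-evaluation-harness; the task names are the usual harness identifiers, and whether they match the leaderboard's exact configuration is an assumption:

```python
import lm_eval  # EleutherAI lm-evaluation-harness (v0.4+)

MODEL_ARGS = "pretrained=Stopwolf/Tito-7B,dtype=bfloat16"

# Shot counts as described above, keyed by the usual harness task names.
# TruthfulQA is omitted here because its shot setting is not stated above.
SHOTS = {"arc_challenge": 25, "hellaswag": 10,
         "winogrande": 5, "mmlu": 5, "gsm8k": 5}

results = {}
for task, num_shots in SHOTS.items():
    out = lm_eval.simple_evaluate(
        model="hf", model_args=MODEL_ARGS,
        tasks=[task], num_fewshot=num_shots,
    )
    results.update(out["results"])
print(results)
```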
If we try to replicate these approaches on the available Serbian datasets (running the appropriate number of shots instead of 0), we get:

| | ARC | Hellaswag | Winogrande | Avg. |
|---------|-------|-----------|------------|-------|
| Tito-7B | 47.27 | - | 69.93 | - |
| YugoGPT | 44.03 | - | 70.64 | - |