Stopwolf committed on
Commit a6be454 · verified · 1 parent: ab62e9c

Update README.md

Files changed (1):
  1. README.md +2 -0

README.md CHANGED
@@ -44,6 +44,7 @@ Evaluations on Serbian LLM eval suite (or rather, performance and knowledge of S
 | [Mustra-7B](https://huggingface.co/Stopwolf/Mustra-7B-Instruct-v0.1) | 52.95 | 33.70 | 45.89 | **77.55** | 64.17 | **30.60** | 67.25 | 15.40 | 34.84 | 46.93 |
 | [Tito-7B](https://huggingface.co/Stopwolf/Tito-7B) | 55.43 | **34.73** | 48.19 | 77.37 | **65.27** | 30.00 | 67.30 | **16.7** | 35.38 | **47.82** |
 | [YugoGPT](https://huggingface.co/gordicaleksa/YugoGPT) | **57.79** | **34.73** | **49.89** | 69.45 | 64.56 | 28.20 | **72.03** | 15.82 | **36.14** | 47.62 |
+
 Here, all benchmarks were done 0-shot, with the exception of NQ Open and TriviaQA, which were done in a 5-shot manner in order to be comparable to the Mistral paper.
@@ -52,6 +53,7 @@ Evaluations on Open LLM Leaderboard (or rather, performance and knowledge of Eng
 |---------|-------|-----------|------------|------|-------|-------------|-------|
 | Tito-7B | 68.08 | 86.37 | 81.69 | 64.01 | 63.61 | 57.01 | 70.13 |
 | YugoGPT | 58.10 | 81.44 | 76.56 | 60.68 | 30.70 | 36.60 | 57.34 |
+
 Here, Winogrande, GSM8k, and MMLU were done in a 5-shot manner, Hellaswag in a 10-shot manner, and finally ARC in a 25-shot manner.
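The commit does not say which tool produced these shot settings. Purely as an illustration, assuming EleutherAI's lm-evaluation-harness (an assumption, not stated anywhere in this diff), the per-task shot counts above would be set with its `--num_fewshot` flag:

```shell
# Hedged sketch only: the harness, model choice, and batch size here are
# assumptions; the commit does not state how the numbers were produced.
# 5-shot MMLU for Tito-7B:
lm_eval --model hf \
  --model_args pretrained=Stopwolf/Tito-7B \
  --tasks mmlu --num_fewshot 5 --batch_size 8

# 10-shot Hellaswag and 25-shot ARC use the same flag:
lm_eval --model hf --model_args pretrained=Stopwolf/Tito-7B \
  --tasks hellaswag --num_fewshot 10
lm_eval --model hf --model_args pretrained=Stopwolf/Tito-7B \
  --tasks arc_challenge --num_fewshot 25
```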

 If we try to replicate these approaches on available Serbian datasets (running an appropriate amount of shots instead of 0), we get:
 
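As a sanity check on the tables above: the column headers are not visible in these diff hunks, but judging by the numbers, the last value in each row is the arithmetic mean of the preceding benchmark scores (an assumption inferred from the values, not stated in the commit). A quick check:

```python
# Sanity check of the trailing "average" column in the two tables above.
# Treating the last value of each row as the mean of the preceding scores
# is an assumption; the column headers are not shown in these hunks.

rows = {
    # Serbian eval suite: nine benchmark scores, then the reported average
    "Mustra-7B": ([52.95, 33.70, 45.89, 77.55, 64.17, 30.60, 67.25, 15.40, 34.84], 46.93),
    "Tito-7B (sr)": ([55.43, 34.73, 48.19, 77.37, 65.27, 30.00, 67.30, 16.7, 35.38], 47.82),
    "YugoGPT (sr)": ([57.79, 34.73, 49.89, 69.45, 64.56, 28.20, 72.03, 15.82, 36.14], 47.62),
    # Open LLM Leaderboard: six benchmark scores, then the reported average
    "Tito-7B (en)": ([68.08, 86.37, 81.69, 64.01, 63.61, 57.01], 70.13),
    "YugoGPT (en)": ([58.10, 81.44, 76.56, 60.68, 30.70, 36.60], 57.34),
}

for name, (scores, reported) in rows.items():
    mean = sum(scores) / len(scores)
    # the source rounds to two decimals, so allow a small tolerance
    assert abs(mean - reported) < 0.02, (name, mean, reported)
print("all reported averages are consistent with the per-task scores")
```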