Stopwolf committed · verified · Commit 346da61 · Parent(s): 539e72c

Update README.md

Files changed (1): README.md (+4 −5)
@@ -153,11 +153,10 @@ Here, all benchmarks were done 0-shot, on the exception of NQ Open and TriviaQA
 If we try to replicate OpenLLM Leaderboard results on available Serbian datasets (running an appropriate amount of shots instead of 0), we get:
 | | ARC | Hellaswag | Winogrande | TruthfulQA | Avg. |
 |---------|-------|-----------|------------|------------|-------|
-| Tito-7B | 47.27 | - | 69.93 | | - |
-| YugoGPT | 44.03 | - | 70.64 | | - |
-| [Perucac-7B](https://huggingface.co/Stopwolf/Perucac-7B-slerp) | 49.74 | - | 71.98 | 56.03 | - |
-| Llama3-8B | 42.24 | - | 61.25 | | - |
-
+| Tito-7B | 47.27 | - | 69.93 | **57.48** | 58.23 |
+| YugoGPT | 44.03 | - | 70.64 | 48.06 | 54.24 |
+| [Perucac-7B](https://huggingface.co/Stopwolf/Perucac-7B-slerp) | **49.74** | - | **71.98** | 56.03 | **59.25** |
+| Llama3-8B | 42.24 | - | 61.25 | 51.08 | 51.52 |
 
 
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
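The filled-in Avg. column in the new version of the table is consistent with a plain mean over the three benchmarks that actually have scores (Hellaswag is unreported, so it appears to be excluded). A minimal sketch reproducing the column, with scores copied from the table above:

```python
# Sketch: reproduce the Avg. column as the mean of the reported
# benchmark scores. Hellaswag has no values in the table, so only
# ARC, Winogrande, and TruthfulQA are averaged (an assumption, but
# it matches every row of the diff).
scores = {
    "Tito-7B":    {"ARC": 47.27, "Winogrande": 69.93, "TruthfulQA": 57.48},
    "YugoGPT":    {"ARC": 44.03, "Winogrande": 70.64, "TruthfulQA": 48.06},
    "Perucac-7B": {"ARC": 49.74, "Winogrande": 71.98, "TruthfulQA": 56.03},
    "Llama3-8B":  {"ARC": 42.24, "Winogrande": 61.25, "TruthfulQA": 51.08},
}

for model, s in scores.items():
    avg = round(sum(s.values()) / len(s), 2)
    print(f"{model}: {avg}")
# Tito-7B: 58.23, YugoGPT: 54.24, Perucac-7B: 59.25, Llama3-8B: 51.52
```

Under this reading, Perucac-7B's 59.25 average (and its bolded ARC and Winogrande entries) is what the update highlights as the best of the four models.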