Adding Evaluation Results #9
by leaderboard-pr-bot - opened

README.md CHANGED
````diff
@@ -877,4 +877,17 @@ We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigsci
   journal={arXiv preprint arXiv:2211.01786},
   year={2022}
 }
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_bigscience__bloomz-3b)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 35.22                     |
+| ARC (25-shot)         | 36.86                     |
+| HellaSwag (10-shot)   | 54.95                     |
+| MMLU (5-shot)         | 32.91                     |
+| TruthfulQA (0-shot)   | 40.34                     |
+| Winogrande (5-shot)   | 57.14                     |
+| GSM8K (5-shot)        | 0.0                       |
+| DROP (3-shot)         | 24.36                     |
````