---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- meta
- pytorch
- llama
- llama-2
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aspik101__Vicuzard-30B-Uncensored-instruct-PL-lora_unload).
| Metric              | Value |
|---------------------|-------|
| Avg.                | 50.86 |
| ARC (25-shot)       | 62.46 |
| HellaSwag (10-shot) | 83.66 |
| MMLU (5-shot)       | 57.82 |
| TruthfulQA (0-shot) | 50.94 |
| Winogrande (5-shot) | 78.37 |
| GSM8K (5-shot)      | 15.31 |
| DROP (3-shot)       |  7.46 |
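
## Usage

The card does not document a loading recipe, so the following is a minimal text-generation sketch using the `transformers` library. The repo id `Aspik101/Vicuzard-30B-Uncensored-instruct-PL-lora_unload` is inferred from the results link above, and the generation settings are illustrative assumptions, not the author's method.

```python
# Minimal sketch (not from the original card): load the model and generate
# a completion for a Polish instruction prompt. The repo id is inferred
# from the leaderboard results link; adjust dtype/device settings to your
# hardware (this is a 30B-parameter model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aspik101/Vicuzard-30B-Uncensored-instruct-PL-lora_unload"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # spread layers across available devices
)

# Polish prompt, matching the language of the instruction-tuning dataset
prompt = "Napisz krótki wiersz o jesieni."  # "Write a short poem about autumn."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```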