Similar to llama2-22b, but with `BLOCK_DIAGONAL=false` in the merge and twice the fine-tuning tokens.
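As a rough illustration of what a `BLOCK_DIAGONAL` switch plausibly controls when two sets of projection weights are combined into a wider model, the sketch below builds either a block-diagonal or a block-triangular weight matrix. The function name, and the choice of filling the lower-left block with a copy of `a`, are hypothetical; this is not the actual merge script.

```python
import numpy as np

def merge_blocks(a: np.ndarray, b: np.ndarray, block_diagonal: bool = True) -> np.ndarray:
    """Place two same-shaped weight matrices on the diagonal of a larger matrix.

    block_diagonal=True  -> off-diagonal blocks are zero, so the two halves
                            of the hidden units do not interact in this projection.
    block_diagonal=False -> the lower-left block is also filled (here, with a
                            copy of `a` -- a hypothetical choice), giving a
                            block-triangular matrix in which the second half of
                            the outputs also depends on the first half's inputs.
    """
    assert a.shape == b.shape, "illustration assumes equal shapes"
    m, n = a.shape
    out = np.zeros((2 * m, 2 * n), dtype=a.dtype)
    out[:m, :n] = a          # upper-left block
    out[m:, n:] = b          # lower-right block
    if not block_diagonal:
        out[m:, :n] = a      # lower-left block: coupling between the halves
    return out               # upper-right block stays zero in both cases
```

With `block_diagonal=False` the result is lower block-triangular rather than block-diagonal, which is one way to read the model's name.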

Again, this model is not intended for direct use; it is meant as a base for further tuning and merging.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 46.86 |
| ARC (25-shot) | 58.28 |
| HellaSwag (10-shot) | 82.69 |
| MMLU (5-shot) | 54.53 |
| TruthfulQA (0-shot) | 39.23 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot) | 11.22 |
| DROP (3-shot) | 6.17 |
