# VerB-Etheria-55b
An attempt at a functional Goliath-style merge: an Etheria 55b-200k built from two Yi-34B-200K models. This is Version B (VerB), a double-model passthrough merge with a 50/50 split between two high-performing models.
## Roadmap

Depending on quality, I might make the other version private, then generate a sacrificial 55b and perform a 55b DARE-TIES or SLERP merge.

1. If the dual-model merge performs well, I will create a direct inverse of the config (see the sketch below) and merge that.
2. If the single-model version performs well, I will generate a 55b of the most performant model and then SLERP or DARE-TIES merge it.
3. If both perform well, I will complete both 1 and 2, then change the naming scheme to match each of the new models.
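As a rough sketch of what the "direct inverse" in step 1 could look like, here the two source models simply swap positions in the slice pattern, with the layer ranges mirrored from the config below. This is illustrative only (an assumption about what "inverse" means here), not a committed config:

```yaml
# Hypothetical inverse of the VerB config: the two source models trade
# places in the slice pattern. Layer ranges mirror the original config
# and are illustrative, not final.
dtype: bfloat16
slices:
  - sources:
      - model: one-man-army/UNA-34Beagles-32K-bf16-v1
        layer_range: [0, 14]
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [7, 21]
  - sources:
      - model: one-man-army/UNA-34Beagles-32K-bf16-v1
        layer_range: [15, 29]
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [22, 36]
  - sources:
      - model: one-man-army/UNA-34Beagles-32K-bf16-v1
        layer_range: [30, 44]
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [37, 51]
  - sources:
      - model: one-man-army/UNA-34Beagles-32K-bf16-v1
        layer_range: [45, 59]
merge_method: passthrough
```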
## Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
slices:
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [0, 14]
  - sources:
      - model: one-man-army/UNA-34Beagles-32K-bf16-v1
        layer_range: [7, 21]
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [15, 29]
  - sources:
      - model: one-man-army/UNA-34Beagles-32K-bf16-v1
        layer_range: [22, 36]
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [30, 44]
  - sources:
      - model: one-man-army/UNA-34Beagles-32K-bf16-v1
        layer_range: [37, 51]
  - sources:
      - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
        layer_range: [45, 59]
merge_method: passthrough
```
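To reproduce the merge, a config like this is normally passed to mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yml ./output-model`); exact flags depend on your mergekit version and hardware.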
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 63.83 |
| AI2 Reasoning Challenge (25-Shot) | 65.96 |
| HellaSwag (10-Shot) | 81.48 |
| MMLU (5-Shot) | 73.78 |
| TruthfulQA (0-shot) | 57.52 |
| Winogrande (5-shot) | 75.45 |
| GSM8k (5-shot) | 28.81 |