Uploaded model
- Developed by: Daemontatox
- License: apache-2.0
- Finetuned from model: yentinglin/Mistral-Small-24B-Instruct-2501-reasoning
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
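The card itself ships no usage snippet, so here is a minimal loading sketch. It assumes the model follows the standard Hugging Face `transformers` causal-LM API; the prompt and generation settings are illustrative only.

```python
# Minimal usage sketch (assumption: Daemontatox/Cogito-MIS loads with the
# standard transformers causal-LM API; not taken from the card itself).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Daemontatox/Cogito-MIS"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model once and return a completion for a single prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    # A 24B model needs substantial GPU memory; quantized loading may help.
    print(generate("Explain step by step: what is 17 * 23?"))
```

Note that this is a 24B-parameter model, so loading it in full precision requires tens of gigabytes of memory.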
Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric | Value (%) |
|---|---|
| Average | 11.08 |
| IFEval (0-Shot) | 18.15 |
| BBH (3-Shot) | 29.08 |
| MATH Lvl 5 (4-Shot) | 8.61 |
| GPQA (0-shot) | 0.89 |
| MuSR (0-shot) | 4.93 |
| MMLU-PRO (5-shot) | 4.84 |
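The "Average" row appears to be the unweighted mean of the six benchmark scores; this small check reproduces it from the table's values.

```python
# Verify that the leaderboard "Average" is the unweighted mean of the
# six per-benchmark scores listed in the table above.
scores = {
    "IFEval (0-Shot)": 18.15,
    "BBH (3-Shot)": 29.08,
    "MATH Lvl 5 (4-Shot)": 8.61,
    "GPQA (0-shot)": 0.89,
    "MuSR (0-shot)": 4.93,
    "MMLU-PRO (5-shot)": 4.84,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # → 11.08, matching the table
```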
Model tree for Daemontatox/Cogito-MIS
- Base model: mistralai/Mistral-Small-24B-Base-2501

Evaluation results
- Averaged accuracy on IFEval (0-Shot), Open LLM Leaderboard: 18.15
- Normalized accuracy on BBH (3-Shot) test set, Open LLM Leaderboard: 29.08
- Exact match on MATH Lvl 5 (4-Shot) test set, Open LLM Leaderboard: 8.61
- acc_norm on GPQA (0-shot), Open LLM Leaderboard: 0.89
- acc_norm on MuSR (0-shot), Open LLM Leaderboard: 4.93
- Accuracy on MMLU-PRO (5-shot) test set, Open LLM Leaderboard: 4.84