Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models!
Check out our fine-tuned Instruct version Mistral-Small-24B-Instruct-2501.
For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.
This release demonstrates our commitment to open source, serving as a strong base model.
Learn more about Mistral Small in our blog post.
Model developer: Mistral AI Team
Benchmark | Metric | Mistral-Small-24B-Base |
---|---|---|
MMLU | 5-shot | 80.73 |
MMLU Pro | 5-shot, CoT | 54.37 |
GPQA Main | 5-shot, CoT | 34.37 |
TriviaQA | 5-shot | 80.32 |
ARC-c | 0-shot | 91.29 |
TriviaQA | 5-shot | 76.6 |
MBPP | pass@1 | 69.64 |
GSM8K | 5-shot, maj@1 | 80.73 |
MATH | 4-shot, maj@1 | 45.98 |
AGIEval | - | 65.80 |
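The "n-shot" metrics above refer to prompting the base model with n worked examples before the test question, a standard evaluation setup for models without instruction tuning. A minimal sketch of how such a prompt is assembled (the example questions are illustrative placeholders, not actual benchmark items):

```python
# Sketch of few-shot prompt construction, as used in metrics like "MMLU 5-shot":
# n solved (question, answer) pairs are prepended before the test question,
# and the model is asked to complete the final answer.

def build_few_shot_prompt(examples, question, n_shots=5):
    """Concatenate n_shots solved examples, then the unanswered test question."""
    shots = examples[:n_shots]
    parts = [f"Q: {q}\nA: {a}" for q, a in shots]
    parts.append(f"Q: {question}\nA:")  # model completes from here
    return "\n\n".join(parts)

# Placeholder demonstration data (not from any benchmark).
demo = [(f"example question {i}", f"example answer {i}") for i in range(5)]
prompt = build_few_shot_prompt(demo, "What is the capital of France?")
print(prompt.count("Q:"))  # 6: five solved shots plus the test question
```

Exact prompt templates, shot selection, and answer extraction vary by harness, so published scores are only comparable under a shared evaluation setup.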
Benchmark | Metric | Mistral-Small-24B-Base |
---|---|---|
French MMLU | - | 78.03 |
German MMLU | - | 77.69 |
Spanish MMLU | - | 78.86 |
Russian MMLU | - | 75.64 |
Chinese MMLU | - | 70.35 |
Korean MMLU | - | 56.42 |
Japanese MMLU | - | 74.46 |