Magistral is the first reasoning model by Mistral AI, excelling in domain-specific, transparent, and multilingual reasoning.

Figure: benchmark comparison of Magistral Small and Magistral Medium.

Magistral Small

Key features

  • Reasoning: Capable of long chains of reasoning traces before providing an answer.
  • Multilingual: Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
  • Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
  • Context Window: A 128k context window, but performance might degrade past 40k tokens. Hence we recommend setting the maximum model length to 40k.
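
To apply the 40k recommendation in practice, here is a minimal loading sketch assuming llama-cpp-python and a locally downloaded GGUF file; the file name is a placeholder, and any runtime that exposes a context-length setting can be configured the same way.

```python
# Minimal sketch, assuming llama-cpp-python and a local GGUF file (placeholder name).
from llama_cpp import Llama

llm = Llama(
    model_path="./magistral-24b.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=40960,  # cap the usable context near 40k tokens, per the recommendation above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Walk through your reasoning: is 2^61 - 1 prime?"}
    ],
    max_tokens=2048,
)
print(response["choices"][0]["message"]["content"])
```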

Magistral is ideal for general-purpose use requiring longer thought processing and better accuracy than non-reasoning LLMs. From legal research and financial forecasting to software development and creative storytelling, this model solves multi-step challenges where transparency and precision are critical.

Business strategy and operations.

Building on our flagship models, Magistral is designed for research, strategic planning, operational optimization, and data-driven decision making, whether executing risk assessment and modelling with multiple factors or calculating optimal delivery windows under constraints.

Regulated industries and sectors.

Legal, finance, healthcare, and government professionals get traceable reasoning that meets compliance requirements. Every conclusion can be traced back through its logical steps, providing auditability for high-stakes environments with domain-specialized AI.

Systems, software, and data engineering.

Magistral enhances coding and development use cases: compared to non-reasoning models, it significantly improves project planning, backend architecture, frontend design, and data engineering through sequenced, multi-step actions involving external tools or APIs.
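
To illustrate the kind of tool-augmented, multi-step workflow described above, here is a minimal sketch that assumes the model is served behind an OpenAI-compatible endpoint (for example llama.cpp's llama-server or vLLM). The endpoint URL, model name, and the get_delivery_window tool are illustrative assumptions, not part of this repository.

```python
# Minimal sketch of one tool-calling round trip against an OpenAI-compatible
# endpoint. The base_url, model name, and tool schema below are assumptions.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_delivery_window",  # hypothetical tool for illustration
        "description": "Return the earliest delivery window for an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="magistral-24b",
    messages=[{"role": "user", "content": "When can order A-1042 ship?"}],
    tools=tools,
)

# If the model decided to call the tool, inspect the structured arguments.
tool_calls = resp.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, json.loads(tool_calls[0].function.arguments))
```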

Content and communication.

Our early tests indicated that Magistral is an excellent creative companion. We highly recommend it for creative writing and storytelling, with the model capable of producing coherent or, if needed, delightfully eccentric copy.

Model details

  • Format: GGUF
  • Model size: 23.6B params
  • Architecture: llama
  • Repository: tripolskypetr/magistral-24b
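
To fetch the weights programmatically, a minimal sketch with huggingface_hub is below; the exact GGUF file names in the repository are not listed here, so the script discovers them first.

```python
# Minimal sketch: locate and download a GGUF file from the repository above.
# Which quantizations the repo contains is not stated here, so we list them first.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "tripolskypetr/magistral-24b"

gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print("Available GGUF files:", gguf_files)

# Download the first available file; pick a specific quantization in practice.
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Downloaded to:", local_path)
```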