Axion-Pro-Indic-24B

Model Information

Axion-Pro-Indic-24B is a multilingual, hybrid-reasoning, text-only language model built on Mistral-Small.
This post-trained version delivers significant improvements over the base model:

  • +20% average improvement on Indian language benchmarks
  • +21.6% enhancement on math benchmarks
  • +17.6% boost on programming benchmarks
  • +86% improvement on romanized Indian language GSM-8K benchmarks (languages × mathematics intersection)

Key Features

  • Hybrid Thinking Mode: Supports both "think" and "non-think" modes.
  • Advanced Indic Skills: Post-trained on Indian languages + English, reflecting Indian cultural values.
  • Superior Reasoning Capabilities: Outperforms similarly sized models on coding and math benchmarks.
  • Seamless Multilingual Experience: Full support for Indic scripts and romanized text.

Quickstart

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AdvRahul/Axion-Pro-Indic-24B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

prompt = "Who are you and what is your purpose on this planet?"

messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=True,  # Default True; set False for non-think mode
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then keep only the newly generated tokens (drop the prompt).
generated_ids = model.generate(**model_inputs, max_new_tokens=8192)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]) :].tolist()
output_text = tokenizer.decode(output_ids)

# In think mode, the reasoning trace precedes the closing </think> tag.
# removesuffix (Python 3.9+) is used instead of rstrip("</s>"), which strips
# a character set and can eat trailing letters from the answer itself.
if "</think>" in output_text:
    reasoning_content = output_text.split("</think>")[0].rstrip("\n")
    content = output_text.split("</think>")[-1].lstrip("\n").removesuffix("</s>")
else:
    reasoning_content = ""
    content = output_text.removesuffix("</s>")

print("reasoning content:", reasoning_content)
print("content:", content)

GGUF

  • Model size: 23.6B params
  • Architecture: llama
  • Quantization: 5-bit
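
The 5-bit GGUF weights can be loaded with llama.cpp-compatible tooling. Below is a minimal sketch using the llama-cpp-python bindings; the model_path value is a hypothetical placeholder, so substitute the actual GGUF file name from this repository:

from llama_cpp import Llama

# Hypothetical file name; point this at the actual 5-bit GGUF file.
llm = Llama(model_path="axion-pro-indic-24b-q5_k_m.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])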
