# AdvRahul/Axion-1.5B-Reasoning
A safety-enhanced version of the state-of-the-art DeepScaleR-1.5B mathematical reasoning model. 🧠

**Axion-1.5B-Reasoning** builds upon the exceptional mathematical capabilities of sky-t/DeepScaleR-1.5B-Preview, a model renowned for its top-tier performance on complex reasoning tasks like the AIME competition. This version has been specifically fine-tuned to improve safety, making it suitable for a broader range of applications.
## 🚀 Model Details
- Model Creator: AdvRahul
- Base Model: sky-t/DeepScaleR-1.5B-Preview
- Fine-tuning Focus: Enhanced Safety & Harmlessness
- Core Capability: Advanced Mathematical & Logical Reasoning
- Architecture: Qwen 1.5 (derived from the base model's lineage)
- License: MIT License (Permissive for commercial use)
## 📝 Model Description

### Fusing Elite Reasoning with Robust Safety
**Axion-1.5B-Reasoning** was developed to bridge the gap between a pure, high-performance research model and a deployable, application-ready AI. It combines two key attributes:
- State-of-the-Art Reasoning: It inherits the powerful reinforcement learning-based training of its predecessor, allowing it to solve complex mathematical and logical problems with high accuracy.
- Enhanced Safety Alignment: The model has undergone extensive red-team testing and safety-focused fine-tuning. This process was designed to make the model more robust against generating harmful, biased, or inappropriate content, a critical requirement for user-facing systems.
This makes **Axion-1.5B-Reasoning** an ideal choice for educational tools, AI-powered tutors, data analysis assistants, and any system that requires both high-fidelity logical reasoning and a strong safety profile.
## 💻 How to Use
This model can be used directly with the `transformers` library. For optimal results on complex problems, instruct the model to reason step by step.
```python
from transformers import pipeline
import torch

# Initialize the text-generation pipeline
pipe = pipeline(
    "text-generation",
    model="AdvRahul/Axion-1.5B-Reasoning",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Prepare the prompt using the Qwen chat template format
messages = [
    {"role": "system", "content": "You are a helpful assistant that is an expert in mathematical reasoning."},
    {"role": "user", "content": "There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? Reason step by step."}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate the response
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
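If you want more control over decoding, or want to print only the model's reply rather than the prompt plus completion, you can load the model directly with `AutoModelForCausalLM`. The following is a minimal sketch using the standard `transformers` generation API; the example prompt is illustrative, and the decoding settings (greedy decoding here, for reproducible math answers) are assumptions you can adjust.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "AdvRahul/Axion-1.5B-Reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; any chat-formatted math question works the same way
messages = [
    {"role": "system", "content": "You are a helpful assistant that is an expert in mathematical reasoning."},
    {"role": "user", "content": "What is 17 * 24? Reason step by step."},
]

# Tokenize the chat-formatted prompt and move it to the model's device
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding (do_sample=False) gives deterministic, reproducible outputs
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Alternatively, passing `return_full_text=False` in the pipeline call above makes it return only the generated completion instead of the prompt plus completion.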