# Fast-Math
Fast-Math is a model series designed to significantly improve inference efficiency while preserving accuracy on math reasoning tasks.
By applying SFT and GRPO on difficult math problems, we enhanced the performance of DeepSeek-R1-Distill-Qwen-14B and developed Fast-Math-R1-14B, which achieves up to 60% (on average approx. 30%) faster inference while maintaining accuracy.

Technical details can be found in the Kaggle Discussion and on GitHub.
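For reference, Pass@1 (avg. 64) in the tables below is the per-problem accuracy averaged over 64 sampled completions, with generation truncated at the stated token budget (presumably counting truncated completions as incorrect). A minimal sketch of the aggregation, assuming correctness outcomes are already available as a boolean matrix; the data layout here is our illustration, not the actual evaluation harness:

```python
import numpy as np

# outcomes[i, j] is True iff sample j of problem i produced the correct
# answer within the token budget; 30 problems x 64 samples for AIME.
# Random placeholder values, for illustration only.
rng = np.random.default_rng(0)
outcomes = rng.random((30, 64)) < 0.5

# Pass@1 per problem is the mean over its 64 samples; the reported
# figure is the mean over all problems, in percent.
pass_at_1 = 100 * outcomes.mean(axis=1).mean()
print(f'Pass@1 (avg. 64): {pass_at_1:.1f}')
```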
| Model | Token budget | AIME 2024 Pass@1 (avg. 64) | AIME 2024 mean output tokens | AIME 2025 Pass@1 (avg. 64) | AIME 2025 mean output tokens |
| --- | --- | --- | --- | --- | --- |
| DeepSeek-R1-Distill-Qwen-14B | 32000 | 66.9 | 11026 | 49.9 | 12310 |
| | 24000 | 65.7 | 10784 | 49.7 | 11978 |
| | 16000 | 61.0 | 9708 | 46.2 | 10567 |
| | 12000 | 53.7 | 8472 | 39.9 | 9008 |
| | 8000 | 41.8 | 6587 | 31.1 | 6788 |
| Fast-Math-R1-14B | 32000 | 68.0 | 8217 | 49.6 | 9663 |
| | 24000 | 67.9 | 8209 | 49.6 | 9627 |
| | 16000 | 66.7 | 8017 | 48.4 | 9083 |
| | 12000 | 61.9 | 7362 | 45.2 | 8048 |
| | 8000 | 51.4 | 5939 | 36.3 | 6174 |
| Model | Token budget | AIME 2024 Pass@1 (avg. 64) | AIME 2024 mean output tokens | AIME 2025 Pass@1 (avg. 64) | AIME 2025 mean output tokens |
| --- | --- | --- | --- | --- | --- |
| OpenMath-Nemotron-14B | 32000 | 76.2 | 11493 | 64.5 | 13414 |
| | 24000 | 75.4 | 11417 | 63.4 | 13046 |
| | 16000 | 66.0 | 10399 | 54.2 | 11422 |
| | 12000 | 55.0 | 9053 | 40.0 | 9609 |
| | 8000 | 36.0 | 6978 | 27.2 | 7083 |
| Fast-OpenMath-Nemotron-14B | 32000 | 70.7 | 9603 | 61.4 | 11424 |
| | 24000 | 70.6 | 9567 | 60.9 | 11271 |
| | 16000 | 66.6 | 8954 | 55.3 | 10190 |
| | 12000 | 59.4 | 7927 | 45.6 | 8752 |
| | 8000 | 47.6 | 6282 | 33.8 | 6589 |
| Model | Token budget | AIME 2024 Pass@1 (avg. 64) | AIME 2024 mean output tokens | AIME 2025 Pass@1 (avg. 64) | AIME 2025 mean output tokens |
| --- | --- | --- | --- | --- | --- |
| Qwen3-14B | 32000 | 79.3 | 13669 | 69.5 | 16481 |
| | 24000 | 75.9 | 13168 | 65.6 | 15235 |
| | 16000 | 64.5 | 11351 | 50.4 | 12522 |
| | 12000 | 49.7 | 9746 | 36.3 | 10353 |
| | 8000 | 28.4 | 7374 | 19.5 | 7485 |
| Fast-Math-Qwen3-14B | 32000 | 77.6 | 9740 | 66.6 | 12281 |
| | 24000 | 76.5 | 9634 | 65.3 | 11847 |
| | 16000 | 72.6 | 8793 | 60.1 | 10195 |
| | 12000 | 65.1 | 7775 | 49.4 | 8733 |
| | 8000 | 50.7 | 6260 | 36.0 | 6618 |
Example inference with vLLM:

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_path = 'RabotniKuma/Fast-Math-R1-14B'
vllm_engine = LLM(
    model=model_path,
    max_model_len=8192,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    # For even faster inference, apply early stopping at the </think> tag
    # and extract the final boxed content from the reasoning trace.
    stop='</think>',
)

messages = [
    {
        'role': 'user',
        'content': (
            'Solve the problem, and put the answer in \\boxed{}. '
            'Sarah is twice as old as her youngest brother. If the difference '
            'between their ages is 15 years, how old is her youngest brother?'
        )
    }
]
prompt = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=False,
    add_generation_prompt=True,
)
response = vllm_engine.generate(prompt, sampling_params=sampling_params)
```
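Because generation stops at the `</think>` tag, the final answer can be recovered from the last `\boxed{...}` in the reasoning trace. A minimal extraction sketch; the regex is our simplification and assumes the boxed content contains no nested braces:

```python
import re

# vLLM returns one RequestOutput per prompt; take its first completion.
text = response[0].outputs[0].text

# Find the last \boxed{...}; this simple pattern assumes the boxed
# content itself contains no nested braces.
matches = re.findall(r'\\boxed\{([^{}]*)\}', text)
answer = matches[-1] if matches else None
print(answer)  # for the example above, the expected answer is 15
```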