zihanliu committed on
Commit 8f28e0e · verified · 1 Parent(s): 7de383a

Upload README.md

Files changed (3):
  1. .gitattributes +1 -0
  2. README.md +138 -0
  3. fig/main_fig.png +3 -0
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ fig/main_fig.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,138 @@
---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- supervised fine-tuning
- reinforcement learning
- pytorch
---

# AceReason-Nemotron 1.1: Advancing Math and Code Reasoning through SFT and RL Synergy

<p align="center">

[![Technical Report](https://img.shields.io/badge/2506.13284-Technical_Report-blue)](https://arxiv.org/abs/2506.13284)
[![SFT Dataset](https://img.shields.io/badge/🤗-SFT_Dataset-blue)](https://huggingface.co/datasets/nvidia/AceReason-1.1-SFT)
[![Math RL Dataset](https://img.shields.io/badge/🤗-Math_RL_Dataset-blue)](https://huggingface.co/datasets/nvidia/AceReason-Math)
[![Models](https://img.shields.io/badge/🤗-Models-blue)](https://huggingface.co/collections/nvidia/acereason-682f4e1261dc22f697fd1485)
[![Eval Toolkit](https://img.shields.io/badge/🤗-Eval_Code-blue)](https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md)
</p>

<img src="fig/main_fig.png" alt="main_fig" style="width: 1000px; max-width: 100%;" />

We're thrilled to introduce [AceReason-Nemotron-1.1-7B](https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B), a math and code reasoning model built upon the Qwen2.5-Math-7B base. The model is first trained with supervised fine-tuning (SFT) on math and code tasks, then further enhanced through reinforcement learning (RL) using the same recipe as [AceReason-Nemotron-1.0-7B](https://huggingface.co/nvidia/AceReason-Nemotron-7B). We initiate RL training from various SFT models and find that stronger SFT models continue to produce consistently better results after large-scale RL, although the performance gap narrows during RL training. Thanks to its stronger SFT backbone, AceReason-Nemotron-1.1-7B significantly outperforms its predecessor and sets a record-high performance among Qwen2.5-7B-based reasoning models on challenging math and code reasoning benchmarks. For more details, check our [technical report](https://arxiv.org/abs/2506.13284).

## Results

We evaluate our model against competitive reasoning models of comparable size on AIME 2024, AIME 2025, and LiveCodeBench (LCB) v5 (2024/08/01 - 2025/02/01) and v6 (2025/02/01 - 2025/05/01).
For AceReason-Nemotron-1.0-7B, the RL training recipe improves its starting SFT model, DeepSeek-R1-Distill-Qwen-7B, by 13.5% on AIME24, 14.6% on AIME25, 14.2% on LCB v5, and 10.0% on LCB v6.
In comparison, AceReason-Nemotron-1.1-7B, built on a stronger SFT model, also benefits substantially from the same RL recipe, achieving absolute improvements of 10.6% on AIME24, 16.4% on AIME25, 8.4% on LCB v5, and 8.3% on LCB v6.
| **Model** | **AIME 2024<br>(avg@64)** | **AIME 2025<br>(avg@64)** | **LCB v5<br>(avg@8)** | **LCB v6<br>(avg@8)** |
| :---: | :---: | :---: | :---: | :---: |
| <small>Skywork-OR1-7B</small> | 70.2 | 54.6 | 47.6 | 42.7 |
| <small>MiMo-7B-RL</small> | 68.2 | 55.4 | 57.8 | 49.3 |
| <small>o3-mini (low)</small> | 60.0 | 48.3 | 60.9 | - |
| <small>OpenMath-Nemotron-7B</small> | 74.8 | 61.2 | - | - |
| <small>OpenCodeReasoning-Nemotron-7B</small> | - | - | 51.3 | 46.1 |
| <small>Magistral Small (24B)</small> | 70.7 | 62.8 | 55.8 | 47.4 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.0 | 37.6 | 34.1 |
| AceReason-Nemotron-1.0-7B | 69.0 | 53.6 | 51.8 | 44.1 |
| Our SFT-7B (starting point of RL) | 62.0 | 48.4 | 48.8 | 43.8 |
| [AceReason-Nemotron-1.1-7B 🤗](https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B) | 72.6 | 64.8 | 57.2 | 52.1 |

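The avg@k numbers in the table above are mean pass rates over k sampled generations per problem. A minimal sketch of how such a metric can be computed (the data layout and function name here are illustrative assumptions, not the released evaluation code):

```python
def avg_at_k(correct_flags):
    """Mean accuracy (in %) over k sampled generations per problem.

    correct_flags: list of lists; correct_flags[i][j] is True if the
    j-th sample for problem i is judged correct.
    """
    per_problem = [sum(flags) / len(flags) for flags in correct_flags]
    return 100.0 * sum(per_problem) / len(per_problem)

# Toy example: 2 problems, 4 samples each -> (0.75 + 0.5) / 2 = 62.5%
flags = [[True, True, False, True], [False, False, True, True]]
print(round(avg_at_k(flags), 1))  # 62.5
```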

## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'nvidia/AceReason-Nemotron-1.1-7B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

prompt = "Jen enters a lottery by picking $4$ distinct numbers from $S=\\{1,2,3,\\cdots,9,10\\}.$ $4$ numbers are randomly chosen from $S.$ She wins a prize if at least two of her numbers were $2$ of the randomly chosen numbers, and wins the grand prize if all four of her numbers were the randomly chosen numbers. The probability of her winning the grand prize given that she won a prize is $\\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$."
messages = [{"role": "user", "content": prompt}]

# Render the chat template and tokenize
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")

# Sample with the recommended settings (do_sample=True is required
# for temperature/top_p to take effect)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95
)
# Strip the prompt tokens, keeping only the newly generated continuation
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
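The decoded `response` above is a long reasoning trace; since math answers are requested inside `\boxed{}` (see the usage recommendations below), a small brace-matching extractor can pull the final answer out. This helper is an illustrative assumption, not part of the released evaluation toolkit:

```python
def extract_boxed(text):
    """Return the contents of the last \\boxed{...} in `text`, or None.

    Uses brace matching rather than a regex so nested braces
    (e.g. \\boxed{\\frac{m}{n}}) are handled correctly.
    """
    marker = "\\boxed{"
    start = text.rfind(marker)
    if start == -1:
        return None
    i = start + len(marker)
    depth = 1
    while i < len(text) and depth > 0:
        if text[i] == "{":
            depth += 1
        elif text[i] == "}":
            depth -= 1
        i += 1
    return text[start + len(marker): i - 1] if depth == 0 else None

print(extract_boxed("... so the answer is \\boxed{116}."))  # 116
```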

## Usage Recommendations
1. We recommend using the system prompt: "You are a helpful and harmless assistant. You should think step-by-step."
2. We recommend using the following instruction for math questions:
```python
math_question = "MATH_QUESTION"
math_instruction = "Please place your final answer inside \\boxed{}."
system_instruction = "You are a helpful and harmless assistant. You should think step-by-step."

final_prompt = "<|im_start|>system\n" + system_instruction + "<|im_end|>\n<|im_start|>user\n" + math_question + "\n\n" + math_instruction + "<|im_end|>\n<|im_start|>assistant\n<think>\n"
```
3. We recommend using the following instruction for code questions:
```python
code_question = "CODE_QUESTION"
starter_code = "STARTER_CODE"  # starter code function header; set to an empty string ("") if there is no starter code
system_instruction = "You are a helpful and harmless assistant. You should think step-by-step."

code_instruction_nostartercode = """Write Python code to solve the problem. Please place the solution code in the following format:\n```python\n# Your solution code here\n```"""
code_instruction_hasstartercode = """Please place the solution code in the following format:\n```python\n# Your solution code here\n```"""
if starter_code != "":
    code_question += "\n\n" + "Solve the problem starting with the provided function header.\n\nFunction header:\n" + "```\n" + starter_code + "\n```"
    code_question += "\n\n" + code_instruction_hasstartercode
else:
    code_question += "\n\n" + code_instruction_nostartercode

final_prompt = "<|im_start|>system\n" + system_instruction + "<|im_end|>\n<|im_start|>user\n" + code_question + "<|im_end|>\n<|im_start|>assistant\n<think>\n"
```
4. For evaluation, we use vLLM==0.7.3 as the inference engine, with top_p=0.95, temperature=0.6, and max_tokens=32768.

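The math and code templates in the recommendations above can be folded into one helper. `build_prompt` below is a hypothetical convenience wrapper (not part of the release) that reproduces the recommended formats:

```python
SYSTEM = "You are a helpful and harmless assistant. You should think step-by-step."

def build_prompt(question, task="math", starter_code=""):
    """Assemble the recommended chat-format prompt for a math or code question."""
    if task == "math":
        body = question + "\n\nPlease place your final answer inside \\boxed{}."
    elif starter_code:
        # Starter-code variant: show the function header, then the format instruction
        body = (question
                + "\n\nSolve the problem starting with the provided function header."
                + "\n\nFunction header:\n" + "```" + "\n" + starter_code + "\n" + "```"
                + "\n\nPlease place the solution code in the following format:\n"
                + "```" + "python\n# Your solution code here\n" + "```")
    else:
        body = (question
                + "\n\nWrite Python code to solve the problem. "
                + "Please place the solution code in the following format:\n"
                + "```" + "python\n# Your solution code here\n" + "```")
    return ("<|im_start|>system\n" + SYSTEM + "<|im_end|>\n"
            + "<|im_start|>user\n" + body + "<|im_end|>\n"
            + "<|im_start|>assistant\n<think>\n")
```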


## Evaluation Toolkit

Please find the evaluation code, scripts, and cached prediction files at https://huggingface.co/nvidia/AceReason-Nemotron-14B/blob/main/README_EVALUATION.md

## Correspondence to
Zihan Liu ([email protected]), Zhuolin Yang ([email protected]), Yang Chen ([email protected]), Chankyu Lee ([email protected]), Wei Ping ([email protected])


## License
Your use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).


### Release Date
June 16, 2025


## Citation
```
@article{liu2025acereason,
  title={AceReason-Nemotron 1.1: Advancing Math and Code Reasoning through SFT and RL Synergy},
  author={Liu, Zihan and Yang, Zhuolin and Chen, Yang and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan and Ping, Wei},
  journal={arXiv preprint arXiv:2506.13284},
  year={2025}
}
```
fig/main_fig.png ADDED

Git LFS Details

  • SHA256: ef002bd8ce8c589598ff0f2e2b0055b672d15113b685852c117f7c1041155ee7
  • Pointer size: 131 Bytes
  • Size of remote file: 369 kB