Update README.md
README.md
CHANGED
@@ -10,169 +10,93 @@ tags:
- pytorch
---

# OpenCodeReasoning-7B Overview

## Description

OpenCodeReasoning-7B is a large language model (LLM) which is a derivative of [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) (AKA the *reference model*).
It is a reasoning model that is post-trained for reasoning during code generation. The model supports a context length of 32K tokens.

This model is ready for commercial use.

## License/Terms of Use

GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Internal Scientific Research and Development Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-internal-scientific-research-and-development-model-license/).

**Model Dates:** Trained between February 2025 and March 2025

## Release Date:

2025-04-21

## References

- [\[2504.01943\] OpenCodeReasoning: Advancing Data Distillation for Competitive Coding](https://arxiv.org/abs/2504.01943)

## Model Architecture

- Architecture Type: Dense decoder-only Transformer model
- Network Architecture: Qwen2.5-7B-Instruct

This model was developed based on Qwen2.5-7B-Instruct. <br>
This model has 7B model parameters. <br>

## Intended use

OpenCodeReasoning-7B is a reasoning and chat model focused on competitive code generation, intended for use in English.

## Input

- **Input Type:** Text
- **Input Format:** String
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** Context length up to 32,768 tokens

## Output

- **Output Type:** Text
- **Output Format:** String
- **Output Parameters:** One-Dimensional (1D)
- **Other Properties Related to Output:** Context length up to 32,768 tokens

## Software Integration

- **Preferred Operating System(s):** Linux
## Model Version

1.0 (4/21/2025)

## Quick Start and Usage Recommendations:

We recommend setting temperature to `0.6` and Top P to `0.95` for inference on LiveCodeBench.

### Use It with Transformers

See the snippet below for usage with the [Hugging Face Transformers](https://huggingface.co/docs/transformers/main/en/index) library.

We recommend using version 4.48.3 of the *transformers* package.

Example:

```py
import torch
import transformers

model_id = "nvidia/OpenCodeReasoning-7B"
model_kwargs = {"torch_dtype": torch.bfloat16, "trust_remote_code": True, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id  # use the EOS token for padding

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    do_sample=True,  # required for temperature/top_p to take effect
    max_new_tokens=32768,
    temperature=0.6,
    top_p=0.95,
    **model_kwargs
)

# Chat-style input: the pipeline applies the model's chat template automatically.
print(pipeline([{"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
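
Note that `max_new_tokens=32768` matches the model's full 32K context window; prompt tokens count against the same budget, so long problem statements leave correspondingly fewer tokens for the generated reasoning and code.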

## Training Dataset

The training corpus for OpenCodeReasoning-7B is the [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) dataset, which is composed of competitive programming questions and DeepSeek-R1-generated responses.

- Data Collection Method: Hybrid: Automated, Human, Synthetic
- Data Labeling Method: Hybrid: Automated, Human, Synthetic

## Evaluation Dataset

We used the datasets listed in the next section to evaluate OpenCodeReasoning-7B.

Data Collection for Evaluation Datasets:

- Hybrid: Human/Synthetic

Data Labeling for Evaluation Datasets:

- Hybrid: Human/Synthetic/Automatic

## Evaluation Results

### LiveCodeBench

| Easy | Medium | Hard | Avg. |
|:------|:------|:------|:-----|
| 95.4 | 64.0 | 18.0 | 51.3 |

User Prompt Template (without starter code):

````
You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.

Question: {prompt}

Read the inputs from stdin, solve the problem, and write the answer to stdout (do not directly test on the sample inputs). Enclose your code within delimiters as follows. Ensure that when the python program runs, it reads the inputs, runs the algorithm and writes output to STDOUT.
```python
# YOUR CODE HERE
```
````

User Prompt Template (with starter code):

````
You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.

Question: {prompt}

You will use the following starter code to write the solution to the problem and enclose your code within delimiters.
```python
{starter_code}
```
````
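
To make the template usage concrete, here is a minimal sketch of how an evaluation harness might fill the starter-code template before sending it to the model. The `QUESTION` and `STARTER_CODE` values below are hypothetical examples, not benchmark data.

```py
# Minimal sketch: filling the "with starter code" user prompt template.
# QUESTION and STARTER_CODE are hypothetical examples, not benchmark data.
TEMPLATE_WITH_STARTER = (
    "You will be given a question (problem specification) and will generate a "
    "correct Python program that matches the specification and passes all tests.\n\n"
    "Question: {prompt}\n\n"
    "You will use the following starter code to write the solution to the problem "
    "and enclose your code within delimiters.\n"
    "```python\n{starter_code}\n```"
)

QUESTION = "Given an integer n, return the sum of the integers from 1 to n."
STARTER_CODE = "class Solution:\n    def solve(self, n: int) -> int:\n        pass"

user_prompt = TEMPLATE_WITH_STARTER.format(prompt=QUESTION, starter_code=STARTER_CODE)
print(user_prompt)
```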

### CodeContests

| Public | Private | Generated | All |
|:--------|:--------|:----------|:----|
| 46.7 | 29.6 | 32.3 | 18.1 |
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

- pytorch
---

# OpenCode-Nemotron-7B Overview

## Description

OpenCode-Nemotron-7B is a large language model (LLM) which is a derivative of [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) (AKA the *reference model*).
It is a reasoning model that is post-trained for reasoning during code generation. The model supports a context length of 32K tokens.

This model is ready for commercial use.

### License/Terms of Use

GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Internal Scientific Research and Development Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-internal-scientific-research-and-development-model-license/).

### Deployment Geography:

Global <br>

### Use Case:

This model is intended for developers and researchers building LLMs. <br>

### Release Date:

Huggingface [04/25/2025] via https://huggingface.co/nvidia/OpenCode-Nemotron-7B/ <br>
## References

- [\[2504.01943\] OpenCodeReasoning: Advancing Data Distillation for Competitive Coding](https://arxiv.org/abs/2504.01943)

## Model Architecture

- Architecture Type: Dense decoder-only Transformer model
- Network Architecture: Qwen2.5-7B-Instruct

## Input

**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Input:** Context length up to 32,768 tokens <br>
## Output

**Output Type(s):** Text <br>
**Output Format(s):** String <br>
**Output Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Output:** Context length up to 32,768 tokens <br>

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Software Integration

* Runtime Engine: Transformers, vLLM <br>
* Recommended Hardware Microarchitecture Compatibility: <br>
  - NVIDIA Ampere
  - NVIDIA Hopper
* Preferred/Supported Operating System(s): Linux <br>

## Model Version(s)

1.0 (4/25/2025) <br>

## Training Dataset

The training corpus for OpenCode-Nemotron-7B is the [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) dataset, which is composed of competitive programming questions and DeepSeek-R1-generated responses.

* Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
* Data Labeling Method: Hybrid: Automated, Human, Synthetic <br>

## Evaluation Dataset

We used the datasets listed in the next section to evaluate OpenCode-Nemotron-7B. <br>

* Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
* Data Labeling Method: Hybrid: Automated, Human, Synthetic <br>

### [LiveCodeBench](https://huggingface.co/datasets/livecodebench/code_generation_lite)

| Easy | Medium | Hard | Avg. |
|:------|:------|:------|:-----|
| 95.4 | 64.0 | 18.0 | 51.3 |

### [CodeContests](https://huggingface.co/datasets/deepmind/code_contests)

| Public | Private | Generated | All |
|:--------|:--------|:----------|:----|
| 46.7 | 29.6 | 32.3 | 18.1 |

## Inference

**Engine:** vLLM <br>
**Test Hardware:** NVIDIA H100-80GB <br>
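
As a minimal illustration of vLLM-based inference, the sketch below uses vLLM's offline `LLM` API with the sampling settings recommended for this model family (temperature 0.6, top-p 0.95). The prompt is a hypothetical example, and exact API details may vary across vLLM versions.

```py
# Minimal sketch of offline inference with vLLM; API details may vary by version.
from vllm import LLM, SamplingParams

llm = LLM(model="nvidia/OpenCode-Nemotron-7B", max_model_len=32768)
# Cap generation below the 32K window, which is shared between prompt and output.
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)

# llm.chat applies the model's chat template; the prompt below is a hypothetical example.
outputs = llm.chat(
    [{"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."}],
    sampling,
)
print(outputs[0].outputs[0].text)
```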
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.