wasiuddina committed
Commit a9c5889 · verified · 1 Parent(s): 538d225

Update README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -42,17 +42,17 @@ Huggingface [04/25/2025] via https://huggingface.co/nvidia/OpenCode-Nemotron-14B
 
 
 ## Input
-**Input Type(s):** Text <br>
-**Input Format(s):** String <br>
-**Input Parameters:** One-Dimensional (1D) <br>
-**Other Properties Related to Input:** Context length up to 32,768 tokens <br>
+- **Input Type(s):** Text <br>
+- **Input Format(s):** String <br>
+- **Input Parameters:** One-Dimensional (1D) <br>
+- **Other Properties Related to Input:** Context length up to 32,768 tokens <br>
 
 
 ## Output
-**Output Type(s):** Text <br>
-**Output Format:** String <br>
-**Output Parameters:** One-Dimensional (1D) <br>
-**Other Properties Related to Output:** Context length up to 32,768 tokens <br>
+- **Output Type(s):** Text <br>
+- **Output Format:** String <br>
+- **Output Parameters:** One-Dimensional (1D) <br>
+- **Other Properties Related to Output:** Context length up to 32,768 tokens <br>
 
 Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
 
@@ -93,8 +93,8 @@ We used the datasets listed in the next section to evaluate OpenCodeReasoning-32
 
 
 ## Inference
-**Engine:** vLLM <br>
-**Test Hardware** NVIDIA H100-80GB <br>
+- **Engine:** vLLM <br>
+- **Test Hardware** NVIDIA H100-80GB <br>
 
 
 ## Ethical Considerations:
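
For orientation only (not part of the commit): the sections touched by this diff name vLLM as the inference engine, a 32,768-token context, and text-in/text-out usage, and the hunk header URL points at nvidia/OpenCode-Nemotron-14B. A minimal sketch under those assumptions is below; the model ID is taken from that URL, and the sampling settings are illustrative rather than values stated in the README.

```python
# Minimal sketch, assuming the checkpoint referenced in the hunk header
# (nvidia/OpenCode-Nemotron-14B) and the vLLM engine named in the README's
# Inference section. Sampling values below are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/OpenCode-Nemotron-14B",  # model ID taken from the URL in the diff context
    max_model_len=32768,                   # context length stated in the Input/Output sections
)

params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=4096)  # illustrative settings

prompt = "Write a Python function that returns the n-th Fibonacci number."
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)  # plain text completion, matching the text-in/text-out spec
```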