---
library_name: transformers
license: other
license_name: nvidia-internal-scientific-research-and-development-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-internal-scientific-research-and-development-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- pytorch
---

# Nemotron-H-56B-Base-8K

## Model Overview

Nemotron-H-56B-Base-8K is a large language model (LLM) developed by NVIDIA that is designed as a completion model for a given piece of text. It uses a hybrid model architecture that consists primarily of Mamba-2 and MLP layers combined with just four Attention layers. The model has a context length of 8K. The supported languages include: English, German, Spanish, French, Italian, Korean, Portuguese, Russian, Japanese, and Chinese. For more detailed information on the model architecture, training, and evaluation, please see the [project page](https://research.nvidia.com/labs/adlr/nemotronh/) and the [technical report](https://arxiv.org/abs/2504.03624).

For best performance on a given task, users are encouraged to customize the model with the [NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html) suite of customization tools, including Parameter-Efficient Fine-Tuning (P-tuning, Adapters, LoRA, and more) and Model Alignment (SFT, SteerLM, RLHF, and more) using [NeMo-Aligner](https://github.com/NVIDIA/NeMo-Aligner).

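The card recommends the NeMo Framework and NeMo-Aligner for customization. Purely as an illustration of what a parameter-efficient setup can look like, the sketch below uses the Hugging Face `peft` library instead; the `target_modules` names are placeholders rather than this checkpoint's actual module names and would need to be replaced after inspecting the loaded model.

```python
# Illustrative only: the card recommends NeMo Framework / NeMo-Aligner for
# customization; this is a generic LoRA sketch with the `peft` library.
# The `target_modules` names are placeholders, not this checkpoint's layer names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "nvidia/Nemotron-H-56B-Base-8K"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                                    # adapter rank
    lora_alpha=32,                           # scaling factor
    lora_dropout=0.05,
    target_modules=["in_proj", "out_proj"],  # placeholder module names; inspect the model first
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

From here, the adapted model can be trained with any standard fine-tuning loop or the `transformers` Trainer.
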
This model is for research and development only.

This model is part of the Nemotron-H Collection. You can find the models in this family here:
- [Nemotron-H-56B-Base-8K](https://huggingface.co/nvidia/Nemotron-H-56B-Base-8K)
- [Nemotron-H-47B-Base-8K](https://huggingface.co/nvidia/Nemotron-H-47B-Base-8K)
- [Nemotron-H-8B-Base-8K](https://huggingface.co/nvidia/Nemotron-H-8B-Base-8K)

## License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Internal Scientific Research and Development Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-internal-scientific-research-and-development-model-license/).

**Model Developer:** NVIDIA

**Model Dates:**

October 2024 - March 2025

**Data Freshness:**

September 2024

The pretraining data has a cutoff date of September 2024.

## Use Case:

This model is intended for developers and researchers building LLMs.

## Release Date:

4/14/2025

## References

- [\[2504.03624\] Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models](https://arxiv.org/abs/2504.03624)

## Model Architecture
- Architecture Type: Hybrid Mamba-Transformer
- Network Architecture: Nemotron-H

The model has 56B parameters.

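To see how the Mamba-2, MLP, and Attention layers are interleaved, the released configuration can be inspected. The exact field that encodes the layer pattern is not documented in this card, so the minimal sketch below simply loads and prints the config.

```python
# Minimal sketch: inspect the released configuration to see the hybrid layer layout.
# The field that encodes the Mamba-2 / MLP / Attention ordering is not specified
# in this card, so we just print the whole config and look for it.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("nvidia/Nemotron-H-56B-Base-8K", trust_remote_code=True)
print(config)
```
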
## Input
- Input Type(s): Text
- Input Format(s): String
- Input Parameters: One-Dimensional (1D): Sequences
- Other Properties Related to Input: Context length up to 8K (see the truncation sketch below). Supported languages include German, Spanish, French, Italian, Korean, Portuguese, Russian, Japanese, Chinese, and English.

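A minimal sketch of keeping inputs inside the 8K context window, assuming the stated "8K" corresponds to 8,192 tokens (the card does not give the exact number):

```python
# Minimal sketch: truncate long inputs to the model's context window.
# Assumes the 8K context corresponds to 8192 tokens; adjust if the released
# config specifies a different value.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/Nemotron-H-56B-Base-8K", trust_remote_code=True)

long_document = "NVIDIA builds accelerated computing platforms. " * 3000  # deliberately too long
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=8192)
print(inputs["input_ids"].shape)  # at most (1, 8192)
```
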
## Output
- Output Type(s): Text
- Output Format: String
- Output Parameters: One-Dimensional (1D): Sequences

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

## Software Integration
- Runtime Engine(s): NeMo 24.12
- Supported Hardware Microarchitecture Compatibility: NVIDIA H100-80GB, NVIDIA A100
- Operating System(s): Linux

## Model Version
- v1.0

## Prompt Format

As this is a base model, no explicit prompt format is recommended or required.

### Example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nvidia/Nemotron-H-56B-Base-8K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("nvidia/Nemotron-H-56B-Base-8K", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

prompt = "When was NVIDIA founded?"

outputs = model.generate(**tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device))
print(tokenizer.decode(outputs[0]))
```

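The call above uses `generate`'s defaults, which typically produce only a short continuation unless the checkpoint's generation config says otherwise. Continuing from the snippet above, standard `transformers` generation arguments can be passed to control length and sampling, for example:

```python
# Longer, sampled completion using standard generation arguments
# (reuses `tokenizer`, `model`, and `prompt` from the example above).
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,   # upper bound on generated tokens
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
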
## Training, Testing, and Evaluation Datasets

### Training & Testing Datasets:

The training corpus for Nemotron-H-56B-Base-8K consists of English and multilingual text (German, Spanish, French, Italian, Korean, Portuguese, Russian, Japanese, and Chinese), as well as code. Our sources cover a variety of document types such as webpages, dialogue, articles, and other written materials. This model was also improved using synthetic data from Qwen (Built with Qwen). The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering and alignment-style data to improve model accuracy.

**Data Collection for Training & Testing Datasets:**
Hybrid: Automated, Human, Synthetic

**Data Labeling for Training & Testing Datasets:**
Hybrid: Automated, Human, Synthetic

### Evaluation Datasets

We used the datasets listed in the following sections to evaluate Nemotron-H-56B-Base-8K.

**Data Collection for Evaluation Datasets:**
Hybrid: Human, Synthetic

**Data Labeling for Evaluation Datasets:**
Hybrid: Human, Synthetic, Automatic

#### Commonsense Understanding Evaluations:

| ARC Challenge 25-shot | Hellaswag 10-shot | Winogrande 5-shot | CommonsenseQA 7-shot |
|-------------|--------------|-----------------|------------------|
| 94.97 | 89.00 | 84.45 | 86.73 |

- ARC-Challenge (AI2 Reasoning Challenge) - The challenge set of questions from a benchmark of grade-school-level, multiple-choice science questions that assesses the question-answering ability of language models. [Dataset](https://huggingface.co/datasets/allenai/ai2_arc)
- Hellaswag - Tests the ability of a language model to correctly finish the provided context from a choice of possible options. [Dataset](https://huggingface.co/datasets/Rowan/hellaswag)
- Winogrande - Tests the ability to choose the right option for a given sentence, which requires commonsense reasoning. [Dataset](https://huggingface.co/datasets/allenai/winogrande)
- CommonsenseQA - A multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. [Dataset](https://huggingface.co/datasets/tau/commonsense_qa)

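The card does not state which evaluation harness produced these scores. Purely as an illustration, a few-shot setting like the 25-shot ARC-Challenge above can be approximated with a harness such as EleutherAI's lm-evaluation-harness, assuming it supports this hybrid architecture out of the box; the sketch below uses its Python API.

```python
# Illustrative sketch only: a 25-shot ARC-Challenge run with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). Whether this is the harness used
# for the reported numbers, and whether it fully supports this architecture,
# is not stated in the card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nvidia/Nemotron-H-56B-Base-8K,trust_remote_code=True,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=1,
)
print(results["results"]["arc_challenge"])
```
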
#### Coding Evaluations:

| MBPP (sanitized) 3-shot | MBPP+ 0-shot | HumanEval 0-shot | HumanEval+ 0-shot |
|-------------|--------------|-----------------|------------------|
| 77.82 | 67.20 | 60.37 | 54.27 |

- MBPP (Mostly Basic Python Programming Problems) - Evaluates ability to generate solutions for Python programming tasks. [Dataset](https://github.com/google-research/google-research/tree/master/mbpp)
- MBPP+ - Extended version of MBPP with additional validation. [Dataset](https://huggingface.co/datasets/evalplus/mbppplus)
- HumanEval - Tests code generation and completion abilities in Python. [Dataset](https://github.com/openai/human-eval)
- HumanEval+ - Extended version of HumanEval with additional tests. [Dataset](https://github.com/evalplus/evalplus)

#### Math Evaluations:

| GSM8K 8-shot CoT | MATH 4-shot CoT | MATH-Lvl 5 4-shot CoT | MATH-500 4-shot CoT |
|--------------|------------|------------|------------|
| 93.71 | 59.42 | 35.19 | 57.37 |

- GSM8K (Grade School Math 8K) - Evaluates grade school level mathematical word problem solving. [Dataset](https://github.com/openai/grade-school-math)
- MATH - Tests mathematical ability across multiple difficulty levels and various subjects including: Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus. [Dataset](https://github.com/hendrycks/math)
- MATH Lvl 5 - Only the most difficult questions from the MATH dataset. [Dataset](https://github.com/hendrycks/math)
- MATH-500 - Tests advanced mathematical problem solving across algebra, geometry, and calculus. [Dataset](https://huggingface.co/datasets/HuggingFaceH4/MATH-500)

#### General Evaluations:

| MMLU-Pro 5-shot CoT | MMLU 5-shot |
|-------------------|------------------|
| 60.51 | 84.21 |

- MMLU-Pro - Evaluates language understanding models on a broad range of challenging, reasoning-focused questions across 14 diverse domains. [Dataset](https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro)
- MMLU - Tests knowledge across 57 subjects including science, humanities, math and more. [Dataset](https://github.com/hendrycks/test)

## Potential Known Risks for Usage

The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself does not include anything explicitly offensive.

The model is vulnerable to indirect prompt injection via some encodings, including Base16, Hex/ASCII, and Braille, though it is more resilient than other similar models to injections using the more common Base64 vector.

## Inference
- Engine: NeMo
- Test Hardware: NVIDIA H100-80GB

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Responsible Use Guide available at http://nvidia.com/nemotron-responsible-use.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).