Commit d9813a1 (verified) · gheinrich · 1 Parent(s): 2085b77

Update README.md

Files changed (1): README.md (+153 -123)

---
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
---

# Model Overview

[[**Github**](https://github.com/NVlabs/RADIO)] [[**CVPR 2025**](https://arxiv.org/abs/2412.07679)] [[**CVPR 2024**](https://arxiv.org/abs/2312.06709)]

## Description

This model performs visual feature extraction.
For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.

C-RADIOv2 models are available in multiple sizes:
* Base (90M parameters).
* Large (320M parameters).
* Huge (653M parameters).
* Gigantic (1.1B parameters).

C-RADIOv2 was trained for 1M steps (400k more steps than v1), using inverse frequency sampling for data balancing, and [PHI Standardization](https://arxiv.org/abs/2410.01680) for teacher distribution balancing.

This model is ready for commercial/non-commercial use.

### License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Deployment Geography

Global.

## Use Case

The embeddings generated by this model are expected to be used by a downstream application.
For example:

* Image-level understanding (image classification, curation, etc.); a minimal downstream-classifier sketch is shown after this list.
* Dense processing (semantic segmentation, depth estimation, etc.).
* Integration into a Vision-Language Model.
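
As an illustration of the downstream-model pattern, the sketch below attaches a linear classification head to the `summary` embedding returned by RADIO (see the Usage section below). The embedding dimension, class count, and random inputs are illustrative assumptions, not values defined by this release.

```python
import torch
import torch.nn as nn

# Hypothetical linear probe on top of RADIO summary embeddings.
# summary_dim and num_classes are placeholders; the actual summary dimension
# depends on which C-RADIOv2 variant is used (see the Usage section below).
summary_dim, num_classes = 1280, 1000
linear_probe = nn.Linear(summary_dim, num_classes)

summary = torch.randn(4, summary_dim)  # stand-in for model(pixel_values)[0]
logits = linear_probe(summary)         # shape (4, num_classes)
```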
 
## Release Date

Hugging Face: 03/26/2025 via the [RADIO Collection of Models](https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6).

## References

* \[CVPR 2025\] [**RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models**](https://arxiv.org/abs/2412.07679)
* \[CVPR 2024\] [**AM-RADIO: Agglomerative Vision Foundation Model - Reduce All Domains Into One**](https://arxiv.org/abs/2312.06709)

## Model Architecture

**Architecture Type:** Neural Network <br>
**Network Architecture:** Vision Transformer <br>

## Input

**Input Type(s):** Image <br>
**Input Format(s):** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 2048x2048 in increments of 16 pixels <br>
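
Inputs whose height or width is not a multiple of 16 need to be resized before preprocessing; the bundled image processor can handle this, but the snippet below shows one way to pick a compliant resolution explicitly. The rounding policy here is an illustrative assumption, not part of the model definition.

```python
def snap_to_multiple_of_16(size: int, max_size: int = 2048) -> int:
    """Round a spatial dimension to the nearest multiple of 16, capped at max_size."""
    return min(max_size, max(16, round(size / 16) * 16))

height, width = snap_to_multiple_of_16(1080), snap_to_multiple_of_16(1920)
# height == 1088, width == 1920
```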
 
## Output

**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** 2D <br>
**Other Properties Related to Output:** Downstream model required to leverage image features <br>

## Usage

RADIO will return a tuple with two tensors.
The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image.
It has shape `(B,C)`, with `B` being the batch dimension and `C` being some number of channels.
The `spatial_features` represent more localized content and should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Any of the C-RADIOv2 repositories listed under "Model Version(s)" below works here.
hf_repo = "nvidia/C-RADIOv2-H"

image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()

# Load an RGB image and preprocess it into a (1, 3, H, W) tensor.
image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()

# The model returns the pooled summary embedding and the per-patch spatial features.
with torch.no_grad():
    summary, features = model(pixel_values)
```
 
Spatial features have shape `(B,T,D)`, with `T` being the number of flattened spatial tokens and `D` being the number of channels for spatial features. Note that `C != D` in general.
Converting to a spatial tensor format can be done using the downsampling size of the model combined with the input tensor shape. For RADIO, the patch size is 16.

```python
from einops import rearrange

patch_size = 16  # RADIO downsamples the input by a factor of 16
h, w = pixel_values.shape[-2] // patch_size, pixel_values.shape[-1] // patch_size
spatial_features = rearrange(features, 'b (h w) d -> b d h w', h=h, w=w)
```

The resulting tensor will have shape `(B,D,H,W)`, as is typically seen with computer vision models.
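
For dense tasks, the feature map can then be upsampled back to the input resolution before feeding a per-pixel head; the snippet below is a minimal sketch of that step, and the choice of bilinear interpolation is an assumption rather than a requirement of the model.

```python
import torch.nn.functional as F

# Upsample the (B, D, H, W) feature map to the original input resolution,
# e.g. as input to a per-pixel segmentation or depth head.
full_res_features = F.interpolate(
    spatial_features,
    size=pixel_values.shape[-2:],  # (height, width) of the input image
    mode='bilinear',
    align_corners=False,
)
```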
 
## Software Integration

**Runtime Engine(s):**
* TAO 24.10 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>

**Preferred/Supported Operating System(s):** <br>
* Linux
* Linux 4 Tegra
* QNX
* Windows
 
## Model Version(s)

* C-RADIOv2-B (90M parameters).
* C-RADIOv2-L (320M parameters).
* C-RADIOv2-H (653M parameters).
* C-RADIOv2-G (1.8B parameters).

**Links:**

* https://huggingface.co/nvidia/C-RADIOv2-B
* https://huggingface.co/nvidia/C-RADIOv2-L
* https://huggingface.co/nvidia/C-RADIOv2-H
* https://huggingface.co/nvidia/C-RADIOv2-g
 
# Training and Evaluation Datasets

## Training Dataset

NV-CC-Img-Text-Dataset <br>

### Data Collection Method by dataset

* Automated <br>

### Labeling Method by dataset

* Not Applicable (no labels are needed)

### Properties

* 700 Million Images <br>
 
## Evaluation Dataset

**Link:** [ImageNet](https://www.image-net.org/) <br>

### Data Collection Method by dataset

* Automated <br>

### Labeling Method by dataset

* Human <br>

**Properties:** This dataset spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images. <br>
 
## Inference

**Engine:** PyTorch <br>
**Test Hardware:** A100 <br>
 
## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
 
### Bias

Field | Response
:---------------------------------------------------------------------------------------------------|:---------------
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None

### Explainability

Field | Response
:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------
Intended Application & Domain: | Visual Feature Extraction
Model Type: | Vision Transformer
Intended Users: | Developers of downstream vision applications
Output: | Image embeddings
Describe how the model works: | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Technical Limitations: | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings.
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU).
Potential Known Risks: | This model is only tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. Additionally, the generated embeddings might fail to disambiguate differences that appear evident to humans (e.g. two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application.
Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)
 
### Privacy

Field | Response
:----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------
Generatable or reverse engineerable personal data? | None
Personal data used to create this model? | None
How often is dataset reviewed? | Before Every Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes
 
### Safety

Field | Response
:---------------------------------------------------|:----------------------------------
Model Application(s): | Generation of visual embeddings
Describe the life critical impact (if present). | Not Applicable
Use Case Restrictions: | Abide by the NVIDIA Open Model License Agreement
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Access restrictions are enforced on the dataset during training, and dataset license constraints are adhered to.