## Description:

**Cosmos-Reason1 Models**: Physical AI models that understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning processes.

The Cosmos-Reason1 models are post-trained with physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning. These are Physical AI models that can understand space, time, and fundamental physics, and can serve as planning models to reason about the next steps of an embodied agent.

The models are ready for commercial use.

**Model Developer**: NVIDIA

Cosmos-Reason1 includes the following model:

- [Cosmos-Reason1-7B](https://huggingface.co/nvidia/Cosmos-Reason1-7B): Given a text prompt and an input video, think and generate the answer with respect to the input text prompt and video.

### License:

Under the NVIDIA Open Model License, NVIDIA confirms:

* You are free to create and distribute Derivative Models.
* NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.

**Important Note**: If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism (collectively “Guardrail”) contained in the Model without a substantially similar Guardrail appropriate for your use case, your rights under the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) will automatically terminate.

### Deployment Geography:

Global

### Use Case:

Physical AI: space, time, and fundamental physics understanding and embodied reasoning, encompassing robotics and autonomous vehicles (AV).

### Release Date:

* GitHub: [05/17/2025](https://github.com/nvidia-cosmos/cosmos-reason1)
* Hugging Face: [05/17/2025](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7)

## Model Architecture:

**Architecture Type**: A multi-modal LLM consisting of a Vision Transformer (ViT) vision encoder and a dense Transformer LLM. <br>
**Network Architecture**: Qwen2.5-VL-7B-Instruct.

Cosmos-Reason1-7B is post-trained based on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and follows the same model architecture.

## Input

**Input Type(s)**: Text+Video/Image

**Input Format(s)**:
* Text: String
* Video: mp4
* Image: jpg

**Input Parameters**:
* Text: One-dimensional (1D)
* Video: Three-dimensional (3D)
* Image: Two-dimensional (2D)

**Other Properties Related to Input**:
* Use `FPS=4` for input video to match the training setup.
* Append `Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>.` to the system prompt to encourage a long chain-of-thought reasoning response (see the prompt-construction sketch after this list).
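A minimal sketch tying the two recommendations above together. The video path, question text, and variable names are illustrative, and the `fps` key follows the Qwen2.5-VL message convention (an assumption here, since Cosmos-Reason1 shares that architecture):

```python
# Illustrative prompt construction (hypothetical path and question).
# The system prompt requests the <think>/<answer> reasoning format, and
# the video entry asks for FPS=4 sampling to match the training setup.
SYSTEM_PROMPT = (
    "Answer the question in the following format: "
    "<think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/input.mp4", "fps": 4.0},
            {"type": "text", "text": "Is it physically plausible for the robot to lift the box this way?"},
        ],
    },
]
```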
## Output

**Output Type(s)**: Text

**Output Format**: String

**Output Parameters**: Text: One-dimensional (1D)

**Other Properties Related to Output**:
* We recommend using 4096 or more max output tokens to avoid truncation of the long chain-of-thought response (see the generation sketch after this list).
* Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
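As a concrete illustration of these output settings, here is a hedged end-to-end sketch. It assumes the Hugging Face `transformers` Qwen2.5-VL classes (>= 4.49) and the `qwen-vl-utils` helper apply to this checkpoint and reuses the `messages` list built in the Input section; the Cosmos-Reason1 repository remains the authoritative recipe.

```python
# End-to-end inference sketch; assumes transformers>=4.49 (Qwen2.5-VL support),
# qwen-vl-utils, and the `messages` list constructed above.
import re
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "nvidia/Cosmos-Reason1-7B", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("nvidia/Cosmos-Reason1-7B")

# Render the chat template and extract vision inputs; the fps set in the
# video entry of `messages` is forwarded via the returned video kwargs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt", **video_kwargs,
).to(model.device)

# 4096+ output tokens, per the recommendation above, to avoid truncating
# the long chain-of-thought response.
output_ids = model.generate(**inputs, max_new_tokens=4096)
response = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]

# Split the reasoning trace from the final answer.
think = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
answer = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
print(answer.group(1).strip() if answer else response)
```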
## Software Integration

**Runtime Engine(s):**

* [vLLM](https://github.com/vllm-project/vllm)

**Supported Hardware Microarchitecture Compatibility:**

* NVIDIA Blackwell
* NVIDIA Hopper

**Note**: We have only tested inference with BF16 precision.

**Operating System(s):**

* Linux (We have not tested on other operating systems.)

# Usage

See [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) for details.
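Since vLLM is the listed runtime engine, a minimal offline-inference sketch might look like the following. The engine options, sampling values, and video URL are illustrative assumptions rather than a tested recipe; consult the repository above for the supported setup.

```python
# Hypothetical vLLM sketch; engine options and sampling values are
# illustrative. BF16 matches the only precision noted as tested above.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/Cosmos-Reason1-7B",
    dtype="bfloat16",
    limit_mm_per_prompt={"video": 1},  # one video per prompt
    # tensor_parallel_size=2,  # see the Inference notes: minimum 2 GPUs
)
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=4096)

messages = [
    {"role": "system", "content": "Answer the question in the following format: "
     "<think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."},
    {"role": "user", "content": [
        {"type": "video_url", "video_url": {"url": "https://example.com/clip.mp4"}},
        {"type": "text", "text": "What should the robot do next?"},
    ]},
]
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```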
 
# Evaluation

Please see our [technical paper](https://arxiv.org/pdf/2503.15558) for detailed evaluations on physical common sense and embodied reasoning. Part of the evaluation datasets are released under [Cosmos-Reason1-Benchmark-Sample](https://huggingface.co/datasets/nvidia/Cosmos-Reason1-Benchmark-Sample). The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, AgiBot, RoboFail), ego-centric human demonstration (HoloAssist), and autonomous vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA. All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.

**Data Collection Method**:
* RoboVQA: Hybrid: Automatic/Sensors
* BridgeDataV2: Automatic/Sensors
* AgiBot: Automatic/Sensors
* RoboFail: Automatic/Sensors
* HoloAssist: Human
* AV: Automatic/Sensors

**Labeling Method**:
* RoboVQA: Hybrid: Human, Automated
* BridgeDataV2: Hybrid: Human, Automated
* AgiBot: Hybrid: Human, Automated
* RoboFail: Hybrid: Human, Automated
* HoloAssist: Hybrid: Human, Automated
* AV: Hybrid: Human, Automated

## Dataset Format

Modality: Video (mp4) and Text

## Dataset Quantification

We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of the video and text pairs is described in the table below.

| Dataset | SFT Data | RL Data | Benchmark Data |
|--------------|---------:|--------:|---------------:|
| [RoboVQA](https://robovqa.github.io/) | 1.14M | 252 | 110 |
| AV | 24.7K | 200 | 100 |
| [BridgeDataV2](https://rail-berkeley.github.io/bridgedata/) | 258K | 240 | 100 |
| [AgiBot](https://github.com/OpenDriveLab/AgiBot-World) | 38.9K | 200 | 100 |
| [HoloAssist](https://holoassist.github.io/) | 273K | 200 | 100 |
| [RoboFail](https://robot-reflect.github.io/) | N/A | N/A | 100 |
| **Total Storage Size** | **300.6GB** | **2.6GB** | **1.5GB** |

We release text annotations for all embodied reasoning datasets and videos for the RoboVQA and AV datasets. For other datasets, users may download the source videos from the original data source and find corresponding video sources via the video names. The held-out RoboFail benchmark is released for measuring generalization capability.
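For readers who want to inspect the released samples, a hypothetical loading sketch with the Hugging Face `datasets` library is shown below; the split names and feature schema are not specified here, so check the dataset card for the actual layout.

```python
# Hypothetical sketch, assuming the sample benchmark loads with the default
# `datasets` configuration; check the dataset card for the actual layout.
from datasets import load_dataset

ds = load_dataset("nvidia/Cosmos-Reason1-Benchmark-Sample")
print(ds)  # splits and features depend on the released annotation schema
```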
## Inference:

**Acceleration Engine:** PyTorch, FlashAttention <br>
**Test Hardware:** H100, A100, GB200 <br>
* Minimum of 2 GPUs; multi-node inference requires InfiniBand / RoCE connection

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.

For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Plus Plus (++) Promise

### Bias

| Field | Response |
| :---- | :------- |
| Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | None |

### Explainability

| Field | Response |
| :---- | :------- |
| Intended Users: | Physical AI developers |
| Output: | Text |
| Describe how the model works: | Generates text answers based on an input text prompt and video |
| Technical Limitations: | The model may not follow the video or text input accurately in challenging cases, where the input video shows complex scene composition and temporal dynamics. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Quantitative and Qualitative Evaluation. Cosmos-Reason1 proposes the embodied reasoning benchmark and physical common sense benchmark to evaluate accuracy with visual question answering. |
| Potential Known Risks: | The model's output can generate all forms of texts, including what may be considered toxic, offensive, or indecent. |
| Licensing: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |

### Privacy

| Field | Response |
| :---- | :------- |
| How often is dataset reviewed? | Before Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Applicable Privacy Policy: | [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy) |

### Safety

| Field | Response |
| :---- | :------- |
| Model Application(s): | Physical AI common sense understanding and embodied reasoning |
| Describe the life critical impact (if present). | None Known |
| Use Case Restrictions: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) |
| Model and dataset restrictions: | The Principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalog. |