NVIDIA Open Model License Agreement
Version Release Date: September 23, 2025
This NVIDIA Open Model License Agreement (the “Agreement”) is a legal agreement between the Legal Entity You represent, or if no entity is identified, You and NVIDIA Corporation and its Affiliates (“NVIDIA”) and governs Your use of the Models that NVIDIA provides to You under this Agreement. NVIDIA and You are each a “party” and collectively the “parties.”
NVIDIA models released under this Agreement are intended to be used permissively and enable the further development of AI technologies. Subject to the terms of this Agreement, NVIDIA confirms that:
- Models are commercially usable.
- You are free to create and distribute Derivative Models.
- NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
By using, reproducing, modifying, distributing, performing or displaying any portion or element of the Model or Derivative Model, or otherwise accepting the terms of this Agreement, you agree to be bound by this Agreement.
1. Definitions
1.1. Derivative Model means all (a) modifications to the Model, (b) works based on the Model, and (c) any other derivative works of the Model. An output is not a Derivative Model.
1.2. Legal Entity means the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of fifty percent (50%) or more of the outstanding shares, or (c) beneficial ownership of such entity.
1.3. Model means the machine learning model, software, checkpoints, learnt weights, algorithms, parameters, configuration files and documentation shared under this Agreement.
1.4. NVIDIA Cosmos Model means a multimodal Model shared under this Agreement.
1.5. Special-Purpose Model means a Model that is only competent in a narrow set of purpose-specific tasks and should not be used for unintended or general-purpose applications.
1.6. You or Your means an individual or Legal Entity exercising permissions granted by this Agreement.
2. Conditions for Use, License Grant, AI Ethics and IP Ownership
2.1. Conditions for Use
- The Model and any Derivative Model are subject to additional terms as described in Section 2 and Section 3 of this Agreement.
- If You institute copyright or patent litigation against any entity alleging that the Model or a Derivative Model constitutes infringement, then any licenses granted to You will terminate as of the date such litigation is filed.
- If You bypass or disable any technical limitation, safety guardrail, encryption, DRM, or authentication mechanism contained in the Model without replacing it with a substantially similar guardrail, Your rights under this Agreement will terminate.
- NVIDIA may designate a Model as a Special-Purpose Model.
- NVIDIA may update this Agreement to comply with legal and regulatory requirements.
2.2. License Grant NVIDIA grants You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, revocable license to publicly perform, publicly display, reproduce, use, create derivative works of, make, have made, sell, offer for sale, distribute, and import the Model.
2.3. AI Ethics Use of the Models must be consistent with NVIDIA’s Trustworthy AI terms.
2.4. IP Ownership
- NVIDIA owns the Model and any Derivative Models it creates.
- You own Your Derivative Models.
- NVIDIA claims no ownership rights in outputs.
- Except as expressly granted, NVIDIA reserves all rights.
3. Redistribution
You may reproduce and distribute copies of the Model or Derivative Models in any medium, with or without modifications, provided that:
3.1. You must provide recipients with a copy of this Agreement and include this attribution in a “Notice” text file: “Licensed by NVIDIA Corporation under the NVIDIA Open Model License”
3.2. If distributing or making available an NVIDIA Cosmos Model, or products/services derived from it, you must include: “Built on NVIDIA Cosmos”
3.3. You may add your own copyright statements and license terms for your modifications, provided use still complies with this Agreement.
4. Separate Components
The Models may include components licensed under separate legal notices (e.g., Open Source Software licenses). Those terms apply to the relevant components, except where overridden by this Agreement, unless the third-party license terms require otherwise.
5. Trademarks
No permission is granted to use NVIDIA’s trade names, trademarks, or product names, except for reasonable descriptive use.
6. Disclaimer of Warranty
The Model is provided “AS IS”, without warranties of any kind, including title, non-infringement, merchantability, or fitness for a particular purpose. You assume all risks associated with its use.
7. Limitation of Liability
NVIDIA is not liable for damages (direct, indirect, incidental, or consequential) arising from use of the Model, except where required by law.
8. Indemnity
You will indemnify and hold NVIDIA harmless against claims from third parties arising from Your use or distribution of the Model, Derivative Models, or outputs.
9. Feedback
NVIDIA may use any feedback You provide without restriction or compensation.
10. Governing Law
This Agreement is governed by U.S. and Delaware law. Courts in Santa Clara County, California, have exclusive jurisdiction, except for claims seeking urgent injunctive relief.
11. Trade and Compliance
You must comply with all applicable export, import, trade, and sanctions laws, including the U.S. Export Administration Regulations and OFAC rules.
Cosmos-Predict2.5: A Suite of Diffusion-based World Foundation Models
Cosmos | Code | White Paper | Website
NVIDIA Cosmos™ is a platform of state-of-the-art generative world foundation models, advanced tokenizers, guardrails, and an accelerated data processing and curation pipeline, purpose-built to accelerate the development of physical AI systems, such as autonomous vehicles (AVs) and robots.
Model Overview
Description
Cosmos-Predict2.5: A family of highly performant pre-trained world foundation models purpose-built for generating physics-aware images, videos and world states for physical AI development.
The Cosmos-Predict2.5 diffusion models are a collection of diffusion-based world foundation models that generate dynamic, high-quality images and videos from text, image, or video inputs. They can serve as building blocks for applications and research related to world generation.
This model is ready for commercial/non-commercial use.
Model Developer: NVIDIA
Model Versions
The Cosmos-Predict2.5 diffusion-based model family includes the following models:
Cosmos-Predict2.5-2B / Pre-trained
- Given a text description, an image as the first frame, and/or a video, predict the future frames.
- Produces 720p video at 16 FPS.
Cosmos-Predict2.5-2B / Post-trained
- Given a text description, an image as the first frame, and/or a video, predict the future frames.
- Produces 720p video at 16 FPS.
Cosmos-Predict2.5-2B / Auto / Multiview
- Given a text description, an image as the first frame, and/or a video, predict the world scenario across 7 camera views.
- Produces 720p video at 16 FPS.
Cosmos-Predict2.5-2B / Robot / Multiview
- Given a text description, a static video, and two target camera trajectories, predict two re-rendered videos.
- Produces 720p video at 16 FPS.
Cosmos-Predict2.5-2B / Robot / Multiview-Agibot
- Given a text description, a head-view video, and two target hand-view camera trajectories, predict two hand-view videos.
- Produces 720p video at 16 FPS.
Cosmos-Predict2.5-2B / Robot / Action-Cond
- Given a single conditioning image and a sequence of robot actions, predict a chunk of future frames that follow the provided action sequence.
- Produces 720p video at 16 FPS.
License
This model is released under the NVIDIA Open Model License. Additional Information: Apache License 2.0.
For a custom license, please contact [email protected].
Under the NVIDIA Open Model License, NVIDIA confirms:
- Models are commercially usable.
- You are free to create and distribute Derivative Models.
- NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
Important Note: If you bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism contained in the Model, your rights under NVIDIA Open Model License Agreement will automatically terminate.
Deployment Geography:
Global
Use Case:
Physical AI: encompassing robotics, autonomous vehicles (AV), and more.
Release Date:
GitHub [10/06/2025] via https://github.com/nvidia-cosmos/cosmos-predict2.5
Hugging Face [10/06/2025] via https://huggingface.co/collections/nvidia/cosmos-predict25-68bb63255f2fc206c5e5b346
Model Architecture
Cosmos-Predict2.5-2B is a diffusion transformer model designed for video denoising in the latent space. The network is composed of interleaved self-attention, cross-attention, and feedforward layers as its building blocks. The cross-attention layers allow the model to condition on the input text throughout the denoising process. Before each layer, adaptive layer normalization is applied to embed the denoising-timestep information. When an image or video is provided as input, its latent frames are concatenated with the generated frames along the temporal dimension, and augmentation noise is added to the conditional latent frames to bridge the gap between training and inference.
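The description above corresponds to a standard DiT-style block. Below is a minimal PyTorch sketch of one such block; all dimensions, module names, and the exact modulation scheme are assumptions for illustration, not the actual Cosmos-Predict2.5 implementation.

```python
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    """Illustrative interleaved block: adaLN -> self-attention -> adaLN ->
    cross-attention on text -> adaLN -> feedforward, with residual paths.
    Sizes and structure here are hypothetical."""

    def __init__(self, dim: int = 1024, heads: int = 16, text_dim: int = 1024):
        super().__init__()
        # LayerNorm without learned affine terms: the scale/shift come from
        # the denoising-timestep embedding instead (adaptive layer norm).
        self.norms = nn.ModuleList([nn.LayerNorm(dim, elementwise_affine=False) for _ in range(3)])
        self.to_mod = nn.Linear(dim, 6 * dim)  # 3 sub-layers x (scale, shift)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, kdim=text_dim, vdim=text_dim, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, text, t_emb):
        # x: (B, N, dim) latent video tokens; text: (B, M, text_dim)
        # embedded prompt; t_emb: (B, dim) denoising-timestep embedding.
        mods = self.to_mod(t_emb).chunk(6, dim=-1)

        def ada_ln(i, h):  # timestep-conditioned layer normalization
            return self.norms[i](h) * (1 + mods[2 * i][:, None]) + mods[2 * i + 1][:, None]

        h = ada_ln(0, x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Cross-attention injects the text condition at every block.
        x = x + self.cross_attn(ada_ln(1, x), text, text, need_weights=False)[0]
        x = x + self.ffn(ada_ln(2, x))
        return x

# Example: batch of 2, 64 latent video tokens, 16 text tokens.
out = DiTBlock()(torch.randn(2, 64, 1024), torch.randn(2, 16, 1024), torch.randn(2, 1024))
```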
This model was developed based on: Cosmos-Predict2-2B
Number of model parameters: 2,059,174,912
Input/Output Specifications
Input
- Input Type(s): Text+Image, Text+Video
- Input Format(s):
- Text: String
- Image: jpg, png, jpeg, webp
- Video: mp4
- Input Parameters:
- Text: One-dimensional (1D)
- Image: Two-dimensional (2D)
- Video: Three-dimensional (3D)
- Other Properties Related to Input:
- The input string should contain fewer than 300 words and should provide descriptive content for world generation, such as a scene description, key objects or characters, background, and any specific actions or motions to be depicted within the 5-second duration.
- For the 720p model, the input image should be 1280×704; for the 480p model, use 832×480.
- The input video should consist of 5 frames, each with a resolution of 1280×704 for the 720p model, or 832×480 for the 480p model.
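To make these constraints concrete, a pre-flight check along the following lines could be used; `prepare_inputs`, the constant names, and the resize policy are hypothetical and not part of the released tooling.

```python
from PIL import Image

# Constraints taken from the spec above; all names are illustrative.
MAX_PROMPT_WORDS = 300
RESOLUTIONS = {"720p": (1280, 704), "480p": (832, 480)}  # (width, height)

def prepare_inputs(prompt: str, image_path: str, variant: str = "720p"):
    """Hypothetical pre-flight check: validate the prompt length and
    resize the conditioning image to what the chosen variant expects."""
    if len(prompt.split()) >= MAX_PROMPT_WORDS:
        raise ValueError(f"Prompt should contain fewer than {MAX_PROMPT_WORDS} words.")
    image = Image.open(image_path).convert("RGB")
    target = RESOLUTIONS[variant]
    if image.size != target:  # PIL reports size as (width, height)
        image = image.resize(target, Image.Resampling.LANCZOS)
    return prompt, image
```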
Output
- Output Type(s): Video
- Output Format(s): mp4
- Output Parameters: Three-dimensional (3D)
- Other Properties Related to Output: The generated video is a 5-second clip, with resolution and frame rate determined by the model variant used. For example, the 720p 16 FPS model produces a video with a resolution of 1280×704 and a frame rate of 16 FPS.
The video content visualizes the input text description as a short animated scene, capturing key elements within the specified time constraints.
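As an illustration of the output format, a generated clip returned as a frame array could be encoded to mp4 at the model's native frame rate with imageio. The snippet below assumes uint8 RGB frames (a 5-second clip at 16 FPS is 80 frames) and is not part of the Cosmos codebase.

```python
import numpy as np
import imageio.v2 as imageio  # mp4 encoding requires the imageio-ffmpeg plugin

# Placeholder frames: a 5-second clip at 16 FPS is 80 frames of 1280x704 RGB.
frames = np.zeros((80, 704, 1280, 3), dtype=np.uint8)
imageio.mimwrite("output.mp4", list(frames), fps=16)
```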
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration
Runtime Engine(s):
- PyTorch
- Transformer Engine
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Hopper
Note: Only BF16 precision is tested. Other precisions like FP16 or FP32 are not officially supported.
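Since only BF16 is tested, a typical way to run inference is under PyTorch's bfloat16 autocast. The generic pattern below is an assumption about usage, not Cosmos-specific code.

```python
import torch

# Ampere, Hopper, and Blackwell GPUs all support bfloat16 natively.
assert torch.cuda.is_bf16_supported(), "BF16 is the only tested precision"

with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    # model(...) would run here, with activations computed in BF16.
    pass
```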
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Training Dataset:
Data Modality
- Image
- Text
- Video

Data Collection Method by dataset
- Automated

Labeling Method by dataset
- Hybrid: Human, Automated

Testing Dataset:

Data Collection Method by dataset
- Automated

Labeling Method by dataset
- Hybrid: Human, Automated
Evaluation
Please see our technical paper for detailed evaluations of the base model.
Data Collection Method:
- Automated
Labeling Method:
- Hybrid: Human, Automated
System Requirements and Performance
Video2World (720p, 16 FPS): This model requires 32.54 GB of GPU VRAM. The following table shows the inference time for a single generation across different NVIDIA GPU hardware:

| GPU Hardware | Inference Runtime |
|---|---|
| H100 SXM | 228.8 s |
| H200 SXM | 221.7 s |
| B200 | 123.9 s |
| H100 NVL | 355.7 s |
| H100 PCIe | 378.5 s |
| H200 NVL | 267.2 s |
| L40S | 2567.1 s |
| RTX PRO 6000 Blackwell | 452.2 s |
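Given the 32.54 GB VRAM figure above, a simple pre-flight check (illustrative only; the threshold constant is taken from the table) can fail fast instead of running out of memory mid-generation:

```python
import torch

REQUIRED_GB = 32.54  # figure quoted above for Video2World at 720p / 16 FPS

total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
if total_gb < REQUIRED_GB:
    raise RuntimeError(
        f"Model needs ~{REQUIRED_GB} GB of VRAM; device 0 has {total_gb:.1f} GB."
    )
```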
Operating System(s):
- Linux (We have not tested on other operating systems.)
Usage
- See Cosmos-Predict2.5 for details.
Limitations
Despite various improvements in world generation for Physical AI, Cosmos-Predict2.5 video2world models still face technical and application limitations for world prediction. In particular, they struggle to generate long, high-resolution videos without artifacts. Common issues include temporal inconsistency, camera and object motion instability, and imprecise interactions. The models may inaccurately represent 3D space, 4D space-time, or physical laws in the generated videos, leading to artifacts such as disappearing or morphing objects, unrealistic interactions, and implausible motions. As a result, applying these models to applications that require simulating physics-grounded environments or complex multi-agent dynamics remains challenging.
Inference:
Acceleration Engine: PyTorch, Transformer Engine
Test Hardware: H100, A100, GB200
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below. Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
Plus Plus (++) Promise
We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:
- Verified to comply with current applicable disclosure laws, regulations, and industry standards.
- Verified to comply with applicable privacy labeling requirements.
- Annotated to describe the collector/source (NVIDIA or a third-party).
- Characterized for technical limitations.
- Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
- Reviewed before release.
- Tagged for known restrictions and potential safety implications.
Bias
| Field | Response |
|---|---|
| Participation considerations from adversely impacted groups (protected classes) in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | None |
Explainability
| Field | Response |
|---|---|
| Intended Application & Domain: | World Generation |
| Model Type: | Transformer |
| Intended Users: | Physical AI developers |
| Output: | Videos |
| Describe how the model works: | Generates videos based on video and text inputs |
| Technical Limitations: | The model may not follow the video or text input accurately in challenging cases, where the input video shows complex scene composition and temporal dynamics. Examples of challenging scenes include: fast camera movements, overlapping human-object interactions, low lighting with high motion blur, and multiple people performing different actions simultaneously. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Quantitative and Qualitative Evaluation. We evaluate on PAI-Bench’s predict task and report two main scores: the Domain Score, which measures performance on domain-specific physical AI tasks, and the Quality Score, which reflects the quality of generated videos. The Quality Score is derived from eight text-to-video and image-to-video metrics adapted from VBench. In contrast, the Domain Score is obtained through VQA-based evaluation across seven domains: av, common, human, industry, misc, physics, and robotics. The final PAI-Bench Overall Score is computed as the average of the Quality and Domain scores. |
| Potential Known Risks: | The model's output can generate all forms of videos, including what may be considered toxic, offensive, or indecent. |
| Licensing: | NVIDIA Open Model License. Additional Information: Apache License 2.0. |
Privacy
| Field | Response |
|---|---|
| Generatable or reverse engineerable personal data? | No |
| Personal data used to create this model? | None Known |
| Was consent obtained for any personal data used? | None Known |
| How often is dataset reviewed? | Before Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. |
| Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/ |
Safety
| Field | Response |
|---|---|
| Model Application(s): | World Generation |
| Describe the life critical impact (if present). | None Known |
| Use Case Restrictions: | NVIDIA Open Model License. Additional Information: Apache License 2.0. |
| Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs. |