nielsr (HF Staff) committed · verified
Commit 24be596 · 1 Parent(s): 97153ee

Improve model card: Add pipeline tag, update license, abstract, and sample usage


This PR enhances the model card for `InternVLA-M1_object` by:
- **Updating the license** to `mit`, as indicated by the official GitHub repository's license badge.
- **Adding `pipeline_tag: robotics`** to improve discoverability on the Hugging Face Hub and enable the interactive widget. The `robotics` tag has been removed from the general `tags` list to avoid redundancy.
- **Adding the paper link** [InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy](https://huggingface.co/papers/2510.13778) prominently at the top.
- **Including the paper abstract** for a more comprehensive model description.
- **Adding detailed `Key Features`, `Target Audience`, and `Experimental Results`** sections from the project's GitHub README.
- **Incorporating a `Sample Usage` section** with runnable Python code snippets for both chat and action prediction, directly extracted from the GitHub README.
- **Updating the `Citation`** to use the more complete BibTeX `@article` format found in the GitHub repository.
- **Adding `Environment Setup`, `Examples`, `Model Zoo`, `Roadmap`, `Contributing`, `Contact`, and `Acknowledgements`** sections for a more complete and informative resource.

Please review and merge if these improvements meet the community standards.

Files changed (1)
  1. README.md +204 -12
README.md CHANGED
@@ -1,28 +1,220 @@
  ---
- license: cc-by-nc-sa-4.0
  tags:
- - robotics
  - vision-language-action-model
  - vision-language-model
  ---
  # Model Card for InternVLA-M1_object
  InternVLA-M1 is an open-source, end-to-end vision–language–action (VLA) framework for building and researching generalist robot policies.
- - 🌐 Homepage: [InternVLA-M1 Project Page](https://internrobotics.github.io/internvla-m1.github.io/)
  - 💻 Codebase: [InternVLA-M1 GitHub Repo](https://github.com/InternRobotics/InternVLA-M1)

- ## Training Details
  ```
  action_chunk: 8
  batch_size: 128
  training_steps: 30k
  ```

- ## Citation
- ```
- @misc{internvla2024,
- title = {InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy},
- author = {InternVLA-M1 Contributors},
- year = {2025},
- booktitle={arXiv},
  }
- ```

  ---
+ license: mit
+ pipeline_tag: robotics
  tags:
  - vision-language-action-model
  - vision-language-model
  ---
+
  # Model Card for InternVLA-M1_object
+
+ This model is presented in the paper [InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy](https://huggingface.co/papers/2510.13778).
+
  InternVLA-M1 is an open-source, end-to-end vision–language–action (VLA) framework for building and researching generalist robot policies.
+ - 🌐 Project Page: [InternVLA-M1 Project Page](https://internrobotics.github.io/internvla-m1.github.io/)
  - 💻 Codebase: [InternVLA-M1 GitHub Repo](https://github.com/InternRobotics/InternVLA-M1)

+ ## Abstract
+ We introduce InternVLA-M1, a unified framework for spatial grounding and robot control that advances instruction-following robots toward scalable, general-purpose intelligence. Its core idea is spatially guided vision-language-action training, where spatial grounding serves as the critical link between instructions and robot actions. InternVLA-M1 employs a two-stage pipeline: (i) spatial grounding pre-training on over 2.3M spatial reasoning data to determine "where to act" by aligning instructions with visual, embodiment-agnostic positions, and (ii) spatially guided action post-training to decide "how to act" by generating embodiment-aware actions through plug-and-play spatial prompting. This spatially guided training recipe yields consistent gains: InternVLA-M1 outperforms its variant without spatial guidance by +14.6% on SimplerEnv Google Robot, +17% on WidowX, and +4.3% on LIBERO Franka, while demonstrating stronger spatial reasoning capability in box, point, and trace prediction. To further scale instruction following, we built a simulation engine to collect 244K generalizable pick-and-place episodes, enabling a 6.2% average improvement across 200 tasks and 3K+ objects. In real-world clustered pick-and-place, InternVLA-M1 improved by 7.3%, and with synthetic co-training, achieved +20.6% on unseen objects and novel configurations. Moreover, in long-horizon reasoning-intensive scenarios, it surpassed existing works by over 10%. These results highlight spatially guided training as a unifying principle for scalable and resilient generalist robots. Code and models are available at the [InternVLA-M1 GitHub Repo](https://github.com/InternRobotics/InternVLA-M1).
+
+ ## 🔥 Key Features
+
+ 1. **Modular & Extensible**
+    All core components (model architecture, training data, training strategies, evaluation pipeline) are fully decoupled, enabling independent development, debugging, and extension of each module.
+
+ 2. **Dual-System and Dual-Supervision**
+    InternVLA-M1 integrates both a language head and an action head under a unified framework, enabling collaborative training with dual supervision.
+
+ 3. **Efficient Training & Fast Convergence**
+    Learns spatial and visual priors from large-scale multimodal pretraining and transfers them via spatial prompt fine-tuning. Achieves strong performance (e.g., SOTA-level convergence in ~2.5 epochs without separate action pretraining).
+
+ ## 🎯 Target Audience
+
+ 1. Users who want to leverage open-source VLMs (e.g., Qwen2.5-VL) for robot control.
+ 2. Teams co-training action datasets jointly with multimodal (vision–language) data.
+ 3. Researchers exploring alternative VLA architectures and training strategies.
+
+ ## 📊 Experimental Results
+ |              | WidowX   | Google Robot (VA) | Google Robot (VM) | LIBERO   |
+ |--------------|----------|-------------------|-------------------|----------|
+ | $\pi_0$      | 27.1     | 54.8              | 58.8              | 94.2     |
+ | GR00T        | 61.9     | 44.5              | 35.2              | 93.9     |
+ | InternVLA-M1 | **71.7** | **76.0**          | **80.7**          | **95.9** |
+
+ # 🚀 Quick Start
+
+ ## 🛠 Environment Setup
+
+ ```bash
+ # Clone the repo
+ git clone https://github.com/InternRobotics/InternVLA-M1
+
+ # Create conda environment
+ conda create -n internvla-m1 python=3.10 -y
+ conda activate internvla-m1
+
+ # Install requirements
+ pip install -r requirements.txt
+
+ # Install FlashAttention2
+ pip install flash-attn --no-build-isolation
+
+ # Install InternVLA-M1
+ pip install -e .
+ ```
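A quick post-install sanity check can confirm the environment is wired up; this is a minimal sketch, assuming the `InternVLA.model.framework.M1` import path used in the demos below:

```python
# Minimal post-install check: the imports below mirror the demo snippets later in this card.
import torch
import flash_attn  # installed via `pip install flash-attn --no-build-isolation`

from InternVLA.model.framework.M1 import InternVLA_M1  # import path assumed from the demos

print(f"torch {torch.__version__} | CUDA available: {torch.cuda.is_available()}")
print(f"flash-attn {flash_attn.__version__}")
print(f"InternVLA_M1 imported from {InternVLA_M1.__module__}")
```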
+
+ ## ⚡ Quick Interactive M1 Demo
+
+ Below are two collapsible examples: InternVLA-M1 chat and action prediction.
+
+ <details open>
+ <summary><b>InternVLA-M1 Chat Demo (image Q&A / Spatial Grounding)</b></summary>
+
+ ```python
+ from InternVLA.model.framework.M1 import InternVLA_M1
+ from PIL import Image
+ import requests
+ from io import BytesIO
+ import torch
+
+ def load_image_from_url(url: str) -> Image.Image:
+     resp = requests.get(url, timeout=15)
+     resp.raise_for_status()
+     img = Image.open(BytesIO(resp.content)).convert("RGB")
+     return img
+
+ saved_model_path = "/PATH/checkpoints/steps_50000_pytorch_model.pt"
+ internVLA_M1 = InternVLA_M1.from_pretrained(saved_model_path)
+
+ # Use the raw image link for direct download
+ image_url = "https://raw.githubusercontent.com/InternRobotics/InternVLA-M1/InternVLA-M1/assets/table.jpeg"
+ image = load_image_from_url(image_url)
+ question = "Give the bounding box for the apple."
+ response = internVLA_M1.chat_with_M1(image, question)
+ print(response)
+ ```
+ </details>
+
+ <details>
+ <summary><b>InternVLA-M1 Action Prediction Demo (two views)</b></summary>
+
+ ```python
+ from InternVLA.model.framework.M1 import InternVLA_M1
+ from PIL import Image
+ import requests
+ from io import BytesIO
+ import torch
+
+ def load_image_from_url(url: str) -> Image.Image:
+     resp = requests.get(url, timeout=15)
+     resp.raise_for_status()
+     img = Image.open(BytesIO(resp.content)).convert("RGB")
+     return img
+
+ saved_model_path = "/PATH/checkpoints/steps_50000_pytorch_model.pt"
+ internVLA_M1 = InternVLA_M1.from_pretrained(saved_model_path)
+
+ image_url = "https://raw.githubusercontent.com/InternRobotics/InternVLA-M1/InternVLA-M1/assets/table.jpeg"
+ view1 = load_image_from_url(image_url)
+ view2 = view1.copy()
+
+ # Construct input: batch size = 1, two views
+ batch_images = [[view1, view2]]  # List[List[PIL.Image]]
+ instructions = ["Pick up the apple and place it on the plate."]
+
+ if torch.cuda.is_available():
+     internVLA_M1 = internVLA_M1.to("cuda")
+
+ pred = internVLA_M1.predict_action(
+     batch_images=batch_images,
+     instructions=instructions,
+     cfg_scale=1.5,
+     use_ddim=True,
+     num_ddim_steps=10,
+ )
+ normalized_actions = pred["normalized_actions"]  # [B, T, action_dim]
+ print(normalized_actions.shape, type(normalized_actions))
  ```
+ </details>
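`predict_action` returns normalized actions. Below is a minimal sketch of mapping them back to physical command ranges, assuming the common [-1, 1] min-max convention and hypothetical per-dimension bounds; the real bounds come from the normalization statistics used during training:

```python
import numpy as np

# Hypothetical 7-DoF bounds (xyz delta, rotation delta, gripper); replace with the
# normalization statistics actually used for the training data.
ACTION_LOW = np.array([-0.05, -0.05, -0.05, -0.25, -0.25, -0.25, 0.0])
ACTION_HIGH = np.array([0.05, 0.05, 0.05, 0.25, 0.25, 0.25, 1.0])

def unnormalize(actions_norm: np.ndarray) -> np.ndarray:
    """Map actions from [-1, 1] back to [ACTION_LOW, ACTION_HIGH] per dimension."""
    return 0.5 * (actions_norm + 1.0) * (ACTION_HIGH - ACTION_LOW) + ACTION_LOW

# Example with the prediction from the demo above (shape [B, T, action_dim];
# T matches the action chunk of 8 in the training config further down):
# physical_actions = unnormalize(np.asarray(normalized_actions.detach().cpu()))
```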
+
+ ## 📘 Examples
+
+ We provide several end-to-end examples for reference:
+
+ * **Reproduce InternVLA-M1 in SimplerEnv**
+   [Example](/examples/SimplerEnv)
+
+ * **Reproduce InternVLA-M1 in LIBERO**
+   [Example](/examples/LIBERO)
+
+ * **Training/Deployment on real robots**
+   [Example](/examples/real_robot)
+
+ ## 📈 Model Zoo
+ We release a series of pretrained models and checkpoints to facilitate reproduction and downstream use.
+
+ ### ✅ Available Checkpoints
+
+ | Model | Description | Link |
+ |-------|-------------|------|
+ | **InternVLA-M1** | Main pretrained model | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1) |
+ | **InternVLA-M1-Pretrain-RT-1-Bridge** | Pretrained on RT-1 and Bridge data | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-Pretrain-RT-1-Bridge) |
+ | **InternVLA-M1-LIBERO-Long** | Fine-tuned on LIBERO Long-horizon tasks | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-LIBERO-Long) |
+ | **InternVLA-M1-LIBERO-Goal** | Fine-tuned on LIBERO Goal-conditioned tasks | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-LIBERO-Goal) |
+ | **InternVLA-M1-LIBERO-Spatial** | Fine-tuned on LIBERO Spatial reasoning tasks | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-LIBERO-Spatial) |
+ | **InternVLA-M1-LIBERO-Object** | Fine-tuned on LIBERO Object-centric tasks | [🤗 Hugging Face](https://huggingface.co/InternRobotics/InternVLA-M1-LIBERO-Object) |
+
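To pull one of these checkpoints programmatically, a minimal sketch using the standard `huggingface_hub` client (the repo id is taken from the table above; the local directory is just an example):

```python
from huggingface_hub import snapshot_download

# Download the LIBERO-Object checkpoint listed above into a local folder.
local_dir = snapshot_download(
    repo_id="InternRobotics/InternVLA-M1-LIBERO-Object",
    local_dir="./checkpoints/InternVLA-M1-LIBERO-Object",  # example destination
)
print("Checkpoint downloaded to:", local_dir)
# The downloaded files can then be used with InternVLA_M1.from_pretrained as in the
# demos above (the exact path depends on the checkpoint layout inside the repo).
```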
+ ## Training Details
+ ```yaml
  action_chunk: 8
  batch_size: 128
  training_steps: 30k
  ```
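For a rough sense of scale, a back-of-the-envelope sketch based only on the settings above (assuming the standard meanings of these fields: one batch per step, one action chunk per sample):

```python
# Back-of-the-envelope scale implied by the training settings above.
action_chunk = 8
batch_size = 128
training_steps = 30_000

samples_seen = batch_size * training_steps              # batches * steps -> samples consumed
action_steps_supervised = samples_seen * action_chunk   # each sample supervises a chunk of 8 actions
print(f"{samples_seen:,} samples seen")                        # 3,840,000
print(f"{action_steps_supervised:,} action steps supervised")  # 30,720,000
```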

+ # 🗺️ Roadmap
+
+ * [ ] Add Co-Training Multimodal Multitask README (the co-training code is already available)
+ * [x] 0930: Unified Inference Server for SimplerEnv and LIBERO
+ * [x] 0918: Release model weights
+
+ # 🤝 Contributing
+
+ We welcome contributions via Pull Requests or Issues.
+ Please include detailed logs and reproduction steps when reporting bugs.
+
+ # 📜 Citation
+
+ If you find this useful in your research, please consider citing:
+
+ ```bibtex
+ @article{internvlam1,
+   title   = {InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy},
+   author  = {InternVLA-M1 Contributors},
+   journal = {arXiv preprint arXiv:2510.13778},
+   year    = {2025}
  }
+ ```
+
+ # 📬 Contact
+
+ * Issues: submit via GitHub Issues with detailed logs and reproduction steps
+
+ # 🙏 Acknowledgements
+
+ We thank the open-source community for their inspiring work. This project builds upon and is inspired by the following projects (alphabetical order):
+ - [IPEC-COMMUNITY](https://huggingface.co/IPEC-COMMUNITY): Curated OXE / LIBERO style multi-task datasets and formatting examples.
+ - [Isaac-GR00T](https://github.com/NVIDIA/Isaac-GR00T): Standardized action data loader (GR00T-LeRobot).
+ - [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL/blob/main/qwen-vl-finetune/README.md): Multimodal input/output format, data loader, and pretrained VLM backbone.
+ - [CogACT](https://github.com/microsoft/CogACT/tree/main/action_model): Reference for a DiT-style action head design.
+ - [Llavavla](https://github.com/JinhuiYE/llavavla): Baseline code structure and engineering design references.
+ - [GenManip Simulation Platform](https://github.com/InternRobotics/GenManip): Simulation platform for generalizable pick-and-place based on Isaac Sim.
+
+ Notes:
+ - If any required attribution or license header is missing, please open an issue and we will correct it promptly.
+ - All third-party resources remain under their original licenses; users should comply with the respective terms.
+
+ ---
+
+ Thanks for using **InternVLA-M1**! 🌟
+ If you find it useful, please consider giving us a ⭐ on GitHub.