---
license: mit
language:
- en
base_model:
- IPEC-COMMUNITY/spatialvla-4b-224-pt
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- VLA
- Foundation Vision-language-action Model
- Generalist Robot Policy
- robotics
---

# SpatialVLA Fine-Tuned on fractal & bridge

This model was produced by fine-tuning the [SpatialVLA model](https://huggingface.co/IPEC-COMMUNITY/spatialvla-4b-224-pt) on the **bridge dataset** for the SimplerEnv benchmark.

## Model Details

### Model Description

- **Developed by:** The SpatialVLA team, consisting of researchers from Shanghai AI Laboratory, ShanghaiTech, and TeleAI.
- **Model type:** Vision-language-action (language, image => robot actions)
- **Language(s) (NLP):** en
- **License:** MIT
- **Finetuned from model:** [paligemma2-3b-pt-224](https://huggingface.co/google/paligemma2-3b-pt-224)
- **Pretraining Dataset:** [Open X-Embodiment](https://robotics-transformer-x.github.io/) and [RH20T](https://rh20t.github.io/)
- **Repository:** [https://github.com/SpatialVLA/SpatialVLA](https://github.com/SpatialVLA/SpatialVLA)
- **Paper:** [SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model](https://arxiv.org/abs/2501.15830)
- **Project Page & Videos:** [https://spatialvla.github.io/](https://spatialvla.github.io/)

## Uses

SpatialVLA relies solely on Hugging Face Transformers 🤗, which makes deployment straightforward. If your environment supports `transformers >= 4.47.0`, you can use the code below to load the model and run inference directly (about 8.5 GB of GPU memory is required).

### Direct Use

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_name_or_path = "IPEC-COMMUNITY/spatialvla-4b-224-pt"
processor = AutoProcessor.from_pretrained(model_name_or_path, trust_remote_code=True)

model = AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16).eval().cuda()

# Prepare an input observation and a language instruction.
image = Image.open("example.png").convert("RGB")
prompt = "What action should the robot take to pick the cup?"
inputs = processor(images=[image], text=prompt, return_tensors="pt")
generation_outputs = model.predict_action(inputs)

# Un-normalize the predicted actions with the statistics of the bridge dataset.
actions = processor.decode_actions(generation_outputs, unnorm_key="bridge_orig/1.0.0")
print(actions)
```
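
Here, `unnorm_key` selects the dataset statistics used to un-normalize the predicted actions; for this checkpoint it should match the bridge split the model was fine-tuned on.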

### Out-of-Scope Use

SpatialVLA models do not zero-shot generalize to new (unseen) robot embodiments or to setups that are not represented in the pretraining mix. In these cases, we suggest collecting a dataset of demonstrations on the desired setup and fine-tuning SpatialVLA on it instead.

## How to Get Your Hands Dirty with the Model

If you want to use the model for fine-tuning or pre-training, first clone the [official repository](https://github.com/SpatialVLA/SpatialVLA):
```bash
git clone https://github.com/SpatialVLA/SpatialVLA.git
```

Then install the required packages and download the model from the Hugging Face model hub. The VLM backbone of SpatialVLA is PaliGemma 2, which requires `transformers >= 4.47.0`; hence, create a Python environment with Python >= 3.10.
```bash
conda create -n spatialvla python=3.10
conda activate spatialvla
```

Install the packages from the `requirements.txt` file. Note that we use a customised `dlimp` to support seed setting for reproducibility. If you run into any problems, please install `dlimp` manually from [dlimp_custom](https://github.com/SpatialVLA/dlimp_custom).

```bash
pip install -r requirements.txt
```
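
As a quick sanity check (not part of the official setup), you can verify that the installed `transformers` version meets the PaliGemma 2 requirement:

```python
# Hypothetical check: confirm the environment satisfies the transformers >= 4.47.0 requirement.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.47.0"), \
    f"transformers {transformers.__version__} is too old for the PaliGemma 2 backbone"
print("transformers", transformers.__version__, "OK")
```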

### Train from Scratch

SpatialVLA is pre-trained with 1.1 million real-robot demonstrations from the OXE and RH20T datasets on a cluster of 64 A100 GPUs for about 10 days, using a batch size of 2048. You can pre-train the model from scratch with the following command.

```bash
# torchrun
bash scripts/spatialvla_4b_pretrain/torchrun_pretrain.sh

# or in a slurm cluster
bash scripts/spatialvla_4b_pretrain/slurm_pretrain.sh
```

### Fine-tuning

Most of our fine-tuning experiments are conducted with LoRA on 4 or 8 A100 GPUs. You can use the following scripts for full-parameter or LoRA fine-tuning; for real-world experiments with small datasets, we recommend LoRA.

```bash
# full fine-tuning
bash scripts/spatialvla_4b_finetune/finetune_full.sh

# LoRA fine-tuning
bash scripts/spatialvla_4b_finetune/finetune_lora.sh
```
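
If you prefer to adapt the model outside the provided scripts, the sketch below shows how a LoRA setup with 🤗 PEFT might look. It is only an illustration under assumptions: the target module names (`q_proj`, `v_proj`) and the LoRA hyperparameters are placeholders, not the settings used in `finetune_lora.sh`.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModel, AutoProcessor

model_name_or_path = "IPEC-COMMUNITY/spatialvla-4b-224-pt"
processor = AutoProcessor.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16)

# Illustrative LoRA configuration; rank, alpha, dropout, and target modules are assumptions,
# not the official hyperparameters from the SpatialVLA fine-tuning scripts.
lora_config = LoraConfig(r=32, lora_alpha=32, lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

The data loading, action tokenization, and training loop are handled by the repository's scripts, which remain the reference implementation.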

## Evaluation

- SimplerEnv evaluation on Google Robot tasks.

<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: center;">
      <th rowspan="2">Model</th>
      <th colspan="4">Visual Matching</th>
      <th colspan="4">Variant Aggregation</th>
    </tr>
    <tr style="text-align: center;">
      <th>Pick Coke Can</th>
      <th>Move Near</th>
      <th>Open/Close Drawer</th>
      <th>#Average</th>
      <th>Pick Coke Can</th>
      <th>Move Near</th>
      <th>Open/Close Drawer</th>
      <th>#Average</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>RT-1 (Begin)</td>
      <td>2.7%</td>
      <td>5.0%</td>
      <td>13.9%</td>
      <td>6.8%</td>
      <td>2.2%</td>
      <td>4.0%</td>
      <td>6.9%</td>
      <td>4.2%</td>
    </tr>
    <tr>
      <td>RT-1 (15%)</td>
      <td>71.0%</td>
      <td>35.4%</td>
      <td>56.5%</td>
      <td>60.2%</td>
      <td>81.3%</td>
      <td>44.6%</td>
      <td>26.7%</td>
      <td>56.2%</td>
    </tr>
    <tr>
      <td>RT-1 (Converged)</td>
      <td>85.7%</td>
      <td>44.2%</td>
      <td>73.0%</td>
      <td>74.6%</td>
      <td>89.8%</td>
      <td>50.0%</td>
      <td>32.3%</td>
      <td>63.3%</td>
    </tr>
    <tr>
      <td>HPT</td>
      <td>56.0%</td>
      <td>60.0%</td>
      <td>24.0%</td>
      <td>46.0%</td>
      <td>--</td>
      <td>--</td>
      <td>31.0%</td>
      <td>45.0%</td>
    </tr>
    <tr>
      <td>TraceVLA</td>
      <td>28.0%</td>
      <td>53.7%</td>
      <td>57.0%</td>
      <td>42.0%</td>
      <td>60.0%</td>
      <td>56.4%</td>
      <td>29.4%</td>
      <td>39.6%</td>
    </tr>
    <tr>
      <td>RT-1-X</td>
      <td>56.7%</td>
      <td>31.7%</td>
      <td>59.7%</td>
      <td>53.4%</td>
      <td>49.0%</td>
      <td>32.3%</td>
      <td>35.3%</td>
      <td>64.3%</td>
    </tr>
    <tr>
      <td>RT-2-X</td>
      <td>78.7%</td>
      <td>77.9%</td>
      <td>25.0%</td>
      <td>60.7%</td>
      <td>82.3%</td>
      <td>79.2%</td>
      <td>--</td>
      <td>--</td>
    </tr>
    <tr>
      <td>Octo-Base</td>
      <td>17.0%</td>
      <td>4.2%</td>
      <td>22.7%</td>
      <td>16.8%</td>
      <td>0.6%</td>
      <td>3.1%</td>
      <td>1.1%</td>
      <td>1.1%</td>
    </tr>
    <tr>
      <td>OpenVLA</td>
      <td>16.3%</td>
      <td>46.2%</td>
      <td>35.6%</td>
      <td>27.7%</td>
      <td>54.5%</td>
      <td>47.7%</td>
      <td>17.7%</td>
      <td>39.8%</td>
    </tr>
    <tr>
      <td>RoboVLM (zero-shot)</td>
      <td>72.7%</td>
      <td>66.3%</td>
      <td>26.8%</td>
      <td>56.3%</td>
      <td>68.3%</td>
      <td>56.0%</td>
      <td>8.5%</td>
      <td>46.3%</td>
    </tr>
    <tr>
      <td>RoboVLM (fine-tuning)</td>
      <td>77.3%</td>
      <td>61.7%</td>
      <td>43.5%</td>
      <td>63.4%</td>
      <td>75.6%</td>
      <td>60.0%</td>
      <td>10.6%</td>
      <td>51.3%</td>
    </tr>
    <tr>
      <td>SpatialVLA (zero-shot)</td>
      <td><b>81.0%</b></td>
      <td><b>69.6%</b></td>
      <td><b>59.3%</b></td>
      <td><b>71.9%</b></td>
      <td><b>89.5%</b></td>
      <td><b>71.7%</b></td>
      <td>36.2%</td>
      <td><b>68.8%</b></td>
    </tr>
    <tr>
      <td>SpatialVLA (fine-tuning)</td>
      <td><b>86.0%</b></td>
      <td><b>77.9%</b></td>
      <td>57.4%</td>
      <td><b>75.1%</b></td>
      <td>88.0%</td>
      <td>72.7%</td>
      <td>41.8%</td>
      <td><b>70.7%</b></td>
    </tr>
  </tbody>
</table>

- SimplerEnv evaluation on WidowX Robot tasks.

<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: center;">
      <th rowspan="2">Model</th>
      <th colspan="2">Put Spoon on Towel</th>
      <th colspan="2">Put Carrot on Plate</th>
      <th colspan="2">Stack Green Block on Yellow Block</th>
      <th colspan="2">Put Eggplant in Yellow Basket</th>
      <th rowspan="2">#Overall Average</th>
    </tr>
    <tr style="text-align: center;">
      <th>Grasp Spoon</th>
      <th>Success</th>
      <th>Grasp Carrot</th>
      <th>Success</th>
      <th>Grasp Green Block</th>
      <th>Success</th>
      <th>Grasp Eggplant</th>
      <th>Success</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>RT-1-X</td>
      <td>16.7%</td>
      <td>0.0%</td>
      <td>20.8%</td>
      <td>4.2%</td>
      <td>8.3%</td>
      <td>0.0%</td>
      <td>0.0%</td>
      <td>0.0%</td>
      <td>1.1%</td>
    </tr>
    <tr>
      <td>Octo-Base</td>
      <td>34.7%</td>
      <td>12.5%</td>
      <td>52.8%</td>
      <td>8.3%</td>
      <td>31.9%</td>
      <td>0.0%</td>
      <td>66.7%</td>
      <td>43.1%</td>
      <td>16.0%</td>
    </tr>
    <tr>
      <td>Octo-Small</td>
      <td>77.8%</td>
      <td>47.2%</td>
      <td>27.8%</td>
      <td>9.7%</td>
      <td>40.3%</td>
      <td>4.2%</td>
      <td>87.5%</td>
      <td>56.9%</td>
      <td>30.0%</td>
    </tr>
    <tr>
      <td>OpenVLA</td>
      <td>4.1%</td>
      <td>0.0%</td>
      <td>33.3%</td>
      <td>0.0%</td>
      <td>12.5%</td>
      <td>0.0%</td>
      <td>8.3%</td>
      <td>4.1%</td>
      <td>1.0%</td>
    </tr>
    <tr>
      <td>RoboVLM (zero-shot)</td>
      <td>37.5%</td>
      <td>20.8%</td>
      <td>33.3%</td>
      <td>25.0%</td>
      <td>8.3%</td>
      <td>8.3%</td>
      <td>0.0%</td>
      <td>0.0%</td>
      <td>13.5%</td>
    </tr>
    <tr>
      <td>RoboVLM (fine-tuning)</td>
      <td>54.2%</td>
      <td>29.2%</td>
      <td>25.0%</td>
      <td>25.0%</td>
      <td>45.8%</td>
      <td>12.5%</td>
      <td>58.3%</td>
      <td>58.3%</td>
      <td>31.3%</td>
    </tr>
    <tr>
      <td>SpatialVLA (zero-shot)</td>
      <td><b>25.0%</b></td>
      <td><b>20.8%</b></td>
      <td><b>41.7%</b></td>
      <td>20.8%</td>
      <td><b>58.3%</b></td>
      <td>25.0%</td>
      <td><b>79.2%</b></td>
      <td>70.8%</td>
      <td><b>34.4%</b></td>
    </tr>
    <tr>
      <td>SpatialVLA (fine-tuning)</td>
      <td><b>20.8%</b></td>
      <td>16.7%</td>
      <td>29.2%</td>
      <td>25.0%</td>
      <td><b>62.5%</b></td>
      <td>29.2%</td>
      <td><b>100.0%</b></td>
      <td><b>100.0%</b></td>
      <td><b>42.7%</b></td>
    </tr>
  </tbody>
</table>

- Zero-shot Robot Control Evaluation on WidowX Robot.

<img src="https://cdn-uploads.huggingface.co/production/uploads/6535045a910b844786a6642f/SUPyXwcdfnWranO04tulL.png" alt="Zero-shot robot control evaluation on WidowX Robot">

- Spatial Understanding Capability Evaluation.

<img src="https://cdn-uploads.huggingface.co/production/uploads/6535045a910b844786a6642f/g-EfM-6M7iM9IYryUTwLA.png" alt="Spatial understanding capability evaluation">

## Citation

**BibTeX:**

```bibtex
@misc{qu2025spatialvlaexploringspatialrepresentations,
  title={SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model},
  author={Delin Qu and Haoming Song and Qizhi Chen and Yuanqi Yao and Xinyi Ye and Yan Ding and Zhigang Wang and JiaYuan Gu and Bin Zhao and Dong Wang and Xuelong Li},
  year={2025},
  eprint={2501.15830},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2501.15830},
}
```