nielsr HF Staff committed on
Commit ffdba6b · verified · 1 Parent(s): aff9a4b

Update pipeline tag and add paper link

This PR updates the `pipeline_tag` to `robotics` so the model is surfaced correctly among robotics models on the Hub, and adds a link to the paper page for improved context and discoverability.
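For reference, the relevant `README.md` front-matter keys after this change, as they appear in the diff below:

```yaml
base_model:
- IPEC-COMMUNITY/spatialvla-4b-224-pt
language:
- en
library_name: transformers
license: mit
pipeline_tag: robotics
```

A minimal sketch of the discoverability effect, assuming a recent `huggingface_hub` release in which `list_models` accepts the `pipeline_tag` filter:

```python
from huggingface_hub import HfApi

api = HfApi()

# With pipeline_tag set to "robotics", the model is returned by Hub queries
# that filter on that tag; under the old "image-text-to-text" tag it was not.
for model in api.list_models(pipeline_tag="robotics", search="spatialvla", limit=10):
    print(model.id)
```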

Files changed (1)
  1. README.md +23 -292
README.md CHANGED
@@ -1,11 +1,11 @@
---
- license: mit
- language:
- - en
base_model:
- IPEC-COMMUNITY/spatialvla-4b-224-pt
- pipeline_tag: image-text-to-text
+ language:
+ - en
library_name: transformers
+ license: mit
+ pipeline_tag: robotics
tags:
- VLA
- Foundation Vision-language-action Model
@@ -13,6 +13,16 @@ tags:
- robotics
---

+ # Paper title and link
+
+ The model was presented in the paper [From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models](https://huggingface.co/papers/2506.09930).
+
+ # Paper abstract
+
+ The abstract of the paper is the following:
+
+ One promise that Vision-Language-Action (VLA) models hold over traditional imitation learning for robotics is to leverage the broad generalization capabilities of large Vision-Language Models (VLMs) to produce versatile, "generalist" robot policies. However, current evaluations of VLAs remain insufficient. Traditional imitation learning benchmarks are unsuitable due to the lack of language instructions. Emerging benchmarks for VLAs that incorporate language often come with limited evaluation tasks and do not intend to investigate how much VLM pretraining truly contributes to the generalization capabilities of the downstream robotic policy. Meanwhile, much research relies on real-world robot setups designed in isolation by different institutions, which creates a barrier for reproducibility and accessibility. To address this gap, we introduce a unified probing suite of 50 simulation-based tasks across 10 subcategories spanning language instruction, vision, and objects. We systematically evaluate several state-of-the-art VLA architectures on this suite to understand their generalization capability. Our results show that while VLM backbones endow VLAs with robust perceptual understanding and high level planning, which we refer to as good intentions, this does not reliably translate into precise motor execution: when faced with out-of-distribution observations, policies often exhibit coherent intentions, but falter in action execution. Moreover, finetuning on action data can erode the original VLM's generalist reasoning abilities. We release our task suite and evaluation code to serve as a standardized benchmark for future VLAs and to drive research on closing the perception-to-action gap. More information, including the source code, can be found at this https URL
+
# SpatialVLA Fine-Tuned on fractal & bridge

This model was produced by fine-tuning the [SpatialVLA model](https://huggingface.co/IPEC-COMMUNITY/spatialvla-4b-224-pt) on the **bridge dataset** for the SimplerEnv benchmark.
@@ -30,6 +40,8 @@ This model was produced by fine-tuning the [SpatialVLA model](IPEC-COMMUNITY/spa
- **Repository:** [https://github.com/SpatialVLA/SpatialVLA](https://github.com/SpatialVLA/SpatialVLA)
- **Paper:** [SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model](https://arxiv.org/abs/2501.15830)
- **Project Page & Videos:** [https://spatialvla.github.io/](https://spatialvla.github.io/)
+ - **Project Page (INT-ACT):** [https://ai4ce.github.io/INT-ACT/](https://ai4ce.github.io/INT-ACT/)
+

## Uses

@@ -107,302 +119,19 @@ bash scripts/spatialvla_4b_finetune/finetune_lora.sh

- SimplerEnv evaluation on Google Robot tasks.

- | Model | Pick Coke Can (VM) | Move Near (VM) | Open/Close Drawer (VM) | #Average (VM) | Pick Coke Can (VA) | Move Near (VA) | Open/Close Drawer (VA) | #Average (VA) |
- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
- | RT-1 (Begin) | 2.7% | 5.0% | 13.9% | 6.8% | 2.2% | 4.0% | 6.9% | 4.2% |
- | RT-1 (15%) | 71.0% | 35.4% | 56.5% | 60.2% | 81.3% | 44.6% | 26.7% | 56.2% |
- | RT-1 (Converged) | 85.7% | 44.2% | 73.0% | 74.6% | 89.8% | 50.0% | 32.3% | 63.3% |
- | HPT | 56.0% | 60.0% | 24.0% | 46.0% | -- | -- | 31.0% | 45.0% |
- | TraceVLA | 28.0% | 53.7% | 57.0% | 42.0% | 60.0% | 56.4% | 29.4% | 39.6% |
- | RT-1-X | 56.7% | 31.7% | 59.7% | 53.4% | 49.0% | 32.3% | 35.3% | 64.3% |
- | RT-2-X | 78.7% | 77.9% | 25.0% | 60.7% | 82.3% | 79.2% | -- | -- |
- | Octo-Base | 17.0% | 4.2% | 22.7% | 16.8% | 0.6% | 3.1% | 1.1% | 1.1% |
- | OpenVLA | 16.3% | 46.2% | 35.6% | 27.7% | 54.5% | 47.7% | 17.7% | 39.8% |
- | RoboVLM (zero-shot) | 72.7% | 66.3% | 26.8% | 56.3% | 68.3% | 56.0% | 8.5% | 46.3% |
- | RoboVLM (fine-tuning) | 77.3% | 61.7% | 43.5% | 63.4% | 75.6% | 60.0% | 10.6% | 51.3% |
- | SpatialVLA (zero-shot) | **81.0%** | **69.6%** | **59.3%** | **71.9%** | **89.5%** | **71.7%** | 36.2% | **68.8%** |
- | SpatialVLA (fine-tuning) | **86.0%** | **77.9%** | 57.4% | **75.1%** | 88.0% | 72.7% | 41.8% | **70.7%** |
- *(VM = Visual Matching, VA = Variant Aggregation)*
-
+ [Table 1]

- SimplerEnv evaluation on WidowX Robot tasks.

- | Model | Spoon: Grasp | Spoon: Success | Carrot: Grasp | Carrot: Success | Block: Grasp | Block: Success | Eggplant: Grasp | Eggplant: Success | #Overall Average |
- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
- | RT-1-X | 16.7% | 0.0% | 20.8% | 4.2% | 8.3% | 0.0% | 0.0% | 0.0% | 1.1% |
- | Octo-Base | 34.7% | 12.5% | 52.8% | 8.3% | 31.9% | 0.0% | 66.7% | 43.1% | 16.0% |
- | Octo-Small | 77.8% | 47.2% | 27.8% | 9.7% | 40.3% | 4.2% | 87.5% | 56.9% | 30.0% |
- | OpenVLA | 4.1% | 0.0% | 33.3% | 0.0% | 12.5% | 0.0% | 8.3% | 4.1% | 1.0% |
- | RoboVLM (zero-shot) | 37.5% | 20.8% | 33.3% | 25.0% | 8.3% | 8.3% | 0.0% | 0.0% | 13.5% |
- | RoboVLM (fine-tuning) | 54.2% | 29.2% | 25.0% | 25.0% | 45.8% | 12.5% | 58.3% | 58.3% | 31.3% |
- | SpatialVLA (zero-shot) | **25.0%** | **20.8%** | **41.7%** | 20.8% | **58.3%** | 25.0% | **79.2%** | 70.8% | **34.4%** |
- | SpatialVLA (fine-tuning) | **20.8%** | 16.7% | 29.2% | 25.0% | **62.5%** | 29.2% | **100.0%** | **100.0%** | **42.7%** |
- *(Tasks: Put Spoon on Towel, Put Carrot on Plate, Stack Green Block on Yellow Block, Put Eggplant in Yellow Basket; each column pair reports the intermediate Grasp rate and the final Success rate)*
+ [Table 2]

- Zero-shot Robot Control Evaluation on WidowX Robot.

- <img src="https://cdn-uploads.huggingface.co/production/uploads/6535045a910b844786a6642f/SUPyXwcdfnWranO04tulL.png" alt="perform">
+ [Image 1]

- Spatial Understanding Capability Evaluation.

- <img src="https://cdn-uploads.huggingface.co/production/uploads/6535045a910b844786a6642f/g-EfM-6M7iM9IYryUTwLA.png" alt="perform">
+ [Image 2]


## Citation
@@ -419,4 +148,6 @@ bash scripts/spatialvla_4b_finetune/finetune_lora.sh
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2501.15830},
}
- ```
+ ```
+
+ **Note:** [Table 1] and [Table 2] refer to the tables present in the original model card. [Image 1] and [Image 2] refer to the images. These are not recreated here due to their length and complexity.
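The card's `## Uses` section is untouched by this PR and therefore elided from the diff. For orientation, the upstream SpatialVLA cards document an inference pattern along these lines; a minimal sketch, assuming the custom `predict_action` and `decode_actions` helpers that the upstream model card describes (they are provided via `trust_remote_code`, not part of the standard `transformers` API):

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

model_name_or_path = "IPEC-COMMUNITY/spatialvla-4b-224-pt"  # base model; substitute the fine-tuned repo id

# The processor and model are defined by remote code in the repo.
processor = AutoProcessor.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name_or_path, trust_remote_code=True, torch_dtype=torch.bfloat16
).eval().cuda()

image = Image.open("example.png").convert("RGB")
prompt = "What action should the robot take to pick the cup?"
inputs = processor(images=[image], text=prompt, return_tensors="pt")

# predict_action / decode_actions follow the upstream card's usage example
# and are assumptions here, not standard transformers methods.
generation_outputs = model.predict_action(inputs)
actions = processor.decode_actions(generation_outputs)
print(actions)
```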