---
license: mit
task_categories:
- robotics
language:
- en
tags:
- code
pretty_name: GEMBench dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for GEMBench dataset

💎 **GE**neralizable vision-language robotic **M**anipulation **Bench**mark Dataset

A benchmark for systematically evaluating the generalization capabilities of vision-and-language robotic manipulation policies, built on top of the RLBench simulator.

![GEMBench](dataset_overview.png)

💻 **GEMBench Project Webpage:** https://www.di.ens.fr/willow/research/gembench/

📈 **Leaderboard:** https://paperswithcode.com/sota/robot-manipulation-generalization-on-gembench

## Dataset Structure
The dataset is organized as follows:
```
  - gembench
      - train_dataset
          - microsteps: 567M, initial configurations for each episode
          - keysteps_bbox: 160G, extracted keysteps data
          - keysteps_bbox_pcd: (used to train 3D-LOTUS)
              - voxel1cm: 10G, processed point clouds
              - instr_embeds_clip.npy: instructions encoded by CLIP text encoder
          - motion_keysteps_bbox_pcd: (used to train 3D-LOTUS++ motion planner)
              - voxel1cm: 2.8G, processed point clouds
              - action_embeds_clip.npy: action names encoded by CLIP text encoder
      - val_dataset
          - microsteps: 110M, initial configurations for each episode
          - keysteps_bbox_pcd:
              - voxel1cm: 941M, processed point clouds
      - test_dataset
          - microsteps: 2.2G, initial configurations for each episode
```
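The `.npy` files are standard NumPy archives. As a quick orientation, here is a minimal sketch for inspecting one of them after downloading the dataset (the local path below is hypothetical, and the file may hold either a plain array or a pickled object such as a dict of instruction embeddings):

```python
import numpy as np

# Hypothetical local path; adjust to wherever you downloaded the dataset.
path = "gembench/train_dataset/keysteps_bbox_pcd/instr_embeds_clip.npy"

# allow_pickle=True covers both plain arrays and pickled objects
# (e.g. a dict mapping instructions to CLIP embeddings).
data = np.load(path, allow_pickle=True)
print(type(data), getattr(data, "shape", None))
```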

## 🛠️ Benchmark Installation

1. Create and activate your conda environment:
```bash
conda create -n gembench python=3.10

conda activate gembench
```
2. Install RLBench.

First, download CoppeliaSim (see the installation instructions [here](https://github.com/stepjam/PyRep?tab=readme-ov-file#install)):
```bash
# change the version if necessary
wget https://www.coppeliarobotics.com/files/V4_1_0/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz
tar -xvf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz
```

Add the following to your `~/.bashrc` file (the paths assume you run this from the directory where you extracted CoppeliaSim):
```bash
export COPPELIASIM_ROOT=$(pwd)/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COPPELIASIM_ROOT
export QT_QPA_PLATFORM_PLUGIN_PATH=$COPPELIASIM_ROOT
```
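Before installing PyRep, it can help to confirm these variables are visible to Python, since PyRep reads `COPPELIASIM_ROOT` when it loads the simulator. A minimal check, assuming you have already run `source ~/.bashrc`:

```python
import os

# Print the CoppeliaSim-related variables exported in ~/.bashrc.
for var in ("COPPELIASIM_ROOT", "LD_LIBRARY_PATH", "QT_QPA_PLATFORM_PLUGIN_PATH"):
    print(f"{var} = {os.environ.get(var, '<not set>')}")
```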

Install PyRep and RLBench:
```bash
git clone https://github.com/cshizhe/PyRep.git
cd PyRep
pip install -r requirements.txt
pip install .
cd ..

# Our modified version of RLBench to support the new tasks in GEMBench
git clone https://github.com/rjgpinel/RLBench
cd RLBench
pip install -r requirements.txt
pip install .
cd ..
```
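A quick import smoke test confirms both packages installed correctly. This only checks the imports (it assumes `COPPELIASIM_ROOT` is set as above); actually launching a scene additionally requires a display or a virtual framebuffer such as Xvfb:

```python
# Smoke test: both imports should succeed without errors.
from pyrep import PyRep
import rlbench

print("PyRep and RLBench imported successfully")
```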

## Evaluation

Please check the official 3D-LOTUS++ code repository for evaluation instructions:

https://github.com/vlc-robot/robot-3dlotus?tab=readme-ov-file#evaluation


## Citation

If you use the GEMBench benchmark or find our code helpful, please cite our [work](https://arxiv.org/abs/2410.01345):

**BibTeX:**

```bibtex
@inproceedings{garcia25gembench,
    author    = {Ricardo Garcia and Shizhe Chen and Cordelia Schmid},
    title     = {Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    year      = {2025}
}
```

## Contact

[Ricardo Garcia-Pinel](mailto:[email protected])