<h1 align="center">
  RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins<br>
</h1>
<a href="https://yaomarkmu.github.io/">Yao Mu</a><sup>* †</sup>, <a href="https://tianxingchen.github.io">Tianxing Chen</a><sup>* </sup>, Zanxin Chen<sup>* </sup>, Shijia Peng<sup>*</sup>,<br>Zeyu Gao, Zhiqian Lan, Yude Zou, Lunkai Lin, Zhiqiang Xie, <a href="http://luoping.me/">Ping Luo</a><sup>†</sup>.
<br>

**RoboTwin**: [Webpage (Coming Soon)]() | [PDF (Coming Soon)]() | [arXiv (Coming Soon)]()<br>
**RoboTwin (early version)**, accepted to <i style="color: red; display: inline;"><b>ECCV Workshop 2024 (Oral)</b></i>: [Webpage](https://robotwin-benchmark.github.io/early-version) | [PDF](https://arxiv.org/pdf/2409.02920) | [arXiv](https://arxiv.org/abs/2409.02920)<br>
<a href="https://hits.seeyoufarm.com"><img src="https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FTianxingChen%2FRoboTwin&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=Repo+Viewers&edge_flat=false"/></a>
| # 🛠️ Installation | |
See [INSTALLATION.md](./INSTALLATION.md) for installation instructions. Installation takes about 20 minutes.
# ℹ️ Task Information
| Coming Soon | |
| # 🧑🏻💻 Usage | |
| ## 1. Task Running and Data Collection | |
Run `run_task.sh` to run a task:
| ``` | |
| bash run_task.sh ${task_name} ${gpu_id} | |
| ``` | |
The default configuration for each task is at `config/${task_name}.yml`; to change a task setting, edit that file.
RoboTwin first runs an expert check (playing the seeds one by one) to make sure that only successful episodes are collected; this does not take long.
Collected data is saved to the `data` directory under the project root, usually as `.pkl` files.
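A collected episode can be inspected directly with Python's `pickle` module. The path and the printed keys below are assumptions for illustration (the exact layout depends on the task and config), so point the path at a file that actually exists under `data`:
```
# Minimal sketch for inspecting one collected .pkl sample.
# The path is hypothetical; use a file that actually exists under data/.
import pickle

with open("data/block_hammer_beat/episode0/0.pkl", "rb") as f:
    sample = pickle.load(f)

print(type(sample))
if isinstance(sample, dict):
    for key, value in sample.items():
        # Print array shapes where available, otherwise the value type.
        print(key, getattr(value, "shape", type(value)))
```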
| ## 2. Task Config | |
Data collection configurations are located in the `config` folder, one file per task. The important parameters are explained below (a sketch of a full config follows the list):
1. **render_freq**: Set to 0 to disable rendering; set it to 10, for example, if you want to watch the rollout.
| 2. **collect_data**: Data collection will only be enabled if set to True. | |
3. **camera_w, h**: Camera image width and height. There are 4 cameras in total: two on the wrists plus top and front views.
| 4. **pcd_crop**: Determines whether the obtained point cloud data is cropped to remove elements like tables and walls. | |
| 5. **pcd_down_sample_num**: The point cloud data is downsampled using the FPS (Farthest Point Sampling) method, set it to 0 to keep the raw point cloud data. | |
| 6. **data_type/endpose**: The 6D pose of the end effector, which still has some minor issues. | |
| 7. **data_type/qpos**: Represents the joint action. | |
8. **observer**: Decides whether to save an observer-view photo for easy observation.
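Putting the parameters above together, a task config might look roughly like the sketch below. This is illustrative only: the authoritative keys and defaults are the ones already in `config/${task_name}.yml`, and the values shown here are not the shipped defaults.
```
# Illustrative values only; see config/${task_name}.yml for the real defaults.
render_freq: 0              # 0 = no rendering; e.g. 10 to watch the rollout
collect_data: true          # only collect data when true
camera_w: 320               # camera image width (4 cameras: two wrist, top, front)
camera_h: 240               # camera image height
pcd_crop: true              # crop tables/walls out of the point cloud
pcd_down_sample_num: 1024   # FPS downsample target; 0 keeps the raw cloud
data_type:
  endpose: false            # 6D end-effector pose (still has minor issues)
  qpos: true                # joint actions
observer: false             # save an observer-view photo
```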
| ## 3. Deploy your policy | |
See `envs/base_task.py` and search for `TODO`; you will find the code below. Make sure that `policy.get_action(obs)` returns an action sequence (the predicted actions):
| ``` | |
| actions = model.get_action(obs) # TODO, get actions according to your policy and current obs | |
| ``` | |
You also need to modify `script/eval_policy.py` in the root directory to load your model for evaluation: search for `TODO` and modify the code to initialize your policy.
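As a concrete illustration, a wrapper like the one below satisfies the interface described above: the only stated requirement is a `get_action(obs)` method that returns a sequence of predicted actions. The class name, constructor argument, and action shape are all hypothetical:
```
# Hypothetical policy wrapper; only get_action(obs) is required for evaluation.
import numpy as np

class MyPolicy:
    def __init__(self, checkpoint_path):
        # Load your trained model here (any framework).
        self.checkpoint_path = checkpoint_path

    def get_action(self, obs):
        # Replace this stub with real inference on the current observation.
        # Returns an action sequence; the (horizon, action_dim) shape is illustrative.
        horizon, action_dim = 8, 14
        return np.zeros((horizon, action_dim))

# In script/eval_policy.py, the TODO then becomes something like:
#   model = MyPolicy("/path/to/checkpoint")
```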
Run the following command to run your policy in a specific task environment:
| ``` | |
| bash script/run_eval_policy.sh ${task_name} ${gpu_id} | |
| ``` | |
| ## 4. DP3 as baseline | |
| The DP3 code can be found in `policy/3D-Diffusion-Policy`. | |
After collecting data, process it for DP3 training (run from the root directory), providing the task name and the amount of data you want your policy to train with:
| ``` | |
| python script/pkl2zarr_dp3.py | |
| ``` | |
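To sanity-check the conversion before training, you can open the generated store with the `zarr` package and list its contents. The output path below is an assumption about what `pkl2zarr_dp3.py` writes, so point it at whatever file the script actually produced:
```
# Quick sanity check of a converted dataset.
# The path below is hypothetical; use the file pkl2zarr_dp3.py actually wrote.
import zarr

root = zarr.open("data/block_hammer_beat_10.zarr", mode="r")
print(root.tree())  # lists the groups/arrays written by the conversion script
```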
Then change to `policy/3D-Diffusion-Policy` and run the following command to train DP3:
| ``` | |
| bash train.sh ${task_name} ${expert_data_num} ${gpu_id} | |
| ``` | |
Run the following command to evaluate DP3 on a specific task:
| ``` | |
| bash eval.sh ${task_name} ${expert_data_num} ${gpu_id} | |
| ``` | |
| Current leaderboard: | |
| Task | Algorithm | 10 demos | 20 demos | 50 demos |
|--------------------------------|---------------------|--------------|--------------|--------------|
| Apple Cabinet Storage | DP3 (XYZ) | 41% | 59% | 75% |
| | DP3 (XYZ+RGB) | 22% | 41% | 60% |
| Block Handover | DP3 (XYZ) | 55% | 89% | 70% |
| | DP3 (XYZ+RGB) | 48% | 81% | 94% |
| Block Stack (Easy) | DP3 (XYZ) | / | / | / |
| | DP3 (XYZ+RGB) | 0% | 1% | 23% |
| Container Place | DP3 (XYZ) | 34% | 54% | 68% |
| | DP3 (XYZ+RGB) | 18% | 28% | 54% |
| Dual Bottles Pick (Easy) | DP3 (XYZ) | 10% | 48% | 78% |
| | DP3 (XYZ+RGB) | 9% | 41% | 75% |
| Empty Cup Place | DP3 (XYZ) | 3% | 30% | 73% |
| | DP3 (XYZ+RGB) | 7% | 23% | 82% |
| Pick Apple Messy | DP3 (XYZ) | 2% | 2% | 9% |
| | DP3 (XYZ+RGB) | 2% | 3% | 25% |
| Shoes Place | DP3 (XYZ) | 2% | 1% | 12% |
| | DP3 (XYZ+RGB) | 0% | 0% | 5% |
| Block Hammer Beat | DP3 (XYZ) | 37% | 45% | 60% |
| | DP3 (XYZ+RGB) | 36% | 41% | 73% |
| Block Sweep | DP3 (XYZ) | 49% | 80% | 96% |
| | DP3 (XYZ+RGB) | 70% | 98% | 99% |
| Block Stack (Hard) | DP3 (XYZ) | / | / | / |
| | DP3 (XYZ+RGB) | 0% | 0% | 3% |
| Diverse Bottles Pick | DP3 (XYZ) | 3% | 12% | 38% |
| | DP3 (XYZ+RGB) | 0% | 1% | 7% |
| Dual Bottles Pick (Hard) | DP3 (XYZ) | 13% | 29% | 46% |
| | DP3 (XYZ+RGB) | 11% | 26% | 48% |
| Mug Hanging | DP3 (XYZ) | 1% | 9% | 13% |
| | DP3 (XYZ+RGB) | 1% | 2% | 6% |
| Shoe Place | DP3 (XYZ) | 12% | 16% | 54% |
| | DP3 (XYZ+RGB) | 13% | 20% | 35% |
| Average | DP3 (XYZ) | 20.15% | 36.46% | 53.23% |
| | DP3 (XYZ+RGB) | 17.93% | 29.33% | 45.93% |
| # 🪄 Digital Twin Generation | |
| Deemos Rodin: [https://hyperhuman.deemos.com/rodin](https://hyperhuman.deemos.com/rodin) | |
| # ⁉️ Common Issues | |
If you fail to quit the running Python process with `Ctrl + C`, try `Ctrl + \` instead.
We found that Vulkan is not stable on some off-screen devices; try reconnecting with `ssh -X ...` if you run into problems.
Other common issues can be found in [COMMON_ISSUE](./COMMON_ISSUE.md).
| # 👍 Citation | |
| If you find our work useful, please consider citing: | |
| 1. RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins | |
| ``` | |
| Coming Soon | |
| ``` | |
| 2. RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins (**early version**), accepted to <i style="color: red; display: inline;"><b>ECCV Workshop 2024 (Oral)</b></i> | |
| ``` | |
| @article{mu2024robotwin, | |
| title={RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins (early version)}, | |
| author={Mu, Yao and Chen, Tianxing and Peng, Shijia and Chen, Zanxin and Gao, Zeyu and Zou, Yude and Lin, Lunkai and Xie, Zhiqiang and Luo, Ping}, | |
| journal={arXiv preprint arXiv:2409.02920}, | |
| year={2024} | |
| } | |
| ``` | |
| # 🏷️ License | |
| This repository is released under the MIT license. See [LICENSE](./LICENSE) for additional details. | |