qwbu committed 1bba95d (verified) · parent: f09b689

Update README.md

Files changed (1): README.md (+36, −3)
README.md CHANGED

---
license: apache-2.0
pipeline_tag: robotics
library_name: transformers
---

# UniVLA: Learning to Act Anywhere with Task-centric Latent Actions

The model was presented in the paper [UniVLA: Learning to Act Anywhere with Task-centric Latent Actions](https://huggingface.co/papers/2505.06111).

## UniVLA-7b for CALVIN test suites

Code can be found at [https://github.com/OpenDriveLab/UniVLA](https://github.com/OpenDriveLab/UniVLA).
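
If you want to load the checkpoint directly with 🤗 transformers (the `library_name` declared above), the sketch below shows one plausible loading path. It is hypothetical: the repo id `qwbu/univla-7b` is a placeholder, and it assumes the checkpoint exposes an OpenVLA-style `AutoModelForVision2Seq` interface via `trust_remote_code`; consult the GitHub repo for the officially supported entry point.

```python
# Hypothetical loading sketch -- assumes an OpenVLA-style remote-code interface.
# "qwbu/univla-7b" is a placeholder; substitute the actual model repo id.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

processor = AutoProcessor.from_pretrained("qwbu/univla-7b", trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    "qwbu/univla-7b",
    torch_dtype=torch.bfloat16,  # bf16 keeps the 7B model within a single modern GPU
    trust_remote_code=True,
).to("cuda:0")
```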

**🚀 Run the following script to start an evaluation on CALVIN ABC-D:**

```bash
# Multi-GPU evaluation is supported
torchrun --standalone --nnodes 1 --nproc-per-node 8 experiments/robot/calvin/run_calvin_eval_ddp.py \
  --calvin_root /path/to/your/calvin_root_path \
  --action_decoder_path /path/to/your/action_decoder.pt \
  --pretrained_checkpoint /path/to/your/calvin_finetuned_univla \
  --seed 7
```
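
For intuition, a launcher like the one above starts one process per GPU, and the evaluation script typically hands each rank a disjoint slice of the rollouts before aggregating success counts. The sketch below illustrates that pattern with `torch.distributed`; the episode count and `run_episode` are illustrative stand-ins, not the actual logic of `run_calvin_eval_ddp.py`.

```python
# Illustrative sketch of rank-wise episode sharding under torchrun.
# run_episode() is a stand-in for a full CALVIN rollout.
import os

import torch
import torch.distributed as dist


def run_episode(episode_id: int) -> float:
    """Placeholder: roll out one episode and return 1.0 on success, 0.0 otherwise."""
    return 0.0


def main():
    dist.init_process_group(backend="nccl")  # torchrun supplies RANK/WORLD_SIZE
    rank, world = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))

    episodes = list(range(1000))  # evaluation episode ids (count is illustrative)
    results = [run_episode(e) for e in episodes[rank::world]]  # disjoint strided shard

    # Sum successes and episode counts across ranks, then report once.
    totals = torch.tensor([sum(results), float(len(results))], device="cuda")
    dist.all_reduce(totals, op=dist.ReduceOp.SUM)
    if rank == 0:
        print(f"success rate: {totals[0].item() / totals[1].item():.3f}")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with the same `torchrun --standalone --nnodes 1 --nproc-per-node 8 ...` invocation, each of the 8 ranks evaluates every 8th episode, so wall-clock time drops roughly linearly with GPU count.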

## 📝 Citation

If you find our models useful in your work, please cite [our paper](https://arxiv.org/pdf/2505.06111):

```bibtex
@article{bu2025univla,
  title={UniVLA: Learning to Act Anywhere with Task-centric Latent Actions},
  author={Qingwen Bu and Yanting Yang and Jisong Cai and Shenyuan Gao and Guanghui Ren and Maoqing Yao and Ping Luo and Hongyang Li},
  journal={arXiv preprint arXiv:2505.06111},
  year={2025}
}
```