# π₀.₅ (Pi05)
These weights come directly from Physical Intelligence's pi05_base model, converted to PyTorch with the openpi conversion script.
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open-source OpenPI repository.
## Model Overview
π₀.₅ represents a significant evolution from π₀, developed by Physical Intelligence to address a central challenge in robotics: open-world generalization. While robots can already perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
### The Generalization Challenge
As Physical Intelligence explains, the fundamental challenge isn't raw agility or dexterity, but generalization: the ability to correctly perform tasks in new settings with new objects. Consider a robot cleaning different homes: each home has different objects in different places. Generalization must occur at multiple levels:
- Physical Level: Understanding how to pick up a spoon (by the handle) or a plate (by the edge), even with unseen objects in cluttered environments
- Semantic Level: Understanding task semantics, such as where to put clothes and shoes (in the laundry hamper, not on the bed) and which tools are appropriate for cleaning spills
- Environmental Level: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals
### Co-Training on Heterogeneous Data
The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:
- Multimodal Web Data: Image captioning, visual question answering, object detection
- Verbal Instructions: Humans coaching robots through complex tasks step-by-step
- Subtask Commands: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
- Cross-Embodiment Robot Data: Data from various robot platforms with different capabilities
- Multi-Environment Data: Static robots deployed across many different homes
- Mobile Manipulation Data: ~400 hours of mobile robot demonstrations
This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.
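To make the mixture idea concrete, the sketch below shows one generic way heterogeneous sources can be combined with weighted sampling in plain PyTorch. This is an illustrative example only, not the openpi or LeRobot training pipeline; the source names, sizes, and mixture ratios are invented for illustration.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

# Placeholder datasets standing in for each heterogeneous source (sizes are made up).
sources = {
    "web_vqa": TensorDataset(torch.randn(1000, 8)),             # multimodal web data
    "verbal_instructions": TensorDataset(torch.randn(200, 8)),  # human coaching
    "cross_embodiment": TensorDataset(torch.randn(500, 8)),     # other robot platforms
    "mobile_manipulation": TensorDataset(torch.randn(300, 8)),  # mobile robot demos
}
# Hypothetical mixture ratios; the real ratios used for pi0.5 are not specified here.
mixture = {
    "web_vqa": 0.3,
    "verbal_instructions": 0.2,
    "cross_embodiment": 0.3,
    "mobile_manipulation": 0.2,
}

combined = ConcatDataset(list(sources.values()))
# Give every sample the weight of its source divided by the source size, so the
# sampler draws batches according to the mixture ratios rather than raw dataset sizes.
weights = torch.cat([
    torch.full((len(ds),), mixture[name] / len(ds)) for name, ds in sources.items()
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
loader = DataLoader(combined, batch_size=32, sampler=sampler)
```

Normalizing each weight by the source size keeps a small but important source (such as verbal instructions) regularly represented in batches instead of being drowned out by larger datasets.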
## Training
Here's a complete training command for fine-tuning the base π₀.₅ model on your own dataset:
```bash
python src/lerobot/scripts/train.py \
  --dataset.repo_id=your_dataset \
  --policy.type=pi05 \
  --output_dir=./outputs/pi05_training \
  --job_name=pi05_training \
  --policy.repo_id=your_repo_id \
  --policy.pretrained_path=lerobot/pi05_base \
  --policy.compile_model=true \
  --policy.gradient_checkpointing=true \
  --wandb.enable=true \
  --policy.dtype=bfloat16 \
  --steps=3000 \
  --policy.scheduler_decay_steps=3000 \
  --policy.device=cuda \
  --batch_size=32
```
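Once training finishes, the checkpoint can be loaded for inference. The snippet below is a minimal sketch assuming the π₀.₅ policy exposes LeRobot's usual `from_pretrained` / `select_action` interface; the exact import path, class name, checkpoint layout, and observation keys are assumptions and may differ across LeRobot versions and datasets.

```python
# Minimal inference sketch; class path, checkpoint path, and observation keys
# are assumptions, adjust them to your LeRobot version and dataset features.
import torch
from lerobot.policies.pi05.modeling_pi05 import PI05Policy  # import path may differ

policy = PI05Policy.from_pretrained(
    "./outputs/pi05_training/checkpoints/last/pretrained_model"
)
policy.eval().to("cuda")

# One observation batch; keys must match the features your dataset was recorded with.
observation = {
    "observation.images.top": torch.rand(1, 3, 224, 224, device="cuda"),
    "observation.state": torch.rand(1, 14, device="cuda"),
    "task": ["pick up the pillow"],
}
with torch.no_grad():
    action = policy.select_action(observation)
print(action.shape)
```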
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
  title={Open-World Robotic Manipulation with Vision-Language-Action Models},
  author={Physical Intelligence},
  year={2024},
  url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository

https://github.com/Physical-Intelligence/openpi
## License
This model follows the same license as the original OpenPI repository.