SAFE: Multitask Failure Detection for Vision-Language-Action Models
Abstract
SAFE is a failure detector for vision-language-action models that generalizes to unseen tasks by learning from high-level internal features of the models.
While vision-language-action models (VLAs) have shown promising robotic behaviors across a diverse set of manipulation tasks, they achieve limited success rates when deployed on novel tasks out of the box. To allow these policies to safely interact with their environments, we need a failure detector that raises a timely alert so that the robot can stop, backtrack, or ask for help. However, existing failure detectors are trained and tested only on one or a few specific tasks, whereas VLAs require a detector that generalizes and detects failures in unseen tasks and novel environments as well. In this paper, we introduce the multitask failure detection problem and propose SAFE, a failure detector for generalist robot policies such as VLAs. We analyze the VLA feature space and find that VLAs have sufficient high-level knowledge about task success and failure, which is generic across different tasks. Based on this insight, we design SAFE to learn from VLA internal features and predict a single scalar indicating the likelihood of task failure. SAFE is trained on both successful and failed rollouts and is evaluated on unseen tasks. SAFE is compatible with different policy architectures; we test it extensively on OpenVLA, pi_0, and pi_0-FAST in both simulated and real-world environments. We compare SAFE with diverse baselines and show that SAFE achieves state-of-the-art failure detection performance and, using conformal prediction, the best trade-off between accuracy and detection time. More qualitative results can be found at https://vla-safe.github.io/.
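To make the abstract's method concrete, below is a minimal sketch of the two pieces it describes: a lightweight head that maps a VLA's internal features at each timestep to a scalar failure score, and a conformal-style calibration step that picks an alert threshold from held-out successful rollouts. All names, dimensions, and architectural choices here are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a SAFE-style failure detector. Assumptions (not from the
# paper's code): a pooled feature vector per timestep, a small MLP head, and
# split-conformal threshold calibration on successful rollouts.
import torch
import torch.nn as nn


class FailureScoreHead(nn.Module):
    """Maps a pooled VLA internal feature vector to a scalar failure score."""

    def __init__(self, feat_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, feat_dim) hidden states pooled from the VLA backbone
        return self.mlp(feats).squeeze(-1)  # (batch,) raw failure scores


def calibrate_threshold(scores_on_successes: torch.Tensor, alpha: float = 0.1) -> float:
    """Split-conformal-style calibration: choose a threshold so that roughly
    at most an alpha fraction of held-out *successful* rollouts would be
    flagged as failures.

    scores_on_successes: per-rollout max failure score over time, shape (n,).
    """
    n = scores_on_successes.numel()
    # Finite-sample-adjusted quantile level, as in split conformal prediction.
    q = min(1.0, (n + 1) * (1.0 - alpha) / n)
    return torch.quantile(scores_on_successes, q).item()


if __name__ == "__main__":
    # Usage sketch: alert the first time the score crosses the threshold.
    head = FailureScoreHead(feat_dim=4096)       # e.g., a 4096-d VLA feature
    cal_scores = torch.randn(100).abs()          # placeholder calibration scores
    tau = calibrate_threshold(cal_scores, alpha=0.1)
    feats_t = torch.randn(1, 4096)               # features at timestep t
    if head(feats_t).item() > tau:
        print("failure alert: stop, backtrack, or ask for help")
```

The key design point the abstract emphasizes is that the detector reads the policy's own internal features rather than raw observations, which is what lets a single scalar head transfer across tasks; the conformal calibration then trades off false alarms on successful rollouts against detection delay.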
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics (2025)
- RoboFAC: A Comprehensive Framework for Robotic Failure Analysis and Correction (2025)
- Exploring the Limits of Vision-Language-Action Manipulations in Cross-task Generalization (2025)
- ForceVLA: Enhancing VLA Models with a Force-aware MoE for Contact-rich Manipulation (2025)
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks (2025)
- Interactive Post-Training for Vision-Language-Action Models (2025)
- InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning (2025)