OpenVLA: An Open-Source Vision-Language-Action Model • arXiv:2406.09246 • Published Jun 13, 2024
Eliciting Compatible Demonstrations for Multi-Human Imitation Learning • arXiv:2210.08073 • Published Oct 14, 2022
DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset • arXiv:2403.12945 • Published Mar 19, 2024
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models • arXiv:2402.07865 • Published Feb 12, 2024
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents • arXiv:2306.16527 • Published Jun 21, 2023
Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities • arXiv:1704.06616 • Published Apr 21, 2017
A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action-Oriented and Goal-Oriented Instructions • arXiv:1707.08668 • Published Jul 26, 2017
Learning Adaptive Language Interfaces through Decomposition • arXiv:2010.05190 • Published Oct 11, 2020
Targeted Data Acquisition for Evolving Negotiation Agents • arXiv:2106.07728 • Published Jun 14, 2021
Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering • arXiv:2107.02331 • Published Jul 6, 2021
"No, to the Right" -- Online Language Corrections for Robotic Manipulation via Shared Autonomy • arXiv:2301.02555 • Published Jan 6, 2023
Learning Visually Guided Latent Actions for Assistive Teleoperation • arXiv:2105.00580 • Published May 2, 2021