Action Chunking with Transformers for Image-Based Spacecraft Guidance and Control
Abstract
We present an imitation learning approach for spacecraft guidance, navigation, and control (GNC) that achieves high performance from limited data. Using only 100 expert demonstrations, equivalent to 6,300 environment interactions, our method, which implements Action Chunking with Transformers (ACT), learns a control policy that maps visual and state observations to thrust and torque commands. ACT generates smoother, more consistent trajectories than a meta-reinforcement learning (meta-RL) baseline trained with 40 million interactions. We evaluate ACT on a rendezvous task, in-orbit docking with the International Space Station (ISS), and show that it achieves greater accuracy, smoother control, and higher sample efficiency than the baseline.
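To illustrate the chunked-policy idea described above, the sketch below shows one plausible way an ACT-style policy can map a camera image and a spacecraft state vector to a chunk of K future actions (3 thrust + 3 torque commands). This is a minimal PyTorch sketch under our own assumptions: the module sizes, chunk length, 13-dimensional state, and 6-dimensional action layout are illustrative and are not taken from the paper's implementation.

```python
# Minimal sketch of an ACT-style chunked policy (illustrative, not the authors' code).
# An image encoder and a state projection form the transformer memory; K learned
# queries are decoded into K future actions in a single forward pass.
import torch
import torch.nn as nn

class ChunkedPolicy(nn.Module):
    def __init__(self, state_dim=13, act_dim=6, chunk=20, d_model=256):
        super().__init__()
        self.backbone = nn.Sequential(          # tiny CNN standing in for a real image encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        self.state_proj = nn.Linear(state_dim, d_model)
        self.queries = nn.Parameter(torch.randn(chunk, d_model))   # one query per future action
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, act_dim)                    # -> [thrust_xyz, torque_xyz]

    def forward(self, image, state):
        # Memory is two tokens: the image feature and the projected state observation.
        mem = torch.stack([self.backbone(image), self.state_proj(state)], dim=1)
        tgt = self.queries.unsqueeze(0).expand(image.shape[0], -1, -1)
        return self.head(self.decoder(tgt, mem))                   # (batch, chunk, act_dim)

# Example usage with dummy inputs (shapes are assumptions for the sketch).
policy = ChunkedPolicy()
img = torch.randn(1, 3, 96, 96)
state = torch.randn(1, 13)
actions = policy(img, state)   # predicted action chunk, shape (1, 20, 6)
```

At deployment, ACT-style policies typically execute overlapping chunks and temporally ensemble the predictions for each timestep, which is consistent with the smoother, more consistent trajectories reported above.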