diff --git "a/7tAyT4oBgHgl3EQfQvZV/content/tmp_files/load_file.txt" "b/7tAyT4oBgHgl3EQfQvZV/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/7tAyT4oBgHgl3EQfQvZV/content/tmp_files/load_file.txt" @@ -0,0 +1,1448 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf,len=1447 +page_content='IEEE ROBOTICS AND AUTOMATION LETTERS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' PREPRINT VERSION.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' ACCEPTED DEC, 2022 1 Learning from Guided Play: Improving Exploration for Adversarial Imitation Learning with Simple Auxiliary Tasks Trevor Ablett1, Bryan Chan2, and Jonathan Kelly1 Abstract—Adversarial imitation learning (AIL) has become a popular alternative to supervised imitation learning that reduces the distribution shift suffered by the latter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' However, AIL requires effective exploration during an online reinforcement learning phase.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' In this work, we show that the standard, na¨ıve approach to exploration can manifest as a suboptimal local maximum if a policy learned with AIL sufficiently matches the expert distribution without fully learning the desired task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' This can be particularly catastrophic for manipulation tasks, where the difference between an expert and a non-expert state-action pair is often subtle.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' We present Learning from Guided Play (LfGP), a framework in which we leverage expert demonstrations of multiple exploratory, auxiliary tasks in addition to a main task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' The addition of these auxiliary tasks forces the agent to explore states and actions that standard AIL may learn to ignore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Additionally, this particular formulation allows for the reusability of expert data between main tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Our experimental results in a challenging multitask robotic manipulation domain indicate that LfGP significantly outperforms both AIL and behaviour cloning, while also being more expert sample efficient than these baselines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' To explain this performance gap, we provide further analysis of a toy problem that highlights the coupling between a local maximum and poor exploration, and also visualize the differences between the learned models from AIL and LfGP.' 
Index Terms—Imitation Learning, Reinforcement Learning, Transfer Learning

Manuscript received: Nov. 3, 2022; Accepted: Dec. 18, 2022. This paper was recommended for publication by Editor Jens Kober upon evaluation of the Associate Editor and Reviewers' comments.
1 Authors are with the Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory at the University of Toronto Institute for Aerospace Studies (UTIAS), Toronto, Ontario, Canada, M3H 5T6. Email: .@robotics.utias.utoronto.ca
2 Author is with the Department of Computing Science at the University of Alberta, Edmonton, Alberta, Canada, T6G 2E8. Email: bryan.chan@ualberta.ca
Digital Object Identifier (DOI): see top of this page.
3 Code, Blog, Appendix: https://papers.starslab.ca/lfgp

Fig. 1: Learning from Guided Play (LfGP) finds an effective stacking policy by learning to compose multiple simple auxiliary tasks (only Reach is shown, for this episode) along with stacking. Discriminator-Actor-Critic (DAC) [7], or off-policy AIL, reaches a local maximum action-value function and policy, failing to solve the task. Arrow direction indicates mean policy velocity action, red-to-yellow (background) indicates low-to-high learned value, while arrow colour indicates probability of closing (green) or opening (blue) the gripper.

I. INTRODUCTION

Exploration is a crucial part of effective reinforcement learning (RL). A variety of methods have attempted to optimize the exploration-exploitation trade-off of RL agents [1]–[3], but the development of a technique that generalizes across domains remains an open research problem. A simple, well-known approach to reduce the need for random exploration is to provide a dense, or "shaped," reward to learn from, but this can be very challenging to design appropriately [4]. Furthermore, the environment may not directly provide the low-level state information required for such a reward. An alternative to providing a dense reward is to learn a reward function from expert demonstrations of a task, in a process known as inverse RL (IRL) [5]. Many modern approaches to IRL are part of the adversarial imitation learning (AIL) family [6]. In AIL, rather than learning a reward function directly, the policy and a learned discriminator form a two-player min-max optimization problem, where the policy aims to confuse the discriminator by producing expert-like data, while the discriminator attempts to classify expert and non-expert data.

Although AIL has been shown to be more expert sample efficient than supervised imitation learning (also known as behavioural cloning, or BC) in continuous-control environments [6]–[8], its application to long-horizon robotic manipulation tasks with a wide distribution of possible initial configurations remains challenging [7], [9]. In this work, we investigate the use of AIL in a multitask robotic manipulation domain.
We find that a state-of-the-art AIL method, in which off-policy learning is used to maximize environment sample efficiency [7] (i.e., to reduce the quantity of environment interaction required from the online RL portion of AIL), is outperformed by BC with an equivalent amount of expert data, contradicting previous results [6]–[8]. Through a simplified example, simulated robotic experiments, and learned model analysis, we show that this outcome occurs because a model learned with expert data and a discriminator is susceptible to the deceptive reward problem [10]. In other words, while AIL, and more generally IRL, can provide something akin to a dense reward, this reward is not necessarily optimal for teaching, and AIL alone does not enforce sufficiently diverse exploration to escape locally optimal but globally poor models.

Fig. 2: The main components of our system for learning from guided play. In a multitask environment, a guide prompts an expert for a mix of multitask demonstrations, after which we learn a multitask policy through scheduled hierarchical AIL.
A locally optimal policy has converged to match a subset of the expert data, but in doing so, it avoids crucial states and actions (e.g., in Fig. 1, grasping the blue block) required to globally match the full expert set. To overcome this limitation of AIL, we present Learning from Guided Play (LfGP),4 in which we combine AIL with a scheduled approach to hierarchical RL (HRL) [12], allowing an agent to 'play' in the environment with an expert guide. Using expert demonstrations of multiple relevant auxiliary tasks (e.g., Reach, Lift, Move-Object), along with a main task (e.g., Stack, Bring, Insert), our scheduled hierarchical agent is able to learn tasks where AIL alone fails. Crucially, our formulation also allows auxiliary expert data to be reused between main tasks, further emphasizing the expert sample efficiency of our method.
4 Originally presented as a non-archival workshop paper [11].

We use the word play to describe an agent that simultaneously attempts and learns numerous tasks, freely composing them together, inspired by the playful (as opposed to goal-directed) phase of learning experienced by children [12]. In our case, guided represents two separate but related ideas: first, that the expert guides this play, as opposed to requiring hand-crafted sparse rewards as in [12] (right side of Fig. 2), and second, that the expert gathering of multitask, semi-structured demonstrations is guided by uniform-random task selection (middle of Fig. 2), rather than requiring the expert to choose transitions between goals, as in [13], [14].
Our specific contributions are the following:
1) A novel application of a hierarchical framework [12] to AIL that learns a reward and policy for a challenging main task by simultaneously learning rewards and policies for auxiliary tasks.
2) Manipulation experiments in which we demonstrate that AIL fails, while LfGP significantly outperforms both AIL and BC.
3) A thorough ablation study to examine the effects of various design choices for LfGP and our baselines.
4) Empirical analysis, including a simplified representative example and visualization of the learned models of LfGP and AIL, to better understand why AIL fails and how LfGP improves upon it.

II. PROBLEM FORMULATION

A Markov decision process (MDP) is defined as M = ⟨S, A, R, P, ρ_0, γ⟩, where the sets S and A are respectively the state and action space, R : S × A → ℝ is a reward function, P is the state-transition environment dynamics distribution, ρ_0 is the initial state distribution, and γ is the discount factor. Actions are sampled from a stochastic policy π(a|s). The policy π interacts with the environment to yield experience (s_t, a_t, r_t, s_{t+1}) for t = 0, . . . , ∞, where s_0 ∼ ρ_0(·), a_t ∼ π(·|s_t), s_{t+1} ∼ P(·|s_t, a_t), and r_t = R(s_t, a_t). When referring to finite-horizon tasks, t = T indicates the final timestep of a trajectory. For notational convenience, we assume infinite-horizon, non-terminating environments where t is unbounded, but the extension to the finite-horizon case is trivial.
We aim to learn a policy π that maximizes the expected return J(π) = E_π[G(τ_{0:∞})] = E_π[Σ_{t=0}^∞ γ^t R(s_t, a_t)], where τ_{t:∞} = {(s_t, a_t), . . .} is the trajectory starting with (s_t, a_t), and G(τ_{t:∞}) is the return of trajectory τ. In this work, we focus on imitation learning (IL), where R is unknown and instead we are given a finite set of expert demonstration (s, a) pairs B^E = {(s, a)^E, . . .}. In AIL, we attempt to simultaneously learn π and a discriminator D : S × A → [0, 1] that differentiates between expert samples (s, a)^E and policy samples (s, a)^π, and subsequently define R using D [6], [7]. To accommodate hierarchical learning, we augment M to contain auxiliary tasks, where T_aux = {T_1, . . . , T_K} are separate MDPs that share S, A, P, ρ_0, and γ with the main task T_main but have their own reward functions R_k. With this modification, we refer to entities in our model that are specific to task T ∈ T_all, T_all = T_aux ∪ {T_main}, as (·)_T. We assume that we have a set of expert data B^E_T for each task.
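To make this shared structure concrete, the following minimal sketch (ours, not part of the paper or its released code) shows one way the augmented task set could be represented: a single environment providing S, A, P, and ρ_0, one reward function R_T per task, and the discounted return G(τ) evaluated for a chosen task. The class and function names are illustrative assumptions.

# Illustrative only: all tasks share the same environment (dynamics, state/action
# spaces, initial state distribution) and discount factor, and differ only in R_T.
from typing import Callable, Dict, List, Tuple
import numpy as np

State = np.ndarray
Action = np.ndarray
RewardFn = Callable[[State, Action], float]

class MultitaskMDP:
    def __init__(self, env, task_rewards: Dict[str, RewardFn], gamma: float = 0.99):
        self.env = env                      # shared S, A, P, rho_0
        self.task_rewards = task_rewards    # one R_T per task (main + auxiliary)
        self.gamma = gamma

    def discounted_return(self, task: str, trajectory: List[Tuple[State, Action]]) -> float:
        # G(tau) = sum_t gamma^t R_T(s_t, a_t), evaluated for a single task T
        R = self.task_rewards[task]
        return sum(self.gamma ** t * R(s, a) for t, (s, a) in enumerate(trajectory))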
III. LOCAL MAXIMUM WITH OFF-POLICY AIL

In this section, we provide a representative example of how AIL can fail by reaching a locally maximum policy due to a learned deceptive reward [10] coupled with poor exploration. A simple six-state MDP is shown in Fig. 3, with ten state-conditional actions.

Fig. 3: An MDP, analogous to stacking, with an expert demonstration. Poor exploration can lead AIL to learn a suboptimal policy.

We refer to actions as a_t = a_nm and states as s_t = s_n, where t, n, and m refer to the current timestep, current state, and next state, respectively. The reward function is R(s_5, a_55) = +1, R(s_1, a_15) = −5, and 0 for all other state-action pairs. The initial state is always s_1, the fixed horizon length is 5, and no discounting is used. The MDP is meant to be roughly analogous to a stacking manipulation task: s_2, s_3, s_4, and s_6 represent the first block being reached, grasped, lifted, and dropped, respectively. State s_5 represents the gripper hovering over the second block (whether the first block has been stacked or not), while s_1 is the reset state, and a_15 represents reaching s_5 without grasping the first block. Taking action a_15 results in a total return of 1 (because R(s_1, a_15) = −5), since the first block has not actually been grasped. In our case, the agent does not receive any reward, and instead an expert demonstration of the optimal trajectory is provided. We will assume access to a learned (perfect) discriminator, and will use the AIRL [8] reward, so state-action pairs in the expert set receive +1 reward and all others receive −1. We define the action-value Q(s_t, a_t) as the expected value of taking action a_t in state s_t, and initialize it to zero for all (s, a) pairs.
We define our update rule as the standard Q-learning update [1], Q(s_t, a_t) ← Q(s_t, a_t) + α (R(s_t, a_t) + max_a Q(s_{t+1}, a) − Q(s_t, a_t)), with α = 0.1. The agent uses ε-greedy exploration, storing each (s_t, a_t, s_{t+1}) tuple into a buffer. After each episode, all Q values are updated to convergence using the whole buffer. After the first complete episode of {a_15, a_55, a_55, a_55, a_55}, Q(s_1, a_15) = 2.7 and Q(s_1, a_12) = 0. In the second ({a_12, a_26, a_61, a_15, a_55}) and third ({a_12, a_23, a_36, a_61, a_15}) episodes, the agent initially moves in the correct direction, but ultimately still fails. The final Q values in s_1 are Q(s_1, a_15) = 0.49 and Q(s_1, a_12) = 0.13.5 A policy maximizing Q, having simultaneously learned to avoid s_6 (by avoiding s_2 and s_3) and exploiting the (s_5, a_55) expert pair, will choose a_1 = a_15, giving a final return of 1 in the real MDP. This behaviour matches what we see in Fig. 1: due to the large negative reward from dropping the block, AIL learns a policy that avoids stacking altogether and merely reaches the second block, just as AIL here learns to skip s_2 and s_3 and exploit a_55. In both cases, poor initial exploration leads to a deceptive reward, which exacerbates poor exploration.
5 See six_state_mdp.py from our open-source code to reproduce this example.
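The following is a small, illustrative reconstruction of this experiment (the paper's own six_state_mdp.py is the authoritative version): tabular Q-learning on the six-state MDP with an AIRL-style ±1 reward standing in for a perfect discriminator. Any transition not spelled out in the text (e.g., a_46) is our assumption based on Fig. 3, the replay loop uses a fixed number of sweeps rather than exact convergence, and the outcome depends on the ε-greedy draws, so the values will not exactly match the numbers above.

import random
from collections import defaultdict

# transitions[s][a] = next state; action a_nm moves from s_n to s_m (cf. Fig. 3)
transitions = {
    1: {"a12": 2, "a15": 5},
    2: {"a23": 3, "a26": 6},
    3: {"a34": 4, "a36": 6},
    4: {"a45": 5, "a46": 6},   # a46 is our assumption; the exact edge set is in Fig. 3
    5: {"a55": 5},
    6: {"a61": 1},
}
expert_pairs = {(1, "a12"), (2, "a23"), (3, "a34"), (4, "a45"), (5, "a55")}

def reward(s, a):
    # AIRL-style reward from a "perfect" discriminator: +1 on expert pairs, -1 otherwise
    return 1.0 if (s, a) in expert_pairs else -1.0

Q = defaultdict(float)                      # tabular Q, initialized to zero
buffer, alpha, eps, horizon = [], 0.1, 0.2, 5

def greedy(s):
    return max(transitions[s], key=lambda a: Q[(s, a)])

for episode in range(3):                    # the text walks through three episodes
    s = 1                                   # every episode starts in s1
    for t in range(horizon):
        a = random.choice(list(transitions[s])) if random.random() < eps else greedy(s)
        buffer.append((s, a, transitions[s][a], t == horizon - 1))
        s = transitions[s][a]
    for _ in range(50):                     # replay the whole buffer repeatedly (approximate convergence)
        for s_b, a_b, s_n, done in buffer:
            bootstrap = 0.0 if done else max(Q[(s_n, a_n)] for a_n in transitions[s_n])
            Q[(s_b, a_b)] += alpha * (reward(s_b, a_b) + bootstrap - Q[(s_b, a_b)])

# Depending on the epsilon-greedy draws, Q(s1, a15) can end up larger than Q(s1, a12),
# reproducing the suboptimal "skip the grasp" policy described above.
print(Q[(1, "a15")], Q[(1, "a12")])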
IV. LEARNING FROM GUIDED PLAY (LFGP)

We now introduce Learning from Guided Play (LfGP). Our primary goal is to learn a policy π_Tmain that can solve the main task T_main, with a secondary goal of also learning auxiliary task policies π_T1, . . . , π_TK that are used for improved exploration. More specifically, we derive a hierarchical learning objective that is decomposed into three parts: i) recovering the reward function of each task with expert demonstrations, ii) training all policies to achieve their respective goals, and iii) using all policies for effective exploration in T_main. For a summary of the algorithm, see the supplementary material link in Footnote 3.

A. Learning the Reward Function

We first describe how to recover the reward functions from expert demonstrations. For each task T ∈ T_all, we learn a discriminator D_T(s, a) that is used to define the reward function for policy optimization. We construct the joint discriminator loss following [7] to train each discriminator in an off-policy manner:

L(D) = − Σ_{T ∈ T_all} ( E_B[log(1 − D_T(s, a))] + E_{B^E_T}[log(D_T(s, a))] ).    (1)

Each resulting discriminator D_T attempts to differentiate the occupancy measure between the distributions induced by B^E_T and B. We can use D_T to define various reward functions [7]; following [8], we define the reward function for each task T to be R_T(s_t, a_t) = log(D_T(s_t, a_t)) − log(1 − D_T(s_t, a_t)).
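To illustrate Eq. (1) and the reward definition above, here is a minimal PyTorch-style sketch (ours, not the paper's released implementation). The network architecture, the small epsilon added for numerical stability, and the function names are assumptions; the full loss in Eq. (1) sums the per-task term below over all tasks in T_all.

import torch
import torch.nn as nn

def make_discriminator(obs_dim: int, act_dim: int) -> nn.Module:
    # D_T(s, a) in (0, 1); one such network is kept per task T
    return nn.Sequential(
        nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

def discriminator_loss(D_T, policy_sa: torch.Tensor, expert_sa: torch.Tensor) -> torch.Tensor:
    # One task's term of Eq. (1): -E_B[log(1 - D_T(s,a))] - E_{B^E_T}[log D_T(s,a)]
    eps = 1e-6
    return -(torch.log(1.0 - D_T(policy_sa) + eps).mean() +
             torch.log(D_T(expert_sa) + eps).mean())

def airl_reward(D_T, sa: torch.Tensor) -> torch.Tensor:
    # R_T(s, a) = log D_T(s, a) - log(1 - D_T(s, a))
    d = D_T(sa).clamp(1e-6, 1.0 - 1e-6)
    return torch.log(d) - torch.log(1.0 - d)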
B. Learning the Hierarchical Agent

We adapt Scheduled Auxiliary Control (SAC-X) [12] to learn the hierarchical agent. The agent includes low-level intention policies (equivalently referred to as intentions), a high-level scheduler policy, as well as the Q-functions and the discriminators. The intentions aim to solve their corresponding tasks (i.e., the intention π_T aims to maximize the task return J(π_T)), whereas the scheduler aims to maximize the expected return for T_main by selecting a sequence of intentions to interact with the environment. For the remainder of the paper, when we refer to a policy, we are referring to an intention policy, as opposed to the scheduler, unless otherwise specified.

1) Learning the Intentions: We learn each intention using Soft Actor-Critic (SAC) [15], an actor-critic algorithm that maximizes the entropy-regularized objective, though any off-policy RL algorithm would suffice. The objective is

J(π_T) = E_{π_T}[ Σ_{t=0}^∞ γ^t (R_T(s_t, a_t) + αH(π_T(·|s_t))) ],    (2)

where the learned temperature α determines the importance of the entropy term and H(π_T(·|s_t)) is the entropy of the intention π_T at state s_t. The soft Q-function is

Q_T(s_t, a_t) = R_T(s_t, a_t) + E_{π_T}[ Σ_{t=0}^∞ γ^t (R_T(s_{t+1}, a_{t+1}) + αH(π_T(·|s_{t+1}))) ].    (3)

The intentions maximize the joint policy objective

L(π_int) = Σ_{T ∈ T_all} E_{s∼B_all, a∼π_T(·|s)}[ Q_T(s, a) − α log π_T(a|s) ],    (4)
where π_int refers to the set of intentions {π_Tmain, π_T1, . . . , π_TK} and B_all refers to the buffer containing every transition from interactions and demonstrations, as is done in [16], [17]. For policy evaluation, the soft Q-functions Q_T for each π_T minimize the joint soft Bellman residual

L(Q) = Σ_{T ∈ T_all} E_{(s,a,s′)∼B_all, a′∼π_T(·|s′)}[ (Q_T(s, a) − δ_T)^2 ],    (5)
δ_T = R_T(s, a) + γ (Q_T(s′, a′) − α log π_T(a′|s′)).    (6)

Crucially, because each task shares the common S, A, P, ρ_0, and γ, and we are using off-policy learning, all tasks can learn from all data, as in [12].
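The per-task losses above can be sketched as follows (an illustration of Eqs. (4)-(6), not the paper's code). The task objects, the sample_with_log_prob helper, and the batch layout are assumptions; the key point is that a single shared batch drawn from B_all is reused by every task's actor and critic update.

import torch

def critic_loss(Q_T, pi_T, reward_T, batch, gamma: float, alpha: float) -> torch.Tensor:
    s, a, s_next = batch["s"], batch["a"], batch["s_next"]
    with torch.no_grad():
        a_next, logp_next = pi_T.sample_with_log_prob(s_next)
        delta_T = reward_T(s, a) + gamma * (Q_T(s_next, a_next) - alpha * logp_next)  # Eq. (6)
    return ((Q_T(s, a) - delta_T) ** 2).mean()                                        # Eq. (5), one task

def actor_loss(Q_T, pi_T, batch, alpha: float) -> torch.Tensor:
    s = batch["s"]
    a, logp = pi_T.sample_with_log_prob(s)             # reparameterized sample
    return -(Q_T(s, a) - alpha * logp).mean()          # negative of Eq. (4)'s per-task term (minimized)

def joint_update(tasks, batch, gamma, alpha):
    # Sum the per-task losses over T_all; one shared batch from B_all is reused by every
    # task because all tasks share S, A, P, rho_0, and gamma. Each task's reward_T would
    # come from its discriminator (Section IV-A) in practice.
    total_critic = sum(critic_loss(t.Q, t.pi, t.reward, batch, gamma, alpha) for t in tasks)
    total_actor = sum(actor_loss(t.Q, t.pi, batch, alpha) for t in tasks)
    return total_critic, total_actor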
2) The Scheduler: SAC-X formulates learning the scheduler by maximizing the expected return of the main task [12]. In particular, let H be the number of possible intention switches within an episode and let each chosen intention execute for ξ timesteps. The H intention choices made within the episode are defined as T^{0:H−1} = {T^{(0)}, . . . , T^{(H−1)}}, where T^{(h)} ∈ T_all. The return of the main task, given chosen intentions, is then defined as

G_Tmain(T^{0:H−1}) = Σ_{h=0}^{H−1} Σ_{t=hξ}^{(h+1)ξ−1} γ^t R_Tmain(s_t, a_t),    (7)

where a_t ∼ π_{T^{(h)}}(·|s_t) is the action taken at timestep t, sampled from the chosen intention T^{(h)} in the h-th scheduler period. The scheduler for the h-th period, P^h_S, aims to maximize the expected main-task return E[G_Tmain(T^{h:H−1}) | P^h_S]. Although SAC-X describes a method to learn the scheduler [12], we find that a combination of two simple task-agnostic heuristics performs similarly in practice (see Section V-C2). Specifically, we use a weighted random scheduler (WRS) combined with handcrafted trajectories (HC). The WRS forms a prior categorical distribution over the set of tasks, with a higher probability mass p_Tmain for the main task and p_Tmain/K for all other tasks. This approach is comparable to the uniform scheduler from [12], with a bias towards the main task. The HC component is a small set of handcrafted trajectories of tasks that are sampled half of the time, forcing the scheduler to explore trajectories that would clearly be beneficial for completing the main task. The chosen handcrafted trajectories can be found in our code and in our supplementary material.
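A minimal sketch of this scheduling heuristic is shown below (ours; the probabilities and the example handcrafted sequence are assumptions rather than the paper's exact values): half of the time a handcrafted task sequence is followed, and otherwise each of the H periods is drawn from the WRS prior.

import random

def make_wrs_hc_scheduler(main_task, aux_tasks, p_main=0.5, handcrafted=None):
    tasks = [main_task] + list(aux_tasks)
    # WRS prior: mass p_main for the main task and p_main / K for each of the K auxiliary tasks
    weights = [p_main] + [p_main / len(aux_tasks)] * len(aux_tasks)
    handcrafted = handcrafted or [["reach", "lift", main_task]]   # assumed example sequence

    def schedule_episode(num_periods):
        if random.random() < 0.5 and handcrafted:                 # HC: handcrafted sequence half the time
            traj = random.choice(handcrafted)
            return [traj[min(h, len(traj) - 1)] for h in range(num_periods)]
        return random.choices(tasks, weights=weights, k=num_periods)  # WRS otherwise

    return schedule_episode

# usage sketch:
# scheduler = make_wrs_hc_scheduler("stack",
#     ["open-gripper", "close-gripper", "reach", "lift", "move-object"])
# print(scheduler(num_periods=4))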
C. Breaking Out of Local Maxima with LfGP

Returning to the discussion in Section III, resolving the local maximum problem with LfGP is straightforward. Suppose we include a go-right auxiliary task with B^E_go-right = {(s_1, a_12), (s_2, a_23), (s_3, a_34)}. When the scheduler chooses the go-right intention, the agent does not exploit the a_55 action, because the go-right discriminator learns that R(s_5, a_55) = −1. Since the transitions are stored in the shared buffer that the main intention also samples from, the agent can quickly obtain the correct, optimal value.

D. Expert Data Collection

We assume that each T ∈ T_all has, for evaluation purposes only, a binary indicator of success. In single-task imitation learning where this assumption is valid, expert data is typically collected by allowing the expert to control the agent until success conditions are met. At that point, the environment is reset following ρ_0 and collection is repeated for a fixed number of episodes or (s, a) pairs. We collect our expert data in this way for each T separately.

V. EXPERIMENTS

In this work, we are interested in answering the following questions about LfGP: 1) How does the performance of LfGP compare with BC and AIL in challenging manipulation tasks, in terms of success rate and expert sample efficiency? 2) What parts of LfGP are necessary for success? 3) How do the policies and action-value functions differ between AIL and LfGP?

A. Experimental Setup

We complete experiments in a simulation environment containing a Franka Emika Panda manipulator, one green and one blue block in a tray, fixed zones corresponding to the green and blue blocks, and one slot in each zone with < 1 mm tolerance for fitting the blocks (see bottom right of Fig. 4).

Fig. 4: Example successful runs of our four main tasks. Top to bottom: Stack, Unstack-Stack, Bring, Insert.
[Fig. 5 plots: four panels (Stack, Unstack-Stack, Bring, Insert); x-axis: updates/steps (millions); y-axis: success rate; curves: LfGP (multi), BC (multi), DAC (single), BC (single), Expert.]

Fig. 5: Performance results for LfGP, multitask BC, single-task BC, and DAC on all four tasks considered in this work. The x-axis corresponds to both gradient updates and environment steps for LfGP and DAC, and gradient updates only for both versions of BC. The shaded area corresponds to the standard deviation across five seeds. LfGP significantly outperforms the baselines on all tasks, and even in Bring, where it is matched by single-task BC, it is far more expert sample efficient.

The robot is controlled via delta-position commands, and the blocks and end-effector can both be reset anywhere above the tray.
The environment is designed such that several different challenging tasks can be completed within a common observation and action space. The main tasks that we investigate are Stack, Unstack-Stack, Bring, and Insert (see Fig. 4). For more details on our environment and definitions of task success, see the supplementary material link in Footnote 3. We also define a set of auxiliary tasks: Open-Gripper, Close-Gripper, Reach, Lift, Move-Object, and Bring (Bring is both a main task and an auxiliary task for Insert), all of which are reusable between main tasks.

We compare our method to several standard multitask and single-task baselines. A multitask algorithm simultaneously learns to complete a main task as well as auxiliary tasks, while the single-task algorithms only learn to complete the main task. In general, we consider a multitask algorithm to be more useful than a single-task algorithm, given the potential to reuse expert data and trained models for learning new tasks. To ensure a fair comparison, we provide single-task algorithms with an equivalent amount of total expert data as our multitask methods, as shown in Table I. In our main experiments, we compare LfGP to a multitask variant of behavioural cloning (BC), single-task BC, and Discriminator-Actor-Critic (DAC) [7], a state-of-the-art approach to AIL. We train multitask BC with a multitask mean squared error objective,

L(π_int) = Σ_{T ∈ T_all} Σ_{(s,a) ∈ B^E_T} (π_T(s) − a)^2,    (8)

while single-task BC is trained with the corresponding single-task version.
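For concreteness, Eq. (8) can be sketched as follows (an illustrative snippet, not the paper's implementation; the per-task buffer sampling interface is an assumption): each task has a deterministic policy head trained with mean squared error against the expert actions from that task's demonstration set.

import torch

def multitask_bc_loss(policies: dict, expert_buffers: dict) -> torch.Tensor:
    # policies[T] maps states to actions; expert_buffers[T] yields (s, a) batches from B^E_T
    total = 0.0
    for task, buffer in expert_buffers.items():
        s, a = buffer.sample()                     # assumed sampling interface
        total = total + ((policies[task](s) - a) ** 2).mean()
    return total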
This adjustment, matching the number of BC gradient updates to those of LfGP and DAC, has been shown to dramatically increase the performance of BC [18], [19], particularly compared to the more common practice of using early stopping, as is done in [6], [7]. We validate that this change significantly improves BC performance in our ablation study (see Section V-C4).

TABLE I: The number of (s, a) pairs used for each main and auxiliary task. The table illustrates the reusability of the expert data used to generate the performance results described in Section V-B. Each letter under "Dataset Sizes" is the first letter of a single (auxiliary) task, and letters marked with an asterisk indicate that a dataset was reused for more than one main task (e.g., Open-Gripper was used for all four main tasks). Multitask methods (e.g., LfGP) are able to reuse a large portion of the expert data, while single-task methods (e.g., single-task BC) cannot.

                Task            Dataset Sizes                          Reuse   Single   Total
Multitask       Stack           S, O*, C*, R*, L*, M*: 1k/task         5k      1k       6k
                Unstack-Stack   U, O*, C*, R*, L*, M*: 1k/task         5k      1k       6k
                Bring           B*, O*, C*, R*, L*, M*: 1k/task        6k      0        6k
                Insert          I, B*, O*, C*, R*, L*, M*: 1k/task     6k      1k       7k
Single-task     Stack           S: 6k                                  0       6k       6k
                Unstack-Stack   U: 6k                                  0       6k       6k
                Bring           B: 6k                                  0       6k       6k
                Insert          I: 7k                                  0       7k       7k

We gather expert data by first training an expert policy using Scheduled Auxiliary Control (SAC-X) [12]. We then run the expert policies to collect various amounts of expert data, as described in Section IV-D and Table I. We also collect an extra 200 expert (s_T, 0) pairs per auxiliary task, where T refers to the final timestep of an individual episode and 0 is an action of all zeros.
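To make the form of these extra examples concrete, the snippet below shows one way the 200 final-state pairs could be appended to an auxiliary-task expert buffer; the buffer layout and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def append_final_state_examples(expert_buffer, final_states, act_dim):
    """Append (s_T, 0) pairs: successful final states paired with an all-zero action.

    expert_buffer: dict with 'states' (N, obs_dim) and 'actions' (N, act_dim) arrays
                   (an assumed layout for illustration).
    final_states:  array of shape (num_examples, obs_dim), e.g. 200 per auxiliary task.
    """
    zero_actions = np.zeros((final_states.shape[0], act_dim))
    expert_buffer["states"] = np.concatenate([expert_buffer["states"], final_states], axis=0)
    expert_buffer["actions"] = np.concatenate([expert_buffer["actions"], zero_actions], axis=0)
    return expert_buffer
```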
Adding these final-state pairs is equivalent to adding example data, as is done in example-based RL [20]. This addition improved final task performance, likely because it biases the reward towards completing the final task. It is worth noting that, in the real world, final states are easier to collect than full demonstrations, and LfGP does not require any modifications to accommodate these extra examples. Finally, even without this addition, LfGP still outperforms the baselines (see Section V-C1).

B. Performance Results

Performance results for all methods and main tasks are shown in Fig. 5. We freeze the policies every 100k steps and evaluate those policies for 50 randomized episodes, using only the mean action outputs for stochastic policies. For all algorithms, we test across five seeds and report the mean and standard deviation over all seeds.

In Stack, Unstack-Stack, and Insert, LfGP achieves expert performance, while the baselines all perform significantly worse. In Bring, LfGP does not quite achieve expert performance, and is matched by single-task BC. However, we note that LfGP is much more expert data efficient than single-task BC because it reuses auxiliary task data (see Table I). A more direct comparison is multitask BC, which performs much more poorly than LfGP across all tasks, including Bring. Intriguingly, DAC also performs very poorly on all tasks, a phenomenon that we further explore in Section VI.
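The evaluation protocol described at the start of this subsection can be summarized by the following sketch; the environment interface and the `mean_action` method are placeholders assumed for illustration. Each frozen policy is rolled out for 50 randomized episodes using its mean action, and the resulting success rates are then averaged, with means and standard deviations reported across the five seeds.

```python
def evaluate_policy(env, policy, num_episodes=50):
    """Roll out a frozen policy and return its success rate over randomized episodes."""
    successes = 0
    for _ in range(num_episodes):
        obs = env.reset()                      # randomized block/end-effector positions
        done, info = False, {}
        while not done:
            action = policy.mean_action(obs)   # mean output of the stochastic policy
            obs, reward, done, info = env.step(action)
        successes += int(info.get("success", False))
    return successes / num_episodes
```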
Fig. 6: Various dataset ablations for LfGP and all baselines, including dataset size, subsampling of the expert dataset, and replacement of extra (s_T, 0) pairs with an equivalent amount of regular trajectory (s, a) pairs (panels: Stack (no ablations), 0.5|B^E_orig|, 1.5|B^E_orig|, Subsampled B^E, No Extra Final Examples; axes: updates/steps (millions) vs. success rate). In all cases, LfGP still significantly outperforms all baselines.

Fig. 7: Left: Scheduler ablations for training LfGP, WRS is weighted random scheduler, HC is handcraft; Middle: Expert sampling ablations for training LfGP/DAC; Right: Baseline ablations for training BC/DAC (panels: LfGP Scheduler, Expert Sampling, BC/DAC Alternatives; axes: updates/steps (millions) vs. success rate).
C. Ablation Study

While the fundamental idea of LfGP is relatively straightforward, it is worth considering alternatives to some of the specific choices made for our experiments. In this section, we complete an ablation study where we vary (a) the expert dataset, including size, subsampling, and inclusion of extra (s_T, 0) pairs; (b) the type of scheduler used for LfGP (see Section IV-B2); (c) the sampling strategy used for expert data; and (d) the method for training our baselines. To reduce the computational load of completing these experiments, all of these variations were carried out exclusively for our Stack task. All ablation results are shown in Fig. 6 and Fig. 7.

1) Dataset Ablations: We tested the following dataset variations: (a) half and one and a half times the original expert dataset size; (b) subsampling B^E, taking only every 20th timestep, as is done in [6], [7]; and (c) replacing the 200 extra (s_T, 0) pairs in each buffer with 200 regular trajectory (s, a) pairs. Notably, even in the challenging regimes of halving and subsampling the dataset, LfGP still learns an expert-level policy (albeit more slowly).
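As a concrete reading of ablation (b) above, the following sketch keeps only every 20th (s, a) pair of each expert trajectory; the buffer layout is an assumption for illustration.

```python
def subsample_expert_buffer(trajectories, keep_every=20):
    """Subsample each expert trajectory, keeping every `keep_every`-th (s, a) pair.

    trajectories: iterable of (states, actions) arrays, one pair per episode
                  (an assumed layout for illustration).
    """
    return [(states[::keep_every], actions[::keep_every])
            for states, actions in trajectories]
```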
2) Scheduler Ablations: We tested the following scheduler variations: (a) the Weighted Random Scheduler (WRS) only, removing the Handcrafted (HC) addition; (b) a learned scheduler, as is used in [12]; and (c) no scheduler, in which only the main task is attempted, akin to the Intentional Unintentional Agent [12], [21]. Both WRS versions learn slightly faster than the learned scheduler, but all three methods outperform the No Scheduler ablation, replicating results from [12] and demonstrating the importance of actually exploring all auxiliary tasks. Perhaps surprisingly, the HC modification made little difference compared with WRS only, but it is possible that this could change for even more complex tasks.

3) Expert Sampling Ablations: For our main performance experiments, we modified standard AIL in two ways: (a) we added expert buffer sampling to the π and Q updates, in addition to the D updates, as is done in [16], [17]; and (b) we biased the sampling of B^E when training D so that 95% of samples are final (s_T, 0) pairs. We tested both LfGP and DAC without these additions. For LfGP, although these modifications improve learning speed, they are not required to generate an expert policy. For DAC, performance is quite poor regardless of these adjustments.

4) Baseline Ablations: To verify that we evaluated against fair baselines, we tested two alternatives to those used for our main performance experiments: (a) an early stopping variation of BC, in which each expert buffer is divided into a 70%/30% train/validation split, taking the policy after the validation error has not improved for 100 epochs; and (b) the on-policy variant of DAC, also known as Generative Adversarial Imitation Learning (GAIL) [6]. Notably, the early stopping variants of BC, commonly used as baselines in other AIL work [6], [7], [22], perform dramatically more poorly than those used in our experiments, verifying recent trends [18], [19].
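For concreteness, the following is a minimal sketch of a weighted random scheduler of the kind compared in the scheduler ablations above: at each scheduler period, the main task is chosen with some fixed probability and an auxiliary task is chosen uniformly otherwise. The `main_task_weight` value and the function interface are assumptions for illustration, not the exact scheme used in our experiments, and the handcrafted (HC) addition is omitted here.

```python
import random

def weighted_random_scheduler(main_task, aux_tasks, main_task_weight=0.5):
    """Pick the task to execute for the next scheduler period (WRS-style sketch)."""
    if random.random() < main_task_weight:
        return main_task                 # bias exploration toward the main task
    return random.choice(aux_tasks)      # otherwise explore a random auxiliary task
```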
VI. LEARNED MODEL ANALYSIS

In this section, we further examine the learned Stack models of LfGP and DAC. We take snapshots of the average-performing models from LfGP and DAC at four points during learning: 200k, 400k, 600k, and 800k model updates and environment steps. Although the initial gripper and block positions are randomized between episodes during learning, for each snapshot, we reset the stacking environment to a single set of representative initial conditions. We then run the snapshot policies for a single exploratory trajectory, using the stochastic outputs of each policy as well as, for LfGP, the WRS+HC scheduler. Trajectories from these runs are shown in Fig. 9.

Fig. 8: The policy outputs (arrows) and Q values (background) for each LfGP task and for DAC at 200k environment steps (panels: LfGP – Open-Gripper, Close-Gripper, Reach, Lift, Move-Object, Stack; DAC – Stack). The arrows show velocity direction/magnitude; blue → green indicates open-gripper → close-gripper. For Q values, red → yellow indicates low → high. The LfGP policies and Q functions are reasonable for all tasks, while DAC has only learned to reach toward and above the green block.
Fig. 9: LfGP and DAC trajectories of the gripper, blue block, and green block for four Stack episodes with consistent initial conditions throughout the learning process (snapshots at 200k, 400k, 600k, and 800k steps). The LfGP episodes, each including auxiliary task sub-trajectories, demonstrate significantly more variety than the DAC trajectories.

DAC is unable to learn to grasp or even reach the blue block and ultimately settles on a policy that learns to reach and hover near the green block. This is understandable: DAC learns a deceptive reward for hovering above the green block regardless of the position of the blue block, because it has not sufficiently explored the alternative of first grasping the blue block. Even if hovering above the green block does not fully match the expert data, the DAC policy receives some reward for doing so, as evidenced by the learned Q value on the right side of Fig. 8.

In comparison, even after only 200k environment steps, LfGP learns to reach and push the blue block, and by 600k steps, to grasp, move, and nearly stack it. By enforcing exploration of sub-tasks that are crucial to completing the main task, LfGP ensures that the distribution of expert stacking data is fully matched.
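To see how such a deceptive reward can arise, the sketch below shows one common way an AIL reward is derived from the discriminator output; this particular form is illustrative and is not necessarily the exact reward used by DAC in our experiments. Any state-action pair that the discriminator scores as even partially expert-like receives positive reward, so repeatedly hovering near the green block can be locally reinforced without ever matching the full expert distribution.

```python
import torch

def discriminator_reward(discriminator, state, action, eps=1e-8):
    """Illustrative AIL reward r(s, a) = log D(s, a) - log(1 - D(s, a)).

    `discriminator` is assumed to output raw logits; D(s, a) is its sigmoid.
    Pairs scored slightly above 0.5 (partially expert-like) still receive
    positive reward, which is how 'hovering' behaviour can be reinforced.
    """
    d = torch.sigmoid(discriminator(state, action))
    return torch.log(d + eps) - torch.log(1.0 - d + eps)
```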
VII. RELATED WORK

Imitation learning is often divided into two main categories: behavioural cloning (BC) [23], [24] and inverse reinforcement learning (IRL) [5], [25]. BC recovers the expert policy via supervised learning, but it suffers from compounding errors due to covariate shift [23], [26]. Alternatively, IRL partially alleviates the covariate shift problem by estimating the reward function and then applying RL using the learned reward. A popular approach to IRL is adversarial imitation learning (AIL) [6], [7], [27], in which the expert policy is recovered by matching the occupancy measure between the generated data and the demonstration data. Our proposed method enhances existing AIL algorithms by enabling exploration of key auxiliary tasks via the use of a scheduled multitask model, simultaneously resolving the susceptibility of AIL to deceptive rewards.

Agents learned via hierarchical reinforcement learning (HRL), which act over multiple levels of temporal abstraction in long-planning-horizon tasks, have been shown to provide more effective exploration than agents operating over only a single level of abstraction [12], [28], [29]. Our approach to learning agents most closely resembles hierarchical AIL methods that attempt to combine AIL with HRL [27], [30]–[32]. Existing work [30]–[32] often formulates the hierarchical agent using the Options framework [28] and learns the reward function with AIL [6]. Both [30] and [32] leverage task-specific expert demonstrations to learn options using mixture-of-experts and expectation-maximization strategies, respectively. In contrast, our work focuses on expert demonstrations that include multiple reusable auxiliary tasks, each of which has clear semantic meaning. In the multitask setting, [27] and [31] leverage unsegmented, multitask expert demonstrations to learn low-level policies via a latent variable model. Other work has used a large corpus of unsegmented but semantically meaningful "play" expert data to bootstrap policy learning [13], [14]. We define our expert dataset as being derived from guided play, in that the expert completes semantically meaningful auxiliary tasks with provided transitions, reducing the burden on the expert to generate these data arbitrarily and simultaneously providing auxiliary task labels. Compared with learning from unsegmented demonstrations, the use of segmented demonstrations, as in [33], ensures that we know which auxiliary tasks our model will be learning, and opens up the possibility of expert data reuse as well as transfer learning.
Finally, we deviate from the Options framework and build upon Scheduled Auxiliary Control (SAC-X) to train our hierarchical agent, since SAC-X has been shown to work well for challenging manipulation tasks [12].

VIII. LIMITATIONS

Our approach is not without limitations. While we were able to use LfGP in six- and seven-task settings, the number of tasks at which this method would become intractable is unclear. LfGP needs access to segmented expert data as well; in many cases, this is reasonable, and it is also required to be able to reuse auxiliary task data between main tasks, but it does necessitate extra care during expert data collection. Also, LfGP requires pre-defined auxiliary tasks: while this is a common approach to hierarchical RL (see [34], Section 3.1, for numerous examples), choosing these tasks may sometimes present a challenge. Finally, compared with methods that use offline data exclusively (e.g., BC), for our tasks, LfGP requires many online environment steps to learn a high-quality policy. This data gathering could be costly if human supervision were necessary. It is worth noting that, because LfGP is already a multitask method, this final point could be partially resolved through the use of multitask reset-free RL [35].
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' CONCLUSION We have shown how adversarial imitation learning can fail at challenging manipulation tasks because it learns deceptive rewards.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' We demonstrated that this can be resolved with Learning from Guided Play (LfGP), in which we introduce auxiliary tasks and the corresponding expert data, guiding the agent to playfully explore parts of the state and action space that would have been avoided otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' We demonstrated that our method dramatically outperforms both BC and AIL base- lines, particularly in the case of AIL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Furthermore, our method can leverage reusable expert data, making it significantly more expert sample efficient than the highest-performing baseline, and its learned auxiliary task models can be applied to transfer learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' In future work, we intend to investigate transfer learning to determine if overall policy learning time can be reduced.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' ACKNOWLEDGEMENTS We gratefully acknowledge the Digital Research Alliance of Canada and NVIDIA Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=', who provided the GPUs used in this work through their Resources for Research Groups Program and their Hardware Grant Program, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' REFERENCES [1] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Sutton and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Barto, Reinforcement Learning: An Introduction, 2nd ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' MIT press, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [2] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Bellemare, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Srinivasan, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Ostrovski, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Schaul, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Saxton, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Munos, “Unifying Count-Based Exploration and Intrinsic Motiva- tion,” in Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Neural Inf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' 29, Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [3] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Nair, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' McGrew, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Andrychowicz, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Zaremba, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Abbeel, “Overcoming Exploration in Reinforcement Learning with Demon- strations,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' 2018 IEEE Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Robotics and Automation (ICRA’18), Brisbane, Australia, May 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' 6292–6299.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [4] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Ng and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Jordan, “Shaping and policy search in reinforcement learning,” Ph.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' dissertation, University of California, Berkeley, 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [5] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Ng and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Russell, “Algorithms for inverse reinforcement learning,” in Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Machine Learning (ICML’00), July 2000, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' 663–670.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [6] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Ho and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Ermon, “Generative Adversarial Imitation Learning,” in Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Neural Inf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Processing Systems, Barcelona, Spain, Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' 5–11 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' 4565–4573.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [7] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Kostrikov, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Agrawal, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Dwibedi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Levine, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Tomp- son, “Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning,” in Proc.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Learning Representations (ICLR’19), New Orleans, USA, May 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [8] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Fu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Luo, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Levine, “Learning Robust Rewards with Ad- verserial inverse Reinforcement Learning,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Learning Representations (ICLR’18), Vancouver, Canada, Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' 30–May 3 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [9] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Orsini, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=', “What Matters for Adversarial Imitation Learning?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' in Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Neural Inf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Processing Systems, June 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' [10] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Ecoffet, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Huizinga, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Lehman, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfQvZV/content/2301.00051v1.pdf'} +page_content=' Stanley, and J.' 
[10] A. Ecoffet, J. Huizinga, J. Lehman, K. O. Stanley, and J. Clune, "First return, then explore," Nature, vol. 590, no. 7847, pp. 580–586, Feb. 2021.
[11] T. Ablett, B. Chan, and J. Kelly, "Learning from Guided Play: A Scheduled Hierarchical Approach for Improving Exploration in Adversarial Imitation Learning," in Proc. Neural Inf. Processing Systems (NeurIPS'21) Deep Reinforcement Learning Workshop, Dec. 2021.
[12] M. Riedmiller, et al., "Learning by Playing: Solving Sparse Reward Tasks from Scratch," in Proc. 35th Int. Conf. Machine Learning (ICML'18), Stockholm, Sweden, July 2018, pp. 4344–4353.
[13] C. Lynch, et al., "Learning Latent Plans from Play," in Conf. Robot Learning (CoRL'19), 2019.
[14] A. Gupta, V. Kumar, C. Lynch, S. Levine, and K. Hausman, "Relay Policy Learning: Solving Long Horizon Tasks Via Imitation and Reinforcement Learning," in Conf. Robot Learning (CoRL'19), 2019.
[15] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor," in Proc. 35th Int. Conf. Machine Learning (ICML'18), Stockholm, Sweden, July 2018, pp. 1861–1870.
[16] M. Vecerik, et al., "Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards," Oct. 2018.
[17] D. Kalashnikov, et al., "QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation," arXiv:1806.10293 [cs, stat], June 2018.
[18] A. Mandlekar, et al., "What Matters in Learning from Offline Human Demonstrations for Robot Manipulation," in Conf. Robot Learning, Nov. 2021.
[19] L. Hussenot, et al., "Hyperparameter Selection for Imitation Learning," in Proc. 38th Int. Conf. Machine Learning (ICML'21), July 2021, pp. 4511–4522.
[20] J. Fu, A. Singh, D. Ghosh, L. Yang, and S. Levine, "Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition," in Conf. Neural Inf. Processing Systems, Montreal, Canada, Dec. 2018.
[21] S. Cabi, et al., "The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously," in Conf. Robot Learning (CoRL'17), Mountain View, USA, Nov. 2017.
[22] K. Zolna, et al., "Task-Relevant Adversarial Imitation Learning," in Proc. 2020 Conf. Robot Learning, Oct. 2021, pp. 247–263.
[23] S. Ross, G. J. Gordon, and D. Bagnell, "A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning," in Proc. 14th Int. Conf. Artificial Intelligence and Statistics (AISTATS'11), Fort Lauderdale, USA, Apr. 2011, pp. 627–635.
[24] T. Ablett, Y. Zhai, and J. Kelly, "Seeing All the Angles: Learning Multiview Manipulation Policies for Contact-Rich Tasks from Demonstrations," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS'21), Prague, Czech Republic, Sep. 2021.
[25] P. Abbeel and A. Y. Ng, "Apprenticeship learning via inverse reinforcement learning," in Int. Conf. Machine Learning (ICML'04). Banff, Canada: ACM Press, 2004.
[26] T. Ablett, F. Marić, and J. Kelly, "Fighting Failures with FIRE: Failure Identification to Reduce Expert Burden in Intervention-Based Learning," arXiv:2007.00245 [cs], Aug. 2020.
[27] K. Hausman, Y. Chebotar, S. Schaal, G. Sukhatme, and J. Lim, "Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets," in Conf. Neural Inf. Processing Systems, May 2017.
[28] R. S. Sutton, D. Precup, and S. Singh, "Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning," Artificial Intelligence, vol. 112, no. 1-2, pp. 181–211, Aug. 1999.
[29] O. Nachum, H. Tang, X. Lu, S. Gu, H. Lee, and S. Levine, "Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?" in Proc. Neural Inf. Processing Systems (NeurIPS'19) Deep Reinforcement Learning Workshop, Sep. 2019.
[30] P. Henderson, W.-D. Chang, P.-L. Bacon, D. Meger, J. Pineau, and D. Precup, "OptionGAN: Learning Joint Reward-Policy Options Using Generative Adversarial Inverse Reinforcement Learning," in Proc. AAAI Conf. Artificial Intelligence (AAAI'18), no. 1, Apr. 2018.
[31] M. Sharma, A. Sharma, N. Rhinehart, and K. M. Kitani, "Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information," in Int. Conf. Learning Representations (ICLR'19), May 2019.
[32] M. Jing, et al., "Adversarial Option-Aware Hierarchical Imitation Learning," in Proc. 38th Int. Conf. Machine Learning (ICML'21), July 2021, pp. 5097–5106.
[33] F. Codevilla, M. Müller, A. López, V. Koltun, and A. Dosovitskiy, "End-to-End Driving Via Conditional Imitation Learning," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA'18), Brisbane, Australia, May 21–25 2018, pp. 4693–4700.
[34] S. Pateria, B. Subagdja, A.-h. Tan, and C. Quek, "Hierarchical Reinforcement Learning: A Comprehensive Survey," ACM Computing Surveys, vol. 54, no. 5, pp. 109:1–109:35, June 2021.
[35] A. Gupta, et al., "Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention," in Proc. 2021 IEEE Int. Conf. Robotics and Automation (ICRA'21), Apr. 2021.
APPENDIX A
LEARNING FROM GUIDED PLAY ALGORITHM

The complete pseudo-code is given in Algorithm 1. Our implementation builds on RL Sandbox [36], an open-source PyTorch [37] framework for RL algorithms. For learning the discriminators, we follow DAC and apply a gradient penalty for regularization [7], [38]. We optimize the intentions via the reparameterization trick [40]. As is commonly done in deep RL, we use the Clipped Double Q-Learning trick [41] to mitigate overestimation bias [42] and use a target network to mitigate learning instability [43] when training the policies and Q-functions. We also learn the temperature parameter α_T separately for each task T (see Section 5 of [44] for more details on learning α). For Generative Adversarial Imitation Learning (GAIL), we use a common open-source PyTorch implementation [45]. The hyperparameters chosen for all methods are provided in Section G. Please see videos at papers.starslab.ca/lfgp for examples of what LfGP looks like in practice.
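The gradient penalty applied to each discriminator follows the interpolated-sample formulation of [38], as in DAC [7]. The snippet below is only a minimal PyTorch sketch of such a regularizer, not our released implementation; the `discriminator` interface, the batch layout (concatenated state-action vectors), and the `coeff` value are illustrative assumptions.

```python
import torch

def gradient_penalty(discriminator, expert_sa, policy_sa, coeff=10.0):
    """Penalty on discriminator gradients at samples interpolated between
    expert and policy state-action batches of equal shape (batch, dim)."""
    # Per-sample interpolation coefficients in [0, 1).
    alpha = torch.rand(expert_sa.shape[0], 1, device=expert_sa.device)
    interp = alpha * expert_sa + (1.0 - alpha) * policy_sa
    interp.requires_grad_(True)

    logits = discriminator(interp)
    grads = torch.autograd.grad(
        outputs=logits.sum(), inputs=interp, create_graph=True
    )[0]

    # Penalize deviation of the per-sample gradient norm from 1.
    return coeff * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

In this sketch, the returned term would simply be added to the per-task GAN (binary cross-entropy) discriminator loss used in step 13 of Algorithm 1 below.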
Algorithm 1 Learning from Guided Play (LfGP)
Input: Expert replay buffers B^E_main, B^E_1, ..., B^E_K, scheduler period ξ, sample batch size N
Parameters: Intentions π_T with corresponding Q-functions Q_T and discriminators D_T, and scheduler π_S (e.g. with Q-table Q_S)
1:  Initialize replay buffer B
2:  for t = 1, ... do
3:    # Interact with environment
4:    Every ξ steps, select intention π_T using π_S
5:    Select action a_t using π_T
6:    Execute action a_t and observe next state s'_t
7:    Store transition ⟨s_t, a_t, s'_t⟩ in B
8:
9:    # Update discriminator D_T' for each task T'
10:   Sample {(s_i, a_i)}_{i=1}^{N} ∼ B
11:   for each task T' do
12:     Sample {(s'_i, a'_i)}_{i=1}^{N} ∼ B^E_T'
13:     Update D_T' following Eq. (1) using GAN + Gradient Penalty
14:   end for
15:
16:   # Update intentions π_T' and Q-functions Q_T' for each task T'
17:   Sample {(s_i, a_i)}_{i=1}^{N} ∼ B
18:   Compute reward D_T'(s_i, a_i) for each task T'
19:   Update π and Q following Eq. (4) and Eq. (5)
20:
21:   # Optional: update learned scheduler π_S
22:   if at the end of the effective horizon then
23:     Compute main task return G_Tmain using the reward estimate from D_main
24:     Update π_S (e.g. update Q-table Q_S following Eq. (A.3) and recompute the Boltzmann distribution)
25:   end if
26: end for
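The intention and Q-function updates in step 19 use the soft actor-critic losses with the clipped double-Q and target-network tricks noted above. Below is a minimal sketch of the resulting bootstrap target, with the discriminator output standing in for the reward; the `policy.sample` interface, tensor shapes, and argument names are illustrative assumptions rather than our actual code.

```python
import torch

def soft_q_target(reward, next_obs, done, policy, q1_target, q2_target,
                  alpha, gamma=0.99):
    """Clipped double-Q soft bootstrap target for one task, using the
    discriminator-based reward D_T'(s, a) for that task."""
    with torch.no_grad():
        next_action, next_log_prob = policy.sample(next_obs)
        # Clipped double Q-learning: take the minimum of the two target critics.
        q_next = torch.min(q1_target(next_obs, next_action),
                           q2_target(next_obs, next_action))
        # Soft value: subtract the (learned) temperature times the log-probability.
        return reward + gamma * (1.0 - done) * (q_next - alpha * next_log_prob)
```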
A. Scheduler Details

1) Learning the Scheduler: As stated in our paper, our main experiments used a simple weighted random scheduler with handcrafted trajectories. In this section, we provide the details of our learned scheduler. Following [12], let H be the total number of possible intention switches within an episode and let each chosen intention execute for ξ timesteps. The H intention choices made within the episode are defined as $T^{0:H-1} = \left( T^{(0)}, \ldots, T^{(H-1)} \right)$, where $T^{(h)} \in T_{all}$. The main task's return given the chosen intentions is then defined as

G_{T_{main}}(T^{0:H-1}) = \sum_{h=0}^{H-1} \sum_{t=h\xi}^{(h+1)\xi-1} \gamma^t R_{T_{main}}(s_t, a_t), \qquad (A.1)

where $a_t \sim \pi_{T^{(h)}}(\cdot \mid s_t)$ is the action taken at timestep t, sampled from the chosen intention $T^{(h)}$ in the h-th scheduler period. We further define the Q-function for the scheduler as

Q_S(T^{0:h-1}, T^{(h)}) = \mathbb{E}_{T^{h:H-1} \sim P_S^{h:H-1}}\left[ G_{T_{main}}(T^{h:H-1}) \mid T^{0:h-1} \right],

and represent the scheduler for the h-th period as a softmax distribution $P_S^h$ over $\{Q_S(T^{0:h-1}, T_{main}), Q_S(T^{0:h-1}, T_1), \ldots, Q_S(T^{0:h-1}, T_K)\}$. The scheduler maximizes the expected return of the main task following the scheduler:

\mathcal{L}(S) = \mathbb{E}_{T^{(0)} \sim P_S^{0}}\left[ Q_S(\emptyset, T^{(0)}) \right]. \qquad (A.2)

We use Monte Carlo returns to estimate $Q_S$, estimating the expected return with an exponential moving average:

Q_S(T^{0:h-1}, T^{(h)}) = (1 - \phi)\, Q_S(T^{0:h-1}, T^{(h)}) + \phi\, G_{T_{main}}(T^{h:H}), \qquad (A.3)

where $\phi \in [0, 1]$ represents the amount of discounting on older returns and $G_{T_{main}}(T^{h:H})$ is the cumulative discounted return of the trajectory starting at timestep hξ.
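To make Eqs. (A.2) and (A.3) concrete, the following is a minimal sketch of a tabular learned scheduler that keeps an exponential-moving-average Q-estimate per (intention history, candidate intention) pair and samples from a Boltzmann distribution over those estimates. The dictionary-based table, temperature parameter, and class interface are illustrative assumptions, not our released code.

```python
import numpy as np

class TabularScheduler:
    """Q-table over (intention history, candidate intention), updated per Eq. (A.3)."""

    def __init__(self, num_tasks, phi=0.1, temperature=1.0):
        self.q_table = {}               # (history tuple, task index) -> Q estimate
        self.num_tasks = num_tasks
        self.phi = phi                  # EMA weight on the newest Monte Carlo return
        self.temperature = temperature  # Boltzmann temperature

    def _q(self, history, task):
        return self.q_table.get((tuple(history), task), 0.0)

    def select(self, history):
        # Boltzmann (softmax) distribution over Q(history, .), cf. Eq. (A.2).
        q_vals = np.array([self._q(history, k) for k in range(self.num_tasks)])
        probs = np.exp((q_vals - q_vals.max()) / self.temperature)
        probs /= probs.sum()
        return int(np.random.choice(self.num_tasks, p=probs))

    def update(self, history, task, mc_return):
        # Exponential moving average of Monte Carlo returns, Eq. (A.3).
        old = self._q(history, task)
        self.q_table[(tuple(history), task)] = (1.0 - self.phi) * old + self.phi * mc_return
```

In this sketch, `mc_return` would be the discounted return computed from the main-task discriminator reward, as in step 23 of Algorithm 1.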
B. Weighted Random Scheduler Plus Handcrafted Trajectories

As stated in our paper, the main experiments were completed with the described weighted random scheduler (WRS) combined with some simple handcrafted trajectories (HC) that we expected to be beneficial for learning each of the main tasks. In this section, we provide further details of these handcrafted scheduler trajectories. Given a chosen proportion hyperparameter (0.5 in our experiments), we randomly sampled full trajectories from the lists below at the beginning of training episodes, and otherwise sampled from the regular WRS; a sampling sketch follows the lists. For all four tasks Main = {Stack, Unstack-Stack, Bring, Insert}, we provided the following set of trajectories:

1) Reach, Lift, Main, Open-Gripper, Reach, Lift, Main, Open-Gripper.
2) Reach, Lift, Move-Object, Main, Open-Gripper, Reach, Lift, Move-Object.
3) Lift, Main, Open-Gripper, Lift, Main, Open-Gripper, Lift, Main.
4) Main, Open-Gripper, Main, Open-Gripper, Main, Open-Gripper, Main, Open-Gripper.
5) Move-Object, Main, Open-Gripper, Move-Object, Main, Open-Gripper, Move-Object, Main.

For Insert, in addition to the trajectories listed above, we added two more trajectories to specifically accommodate Bring as an auxiliary task:

1) Bring, Insert, Open-Gripper, Bring, Insert, Open-Gripper, Bring, Insert.
2) Reach, Lift, Bring, Insert, Open-Gripper, Reach, Lift, Bring.
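The sketch below illustrates how an episode schedule could be drawn under this scheme: with probability equal to the proportion hyperparameter (0.5), a full handcrafted trajectory is used, and otherwise each scheduler period of the episode is drawn from the weighted random scheduler. The function signature, the weight vector, and the eight-period default (matching the eight intentions per listed trajectory) are illustrative assumptions.

```python
import numpy as np

def sample_episode_schedule(task_names, wrs_weights, handcrafted,
                            num_periods=8, hc_prob=0.5):
    """Choose the sequence of intentions to run for one training episode."""
    if np.random.rand() < hc_prob:
        # Use one full handcrafted trajectory for the whole episode.
        return list(handcrafted[np.random.randint(len(handcrafted))])
    probs = np.asarray(wrs_weights, dtype=float)
    probs /= probs.sum()
    # Otherwise, draw each scheduler period independently from the WRS.
    return list(np.random.choice(task_names, size=num_periods, p=probs))

# Example handcrafted list for Main = Stack (first trajectory from the list above).
stack_handcrafted = [
    ["Reach", "Lift", "Stack", "Open-Gripper", "Reach", "Lift", "Stack", "Open-Gripper"],
    # ... remaining handcrafted trajectories, with Main replaced by Stack
]
```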
APPENDIX B
ENVIRONMENT DETAILS

Fig. 10: An image of our multitask environment immediately after a reset has been carried out.

TABLE II: The components used in our environment observations, common to all tasks. Grip finger position is a continuous value from 0 (closed) to 1 (open).

Component            Dim  Unit    Privileged?  Extra info
EE pos.              3    m       No           rel. to base
EE velocity          3    m/s     No           rel. to base
Grip finger pos.     6    [0, 1]  No           current, last 2
Block pos.           6    m       Yes          both blocks
Block rot.           8    quat    Yes          both blocks
Block trans. vel.    6    m/s     Yes          rel. to base
Block rot. vel.      6    rad/s   Yes          rel. to base
Block rel. to EE     6    m       Yes          both blocks
Block rel. to block  3    m       Yes          in base frame
Block rel. to slot   6    m       Yes          both blocks
Force-torque         6    N, Nm   No           at wrist
Total                59

A screenshot of our environment, simulated in PyBullet [47], is shown in Fig. 10. We chose this environment because we desired tasks that a) have a large distribution of possible initial states, representative of manipulation in the real world, b) have a shared observation/action space with several other tasks, allowing the use of auxiliary tasks and transfer learning, and c) require a reasonably long horizon and significant use of contact to solve. The environment contains a tray with sloped edges (to keep the blocks within the reachable workspace of the end-effector), as well as a green and a blue block, each of which is 4 cm × 4 cm × 4 cm and has a mass of 100 g. The dimensions of the lower part of the tray, before reaching the sloped edges, are 30 cm × 30 cm.
The dimensions of the 'bring' boundaries (shaded blue and green regions) are 8 cm × 8 cm, while the dimensions of the insertion slots, which are directly in the center of each shaded region, are 4.1 cm × 4.1 cm × 1 cm. The boundaries for end-effector movement, relative to the tool center point that is directly between the gripper fingers, are a 30 cm × 30 cm × 14.5 cm box, where the bottom boundary is low enough to allow the gripper to interact with objects, but not to collide with the bottom of the tray. See Table II for a summary of our environment observations. In this work, we use privileged state information (e.g., block poses), but adapting our method to exclusively use image-based data is straightforward since we do not use hand-crafted reward functions as in [12].

The environment movement actions are 3-DOF translational position changes, where the position change is relative to the current end-effector position. We leverage PyBullet's built-in position-based inverse kinematics function to generate joint commands. Our actions also contain a fourth dimension that corresponds to actuating the gripper. To allow for the use of policy models with exclusively continuous outputs, this dimension accepts any real number, with any value greater than 0 commanding the gripper to open, and any number less than 0 commanding it to close. Actions are supplied at a rate of 20 Hz, and each training episode is limited to 18 seconds, corresponding to 360 time steps per episode.
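As a concrete illustration of this interface, the following sketch maps a 4-dimensional policy action to a relative position target and a binary gripper command. The function name, argument layout, and the explicit workspace clipping are assumptions made for illustration only.

```python
import numpy as np

def apply_policy_action(action, ee_pos, workspace_low, workspace_high):
    """Map a 4D action [dx, dy, dz, grip] to a position target and gripper command."""
    delta = np.asarray(action[:3])
    # Translational position changes are relative to the current end-effector
    # (tool center point) position; clipping to the workspace box is an assumed
    # detail of this sketch.
    target_pos = np.clip(np.asarray(ee_pos) + delta, workspace_low, workspace_high)
    # The fourth dimension is thresholded at zero: > 0 opens the gripper, < 0 closes it.
    gripper_open = float(action[3]) > 0.0
    return target_pos, gripper_open
```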
For play-based expert data collection, we also reset the environment manually every 360 time steps. Between episodes, block positions are randomized to any pose within the tray, and the end-effector is randomized to any position between 5 and 14.5 cm above the tray, within the earlier stated end-effector bounds, with the gripper fully opened. The only exception to these initial conditions is during expert data collection and agent training of the Unstack-Stack task: in this case, the green block is manually set to be on top of the blue block at the start of the episode.

APPENDIX C
PERFORMANCE RESULTS FOR AUXILIARY TASKS

The performance results for all multitask methods and all auxiliary tasks are shown in Fig. 11. Multitask BC has gradually decreasing performance on many of the auxiliary tasks as the number of updates increases, which is consistent with mild overfitting. Intriguingly, however, multitask BC does achieve quite reasonable performance on many of the auxiliary tasks (such as Lift) without needing any of the extra environment interactions required by an online method such as LfGP or DAC. An interesting direction for future work is to determine whether pretraining via multitask BC could provide
[Fig. 11: Performance for LfGP and the multitask baselines (multitask BC, single-task DAC, and single-task BC) across all main and auxiliary tasks (Stack, Unstack-Stack, Bring, Insert, Open, Close, Lift, Reach, Move); the shaded area corresponds to standard deviation.]

APPENDIX D
PROCEDURE FOR OBTAINING EXPERTS

As stated, we used SAC-X [12] to train the models that we used for generating expert data. We used the same hyperparameters that we used for LfGP (see Table III), apart from the discriminator, which, of course, does not exist in SAC-X. See Appendix E for details on the hand-crafted rewards that we used for training these models. For an example of gathering play-based expert data, please see our attached video.

We made two modifications to regular SAC-X to speed up learning. First, we pre-trained a Move-Object model before transferring this model to each of our main tasks, as we did in Section 5.3 of our main paper, since we found that SAC-X would plateau when we tried to learn the more challenging tasks from scratch. The need for this modification demonstrates another noteworthy benefit of LfGP: when training LfGP, main tasks could be learned from scratch, and generally in fewer time steps than it took to train our experts.
Second, during transfer to the main tasks, we used what we called a conditional weighted scheduler instead of a Q-Table: we defined weights for every combination of tasks, so that the scheduler would pick each task with probability $P(\mathcal{T}^{(h)} \mid \mathcal{T}^{(h-1)})$, ensuring that $\forall \mathcal{T}' \in \mathcal{T}_{\mathrm{all}},\ \sum_{\mathcal{T} \in \mathcal{T}_{\mathrm{all}}} P(\mathcal{T} \mid \mathcal{T}') = 1$. The weights that we used were fairly consistent between main tasks, and can be found in our packaged code. The conditional weighted scheduler ensured that every task was still explored throughout the learning process, so that we would have high-quality experts for every auxiliary task in addition to the main task. This scheduler can be considered a more complex alternative to the weighted random scheduler (WRS), or to the WRS augmented with handcrafted trajectories, from our main paper, and it again shows the flexibility of using a semantically meaningful multitask policy with a common observation and action space.
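To make the sampling mechanism concrete, the following is a minimal sketch of such a conditional weighted scheduler; the task set and weight values below are illustrative placeholders, not the weights from our packaged code.

```python
import numpy as np

# Illustrative task set and conditional weights P(next task | previous task).
# Each row corresponds to a previous task and must sum to 1 over next tasks.
TASKS = ["reach", "lift", "stack"]
WEIGHTS = np.array([
    [0.2, 0.5, 0.3],   # after Reach
    [0.2, 0.2, 0.6],   # after Lift
    [0.4, 0.4, 0.2],   # after Stack
])

def sample_next_task(prev_idx: int, rng: np.random.Generator) -> int:
    """Sample T^(h) with probability P(T^(h) | T^(h-1))."""
    row = WEIGHTS[prev_idx]
    assert np.isclose(row.sum(), 1.0), "each conditional row must sum to 1"
    return int(rng.choice(len(TASKS), p=row))

# Usage: resample the active intention at every scheduler period within an episode.
rng = np.random.default_rng(0)
task = TASKS.index("stack")
schedule = []
for _ in range(4):
    task = sample_next_task(task, rng)
    schedule.append(TASKS[task])
print(schedule)
```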
APPENDIX E
EVALUATION

As stated in our paper, we evaluated all algorithms by testing the mean output of the main-task policy head in our environment and determining a success rate based on 50 randomly selected resets. These evaluation episodes were run for 360 time steps to match our training process, and if a condition for success was met within that time, the episode was recorded as a success. The rest of this section describes in detail how we evaluated "success" for each of our main and auxiliary tasks.

As previously stated, we trained experts using a modified SAC-X [12] that required us to define a set of reward functions for each task, which we include in this section. The authors of [12] focused on sparse rewards but also showed a few experiments in which dense rewards reduced the time to learn adequate policies, so we chose to use dense rewards. We note that many of these reward functions are particularly complex and required significant manual shaping effort, further motivating the use of an imitation learning scheme like the one presented in our paper. It is possible that we could have made do with sparse rewards, such as those used in [12], but our compute resources made this impractical: for example, in [12], their agent took 5000 episodes × 36 actors × 360 time steps = 64.8 M time steps to learn their stacking task, which would have taken over a month of wall-clock time on our fastest machine. To see the specific values used for the rewards and success conditions described in these sections, please review our code.

Unless otherwise stated, each of the success conditions in this section had to hold for 10 time steps, or 0.5 seconds, before being registered as a success. This choice was made to prevent registering a success when, for example, the blue block slipped off the green block during the Stack task.
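As a concrete illustration of this protocol, the sketch below computes a success rate over a set of evaluation episodes, registering a success only once the task's success condition has held for the required number of consecutive steps. The environment, policy, and success-condition interfaces are assumed for illustration and do not correspond to our released code.

```python
def evaluate_success_rate(env, policy, success_condition,
                          n_episodes=50, episode_len=360, hold_steps=10):
    """Fraction of episodes in which success_condition(info) holds for
    `hold_steps` consecutive time steps within `episode_len` steps."""
    successes = 0
    for _ in range(n_episodes):
        obs = env.reset()            # randomized initial conditions
        held = 0
        for _ in range(episode_len):
            action = policy(obs)     # mean (deterministic) output of the main-task head
            obs, _, done, info = env.step(action)
            held = held + 1 if success_condition(info) else 0
            if held >= hold_steps:   # condition must hold for 10 steps (0.5 s)
                successes += 1
                break
            if done:
                break
    return successes / n_episodes
```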
A. Common

For each of these functions, we use the following common labels:
- pb: blue block position,
- vb: blue block velocity,
- ab: blue block acceleration,
- pg: green block position,
- pe: end-effector tool center point (TCP) position,
- ps: center of a block pushed into one of the slots,
- g1: (scalar) gripper finger 1 position,
- g2: (scalar) gripper finger 2 position, and
- ag: (scalar) gripper open/close action.

A block is flat on the tray when pb,z = 0 or pg,z = 0. To further reduce training time for the SAC-X experts, all rewards were set to 0 if ∥pb − pe∥ > 0.1 and ∥pg − pe∥ > 0.1 (i.e., the TCP must be within 10 cm of either block). During training with the Unstack-Stack variation of our environment, a penalty of −0.1 was added to each reward if ∥pg,z∥ > 0.001 (i.e., all rewards were penalized if the green block was not flat on the tray).

B. Stack/Unstack-Stack

The evaluation conditions for Stack and Unstack-Stack are identical, but in our Unstack-Stack experiments, the environment is manually set to have the green block start on top of the blue block.

1) Success: Using internal PyBullet commands, we check whether the blue block is in contact with the green block and is not in contact with either the tray or the gripper.

2) Reward: We include a term for checking the distance between the blue block and the spot above the green block, a term for rewarding increasing distance between the block and the TCP once the block is stacked, a term for shaping lifting behaviour, a term to reward closing the gripper when the block is within a tight reaching tolerance, and a term for rewarding opening the gripper once the block is stacked.

C. Bring/Insert

We use the same success and reward calculations for Bring and Insert, but for Bring the threshold for success is 3 cm, and for Insert, it is 2.5 mm.

1) Success: We check that the distance between pb and ps is less than the defined threshold, that the blue block is touching the tray, and that the end-effector is not touching the block. For Insert, the block can only be within 2.5 mm of the insertion target if it is correctly inserted.
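A minimal sketch of the Bring/Insert success check described above follows; the contact flags are assumed to be provided by the simulator, and the function and argument names are ours.

```python
import numpy as np

def bring_insert_success(p_b, p_s, block_on_tray, ee_touching_block, task="bring"):
    """Distance-threshold success check for Bring/Insert.

    p_b, p_s: blue block position and target slot center (3-vectors).
    block_on_tray, ee_touching_block: contact flags assumed to come from
    the simulator's contact queries.
    """
    threshold = 0.03 if task == "bring" else 0.0025   # 3 cm vs. 2.5 mm
    close_enough = np.linalg.norm(np.asarray(p_b) - np.asarray(p_s)) < threshold
    return bool(close_enough and block_on_tray and not ee_touching_block)
```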
2) Reward: We include a term for checking the distance between pb and ps and a term for rewarding increasing distance between pb and pe once the blue block is brought/inserted.

D. Open-Gripper/Close-Gripper

We use the same success and reward calculations for Open-Gripper and Close-Gripper, apart from inverting the condition.

1) Success: For Open-Gripper and Close-Gripper, we check whether ag < 0 or ag > 0, respectively.

2) Reward: We include a term for checking the action, as in the success condition, and also include a shaping term that discourages high magnitudes of the movement action.

E. Lift

1) Success: We check whether pb,z > 0.06.

2) Reward: We add a dense reward for checking the height of the block, but specifically also check that the gripper positions correspond to being closed around the block, so that the block does not simply get pushed up the edges of the tray. We also include a shaping term that encourages the gripper to close when the block is reached.

F. Reach

1) Success: We check whether ∥pe − pb∥ < 0.015.

2) Reward: We have a single dense term for the distance between pe and pb.
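The simpler success conditions above reduce to threshold checks; a sketch follows, with function names of our choosing.

```python
import numpy as np

def open_gripper_success(a_g):      # gripper action is negative when opening
    return a_g < 0.0

def close_gripper_success(a_g):     # and positive when closing
    return a_g > 0.0

def lift_success(p_b):              # blue block higher than 6 cm
    return p_b[2] > 0.06

def reach_success(p_e, p_b):        # TCP within 1.5 cm of the blue block
    return np.linalg.norm(np.asarray(p_e) - np.asarray(p_b)) < 0.015
```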
G. Move-Object

For Move-Object, we changed the required holding time for success to 1 second, or 20 time steps.

1) Success: We check whether vb > 0.05 and ab < 5. The acceleration condition ensures that the arm has learned to move the block by following a smooth trajectory, rather than vigorously shaking it or continuously picking it up and dropping it.

2) Reward: We include a velocity term and an acceleration penalty, as in the success condition, but also include a dense bonus for lifting the block.
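A sketch of the Move-Object success check, including the 20-step hold, is shown below; applying the thresholds to vector norms is an assumption on our part, as the text states the thresholds only as vb > 0.05 and ab < 5.

```python
import numpy as np

class MoveObjectSuccess:
    """Success once the block moves smoothly (velocity above, acceleration below
    threshold) for 20 consecutive steps (1 s at 20 Hz)."""

    def __init__(self, hold_steps=20, v_min=0.05, a_max=5.0):
        self.hold_steps, self.v_min, self.a_max = hold_steps, v_min, a_max
        self.held = 0

    def step(self, v_b, a_b):
        moving_smoothly = (np.linalg.norm(v_b) > self.v_min
                           and np.linalg.norm(a_b) < self.a_max)
        self.held = self.held + 1 if moving_smoothly else 0
        return self.held >= self.hold_steps
```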
APPENDIX F
RETURN PLOTS

As previously stated, we generated hand-crafted reward functions for each of our tasks for the purpose of training our SAC-X experts. Given that we have these rewards, we can also generate return plots corresponding to our results to add extra insight (see Fig. 12 and Fig. 13). The patterns displayed in these plots are, for the most part, quite similar to the success rate plots. One notable exception is that there is an eventual increase in performance when training DAC on Insert, indicating that, perhaps for certain tasks, DAC alone can eventually make progress. Nevertheless, it is clear that LfGP improves learning efficiency, and it is unclear whether DAC would plateau even if it were trained for a longer period.

[Fig. 12: Episode return for LfGP compared with all baselines (multitask BC, single-task DAC, single-task BC, and the expert) on the main tasks; the shaded area corresponds to standard deviation.]

[Fig. 13: Episode return for LfGP compared with the multitask baselines on all main and auxiliary tasks; the shaded area corresponds to standard deviation.]

APPENDIX G
MODEL ARCHITECTURES AND HYPERPARAMETERS

All the single-task models share the same network architectures, and all the multitask models share the same network architectures. All layers are initialized using the PyTorch default methods [37].

For the single-task variant, the policy is a fully-connected network with two hidden layers, each followed by a ReLU activation. Each hidden layer consists of 256 hidden units. The output of the policy for LfGP and DAC is split into two vectors, mean µ̂ and variance σ̂²; for both variants of BC, only the mean µ̂ output is used.
The two vectors define a Gaussian distribution (i.e., N(µ̂, σ̂²I), where I is the identity matrix). When computing actions, we squash the samples using the tanh function and bound the actions to the range [−1, 1], as done in SAC [44]. The variance σ̂² is computed by applying a softplus function followed by the addition of a small constant ϵ = 1e-7 to prevent underflow: σ̂ᵢ = softplus(x̂ᵢ) + ϵ. The Q-functions are fully-connected networks with two hidden layers, each followed by a ReLU activation. Each hidden layer consists of 256 units. The output of a Q-function is a scalar corresponding to the value estimate for the current state-action pair. Finally, the discriminator is a fully-connected network with two hidden layers, each followed by a tanh activation. Each hidden layer consists of 256 units. The output of the discriminator is a scalar logit used as input to the sigmoid function; the sigmoid output can be viewed as the probability of the current state-action pair coming from the expert distribution.
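A minimal PyTorch sketch of the single-task policy described above (two 256-unit ReLU hidden layers, mean and softplus-plus-epsilon variance outputs, and tanh-squashed sampling) is given below; class and attribute names are illustrative and do not come from our code release.

```python
import torch
import torch.nn as nn

class SingleTaskPolicy(nn.Module):
    """Two 256-unit ReLU hidden layers; outputs mean and variance of a Gaussian,
    with sampled actions squashed by tanh into [-1, 1]."""

    def __init__(self, obs_dim, act_dim, hidden=256, eps=1e-7):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, act_dim)
        self.var_head = nn.Linear(hidden, act_dim)
        self.eps = eps

    def forward(self, obs):
        h = self.trunk(obs)
        mean = self.mean_head(h)
        var = nn.functional.softplus(self.var_head(h)) + self.eps  # sigma^2 = softplus(x) + eps
        return mean, var

    def sample_action(self, obs):
        mean, var = self(obs)
        dist = torch.distributions.Normal(mean, var.sqrt())
        return torch.tanh(dist.rsample())  # squash to [-1, 1]
```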
For the multitask variant, the policies and the Q-functions share their initial layers. There are two shared fully-connected layers, each with 256 units and followed by a ReLU activation. The output of the last shared layer is then fed into the policy and Q-function heads. Each policy head and Q-function head corresponds to one task and has the same architecture: a two-layer fully-connected network with ReLU activations. The output of each policy head corresponds to the parameters of a Gaussian distribution, as described previously, and the output of each Q-function head corresponds to the value estimate. Finally, the multitask discriminator is a fully-connected network with two hidden layers of 256 units each, followed by tanh activations. Its output is a vector, where the ith entry is the logit passed to the sigmoid function for task T_i; the ith sigmoid output corresponds to the probability that the current state-action pair comes from the expert distribution of task T_i.
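A compact sketch of this shared-trunk, multi-head structure is given below (again with hypothetical names of our own; in particular, concatenating the action at the Q heads rather than at the trunk is an assumption made here for illustration):

import torch.nn as nn

class MultitaskActorCritic(nn.Module):
    """Shared trunk with per-task policy and Q heads (a sketch, not the released code)."""
    def __init__(self, obs_dim, act_dim, num_tasks, hidden=256):
        super().__init__()
        # Two shared fully-connected layers with ReLU activations.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One policy head per task: outputs the Gaussian parameters (mean, pre-softplus std).
        self.policy_heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * act_dim))
            for _ in range(num_tasks)
        ])
        # One Q head per task: here the action is concatenated with the trunk features (assumption).
        self.q_heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden + act_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(num_tasks)
        ])

class MultitaskDiscriminator(nn.Module):
    """Discriminator with one logit per task; sigmoid(logit_i) estimates P(expert | s, a, T_i)."""
    def __init__(self, obs_dim, act_dim, num_tasks, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, num_tasks),  # vector of logits, one per task
        )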
The hyperparameters for our experiments are listed in Table III and Table V. In the early-stopping variant of BC, the overfit tolerance refers to the number of full-dataset training epochs without an improvement in validation error before we stop training. All models are optimized using the Adam optimizer [48] with PyTorch default values, unless specified otherwise.

TABLE III: Hyperparameters for AIL algorithms across all tasks. Parameters that do not appear in the original version of DAC are shown in blue.

  Algorithm: LfGP, DAC (listed values are shared by both)
  Total Interactions: 2M (4M for Insert)
  Buffer Size: 2M (4M for Insert)
  Buffer Warmup: 25k
  Initial Exploration: 50k
  Evaluations per task: 50
  Evaluation frequency: every 100k interactions

  Intention:
    γ: 0.99
    Batch Size: 256
    Q Update Freq.: 1
    Target Q Update Freq.: 1
    π Update Freq.: 1
    Polyak Averaging: 1e-4
    Q Learning Rate: 3e-4
    π Learning Rate: 1e-5
    α Learning Rate: 3e-4
    Initial α: 1e-2
    Target Entropy: −dim(a) = −4
    Max. Gradient Norm: 10
    π Weight Decay: 1e-2
    Q Weight Decay: 1e-2
    BE sampling proportion: 0.1
    BE sampling decay: 0.99999

  Discriminator:
    Learning Rate: 3e-4
    Batch Size: 256
    Gradient Penalty λ: 10
    Weight Decay: 1e-2
    (s_T, 0) sampling bias: 0.95

TABLE IV: Hyperparameters for LfGP schedulers.

  Scheduler:       Learned | WRS | WRS + HC
  ξ:               45 | N/A | N/A
  φ:               0.6 | N/A | N/A
  Initial Temp.:   360 | N/A | N/A
  Temp. Decay:     0.9995 | N/A | N/A
  Min. Temp.:      0.1 | N/A | N/A
  Main Task Rate:  N/A | 0.5 | 0.5
  Handcraft Rate:  N/A | N/A | 0.5
TABLE V: Hyperparameters for BC algorithms (both single-task and multitask) across all tasks.

  Version:            Main Results | Early Stopping
  Batch Size:         256 (both)
  Learning Rate:      1e-5 (both)
  Weight Decay:       1e-2 (both)
  Total Updates:      2M (4M for Insert) | N/A
  Overfit Tolerance:  N/A | 100

APPENDIX H
OPEN-ACTION AND CLOSE-ACTION DISTRIBUTION MATCHING

There was one exception to the method we used for collecting our expert data: our Open-Gripper and Close-Gripper tasks required additional considerations. It is worth reminding the reader that the Open-Gripper and Close-Gripper tasks were meant to simply open or close the gripper, respectively, while remaining reasonably close to either block. If we were to use the approach described above verbatim, the Open-Gripper and Close-Gripper data would contain no (s, a) pairs where the gripper actually released or grasped a block; instead, the expert would immediately open or close the gripper while simply hovering near the blocks. Perhaps unsurprisingly, this was detrimental to our algorithm's performance. As one example, an agent attempting to learn Stack would, if Open-Gripper was selected while the blue block was held above the green block, move the grasped blue block away from the green block before dropping it on the tray. This behaviour is, of course, not what we want, but it better matches the expert distribution when the environment is reset between each task execution. To mitigate this, our Open-Gripper data actually contain a mix of each of the other sub-tasks executed for the first 45 time steps, followed by a switch to Open-Gripper, ensuring that the expert dataset contains some degree of block releasing; the trade-off is that 50% of the Open-Gripper expert data is specific to whatever the main task happens to be. We left this additional detail out of the main paper for clarity, since it corresponds to only a small portion of the expert data (every other auxiliary task was fully reused).
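A rough sketch of this collection scheme is below (the function and policy interfaces, episode horizon, and environment API are illustrative assumptions, not our exact tooling):

import random

def collect_open_gripper_episode(env, expert_policies, switch_step=45, horizon=360):
    """Run another sub-task expert for the first `switch_step` steps, then switch
    to the Open-Gripper expert so the dataset contains genuine block releases."""
    obs = env.reset()
    warmup_task = random.choice([t for t in expert_policies if t != "open_gripper"])
    trajectory = []
    for t in range(horizon):
        task = warmup_task if t < switch_step else "open_gripper"
        action = expert_policies[task](obs)
        next_obs, reward, done, info = env.step(action)  # gym-style step (assumed)
        trajectory.append((obs, action))
        obs = next_obs
        if done:
            break
    return trajectory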
Similarly, the Close-Gripper data are collected by calling the Lift expert for the first 15 time steps before switching to Close-Gripper, ensuring that the Close-Gripper dataset contains a large proportion of data where the block is actually grasped. Unlike the Open-Gripper data, this modification still allowed the Close-Gripper data to be fully reused between main tasks.

APPENDIX I
ATTEMPTED AND FAILED EXPERIMENTS

In this section, we provide a list of experiments and modifications that did not improve performance, along with the alternatives that did.

1) Pretraining with BC: We attempted to pretrain LfGP using multitask BC and then transition to online learning with LfGP, but we found that this tended to produce significantly poorer final performance. Some existing work [49], [50] has investigated transitioning from BC to online RL, but achieving this consistently, especially with off-policy RL, remains an open research problem.

2) Handcrafted Open-Gripper/Close-Gripper policies: Given the simplicity of designing a reward function in these two cases, a natural question is whether Open-Gripper and Close-Gripper could use hand-crafted reward functions, or even hand-crafted policies, instead of the specialized datasets described above. In our experiments, both of these alternatives proved to be quite detrimental to our algorithm.

3) Penalizing Q values: In our early experiments, we found that LfGP training progress was harmed by exploding Q values. This problem was particularly exacerbated when we added BE sampling to our Q and π updates. It appears that this occurs because, at the beginning of training, the differences between the discriminator outputs for expert and non-expert data are so large that the bootstrapped Q updates quickly jump to unrealistic values.
We attempted to use various forms of Q penalties to resolve this, akin to Conservative Q-Learning (CQL) [51], but found that all of our modifications ultimately harmed final performance. In addition to the CQL loss, we tried reducing γ (0.95, 0.9), clipping Q losses to [−5, +5], a smooth L1 loss, a Huber loss, an increased gradient penalty λ for D (50, 100), decreased reward scaling (0.1), more discriminator updates per π/Q update (10), and weight decay on D only (as is done in [9]). We ultimately resolved the exploding Q values by i) decreasing the Polyak averaging coefficient to a value significantly lower than is used in much other work (1e-4, as opposed to the SAC default of 5e-3), and ii) adding weight decay (with a significantly higher value than is used in other work) to the π, Q, and D training, which was required to avoid overfitting with the reduced Polyak averaging value. Without the added weight decay, performance started to plateau and eventually decreased. A minimal sketch of these two changes, together with the update-to-data ratio discussed in the next item, appears after this list.

4) Higher Update-to-Data (UTD) Ratio: Recent work in RL has started increasing the UTD ratio (i.e., the number of policy/Q updates per environment interaction) with the goal of improving environment sample efficiency [53]. We were able to increase this ratio from 1 to 2 and achieve a marginal improvement in environment sample efficiency, but this also nearly doubled the running time of our experiments, so we opted not to include this modification in our final results. Higher values of the UTD ratio also caused our Q values to explode.
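The following sketch illustrates items 3 and 4 above, assuming a generic SAC-style PyTorch training loop (the agent interface, module names, and loop structure are hypothetical):

import torch

def soft_update(target_net, online_net, tau=1e-4):
    """Polyak averaging; tau = 1e-4 is much smaller than the SAC default of 5e-3."""
    with torch.no_grad():
        for p_targ, p in zip(target_net.parameters(), online_net.parameters()):
            p_targ.mul_(1.0 - tau).add_(tau * p)

def make_optimizers(pi, q, disc):
    """Weight decay of 1e-2 on the policy, Q-functions, and discriminator, needed
    to avoid overfitting once the Polyak coefficient is reduced."""
    pi_opt = torch.optim.Adam(pi.parameters(), lr=1e-5, weight_decay=1e-2)
    q_opt = torch.optim.Adam(q.parameters(), lr=3e-4, weight_decay=1e-2)
    disc_opt = torch.optim.Adam(disc.parameters(), lr=3e-4, weight_decay=1e-2)
    return pi_opt, q_opt, disc_opt

def training_loop(agent, env, total_interactions, utd_ratio=1):
    """utd_ratio = 2 gave a marginal sample-efficiency gain but roughly doubled
    wall-clock time, so the reported results use utd_ratio = 1."""
    obs = env.reset()
    for _ in range(total_interactions):
        obs = agent.step_environment(env, obs)   # interact once, store the transition
        for _ in range(utd_ratio):
            agent.update_discriminator()          # one D update per gradient step
            agent.update_actor_critic()           # one pi/Q update per gradient step
            soft_update(agent.q_target, agent.q, tau=1e-4)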
APPENDIX J
EXPERIMENTAL HARDWARE

For a list of the software we used in this work, see our code and instructions. We used a number of different computers and GPUs when completing our experiments:

1) GPU: NVIDIA Quadro RTX 8000; CPU: AMD Ryzen 5950X, 3.4 GHz, 16-core/32-thread; RAM: 64 GB; OS: Ubuntu 20.04.
2) GPU: NVIDIA V100 SXM2; CPU: Intel Gold 6148 (Skylake) @ 2.4 GHz (only 4 threads used); RAM: 32 GB; OS: CentOS 7.
3) GPU: NVIDIA GeForce RTX 2070; CPU: AMD Ryzen Threadripper 2990WX; RAM: 32 GB; OS: Ubuntu 20.04.

REFERENCES

[36] B. Chan, "RL sandbox," https://github.com/chanb/rl_sandbox_public, 2020.
[37] A. Paszke et al., "PyTorch: An imperative style, high-performance deep learning library," in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds. Curran Associates, Inc., 2019, pp. 8024–8035.
[38] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, "Improved Training of Wasserstein GANs," in Conf. Neural Inf. Processing Systems, I. Guyon et al., Eds. Long Beach, USA: Curran Associates, Inc., Dec. 2017, pp. 5767–5777.
[39] I. Kostrikov, K. K. Agrawal, D. Dwibedi, S. Levine, and J. Tompson, "Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning," in Proc. Int. Conf. Learning Representations (ICLR'19), New Orleans, USA, May 2019.
[40] D. P. Kingma and M. Welling, "Auto-Encoding Variational Bayes," arXiv:1312.6114 [cs, stat], Dec. 2013.
[41] S. Fujimoto, H. van Hoof, and D. Meger, "Addressing Function Approximation Error in Actor-Critic Methods," in Proc. 35th Int. Conf. Machine Learning (ICML'18), Stockholm, Sweden, Jul. 10–15, 2018, pp. 1582–1591.
[42] H. van Hasselt, A. Guez, and D. Silver, "Deep Reinforcement Learning with Double Q-learning," in AAAI Conf. Artificial Intelligence, Phoenix, USA, Feb. 2016.
[43] V. Mnih et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015.
[44] T. Haarnoja et al., "Soft Actor-Critic Algorithms and Applications," arXiv:1812.05905 [cs, stat], Jan. 2019.
[45] I. Kostrikov, "PyTorch Implementations of Reinforcement Learning Algorithms," https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail, 2018.
[46] M. Riedmiller et al., "Learning by Playing - Solving Sparse Reward Tasks from Scratch," in Proc. 35th Int. Conf. Machine Learning (ICML'18), Stockholm, Sweden, Jul. 2018, pp. 4344–4353.
[47] E. Coumans and Y. Bai, "PyBullet, a Python module for physics simulation for games, robotics and machine learning," http://pybullet.org, 2016.
[48] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," in Proc. Int. Conf. Learning Representations (ICLR'15), San Diego, USA, May 7–9, 2015.
[49] A. Rajeswaran et al., "Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations," in Proc. Robotics: Science and Systems (RSS'18), Pittsburgh, USA, Jun. 26–30, 2018.
[50] Y. Wu, M. Mozifian, and F. Shkurti, "Shaping Rewards for Reinforcement Learning with Imperfect Demonstrations using Generative Models," arXiv:2011.01298 [cs], Nov. 2020.
[51] A. Kumar, A. Zhou, G. Tucker, and S. Levine, "Conservative Q-Learning for Offline Reinforcement Learning," arXiv:2006.04779 [cs, stat], Aug. 2020.
[52] M. Orsini et al., "What Matters for Adversarial Imitation Learning?" in Conf. Neural Inf. Processing Systems, Jun. 2021.
[53] X. Chen, C. Wang, Z. Zhou, and K. Ross, "Randomized Ensembled Double Q-Learning: Learning Fast Without a Model," arXiv:2101.05982 [cs], Mar. 2021.