ReVoLT: Relational Reasoning and Voronoi Local Graph Planning for Target-driven Navigation

Junjia Liu(1,3), Jianfei Guo(2,3), Zehui Meng(3), Jingtao Xue(3)
1 Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong
2 School of Automation Science and Engineering, Xi'an Jiaotong University
3 Application Innovate Laboratory (2012 Laboratories), Huawei Technologies Co., Ltd., Beijing, 100038, China
jjliu@mae.cuhk.edu.hk, ventus@stu.xjtu.edu.cn, {mengzehui, xuejingtao}@huawei.com

Abstract—Embodied AI is an inevitable trend that emphasizes the interaction between intelligent entities and the real world, with broad applications in Robotics, especially target-driven navigation. This task requires the robot to find an object of a certain category efficiently in an unknown domestic environment. Recent works focus on exploiting layout relationships by graph neural networks (GNNs). However, most of them obtain robot actions directly from observations in an end-to-end manner via an incomplete relation graph, which is neither interpretable nor reliable. We decouple this task and propose ReVoLT, a hierarchical framework: (a) an object detection visual front-end, (b) a high-level reasoner (infers semantic sub-goals), (c) an intermediate-level planner (computes geometrical positions), and (d) a low-level controller (executes actions). ReVoLT operates with a multi-layer semantic-spatial topological graph.
The reasoner uses multiform structured relations as priors, which are obtained from combinatorial relation extraction networks composed of unsupervised GraphSAGE, GCN, and GraphRNN-based Region Rollout. The reasoner performs with Upper Confidence Bound for Tree (UCT) to infer semantic sub-goals, accounting for trade-offs between exploitation (depth-first searching) and exploration (regretting). The lightweight intermediate-level planner generates instantaneous spatial sub-goal locations via an online constructed Voronoi local graph. The simulation experiments demonstrate that our framework achieves better performance in the target-driven navigation tasks and generalizes well, with an 80% improvement compared to the existing state-of-the-art method. The code and result video will be released at https://ventusff.github.io/ReVoLT-website/.
Index Terms—Relational reasoning, combinatorial relation graph neural networks, UCT bandit, online Voronoi local graph

I. INTRODUCTION

Finding objects efficiently in complex houses is a prerequisite for domestic service robots. Robots need to reason and make dynamic decisions while interacting with the real-world environment. Embodied AI, proposed by Matej Hoffmann and Rolf Pfeifer [1], suggests that to truly understand how the human brain works, a brain should be embedded into a physical body and allowed to explore and interact with the real world. Among all the work practicing Embodied AI in recent years, target-driven navigation (TDN) is one of the most feasible and essential tasks; it combines techniques from both machine learning and robotics and is widely applicable to scenarios such as domestic service robots. It typically requires the robot to find a target object of a certain category in an unknown scene, demanding both high efficiency and a high success rate.
Hence, the key problems of the TDN task are generalizing across unknown domains and exploring efficiently. The traditional Simultaneous Localization and Mapping (SLAM) pipeline has already handled TDN to some extent [2], but numerous problems remain in its major modules. First, it remains troublesome for SLAM-based methods to acquire and maintain a lifelong-updating semantic map, which demands accurate sensors and semantic information. Second, SLAM-based methods are inherently less adaptive to posterior information, which causes them to generalize poorly in complicated environments, especially indoor scenes. Last but not least, SLAM-based methods are not specially designed for searching objects in unknown environments, which requires keeping a balance between exploitation (depth-first searching) and exploration (regretting).

Recently, learning-based methods have emerged and shown powerful capabilities for solving complicated tasks.
However, these methods generally have problems with interpretability and generalization, especially in the TDN task, which requires robots to operate in unseen domains. We argue that it is more natural and empirical to introduce priors [3] into the learning model instead of training from scratch, considering how humans teach ignorant babies. Introducing priors enables algorithms to achieve higher data efficiency, better model interpretability, and better generalization. In indoor TDN tasks, one of the most useful pieces of prior information is the relationship among objects and rooms of different categories. Some recent works reason about the target direction using object relationships as priors in single-room environments [4]–[6]. However, common domestic scenes are composed of multiple rooms, so more prior information, such as room connections, object-in-room membership, and other implicitly structured relationships, could be exploited; these are typically ignored in those works.
In this paper, we propose a hierarchical navigation framework, Relational Reasoning and Voronoi Local graph planning (ReVoLT), which comprises a combinatorial graph neural network for multiform domestic relation extraction, a UCT-based reasoning exploration, and an online Voronoi local graph for the semantic-spatial transition. The detailed contributions are as follows:

- The TDN task is concisely decomposed, allowing separate, specialized designs for different modules instead of operating in a mixed-up end-to-end manner. We focus our efforts on designing the reasoner and the planner.
- To extract multiform structural relations for reasoning, we propose combining unsupervised GraphSAGE [7], self-supervised GCN, and c-GraphRNN methods for learning object embedding, region embedding, and region rollout, respectively.
- Based on the relation priors, the high-level reasoner (semantic reasoning) is abstracted as a bandit problem and adopts UCT to balance exploitation (depth-first searching) and exploration (regretting).

arXiv:2301.02382v1 [cs.RO] 6 Jan 2023

Fig. 1. The main hierarchical framework of the ReVoLT method, which contains a high-level reasoner (infers semantic sub-goals), an intermediate-level planner (computes spatial location sub-goals), and a low-level controller (computes actions). The combinatorial relation extraction module provides priors on the exploration value of observed objects and regions through embedding similarity. In particular, the Region Rollout model provides Monte Carlo simulation for UCT in a conditional GraphRNN (c-GraphRNN) way.
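The bandit abstraction behind the reasoner can be made concrete with the standard UCB1 selection rule that UCT applies at each tree node. The sketch below is a minimal illustration of that exploitation/exploration trade-off, not the authors' implementation; the sub-goal labels and the `select_subgoal` helper are our own illustrative assumptions.

```python
import math

def ucb_score(value_sum, visits, parent_visits, c=1.414):
    """UCB1 score: average reward (exploitation) plus an exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited sub-goals are always tried first
    exploitation = value_sum / visits
    exploration = c * math.sqrt(math.log(parent_visits) / visits)
    return exploitation + exploration

def select_subgoal(children):
    """Pick the semantic sub-goal with the highest UCB1 score.

    `children` maps a sub-goal label to a (value_sum, visits) pair.
    """
    parent_visits = sum(v for _, v in children.values()) or 1
    return max(
        children,
        key=lambda k: ucb_score(children[k][0], children[k][1], parent_visits),
    )
```

A frequently rewarded sub-goal keeps winning until its exploration bonus decays, at which point a rarely tried (or never tried) sub-goal is selected instead, which mirrors the depth-first-searching versus regretting behavior described above.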
- We construct Voronoi local graphs online using RGB-D observations and convert semantic sub-goals to spatial locations. We term this an intermediate-level planning process.

The test results show that the proposed framework is superior to state-of-the-art methods and achieves a higher success rate and success weighted by path length (SPL), with good generalization.

II. RELATED WORKS

Recently, many TDN solutions based on relational reasoning have been proposed. They have the advantage of replacing the explicit metric map of SLAM-based methods, inferring the approximate position of the target object from observed objects. Most of these methods use GNNs to learn object-object proximity relationships but ignore the relationships between regions/rooms, which limits their task scenarios to a single room (using the AI2-THOR dataset [8] in simulation for training). For example, Yang et al.
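To illustrate the idea behind a Voronoi local graph, the stdlib-only sketch below marks grid cells roughly equidistant from their two nearest obstacles (an approximation of the generalized Voronoi diagram of free space) and snaps a sub-goal to the nearest skeleton node. The grid, tolerance, and helper names are our own assumptions; the paper builds its graph online from RGB-D observations.

```python
import math
from itertools import product

def voronoi_skeleton(obstacles, width, height, tol=0.35):
    """Cells whose two nearest obstacles are nearly equidistant approximate
    the generalized Voronoi diagram (the 'ridge' of the free space)."""
    skeleton = set()
    for cell in product(range(width), range(height)):
        dists = sorted(math.dist(cell, ob) for ob in obstacles)
        if len(dists) >= 2 and dists[0] > 0 and dists[1] - dists[0] < tol:
            skeleton.add(cell)
    return skeleton

def nearest_waypoint(skeleton, subgoal):
    """Convert a spatial sub-goal position into the closest Voronoi node."""
    return min(skeleton, key=lambda cell: math.dist(cell, subgoal))
```

Planning along such skeleton nodes keeps the robot centered between obstacles, which is the usual motivation for Voronoi-based local planners.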
[4] propose using a Graph Convolutional Network (GCN) to incorporate prior knowledge about object relationships into a Deep Reinforcement Learning (DRL) framework as part of a joint embedding. Their priors are obtained from large-scale scene understanding datasets and updated according to the current observation. Qiu et al. [6] share the same idea but extract observations as context vectors, which integrate the relationship strength between connected objects and their spatial information. For navigation tasks in houses with multiple rooms, it is necessary to first reach the room that may contain the target object (e.g., refrigerator-kitchen), then search for the target among object cliques. Therefore, the learning of prior knowledge should consider more relationships, including room-to-room connections and object-in-room membership.
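For readers unfamiliar with GCN propagation, here is a tiny pure-Python sketch of one symmetrically normalized GCN layer (the standard Kipf-Welling update). The toy graph and weights are illustrative only, not the priors used in [4].

```python
import math

def gcn_layer(adj, feats, weight):
    """One GCN step: H' = ReLU(D^-1/2 (A+I) D^-1/2 . H . W)."""
    n, fin, fout = len(adj), len(feats[0]), len(weight[0])
    # add self-loops so each node keeps its own features
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in a]
    # symmetric degree normalization
    a = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
         for i in range(n)]
    out = []
    for i in range(n):
        # aggregate neighbor features, then apply the linear map and ReLU
        agg = [sum(a[i][k] * feats[k][f] for k in range(n)) for f in range(fin)]
        out.append([max(0.0, sum(agg[f] * weight[f][o] for f in range(fin)))
                    for o in range(fout)])
    return out
```

Stacking such layers lets information about observed objects propagate along relation-graph edges, which is how the relation prior is injected into the joint embedding.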
Wu et al. [9] propose a memory structure based on a Bayesian graph model. It uses a probabilistic relationship graph to obtain the prior house layout from the training set and estimates its posterior in the test set. However, this work does not combine object-level reasoning to achieve a complete TDN task. Chaplot et al. [10] build a topological representation with associated semantic features and learn a prior semantic score function to evaluate the probability of potential nodes in a graph along various directions. However, they provide target images, which is impractical in domestic scenarios, while our method only uses target labels. They subsequently extend the Active Neural SLAM system [2] to learn semantic priors using a semantically aware long-term policy for the label-target navigation task [11], and won the CVPR 2020 Habitat ObjectNav Challenge(1) [12].
It is worth mentioning that they also point out that end-to-end learning-based methods suffer from large sample complexity and poor generalization, as they memorize object locations and appearance in training environments [11], which prompted us to consider a hierarchical framework with a topological graph. Table I lists only TDN methods with label targets and relational reasoning.

III. REVOLT REASONING & PLANNING WITH A HIERARCHICAL FRAMEWORK

This task needs to be re-examined from the perspective of bionics. Imagine a human facing such a task when he enters an unknown house. He will not feel confused, owing to the prior knowledge about domestic scenes he has. It is natural for us to first roughly determine the type of room based on the categories of multiple observed objects in the current room (e.g.,
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' a bedroom).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' According to the object-in-room membership, the 1https://aihabitat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='org/challenge/2020/ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='13 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='12 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='11 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='10 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='6 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='8 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='6 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='5 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='8 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} 
Fig. 2. Combinatorial relation extraction module.
(a) Obtain object embeddings via unsupervised weighted-GraphSAGE; (b) region embeddings are obtained by passing a sub-graph with object embeddings through GCN layers; (c) following the house structure of region connectivity, a GraphRNN-based model learns the structure distribution and generates possible features of future regions node by node.

TABLE I
PERFORMANCE OF EXISTING TDN METHODS WITH VARIOUS EXPERIMENT SETTINGS

Method           Room Scale  Dataset       SR (%)  SPL (%)
Scene-prior [4]  Single      AI2-THOR      35.4    10.9
SAVN [13]        Single      AI2-THOR      35.7     9.3
MJOLNIR [6]      Single      AI2-THOR      65.3    21.1
BRM [9]          Multiple    House3D       -       -
SemExp† [11]     Multiple    Matterport3D  36.0    14.4
† SemExp won first place in the CVPR Habitat 2020 competition.

exploration value V(t|cur room) of the target object t in the current room can be obtained. At the same time, some potential but unexplored passages (e.g., a door or hallway) can be identified as ghost nodes, as in [10]. The structural relationship of the house layout and room connections helps predict the categories and values V(t|next room) of the next rooms connected by ghost nodes. Beyond these priors, dynamic decisions must also be made within each specific task, rather than applying experience mechanically. A reasoning procedure that combines intelligent exploration and exploitation is one of the winning strategies.
Thus, our approach focuses on solving the following two problems: (i) how to obtain a more effective prior conditional exploration value in a structured form, and (ii) how to make efficient decisions between multiple feasible paths based on exploration values.

The remainder of this section is organized as follows. In subsections III-A, III-B, and III-C, we present a combinatorial relation extraction module (Fig. 2) using GNNs, which learns three different relationships in a unified paradigm. A UCT-based online reasoner is described in subsection III-D. In III-E, we consider coarse spatial information and build an intermediate-level planner through online Voronoi construction. Finally, the whole ReVoLT hierarchical framework is summarized in subsection III-F (Fig. 1).
A. Object Embedding Learning

As illustrated in Fig. 2 (a), the object-to-object relationship consists not only of pair-wise semantic similarity, but also of the distances and number of hops between object pairs. We first extract an object-level graph Go(Vo, Eo) from object positions pos and categories Co in the Matterport3D dataset. Objects in the same room are fully connected. For object pairs in different rooms, only the objects closest to a common door share a connecting edge. This lets the robot infer objects strongly related to the target using object-level embeddings alone. GraphSAGE [7] is a popular model in the node embedding field; we adopt it to obtain an embedding for each object category that fuses semantics and proximity relationships with other categories.
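The graph-construction rule above (full connectivity within a room, door-mediated links across rooms) can be sketched as follows; the tuple layout, helper names, and door representation are illustrative assumptions, not the paper's implementation:

```python
from itertools import combinations
import math

def build_object_graph(objects, doors):
    """Build an object-level graph Go(Vo, Eo) as a dict mapping edge (i, j)
    to its length. `objects` is a list of (obj_id, category, room_id, (x, y));
    `doors` is a list of (room_a, room_b, (x, y)). Both layouts are assumed."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    edges = {}
    # Objects in the same room are fully connected, weighted by distance.
    for (i, _, ri, pi), (j, _, rj, pj) in combinations(objects, 2):
        if ri == rj:
            edges[(i, j)] = dist(pi, pj)
    # Across rooms, connect only the pair of objects closest to a shared door.
    for ra, rb, pd in doors:
        in_a = [o for o in objects if o[2] == ra]
        in_b = [o for o in objects if o[2] == rb]
        if in_a and in_b:
            na = min(in_a, key=lambda o: dist(o[3], pd))
            nb = min(in_b, key=lambda o: dist(o[3], pd))
            edges[(na[0], nb[0])] = dist(na[3], nb[3])
    return edges
```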
Our node embedding procedure uses GloVe [14] as the initial node semantic features {x_v, ∀v ∈ V_o}, and employs an unsupervised form of GraphSAGE with a loss that penalizes embedding similarity between two objects far apart and rewards it for adjacent ones. Different from the original GraphSAGE, edge features {ω_{e:u→v}, ∀e ∈ E_o} are also taken into account in the aggregation and loss calculations. For each search depth k with weight matrix W^k, ∀k ∈ {1, ..., K}, we employ an edge-weighted mean aggregator which simply takes the element-wise mean of the vectors in {h_u^{k−1}, ∀u ∈ N(v)} to aggregate information from node neighbors:

h_v^0 ← x_v, ∀v ∈ V_o
h_v^k ← σ( W^k · mean({h_v^{k−1}} ∪ {ω_{u→v} · h_u^{k−1}}) )   (1)

Then an edge-weighted loss function is applied to the outputs {z_v, ∀v ∈ V_o} to tune the weight matrices W^k:

L_{G_o}(z_v) = −log σ( ω_{u→v} z_v⊤ z_u ) − Q · E_{u_n ∼ P_n(v)} log σ( −ω_{u→v} z_v⊤ z_{u_n} )   (2)

where P_n is a negative sampling distribution, Q defines the number of negative samples, and σ is the sigmoid function.
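Equations (1) and (2) can be sketched in numpy as below; the dictionary-based graph layout and function names are illustrative assumptions, and a real implementation would train over mini-batches of sampled neighborhoods:

```python
import numpy as np

def aggregate(h, neighbors, weights, W, v):
    """One edge-weighted mean-aggregation step (Eq. 1): average h_v with its
    edge-weighted neighbor vectors, project by W, apply the sigmoid."""
    msgs = [h[v]] + [weights[(u, v)] * h[u] for u in neighbors[v]]
    m = np.mean(msgs, axis=0)
    return 1.0 / (1.0 + np.exp(-(W @ m)))  # sigma = sigmoid

def unsup_loss(z_v, z_u, z_neg, w_uv, Q=1):
    """Edge-weighted unsupervised loss (Eq. 2) for one positive pair (v, u)
    and one sampled negative: pull adjacent embeddings together, push the
    negative apart, both scaled by the edge weight."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    pos = -np.log(sig(w_uv * z_v @ z_u))
    neg = -Q * np.log(sig(-w_uv * z_v @ z_neg))
    return pos + neg
```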
Since object embeddings with the same category {z_c, ∀c ∈ C_o} should have a consistent representation, another mean aggregation is performed over embeddings of the same category, between the final GraphSAGE aggregation and the loss function. This overwrites the original value with the final embedding for each category: z_c ← mean(h_v^K), if C_o(v) = c.

B. Region Embedding Learning

Apart from the pairwise relationship between objects, the many-to-one relationship between an object and a room or region is also indispensable for inferring the possible existence of the target object in a certain room or among multiple observed objects. Besides, to evaluate similarity, relationships at different levels should share a unified paradigm so that their representations have consistent metrics. Therefore, for region-level sub-graphs, we opt for the same embedding representation procedure. This part is shown in Fig. 2 (b).
Region embedding is carried out in a self-supervised form. We take a sub-graph G_r(V_r, E_r) as input, with the embeddings of objects in the same region {z_c, ∀c ∈ C_o} as nodes and weighted spatial distances as edges. The batch composed of these sub-graphs is passed into a GCN [15], and the corresponding region embeddings {r_v, ∀v ∈ V_r} are obtained. As in the previous procedure, a mean aggregation is performed over region embeddings with the same label to obtain a uniform vector representation {r_l, ∀l ∈ L_r}. Since there is no need for multi-hop aggregation at the region level, a simple GCN layer is applied rather than GraphSAGE.
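A single GCN propagation step of the kind used here can be sketched as follows; this is a minimal numpy version with symmetric normalization and an assumed ReLU activation, not the paper's exact layer:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step on a region sub-graph: add self-loops,
    symmetrically normalize the adjacency, then propagate features X
    through weights W with a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])           # adjacency with self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```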
To enable membership calculation between a region embedding r_l and an object embedding z_c, and to distinguish regions with different labels, we use a combined loss comprising two parts: the classification loss of the embedding label and the membership loss of object-in-region:

L_{G_r}(r_v) = −log σ( r_v⊤ z_u ) − Q · E_{u_n ∼ P_n(v)} log σ( −r_v⊤ z_{u_n} ) − (1/n) Σ_{i=1}^{n} l_v log( ˆl(r_v) )   (3)

where P_n(v) represents objects not in region v, and ˆl(·) is a multi-layer perceptron (MLP) network.

C. Region Rollout Learning

As the third and most important part of relation extraction, structural relationship reasoning plays a crucial role in identifying the correct navigation direction and shortening the exploration period. To achieve this, the joint probability p(G_h) of houses needs to be learned so that a probable house layout memory G_h ∼ p(G_h|G_sub) can be conceived conditioned on the observed regions G_sub. However, its sample space might not be easily characterized.
Thus, house graphs are modeled as sequences following the idea of GraphRNN [16], with some concepts redefined to make them more suitable for conditional reasoning with embeddings. This part is shown in Fig. 2 (c).

S^π = f_s(G_h, π) = (A_1^π, ..., A_n^π)   (4)

where π represents the node order, and each element A_i^π ∈ {0, 1}^{(i−1)×(i−1)}, i ∈ {1, ..., n}, is an adjacency matrix referring to the edges between node π(v_i) and its previous nodes π(v_j), j ∈ {1, ..., i − 1}, already in the graph. Since each A_i^π has variable dimensions, we first pad them up to the maximum dimension n and then repeat the 2D matrix 16 times to form a 3D matrix of n × n × 16 dimensions as an edge mask, where 16 is the embedding length. Therefore, a featured graph can be expressed as the element-wise product of the region embedding matrix X^π under the corresponding order and the sequence matrix {S^π}_3D:

p(G) = Π_{i=1}^{n+1} p( x_i^π | ({S_1^π}_3D, ..., {S_{i−1}^π}_3D) ⊙ X_{i−1}^π )   (5)

where X_{i−1}^π is the embedding matrix with (i − 1) × (i − 1) × 16 dimensions referring to region embeddings before region π(v_i), and x_i^π refers to the embedding of π(v_i). Passing {S^π}_3D ⊙ X^π as a sequence into a GRU or LSTM, we can learn the structure distribution of houses.
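The padding-and-tiling construction of the 3D edge masks can be sketched as follows; the dimension names follow the text, while the function itself is an illustrative assumption:

```python
import numpy as np

def edge_mask_sequence(A_list, n, d=16):
    """Pad each adjacency A_i^pi up to n x n, then tile it d times along a
    third axis, giving the n x n x d edge masks {S^pi}_3D that gate the
    region embedding matrix X^pi elementwise (d = 16, the embedding length)."""
    masks = []
    for A in A_list:
        P = np.zeros((n, n))
        k = A.shape[0]
        P[:k, :k] = A                                # pad to maximum size n
        masks.append(np.repeat(P[:, :, None], d, axis=2))
    return masks
```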
This allows us to predict the next region embedding and label conditioned on the observed subgraph. The loss function of the Region Rollout network is a cross-entropy between the generated embedding's label and the real label:

L_{G_h}(x_i^π) = −(1/n) Σ_{i=1}^{n} l_i log-softmax[ (x_i^π)⊤ r_j ], ∀j ∈ L_r   (6)

In conclusion, with the combination of the III-A unsupervised edge-weighted GraphSAGE object embedding learning, the III-B self-supervised GCN region embedding learning, and the III-C c-GraphRNN conditional region rollout, we can extract multiform structural relationships. Meanwhile, embedding serves as a unified paradigm for representation, and the similarity between object or region embeddings (either observed or predicted) and the target object embedding is used as a prior to guide exploration in an unknown domain.

D. Reasoning and Exploring as a Bandit Problem

A prior alone cannot lead to success. Inspired by [10], a posterior topological representation is also constructed in each specific task to combine experience with practice.
Specifically, we build a multi-layer posterior topological graph covering the object level, clique level, and vertex level. A clique divides rooms into small clustered regions and reduces the burden on the visual front-end. Each vertex governs the three nearest cliques. The Object Embedding network provides the object node features, and the Region Embedding network generates the features of both cliques and vertices from their attached objects. The Region Rollout network gives an evaluation of ghost nodes. However, in reality there are always situations contrary to experience. In other words, robots must be able to balance exploration and exploitation online. We adopt the Upper Confidence Bound for Trees (UCT) method [17] to set an online bonus.
The simulation procedure of UCT is supported by the Region Rollout network; thus the robot not only obtains a bonus from the visit count, but also estimates the future exploration value inductive bias ω_i of the selected path. This effectively prevents the robot from being trapped in a useless area.

Fig. 3. In a specific task, a multi-layer topological graph is constructed based on the visual front-end, and a tree with the birthplace as the root node is abstracted from the graph. A clique refers to a collection of adjacent objects or a bunch of non-semantic obstacles, and a vertex refers to an observed navigable location. Each gray ghost node connects two vertices and only stores the relative position of the connected vertices to assist localization, without being used as a navigation sub-goal. The black ghost nodes refer to unknown areas and promote exploration.

The combined effect of the inductive bias ω and the bonus discourages repetitive search near negative (non-success) sub-goals and drives the robot to return to parent nodes for back-tracking, which we term Revolt Reasoning. The word Revolt summarizes the characteristics of our method vividly: it allows robots to regret at nodes with low exploration value, discarding them and returning to previous paths. To avoid robots wandering between two goals, it is necessary to introduce a navigation loss term L_dis to penalize node distances. Hence, we finally obtain the exploration value V of node i as:

V(t|i) = ( Σ_{i→j}^{m} ω_j ) / m + c1 √( ln N_i / n_i ) − c2 L_dis   (7)

where the factors c1 and c2 are set to 1 and 0.5, respectively; j refers to one of node i's descendants in the tree, and m is their total number.
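Equation (7) can be computed directly; the function below is a minimal sketch with the stated constants as defaults (argument names are illustrative):

```python
import math

def exploration_value(omega_desc, N_i, n_i, L_dis, c1=1.0, c2=0.5):
    """Exploration value V(t|i) of node i (Eq. 7): mean inductive bias of
    its m descendants, plus a UCT visit-count bonus (N_i counts arrivals at
    node i and its descendants, n_i at node i alone), minus a distance
    penalty L_dis. Defaults c1 = 1, c2 = 0.5 follow the text."""
    m = len(omega_desc)
    bias = sum(omega_desc) / m if m else 0.0
    bonus = c1 * math.sqrt(math.log(N_i) / n_i)
    return bias + bonus - c2 * L_dis
```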
N_i is the total number of arrivals at node i and its descendants, while n_i represents arrivals at node i alone.

E. Online-constructed Voronoi Local Graph

The reasoner only gives a semantic node id in a graph as a sub-goal. If the low-level controller used it directly as a navigation goal, it would inevitably lead to over-coupling and make navigation success harder. We can refer to the hierarchical human central nervous system composed of the brain, cerebellum, brain-stem, and spinal cord [18]: if the high-level reasoner is compared to the brain, then the skeletal muscles are the low-level motor controller. The brain does not transmit motion instructions directly to the skeletal muscles, but passes them through the brain-stem, spinal cord, and other lower-level parts of the central nervous system for information conversion [19]. Besides, the brain does not actually support high-speed, low-latency information interaction while controlling a motion [20].
Therefore, it is necessary to use an RGB-D camera and an odometer to construct a local Voronoi graph, offering approximate relative coordinates of the sub-goal within a reachable range as input to the low-level controller.

Fig. 4. Combining the depth information with the robot's pose over a short period, we can get a simple 3D reconstruction result. A Voronoi local graph can be constructed through DBSCAN clustering after projecting the 3D map as a 2D obstacle scatter plot.

The Voronoi graph records the relationship between the robot and obstacles and provides an available path. Since the TDN task is map-less, we construct a local Voronoi graph online within a fixed number of steps. Conditioned on the depth information, the camera parameters (intrinsic and extrinsic), and the odometer information, obstacles in depth images can easily be converted into coordinates in a world coordinate system.
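This conversion can be sketched for a single depth-image column as a pinhole back-projection; the geometry below and its 2D (ground-plane) simplification are assumptions for illustration, not the paper's exact implementation:

```python
import math

def depth_pixel_to_world(u, depth, fx, cx, x_r, y_r, yaw):
    """Convert one depth-image column to 2D world coordinates: the
    intrinsics (fx, cx) recover the lateral offset in the camera frame,
    and the odometry pose (x_r, y_r, yaw), anchored at the robot's birth
    pose, rotates/translates the point into the world frame. The vertical
    axis is dropped, matching the later projection to a 2D scatter plot."""
    x_lat = (u - cx) * depth / fx            # lateral offset, camera frame
    # Rotate (forward, lateral) by yaw into world axes, then translate.
    xw = x_r + depth * math.cos(yaw) - x_lat * math.sin(yaw)
    yw = y_r + depth * math.sin(yaw) + x_lat * math.cos(yaw)
    return xw, yw
```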
This coordinate system is anchored at the birth pose of the robot. Projecting the partially reconstructed 3D map onto a 2D plane along the vertical axis yields a scatter diagram of obstacles. We then construct a Voronoi diagram online by segmenting navigable paths and explorable cliques containing multiple related objects. Different from traditional methods [21], we first use DBSCAN [22], [23], a density-based clustering algorithm, to cluster the scattered points of adjacent obstacles into convex hulls and filter out noise points. We then construct a Delaunay triangulation over the centers of the scattered points in each convex hull, thereby generating the Voronoi diagram.

Fig. 5. The semantic sub-goal is converted into relative coordinates by the Voronoi-based intermediate-level planner.

F. Hierarchical reasoning and planning for navigation

In this section, we summarize how the proposed reasoner and planner cooperate to complete navigation tasks. The curves in Fig. 5 show the correspondence between concepts in the reasoner's topological graph and in the planner's Voronoi diagram. An aggregation of obstacles is regarded as a clique; each clique attaches and records all objects in its convex hull and evaluates its inductive bias value according to the object-in-region membership given by the Region Embedding network. The position of a vertex is generated by the Voronoi diagram. The multiple cliques and their subordinate objects surrounding a vertex jointly determine its general room label, which is then used in the inductive bias evaluation.
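The clustering stage above can be illustrated with a minimal pure-Python DBSCAN over the projected obstacle scatter; the eps and min_pts values are illustrative, and in practice a library implementation (e.g. scikit-learn's) plus a proper Delaunay/Voronoi construction over the cluster centers would be used.

```python
def dbscan(points, eps=0.3, min_pts=3):
    """Minimal density-based clustering of 2D obstacle points.
    Returns one cluster label per point; -1 marks noise to be filtered out."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # provisional noise
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:   # core point: keep expanding
                queue.extend(nb)
        cluster += 1
    return labels

def cluster_centers(points, labels):
    """Center of each obstacle cluster; a Delaunay triangulation of these
    centers would then yield the Voronoi diagram of the navigable space."""
    groups = {}
    for (x, y), lab in zip(points, labels):
        if lab >= 0:
            groups.setdefault(lab, []).append((x, y))
    return [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            for g in groups.values()]
```

Noise filtering falls out of the labeling: isolated scatter points never reach the min_pts density and keep the -1 label, so only genuine obstacle aggregations become cliques.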
Relative directions and distances between two adjacent vertex nodes are stored in gray ghost nodes. Since the robot exploits relative coordinates and directions, it avoids the influence of odometer and depth camera errors and is therefore insensitive to cumulative error. Moreover, thanks to the Voronoi local diagram, only short-period scatter data need to be saved, and there is no need to handle a loop-closure matching problem as in SLAM. With the Voronoi diagram constructed and transformed into a hierarchical topology, we can reason at the vertex/clique level and at the object level, searching for the best vertex position and the most promising clique based on the exploration value. After selecting a clique, the robot navigates towards it and explores it more explicitly through object-level reasoning. The Voronoi diagram also provides the evidence for choosing the next best view of a clique: by changing perspective several times, the robot can find the target object in a clique more efficiently.
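Sub-goal selection by exploration value can be sketched as follows. The paper does not spell out the exact weighting here, so the prior term (standing in for the Region Embedding's inductive bias), the UCT-style bonus over visit counts $n_i$ and $N$, and the weights c and lam are all hypothetical.

```python
import math

def exploration_value(prior, n_i, n_total, dist, c=1.0, lam=0.1):
    """Score a candidate clique/vertex: semantic prior from the relation
    networks, plus a UCT-style bonus that decays with the node's visit
    count n_i, minus a penalty on travel distance. n_total is the total
    number of arrivals over the node and its descendants."""
    bonus = c * math.sqrt(math.log(n_total + 1) / (n_i + 1))
    return prior + bonus - lam * dist

def select_subgoal(candidates, n_total):
    """candidates: list of (node_id, prior, visits, distance) tuples."""
    return max(candidates,
               key=lambda cand: exploration_value(cand[1], cand[2],
                                                  n_total, cand[3]))[0]
```

The bonus term is what pulls the robot out of frequently revisited regions: a never-visited clique with a modest prior can outscore a well-explored clique with a stronger prior, which is exactly the behavior the UCT-bonus ablation in Section V probes.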
IV. EXPERIMENTS

A. Experiment Setup

We use the Habitat simulator [24] with the Matterport3D [25] environment as our experiment platform. Habitat is a 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling. The Matterport3D dataset contains 90 houses with 40 object categories and 31 region labels, and provides detailed object and region segmentation information. We focus on the 21 target object categories required by the task: chair, table, picture, cabinet, cushion, sofa, bed, chest of drawers, plant, sink, toilet, stool, towel, tv monitor, shower, bathtub, counter, fireplace, gym equipment, seating, and clothes, and we ignore meaningless room labels such as outdoor, no label, other room, and empty room. We use YOLOv4 [26] as our object detection module, fine-tuned on objects in the Matterport3D dataset.
Because the aim of the low-level controller is the same as that of the PointNav task [27], we adopt a pre-trained state-of-the-art PointNav method, occupancy anticipation [28], as our controller. During a TDN task, the robot is spawned at a random location in a house and must find an object of a given category as quickly as possible. The task is evaluated with three commonly used indicators: Success Rate (SR), Success weighted by Path Length (SPL), and Distance to Success (DTS). SR measures how often the target is found over multiple episodes and is defined as $\frac{1}{N}\sum_{i=1}^{N} Su_i$, where $N$ is the total number of episodes and $Su_i$ is a binary value indicating the success or failure of the $i$-th episode. SPL accounts for both success and path optimality and is defined as $\frac{1}{N}\sum_{i=1}^{N} S_i \frac{L_i}{\max(P_i, L_i)}$, where $L_i$ is the shortest path length provided by the simulator and $P_i$ is the robot's path length in episode $i$. DTS is the distance between the agent and the success threshold boundary when the episode ends.
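The SR and SPL definitions above translate directly into code; this is a minimal sketch over per-episode logs, with illustrative variable names.

```python
def success_rate(successes):
    """SR = (1/N) * sum(Su_i) over N episodes, Su_i in {0, 1}."""
    return sum(successes) / len(successes)

def spl(successes, shortest, actual):
    """SPL = (1/N) * sum(S_i * L_i / max(P_i, L_i)), where L_i is the
    simulator's shortest path length and P_i the robot's path length."""
    return sum(s * l / max(p, l)
               for s, l, p in zip(successes, shortest, actual)) / len(successes)

# Three episodes: (Su_i, L_i, P_i).
episodes = [(1, 5.0, 10.0), (1, 8.0, 8.0), (0, 6.0, 20.0)]
su, li, pi = zip(*episodes)
print(success_rate(su))  # 0.6666666666666666
print(spl(su, li, pi))   # (0.5 + 1.0 + 0) / 3 = 0.5
```

Note how SPL discounts the first episode to 0.5 even though it succeeded, because the robot's path was twice the optimal length.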
The boundary is set to 1 m and the maximum episode length to 500 steps, the same as in [11]. Furthermore, our navigation task has two modes: independent (ReVoLT-i) and continuous (ReVoLT-c). Independent mode is the traditional one: the environment is reset after each episode and the robot clears its memory. Continuous mode allows the robot to keep its topological graph when it is reset in the same house, and is used to evaluate the robot's capability of keeping and updating an environment memory.

B. Baselines

Random: At each step, the agent samples an action uniformly at random from the action space.

RGBD + DD-PPO: This baseline is provided by the ObjectNav Challenge 2020 [24]. RGB-D observations are passed directly to an end-to-end DD-PPO policy, which outputs an action.

Active Neural SLAM: This baseline uses an exploration policy trained to maximize coverage from [2], followed by the heuristic-based local policy described above.

SemExp: Since [11] has not open-sourced its code, we directly use the results in its paper as the state of the art.

C. Results

1) Results of combinatorial relation embeddings: The Object Embedding network attains a classification accuracy of 91%. The Region Embedding network attains a membership accuracy of 78% and a classification accuracy of 75%. The Region Rollout network reaches a prediction accuracy of 45% on the test set, which is acceptable since room relationships are inherently not strongly predictable.

2) Results of the whole TDN task: The results of the baseline methods and ReVoLT are shown in Table II.
Both of our models significantly outperform the current state of the art: ReVoLT-i small obtains roughly an 80% relative increase in SR over SemExp and nearly double its SPL. This confirms our hypothesis that separating prior learning from the control policy in a hierarchical framework is a wiser approach than directly learning a semantically-aware policy. The standard ReVoLT-i, with 19 target categories, still achieves a higher SR and SPL. In continuous mode, the robot retains a memory of the same house, which allows it to find previously observed targets with a higher SR.

Fig. 6. Top-down maps of four successful tasks using ReVoLT-i. The blue squares are the starting positions, the blue curves are the robot trajectories, and arrows represent the robot's current positions. Targets are highlighted with green boxes, and pink areas mark the success threshold boundary. Each trajectory is colored as a gradient from dark to light; a brighter end indicates a longer path.

TABLE II
PERFORMANCE COMPARISON

Method              | SR (%) | SPL   | DTS (m)
Random              | 0      | 0     | 10.3298
RGBD + DD-PPO       | 6.2    | 0.021 | 9.3162
Active Neural SLAM  | 32.1   | 0.119 | 7.056
SemExp1             | 36.0   | 0.144 | 6.733
ReVoLT-i small*     | 66.7   | 0.265 | 0.9762
ReVoLT-i*           | 62.5   | 0.102 | 1.0511
ReVoLT-c*           | 85.7   | 0.070 | 0.0253

1 The 1st prize of AI Habitat 2020.
* These three refer to the small mode with only 6 target categories (as in SemExp), the independent mode (-i), and the continuous mode (-c) of ReVoLT.

V. ABLATION STUDY

The success of ReVoLT is attributed to the relationship priors provided by the combinatorial graph neural networks, the online bonus from UCT, and the distance penalty. We therefore run three extra experiments with the same Voronoi-based planner and low-level controller to reveal their respective impacts. The results of the continuous mode are also presented below. The performance of all variants is listed in Table III.

TABLE III
PERFORMANCE OF ABLATION EXPERIMENTS

Method               | SR (%) | SPL   | DTS (m)
ReVoLT-i             | 62.5   | 0.102 | 1.0511
ReVoLT-c             | 85.7   | 0.070 | 0.0253
ReVoLT w/o priors    | 25.0   | 0.003 | 1.4129
ReVoLT w/o bonus     | 60.0   | 0.034 | 0.8139
ReVoLT w/o distance  | 54.5   | 0.030 | 1.2689

Fig. 7. For each of the three parts of the exploration value function, we conduct an ablation experiment and illustrate it in a top-down map.

ReVoLT w/o relationship priors. Without priors, sub-goals are generated according to the distance of the observed cliques. Comparing Fig. 7 (a) with Fig. 6, we find that the lack of semantic relationships profoundly affects the robot's path decisions, leaving it uninterested in a region containing the target even when that region is just nearby. Moreover, without region classification and region rollout, the robot cannot use the observed semantic information to reason about relationships, resulting in longer paths.

ReVoLT w/o UCT bonus.
The bonus is replaced with a fixed threshold: if the robot reaches the same clique or vertex node more than twice, that node can no longer be selected as a sub-goal. The corresponding top-down maps are illustrated in Fig. 7 (b). Without the UCT bonus, the robot gets stuck in an unpromising local region until the threshold is reached.

ReVoLT w/o distance penalty. In Fig. 7 (c), using only priors and bonuses can also complete tasks, but the paths are longer because the decisions fluctuate between candidates.

ReVoLT with continuous mode. The left figure of Fig. 7 (d) is the same as the one in Fig. 6. When searching for the second target in the same house, once the robot associates its current observations with its memory, it finds the target with a higher success rate. However, this also makes the robot focus more on exploitation and less on exploration, which may cause it to ignore closer targets and lead to a lower SPL.

To sum up, the relationship priors are essential for the robot to understand the environment semantics and are the major factor affecting SR. The UCT bonus and the distance penalty contribute to improving SPL. ReVoLT-c maintains a long-term scene memory and achieves outstanding performance.

VI. CONCLUSION

We propose ReVoLT, a hierarchical reasoning target-driven navigation framework that combines combinatorial graph relation extraction with online UCT decisions operating on a multi-layer topological graph. ReVoLT better exploits prior relationships, and its bandit-style reasoning is more reasonable and efficient. To bridge the gap between existing point-goal controllers and our reasoner, we adopt the Voronoi local graph for the semantic-spatial transition. However, significant challenges remain in this field. Our future directions are to use representation learning to introduce richer object information such as shape, color, and size; to use scene graph detection to introduce richer semantic relation information such as furniture layout; and to tackle richer tasks such as object instance navigation.

REFERENCES

[1] M. Hoffmann and R. Pfeifer, "The implications of embodiment for behavior and cognition: animal and robotic case studies," arXiv preprint arXiv:1202.0440, 2012.
[2] D. S. Chaplot, D. Gandhi, S. Gupta, A. Gupta, and R. Salakhutdinov, "Learning to explore using active neural slam," in International Conference on Learning Representations, 2019.
[3] K. Chatzilygeroudis, V. Vassiliades, F. Stulp, S. Calinon, and J.-B. Mouret, "A survey on policy search algorithms for learning robot controllers in a handful of trials," IEEE Transactions on Robotics, vol. 36, no. 2, pp. 328–347, 2019.
[4] W. Yang, X. Wang, A. Farhadi, A. Gupta, and R. Mottaghi, "Visual semantic navigation using scene priors," arXiv preprint arXiv:1810.06543, 2018.
[5] H. Du, X. Yu, and L. Zheng, "Learning object relation graph and tentative policy for visual navigation," in European Conference on Computer Vision, pp. 19–34, Springer, 2020.
[6] Y. Qiu, A. Pal, and H. I.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Christensen, “Learning hierarchical relationships for object-goal navigation,” 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [7] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Hamilton, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Ying, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Leskovec, “Inductive representation learning on large graphs,” in Advances in Neural Information Processing Systems (NeurIPS), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [8] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Kolve, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Mottaghi, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Han, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' VanderBilt, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Weihs, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Herrasti, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gordon, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Zhu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gupta, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Farhadi, “Ai2-thor: An interactive 3d environment for visual ai,” arXiv preprint arXiv:1712.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='05474, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [9] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Wu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Tamar, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Russell, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gkioxari, and Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Tian, “Bayesian relational memory for semantic visual navigation,” in Proceedings of the IEEE International Conference on Computer Vision, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 2769–2779, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [10] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Chaplot, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Salakhutdinov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gupta, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gupta, “Neural topological slam for visual navigation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 12875– 12884, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [11] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Chaplot, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gandhi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gupta, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Salakhutdinov, “Object goal navigation using goal-oriented semantic exploration,” Advances in Neural Information Processing Systems (NeurIPS), vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 33, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [12] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Batra, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gokaslan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Kembhavi, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Maksymets, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Mottaghi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Savva, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Toshev, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Wijmans, “ObjectNav Revisited: On Evalu- ation of Embodied Agents Navigating to Objects,” in arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='13171, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [13] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Wortsman, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Ehsani, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Rastegari, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Farhadi, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Mottaghi, “Learning to learn how to learn: Self-adaptive visual navigation using meta-learning,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 6743–6752, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [14] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Pennington, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Socher, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Manning, “Glove: Global vectors for word representation,” in Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 1532–1543, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [15] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Kipf and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Welling, “Semi-supervised classification with graph convolutional networks,” in International Conference on Learning Rep- resentations (ICLR), 2017.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [16] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' You, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Ying, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Ren, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Hamilton, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Leskovec, “Graphrnn: Generating realistic graphs with deep auto-regressive models,” in Inter- national Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 5708–5717, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [17] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Coquelin and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Munos, “Bandit algorithms for tree search,” in Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 67–74, 2007.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [18] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Purves, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Cabeza, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Huettel, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' LaBar, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Platt, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Woldorff, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Brannon, Cognitive neuroscience.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Sunderland: Sinauer Associates, Inc, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [19] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Bizzi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Tresch, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Saltiel, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' d’Avella, “New perspectives on spinal motor systems,” Nature Reviews Neuroscience, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 1, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 101–108, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [20] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Rosenbaum, Human motor control.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Academic press, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [21] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Mahkovic and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Slivnik, “Generalized local voronoi diagram of visible region,” in Proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 1998 IEEE International Conference on Robotics and Automation (Cat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 98CH36146), vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 349–355, IEEE, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [22] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Khan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' U.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Rehman, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Aziz, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Fong, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Sarasvady, “Dbscan: Past, present and future,” in The fifth international conference on the applications of digital information and web technologies (ICADIWT 2014), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 232–238, IEEE, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [23] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Schubert, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Sander, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Ester, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Kriegel, and X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Xu, “Dbscan revisited, revisited: why and how you should (still) use dbscan,” ACM Transactions on Database Systems (TODS), vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 42, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 1–21, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [24] Manolis Savva*, Abhishek Kadian*, Oleksandr Maksymets*, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Zhao, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Wijmans, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Jain, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Straub, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Liu, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Koltun, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Malik, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Parikh, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Batra, “Habitat: A Platform for Embodied AI Research,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [25] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Chang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Dai, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Funkhouser, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Halber, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Niessner, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Savva, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Song, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Zeng, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Zhang, “Matterport3D: Learning from RGB- D data in indoor environments,” International Conference on 3D Vision (3DV), 2017.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [26] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Bochkovskiy, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Wang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Liao, “Yolov4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content='10934, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [27] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Kadian, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Truong, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Gokaslan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Clegg, E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Wijmans, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Lee, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Savva, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Chernova, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Batra, “Sim2real predictivity: Does evaluation in simulation predict real-world performance?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=',” 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' [28] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Ramakrishnan, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Al-Halah, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' Grauman, “Occupancy antici- pation for efficient exploration and navigation,” in European Conference on Computer Vision, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'} +page_content=' 400–418, Springer, 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2tE0T4oBgHgl3EQfdwDZ/content/2301.02382v1.pdf'}
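The online UCT decision mentioned above rests on the UCB1 rule, which trades off exploiting sub-goals that have paid off so far against exploring rarely tried ones. The sketch below is illustrative only (the sub-goal names and reward bookkeeping are assumptions, not the paper's implementation):

```python
import math

def ucb_select(children, c=1.414):
    """Pick the child with the highest UCB1 score.

    children: list of dicts holding a cumulative 'reward' and a
    visit count 'n'. Unvisited children are expanded first, since
    their UCB1 score is effectively infinite.
    """
    total_visits = sum(ch["n"] for ch in children)
    best, best_score = None, float("-inf")
    for ch in children:
        if ch["n"] == 0:
            return ch  # always try an unvisited sub-goal first
        exploit = ch["reward"] / ch["n"]             # mean reward so far
        explore = c * math.sqrt(math.log(total_visits) / ch["n"])
        if exploit + explore > best_score:
            best, best_score = ch, exploit + explore
    return best

# Toy example: three hypothetical semantic sub-goals. The rarely
# visited "hallway" wins despite a lower mean reward, because the
# exploration bonus dominates at low visit counts.
subgoals = [
    {"name": "bedroom", "reward": 3.0, "n": 5},
    {"name": "kitchen", "reward": 1.0, "n": 2},
    {"name": "hallway", "reward": 0.5, "n": 1},
]
print(ucb_select(subgoals)["name"])
```

In a full UCT loop this selection step would be followed by simulating the chosen sub-goal, observing a reward (e.g. whether the target category was found nearby), and backing the result up the tree.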