diff --git "a/-9FQT4oBgHgl3EQfKjXJ/content/tmp_files/load_file.txt" "b/-9FQT4oBgHgl3EQfKjXJ/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/-9FQT4oBgHgl3EQfKjXJ/content/tmp_files/load_file.txt" @@ -0,0 +1,1292 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf,len=1291 +page_content='Published as a conference paper at ICLR 2023 EMERGENCE OF MAPS IN THE MEMORIES OF BLIND NAVIGATION AGENTS Erik Wijmans1,2∗Manolis Savva2,3 Irfan Essa1,4 Stefan Lee5 Ari S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Morcos2 Dhruv Batra1,2 1Georgia Institute of Technology 2FAIR, Meta AI 3Simon Fraser University 4Google Research Atlanta 5Oregon State University ABSTRACT Animal navigation research posits that organisms build and maintain internal spa- tial representations, or maps, of their environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' We ask if machines – specifi- cally, artificial intelligence (AI) navigation agents – also build implicit (or ‘men- tal’) maps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' A positive answer to this question would (a) explain the surprising phenomenon in recent literature of ostensibly map-free neural-networks achieving strong performance, and (b) strengthen the evidence of mapping as a fundamental mechanism for navigation by intelligent embodied agents, whether they be biolog- ical or artificial.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Unlike animal navigation, we can judiciously design the agent’s perceptual system and control the learning paradigm to nullify alternative naviga- tion mechanisms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Specifically, we train ‘blind’ agents – with sensing limited to only egomotion and no other sensing of any kind – to perform PointGoal navi- gation (‘go to ∆x, ∆y’) via reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Our agents are composed of navigation-agnostic components (fully-connected and recurrent neural networks), and our experimental setup provides no inductive bias towards mapping.' 
Despite these harsh conditions, we find that blind agents are (1) surprisingly effective navigators in new environments (∼95% success); (2) they utilize memory over long horizons (remembering ∼1,000 steps of past experience in an episode); (3) this memory enables them to exhibit intelligent behavior (following walls, detecting collisions, taking shortcuts); (4) there is emergence of maps and collision-detection neurons in the representations of the environment built by a blind agent as it navigates; and (5) the emergent maps are selective and task dependent (e.g. the agent 'forgets' exploratory detours). Overall, this paper presents no new techniques for the AI audience, but a surprising finding, an insight, and an explanation.

1 INTRODUCTION

Decades of research into intelligent animal navigation posits that organisms build and maintain internal spatial representations (or maps)1 of their environment, which enable the organism to determine and follow task-appropriate paths (Tolman, 1948; O'keefe & Nadel, 1978; Epstein et al., 2017). Hamsters, wolves, chimpanzees, and bats leverage prior exploration to determine and follow shortcuts they may never have taken before (Chapuis & Scardigli, 1993; Peters, 1976; Menzel, 1973; Toledo et al., 2020; Harten et al., 2020).
Even blind mole rats and animals rendered situationally-blind in dark environments demonstrate shortcut behaviors (Avni et al., 2008; Kimchi et al., 2004; Maaswinkel & Whishaw, 1999). Ants forage for food along meandering paths but take near-optimal return trips (Müller & Wehner, 1988), though there is some controversy about whether insects like ants and bees are capable of forming maps (Cruse & Wehner, 2011; Cheung et al., 2014).

∗Correspondence to etw@gatech.edu.
1Throughout this work, we use 'maps' to refer to a spatial representation of the environment that enables intelligent navigation behavior like taking shortcuts. We provide a detailed discussion and contrast w.r.t. a 'cognitive map' as defined by O'keefe & Nadel (1978) in Apx. B.1.

Analogously, mapping and localization techniques have long played a central role in enabling non-biological navigation agents (or robots) to exhibit intelligent behavior (Thrun et al., 2005; Institute, 1972; Ayache & Faugeras, 1988; Smith et al., 1990). More recently, the machine learning community has produced a surprising phenomenon – neural-network models for navigation that curiously do not contain any explicit mapping modules but still achieve remarkably high performance (Savva et al., 2019; Wijmans et al., 2020; Kadian et al., 2020; Chattopadhyay et al., 2021; Khandelwal et al., 2022; Partsey et al., 2022; Reed et al., 2022).
For instance, Wijmans et al. (2020) showed that a simple 'pixels-to-actions' architecture (using a CNN and RNN) can navigate to a given point in a novel environment with near-perfect accuracy; Partsey et al. (2022) further generalized this result to more realistic sensors and actuators. Reed et al. (2022) showed a similar general-purpose architecture (a transformer) can perform a wide variety of embodied tasks, including navigation. The mechanisms explaining this ability remain unknown. Understanding them is of both scientific and practical importance due to safety considerations involved with deploying such systems.

In this work, we investigate the following question – is mapping an emergent phenomenon? Specifically, do artificial intelligence (AI) agents learn to build internal spatial representations (or 'mental' maps) of their environment as a natural consequence of learning to navigate? The specific task we study is PointGoal navigation (Anderson et al., 2018), where an AI agent is introduced into a new (unexplored) environment and tasked with navigating to a relative location – 'go 5m north, 2m west relative to start'2. This is analogous to the direction and distance of foraging locations communicated by the waggle dance of honey bees (Von Frisch, 1967).
Unlike animal navigation studies, experiments with AI agents allow us to precisely isolate mapping from alternative mechanisms proposed for animal navigation – the use of visual landmarks (Von Frisch, 1967), orientation by the arrangement of stars (Lockley, 1967), gradients of olfaction or other senses (Ioalè et al., 1990). We achieve this isolation by judiciously designing the agent's perceptual system and the learning paradigm such that these alternative mechanisms are rendered implausible. Our agents are effectively 'blind'; they possess a minimal perceptual system capable of sensing only egomotion, i.e. the change in the agent's location and orientation as it moves – no vision, no audio, no olfactory, no haptic, no magnetic, nor any other sensing of any kind. This perceptual system is deliberately impoverished to isolate the contribution of memory, and is inspired by blind mole rats, who perform localization via path integration and use the Earth's magnetic field as a compass (Kimchi et al., 2004). Further still, our agents are composed of navigation-agnostic, generic, and ubiquitous architectural components (fully-connected layers and LSTM-based recurrent neural networks), and our experimental setup provides no inductive bias towards mapping – no map-like or spatial structural components in the agent, no mapping supervision, no auxiliary tasks, nothing other than a reward for making progress towards a goal.

Surprisingly, even under these deliberately harsh conditions, we find the emergence of map-like spatial representations in the agent's non-spatial unstructured memory, enabling it to not only successfully navigate to the goal but also exhibit intelligent behavior (like taking shortcuts, following walls, detecting collisions) similar to the aforementioned animal studies, and to predict free space in the environment.
Essentially, we demonstrate an 'existence proof' or an ontogenetic developmental account for the emergence of mapping without any previous predisposition. Our results also explain the aforementioned surprising finding in recent literature – that ostensibly map-free neural networks achieve strong autonomous navigation performance – by demonstrating that these 'map-free' systems in fact learn to construct and maintain map-like representations of their environment.

Concretely, we ask and answer the following questions:

1) Is it possible to effectively navigate with just egomotion sensing? Yes. We find that our 'blind' agents are highly effective in navigating new environments – reaching the goal with a 95.1%±1.3% success rate. And they traverse moderately efficient (though far from optimal) paths, reaching 62.9%±1.6% of optimal path efficiency. We stress that these are novel testing environments; the agent has not memorized paths within a training environment but has learned efficient navigation strategies that generalize to novel environments, such as emergent wall-following behavior.

2) What mechanism explains this strong performance by 'blind' agents? Memory. We find that memoryless agents completely fail at this task, achieving nearly 0% success. More importantly, we find that agents with memory utilize information stored over a long temporal and spatial horizon and that collision-detection neurons emerge within this memory.
Navigation performance as a function of the number of past actions/observations encoded in the agent's memory does not saturate until one thousand steps (corresponding to the agent traversing 89.1±0.66 meters), suggesting that the agent 'remembers' a long history of the episode.

2The description in English is purely for explanatory purposes; the agent receives relative goal coordinates.

3) What information does the memory encode about the environment? Implicit maps. We perform an AI rendition of Menzel (1973)'s experiments, where a chimpanzee is carried by a human and shown the location of food hidden in the environment. When the animal is set free to collect the food, it does not retrace the demonstrator's steps but takes shortcuts to collect the food faster. Analogously, we train a blind agent to navigate from a source location (S) to a target location (T). After it has finished navigating, we transplant its constructed episodic memory into a second 'probe'-agent (which is also blind). We find that this implanted-memory probe-agent performs dramatically better in navigating from S to T (and T to S) than it would without the memory transplant. Similar to the chimpanzee, the probe agent takes shortcuts, typically cutting out backtracks or excursions that the memory-creator had undertaken as it tried to work its way around obstacles. These experiments provide compelling evidence that blind agents learn to build and use implicit map-like representations of their environment solely through learning to navigate.
Intriguingly, further still, we find that surprisingly detailed metric occupancy maps of the environment (indicating free space) can be explicitly decoded from the agent's memory.

4) Are maps task-dependent? Yes. We find that the emergent maps are a function of the navigation goal. Agents 'forget' excursions and detours, i.e. their episodic memory only preserves the features of the environment relevant to navigating to their goal. This, in part, explains why transplanting episodic memory from one agent to another leads it to take shortcuts – because the excursions and detours are simply forgotten.

Overall, our experiments and analyses demonstrate that 'blind' agents solve PointGoalNav by combining information over long time horizons to build detailed maps of their environment, solely through the learning signals imposed by goal-driven navigation. In biological systems, convergent evolution of analogous structures that cannot be attributed to a common ancestor (e.g. eyes in vertebrates and jellyfish (Kozmik et al., 2008)) is often an indicator that the structure is a natural response to the ecological niche and selection pressures. Analogously, our results suggest that mapping may be a natural solution to the problem of navigation by intelligent embodied agents, whether they be biological or artificial. We now describe our findings for each question in detail.
2 BLIND AGENTS ARE EFFECTIVE NAVIGATORS

We train navigation agents for PointGoalNav in virtualized 3D replicas of real houses using the AI Habitat simulator (Savva et al., 2019; Szot et al., 2021) and the Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets. The agent is physically embodied as a cylinder with a diameter of 0.2m and a height of 1.5m. In each episode, the agent is randomly initialized in the environment, which establishes an episodic agent-centric coordinate system. The goal location is specified in Cartesian coordinates (xg, yg, zg) in this system. The agent has four actions – move forward (0.25 meters), turn left (10°), turn right (10°), and stop (to signal reaching the goal) – and is allowed a maximum of 2,000 steps to reach the specified goal. It is equipped with an egomotion sensor providing it the relative position (∆x, ∆y, ∆z) and relative 'heading' (or yaw angle) ∆θ between successive steps, which is integrated to keep track of the agent's location and heading relative to the start, [xt, yt, zt, θt]. This is sometimes referred to as a 'GPS+Compass' sensor in this literature (Savva et al., 2019; Wijmans et al., 2020).
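To make the sensor concrete, here is a minimal sketch (our own, with our own axis conventions rather than the simulator's) of how the per-step readings (∆x, ∆y, ∆z, ∆θ) could be integrated into the start-relative pose [xt, yt, zt, θt]:

```python
import numpy as np

def integrate_egomotion(deltas):
    """Accumulate per-step egomotion readings (dx, dy, dz, dtheta), expressed in the
    agent's frame at each step, into a pose [x, y, z, theta] relative to the episode
    start. A simplified planar sketch: yaw rotates the ground-plane displacement."""
    x = y = z = theta = 0.0
    for dx, dy, dz, dtheta in deltas:
        c, s = np.cos(theta), np.sin(theta)
        x += c * dx - s * dz      # rotate the local step into the start-relative frame
        z += s * dx + c * dz
        y += dy                   # height changes need no rotation
        theta = (theta + dtheta) % (2 * np.pi)
    return np.array([x, y, z, theta])

# Example: turn left 90 degrees, then take two 0.25m forward steps.
print(integrate_egomotion([(0.0, 0.0, 0.0, np.pi / 2),
                           (0.25, 0.0, 0.0, 0.0),
                           (0.25, 0.0, 0.0, 0.0)]))
```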
We use two task-performance metrics: i) Success, defined as whether or not the agent predicted the stop action within 0.2 meters of the target, and ii) Success weighted by inverse Path Length (SPL) (Anderson et al., 2018), defined as success weighted by the efficiency of the agent's path compared to the oracle path (the shortest path). Given the high success rates we observe, SPL can be roughly interpreted as the efficiency of the path taken compared to the oracle path – e.g. an SPL of 95% means the agent took a path 95% as efficient as the oracle path, while an SPL of 50% means the agent took a path 50% as efficient. Note that performance is evaluated in previously unseen environments to evaluate whether agents can generalize, not just memorize.
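As a concrete reference, the two metrics can be computed per episode roughly as follows (a sketch with our own variable names; the 0.2m threshold matches the definition above and the SPL formula follows Anderson et al. (2018)):

```python
def success(called_stop, dist_to_goal, threshold=0.2):
    """Success: the agent called stop within `threshold` meters of the goal."""
    return float(called_stop and dist_to_goal < threshold)

def spl(succeeded, shortest_path_len, agent_path_len):
    """Success weighted by inverse Path Length: S * l / max(p, l), where l is the
    geodesic shortest-path length from start to goal and p is the length of the path
    the agent actually traversed."""
    return succeeded * shortest_path_len / max(agent_path_len, shortest_path_len)

# An agent that reaches the goal along a path 1.6x longer than optimal:
s = success(called_stop=True, dist_to_goal=0.1)
print(s, spl(s, shortest_path_len=10.0, agent_path_len=16.0))  # 1.0 0.625
```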
The agent's policy is instantiated as a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) recurrent neural network – formally, given current observations ot = [xg, yg, zg, xt, yt, zt, θt], (ht, ct) = LSTM(ot, (ht−1, ct−1)). We refer to (ht, ct) as the agent's internal memory representation. Note that it only contains information gathered during the current navigation episode. We train our agents for this task using a reinforcement learning (Sutton & Barto, 1992) algorithm called DD-PPO (Wijmans et al., 2020). The reward has a term for making progress towards the goal and for successfully reaching it. Neither the training procedure nor the agent architecture contains explicit inductive biases towards mapping or planning relative to a map. Apx. A.1 describes training details.
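A minimal PyTorch-style sketch of such a policy (our own simplification; layer sizes and heads are illustrative, not the authors' exact architecture or training setup):

```python
import torch
import torch.nn as nn

class BlindPointGoalPolicy(nn.Module):
    """Goal + integrated egomotion in, action distribution out; all memory is in the LSTM state."""
    def __init__(self, obs_dim=7, hidden_dim=512, num_actions=4):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)    # fully-connected observation embedding
        self.lstm = nn.LSTMCell(hidden_dim, hidden_dim)  # (h_t, c_t): internal memory representation
        self.actor = nn.Linear(hidden_dim, num_actions)  # forward / turn-left / turn-right / stop
        self.critic = nn.Linear(hidden_dim, 1)           # value head used by the RL algorithm

    def forward(self, obs, memory):
        h, c = self.lstm(torch.relu(self.encoder(obs)), memory)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h), (h, c)

# One step in a new episode: memory starts at zero and is carried forward thereafter.
policy = BlindPointGoalPolicy()
obs = torch.zeros(1, 7)                      # [xg, yg, zg, xt, yt, zt, θt]
memory = (torch.zeros(1, 512), torch.zeros(1, 512))
dist, value, memory = policy(obs, memory)
action = dist.sample()
```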
Figure 1: (A) PointGoal navigation. An agent is initialized in a novel environment (blue square) and tasked with navigating to a point specified relative to the start location (red square). We study 'blind' agents, equipped with just an egomotion sensor (called GPS+Compass in this literature). (B) 'Blind' agent vs. Bug. Our learned 'blind' agent compared to two variants and an oracle-equipped variant of the Bug algorithm (Lumelsky & Stepanov, 1987). The Bug algorithm initially orients itself towards the goal and then proceeds towards the goal. Upon hitting a wall, it follows along the wall until it reaches the other side. The oracle version is told whether wall-following left or right is optimal, providing an upper bound on Bug algorithm performance. (C) t-SNE of the agent's internal representation for collisions. We find 4 overall clusters corresponding to the previous action taken and whether or not that action led to a collision.

    Agent                                         Success     SPL
  1 Blind                                         95.1±1.3    62.9±1.6
  2 Clairvoyant Bug                               100±0.0     46.0
  3 Sighted (Depth) (Ramakrishnan et al., 2021)   94.0        83.0

Table 1: PointGoalNav performance of agents. We find that blind agents are surprisingly effective (success) though not efficient (SPL) navigators. They have similar success as an agent equipped with a Depth camera and higher SPL than a clairvoyant version of the 'Bug' algorithm.

Surprisingly, we find that agents trained under this impoverished sensing regime are able to navigate with near-perfect efficacy – reaching the goal with a 95.1%±1.3% success rate (Table 1), even in situations where the agent must take hundreds of actions and traverse over 25m. This performance is similar in success rate (95.1 vs 94.0)3 to a sighted agent (equipped with a depth camera) trained on a larger dataset (HM3D) (Ramakrishnan et al., 2021). The paths taken by the blind agent are moderately efficient but (as one might expect) far less so than a sighted agent (62.9 vs 83.0 SPL).
At this point, it might be tempting to believe that this is an easy navigation problem, but we urge the reader to fight hindsight bias. We contend that the SPL of this blind agent is surprisingly high given the impoverished sensor suite. To put this SPL in context, we compare it with 'Bug algorithms' (Lumelsky & Stepanov, 1987), which are motion planning algorithms inspired by insect navigation, involving an agent equipped with only a localization sensor. In these algorithms, the agent first orients itself towards the goal and then travels directly towards it until it encounters a wall, in which case it follows along the wall in one of the two directions of travel. The primary challenge for Bug algorithms is determining whether to go left or right upon reaching a wall. To provide an upper bound on performance, we implement a 'clairvoyant' Bug algorithm agent with an oracle that tells it whether left or right is optimal. Even with this additional privileged information, the 'clairvoyant' Bug agent achieves an SPL of 46%, which is considerably less efficient than the 'blind' agent. Fig. 1b shows an example of the path our blind agent takes compared to 3 variants of the Bug algorithm. This shows that blind navigation agents trained with reinforcement learning are highly efficient at navigating in previously unseen environments given their sensor suite.
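For intuition, here is a toy grid-world rendition of this idea (our own simplification, not the Lumelsky & Stepanov formulation nor the paper's implementation); the turn argument plays the role of the 'always left' / 'always right' variants, and a clairvoyant variant would, roughly, run both and keep the better path:

```python
import numpy as np

GRID = np.array([            # 0 = free space, 1 = wall; one box-shaped obstacle
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
])
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # N, E, S, W (clockwise)

def free(p):
    r, c = p
    return 0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1] and GRID[r, c] == 0

def toward(pos, goal):
    """Index into DIRS of the axis-aligned step that most reduces distance to the goal."""
    dr, dc = goal[0] - pos[0], goal[1] - pos[1]
    if abs(dr) >= abs(dc):
        return 0 if dr < 0 else 2
    return 1 if dc > 0 else 3

def bug_path(start, goal, turn=+1, max_steps=100):
    """Head straight for the goal; when the desired step is blocked, rotate the attempted
    heading in a fixed direction (turn=+1 tries clockwise / 'always right', -1 'always left')
    until a free cell is found, which makes the agent skirt along obstacle boundaries.
    This toy version can loop on hard (concave) obstacles; it is only for intuition."""
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return path
        want = toward(pos, goal)                 # always re-aim at the goal first
        for k in range(4):                       # fall back by turning when blocked
            d = DIRS[(want + turn * k) % 4]
            nxt = (pos[0] + d[0], pos[1] + d[1])
            if free(nxt):
                pos = nxt
                path.append(pos)
                break
    return path

print(bug_path(start=(1, 0), goal=(2, 5)))
```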
2.1 EMERGENCE OF WALL-FOLLOWING BEHAVIOR AND COLLISION-DETECTION NEURONS

Fig. 1b shows the blind agent exhibiting wall-following behavior (also see blue paths in Fig. A6 and videos in the supplement). This behavior is remarkably consistent; the agent spends the majority of an episode near a wall. This is surprising because it is trained to navigate to the target location as quickly as possible; thus, it would be rewarded for traveling in straighter paths (that avoid walls). We hypothesize that this strategy emerges due to two factors. 1) The agent is blind; it has no way to determine where the obstacles are in the environment besides 'bumping' into them. 2) The environment is unknown to the agent. While this is clearly true for testing environments, it is also functionally true for training environments because the coordinate system is episodic – every episode uses a randomly-instantiated coordinate system based on how the agent was spawned – and since the agent is blind, it cannot perform visual localization. We test both hypotheses. To test (2), we provide an experiment in Apx. C.1 showing that when the agent is trained in a single environment with a consistent global coordinate system, it learns to memorize the shortest paths in this environment and wall-following does not emerge.

3It may seem like the blind agent outperforms the sighted agent, but the mean performance of Ramakrishnan et al. (2021) is within our error bars.
Consequently, this agent is unable to navigate in new environments, achieving 100% success on train and 0% success on test. To test (1), we analyze whether the agent is capable of detecting collisions. Note that the agent is not equipped with a collision sensor. In principle, the agent can infer whether it collided – if it tries to move forward and the resulting egomotion is atypical, then it is likely that a collision happened. This leads us to ask – does the agent's memory contain information about collisions? We train a linear classifier that uses the (frozen) internal representation (ht+1, ct+1) to predict whether action at resulted in a collision (details in Apx. A.5). The classifier achieves 98% accuracy on held-out data. For comparison, random guessing on this 2-class problem would achieve 50%. This shows that the agent's memory not only predicts its collisions, but also that collision-vs-not is linearly separable in internal-representation space, which strongly suggests that the agent has learned a collision sensor.

Next, we examine how collisions are structured in the agent's internal representation by identifying the subspace that is used for collisions. Specifically, we re-train the linear classifier with an ℓ1 weight penalty to encourage sparsity. We then select the top 10 neurons (from 3072) with the largest weight magnitude; this reduces dimensionality by 99.7% while still achieving 96% collision-vs-not accuracy.
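A sketch of this probing procedure (using scikit-learn as a stand-in for whatever tooling was actually used; the feature matrix and labels below are random placeholders for the pre-collected hidden states and collision labels, and the last line prepares the low-dimensional view discussed next):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE

# Stand-in data: in the real analysis the features are the frozen (h, c) LSTM states and
# the labels record whether the preceding action collided.
rng = np.random.default_rng(0)
features = rng.standard_normal((2000, 3072))      # one row per step: concatenated (h, c)
collided = rng.integers(0, 2, size=2000)          # 1 if action a_t hit an obstacle

# Plain linear probe: collision vs. not from the internal representation.
probe = LogisticRegression(max_iter=2000).fit(features, collided)

# Sparse re-fit with an l1 penalty, then keep the 10 neurons with the largest |weight|.
sparse = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(features, collided)
top10 = np.argsort(np.abs(sparse.coef_[0]))[-10:]
small_probe = LogisticRegression(max_iter=2000).fit(features[:, top10], collided)

# 2-D embedding of the selected 10-neuron subspace for visualization.
embedding = TSNE(n_components=2, perplexity=30).fit_transform(features[:, top10])
```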
We use t-SNE (Van der Maaten & Hinton, 2008) and the techniques in Kobak & Berens (2019) to create a 2-dimensional visualization of the resulting 10-dimensional space. We find 4 distinct semantically-meaningful clusters (Fig. 1c). One cluster always fires for collisions, one for forward actions that did not result in a collision, and the other two correspond to turning actions. Notice that this exceedingly small number of dimensions and neurons essentially predicts all collisions and movement of the agent. We include videos in the supplementary materials.

3 MEMORY IS USED OVER LONG HORIZONS

Figure 2: Navigation performance (Success and SPL; higher is better) vs. memory length (log scale, 10^0 to 10^3 steps). Agent performance does not saturate until memory can contain information from hundreds of steps. A memory of 10^3 steps is half the maximum episode length.

Next, we examine how memory is utilized by asking if the agent uses memory solely to remember short-term information (e.g. did it collide in the last step?) or whether it also includes long-range information (e.g. did it collide hundreds of steps ago?).
To answer this question, we restrict the memory capacity of our agent. Specifically, let k denote the memory budget. At each time t, we take the previous k observations, [o_{t-k+1}, ..., o_t], and construct the internal representation (h_t, c_t) via the recurrence (h_i, c_i) = LSTM(o_i, (h_{i-1}, c_{i-1})) for t − k < i ≤ t, where (h_{t−k}, c_{t−k}) = (0, 0). If the agent were only leveraging its memory for short-term storage, we would expect performance to saturate at a small value of k. Instead, Fig. 2 shows that the agent leverages its memory for significantly longer-term storage. When memoryless (k = 1), the agent completely fails at the task, achieving nearly 0% success. Navigation performance as a function of the memory budget k does not saturate until one thousand steps. Recall that the agent can move forward 0.25 meters or turn 10° at each step. The average distance traveled in 1,000 steps is 89.1±0.66 meters, indicating that the agent remembers information over long temporal and spatial horizons.
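A minimal sketch of how this memory budget can be imposed at evaluation time is given below; the LSTM width, observation-encoding size, and tensor shapes are assumptions, not the authors' code.

# Sketch: rebuild the recurrent state from only the last k encoded observations,
# starting from a zero state, i.e. (h_{t-k}, c_{t-k}) = (0, 0).
import torch
import torch.nn as nn

HIDDEN = 512                                         # assumed LSTM width
lstm = nn.LSTM(input_size=64, hidden_size=HIDDEN)    # stand-in for the agent's recurrent core

def state_with_budget(encoded_obs, k):
    """encoded_obs: tensor of shape (t, 1, 64) holding encoded o_1, ..., o_t.
    Returns (h_t, c_t) computed from only the last k observations."""
    window = encoded_obs[-k:]                        # o_{t-k+1}, ..., o_t
    zeros = torch.zeros(1, 1, HIDDEN)
    _, (h_t, c_t) = lstm(window, (zeros, zeros.clone()))
    return h_t, c_t

# k = 1 makes the agent memoryless; a sufficiently large k recovers the unbounded-memory agent.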
In Apx. C.6 we train agents to operate at a specific memory budget. We find that a budget of k = 256, the largest we are able to train, is not sufficient to match the performance of unbounded memory.
Figure 3: (A) Probe experiment. First, an agent navigates (blue path, blue LSTM) from start (green sphere) to target (red sphere). After the agent navigates, we task a probe (purple LSTM) with performing the same navigation episode with the additional information encapsulated in the agent's internal representation (or memory), h^A_T. The probe is able to navigate more efficiently by taking shortcuts (purple path). As denoted by the dashed line between the probe and agent networks, the probe does not influence what the agent stores in its internal representation. The environment in the image is from the Replica Dataset (Straub et al., 2019). (B) Agent memory transplant increases probe efficiency (SPL). Results of our trained probe agent under three configurations – initialized with an empty representation (AllZeroMemory), a representation of a random agent walked along the trained agent's path (UntrainedAgentMemory), and the final representation of the trained agent (TrainedAgentMemory). 95% confidence interval reported over 5 agent-probe pairs.

                              SecondNav(S→T)          SecondNav(T→S)
    Probe Type                Success     SPL         Success     SPL
  1 AllZeroMemory             91.6±0.40   71.1±0.27   91.0±0.40   70.8±0.25
  2 UntrainedAgentMemory      92.4±0.28   72.0±0.19   91.2±0.54   72.2±0.35
  3 TrainedAgentMemory        96.2±0.23   85.0±0.16   96.0±0.16   84.8±0.22

4 MEMORY ENABLES SHORTCUTS

To investigate what information is encoded in the memory of our blind agents, we develop an experimental paradigm based on 'probe' agents. A probe is a secondary navigation agent⁴ that is structurally identical to the original (sensing, architecture, etc.),
but parametrically augmented with the primary agent's constructed episodic memory representation (h_T, c_T). The probe has no influence on the agent, i.e. no gradients (or rewards) flow from probe to agent (see training details in Apx. A.2). We use this paradigm to examine whether the agent's final internal representation contains sufficient information for taking shortcuts in the environment. As illustrated in Fig. 3A, the agent first navigates from source (S) to target (T). After the agent reaches T, a probe is initialized⁵ at S, its memory initialized with the agent's final memory representation, i.e. (h_0, c_0)_probe = (h_T, c_T)_agent, and tasked with navigating to T. We refer to this probe task as SecondNav(S→T). All evaluations are conducted in environments not used for training the agent or the probe. Thus, any environmental information in the agent's memory must have been gathered during its trajectory (and not during any past exposure during learning). Similarly, all initial knowledge the probe has of the environment must come from the agent's memory (h_T, c_T)_agent. Our hypothesis is that the agent's memory contains a spatial representation of the environment, which the probe can leverage.
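A minimal sketch of the memory transplant is given below; the agent/probe/environment interfaces (initial_state, act, reset, step) are hypothetical stand-ins used only to illustrate the control flow.

# Sketch of the probe experiment: the probe starts at S with the agent's final memory.
import torch

def run_probe_with_transplant(agent, probe, env, episode):
    # 1) Agent navigates S -> T; keep its final recurrent state.
    h, c = agent.initial_state()                 # zeros
    obs, done = env.reset(episode), False        # hypothetical reset at S
    while not done:
        action, (h, c) = agent.act(obs, (h, c))
        obs, done = env.step(action)
    agent_final = (h.detach(), c.detach())       # stop-gradient: the probe cannot influence the agent

    # 2) Probe restarts at S with (h_0, c_0)_probe = (h_T, c_T)_agent and navigates to T.
    obs, done = env.reset(episode), False
    h_p, c_p = agent_final
    while not done:
        action, (h_p, c_p) = probe.act(obs, (h_p, c_p))
        obs, done = env.step(action)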
If the hypothesis is true, we would expect the probe to navigate SecondNav(S→T) more efficiently than the agent (e.g. by taking shortcuts and cutting out exploratory excursions taken by the agent). If not, we would expect the probe to perform on par with the agent, since the probe is trained on essentially the same task as the agent⁶. In our experiments, we find that the probe is significantly more efficient than the agent – SPL of 62.9%±1.6% (agent) vs. 85.0%±1.6% (probe). It is worth stressing how remarkable the probe's performance is – in a new environment, a blind probe navigating without a map traverses a path that is within 15% of the shortest path on the map. The best known sighted agents (equipped with an RGB camera, Depth sensor, and egomotion sensor) achieve an SPL of 84% on this task (Ramakrishnan et al., 2021). Essentially, the memories of a blind agent are as valuable as having vision! Fig. 3A shows the difference in paths between the agent and probe (videos showing more examples are available in the supplement). While the agent exhibits wall-following behavior, the probe instead takes more direct paths and rarely performs wall following.
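As a reminder of the efficiency metric used above (and in Fig. 3B), SPL is Success weighted by inverse normalized Path Length, following Anderson et al. (2018), cited in the references; restating the standard definition:

SPL = (1/N) * Σ_{i=1}^{N} S_i * ℓ_i / max(p_i, ℓ_i),

where S_i indicates success on episode i, ℓ_i is the geodesic shortest-path distance from start to goal, and p_i is the length of the path the agent actually traversed.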
⁴To avoid confusion, we refer to this probe agent as 'probe' and the primary agent as 'agent' from this point.
⁵The probe's heading at S is set to the agent's final heading upon reaching T.
⁶We note that an argument can be made that if the agent's memory is useless to the probe, then the probe is being trained on a harder task, since it must learn to navigate while ignoring the agent's memory. But this argument would predict the probe's performance to be lower, not higher, than the agent's.

Figure 4: Learning navigation improves map prediction from memory. (Left) Accuracy (Intersection over Union) distributions (via kernel density estimation) and means (dashed lines); TrainedAgentMemory has a higher mean than UntrainedAgentMemory with p-value ≤ 10^-5 (via a Wilcoxon signed-rank test (Wilcoxon, 1992)). (Right) Example ground truth and predicted occupancy maps using TrainedAgentMemory (corresponding to the (A) and (B) IoU points). Light grey is non-navigable and dark grey is navigable. The agent path is drawn in light blue and navigates from start (green) to target (red). We can see that when the agent travels close to one wall, the map decoder predicts another wall parallel to it, indicating a corridor.
Recall that the only difference between the agent and the probe is the contents of the initial hidden state – the reward is identical (and available only during training), the training environments are identical (although the episodes differ), and the evaluation episodes are identical – meaning that the environmental representation in the agent's episodic memory is what enables the probe to navigate more efficiently. We further compare this result (which we denote as TrainedAgentMemory) with two control groups: 1) AllZeroMemory: an empty (all zeros) episodic memory, to test for any systematic biases in the probe tasks. This probe contains identical information at the start of an episode as the agent (i.e. no information). 2) UntrainedAgentMemory: episodic memory generated by an untrained agent (i.e. one with a random setting of neural network parameters) as it is walked along the trajectory of the trained agent. This disentangles the agent's structure from its parameters, and tests whether simply being encoded by an LSTM (even one with random parameters) provides an inductive bias towards building good environmental representations (Wieting & Kiela, 2019). We find no evidence for this inductive bias – UntrainedAgentMemory performs no better than AllZeroMemory (Fig. 3B, row 1 vs. 2). Furthermore, TrainedAgentMemory significantly outperforms both controls by +13 points SPL and +4 points Success (Fig. 3B, row 3 vs. 1 and 2).
Taken together, these two results indicate that the ability to construct useful spatial representations of the environment from a trajectory is decidedly a learned behavior. Next, we examine whether there is any directional preference in the episodic memory constructed by the agent. Our claim is that even though the agent navigates from S to T, if its memory indeed contains map-like spatial representations, it should also support probes for the reverse task SecondNav(T→S). Indeed, we find that the TrainedAgentMemory probe performs the same (within margin of error) on both SecondNav(S→T) and SecondNav(T→S) (Fig. 3B, right column) – indicating that the memory is equally useful in both directions. In Apx. C.2 we demonstrate that the probe removes excursions from the agent's path and takes shortcuts through previously unseen parts of the environment. Overall, these results provide compelling evidence that blind agents learn to build and use implicit map-like representations that enable shortcuts and reasoning about previously untraversed locations in the environment, solely through learning to navigate between two points.

5 LEARNING NAVIGATION IMPROVES METRIC MAP DECODING

Next, we tackle the question 'Does the agent build episodic representations capable of decoding metric maps (occupancy grids) of the environment?'. Formally, given the final representation (h_T, c_T)_agent, we train a separate decoding network to predict an allocentric top-down occupancy grid (free-space vs. not) of the environment. As with the probes, no gradients are propagated from the decoder to the agent's internal representation.
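A minimal sketch of such a map decoder and its IoU evaluation is given below; the decoder architecture, grid resolution, and variable names are assumptions for illustration rather than the authors' implementation.

# Sketch: decode an allocentric top-down occupancy grid from the agent's final memory.
import torch
import torch.nn as nn

GRID = 96           # assumed grid resolution (cells per side)
STATE_DIM = 3072    # concatenated (h_T, c_T)

decoder = nn.Sequential(
    nn.Linear(STATE_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, GRID * GRID),               # one logit per cell: navigable vs. not
)

def decoder_loss(state, target_map, mask):
    """state: (B, STATE_DIM) detached memories (no gradients reach the agent).
    target_map: (B, GRID, GRID) ground-truth occupancy; mask: 1 for cells the agent
    came within 2.5 m of, 0 elsewhere (only those cells are supervised)."""
    logits = decoder(state.detach()).view(-1, GRID, GRID)
    per_cell = nn.functional.binary_cross_entropy_with_logits(
        logits, target_map, reduction="none")
    return (per_cell * mask).sum() / mask.sum()

def iou(logits, target_map, mask):
    # Intersection-over-union of predicted vs. ground-truth navigable cells.
    pred = (logits.sigmoid() > 0.5) & mask.bool()
    gt = target_map.bool() & mask.bool()
    return (pred & gt).sum().item() / max((pred | gt).sum().item(), 1)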
We constrain the network to make predictions for a location only if the agent came within 2.5 meters of it (refer to Apx. A.3 for details). Note that since the agents are 'blind', predictions about any unvisited location require reasoning about unseen space.

Figure 5: (A) Excursion prediction example. Qualitative example of the previously-visited-location decoder making systematic errors when decoding an excursion. Blue represents the confidence of the decoder that the agent was previously at a given location; we can see that it is lower in the path interval marked in red (the excursion) than in the rest. (B) Remembrance of excursions. Performance of decoders when predicting previous agent locations, broken down into three categories. 'Non-excursion' is all predictions where neither the current location of the agent nor the prediction time step is part of an excursion. 'Excursion' is when the prediction time step is part of an excursion. 'Exit' is when the prediction time step is part of the last 10% of the excursion. The x-axis is the distance into the past and the y-axis is the relative error between the true and predicted locations.
As before, we compare the internal representation produced by TrainedAgentMemory to the internal representation produced by an agent with random parameters, UntrainedAgentMemory. Fig. 4 shows the distribution of map-prediction accuracy, measured as intersection-over-union (IoU) with the true occupancy grid. We find that TrainedAgentMemory enables uniformly more accurate predictions than UntrainedAgentMemory – 32.5% vs. 12.5% average IoU. The qualitative examples show that the predictor is commonly able to make accurate predictions about unvisited locations, e.g. when the agent travels close to one wall, the decoder predicts another wall parallel to it, indicating a corridor. These results show that the internal representation contains the information necessary to decode accurate occupancy maps, even for unseen locations. We note that environment structural priors are also necessary to predict unseen locations; thus agent memory is necessary but not sufficient. In Apx. C.4, we conduct this analysis on 'sighted' navigation agents (equipped with a Depth camera and egomotion sensor). Perhaps counter-intuitively, we do not find conclusive evidence that metric maps can be decoded from the memory of sighted agents (despite their sensing suite being a strict superset of the blind agents').
Our conjecture is that for higher-level strategies like map-building to emerge, the learning problem must not admit 'trivial' solutions such as the ones deep reinforcement learning is known to latch onto (Baker et al., 2020; Lehman et al., 2020; Kadian et al., 2020). We believe that the minimal perception system used in our work served to create a challenging learning problem, which in turn limited the possible 'trivial' solutions, thus inducing map-building.

6 MAPPING IS TASK-DEPENDENT: AGENT FORGETS EXCURSIONS

Given that the agent is memory-limited, it stands to reason that it might need to choose what information to preserve and what to 'forget'. To examine this, we attempt to decode the agent's past positions from its memory. Formally, given the internal state at time t, (h_t, c_t), we train a prediction network f_k(·) to predict the agent's location k steps into the past, i.e. ŝ_{t−k} = f_k(h_t, c_t) + s_t, k ∈ [1, 256]. Given the ground-truth location s_{t−k}, we evaluate the decoder via the relative L2 error ||ŝ_{t−k} − s_{t−k}|| / ||s_{t−k} − s_t|| (refer to Apx. A.4 for details).
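A minimal sketch of this past-position decoder and its evaluation metric is given below; the decoder architecture and the 2-D position convention are assumptions for illustration.

# Sketch: predict the agent's position k steps in the past as an offset from s_t,
# and score it with the relative L2 error defined above.
import torch
import torch.nn as nn

STATE_DIM = 3072    # concatenated (h_t, c_t), assumed

class PastPositionDecoder(nn.Module):
    """Stand-in for f_k: maps the internal state to an offset from the current position."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, state, s_t):
        return self.net(state) + s_t            # hat{s}_{t-k} = f_k(h_t, c_t) + s_t

def relative_l2_error(pred, s_past, s_t):
    # ||hat{s}_{t-k} - s_{t-k}|| / ||s_{t-k} - s_t||
    num = torch.linalg.norm(pred - s_past, dim=-1)
    den = torch.linalg.norm(s_past - s_t, dim=-1).clamp_min(1e-6)
    return (num / den).mean()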
Qualitative analysis of past-prediction results shows that the agent forgets excursions⁷, i.e. excursions are harder to decode (see Fig. 5A). To quantify this, we manually labelled excursions in 216 randomly sampled episodes in evaluation environments. Fig. 5B shows that excursions are harder to decode than non-excursions, indicating that the agent does indeed forget excursions. Interestingly, we find that the exit of the excursion is considerably easier to decode, indicating that the end of the excursion performs a similar function to landmarks in animal and human navigation (Chan et al., 2012).
⁷We define an excursion as a sub-path that approximately forms a loop.

In the appendix, we study several additional questions that could not be accommodated in the main paper. In Apx. C.2 we further examine the probe's performance. In Apx. C.3 we examine predicting future agent locations. In Apx. C.5 we use the agent's hidden state as a world model.

7 RELATED WORK

Characterizing spatial representations.
Prior work has shown that LSTMs build grid-cell (O'keefe & Nadel, 1978) representations of an environment when trained directly for path integration within that environment (Banino et al., 2018; Cueva & Wei, 2018; Sorscher et al., 2020). In contrast, our work provides no direct supervision for path integration, localization, or mapping. Banino et al. (2018) demonstrated that these maps aid in navigation by training a navigation agent that utilizes this cognitive map. In contrast, we show that LSTMs trained for navigation learn to build spatial representations of novel environments. Whether or not LSTMs trained under this setting also utilize grid cells is a question for future work. Bruce et al. (2018) demonstrated that LSTMs learn localization when trained for navigation in a single environment. We show that they learn mapping when given location information and trained in many environments. Huynh et al. (2020) proposed a spatial memory architecture and demonstrated that a spatial representation emerges when trained on a localization task. We show that spatial representations emerge in non-spatial neural networks trained for navigation.
Dwivedi et al. (2022) examined what navigation agents learn about their environments. We provide a detailed account of emergent mapping in larger environments and over longer time horizons, and show the emergence of intelligent behavior and mapping in blind agents, which is not the focus of prior work.

'Map-free' navigation agents. Learned agents that navigate without an explicit mapping module (called 'map-free' or 'pixels-to-actions' agents) have shown strong performance on a variety of tasks (Savva et al., 2019; Wijmans et al., 2020; Kadian et al., 2020; Chattopadhyay et al., 2021; Khandelwal et al., 2022; Partsey et al., 2022; Reed et al., 2022). In this work, we do not provide any novel techniques nor make any experimental advancement in the efficacy of such (sighted) agents. However, we make two key findings.
First, blind agents are highly effective navigators for PointGoalNav, exhibiting efficacy similar to sighted agents. Second, we begin to explain how 'map-free' navigation agents perform their task: they build implicit maps in their memory, although the story is a bit nuanced given the results in Apx. C.4; we suspect this understanding can be extended in future work.

8 OUTLOOK: LIMITATIONS, REPRODUCIBILITY

In this work, we have shown that 'blind' AI navigation agents – agents with perception similar to that of blind mole rats – are capable of performing goal-driven navigation to a high degree of performance. We then showed that these AI navigation agents learn to build map-like representations of their environment (supporting the ability to take shortcuts, follow walls, and predict free-space and collisions) solely through learning goal-driven navigation. Our agents and training regime have no added inductive bias towards map-building, be it explicit or implicit, implying that cognitive maps may be a natural solution to the inductive biases imposed by navigation for intelligent embodied agents, whether they be biological or artificial. In a similar manner, convergent evolution (Kozmik et al., 2008), where two unrelated intelligent systems independently arrive at similar mechanisms, suggests that the mechanism is a natural response to having to adapt to the environment and the task. Our results also provide an explanation of the surprising success of map-free neural-network navigation agents by showing that these agents in fact learn to build map-like internal representations with no learning signal other than goal-driven navigation. This result establishes a link between how 'map-free' systems navigate and analytic mapping-and-planning techniques (Thrun et al., 2005; Institute, 1972; Ayache & Faugeras, 1988; Smith et al., 1990).
Our results and analyses also point towards future directions in AI navigation research. Specifically, imbuing AI navigation agents with explicit (e.g. architectural design) or implicit (e.g. training regime or auxiliary objectives) priors that bias agents towards learning an internal representation with the features found here may improve their performance. Further, it may better equip them to learn more challenging tasks such as rearrangement of an environment by moving objects (Batra et al., 2020). We see several limitations and areas for future work. First, we examined ground-based navigation agents operating in digitizations of real houses. This limits the agent to a 2D manifold and induces strong structural priors on environment layout. As such, it is unclear how our results generalize, for example, to a drone flying through a large forest. Second, we examined agents with a minimal perceptual system.
In the supplementary text, we attempted to decode occupancy grids (metric maps) from Depth-sensor-equipped agents and did not find convincing evidence that this is possible. Our conjecture is that for higher-level strategies like map-building to emerge, the learning problem must not admit 'trivial' solutions. We believe that the minimal perception system used in our work also served to create such a challenging learning problem. Third, our experiments do not study the effects of actuation noise, which is an important consideration in both robot navigation systems and path integration in biological systems. Fourth, we examine an implicit map-building mechanism (an LSTM); a similar set of experiments could be performed for agents with a differentiable read/write map but no direct mapping supervision. Fifth, our agents only explore their environment for a short period of time (an episode) before their memory is reset. Animals and robots at deployment experience their environment for significantly longer periods of time. Finally, we do not provide a complete mechanistic account of how the agent learns to build its map or what else it stores in its memory.

Acknowledgements: We thank Abhishek Kadian for his help in implementing the first version of the SecondNav(T→S) probe experiment. We thank Jitendra Malik for his feedback on the draft and guidance. EW is supported in part by an ARCS fellowship. The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The Oregon State effort is supported in part by the DARPA Machine Common Sense program.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.

Reproducibility Statement: Implementation details of our analyses are provided in the appendix. Our work builds on datasets and code that are already open-sourced, and our analysis code will be open-sourced.

REFERENCES

Peter Anderson, Angel X. Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and Amir Roshan Zamir. On evaluation of embodied navigation agents. CoRR, abs/1807.06757, 2018. URL http://arxiv.org/abs/1807.06757.

Reut Avni, Yael Tzvaigrach, and David Eilam. Exploration and navigation in the blind mole rat (spalax ehrenbergi): global calibration as a primer of spatial representation. Journal of Experimental Biology, 211(17):2817–2826, 2008.

Nicholas Ayache and Olivier D Faugeras. Building, registrating, and fusing noisy visual maps. The International Journal of Robotics Research, 7(6):45–65, 1988.

Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J. Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beattie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, and Dharshan Kumaran. Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705):429–433, 2018. doi: 10.1038/s41586-018-0102-6. URL https://doi.org/10.1038/s41586-018-0102-6.

Dhruv Batra, Angel X Chang, Sonia Chernova, Andrew J Davison, Jia Deng, Vladlen Koltun, Sergey Levine, Jitendra Malik, Igor Mordatch, Roozbeh Mottaghi, Manolis Savva, and Hao Su. Rearrangement: A challenge for embodied ai. arXiv preprint arXiv:2011.01975, 2020.

Jake Bruce, Niko Sünderhauf, Piotr Mirowski, Raia Hadsell, and Michael Milford. Learning deployable navigation policies at kilometer scale from a single traversal. Conference on Robot Learning (CoRL), 2018.

Edgar Chan, Oliver Baumann, Mark A Bellgrove, and Jason B Mattingley. From objects to landmarks: the function of visual location information in spatial navigation. Frontiers in psychology, 3:304, 2012.

Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. In International Conference on 3D Vision (3DV), 2017. License: http://kaldir.vc.in.tum.de/matterport/MP_TOS.pdf.

Nicole Chapuis and Patricia Scardigli. Shortcut ability in hamsters (mesocricetus auratus): The role of environmental and kinesthetic information. Animal Learning & Behavior, 21(3):255–265, 1993.

Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, and Ani Kembhavi. Robustnav: Towards benchmarking robustness in embodied navigation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Allen Cheung, Matthew Collett, Thomas S. Collett, Alex Dewar, Fred Dyer, Paul Graham, Michael Mangan, Ajay Narendra, Andrew Philippides, Wolfgang Stürzl, Barbara Webb, Antoine Wystrach, and Jochen Zeil. Still no convincing evidence for cognitive map use by honeybees. Proceedings of the National Academy of Sciences, 111(42):E4396–E4397, 2014. ISSN 0027-8424. doi: 10.1073/pnas.1413581111. URL https://www.pnas.org/content/111/42/E4396.

Holk Cruse and Rüdiger Wehner. No need for a cognitive map: Decentralized memory for insect navigation. PLOS Computational Biology, 7(3):1–10, 03 2011. doi: 10.1371/journal.pcbi.1002009. URL https://doi.org/10.1371/journal.pcbi.1002009.

Christopher J. Cueva and Xue-Xin Wei. Emergence of grid-like representations by training recurrent neural networks to perform spatial localization. In Proceedings of the International Conference on Learning Representations (ICLR), 2018. URL https://openreview.net/forum?id=B17JTOe0-.

Kshitij Dwivedi, Gemma Roig, Aniruddha Kembhavi, and Roozbeh Mottaghi. What do navigation agents learn about their environment? In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10276–10285, 2022.

Russell Epstein, E Z Patai, Joshua Julian, and Hugo Spiers. The cognitive map in humans: Spatial navigation and beyond. Nature Neuroscience, 20:1504–1513, 10 2017. doi: 10.1038/nn.4656.

Charles R. Gallistel. Learning, development, and conceptual change. The organization of learning. The MIT Press, 1990.

Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training imagenet in 1 hour. CoRR, abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677.

Lee Harten, Amitay Katz, Aya Goldshtein, Michal Handel, and Yossi Yovel. The ontogeny of a mammalian cognitive map in the real world. Science, 369(6500):194–197, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Peter J Huber. Robust estimation of a location parameter. In The Annals of Mathematical Statistics, pp. 73–101. JSTOR, 1964.

Tri Huynh, Michael Maire, and Matthew R. Walter. Multigrid neural memory. In Proceedings of the International Conference on Machine Learning (ICML), pp. 4561–4571. PMLR, 2020.

Stanford Research Institute. Shakey: An experiment in robot planning and learning, 1972.

P Ioalè, M Nozzolini, and F Papi. Homing pigeons do extract directional information from olfactory stimuli. Behavioral Ecology and Sociobiology, 26(5):301–305, 1990.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning (ICML), 2015.

Lucia F Jacobs. The evolution of the cognitive map. Brain, behavior and evolution, 62(2):128–139, 2003.

Abhishek Kadian, Joanne Truong, Aaron Gokaslan, Alexander Clegg, Erik Wijmans, Stefan Lee, Manolis Savva, Sonia Chernova, and Dhruv Batra. Are we making real progress in simulated environments? Measuring the sim2real gap in embodied visual navigation. In IEEE Robotics and Automation Letters (RA-L), 2020.

Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. Simple but effective: Clip embeddings for embodied ai. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14829–14838, 2022.

Tali Kimchi, Ariane S Etienne, and Joseph Terkel. A subterranean mammal uses the magnetic compass for path integration. Proceedings of the National Academy of Sciences, 101(4):1105–1109, 2004.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.

Dmitry Kobak and Philipp Berens. The art of using t-sne for single-cell transcriptomics. Nature communications, 10(1):1–14, 2019.

Zbynek Kozmik, Jana Ruzickova, Kristyna Jonasova, Yoshifumi Matsumoto, Pavel Vopalensky, Iryna Kozmikova, Hynek Strnad, Shoji Kawamura, Joram Piatigorsky, Vaclav Paces, et al. Assembly of the cnidarian camera-type eye from vertebrate-like components. Proceedings of the National Academy of Sciences, 105(26):8989–8993, 2008.

Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J Bentley, Samuel Bernard, Guillaume Beslon, David M Bryson, et al. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. Artificial Life, 26(2):274–306, 2020.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of IEEE International Conference on Computer Vision (ICCV), pp. 2980–2988, 2017.

Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the coordconv solution. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9605–9616, 2018.

Ronald Mathias Lockley. Animal navigation. Pan Books, 1967.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Vladimir J Lumelsky and Alexander A Stepanov. Path-planning strategies for a point mobile automaton moving amidst unknown obstacles of arbitrary shape. Algorithmica, 2(1-4):403–430, 1987.

Hans Maaswinkel and Ian Q Whishaw. Homing with locale, taxon, and dead reckoning strategies by foraging rats: sensory hierarchy in spatial navigation. Behavioural brain research, 99(2):143–152, 1999.

Emil W Menzel. Chimpanzee spatial memory organization. Science, 182(4115):943–945, 1973.

Martin Müller and Rüdiger Wehner. Path integration in desert ants, cataglyphis fortis. Proceedings of the National Academy of Sciences, 85(14):5287–5290, 1988.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the International Conference on Machine Learning (ICML), 2010.

John O'keefe and Lynn Nadel. The hippocampus as a cognitive map. Oxford: Clarendon Press, 1978.

Ruslan Partsey, Erik Wijmans, Naoki Yokoyama, Oles Dobosevych, Dhruv Batra, and Oleksandr Maksymets. Is mapping necessary for realistic pointgoal navigation? In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 17232–17241, 2022.

R. Peters. Cognitive maps in wolves and men. Environmental design research, 2:247–253, 1976.

Santhosh K Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alex Clegg, John Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X Chang, et al. Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai. Neural Information Processing Systems – Benchmarks and Datasets, 2021.

Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022.

Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A Platform for Embodied AI Research. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2019.

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017.

Randall Smith, Matthew Self, and Peter Cheeseman. Estimating uncertain spatial relationships in robotics. In Autonomous robot vehicles, pp. 167–193. Springer, 1990.

Ben Sorscher, Gabriel C. Mel, Samuel A. Ocko, Lisa Giocomo, and Surya Ganguli. A unified theory for the computational and mechanistic origins of grid cells. bioRxiv preprint bioRxiv:2020.12.29.424583, 2020. doi: 10.1101/2020.12.29.424583.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929–1958, 2014.

Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard A. Newcombe. The replica dataset: A digital replica of indoor spaces. CoRR, abs/1906.05797, 2019. URL http://arxiv.org/abs/1906.05797.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1992.

Andrew Szot, Alex Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimir Vondrus, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, and Dhruv Batra. Habitat 2.0: Training home assistants to rearrange their habitat. Advances in Neural Information Processing Systems (NeurIPS), 2021.

Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics (intelligent robotics and autonomous agents), 2005.

Sivan Toledo, David Shohami, Ingo Schiffner, Emmanuel Lourie, Yotam Orchan, Yoav Bartan, and Ran Nathan. Cognitive map–based navigation in wild bats revealed by a new high-throughput tracking system. Science, 369(6500):188–193, 2020.

Edward C. Tolman. Cognitive maps in rats and men. Psychological Review, 55(4):189–208, 1948. doi: 10.1037/h0061626.

Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 648–656, 2015.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(11), 2008.

Karl Von Frisch. The dance language and orientation of bees. Harvard University Press, 1967.

John Wieting and Douwe Kiela. No training required: Exploring random encoders for sentence classification. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra. DD-PPO: Learning near-perfect pointgoal navigators from 2.5 billion frames. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Frank Wilcoxon. Individual comparisons by ranking methods. In Breakthroughs in statistics, pp. 196–202. Springer, 1992.

Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. Gibson env: Real-world perception for embodied agents. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
License: https://storage.googleapis.com/gibson_material/Agreement%20GDS%2006-04-18.pdf.

A METHODS AND MATERIALS

A.1 POINTGOAL NAVIGATION TRAINING

Task. In PointGoal Navigation, the agent is tasked with navigating to a point specified relative to its initial location, i.e., an input of (δx, δy) corresponds to going δx meters forward and δy meters to the right. The agent succeeds if it predicts the stop action within 0.2 meters of the specified point. The agent has access to 4 low-level actions – move forward (0.25 meters), turn left (10°), turn right (10°), and stop. There is no noise in the agent's actuations.

Sensors. The agent has access to solely an idealized GPS+Compass sensor that provides its heading and position relative to the starting orientation and location at each time step. There is no noise in the agent's sensors.
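As an illustration of the task interface just described, the following is a minimal sketch of the action space and success criterion (names such as Action and is_successful are illustrative assumptions, not part of the Habitat API):

```python
import math
from enum import Enum

# Illustrative action space for the blind PointGoal agent (hypothetical names).
class Action(Enum):
    MOVE_FORWARD = 0  # 0.25 meters
    TURN_LEFT = 1     # 10 degrees
    TURN_RIGHT = 2    # 10 degrees
    STOP = 3

SUCCESS_RADIUS = 0.2  # meters

def is_successful(agent_xy, goal_xy, called_stop):
    """An episode succeeds only if the agent calls stop within 0.2 m of the goal."""
    dist = math.hypot(agent_xy[0] - goal_xy[0], agent_xy[1] - goal_xy[1])
    return called_stop and dist < SUCCESS_RADIUS
```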
Training Data. We construct our training data based on the Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets. We train on 411 scenes from Gibson and 72 from Matterport3D.

Training Procedure. We train our agents using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with Generalized Advantage Estimation (GAE) (Schulman et al., 2016). We use Decentralized Distributed PPO (DD-PPO) (Wijmans et al., 2020) to train on 16 GPUs. Each GPU/worker collects 256 steps of experience from 16 agents (each in different scenes) and then performs 2 epochs of PPO with 2 mini-batches per epoch. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 2.5 × 10−4. We set the discount factor γ to 0.99, the PPO clip to 0.2, and the GAE hyper-parameter τ to 0.95. We train until convergence (around 2 billion steps of experience).
At every timestep t, the agent is in state s_t, takes action a_t, and transitions to state s_{t+1}. It receives a shaped reward of the form:

    r_t = 2.5 · Success                          if a_t is Stop
        = −∆geo_dist(s_t, s_{t+1}) − λ           otherwise                    (1)

where ∆geo_dist(s_t, s_{t+1}) is the change in geodesic (shortest path) distance to goal between s_t and s_{t+1} and λ = 0.001 is a slack penalty encouraging shorter episodes.
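Under our reading of Eq. (1), the reward can be computed with a small helper like the following sketch (the geodesic distances would come from the simulator's shortest-path query; names are ours):

def shaped_reward(prev_geo_dist, geo_dist, action, success, slack=0.001):
    # Terminal bonus on Stop; otherwise the negative change in geodesic distance
    # to the goal, minus a slack penalty that encourages shorter episodes.
    if action == "stop":
        return 2.5 * float(success)
    return -(geo_dist - prev_geo_dist) - slack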
Evaluation Procedure. We evaluate the agent on the 18 scenes from the Matterport3D test set. We use the episodes from Savva et al. (2019), which consist of 56 episodes per scene (1008 in total). Episodes range in distance from 1.2 to 30 meters. The ratio of geodesic distance to Euclidean distance between start and goal is restricted to be greater than or equal to 1.1, ensuring that episodes are not simple straight lines. Note that reward is not available during evaluation.

The agent is evaluated under two metrics: Success, whether or not the agent called the stop action within 0.2 meters of the goal, and Success weighted by normalized inverse Path Length (SPL) (Anderson et al., 2018). SPL is calculated as follows: given the agent's path [s_1, . . . , s_T] and the initial geodesic distance to goal d_i for episode i, we first compute the length of the agent's path

    l_i = Σ_{t=2}^{T} ||s_t − s_{t−1}||_2                                     (2)

then SPL for episode i as

    SPL_i = Success_i · d_i / max{d_i, l_i}                                   (3)

We then report SPL as the average of SPL_i across all episodes.
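A sketch of this computation in NumPy (our reconstruction of Eqs. (2)-(3); path is the sequence of agent positions and geo_dist the initial geodesic distance d_i):

import numpy as np

def episode_spl(path, geo_dist, success):
    # Eq. (2): path length as the sum of consecutive step lengths.
    path = np.asarray(path, dtype=np.float64)
    path_len = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    # Eq. (3): success weighted by the shortest-path length over the longer of the two lengths.
    return float(success) * geo_dist / max(geo_dist, path_len)

# Reported SPL is the mean of episode_spl over all evaluation episodes.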
A.2 PROBE TRAINING

Task. The probe task is to either navigate from start to goal again (SecondNav(S→T)) or navigate from goal to start (SecondNav(T→S)). For SecondNav(S→T), the probe is initialized at the starting location but with the agent's final heading. For SecondNav(T→S), the probe is initialized with the agent's final heading and position. In both cases, the probe and the agent share the same coordinate system, i.e., in SecondNav(T→S), the initial GPS and Compass readings for the probe are identical to the final GPS and Compass readings for the agent. When the agent does not successfully reach the goal, the probe task is necessarily undefined and we do not instantiate a probe.

Sensors, Architecture, Training Procedure, Training Data. The probe uses the same sensor suite, architecture, training procedure, and training data as the agent, described in Section A.1. Note that no gradients (or rewards) flow from probe to agent. From the agent's perspective, the probe does not exist. From the probe's perspective, the agent provides a dataset of initial locations (or goals) and initial hidden states.

Evaluation Procedure. We evaluate the probe in a similar manner to the agent, except that any episode which the agent is unable to complete (5%) is removed, as the probe task is undefined if the agent is unable to complete the task. The agent reaches the goal 95% of the time, thus only 50 out of 1008 possible probe evaluation episodes are invalidated. The control probe type accounts for this. We ignore the agent's trajectory when computing SPL for the probe.
A.3 OCCUPANCY MAP DECODING

Task. We train a decoding network to predict the top-down occupancy map of the environment from the final internal state of the agent (h_t, c_t). We limit the decoder to only predict within 2.5 meters of any location the agent visited.

Architecture. The map-decoder is constructed as follows: first, the internal state (h_t, c_t) is concatenated into a 512×6-d vector. The vector is then passed to a 2-layer MLP with a hidden dimension of 512-d that produces a 4608-d vector. This 4608-d vector is then reshaped into a [128, 6, 6] feature map. The feature map is processed by a series of Coordinate Convolution (CoordConv) (Liu et al., 2018) and Coordinate Up-Convolution (CoordUpConv) layers that decrease the channel-depth and increase spatial resolution to [16, 96, 96]. Specifically, after an initial CoordConv with an output channel-depth of 128, we use a series of 4 CoordUpConv-CoordConv layers where each CoordUpConv doubles the spatial dimensions (quadruples spatial resolution) and each CoordConv reduces channel-depth by half. We then use a final 1x1-Convolution to create a [2, 96, 96] tensor representing the non-normalized log-probabilities of whether or not a given location is navigable. Each CoordConv has kernel size 3, padding 1, and stride 1. CoordUpConv has kernel size 3, padding 0, and stride 2. Before all CoordConv and CoordUpConv layers, we use 2D Dropout (Srivastava et al., 2014; Tompson et al., 2015) with a zero-out probability of 0.05. We use Batch Normalization layers (Ioffe & Szegedy, 2015) and the ReLU activation function (Nair & Hinton, 2010) after all layers except the terminal layer.
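CoordConv layers are not a standard library primitive; the sketch below shows the core operation (appending normalized x/y coordinate channels before a convolution), following our reading of Liu et al. (2018). The full decoder simply stacks such layers with the channel and resolution schedule described above.

import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    # Convolution that first appends two channels holding the normalized (x, y)
    # coordinate of each pixel, so the layer is aware of absolute position.
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, stride, padding)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))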
Training Data. We construct our training data by having a trained agent perform episodes of PointGoal navigation on the training dataset. Note that while evaluation is done utilizing the final hidden state, we construct our training dataset by taking 30 time steps (evenly spaced) from the trajectory and ensuring the final step is included.

Training Procedure. We train on 8 GPUs with a batch size of 128 per GPU (total batch size of 1024). We use the AdamW optimizer (Kingma & Ba, 2015; Loshchilov & Hutter, 2019) with an initial learning rate of 10−3, linearly scale the learning rate to 1.6 × 10−2 over the first 5 epochs (Goyal et al., 2017), and use a weight-decay of 10−5. We use the validation dataset to perform early stopping. We use Focal Loss (Lin et al., 2017) (a weighted version of Cross Entropy Loss) with γ = 2.0, α_NotNavigable = 0.75, and α_Navigable = 0.25 to handle the class imbalance.
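A sketch of the class-weighted focal loss as we read it here (Lin et al., 2017), applied to the [2, 96, 96] occupancy logits with class 0 = not navigable and class 1 = navigable; this is an illustration, not the authors' exact implementation:

import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=(0.75, 0.25)):
    # logits: [N, 2, H, W] non-normalized log-probabilities; target: [N, H, W] integer labels in {0, 1}.
    # alpha[c] weights class c; (1 - p_t)^gamma down-weights already well-classified pixels.
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)  # log-prob of the true class
    pt = log_pt.exp()
    alpha_t = torch.as_tensor(alpha, device=logits.device)[target]
    return (-alpha_t * (1.0 - pt) ** gamma * log_pt).mean()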
Evaluation Data and Procedure. We construct our evaluation data using the validation dataset. Note that the scenes in evaluation are novel to both the agent and the decoder. We evaluate the predicted occupancy map from the final hidden state/final time step. We collect a total of 5,000 episodes.

A.4 PAST AND FUTURE POSITION PREDICTION

Task. We train a decoder to predict the change in agent location given the internal state at time t (h_t, c_t). Specifically, let s_t be the agent's position at time t, where the coordinate system is defined by the agent's starting location (i.e., s_0 = 0), and let s_{t+k} be its position k steps into the future/past; then the decoder is trained to model f((h_t, c_t)) = s_{t+k} − s_t.

Architecture. The decoder is a 3-layer MLP that produces a 3-dimensional output with hidden sizes of 256 and 128. We use Batch Normalization (Ioffe & Szegedy, 2015) and the ReLU activation function (Nair & Hinton, 2010) after all layers except the last.

Training Data. The training data is collected from executing a trained agent on episodes from the training set. For each episode, we collect all possible pairs of s_t, s_{t+k} for a given value of k.
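A sketch of this decoder and of how training pairs might be assembled (variable names are ours; we assume the same 512×6-d flattened internal state as in Section A.3, and one fixed offset k per decoder):

import torch
import torch.nn as nn

# 3-layer MLP mapping the flattened internal state to the 3-d displacement s_{t+k} - s_t.
position_decoder = nn.Sequential(
    nn.Linear(512 * 6, 256), nn.BatchNorm1d(256), nn.ReLU(),
    nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(),
    nn.Linear(128, 3),
)

def make_pairs(internal_states, positions, k):
    # internal_states: [T, 512*6] flattened (h_t, c_t); positions: [T, 3] agent positions.
    # Returns inputs and displacement targets for every valid (t, t + k) pair; k may be negative.
    T = positions.shape[0]
    t = torch.arange(max(0, -k), min(T, T - k))
    return internal_states[t], positions[t + k] - positions[t]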
Training Procedure. We use the AdamW optimizer (Kingma & Ba, 2015; Loshchilov & Hutter, 2019) with a learning rate of 10−3, a weight decay of 10−4, and a batch size of 256. We use a Smooth L1 Loss/Huber Loss (Huber, 1964) between the ground-truth change in position and the predicted change in position. We use the validation set to perform early stopping.

Evaluation Procedure. We evaluate the trained decoder on held-out scenes. Note that the held-out scenes are novel to both the agent and the decoder.

Visualization of Predictions. For visualizing the predictions of past visitation, we found it easier to train a second decoder that predicts all locations the agent visited previously on a 2D top-down map given the internal state (h_t, c_t). This decoder shares the exact same architecture and training procedure as the occupancy grid decoder. The decoder removes the temporal aspect from the prediction, so it is ill-suited for any time-dependent analysis, but produces clearer visualizations.

Excursion Calibrated Analysis. To perform the excursion forgetting analysis, we use the excursion-labeled episodes. We mark the end of the excursion as the last 10% of the steps that are part of the excursion. For a given point in time t, we classify that point into one of {Non-Excursion, Excursion, Exit}.
We then examine how well this point is remembered by calculating the error of predicting the point t from t + k, i.e., how well t can be predicted when it is k steps into the past. When t is part of an excursion (both the excursion and the exit), we limit t + k to either be part of the same excursion or not part of an excursion. When t is not part of an excursion, t + k must also not be part of an excursion, nor can there be any excursion in the range [t, t + k].

A.5 COLLISION PREDICTION LINEAR PROBE

Task. The task of this probe is to predict whether the previous action taken led to a collision given the current hidden state. Specifically, it seeks to learn a function Collided_t = f((h_t, c_t)), where (h_t, c_t) is the internal state at time t and Collided_t is whether or not the previous action, a_{t−1}, led to a collision.

Architecture. The architecture is a logistic classifier that takes the concatenation of the internal state and produces the log-probability of Collided_t.

Training Data. We construct our training data by having a trained agent perform episodes of PointGoal navigation on the training set. We collect a total of 10 million samples and then randomly select 1 million for training. We then normalize each dimension independently by computing its mean and standard deviation, subtracting the mean, and dividing by the standard deviation. This ensures that all dimensions have the same average magnitude.
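A minimal PyTorch sketch of this probe (a single linear layer trained with a binary cross-entropy loss; the hyper-parameters follow the Training Procedure below, and the helper name is ours):

import torch
import torch.nn as nn

def fit_collision_probe(hidden_states, collided, epochs=20, lr=5e-4, batch_size=256):
    # hidden_states: [N, D] flattened (h_t, c_t), already standardized per dimension;
    # collided: [N] floats in {0, 1}. A single linear layer gives the collision log-odds.
    probe = nn.Linear(hidden_states.shape[1], 1)
    optimizer = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        perm = torch.randperm(hidden_states.shape[0])
        for i in range(0, len(perm), batch_size):
            idx = perm[i:i + batch_size]
            loss = loss_fn(probe(hidden_states[idx]).squeeze(-1), collided[idx])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return probe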
Training Procedure. We train on 1 GPU with a batch size of 256. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 5 × 10−4. We train for 20 epochs.

Evaluation Data and Procedure. We construct our evaluation data using the same procedure as the training data, but on the validation dataset, and collect 200,000 samples (which are then subsampled to 20,000).

Important Dimension Selection. To select which dimensions are important for predicting collisions, we re-train our probe with various L1 penalties. We sweep from 0 to 1000 and then select the penalty that results in the lowest number of significant dimensions without substantially reducing accuracy. We determine the number of significant dimensions by first ordering all dimensions by the L1 norm of the corresponding weight and then finding the smallest number of dimensions we can keep while maintaining 99% of the performance of keeping all dimensions for that classifier.

The t-SNE manifold is computed using 20,000 samples. This is then randomly subsampled to 1,500 for visualization.
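The dimension-counting step described above can be sketched as follows (accuracy_fn, which re-evaluates the probe with only a masked subset of dimensions kept, is left abstract, and the outer sweep over L1 penalties is not shown):

import numpy as np

def count_significant_dims(weights, accuracy_fn):
    # weights: [D] probe weights for one L1 penalty; accuracy_fn(mask) returns held-out
    # accuracy when only the masked dimensions are kept (all others zeroed).
    order = np.argsort(-np.abs(weights))          # dimensions sorted by |weight|, largest first
    full_acc = accuracy_fn(np.ones(len(weights), dtype=bool))
    for k in range(1, len(order) + 1):
        mask = np.zeros(len(weights), dtype=bool)
        mask[order[:k]] = True
        if accuracy_fn(mask) >= 0.99 * full_acc:
            return k
    return len(order)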
A.6 DATA AND MATERIALS AVAILABILITY

The Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets can be acquired from their respective distributors. Habitat (Savva et al., 2019) is open source. Code to reproduce experiments will be made available.

B ADDITIONAL DISCUSSIONS

B.1 RELATIONSHIP TO COGNITIVE MAPS

Throughout the text, we use the term 'map' to mean a spatial representation that supports intelligent behaviors like taking shortcuts. Whether or not this term is distinct from the specific concept of a 'cognitive map' is debated. Cognitive maps, as defined by O'keefe & Nadel (1978), imply a set of properties and are generally attached to a specific mechanism. The existence of a cognitive map requires that the agent be able to reach a desired goal in the environment from any starting location without being given that starting location, i.e., be able to navigate against a map. Further, cognitive maps refer to a specific mechanism – place cells and grid cells being present in the hippocampus. Other works have also studied 'cognitive maps' without putting such restrictions on the definition (Gallistel, 1990; Tolman, 1948); however, these broader definitions have been debated (Jacobs, 2003).

Our work shows that the spatial information contained within the agent's hidden state enables map-like properties – a secondary agent can take shortcuts through previously unexplored free space – and supports the decoding of a metric map. However, these do not fully cover the properties of O'keefe & Nadel (1978)'s definition, nor do we make a mechanistic claim about how this information is stored in the neural network, though we do find the emergence of collision-detection neurons.
C ADDITIONAL EXPERIMENTS

C.1 BLIND SHORTEST PATH NAVIGATION WITH TRUE STATE

In the main text, we posited that blind agents learn wall-following as this is an effective strategy for blind navigation in unknown environments. We posit that this is because the agent does not have access to true state (it does not know the current environment nor where it is in global coordinates). In this experiment we show that blind agents learn to take shortest paths, as opposed to wall-following, when trained in a single environment (implicitly informing the agent of the current environment) and with a global coordinate system.8

We use an identical agent architecture and training procedure as outlined for PointGoal navigation training in the Materials and Methods with two differences: 1) a single training and test environment and 2) usage of the global coordinates within the environment for both goal specification and the agent's GPS+Compass sensor. We perform this experiment on 3 scenes, 1 from the Gibson val dataset and 2 from the Matterport3D val dataset. The average SPL during training is 99±0.1, showing that the blind agent learns shortest path navigation, not wall-following. Figure A6 shows examples of an agent trained in a single scene with global coordinates and an agent trained in many scenes with episodic coordinates.

These two settings, i) where the agent uses an episodic coordinate system and navigates in unknown environments, and ii) where the agent uses global coordinates and navigates in a known environment, can be seen as the difference between a partially observable Markov decision process (POMDP) and a Markov decision process (MDP). In the POMDP case, the agent must learn a generalizable policy, while it can overfit in the MDP case.

8 Recall that in the episodic coordinate system the origin is defined by the agent's starting position and orientation. In the global coordinate system the origin is an arbitrary but consistent location (we simply use the origin for a given scene defined in the dataset). Thus, in the global coordinate system the goal is specified as 'Go to (x, y)' where x and y are specified in the global coordinate system, not with respect to the agent's current location.
C.2 FURTHER ANALYSIS OF THE PROBE'S PERFORMANCE

In the main text, we showed that the probe is indeed much more efficient than the agent, but how is this gain achieved? Our hypothesis is that the probe improves upon the agent's path by taking shortcuts and eliminating excursions (representing an 'out and back'). We define an excursion as a sub-path that approximately forms a loop. To quantify excursions, we manually annotate excursions in 216 randomly sampled episodes in evaluation environments. Of the labeled episodes, 62% have at least 1 excursion. On average, an episode has 0.95 excursions, and excursions have an average length of 101 steps (corresponding to 8.23 meters). Since excursions represent unnecessary portions of the trajectory, this indicates that the probe should be able to improve upon the agent's path by removing these excursions.

We quantify this excursion removal via the normalized Chamfer distance between the agent's path and the probe's path. Formally, given the agent's path Agent = [s_1^(agent), . . . , s_T^(agent)] and the probe's path Probe = [s_1^(probe), . . . , s_N^(probe)],
where s ∈ R^3 is a point in the environment:

    PathDiff(Agent, Probe) = (1/T) Σ_{i=1}^{T} min_{1≤j≤N} GeoDist(s_i^(agent), s_j^(probe)),        (4)

where GeoDist(·, ·) indicates the geodesic distance (shortest traversable path-length). Note that the Chamfer distance is not symmetric. PathDiff(Probe, Agent) measures the average distance of a point on the probe path s_j^(probe) from the closest point on the agent path. A large PathDiff(Probe, Agent) indicates that the probe travels through novel parts of the environment (compared to the agent). Conversely, PathDiff(Agent, Probe) measures the average distance of a point on the agent path s_i^(agent) from the closest point on the probe path. A large [PathDiff(Agent, Probe) − PathDiff(Probe, Agent)] gap indicates that the agent path contains excursions while the probe path does not; thus, we refer to this gap as Excursion Removal. To visually understand why this is the case, consider the example agent and probe paths in Fig. A7. Point (C) lies on an excursion in the agent path. It contributes a term to PathDiff(Agent, Probe) but not to PathDiff(Probe, Agent) because (D) is closer to (E) than (C).
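A sketch of Eq. (4), with plain Euclidean distance standing in for the geodesic distance (which requires the scene's navigation mesh); the function name is ours:

import numpy as np

def path_diff(path_a, path_b):
    # Average, over points of path_a, of the distance to the closest point on path_b.
    a = np.asarray(path_a)[:, None, :]   # [len_a, 1, 3]
    b = np.asarray(path_b)[None, :, :]   # [1, len_b, 3]
    return np.linalg.norm(a - b, axis=-1).min(axis=1).mean()

# Excursion Removal is then path_diff(agent_path, probe_path) - path_diff(probe_path, agent_path).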
On both SecondNav(S→T) and SecondNav(T→S), we find that as the efficiency of a probe increases, Excursion Removal also increases (Table A2, row 1 vs. 2, 2 vs. 3), confirming that the TrainedAgentMemory probe is more efficient because it removes excursions.

We next consider whether the TrainedAgentMemory probe also travels through previously unexplored space in addition to removing excursions. To quantify this, we report PathDiff(Probe, Agent) on episodes where agent SPL is less than average (less than 62.9%).9 If probes take the same path as the agent, we would expect this metric to be zero. If, however, probes travel through previously unexplored space to minimize travel distance, we would expect this metric to be significantly non-zero. Indeed, on SecondNav(S→T), we find the TrainedAgentMemory probe is 0.32 meters away on average from the closest point on the agent's path (99% empirical bootstrap of the mean gives a range of (0.299, 0.341)). See Fig. A7 for a visual example. On SecondNav(T→S), this effect is slightly more pronounced: the TrainedAgentMemory probe is 0.55 meters away on average (99% empirical bootstrap of the mean gives a range of (0.52, 0.588)).
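The interval reported above can be obtained with a percentile-style bootstrap over per-episode values, sketched below; the paper says only 'empirical bootstrap of the mean', so the exact resampling variant and helper name are our assumptions.

import numpy as np

def bootstrap_mean_ci(values, n_resamples=10000, confidence=0.99, seed=0):
    # Resample per-episode values with replacement and take empirical quantiles of the means.
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=np.float64)
    means = rng.choice(values, size=(n_resamples, len(values)), replace=True).mean(axis=1)
    lo, hi = np.quantile(means, [(1 - confidence) / 2, 1 - (1 - confidence) / 2])
    return lo, hi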
Taken holistically, these results show that the probe is both more efficient than the agent and consistently travels through new parts of the environment (that the agent did not travel through). Thus, the spatial representation in the agent's memory is not simply a 'literal' episodic summarization, but also contains anticipatory inferences about previously unexplored spaces being navigable (e.g., traveling along the hypotenuse instead of the sides of a room).

In the text above we reported free space inference only on episodes where the agent gets an SPL below average. In Fig. A12 we provide a plot of Free Space Inference vs. Agent SPL to show the impact of other cutoff points. In Fig. A13 we also provide a similar plot of Excursion Removal vs. Agent SPL. In both cases, as agent SPL increases, the probe is able to infer less free space or remove fewer excursions.

C.3 FUTURE VISITATION PREDICTION

In the main text we examined what types of systematic errors are made when decoding past agent locations; here we provide additional analysis and look at predicting future locations, as that will reveal whether there are any idiosyncrasies between what can be predicted about the past vs. what will happen in the future.
Given the ground-truth location s_{t+k}, we evaluate the decoder via i) absolute L2 error ||ŝ_{t+k} − s_{t+k}|| and ii) relative L2 error ||ŝ_{t+k} − s_{t+k}|| / ||s_{t+k} − s_t||. To determine baseline (or chance) performance, we train a second set of decoders where, instead of using the correct internal state (h_t, c_t) as the input, we randomly select an internal state from a different trajectory. This evaluates whether there are any inherent biases in the task.
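For a single prediction, the two errors reduce to the following sketch (s_hat_tk is the decoder output, s_tk the ground-truth position at t + k, and s_t the position at the query time; names are ours):

import numpy as np

def position_errors(s_hat_tk, s_tk, s_t):
    # Absolute L2 error of the prediction, and the same error normalized by the
    # magnitude of the true displacement between t and t + k.
    abs_err = np.linalg.norm(np.asarray(s_hat_tk) - np.asarray(s_tk))
    rel_err = abs_err / np.linalg.norm(np.asarray(s_tk) - np.asarray(s_t))
    return abs_err, rel_err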
In Fig. A8, we find that the decoder is able to accurately predict where the agent has been, even over long time horizons – e.g. at 100 time steps in the past, relative error is 0.55 and absolute error is 1.0m, compared to a relative error of 1.0 and an absolute error of 3.2m for the chance baseline. For short time horizons the decoder is also able to accurately predict where the agent will be in the future – e.g. at 10 time steps into the future, relative and absolute error are below chance. Interestingly, for longer-range future predictions, the decoder is worse than chance in relative error but on par in absolute error. This apparent contradiction arises because the decoders make (relatively) large systematic errors when the agent backtracks. In order for the decoder to predict backtracking, the agent would need to already know that its future trajectory will be sub-optimal (i.e. lead to backtracking) but still take that trajectory. This contradicts the objective the agent is trained for, to reach the goal as quickly as possible, and thus the agent would not take a given path if it knew it would lead to backtracking.

9 We restrict to a subset where the agent has relatively low SPL to improve dynamic range. When the agent has high SPL, there won't be excursions to remove and this metric will naturally be low. In the supplementary text we provide plots of this metric vs. agent SPL.

C.4 EXTENSION TO SIGHTED NAVIGATION AGENTS

In the main text we analyzed how 'blind' agents, those with limited perceptual systems, utilize their memory, and found evidence that they build cognitive maps. Here, we extend our analysis to agents with rich perceptual systems: those equipped with a Depth camera and an egomotion sensor.

Our primary experimental paradigm relies on showing that a probe is able to take shortcuts when given the agent's memory, which requires that the probe can take a shorter path than the agent. Navigation agents with vision can perform PointNav near-perfectly (Wijmans et al., 2020), and thus there isn't room for improvement, rendering this experiment infeasible.
As a supplement to this experiment, we also show that a metric map (a top-down occupancy grid) can be decoded from the agent's memory. This procedure can also be applied to sighted agents. We use the ResNet50 (He et al., 2016) Gibson-2plus (Xia et al., 2018) pre-trained model from Wijmans et al. (2020) and train an occupancy-grid decoder using the same procedure as in the main text. Note, however, that we use only Gibson for training and the Gibson validation scenes as held-out data instead of Matterport3D, as this agent was trained only on Gibson.

As before, we compare performance of TrainedAgentMemory with UntrainedAgentMemory. We find mixed results. When measuring performance with Intersection-over-Union (IoU), UntrainedAgentMemory outperforms TrainedAgentMemory (40.1% vs. 42.9%). However, when measuring performance with average class-balanced accuracy, TrainedAgentMemory outperforms UntrainedAgentMemory (61.8% vs. 53.1%).
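For clarity, the sketch below shows how these two map-quality metrics could be computed on a binary navigable/non-navigable occupancy grid; it is an illustrative implementation under our own naming, not the exact evaluation code. Class-balanced accuracy averages per-class recall so that the typically dominant non-navigable class cannot inflate the score.

```python
import numpy as np

def map_metrics(pred, target):
    """IoU and class-balanced accuracy for binary occupancy-grid predictions.

    pred, target : (H, W) arrays interpretable as booleans, True = navigable.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    # Intersection-over-Union on the navigable class.
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / max(union, 1)

    # Class-balanced accuracy: mean of per-class recall over {navigable, non-navigable}.
    recalls = []
    for cls in (True, False):
        mask = target == cls
        if mask.any():
            recalls.append((pred[mask] == cls).mean())
    balanced_acc = float(np.mean(recalls))
    return float(iou), balanced_acc
```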
Fig. A9 and Fig. A10 show the corresponding distribution plots. Overall, this experiment does not provide convincing evidence either way as to whether vision-equipped agents build metric maps in their memory. However, it does show that vision-equipped agents, if they do maintain a map of their environment, create one that is considerably more challenging to decode. Further, we note this does not necessarily imply similarly mixed results as to whether vision-equipped agents maintain a still spatial but sparser representation, such as a topological graph, as their rich perception can fill in the details in the moment.

C.5 NAVIGATION FROM MEMORY ALONE

In the main text we showed that agents learn to build map-like representations. A map-like representation of the environment should, to a degree, support navigation with no external information, i.e. by dead reckoning. Given that the actions are deterministic, the probe should be able to perform either task with no external inputs, using only the agent's internal representation and the previously taken action. The localization performed by the probe in this setting is similar to path integration; however, it must also be able to handle any collisions that occur while navigating.

Fig. A11 shows performance vs. episode length for SecondNav(S→T) and SecondNav(T→S). There are two primary trends.
First, for short navigation episodes (≤5m), the agent is often able to complete the task. Second, under this setting SecondNav(T→S) is the easier task. This is due to the information conveyed to the probe by its initial heading: in SecondNav(T→S), the probe can make progress by simply turning around and going forward, while in SecondNav(S→T), the final heading of the agent is not informative of which way the probe should navigate initially. Overall, these results show that the representation built by the agent is sufficient to navigate short distances with no external information.

Experiment procedure. This experiment mirrors the probe experiment described in methods and materials with three differences: 1) The input from the GPS+Compass sensor is zeroed out. 2) The change-in-distance-to-goal shaping term in the reward is normalized by the distance from the initial state to the goal; we find that prediction of the value function suffers considerably otherwise. 3) An additional reward signal is added indicating whether or not the last action taken decreased the angle between the probe's current heading and the direction along the shortest path to the goal; we find the probe otherwise has difficulty learning to turn around on the SecondNav(T→S) task (as it almost always starts facing 180° in the wrong direction).

Let $h^{gt}_t$ be the heading along the shortest path to goal from the probe's current position $s_t$ and $h_t$ be the probe's current heading; then $\mathrm{AngularDistance}(h^{gt}_t, h_t)$ is the error in the probe's heading. The full reward for this probe is then

$$
r_t(s_t, a_t, s_{t+1}) =
\begin{cases}
2.5 \cdot \mathrm{Success} & \text{if } a_t \text{ is Stop} \\
-10.0 \cdot \dfrac{\Delta_{\mathrm{geo\_dist}}(s_t, s_{t+1})}{\mathrm{GeoDist}(s_0, g)} - 0.25 \cdot \Delta\mathrm{HeadingError}(s_t, s_{t+1}) - \lambda & \text{otherwise}
\end{cases}
\tag{5}
$$
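A minimal sketch of Eq. (5) follows, assuming that each Δ denotes the step-to-step change in the corresponding quantity (geodesic distance to goal, heading error) and that those quantities are provided by the simulator. The slack value for λ and all function and argument names are our own assumptions for illustration, not the released implementation.

```python
def probe_reward(prev_geo_dist, geo_dist, prev_heading_err, heading_err,
                 initial_geo_dist, action_is_stop, success, slack=0.01):
    """Reward of Eq. (5), written out step by step (illustrative only).

    prev_geo_dist / geo_dist     : geodesic distance to goal at s_t and s_{t+1}
    prev_heading_err/heading_err : angular error w.r.t. the shortest-path heading
    initial_geo_dist             : GeoDist(s_0, g), used to normalize the shaping term
    slack                        : per-step penalty lambda (value assumed here)
    """
    if action_is_stop:
        return 2.5 * float(success)

    delta_geo = geo_dist - prev_geo_dist            # negative when moving closer to goal
    delta_heading = heading_err - prev_heading_err  # negative when turning toward goal
    return (-10.0 * delta_geo / initial_geo_dist
            - 0.25 * delta_heading
            - slack)
```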
C.6 MEMORY LENGTH

The method presented in the main text to examine memory length is a post-hoc analysis performed on the 'blind' PointGoal navigation agents, and thus the agent is operating out-of-distribution. From the agent's point of view, it is still performing a valid PointGoal navigation episode, just with a different starting location, but the agent may not have taken the same sequence of actions had it started from that location. While we would still expect performance to saturate with a small k if the memory length is indeed short, this method is imprecise for measuring the exact memory length of the agent and does not answer what memory budget is required to perform the task.

Here we examine training agents with a fixed-memory-length LSTM. Fig. A14 shows similar trends to those described in the main paper – performance increases as the memory budget increases – however, performance is higher when the agent is trained for a given memory budget. Due to the increased compute needed to train the model (e.g. training a model with a memory length of 128 is 128× more computationally costly), we were unable to train for a memory budget longer than 256.

We also note the non-monotonicity in Fig. A14. We conjecture that this is a consequence of inducing the negative effects of large-batch optimization (Keskar et al., 2017) – training with a memory budget of k effectively increases the batch size by a factor of k.
Keeping the batch size constant has its own drawbacks: reducing the number of parallel environments harms data diversity and results in overfitting, while reducing the rollout length increases the bias of the return estimate and makes credit assignment harder. We therefore kept the number of environments and the rollout length constant.
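The text above does not spell out the exact training mechanism, so the following is only a plausible sketch of how a fixed memory budget of k steps could be imposed on a recurrent policy: at every step, the hidden state is recomputed from a zero state over just the last k observations, which is what makes training roughly k times more expensive and effectively multiplies the batch size by k. Module and argument names are ours, not the authors'.

```python
import torch
import torch.nn as nn

class FixedMemoryLSTMPolicy(nn.Module):
    """LSTM policy whose hidden state is rebuilt from only the last k inputs."""

    def __init__(self, input_dim, hidden_dim, num_actions, memory_length):
        super().__init__()
        self.memory_length = memory_length
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_history):
        """obs_history: (B, T, input_dim) containing all observations so far."""
        # Truncate to the memory budget; everything older is forgotten.
        window = obs_history[:, -self.memory_length:, :]
        # Re-encode the window from a zero initial state (the k-fold extra cost).
        out, _ = self.lstm(window)
        return self.policy_head(out[:, -1])  # action logits for the current step
```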
D SUPPLEMENTARY VIDEOS

Movies S1-3: Videos showing blind agent navigation with the location of the hidden state in the collision t-SNE space. Notice that the hidden state stays within a cluster throughout a series of actions.

   Probe Type               SecondNav(S→T)       SecondNav(T→S)
                            Excursion Removal    Excursion Removal
 1 AllZeroMemory            0.21±0.017           0.21±0.004
 2 UntrainedAgentMemory     0.23±0.009           0.25±0.009
 3 TrainedAgentMemory       0.52±0.014           0.51±0.011

Table A2: Excursion removal results of our trained probe agent under three configurations – initialized with an empty representation (AllZeroMemory), a representation of a random agent walked along the trained agent's path (UntrainedAgentMemory), and the final representation of the trained agent (TrainedAgentMemory). 95% confidence interval reported over 5 agent-probe pairs.

Figure A6: True state trajectory comparison. Example trajectories of an agent with true state (trained for a specific environment and using global coordinates), green line, compared to an agent trained for many environments and using episodic coordinates, blue line. The latter is what we examine in this work. Notice that the agent with true state takes shortest-path trajectories while the agent without true state instead exhibits strong wall-following behavior.

Figure A7: Two categories of probe shortcut. 'Excursion Removal' is when the probe removes excursions from the agent's path; the dashed line shows the distance between the points in the excursion and the closest point in the probe's path. 'Free Space Inference' occurs when the probe travels through previously unvisited locations in the environment; the dashed lines show the distance between any point in the probe's path and the closest point in the agent's path.
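Both shortcut categories in Figure A7 are described in terms of nearest-point distances between the probe's and the agent's paths. The sketch below computes those point-to-path distances; how they are aggregated into the per-episode scores of Table A2 is not specified here, so the mean is used purely for illustration, and all names are ours.

```python
import numpy as np

def point_to_path_distances(points, path):
    """For each point on one trajectory, distance to the closest point on the other.

    points : (N, 2) positions on one trajectory
    path   : (M, 2) positions on the other trajectory
    Returns an (N,) array of nearest-neighbor distances.
    """
    diffs = points[:, None, :] - path[None, :, :]   # (N, M, 2)
    return np.linalg.norm(diffs, axis=-1).min(axis=1)

# Free-space inference: how far the probe's path strays from anywhere the agent went.
def free_space_inference(probe_path, agent_path):
    return point_to_path_distances(probe_path, agent_path).mean()

# Excursion removal: how far the agent's (excursion) points lie from the probe's path.
def excursion_removal(agent_path, probe_path):
    return point_to_path_distances(agent_path, probe_path).mean()
```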
Figure A8: Past and future prediction. Performance of decoders trained to predict where the agent was in the past/will be in the future. The x-axis is how far into the past or future the decoder is predicting (positive values are future predictions and negative values are past predictions). The y-axis is either absolute or relative L2 error between the predicted location of the agent and the true location.

Figure A9: Map prediction accuracy (Intersection-over-Union) for Depth-sensor-equipped agents.

Figure A10: Map prediction accuracy (class-balanced accuracy) for Depth-sensor-equipped agents.
Figure A11: Memory-only probe performance. Performance (in SPL; higher is better) as a function of geodesic distance from start to goal for the TrainedAgentMemory probe without inputs on SecondNav(S→T) and SecondNav(T→S). More information can be found under the 'Navigation from memory alone' header.

Figure A12: Free Space Inference for the TrainedAgentMemory probe on both SecondNav(S→T) and SecondNav(T→S) as a function of agent SPL. We see that as agent SPL decreases, the probe is able to take paths that infer more free space.
Figure A13: Excursion Removal for the TrainedAgentMemory probe on both SecondNav(S→T) and SecondNav(T→S) as a function of agent SPL. We see that as agent SPL decreases, excursion removal increases since the probe is able to remove additional excursions.

Figure A14: Performance vs. memory length for agents trained under a given memory length. Note that longer memory lengths are challenging to train for under this methodology as it induces the negative effects of large-batch optimization and is computationally expensive.

Figure A15: Map prediction with poor examples. In the main text we show qualitative examples of the average prediction and a good prediction. Here we show two additional examples: A, a very poor quality prediction, showing that the decoder sometimes does make large mistakes; B, the average prediction for the UntrainedAgentMemory decoder, showing the qualitative difference between the average UntrainedAgentMemory and TrainedAgentMemory predictions.