diff --git "a/BdE0T4oBgHgl3EQfPwDB/content/tmp_files/load_file.txt" "b/BdE0T4oBgHgl3EQfPwDB/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/BdE0T4oBgHgl3EQfPwDB/content/tmp_files/load_file.txt" @@ -0,0 +1,1338 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf,len=1337 +page_content='Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations Sagnik Majumder1,2* Hao Jiang2 Pierre Moulon2 Ethan Henderson2 Paul Calamia2 Kristen Grauman1,3 Vamsi Krishna Ithapu2 1UT Austin 2Reality Labs Research, Meta 3FAIR Abstract Can conversational videos captured from multiple egocen- tric viewpoints reveal the map of a scene in a cost-efficient way?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' We seek to answer this question by proposing a new problem: efficiently building the map of a previously un- seen 3D environment by exploiting shared information in the egocentric audio-visual observations of participants in a natural conversation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Our hypothesis is that as multi- ple people (“egos") move in a scene and talk among them- selves, they receive rich audio-visual cues that can help uncover the unseen areas of the scene.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Given the high cost of continuously processing egocentric visual streams, we further explore how to actively coordinate the sampling of visual information, so as to minimize redundancy and re- duce power use.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' To that end, we present an audio-visual deep reinforcement learning approach that works with our shared scene mapper to selectively turn on the camera to ef- ficiently chart out the space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' We evaluate the approach using a state-of-the-art audio-visual simulator for 3D scenes as well as real-world video.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Our model outperforms previous state-of-the-art mapping methods, and achieves an excellent cost-accuracy tradeoff.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Project: http://vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content='cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' utexas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content='edu/projects/chat2map.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1.' 
1. Introduction

The spatial layout of the environment around us is fundamental to understanding our physical context. By representing the walls, furniture, and other major structures in a space, scene maps ground activity and objects in a persistent frame of reference, facilitating high-level reasoning for many downstream applications in augmented reality (AR) and robotics. For example, episodic memory [18, 30] aims to relocalize lost objects observed in first-person video (where are my keys?); floorplan estimation [10, 45, 53] aims to chart out the area and shapes of complex buildings; navigating agents try to discover routes in unfamiliar spaces [4, 11, 60].

*Work done during an internship at Reality Labs Research, Meta

[Figure 1. Given egocentric audio-visual observations from multiple people wearing AR glasses and moving and conversing (left), we aim to accurately map the scene (right). To mitigate cost, our model receives audio continuously but learns to selectively employ the ego cameras only when the visual input is expected to be informative.]

While traditional computer vision approaches for mapping (e.g., visual SLAM) are highly effective when extensive exposure to the environment is possible, in many real-world scenarios only a fraction of the space is observed by the camera.
Recent work shows the promise of sensing 3D spaces with both sight and sound [8, 14, 26, 28, 59]: listening to echoes bounce around the room can reveal the depth and shape of surrounding surfaces, and even help extrapolate a floorplan beyond the camera's field of view or behind occluded objects [59]. While we are inspired by these advances, they also have certain limitations. Often systems will emit sounds (e.g., a frequency sweep) into the environment to ping for spatial information [1, 14, 15, 24, 28, 44, 59, 69], which is intrusive if done around people. Furthermore, existing audio-visual models assume that the camera is always on grabbing new frames, which is wasteful if not intractable, particularly on lightweight, low-power computing devices in AR settings.

We introduce Chat2Map, a new scene mapping task aimed at eliminating these challenges. In the proposed setting, multiple people converse as they move casually through the scene while wearing AR glasses equipped with an egocentric camera, microphones, and potentially other sensors (e.g., for odometry).¹ Given their egocentric audio-visual data streams, the goal is to infer the ground-plane occupancy map for the larger environment around them. See Figure 1.

We observe that audio-visual data from the egos' interactions will naturally reflect scene structure.
First, as they walk and talk, their movements reveal spaces like corridors, doorways, and large rooms, in both modalities. Second, the speech captured by the device-wearer's cameras and microphones can be localized to different speakers, which, compared to active sound emission, is non-intrusive.

To realize this vision, we develop a novel approach to efficient scene mapping from multi-ego conversations. Our approach has two key elements: a shared scene mapper and a visual sampling policy. For the former, we devise a transformer-based mapper that incorporates the multiple data streams to infer a map beyond the directly observed areas, and, most importantly, that enables communication among the egos about their observations and states in the 3D space to improve mapping accuracy. For the latter, our idea is to relax the common assumption of an "always-on" camera, and instead actively select when to sample visual frames from any one of the ego cameras. Intuitively, certain regions where the egos move will be more or less important for mapping (e.g., corners of the room, doors). We train a sampling policy with deep reinforcement learning that activates the visual feed only when it is anticipated to complement the continuous audio feed. This is a cost-conscious approach, mindful that switching on a camera is much more power consuming than sensing audio with microphones [2].

We demonstrate our approach using a state-of-the-art audio-visual simulator for 3D scenes as well as some real-world video input. We can successfully map an unfamiliar environment given only partial visibility via multiple conversing people moving about the scene.
Compared to sampling all visual frames, our model reduces the visual processing by 87.5% while the mapping accuracy declines only marginally (∼9%).

2. Related Work

Visual scene mapping. Past works tackle scene mapping using 3D Manhattan layouts [20, 73, 80, 85, 86], detailed floorplans [10, 45, 53, 71, 78], occupancy [23, 39, 52, 61, 67, 68], and semantic maps [51]. Manhattan layouts include structured outputs like scene boundaries [73, 85, 86], corners [85, 86], and floor/ceilings [80, 86], but do not generalize to unseen environment regions. Floorplan estimation methods use dense scans of 3D scenes to predict geometric (walls, exterior/interior) and semantic layouts (room type, object type, etc.); they rely on extensive human walkthroughs with RGB-D [10, 45] or 3D point cloud [53, 71] scans, and are usually limited to polygonal layouts [10, 45, 53, 71, 78]. Occupancy maps traditionally rely on wide field-of-view (FoV) LiDAR scanners [62] or evaluate on simple 2D environments without non-wall obstacles [23, 39, 68]. More recent methods [4, 5, 11, 60] train an embodied agent to explore and build topdown maps of more complex scenes using RGB-D. In contrast, our method uses both vision and audio from the observations of a group of conversing people for mapping. Rather than steer the camera of a robot to map the scene, our task requires processing passive video from human camera wearers.

¹Throughout, we call each person participating in the conversation an "ego" for short.
Audio-visual scene mapping. To our knowledge, the only prior work to translate audio-visual inputs into a general (arbitrarily shaped) floorplan map is AV-Floorplan [59]. Unlike AV-Floorplan, our method maps from speech in natural human conversations, which avoids emitting intrusive frequency-sweep signals to generate echoes. In addition, a key goal of our work is to reduce mapping cost by skipping redundant visual frames. Our experiments demonstrate the benefits of our model design over AV-Floorplan [59].

Audio(-visual) spatial understanding. More broadly, beyond the mapping task, various methods leverage audio for geometric and material information about the 3D scene and its constituent objects. Prior work relies on acoustic reflections to estimate the shape of an object [44]. Echolocation is used in robotics to estimate proximity to surrounding surfaces [1, 15, 24, 69]. Together, vision and audio can better reveal the shape and materials of objects [54, 65, 84], self-supervise imagery [28], and improve depth sensing [40, 81]. Recent work exploits correlations between spatial audio and imagery to reason about scene acoustics [7, 49] or aid active embodied navigation [6, 9, 19, 27, 83] and source separation [47, 48]. No prior work intelligently captures images during conversations to efficiently map a scene.

Multi-agent spatial understanding. There is existing work [17, 33, 35, 36, 57] in the visual multi-agent reinforcement learning (MARL) community that learns collaborative agents for performing tasks like relocating furniture [35, 36], playing 3D multi-player games [34], coordinated scene exploration [33], or multi-object navigation [57].
In such settings, the collaborative agents actively interact with the environment to learn a shared scene representation for successfully completing their task. In contrast, we aim to learn a shared geometric map of a 3D scene given passive observations that come from the trajectories chosen by a group of people involved in a natural conversation.

Efficient visual sampling in video. Efficient visual sampling has been studied in the context of video recognition [29, 42, 43, 79, 82] and summarization [12, 72], with the goal of selectively and smartly processing informative frames, which can both reduce computational cost and improve recognition performance. More closely related to our approach are methods that use audio for the decision-making [29, 42, 56]. Different from the above, we use efficient visual sampling in the context of mapping scenes. Furthermore, in our case an online sampling decision needs to be made at every step before looking at the current visual frame (or frames from future steps).

3. Chat2Map Task Formulation

We propose a novel task: efficient and shared mapping of scenes from multi-ego conversations. Without loss of generality, we consider two egos, E1 and E2, each wearing AR glasses equipped with an RGB-D camera and a multi-channel microphone array. The egos have a conversation and move around in an unmapped 3D environment. Each conversation is T steps long. At each step t, ego Ei's glasses receive an observation Oi,t = (Vi,t, Si,t, Pi,t, S′i,t, P′i,t).
Vi,t is the 90° FoV RGB-D image and Si,t is the speech waveform uttered by Ei, as observed from its pose Pi,t = (xi,t, yi,t, θi,t), where (xi,t, yi,t) denotes its location and θi,t denotes its orientation in the 3D scene. S′i,t is the speech of the other ego E′i (the other person involved in the conversation), as perceived by Ei (note that the voice sounds different depending on the listener position), and P′i,t is E′i's pose relative to Ei. Modern AR glasses, like Bose Frames or Facebook Aria, already support capturing such multi-sensory observations, making it possible to have a real-world instantiation of our task.

Given the real-time observation stream O for the egos, where O = {Oi,t : i = 1, 2; t = 1, ..., T}, and a total budget of visual frames B, we aim to learn a model that can accurately estimate the top-down occupancy map M of the scene without exceeding the visual budget. We assume the first visual frames (at t = 1) for both egos to be observed by the model. Thus we aim to learn a policy that samples B frames from the 2(T − 1) remaining choices (which are not considered a batch, but rather unfold in sequence), and a mapper that predicts the scene map given the sampled frames. Recall that our goal is to build a model that samples the expensive visual frames only when absolutely needed for scene mapping. This is captured by the constraint 1 ≤ B ≪ 2(T − 1).
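As a minimal illustration of this interface, the per-step observation tuple and the visual-budget bookkeeping could be organized as in the following Python sketch; the class names, fields, and the strict inequality used in the assertion are assumptions for exposition only, not the actual implementation.

import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class EgoObservation:
    rgbd: Optional[np.ndarray]   # V_i,t: 90-degree FoV RGB-D frame; None if the camera stays off
    speech_self: np.ndarray      # S_i,t: waveform uttered by ego E_i
    pose: tuple                  # P_i,t = (x, y, theta)
    speech_other: np.ndarray     # S'_i,t: the other ego's speech as heard by E_i
    pose_other: tuple            # P'_i,t: the other ego's pose relative to E_i

class VisualBudget:
    """Bookkeeping for the constraint 1 <= B << 2 * (T - 1)."""
    def __init__(self, B: int, T: int, num_egos: int = 2):
        assert 1 <= B < num_egos * (T - 1), "budget must leave most frames unsampled"
        self.remaining = B

    def try_sample(self) -> bool:
        # Called once per ego per step; a frame may be captured only while budget remains.
        if self.remaining > 0:
            self.remaining -= 1
            return True
        return False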
There are three important aspects to our task. First, it requires learning from both vision and audio. While the visual signal carries rich information about the local scene geometry, there can be a high amount of redundancy in the visual feed captured during a conversation (e.g., the egos may visit the same location more than once or change their viewpoint only marginally). Second, the long-range nature of audio not only helps uncover global scene properties [21, 59] like shape and size, beyond what is visible in images; we can also exploit audio to undersample the visual frames, thereby reducing the cost of capturing and processing sensory inputs for mapping. Third, shared mapping of a scene implies jointly leveraging the complementary information in the audio (speech) from self and other egos, and the synergy of the audio-visual cues from multiple egos. These insights form the basis of our key hypothesis in this task: selectively sampling visual frames during a conversation involving egos that share information with each other can facilitate efficient mapping of a scene.

4. Approach

We solve the task by learning a model that estimates the scene map given the egos' audio-visual observations and also sequentially decides when to sample visual frames for mapping given the audio stream, ego poses, and previously sampled frames, if any. Here, "sampling" refers to individually deciding for each ego whether or not to use its camera to capture the visuals at every step of its trajectory in the scene. The sampling is preemptive in nature, i.e., the policy selects or skips a frame without capturing it first.
Our model has two main components (see Fig. 2): (1) a shared scene mapper, and (2) a visual sampling policy. At every step t, the shared mapper has two functions. First, it estimates the map of a previously unseen environment by exploiting the shared spatial cues in the audio-visual observations of the two egos. Second, it informs the policy about the utility of sampling a certain visual frame. Guided by the mapper, the policy samples only the most informative visual frames that can boost mapping significantly over using just audio. Note that, unlike the visuals, we observe audio continuously, as it is less resource-intensive vis-a-vis storage and power requirements for processing [2]. We learn our task through the synergy of the mapper and the policy, such that under the constraint of a limited visual budget B, our model implicitly understands which visual frames are critical for mapping.

First, we describe the steps involved to prepare our model inputs (Sec. 4.1). Next, we introduce our visual sampling policy (Sec. 4.2) and shared scene mapper (Sec. 4.3). Finally, we present model training details (Sec. 4.4).
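The schematic below summarizes how the two components interact at every step of a conversation. It is pseudocode for exposition only: the capture functions, the policy and mapper interfaces, and the budget object (from the sketch above) are hypothetical placeholders rather than the actual implementation.

# Schematic per-step loop for the two-component model; all names are placeholders.
def run_conversation(policy, mapper, budget, egos, T):
    shared_map = mapper.empty_map()
    for t in range(1, T + 1):
        audio = [capture_audio(ego, t) for ego in egos]        # microphones stay on
        poses = [ego.pose(t) for ego in egos]
        frames = []
        for i, ego in enumerate(egos):
            # Preemptive decision: the policy never sees the frame it is deciding about.
            sample = (t == 1) or (policy.decide(i, audio, poses, t) and budget.try_sample())
            frames.append(capture_rgbd(ego, t) if sample else None)
        # The mapper always receives audio and poses; visual frames only when sampled.
        shared_map = mapper.update(frames, audio, poses)
    return shared_map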
Through the rest of the text, we use separate notations to distinguish the egos' observations O (i.e., what the egos receive from the environment) from our model inputs O (i.e., what we capture and feed to our model for efficient mapping).

[Figure 2. Our model has two main components: a) a visual sampling policy (left), and b) a shared scene mapper (right). At each step, our policy receives the current audio along with the previous audio(-visual) observations for the egos and decides for each ego individually whether to capture its visual frame at the current step. As per the policy predictions, the shared mapper conditionally uses the current visual frame(s) and audio along with the past audio(-visual) observations to predict the occupancy map of the scene, a ground-plane map showing where obstacles and freespace are (shown in green and white).]

4.1. Model input preparation

We prepare our model inputs by separately preprocessing the visual and audio modalities. If our policy decides to sample an image V, we transform it into V = (V^R, V^M). V^R denotes the normalized RGB image with pixel values in [0, 1]. V^M denotes the 90° FoV topdown occupancy map created by projecting the depth image. To do the depth projection, we first backproject the depth image into world coordinates using the camera's intrinsic parameters to compute the local visible scene's 3D point cloud.
Next, we project these points to obtain a two-channel binary topdown map of size h × w × 2, where the first channel of the map reveals occupied/free areas, and the second channel reveals seen/unseen areas. If our policy skips V, we set V^R and V^M to all-zero matrices of the appropriate size. For a speech waveform S, we calculate the short-time Fourier transform (STFT) magnitude spectrogram, denoted by S, of size F × T × C, where F, T, and C are the number of frequency bins, time windows, and ambisonic microphone channels, respectively. Lastly, we normalize each pose Pi,t to be relative to P1,1. See Sec. 5 and Supp. Sec. 7.6 for more details.
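For illustration, a simplified version of the depth-to-topdown projection described above is sketched below; the grid size, cell resolution, height threshold, and axis conventions are assumptions, and a full implementation would also transform the point cloud by the ego's pose before projecting.

import numpy as np

def depth_to_topdown(depth, K, h=64, w=64, cell=0.1, height_thresh=0.2):
    """Return an h x w x 2 binary map: channel 0 = occupied/free, channel 1 = seen/unseen."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    # Backproject pixels to 3D camera coordinates with a pinhole model.
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]                          # keep valid depth readings
    # Project onto the ground plane: x maps to columns, z (forward) maps to rows.
    cols = (pts[:, 0] / cell + w / 2).astype(int)
    rows = (pts[:, 2] / cell).astype(int)
    keep = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    rows, cols, pts = rows[keep], cols[keep], pts[keep]
    topdown = np.zeros((h, w, 2), dtype=np.float32)
    topdown[rows, cols, 1] = 1.0                      # seen/unseen channel
    heights = -pts[:, 1]                              # camera y points down in this sketch
    occupied = heights > height_thresh                # points above the floor mark obstacles
    topdown[rows[occupied], cols[occupied], 0] = 1.0  # occupied/free channel
    return topdown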
4.2. Visual sampling policy

At every step t, our visual sampling policy πV (Fig. 2, left) receives Oπ(t) as input and makes the decision to either capture or skip the visual frame Vi,t for each ego Ei. Oπ(t) comprises the visual cue from the last step along with the speech cues and the poses from the current step and the last step for both egos. Formally, Oπ(t) = {Oπ_i(t) : i = 1, 2}, where Oπ_i(t) = {Vi,t−1, Si,j, Pi,j, S′i,j, P′i,j : j = t − 1, ..., t}. The policy first uses an encoder network to generate a multi-modal embedding of Oπ(t), and then passes the embedding to a policy network that makes a sampling decision per ego. At t = 1, as per our problem definition (Sec. 3), the policy always chooses to sample the visual frames for both egos, i.e., the cameras are initially on.

Multi-modal policy embedding. To process ego Ei's visual input Vi,t−1 from the last step, we encode the RGB image V^R_{i,t−1} and map V^M_{i,t−1} with separate CNNs. We then concatenate the two features to generate the visual embedding vi,t−1. To encode the pose inputs {Pi,t−1, P′i,t−1, Pi,t, P′i,t}, we use a linear layer and generate pose embeddings {pi,t−1, p′i,t−1, pi,t, p′i,t}. We process the speech inputs {Si,t−1, S′i,t−1, Si,t, S′i,t} using another CNN and create speech embeddings {si,t−1, s′i,t−1, si,t, s′i,t}. Next, we fuse the visual, speech, and pose embeddings using linear layers (see Fig. 2, left, for details) to obtain the multi-modal policy embedding ei,t for Ei. Finally, we fuse the policy embeddings for the two egos, e1,t and e2,t, with a linear layer to produce the multi-modal policy embedding et.
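A PyTorch-style sketch of this per-ego policy input encoder is given below. The layer sizes, channel counts, and the exact fusion scheme are illustrative assumptions rather than the architecture used here.

import torch
import torch.nn as nn

class PolicyInputEncoder(nn.Module):
    def __init__(self, d=512, speech_channels=9):
        super().__init__()
        def cnn(c_in):
            # Tiny stand-in for the visual/speech CNN encoders.
            return nn.Sequential(nn.Conv2d(c_in, 32, 4, 2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d))
        self.rgb_cnn, self.map_cnn, self.speech_cnn = cnn(3), cnn(2), cnn(speech_channels)
        self.pose_emb = nn.Linear(3, d)                 # (x, y, theta) -> pose embedding
        self.fuse = nn.Linear(10 * d, d)                # 2 visual + 4 pose + 4 speech embeddings

    def forward(self, rgb, occ_map, poses, speeches):
        # rgb/occ_map are all-zero tensors when the previous frame was skipped.
        v = torch.cat([self.rgb_cnn(rgb), self.map_cnn(occ_map)], dim=-1)
        p = [self.pose_emb(x) for x in poses]           # P_i,t-1, P'_i,t-1, P_i,t, P'_i,t
        s = [self.speech_cnn(x) for x in speeches]      # S_i,t-1, S'_i,t-1, S_i,t, S'_i,t
        return self.fuse(torch.cat([v, *p, *s], dim=-1))  # multi-modal embedding e_i,t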
The visual, audio, and pose inputs carry complementary cues required for efficient visual sampling. Whereas the pose inputs from the last and current steps explicitly reveal the viewpoint change between the steps, the previous and current speech inputs provide information about the changes in the local and global scene structures as a function of the previously sampled visual inputs, which together suggest the value of sampling a visual frame at the current step. Furthermore, guided by our training reward (below in Sec. 4.4), the previously observed visual frames and audio together enable our policy to anticipate the current frames and skip them if they are deemed redundant, thereby improving mapping accuracy for a low visual budget.

Policy network. The policy network consists of a GRU that estimates an updated history ht along with the current state representation gt, using the fused embedding et and the history of states ht−1. An actor-critic module takes gt and ht−1 as inputs and predicts a policy distribution πθ(ai,t|gt, ht−1) per ego along with the value of the state Hθ(gt, ht−1), where θ are the policy parameters. The policy samples an action ai,t ∈ {0, 1} for every Ei; ai,t = 1 corresponds to selecting Vi,t, and ai,t = 0 otherwise.
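A minimal sketch of such a policy network follows, assuming a single GRU cell and a Bernoulli parameterization of the per-ego action distribution; the hidden size and head design are assumptions.

import torch
import torch.nn as nn

class SamplingPolicy(nn.Module):
    def __init__(self, d=512, num_egos=2):
        super().__init__()
        self.gru = nn.GRUCell(d, d)
        self.actor = nn.Linear(2 * d, num_egos)    # one on/off logit per ego
        self.critic = nn.Linear(2 * d, 1)          # state value H_theta(g_t, h_{t-1})

    def forward(self, e_t, h_prev):
        g_t = self.gru(e_t, h_prev)                        # current state representation
        h_t = g_t                                          # updated history carried to the next step
        feats = torch.cat([g_t, h_prev], dim=-1)
        dist = torch.distributions.Bernoulli(logits=self.actor(feats))
        a_t = dist.sample()                                # a_i,t = 1 -> capture V_i,t
        return a_t, dist.log_prob(a_t), self.critic(feats), h_t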
4.3. Shared scene mapper

Whereas Oπ(t) denotes our policy input (Sec. 4.2), OM(t) denotes the input to our shared scene mapper f^M at step t, such that OM(t) = {(Vi,j, Si,j, S′i,j, Pi,j, P′i,j) : i = 1, 2; j = 1, ..., t}. f^M starts by embedding each component of OM(t) using a separate network. This is followed by a multi-modal memory that stores the embeddings since the start of the episode. Finally, a transformer [76] predicts an estimate M̃(t) of the scene map conditioned on the multi-modal memory and the egos' poses in the episode.

Multi-modal mapper embedding. For the visual input Vi,j, we encode V^R_{i,j} and V^M_{i,j} using separate CNNs and do a channel-wise concatenation to get visual features v̂i,j. Similarly, the speech is encoded using separate CNNs to get ŝi,j and ŝ′i,j. Each of v̂, ŝ, and ŝ′ is of size 4 × 4 × 1024. For both vision and speech, we compute two positional embeddings, pI and pII. They encode, respectively, the pose of the egos in the 3D space, and the index of each 1024-dimensional feature in the visual or speech features in raster order. Whereas pI helps discover spatial cues as a function of the egos' location in the 3D scene, pII enables our model to attend to different modalities in a more fine-grained manner. For both, we compute an 8-dimensional sinusoidal positional encoding [76] and then pass it through a linear layer to obtain a 1024-dimensional embedding.
For pII, we additionally repeat this process for every feature index in raster order. Lastly, we reshape pI and add it to pII to produce 4 × 4 × 1024-dimensional positional embeddings, ˆpi,j for ˆvi,j and ˆsi,j, and ˆp′i,j for ˆs′i,j. Following [49], we also learn an embedding ˆmi,j ∈ {ˆmV, ˆmS, ˆmS′} to capture the different modality types, where ˆmV represents vision, and ˆmS and ˆmS′ represent the speech from self and that of the other ego, respectively. The modality-based embeddings help our model differentiate between modalities and better map the scene by learning complementary spatial cues from them.

Multi-modal memory. For the visual input Vi,j, we add its embedding ˆvi,j to its positional embedding ˆpi,j and modality embedding ˆmV i,j, and flatten the sum to get a 16 × 1024-dimensional embedding. Similarly, we fuse the speech embeddings by taking their sum and flattening it. This generates a multi-modal memory of fused embeddings o, such that o = {oV 1,1, ..., oV 2,t, oS 1,1, ..., oS 2,t, oS′ 1,1, ..., oS′ 2,t}.
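As a rough illustration of how a single memory entry is formed from a 4 × 4 × 1024 feature map, its pose-based positional embedding, its per-index positional embedding, and its modality-type embedding, here is a sketch. The sinusoidal-encoding helper, pose dimensionality, and module names are assumptions that only mirror the description above.

```python
import math
import torch
import torch.nn as nn


def sinusoidal_encoding(values, dim=8):
    """values: (N, K) scalars (e.g., pose coordinates or raster indices)."""
    freqs = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    angles = values.unsqueeze(-1) * freqs                   # (N, K, dim/2)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)


class MemoryEntryBuilder(nn.Module):
    """Builds one fused 16 x 1024 memory entry o from a 4 x 4 x 1024 feature map."""

    def __init__(self, pose_dim=3, feat_hw=4, d=1024):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim * 8, d)     # p_I: pose-based embedding
        self.index_proj = nn.Linear(8, d)               # p_II: per-feature raster index
        self.modality = nn.Embedding(3, d)              # vision / own speech / other ego's speech
        self.feat_hw, self.d = feat_hw, d

    def forward(self, feat, pose, modality_id):
        # feat: (4, 4, 1024), pose: (pose_dim,), modality_id: 0, 1, or 2
        n = self.feat_hw * self.feat_hw
        p1 = self.pose_proj(sinusoidal_encoding(pose.unsqueeze(0)))     # (1, 1024)
        idx = torch.arange(n, dtype=torch.float32).unsqueeze(1)         # raster order
        p2 = self.index_proj(sinusoidal_encoding(idx))                  # (16, 1024)
        m = self.modality(torch.tensor(modality_id)).unsqueeze(0)       # (1, 1024)
        return feat.reshape(n, self.d) + p1 + p2 + m                    # (16, 1024)
```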
Occupancy prediction. To predict the underlying scene occupancy, we first use a transformer encoder [76] to attend to the embeddings in o and capture short- and long-range correlations within and across modalities using a stack of self-attention layers. This generates an audio-visual representation that models the spatial layout of the 3D scene. Next, we use a transformer decoder [76] to perform cross-attention on this audio-visual scene representation, conditioned on the embedding ˆpi,j for every pose Pi,j in OM(t), and generate an embedding di,j for the pose. Finally, we upsample di,j using a multi-layer network U comprising transpose convolutions and a sigmoid layer at the end, to predict an estimate ˜Mi,j of the ground-truth local 360° FoV map for the pose, Mi,j. Both Mi,j and its estimate ˜Mi,j are two-channel binary occupancy maps of size H × W. To obtain the estimated map ˜M(t) for the scene, we register each prediction ˜Mi,j onto a larger shared map using the pose Pi,j and threshold the final shared map at 0.5 (see Supp. Sec. 7.6 for map registration details). Importantly, the shared map allows communication between both egos' data streams for more informed mapping and sampling, as we show in the results.
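The encoder-decoder-upsampler pipeline can be summarized with standard PyTorch transformer modules as below. The layer counts, head counts, upsampler depth, and output resolution are placeholders, not the model's actual hyperparameters.

```python
import torch
import torch.nn as nn


class OccupancyPredictor(nn.Module):
    """Sketch: self-attention over the multi-modal memory, cross-attention per
    query pose, then transposed convolutions to a two-channel local map estimate."""

    def __init__(self, d=1024, heads=8, layers=4):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=heads, batch_first=True), layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=d, nhead=heads, batch_first=True), layers)
        self.upsample = nn.Sequential(                     # U: d -> 2 x 64 x 64 local map
            nn.ConvTranspose2d(d, 256, 4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(256, 64, 4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(64, 2, 4, stride=4), nn.Sigmoid())

    def forward(self, memory, pose_queries):
        # memory: (B, N, d) fused entries o; pose_queries: (B, Q, d) pose embeddings
        scene = self.encoder(memory)                       # audio-visual scene representation
        d_ij = self.decoder(pose_queries, scene)           # one embedding per query pose
        B, Q, d = d_ij.shape
        maps = self.upsample(d_ij.reshape(B * Q, d, 1, 1)) # (B*Q, 2, 64, 64)
        return maps.reshape(B, Q, 2, maps.shape[-2], maps.shape[-1])


# each predicted local map would then be registered onto the shared scene map
# using its pose, and the shared map thresholded at 0.5
```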
4.4. Model training

Policy training. We propose a novel dense RL reward to train the policy πV:

r(t) = ∆Q(t) − η · ρ(t).

∆Q(t) measures the improvement in mapping from taking the actions {ai,t : i = 1, 2} over not sampling any visual frame at step t. ρ(t) is a penalty term that discourages sampling a frame from the same pose more than once, and we weight it by η. We define ∆Q(t) as

∆Q(t) = Q(˜M(t) | OM(t)) − Q(˜M(t) | OM(t) \ Vt),

where Q is a map quality measure, Q(X | Y) represents the quality of map estimate X given inputs Y, and OM(t) \ Vt denotes the mapper inputs devoid of any visual frame for the current step. We define ρ(t) as

ρ(t) = Σi=1,2 ai,t · 1(Vi,t ∈ OM(t − 1)),

where the indicator function checks whether Vi,t was already used in mapping before. While ∆Q(t) incentivizes sampling frames that provide a big boost to mapping accuracy over skipping them, ρ(t) penalizes wasting the visual budget on redundant sampling, thereby maximizing mapping performance within the constraints of a limited budget. We set η = 0.03 in all our experiments and define Q as the average F1 score over the occupied and free classes in a predicted occupancy map.
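For concreteness, the per-step reward can be computed roughly as follows, using the class-averaged F1 score as the quality measure Q. The mapper_predict callable and the pose bookkeeping are hypothetical helpers, not part of any released interface.

```python
from sklearn.metrics import f1_score


def map_quality(pred_map, gt_map):
    """Q: mean F1 over the free/occupied classes of a binary occupancy map."""
    return f1_score(gt_map.reshape(-1), pred_map.reshape(-1), average="macro")


def step_reward(mapper_predict, inputs_t, inputs_wo_frames, gt_map,
                actions, frame_poses, past_poses, eta=0.03):
    """r(t) = dQ(t) - eta * rho(t) for one step of an episode."""
    delta_q = (map_quality(mapper_predict(inputs_t), gt_map)
               - map_quality(mapper_predict(inputs_wo_frames), gt_map))
    # rho(t): frames sampled at this step whose pose was already mapped earlier
    rho = sum(a * (pose in past_poses) for a, pose in zip(actions, frame_poses))
    return delta_q - eta * rho
```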
We train πV with Decentralized Distributed PPO (DD-PPO) [77]. The DD-PPO loss consists of a value loss, a policy loss, and an entropy loss to promote exploration (see Supp. Sec. 7.8.4 for details).

Mapper training. At each step t, we train the shared mapper f M with a loss LM(t), such that

LM(t) = 1/(2t) Σi=1,2 Σj=1,...,t BCE(˜Mi,j, Mi,j),

where BCE(˜Mi,j, Mi,j) is the average binary cross-entropy loss between ˜Mi,j and Mi,j.
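Under the notation above, LM(t) is a plain binary cross-entropy averaged over both egos and all steps up to t. A sketch, assuming the predicted and ground-truth local maps are stacked into tensors of shape (2, t, 2, H, W), is:

```python
import torch
import torch.nn.functional as F


def mapper_loss(pred_maps, gt_maps):
    """L_M(t) = 1/(2t) * sum_i sum_j BCE(M~_ij, M_ij).

    pred_maps: sigmoid outputs in [0, 1], shape (2, t, 2, H, W).
    gt_maps:   binary occupancy targets of the same shape.
    """
    two, t = pred_maps.shape[:2]
    per_map_bce = F.binary_cross_entropy(
        pred_maps, gt_maps.float(), reduction="none").mean(dim=(-3, -2, -1))
    return per_map_bce.sum() / (two * t)
```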
Training curriculum. To train our model, we first pretrain the mapper f M in two phases and then train the policy πV while keeping f M frozen. In phase 1, we train f M without visual sampling, i.e., all visual frames are provided at each step. In phase 2, we finetune the pretrained weights of f M from phase 1 on episodes where we randomly drop views to satisfy the budget B. Whereas phase 1 improves convergence when training with visual sampling, phase 2 helps with reward stationarity when training our RL policy.

5. Experiments

Experimental setup. For our main experiments, we use SoundSpaces [8] acoustic simulations with AI-Habitat [63] and Matterport3D [3] visual scenes. While Matterport3D provides dense 3D meshes and image scans of real-world houses and other indoor scenes, SoundSpaces provides room impulse responses (RIRs) for Matterport3D at a spatial resolution of 1 m that model all real-world acoustic phenomena [8]. This setup allows us to evaluate with as many as 83 scenes, split 56/10/17 for train/val/test, compare against relevant prior work [59, 60], and report reproducible results. We also collect real-world data in a mock-up apartment, due to the absence of a publicly available alternative suited for our task. We capture a dense set of RGB images using a Samsung S22 camera and generate the corresponding depth images using monocular depth estimation [22, 38]. To compute the RIRs, following [25], we generate a sinusoidal sweep from 20 Hz to 20 kHz with a loudspeaker at the source location, capture it with an Eigenmike at the receiver location, and convolve the recorded spatial sound with the inverse of the sweep to retrieve the RIR. All capturing devices are placed at a height of 1.5 m. We generate occupancy maps by back-projecting the depth images (cf. Sec. 4.1) and registering them onto a shared top-down map before taking egocentric crops to generate the local occupancy inputs and targets. Note that both datasets have real-world visuals as they are captured in real environments; SoundSpaces has simulated audio, while the apartment data has real-world collected audio RIRs.
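The sweep-based RIR measurement amounts to a standard deconvolution: play an exponential sine sweep, record it at the receiver, and convolve the recording with the time-reversed, amplitude-compensated inverse sweep. A simplified single-channel sketch is shown below; the actual capture uses an Eigenmike with per-capsule processing, and the parameter values here are only examples.

```python
import numpy as np
from scipy.signal import fftconvolve


def exponential_sweep(f0=20.0, f1=20000.0, duration=10.0, sr=48000):
    """Exponential sine sweep and its inverse filter (Farina-style)."""
    t = np.arange(int(duration * sr)) / sr
    rate = np.log(f1 / f0)
    sweep = np.sin(2 * np.pi * f0 * duration / rate * (np.exp(t / duration * rate) - 1))
    inverse = sweep[::-1] * np.exp(-t / duration * rate)   # amplitude compensation
    return sweep, inverse


def estimate_rir(recording, inverse_filter, sweep_len):
    """Deconvolve a recorded sweep to recover the room impulse response."""
    full = fftconvolve(recording, inverse_filter, mode="full")
    return full[sweep_len - 1:]          # keep the causal part


# usage with a synthetic "recording": the sweep filtered by a toy two-tap echo
sweep, inv = exponential_sweep(duration=2.0)
toy_rir = np.zeros(4800)
toy_rir[0], toy_rir[2400] = 1.0, 0.5
recording = fftconvolve(sweep, toy_rir)
rir_est = estimate_rir(recording, inv, len(sweep))
```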
Conversation episode. For each episode (both simulation and real), we randomly place the two egos in a scene. The episode length is T = 16 for simulation and T = 8 for the real data. At each step, the egos execute a movement from A = {MoveForward, TurnLeft, TurnRight}, where MoveForward moves an ego forward by 1 m and the Turn actions rotate the ego by 90°. Further, at every step either one of the egos speaks or both speak, each case with equal probability of 1/3, i.e., there are no moments of silence. The egos stay between 1 and 3 m from each other so that they don't collide and so that each ego is audible to the other at all times. This results in train/val splits of 1,955,334/100 episodes in simulation, and a simulated/real-world test split of 1000/27 episodes. The visual budget is B = 2 for our main experiments (see Supp. Sec. 7.3 for B = 4, 6 evaluations). Note that these episodes are simply a way to generate video data; our task requires processing passive video, not controlling embodied agents.
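The episode generator can be summarized as sampling, at each step, one movement per ego plus a speaker configuration. A simplified sketch under the constraints above follows; the collision and inter-ego-distance checks are stubbed out, and the function and field names are our own.

```python
import random

ACTIONS = ["MoveForward", "TurnLeft", "TurnRight"]


def sample_episode(num_steps=16, seed=0):
    """Generate the action/speaker schedule for one two-ego conversation episode."""
    rng = random.Random(seed)
    schedule = []
    for t in range(num_steps):
        moves = [rng.choice(ACTIONS) for _ in range(2)]       # one movement per ego
        # ego 1 speaks, ego 2 speaks, or both speak, each with probability 1/3
        speakers = rng.choice([(1,), (2,), (1, 2)])
        schedule.append({"step": t, "moves": moves, "speakers": speakers})
    return schedule


# in the full generator, moves that break the 1-3 m inter-ego distance
# constraint or cause collisions would be rejected and re-sampled
episode = sample_episode(num_steps=16)
```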
Observations and model output. For the occupancy maps, we generate 31 × 31 × 2-dimensional input maps that cover a 3.1 × 3.1 m² area [4, 11, 60] at a resolution of 0.1 m, and set the local target map size H × W to cover 6.4 × 6.4 m² (∼41 m²). For speech, we use 100 distinct speakers from LibriSpeech [55], split 80/11 for heard/unheard, where unheard speech is only used in testing. We assume access to correct camera poses, since modern AR devices are equipped with motion sensors that can robustly estimate relative poses [46]. We test our robustness to ambient sounds that get mixed with the egos' speech, and incorporate odometry noise models [59, 60] (see Supp. Sec. 7.4).

Evaluation settings. We evaluate our model in two settings: 1) passive mapping, where the mapper has access to all visual frames in an episode (i.e., the camera is always on), and 2) active mapping, where the mapping agent has to actively sample frames to meet the visual budget B. This helps disentangle our modeling contributions: whereas passive mapping lets us show improvements in the mapper f M over existing methods [59, 60], active mapping demonstrates the benefits of smart visual sampling. We use standard evaluation metrics [60]: the F1 score and IoU (intersection over union) between the predicted and target scene maps. For both metrics, we report the mean over the free and occupied classes. For active mapping, we average the metrics over 3 random seeds.
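Both metrics are computed per class and then averaged over the free and occupied classes; a small sketch of this evaluation, taking binary numpy maps as inputs, is given below.

```python
import numpy as np


def mean_f1_iou(pred, gt):
    """Class-averaged F1 and IoU between binary occupancy maps of equal shape."""
    f1s, ious = [], []
    for cls in (0, 1):                       # 0: free, 1: occupied
        tp = np.sum((pred == cls) & (gt == cls))
        fp = np.sum((pred == cls) & (gt != cls))
        fn = np.sum((pred != cls) & (gt == cls))
        f1s.append(2 * tp / max(2 * tp + fp + fn, 1))
        ious.append(tp / max(tp + fp + fn, 1))
    return float(np.mean(f1s)), float(np.mean(ious))


# example: f1, iou = mean_f1_iou(np.random.randint(0, 2, (64, 64)), np.ones((64, 64), int))
```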
We use the following baselines to compare our model's efficacy.

Passive mapping:
- All-occupied: a naive baseline that predicts all locations in its map estimate as occupied.
- Register-inputs: a naive baseline that registers the input maps onto a shared map and uses it as its prediction.
- OccAnt [60]: a vision-only SOTA model that uses the RGB-D images at each step to anticipate the occupancy of the area around an ego that is outside its visible range.
- AV-Floorplan [59]: an audio-visual SOTA model that passively predicts the floorplan of a scene using a walkthrough in it, where the audio is either self-generated or comes from semantic sources in the scene. We adapt the model to our occupancy prediction task and give it the exact same audio-visual observations as our model.

                              Simulation            Real world
Model                         F1 score ↑  IoU ↑     F1 score ↑  IoU ↑
All-occupied                  63.4        48.8      36.2        23.8
Register-inputs               72.6        60.1      50.8        35.0
OccAnt [60]                   74.5        62.7      53.9        38.3
AV-Floorplan [59]             79.3        67.9      54.5        38.7
Ours                          81.8        71.4      55.5        39.2
Ours w/o vision               72.8        60.3      50.8        35.0
Ours w/o audio                78.1        66.7      54.1        38.0
Ours w/o E′i's speech         81.5        70.9      55.4        39.1
Ours w/o shared mapping       80.7        70.0      54.9        38.6
Table 1. Passive mapping performance (%).

Model                         F1 score ↑  IoU ↑
All-occupied                  63.4        48.8
Register-inputs               72.6        60.1
OccAnt [60]                   74.5        62.7
AV-Floorplan [59]             78.7        67.5
Ours                          81.9        71.5
Ours w/o vision               73.5        61.2
Ours w/o audio                78.1        66.7
Ours w/o E′i's speech         81.5        70.9
Ours w/o shared mapping       80.0        69.1
Table 2. Passive mapping performance (%) with ambient sounds.

Active mapping:
- Random: an agent that selects visual frames randomly for each ego as long as the budget allows.
- Greedy: an agent that greedily uses up the visual budget by sampling frames as early as possible.
- Unique-pose: an agent that samples a frame for every new ego pose in the episode.
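These three sampling heuristics are easy to pin down in code; a sketch of per-step decision rules for a single ego under an episode-level budget counter follows. The class structure, and in particular the way Random spreads its budget uniformly over the steps, are our own illustrative interpretation.

```python
import random


class HeuristicSampler:
    """Per-ego frame-sampling heuristics used as active-mapping baselines."""

    def __init__(self, strategy, budget, num_steps):
        self.strategy, self.budget = strategy, budget
        self.num_steps, self.used = num_steps, 0
        self.seen_poses = set()

    def decide(self, step, pose):
        """Return 1 to sample the current frame, 0 to skip it."""
        if self.used >= self.budget:
            return 0
        if self.strategy == "greedy":            # spend the budget as early as possible
            take = 1
        elif self.strategy == "random":          # roughly uniform chance at every step
            take = int(random.random() < self.budget / self.num_steps)
        elif self.strategy == "unique-pose":     # sample only at previously unseen poses
            take = int(pose not in self.seen_poses)
        else:
            raise ValueError(self.strategy)
        if take:
            self.used += 1
            self.seen_poses.add(pose)
        return take
```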
In active mapping, we use the model from the second pretraining phase (Sec. 4.4) as the mapper for all models for fair comparison. Thus, any difference in performance is due to the quality of each method's sampling decisions. See Supp. for all other details, such as network architectures and training hyperparameters (Sec. 7.8) and baseline implementations (Sec. 7.7).

5.1. Map prediction results

Passive mapping. Table 1 (top) reports the prediction quality of all models in the passive mapping setting. The naive baselines (All-occupied, Register-inputs) perform worse than the learned models, showing the complexity of our map prediction task. AV-Floorplan [59] fares the best among all baselines. Its improvement over OccAnt [60] demonstrates the benefits of exploiting the spatial cues in audio for mapping and of using an attention-based model to leverage the long- and short-range correlations in the audio-visual inputs. Our method outperforms all baselines. Its improvement over AV-Floorplan [59] underlines the efficacy of performing attention at different granularities (across modalities, within a single modality, and within a single input), guided by our positional and modality-type embeddings. It also generalizes to the real-world setting and retains its benefits over the baselines, even without retraining on the real-world data. However, we do observe a drop in performance gains, probably due to the large sim-to-real gap.

Figure 3. Active mapping performance (mean F1 score, %) vs. episode step: (a) simulation, (b) real world.

Figure 4. (a) Effect of ambient environment sounds on active mapping. (b) Impact of the other ego's speech on passive mapping vs. the distance between the egos.

Active mapping. Fig. 3 shows the active mapping performance as a function of episode progress. Employing naive heuristics for sampling, like Random or Greedy, is not enough for high-quality mapping, which emphasizes the high level of redundancy in the visual frames.
Unique-pose improves over both Random and Greedy, showing that sampling diverse viewpoints provides more information about the underlying scene geometry. Even though the baselines make progress initially, they flatten out quickly, and our model eventually outperforms them all on both real-world and simulated data. This highlights the benefits of learning a smart policy that, given the audio streams and its visual samples from the past, understands the value of sampling a visual frame for mapping by taking cues from our novel reward. Moreover, on the real-world data, we see improved performance margins over the baselines towards the end of episodes, showing that our policy can adaptively postpone visual sampling to improve mapping. Owing to our smart sampling, the per-episode reduction in processing for B = 2 is 7.2 GFLOPS in simulation and 3.6 GFLOPS on the real-world data.

Figure 5. Sample episodes for our active mapping model. While our policy samples only the salient visual frames, our mapper can both complete partially seen objects and anticipate objects never seen before in the sampled visuals (red boxes on the maps).

5.2. Model analysis

Ablations. In Table 1 (bottom), we ablate the components of our model for passive mapping.
Upon removing audio, our model experiences a large drop in mapping performance, which indicates that it leverages complementary spatial cues in audio and vision. We also see a drop in map quality when our model doesn't have access to the speech from the other ego (E′i). This shows that E′i's speech can better reveal the more global scene geometry than Ei's own speech. Fig. 4b further shows that the impact of the other ego's speech becomes more prominent for larger inter-ego distances (3−5 m vs. 1−3 m), in which case the two types of speech are dissimilar enough to carry complementary geometric cues, but reduces for even larger distances (5 m or more), in which case E′i is too far for its speech to carry useful cues about Ei's local scene geometry. Moreover, unlike the ablation that doesn't perform shared mapping, our model benefits significantly from jointly attending to the observations of both egos and exploiting the complementary information in them, even though both models use the exact same audio-visual observations, including speech from self and the other ego.

For active mapping, Fig. 3 shows a drop in mapping performance upon removing audio from the policy inputs. This implies that our policy exploits audio to reason about the level of redundancy in a new visual frame and improve the mapping quality vs. visual budget tradeoff. On the more challenging real-world setting, audio plays an even bigger role, as shown by the larger performance drop in Fig. 3b.
See Supp. for similar results with 1) unheard speech (Sec. 7.2), 2) higher values of the budget B (Sec. 7.3), 3) sensor noise (Sec. 7.4), and 4) larger target map sizes (Sec. 7.5).

Ambient and background sounds. We also test our model's robustness to ambient and background sounds by inserting a non-speech sound (e.g., a running AC, a dog barking, etc.) at a random location outside the egos' trajectories. Although this setting is quite challenging, our model performs better than the baselines for both passive (Table 2) and active mapping (Fig. 4a). Hence, even without explicit audio separation, our model is able to implicitly ground its audio representations in the corresponding pose features for accurate mapping.

Qualitative results. Fig. 5 shows two successful active mapping episodes of our method.
Note how our model samples views that tend to have little visual overlap but are informative of the surrounding geometry (both occupied and free spaces). Besides, it is able to complete structures only partially visible in the sampled views and, more interestingly, leverage the synergy of audio and vision to anticipate unseen areas (red boxes on the occupancy maps in Fig. 5).

Failure cases. We notice two common failure cases with active mapping: episodes where the people stay at the same location, leading to very few informative visual frames to sample from; and episodes with highly unique visual samples at every trajectory step, in which case each sample is useful and our model behaves similarly to Unique-pose or Greedy. For passive mapping, our model fails with very complex scenes that commonly have objects in spaces where neither vision nor audio can reach (e.g., narrow corners).

6. Conclusion

We introduce Chat2Map, a new task aimed at scene mapping using audio-visual feeds from egocentric conversations. We develop a novel approach for Chat2Map comprised of a shared scene mapper and a visual sampling policy based on a novel reinforcement learner that smartly samples the visuals only when necessary. We show promising performance on both simulated and real-world data from over 80 environments.

References

[1] Ego-noise predictions for echolocation in wheeled robots. In ALIFE 2019: The 2019 Conference on Artificial Life, July 2019.
[2] Aaron Carroll and Gernot Heiser. An analysis of power consumption in a smartphone. In 2010 USENIX Annual Technical Conference (USENIX ATC 10), 2010.
[3] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB-D data in indoor environments. arXiv preprint arXiv:1709.06158, 2017.
[4] Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, and Ruslan Salakhutdinov. Learning to explore using active neural SLAM. In International Conference on Learning Representations, 2020.
[5] Devendra Singh Chaplot, Dhiraj Prakashchand Gandhi, Abhinav Gupta, and Russ R Salakhutdinov. Object goal navigation using goal-oriented semantic exploration. Advances in Neural Information Processing Systems, 33:4247–4258, 2020.
[6] Changan Chen, Ziad Al-Halah, and Kristen Grauman. Semantic audio-visual navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15516–15525, 2021.
[7] Changan Chen, Ruohan Gao, Paul T. Calamia, and Kristen Grauman. Visual acoustic matching. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18836–18846, 2022.
[8] Changan Chen, Unnat Jain, Carl Schissler, Sebastia Vicenc Amengual Gari, Ziad Al-Halah, Vamsi Krishna Ithapu, Philip Robinson, and Kristen Grauman. SoundSpaces: Audio-visual navigation in 3D environments. In ECCV, 2020.
[9] Changan Chen, Sagnik Majumder, Ziad Al-Halah, Ruohan Gao, Santhosh Kumar Ramakrishnan, and Kristen Grauman. Learning to set waypoints for audio-visual navigation. In International Conference on Learning Representations, 2021.
[10] Jiacheng Chen, Chen Liu, Jiaye Wu, and Yasutaka Furukawa. Floor-SP: Inverse CAD for floorplans by sequential room-wise shortest path. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2661–2670, 2019.
[11] Tao Chen, Saurabh Gupta, and Abhinav Gupta. Learning exploration policies for navigation. arXiv preprint arXiv:1903.01959, 2019.
[12] Yangyu Chen, Shuhui Wang, Weigang Zhang, and Qingming Huang. Less is more: Picking informative frames for video captioning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 358–373, 2018.
[13] Sungjoon Choi, Qian-Yi Zhou, and Vladlen Koltun. Robust reconstruction of indoor scenes. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5556–5565, 2015.
[14] Jesper Christensen, Sascha Hornauer, and Stella Yu. BatVision: Learning to see 3D spatial layout with two ears. In ICRA, 2020.
[15] Jesper Haahr Christensen, Sascha Hornauer, and Stella X. Yu. BatVision: Learning to see 3D spatial layout with two ears. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 1581–1587, 2020.
[16] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=', 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 16 [17] Abhishek Das, Théophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Michael G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Rabbat, and Joelle Pineau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Tarmac: Targeted multi-agent communication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' ArXiv, abs/1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content='11187, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [18] Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, and Devi Parikh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Episodic memory question answering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1 [19] Victoria Dean, Shubham Tulsiani, and Abhinav Gupta.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' See, hear, explore: Curiosity via audio-visual association.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Ad- vances in Neural Information Processing Systems, 33:14961– 14972, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [20] Helisa Dhamo, Nassir Navab, and Federico Tombari.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Object- driven multi-layer scene decomposition from a single image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5369–5378, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [21] Ivan Dokmani´c, Reza Parhizkar, Andreas Walther, Yue M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Lu, and Martin Vetterli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Acoustic echoes reveal room shape.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Proceedings of the National Academy of Sciences, 110(30):12186–12191, 2013.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 3 [22] Ainaz Eftekhar, Alexander Sax, Jitendra Malik, and Amir Zamir.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10786–10796, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 6 [23] Amine Elhafsi, Boris Ivanovic, Lucas Janson, and Marco Pavone.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Map-predictive motion planning in unknown environ- ments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 8552–8558.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' IEEE, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [24] Itamar Eliakim, Zahi Cohen, Gábor Kósa, and Yossi Yovel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' A fully autonomous terrestrial bat-like acoustic robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' PLoS Computational Biology, 14, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1, 2 [25] Angelo Farina.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Simultaneous measurement of impulse re- sponse and distortion with a swept-sine technique.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Journal of The Audio Engineering Society, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 6 [26] Chuang Gan, Jeremy Schwartz, Seth Alter, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek Bhandwaldar, Nick Haber, Megumi Sano, Kuno Kim, Elias Wang, Damian Mrowca, Michael Lingelbach, Aidan Curtis, 9 Kevin T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Feigelis, Daniel M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Bear, Dan Gutfreund, David D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Cox, James J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' DiCarlo, Josh H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' McDermott, Joshua B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Tenen- baum, and Daniel L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Yamins.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Threedworld: A platform for interactive multi-modal physical simulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In NeurIPS Track on Datasets and Benchmarks, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1 [27] Chuang Gan, Yiwei Zhang, Jiajun Wu, Boqing Gong, and Joshua B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Tenenbaum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Look, listen, and act: Towards audio- visual embodied navigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 9701– 9707, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [28] Ruohan Gao, Changan Chen, Ziad Al-Halah, Carl Schissler, and Kristen Grauman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Visualechoes: Spatial image repre- sentation learning through echolocation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, edi- tors, Computer Vision – ECCV 2020, pages 658–676, Cham, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Springer International Publishing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1, 2 [29] Ruohan Gao, Tae-Hyun Oh, Kristen Grauman, and Lorenzo Torresani.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Listen to look: Action recognition by previewing audio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10454–10464, 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 3 [30] Kristen Grauman,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Andrew Westbury,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Eugene Byrne,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Zachary Chavis,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Antonino Furnari,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Rohit Girdhar,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Jackson Hamburger,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Hao Jiang,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Miao Liu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Xingyu Liu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Miguel Martin,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Tushar Nagarajan,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Ilija Radosavovic,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Santhosh Kumar Ramakrish- nan,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Fiona Ryan,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Jayant Sharma,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Michael Wray,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Meng- meng Xu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Eric Zhongcong Xu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Chen Zhao,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Siddhant Bansal,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Dhruv Batra,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Vincent Cartillier,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Sean Crane,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Tien Do,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Morrie Doulaty,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Akshay Erapalli,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Christoph Feichtenhofer,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Adriano Fragomeni,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Qichen Fu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Christian Fuegen,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Abrham Gebrese- lasie,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Cristina Gonzalez,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' James Hillis,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Xuhua Huang,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Yifei Huang,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Wenqi Jia,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Weslie Khoo,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Jachym Kolar,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Satwik Kot- tur,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Anurag Kumar,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Federico Landini,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Chao Li,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Yanghao Li,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Zhenqiang Li,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Karttikeya Mangalam,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Raghava 
Modhugu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Jonathan Munro,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Tullie Murrell,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Takumi Nishiyasu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Will Price,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Paola Ruiz Puentes,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Merey Ramazanova,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Leda Sari,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Kiran Somasundaram,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Audrey Southerland,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Yusuke Sugano,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Ruijie Tao,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Minh Vo,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Yuchen Wang,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Xindi Wu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Takuma Yagi,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Yunyi Zhu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Pablo Arbelaez,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' David Crandall,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Dima Damen,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Giovanni Maria Farinella,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Bernard Ghanem,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Vamsi Krishna Ithapu,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, and Jitendra Ma- lik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Ego4d: Around the World in 3,000 Hours of Egocentric Video.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In IEEE/CVF Computer Vision and Pattern Recogni- tion (CVPR), 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1 [31] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Delving deep into rectifiers: Surpassing human-level per- formance on imagenet classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 16 [32] Sergey Ioffe and Christian Szegedy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Batch normalization: Accelerating deep network training by reducing internal co- variate shift.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Francis Bach and David Blei, editors, Pro- ceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Re- search, pages 448–456, Lille, France, 07–09 Jul 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 15, 16 [33] Shariq Iqbal and Fei Sha.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Coordinated exploration via in- trinsic rewards for multi-agent reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' arXiv preprint arXiv:1905.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content='12127, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [34] Max Jaderberg, Wojciech M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio García Castañeda, Charlie Beat- tie, Neil C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Rabinowitz, Ari S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Morcos, Avraham Ruder- man, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, and Thore Graepel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Human-level performance in 3d multi- player games with population-based reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Science, 364:859 – 865, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [35] Unnat Jain, Luca Weihs, Eric Kolve, Ali Farhadi, Svetlana Lazebnik, Aniruddha Kembhavi, and Alexander G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Schwing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' A cordial sync: Going beyond marginal policies for multi- agent embodied tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' ArXiv, abs/2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content='04979, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [36] Unnat Jain, Luca Weihs, Eric Kolve, Mohammad Rastegari, Svetlana Lazebnik, Ali Farhadi, Alexander G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Schwing, and Aniruddha Kembhavi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Two body problem: Collaborative visual task completion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2019 IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 6682– 6692, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [37] Hao Jiang, Calvin Murdock, and Vamsi Krishna Ithapu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Ego- centric deep multi-channel audio-visual active speaker local- ization.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10534–10542, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 15 [38] O˘guzhan Fatih Kar, Teresa Yeo, Andrei Atanov, and Amir Zamir.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 3d common corruptions and data augmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, pages 18963–18974, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 6 [39] Kapil Katyal, Katie Popek, Chris Paxton, Phil Burlina, and Gregory D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Hager.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Uncertainty-aware occupancy map pre- diction using generative networks for robot navigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In 2019 International Conference on Robotics and Automation (ICRA), pages 5453–5459, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [40] Hansung Kim, Luca Remaggi, Philip JB Jackson, Fil- ippo Maria Fazi, and Adrian Hilton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 3d room geometry recon- struction using audio-visual sensors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In 2017 International Conference on 3D Vision (3DV), pages 621–629, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [41] Diederik P Kingma and Jimmy Ba.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Adam: A method for stochastic optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' arXiv preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content='6980, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 16 [42] Bruno Korbar, Du Tran, and Lorenzo Torresani.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Scsampler: Sampling salient clips from video for efficient action recogni- tion.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 6231–6241, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 3 [43] Jintao Lin, Haodong Duan, Kai Chen, Dahua Lin, and Limin Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Ocsampler: Compressing videos to one clip with single-step sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition, pages 13894–13903, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 3 [44] David B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Lindell, Gordon Wetzstein, and Vladlen Koltun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Acoustic non-line-of-sight imaging.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In 2019 IEEE/CVF Con- ference on Computer Vision and Pattern Recognition (CVPR), pages 6773–6782, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1, 2 [45] Chen Liu, Jiaye Wu, and Yasutaka Furukawa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Floornet: A uni- fied framework for floorplan reconstruction from 3d scans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the European conference on computer vision (ECCV), pages 201–217, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1, 2 [46] Wenxin Liu, David Caruso, Eddy Ilg, Jing Dong, Anastasios I Mourikis, Kostas Daniilidis, Vijay Kumar, and Jakob Engel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 10 Tlio: Tight learned inertial odometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' IEEE Robotics and Automation Letters, 5(4):5653–5660, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 6 [47] Sagnik Majumder, Ziad Al-Halah, and Kristen Grauman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Move2hear: Active audio-visual source separation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Pro- ceedings of the IEEE/CVF International Conference on Com- puter Vision, pages 275–285, 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [48] Sagnik Majumder, Ziad Al-Halah, and Kristen Grauman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Ac- tive audio-visual separation of dynamic sound sources.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In European Conference on Computer Vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Springer, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [49] Sagnik Majumder, Changan Chen, Ziad Al-Halah, and Kris- ten Grauman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Few-shot audio-visual learning of environment acoustics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Thirty-Sixth Conference on Neural Information Processing Systems, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2, 5 [50] Vinod Nair and Geoffrey E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Hinton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Rectified Linear Units Improve Restricted Boltzmann Machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the 27th International Conference on Machine Learning, pages 807–814.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Omnipress, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 15, 16 [51] Medhini Narasimhan, Erik Wijmans, Xinlei Chen, Trevor Dar- rell, Dhruv Batra, Devi Parikh, and Amanpreet Singh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Seeing the un-scene: Learning amodal semantic maps for room navi- gation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In European Conference on Computer Vision, pages 513–529.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Springer, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [52] Simon T O’Callaghan and Fabio T Ramos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Gaussian process occupancy maps*.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Int.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Rob.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=', 31(1):42–62, jan 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [53] Brian Okorn, Xuehan Xiong, and Burcu Akinci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Toward automated modeling of floor plans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In 3D PVT, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1, 2 [54] Andrew Owens, Phillip Isola, Josh McDermott, Antonio Tor- ralba, Edward H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Adelson, and William T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Freeman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Visually indicated sounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2405–2413, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [55] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Librispeech: An asr corpus based on public domain audio books.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 6 [56] Rameswar Panda, Chun-Fu Richard Chen, Quanfu Fan, Xi- meng Sun, Kate Saenko, Aude Oliva, and Rogerio Feris.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Adamml: Adaptive multi-modal learning for efficient video recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7576–7585, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 3 [57] Shivansh Patel, Saim Wani, Unnat Jain, Alexander G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Schwing, Svetlana Lazebnik, Manolis Savva, and Angel X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Chang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Interpretation of emergent communication in het- erogeneous collaborative embodied agents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 15993–15943, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 2 [58] Katharine Patterson, Kevin W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Wilson, Scott Wisdom, and John R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Hershey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Distance-based sound separation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In IN- TERSPEECH, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 15 [59] Senthil Purushwalkam, Sebastia Vicenc Amengual Gari, Vamsi Krishna Ithapu, Carl Schissler, Philip Robinson, Ab- hinav Gupta, and Kristen Grauman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Audio-visual floorplan reconstruction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Interna- tional Conference on Computer Vision, pages 1183–1192, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1, 2, 3, 6, 7, 13, 14 [60] Santhosh K Ramakrishnan, Ziad Al-Halah, and Kristen Grau- man.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Occupancy anticipation for efficient exploration and navigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' In European Conference on Computer Vision, pages 400–418.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Springer, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' 1, 2, 6, 7, 13, 14 [61] Fabio Ramos and Lionel Ott.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE0T4oBgHgl3EQfPwDB/content/2301.02184v1.pdf'} +page_content=' Hilbert maps: Scalable con- tinuous occupancy mapping with stochastic gradient descent.' 
The International Journal of Robotics Research, 35(14):1717–1730, 2016. 2
[62] João Machado Santos, David Portugal, and Rui P. Rocha. An evaluation of 2D SLAM techniques available in Robot Operating System. In 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pages 1–6, 2013. 2
[63] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, et al. Habitat: A platform for embodied AI research. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9339–9347, 2019. 6, 14
[64] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2014. 16
[65] Carl Schissler, Christian Loftin, and Dinesh Manocha. Acoustic classification and optimization for multi-modal rendering of real-world scenes. IEEE Transactions on Visualization and Computer Graphics, 24:1246–1259, 2018. 2
[66] John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. CoRR, 2016. 16
[67] Ransalu Senanayake, Thushan Ganegedara, and Fabio Ramos. Deep occupancy maps: A continuous mapping technique for dynamic environments. In NIPS 2017 Workshop MLITS, 2017. 2
[68] Rakesh Shrestha, Fei-Peng Tian, Wei Feng, Ping Tan, and Richard Vaughan. Learned map prediction for enhanced mobile robot exploration. In 2019 International Conference on Robotics and Automation (ICRA), pages 1197–1204, 2019. 2
[69] Jascha Sohl-Dickstein, Santani Teng, Benjamin M. Gaub, Chris C. Rodgers, Crystal Li, Michael R. DeWeese, and Nicol S. Harper. A device for human ultrasonic echolocation. IEEE Transactions on Biomedical Engineering, 62(6):1526–1534, 2015. 1, 2
[70] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(56):1929–1958, 2014. 16
[71] Wei Sui, Lingfeng Wang, Bin Fan, Hongfei Xiao, Huaiyu Wu, and Chunhong Pan. Layer-wise floorplan extraction for automatic urban building reconstruction. IEEE Transactions on Visualization and Computer Graphics, 22(3):1261–1277, 2016. 2
[72] Maitreya Suin and A. N. Rajagopalan. An efficient framework for dense video captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07):12039–12046, Apr. 2020. 3
[73] Cheng Sun, Chi-Wei Hsiao, Min Sun, and Hwann-Tzong Chen. HorizonNet: Learning room layout with 1D representation and pano stretch data augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1047–1056, 2019. 2
[74] Yi Sun, Xiaogang Wang, and Xiaoou Tang. Deeply learned face representations are sparse, selective, and robust. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2892–2900, 2015. 15, 16
[75] Ryu Takeda, Yoshiki Kudo, Kazuki Takashima, Yoshifumi Kitamura, and Kazunori Komatani. Unsupervised adaptation of neural networks for discriminative sound source localization with eliminative constraint. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3514–3518, 2018. 14
[76] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017. 5, 16
[77] Erik Wijmans, Abhishek Kadian, Ari S. Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra. DD-PPO: Learning near-perfect PointGoal navigators from 2.5 billion frames. In ICLR, 2020. 6, 16
[78] Wenming Wu, Xiao-Ming Fu, Rui Tang, Yuhan Wang, Yu-Hao Qi, and Ligang Liu. Data-driven interior plan generation for residential buildings. ACM Trans. Graph., 38(6), Nov. 2019. 2
[79] Zuxuan Wu, Caiming Xiong, Chih-Yao Ma, Richard Socher, and Larry S. Davis. AdaFrame: Adaptive frame selection for fast video recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1278–1287, 2019. 3
[80] Shang-Ta Yang, Fu-En Wang, Chi-Han Peng, Peter Wonka, Min Sun, and Hung-Kuo Chu. DuLa-Net: A dual-projection network for estimating room layouts from a single RGB panorama. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3363–3372, 2019. 2
[81] Mao Ye, Yu Zhang, Ruigang Yang, and Dinesh Manocha. 3D reconstruction in the presence of glasses by acoustic and stereo fusion. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4885–4893, 2015. 2
[82] Serena Yeung, Olga Russakovsky, Greg Mori, and Li Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2678–2687, 2016. 3
[83] Yinfeng Yu, Wenbing Huang, Fuchun Sun, Changan Chen, Yikai Wang, and Xiaohong Liu. Sound adversarial audio-visual navigation. In International Conference on Learning Representations, 2022. 2
[84] Zhoutong Zhang, Jiajun Wu, Qiujia Li, Zhengjia Huang, James Traer, Josh H. McDermott, Joshua B. Tenenbaum, and William T. Freeman. Generative modeling of audible shapes for object perception. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 1260–1269, 2017. 2
[85] Chuhang Zou, Alex Colburn, Qi Shan, and Derek Hoiem. LayoutNet: Reconstructing the 3D room layout from a single RGB image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2051–2059, 2018. 2
[86] Chuhang Zou, Jheng-Wei Su, Chi-Han Peng, Alex Colburn, Qi Shan, Peter Wonka, Hung-Kuo Chu, and Derek Hoiem. Manhattan room layout reconstruction from a single 360° image: A comparative study of state-of-the-art methods. International Journal of Computer Vision, 129(5):1410–1431, 2021. 2

7. Supplementary Material

In this supplementary material we provide additional details about:
• Video (with audio) for qualitative illustration of our task and qualitative assessment of our map predictions (Sec. 7.1).
• Experiment to show the effect of unheard sounds (Sec. 5 in main) on map predictions (Sec. 7.2), as noted in Sec. 5.2 in main.
• Experiment to show the impact of the visual budget B (Sec. 3 in main) on mapping quality (Sec. 7.3), as referenced in Sec. 5 and 5.2 in main.
• Experiment to show the effect of sensor noise on mapping accuracy (Sec. 7.4), as mentioned in Sec. 5 and 5.2 in main.
• Experiment to show mapping performance as a function of the target map size (Sec. 7.5), as noted in Sec. 5.2 in main.
• Dataset details (Sec. 7.6), in addition to what's provided in Sec. 5 in main.
• Additional baseline details for reproducibility (Sec. 7.7), as referenced in Sec. 5 in main.
• Architecture and training details (Sec. 7.8), as noted in Sec. 5 in main.
7.1. Supplementary video
The supplementary video qualitatively depicts our task, Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations. Moreover, we qualitatively show our model's mapping quality by comparing the predictions against the ground truths, and the visual samples chosen by our sampling policy for efficient mapping. Please use headphones to hear the spatial audio correctly. We also demonstrate the acoustically realistic SoundSpaces [8] audio simulation platform that we use for our core experiments. The video is available at http://vision.cs.utexas.edu/projects/chat2map.
7.2. Unheard sounds
In Sec. 5.1 in main, we showed results with heard sounds (Sec. 5 in main), i.e., the anechoic speech sounds uttered by the egos are shared between the train and test splits. However, due to our use of unseen environments in test (Sec. 5 in main), the spatial speech sounds input to our model during test are not heard in training. To make the evaluation even more challenging, we conduct a parallel experiment here, where even the anechoic speech is distinct from what's used in training, which we call the unheard sound setting (Sec. 5 in main).

Table 3. Passive mapping performance (%) on unheard sounds.
Model                      F1 score ↑   IoU ↑
All-occupied               63.4         48.8
Register-inputs            72.6         60.1
OccAnt [60]                74.5         62.7
AV-Floorplan [59]          79.0         67.7
Ours                       81.6         71.1
Ours w/o vision            72.6         60.1
Ours w/o audio             78.1         66.7
Ours w/o E′i's speech      81.3         70.7
Ours w/o shared mapping    80.7         70.0

Figure 6. Active mapping performance vs. episode step on unheard sounds (mean F1 score in % over episode steps 1–16 for Random, Unique pose, Greedy, Ours w/o audio for V, and Ours).

Table 3 shows our passive mapping results in the unheard sound setting. Our model is able to retain its performance margins over all baselines even in this more challenging scenario. We notice a similar trend upon evaluating our model for active mapping on unheard sounds. Fig. 6 shows that our model is able to generalize to novel sounds better than all baselines. This indicates that both our mapper fM and visual sampling policy πV are able to learn useful spatial cues from audio that are agnostic of the speech content and semantics.
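The F1 score and IoU reported here (and in the main paper) measure the overlap between a predicted and a ground-truth occupancy map. The snippet below is a minimal, generic sketch of how such map-level metrics can be computed for a single binary map; it is for illustration only and may differ from our exact evaluation code, e.g., in how the metrics are averaged over the occupied/free classes and over episodes.

```python
import numpy as np

def occupancy_f1_iou(pred, gt):
    """F1 score and IoU between two binary occupancy maps.

    pred, gt: arrays of the same shape; nonzero/True means "occupied".
    Generic definitions for a single class; averaging conventions are
    left to the caller.
    """
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)

    tp = np.logical_and(pred, gt).sum()     # predicted occupied, truly occupied
    fp = np.logical_and(pred, ~gt).sum()    # predicted occupied, truly free
    fn = np.logical_and(~pred, gt).sum()    # predicted free, truly occupied

    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    iou = tp / max(tp + fp + fn, 1)
    return f1, iou
```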
7.3. Visual budget value
So far, we have shown active mapping results with the visual budget set to B = 2 (Sec. 5.1 and Fig. 3 in main). To analyze the effect of larger values of B, we show our active mapping performance for B ∈ {4, 6} in Fig. 7. Our model outperforms all baselines even for these larger B values. We also observe that the lower the visual budget, the higher our model's performance margins. This shows that our model is particularly robust to the lack of visuals in extremely low-resource settings.

Figure 7. Active mapping performance vs. episode step with B ∈ {4, 6}: (a) B = 4, (b) B = 6 (mean F1 score in % for Random, Unique pose, Greedy, Ours w/o audio for V, and Ours).

Table 4. Passive mapping performance (%) with sensor noise.
Model                      F1 score ↑   IoU ↑
All-occupied               63.0         48.3
Register-inputs            72.3         59.7
OccAnt [60]                74.7         63.0
AV-Floorplan [59]          77.6         65.8
Ours                       79.1         68.0
Ours w/o vision            72.6         60.0
Ours w/o audio             76.7         65.1
Ours w/o E′i's speech      78.8         67.7
Ours w/o shared mapping    78.5         67.2

Figure 8. Active mapping performance vs. episode step with sensor noise (mean F1 score in % for Random, Unique pose, Greedy, Ours w/o audio for V, and Ours).

7.4. Sensor noise
Here, we test our model's robustness to sensor noise by adding noise of the appropriate type individually to each sensor. For RGB images, we sample the noise from a Gaussian distribution with a mean of 0 and a standard deviation of 0.2 [60, 63]. For depth, we use the Redwood depth noise model [13, 60, 63], where the amount of noise is higher for higher depth values and vice versa. Following [60], we sample pose noise from a truncated Gaussian with a mean of 0.025 m and a standard deviation of 0.001 m for the spatial location component of an ego pose ((x, y) in Sec. 3 in main). For orientation θ (Sec. 3 in main), we use another truncated Gaussian with a mean of 0.9° and a standard deviation of 0.057°. Both distributions are truncated at 2 standard deviations. For our multi-channel microphones (Sec. 3 in main), we add a high amount of noise (SNR of 40 dB) [8] using a standard noise model [13, 75].
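The sketch below shows one way the RGB and pose noise described above could be instantiated. It assumes RGB values normalized to [0, 1], uses scipy.stats.truncnorm for the truncated Gaussians, and applies the sampled pose-noise magnitudes with a random direction/sign, which is our reading of [60] rather than its exact implementation; the Redwood depth noise and the microphone noise are omitted.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def noisy_rgb(rgb):
    """Additive zero-mean Gaussian noise (std 0.2) on an RGB image in [0, 1]."""
    noise = rng.normal(loc=0.0, scale=0.2, size=rgb.shape)
    return np.clip(rgb + noise, 0.0, 1.0)

def sample_pose_noise():
    """Truncated-Gaussian pose noise, truncated at 2 standard deviations.

    Translation magnitude: mean 0.025 m, std 0.001 m.
    Rotation magnitude:    mean 0.9 deg, std 0.057 deg.
    The random in-plane direction and random rotation sign are assumptions.
    """
    trans_mag = truncnorm.rvs(-2, 2, loc=0.025, scale=0.001)   # metres
    rot_mag = truncnorm.rvs(-2, 2, loc=0.9, scale=0.057)       # degrees
    direction = rng.uniform(0.0, 2.0 * np.pi)                  # random direction in the plane
    dx, dy = trans_mag * np.cos(direction), trans_mag * np.sin(direction)
    dtheta = rot_mag * rng.choice([-1.0, 1.0])
    return (dx, dy), dtheta
```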
Table 4 and Fig. 8 report our passive and active mapping performance, respectively, in the face of sensor noise. In both settings, although our model's performance declines in comparison to the noise-free setting (cf. Table 1 and Fig. 3 in main), it generalizes better than all baselines, thereby underlining the effectiveness of our method.

Table 5. Passive mapping performance (%) for larger target map sizes.
                           H = W = 8 m               H = W = 9.6 m
Model                      F1 score ↑   IoU ↑        F1 score ↑   IoU ↑
All-occupied               53.5         37.9         46.4         31.2
Register-inputs            65.9         53.4         61.6         49.6
OccAnt [60]                67.8         55.7         63.0         51.3
AV-Floorplan [59]          71.4         59.1         68.7         53.1
Ours                       73.4         60.7         72.0         54.4
Ours w/o vision            66.1         53.5         62.6         50.3
Ours w/o audio             71.1         58.1         63.8         51.3
Ours w/o E′i's speech      73.3         60.5         67.6         54.0
Ours w/o shared mapping    72.9         60.3         68.0         54.5

Figure 9. Active mapping performance vs. episode step for larger target map sizes: (a) H = W = 8 m, (b) H = W = 9.6 m (mean F1 score in % for Random, Unique pose, Greedy, Ours w/o audio for V, and Ours).

7.5. Target map size
In main (Sec. 5.1), we showed mapping results with H × W = 6.4 × 6.4 m² (∼41 m²), where H and W denote the height and width of the ground-truth local 360° FoV maps (Sec. 4.3 in main). To analyze the impact of larger target map sizes on the mapping quality, we also test our model with H × W ∈ {8 × 8 m² (64 m²), 9.6 × 9.6 m² (∼92 m²)}. Table 5 and Fig. 9 show the corresponding results for passive and active mapping, respectively. In both cases, our model outperforms all baselines by a substantial margin, showing that our method is also robust to larger target map sizes.

7.6. Dataset details
Here, we provide additional dataset details. We will release our datasets.

Visual data. All RGB-D images in our experiments have a resolution of 128 × 128. To generate the topdown occupancy maps, we threshold the local pointcloud computed from the 90° FoV depth images (Sec. 4.1 in main) using a lower and upper height limit of 0.2 and 1.5 m, respectively, such that a map cell is considered occupied if there is a 3D point for it in the 0.2–1.5 m range, and free otherwise. To generate an estimate of the scene map, we register the estimates of the ground-truth local 360° FoV maps ˜Mi,j onto a shared scene map ˜M (Sec. 4.3 in main) and maintain a count of the number of updates undergone by every cell in the shared map. To register a local estimate ˜Mi,j, we first translate and rotate ˜Mi,j within ˜M on the basis of its normalized pose Pi,j. Next, we add ˜Mi,j to the corresponding part of ˜M and update the counter for every map cell that has been changed through the registration. We repeat this process for every ˜Mi,j in the episode. Finally, we normalize the updated ˜M by dividing each cell in it by its number of updates from the counter, and thresholding at 0.5. In our experiments, ˜M covers a maximum area of 128.4 × 128.4 m².
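A minimal sketch of this two-step map construction (height-based thresholding of the depth point cloud, followed by registration of local maps into the shared map with per-cell update counts) is given below. The cell size, the y-up coordinate convention, and the nearest-neighbour warping are illustrative assumptions rather than details of our implementation.

```python
import numpy as np

def pointcloud_to_local_map(points, map_size, cell_size=0.1, h_min=0.2, h_max=1.5):
    """Mark a cell occupied if any 3D point above it lies in the 0.2-1.5 m height band.

    points: (N, 3) array with a y-up convention (assumption); cell_size in m/cell.
    Cells without such a point are left at 0, i.e., treated as free.
    """
    occ = np.zeros((map_size, map_size), dtype=np.uint8)
    in_band = (points[:, 1] >= h_min) & (points[:, 1] <= h_max)
    cols = (points[in_band, 0] / cell_size + map_size / 2).astype(int)
    rows = (points[in_band, 2] / cell_size + map_size / 2).astype(int)
    keep = (rows >= 0) & (rows < map_size) & (cols >= 0) & (cols < map_size)
    occ[rows[keep], cols[keep]] = 1
    return occ

def register_local_maps(local_maps, poses, global_size, cell_size=0.1):
    """Aggregate local 360-degree occupancy estimates into one shared scene map.

    local_maps: list of (H, W) arrays with values in [0, 1].
    poses: list of normalized (x, y, theta) poses for each local map's centre.
    """
    G = np.zeros((global_size, global_size), dtype=np.float32)   # accumulated occupancy
    counts = np.zeros_like(G)                                    # number of updates per cell

    for m, (x, y, theta) in zip(local_maps, poses):
        H, W = m.shape
        ii, jj = np.meshgrid(np.arange(H) - H / 2, np.arange(W) - W / 2, indexing="ij")
        # Rotate by theta, then translate by (x, y) expressed in cells (assumed convention).
        ri = np.cos(theta) * ii - np.sin(theta) * jj + y / cell_size + global_size / 2
        rj = np.sin(theta) * ii + np.cos(theta) * jj + x / cell_size + global_size / 2
        ri, rj = ri.round().astype(int), rj.round().astype(int)
        valid = (ri >= 0) & (ri < global_size) & (rj >= 0) & (rj < global_size)
        # Add the local estimate to the shared map and count the updates.
        np.add.at(G, (ri[valid], rj[valid]), m[valid])
        np.add.at(counts, (ri[valid], rj[valid]), 1.0)

    # Normalize each cell by its update count and threshold at 0.5.
    avg = np.where(counts > 0, G / np.maximum(counts, 1.0), 0.0)
    return (avg >= 0.5).astype(np.uint8)
```

Note that the nearest-neighbour forward warping above can leave small holes after rotation; a production implementation would typically use an inverse warp or interpolation instead.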
Audio data. For each conversation episode, we randomly choose 2 speakers from the same split – heard or unheard (Sec. 5 in main). Starting at a random time in the audio clip for each speaker, we choose contiguous 3 s slices from each clip for T steps to use as the anechoic audio for the two egos in the episode, where T denotes the episode length (Sec. 3 in main). Further, we normalize each slice to have the same RMS value of 400 across the whole dataset, where all audio is sampled at 16 kHz and stored using the standard 16-bit integer format. To generate the spectrograms, we convolve a speech slice with the appropriate 9-channel RIR sampled at 16 kHz and compute its STFT with a Hann window of 31.93 ms, hop length of 8.31 ms, and FFT size of 511 to generate 9-channel magnitude spectrograms, where each channel has 256 frequency bins and 257 overlapping temporal windows. We assume access to detected and separated speech from the egos at all times, since on-device microphones of AR glasses can tackle nearby and distant speaker detection [37] and separation [58].
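For concreteness, the sketch below shows one way this spectrogram pipeline could be implemented with SciPy. The conversion of the 31.93 ms window and 8.31 ms hop to 511 and 133 samples at 16 kHz, and the final cropping/padding to 257 frames, are our assumptions for illustration; it is not a drop-in replica of our pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve, stft

SR = 16000            # sampling rate (Hz)
WIN = 511             # ~31.93 ms Hann window at 16 kHz (assumed conversion)
HOP = 133             # ~8.31 ms hop at 16 kHz (assumed conversion)
TARGET_RMS = 400.0    # RMS target for 16-bit-range waveforms

def normalize_rms(wave):
    """Scale a 16-bit-range waveform so that its RMS equals 400 (returns float)."""
    rms = np.sqrt(np.mean(wave.astype(np.float64) ** 2)) + 1e-8
    return wave * (TARGET_RMS / rms)

def speech_to_spectrogram(anechoic_slice, rir_9ch):
    """Spatialize a 3 s anechoic slice with a 9-channel RIR and compute
    9-channel magnitude spectrograms with 256 frequency bins per channel.

    anechoic_slice: 1-D waveform; rir_9ch: array of shape (9, rir_len).
    """
    anechoic_slice = normalize_rms(anechoic_slice)
    channels = []
    for ch in range(rir_9ch.shape[0]):
        spatial = fftconvolve(anechoic_slice, rir_9ch[ch])       # convolve with the RIR
        _, _, Z = stft(spatial, fs=SR, window="hann",
                       nperseg=WIN, noverlap=WIN - HOP, nfft=WIN)
        channels.append(np.abs(Z))                               # magnitude only
    # Stack to (9, 256, n_frames); crop or pad the time axis to 257 frames as needed.
    return np.stack(channels, axis=0)
```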
7.7. Baselines
Here, we provide additional implementation details for our active mapping baselines for reproducibility (Sec. 5 in main).

Random. At each step t, we generate a random number between 0 and 1 from a uniform distribution. Depending on which quartile of the 0–1 range the random number lies in, we skip visual frames for both egos, sample for just one ego, or sample for both egos.

Greedy. Starting at t = 2, we sample visual frames for both egos at every step until we run out of the visual budget B. If the value of B is such that it allows sampling only one visual frame at a certain step (i.e., B is odd), we randomly choose the ego for which we sample the frame at that step.

Unique-pose. To implement this baseline, we keep track of the egos' poses during an episode. At any step t, we sample the frame for an ego if its current pose has never been assumed before by either of the egos in that episode.
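The listing below sketches the frame-selection logic of these three baselines in Python. The exact mapping of the two middle quartiles to "sample for just one ego" (and which ego), and the pose bookkeeping when both egos share a pose, are our interpretation of the descriptions above; poses are assumed to be discretized into hashable tuples.

```python
import random

def random_policy():
    """Random baseline: draw u ~ U(0, 1) and act according to its quartile."""
    u = random.random()
    if u < 0.25:
        return []                 # skip visual frames for both egos
    elif u < 0.5:
        return [0]                # sample only ego 1 (assumed assignment)
    elif u < 0.75:
        return [1]                # sample only ego 2 (assumed assignment)
    return [0, 1]                 # sample both egos

def greedy_policy(step, budget_left):
    """Greedy baseline: from step t = 2 on, sample both egos until the budget runs out;
    if only one frame is still affordable, pick the ego at random."""
    if step < 2 or budget_left <= 0:
        return []
    if budget_left == 1:
        return [random.choice([0, 1])]
    return [0, 1]

def unique_pose_policy(poses_t, seen_poses):
    """Unique-pose baseline: sample an ego's frame only if its current pose has not
    been assumed before by either ego; all visited poses are recorded either way."""
    chosen = []
    for ego, pose in enumerate(poses_t):
        if pose not in seen_poses:
            chosen.append(ego)
        seen_poses.add(pose)
    return chosen
```

In use, the caller would decrement the remaining budget by the number of egos selected at each step, e.g., `budget_left -= len(greedy_policy(t, budget_left))`.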
7.8. Architecture and training
Here, we provide our architecture and additional training details for reproducibility. We will release our code.
7.8.1 Policy architecture
Visual encoder. To encode local occupancy map inputs, our policy πV (Sec. 4.2 in main) uses a 6-layer CNN consisting of 5 convolutional (conv.) layers followed by an adaptive average pooling layer. The first three conv. layers use a kernel size of 4 and a stride of 2, while the last two conv. layers use a kernel size of 3 and a stride of 1. All conv. layers use a zero padding of 1, except for the third conv. layer, which uses a zero padding of 2. The numbers of output channels of the conv. layers are [64, 64, 128, 256, 512], respectively. Each convolution is followed by a leaky ReLU [50, 74] activation with a negative slope of 0.2 and Batch Normalization [32] with an ε of 1e−5. The adaptive average pooling layer reduces the output of the last conv. layer to a feature of size 1 × 1 × 512.
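A PyTorch sketch of this occupancy-map encoder is shown below. The number of input map channels and the ordering of normalization and activation within each block are our assumptions; everything else follows the description above.

```python
import torch.nn as nn

def conv_block(c_in, c_out, k, s, p):
    # conv -> BatchNorm(eps=1e-5) -> LeakyReLU(0.2); the exact ordering is assumed
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=k, stride=s, padding=p, bias=False),
        nn.BatchNorm2d(c_out, eps=1e-5),
        nn.LeakyReLU(0.2, inplace=True),
    )

class OccupancyEncoder(nn.Module):
    def __init__(self, in_channels=2):             # 2 map channels is a placeholder choice
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_channels, 64, k=4, s=2, p=1),
            conv_block(64, 64, k=4, s=2, p=1),
            conv_block(64, 128, k=4, s=2, p=2),    # third conv. layer: zero padding of 2
            conv_block(128, 256, k=3, s=1, p=1),
            conv_block(256, 512, k=3, s=1, p=1),
            nn.AdaptiveAvgPool2d(1),               # -> 1 x 1 x 512
        )

    def forward(self, occ_map):
        return self.net(occ_map).flatten(1)        # (batch, 512)
```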
To encode RGB images (Sec. 4.2 in main), πV uses a separate CNN with 5 conv. layers and an adaptive average pooling layer. Each conv. layer has a kernel size of 4, a stride of 2, and a zero padding of 1. The numbers of output channels are [64, 64, 128, 256, 512], respectively. Similar to the occupancy map encoder, each convolution is followed by a leaky ReLU [50, 74] activation with a negative slope of 0.2 and Batch Normalization [32] with an ε of 1e−5, and the adaptive average pooling layer reduces the output of the last conv. layer to a feature of size 1 × 1 × 512. We fuse the occupancy and RGB features by concatenating them and passing them through a single linear layer that produces a 512-dimensional visual embedding v (Sec. 4.2 in main).
Speech encoder. The speech encoder (Sec. 4.2 in main) in πV is a CNN with 5 conv. layers and an adaptive average pooling layer. Each conv. layer has a kernel size of 4, a stride of 2, and a padding of 1, except for the second conv. layer, which has a kernel size of 8, a stride of 4, and a padding of 3. The numbers of channels in the CNN are [64, 64, 128, 256, 512], respectively. Similar to the visual encoder, each conv. layer is followed by a leaky ReLU [50, 74] with a negative slope of 0.2 and Batch Normalization [32] with an ε of 1e−5. The adaptive average pooling layer reduces the output of the last conv. layer to a feature of size 1 × 1 × 512.
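For completeness, here is an analogous sketch of πV's speech encoder; treating the 9 spectrogram channels as the input channels is our assumption, as is the ordering inside each block.

```python
import torch.nn as nn

class PolicySpeechEncoder(nn.Module):
    def __init__(self, in_channels=9):             # 9-channel magnitude spectrograms (assumed layout)
        super().__init__()
        def block(c_in, c_out, k, s, p):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=k, stride=s, padding=p, bias=False),
                nn.BatchNorm2d(c_out, eps=1e-5),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.net = nn.Sequential(
            block(in_channels, 64, 4, 2, 1),
            block(64, 64, 8, 4, 3),                # second conv. layer: kernel 8, stride 4, padding 3
            block(64, 128, 4, 2, 1),
            block(128, 256, 4, 2, 1),
            block(256, 512, 4, 2, 1),
            nn.AdaptiveAvgPool2d(1),               # -> 1 x 1 x 512
        )

    def forward(self, spectrogram):
        return self.net(spectrogram).flatten(1)    # (batch, 512)
```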
Pose encoder. The pose encoder (Sec. 4.2 in main) in πV is a single linear layer that takes a normalized pose P (Sec. 4.1 in main) as input and produces a 32-dimensional pose embedding.
Fusion layers. We perform linear fusion of the visual, speech, and pose embeddings (Sec. 4.2 and Fig. 2 in main) at two levels. The first level has 4 linear layers and the second level has 1 linear layer. Each linear layer produces a 512-dimensional fused feature as its output.
Policy network. The policy network (Sec. 4.2 in main) comprises a one-layer bidirectional GRU [16] with 512 hidden units. The actor and critic networks each consist of one linear layer.
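The sketch below shows one plausible wiring of the two-level fusion and the GRU-based policy head. Which embeddings feed which first-level layer, the input dimensionalities, and the size of the action space are all assumptions on our part.

```python
import torch
import torch.nn as nn

class TwoLevelFusion(nn.Module):
    def __init__(self, in_dims=(512, 512, 512, 32), dim=512):   # visual, 2x speech, pose (assumed)
        super().__init__()
        self.level1 = nn.ModuleList([nn.Linear(d, dim) for d in in_dims])  # 4 linear layers
        self.level2 = nn.Linear(len(in_dims) * dim, dim)                   # 1 linear layer

    def forward(self, embeddings):                       # list of 4 per-modality embeddings
        fused = [layer(e) for layer, e in zip(self.level1, embeddings)]
        return self.level2(torch.cat(fused, dim=-1))     # (batch, 512)

class PolicyNetwork(nn.Module):
    def __init__(self, in_dim=512, hidden=512, num_actions=4):  # action-space size is hypothetical
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, num_layers=1,
                          bidirectional=True, batch_first=True)
        self.actor = nn.Linear(2 * hidden, num_actions)  # one linear layer
        self.critic = nn.Linear(2 * hidden, 1)           # one linear layer

    def forward(self, fused_seq):                        # (batch, T, in_dim)
        out, _ = self.rnn(fused_seq)
        h = out[:, -1]                                   # feature for the current step
        return self.actor(h), self.critic(h)
```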
7.8.2 Mapper architecture
Visual encoder. To encode local occupancy map inputs, our shared mapper fM (Sec. 4.3 in main) uses a CNN similar to the one used for encoding occupancy maps in πV (Sec. 7.8.1), except that it does not have a pooling layer at the end. The RGB encoder (Sec. 4.3 in main) in fM is also similar to the one for πV, except that it likewise does not have a pooling layer at the end. We fuse the map and RGB features by concatenating them along the channel dimension and obtain a 4 × 4 × 1024 dimensional feature.
Speech encoder. The speech encoders (Sec. 4.3 in main) in fM are CNNs with 5 layers that share the architecture with the first 5 conv. layers of the speech encoder in πV (Sec. 7.8.1), except that the last conv. layer in both encoders has 1024 output channels.
Modality encoder. For our modality embedding m̂ (Sec. 4.3 in main), we maintain a sparse lookup table of 1024-dimensional learnable embeddings, which we index with 0 to retrieve the visual modality embedding (m̂V), with 1 to retrieve the modality embedding (m̂S) for the speech from self, and with 2 to retrieve the modality embedding (m̂S′) for the speech from the other ego.
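The modality encoder maps directly onto a learnable embedding table; a minimal sketch (with assumed variable names) follows.

```python
import torch
import torch.nn as nn

# Sparse lookup table of three 1024-dimensional learnable modality embeddings.
modality_table = nn.Embedding(num_embeddings=3, embedding_dim=1024, sparse=True)

VISION, SPEECH_SELF, SPEECH_OTHER = 0, 1, 2
m_hat_V       = modality_table(torch.tensor(VISION))        # visual modality embedding
m_hat_S       = modality_table(torch.tensor(SPEECH_SELF))   # speech from self
m_hat_S_prime = modality_table(torch.tensor(SPEECH_OTHER))  # speech from the other ego
```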
Occupancy prediction network. The transformer [76] (Sec. 4.3 in main) in our occupancy prediction network comprises 6 encoder and 6 decoder layers, 8 attention heads, an input and output size of 1024, a hidden size of 2048, and ReLU [50, 74] activations. Additionally, we use a dropout [70] of 0.1 in our transformer. The transpose convolutional network U (Sec. 4.3 in main) consists of 6 layers in total. The first 5 layers are transpose convolution (conv.) layers. The first 4 transpose conv. layers have a kernel size of 4 and a stride of 2, and the last transpose conv. layer has a kernel size of 3 and a stride of 1. Each transpose conv. layer has a padding of 1, a ReLU [50, 74] activation, and Batch Normalization [32]. The numbers of output channels for the transpose conv. layers are [512, 256, 128, 64, 2], respectively. The last layer in U is a sigmoid layer (Sec. 4.3 in main), which outputs the map estimates.
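Both pieces of the occupancy prediction network can be written down almost directly from these hyperparameters. The sketch below uses PyTorch's built-in transformer and leaves out how the fused features are turned into token sequences; the placement of BatchNorm and ReLU within each transpose conv. block is assumed.

```python
import torch.nn as nn

# 6+6-layer transformer with 8 heads, model size 1024, feed-forward size 2048,
# ReLU activations, and dropout 0.1.
transformer = nn.Transformer(
    d_model=1024, nhead=8,
    num_encoder_layers=6, num_decoder_layers=6,
    dim_feedforward=2048, dropout=0.1, activation="relu",
)

def deconv_block(c_in, c_out, k, s):
    # transpose conv -> BatchNorm -> ReLU, each with padding 1
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, kernel_size=k, stride=s, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

# Transpose-convolutional network U: 5 transpose conv. layers followed by a sigmoid.
U = nn.Sequential(
    deconv_block(1024, 512, k=4, s=2),
    deconv_block(512, 256, k=4, s=2),
    deconv_block(256, 128, k=4, s=2),
    deconv_block(128, 64, k=4, s=2),
    deconv_block(64, 2, k=3, s=1),
    nn.Sigmoid(),                      # per-cell occupancy estimates
)
```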
7.8.3 Parameter initialization
We use the Kaiming-normal [31] weight initialization strategy to initialize the weights of all our network modules, except for the pose encoding layers and fusion layers, which are initialized with Kaiming-uniform [31] initialization, and the policy network, which is initialized using the orthogonal initialization strategy [64]. We switch off biases in all network modules, except for the policy network, where we initially set the biases to 0.
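A compact helper for this initialization scheme could look as follows; routing each module to its strategy through an explicit argument is our illustrative choice.

```python
import torch.nn as nn

def init_module(module: nn.Module, scheme: str = "kaiming_normal"):
    """Apply one of the three initialization strategies to all conv/linear weights."""
    for m in module.modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            if scheme == "kaiming_normal":
                nn.init.kaiming_normal_(m.weight)
            elif scheme == "kaiming_uniform":
                nn.init.kaiming_uniform_(m.weight)
            elif scheme == "orthogonal":
                nn.init.orthogonal_(m.weight)
            if m.bias is not None:        # only the policy network keeps biases; start them at 0
                nn.init.zeros_(m.bias)

# e.g., init_module(occupancy_encoder, "kaiming_normal")
#       init_module(pose_encoder, "kaiming_uniform")
#       init_module(policy_network, "orthogonal")
```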
7.8.4 Training hyperparameters
Policy training. To train our policy πV using DD-PPO [77] (Sec. 4.4 in main), we weight the action loss by 1.0, the value loss by 0.5, and the entropy loss by 0.05. We train our policy on 8 Nvidia Tesla V100 SXM2 GPUs with Adam [41], an initial learning rate of 1e−4, and 8 processes per GPU for 8.064 million policy prediction steps. Among other policy training parameters, we set the clip parameter to 0.1, the number of DD-PPO epochs to 4, the number of mini-batches to 1, the max gradient norm to 0.5, the reward discount factor γ to 0.99, and the value of λ in the generalized advantage estimation [66] formulation for DD-PPO to 0.95.
Mapper training. We train our shared scene mapper fM (Sec. 4.3 in main) with a binary cross entropy loss (Sec. 4.4 in main) on 4 Nvidia Quadro RTX 6000 GPUs until convergence using Adam [41], an initial learning rate of 1e−4, and a batch size of 24.
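For the mapper, the loss and optimizer settings translate into a straightforward training step; the stand-in module, tensor shapes, and function name below are placeholders rather than the actual Chat2Map components.

```python
import torch
import torch.nn as nn

# Stand-in for the shared mapper fM: any module whose outputs lie in [0, 1].
mapper = nn.Sequential(nn.Conv2d(2, 2, kernel_size=3, padding=1), nn.Sigmoid())
optimizer = torch.optim.Adam(mapper.parameters(), lr=1e-4)
bce = nn.BCELoss()    # mapper outputs already pass through a sigmoid (Sec. 7.8.2)

def mapper_train_step(inputs, target_maps):
    """One optimization step over a batch of (inputs, ground-truth occupancy maps)."""
    optimizer.zero_grad()
    predicted_maps = mapper(inputs)       # probabilities in [0, 1]
    loss = bce(predicted_maps, target_maps)
    loss.backward()
    optimizer.step()
    return loss.item()
```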