diff --git "a/89AzT4oBgHgl3EQfFPox/content/tmp_files/load_file.txt" "b/89AzT4oBgHgl3EQfFPox/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/89AzT4oBgHgl3EQfFPox/content/tmp_files/load_file.txt" @@ -0,0 +1,995 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf,len=994 +page_content='POLICY PRE-TRAINING FOR AUTONOMOUS DRIVING VIA SELF-SUPERVISED GEOMETRIC MODELING Penghao Wu1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content='2∗ Li Chen1 Hongyang Li1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content='3† Xiaosong Jia1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content='3∗ Junchi Yan1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content='3 Yu Qiao1 1OpenDriveLab,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content=' Shanghai AI Laboratory 2UC San Diego 3Shanghai Jiao Tong University ABSTRACT Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content=' we won- der whether this idea could be adapted in a grab-and-go spirit,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content=' and mitigate the sample inefficiency problem for visuomotor driving.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content=' Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant in- formation for decision making, resulting in predominant pre-training approaches from general vision less suitable for the autonomous driving task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content=' To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pre- training in visuomotor driving.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content=' We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content=' The proposed PPGeo is performed in two stages to support effective self-supervised training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfFPox/content/2301.01006v1.pdf'} +page_content=' In the first stage, the geometric modeling framework generates pose and depth predictions simulta- neously, with two consecutive frames as input.' 
In the second stage, the visual encoder learns the driving policy representation by predicting future ego-motion, optimized with the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. As a side product, the pre-trained geometric modeling networks could bring further improvement to the depth and odometry estimation tasks. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.

1 INTRODUCTION

Policy learning refers to the process of an autonomous agent acquiring the decision-making policy to perform a certain task in a particular environment. Visuomotor policy learning (Mnih et al., 2015; Levine et al., 2016; Hessel et al., 2018; Laskin et al., 2020; Toromanoff et al., 2020) takes raw sensor observations as input and predicts the action, training the perception and control modules cooperatively in an end-to-end fashion.
For visuomotor policy models, learning tabula rasa is difficult: it usually requires a prohibitively large corpus of labeled data or environment interactions to achieve satisfactory performance (Espeholt et al., 2018; Wijmans et al., 2019; Yarats et al., 2020). To mitigate this sample-efficiency caveat in visuomotor policy learning, pre-training the visual perception network in advance is a promising solution. Recent studies (Shah & Kumar, 2021; Parisi et al., 2022; Xiao et al., 2022; Radosavovic et al., 2022; Shah et al., 2022) have demonstrated that applying popular visual pre-training approaches, including ImageNet (Deng et al., 2009) classification, contrastive learning (He et al., 2020; Chen et al., 2020c), masked image modeling (MIM) (He et al., 2022; Xie et al., 2022), and language-vision pre-training (Radford et al., 2021), could guarantee superior representations for robotic policy learning tasks, e.g., dexterous manipulation, motor control skills, and visual navigation.

∗Work done during internship at Shanghai AI Laboratory.
†Corresponding author. Email to: lihongyang@pjlab.org.cn

Figure 1: Uniqueness of visuomotor driving policy learning. The planned trajectory is shown as red points. (a) Static obstacles and background buildings (objects in yellow rectangles) are irrelevant to the driving decision; (b) the traffic signal in the visual input (marked with the green box) is extremely difficult to recognize and yet deterministic for control outputs; (c) the pre-trained visual encoder has to be robust to different light and weather conditions. Photo credit: Caesar et al. (2020).
However, for one crucial and challenging visuomotor task in particular, namely end-to-end autonomous driving¹, the aforementioned predominant pre-training methods may not be the optimal choice (Yamada et al., 2022; Zhang et al., 2022b). In this paper, we aim to investigate why ever-victorious pre-training approaches for general computer vision and robotic control tasks are prone to fail in the case of end-to-end autonomous driving. Conventional pre-training methods for general vision tasks, e.g., classification, segmentation, and detection, usually adopt a wide range of data augmentations to achieve translation and view invariance (Zhang et al., 2016; Wu et al., 2018). For robotic control tasks, the input sequence is generally of small resolution, and the environment setting is simple and concentrated on objects (Parisi et al., 2022; Radosavovic et al., 2022). We argue that the visuomotor driving task investigated in this paper is sensitive to geometric relationships and usually comprises complex scenarios. As described in Fig. 1(a), the input data often carry irrelevant information, such as background buildings, far-away moving vehicles, nearby static obstacles, etc., which acts as noise for the decision-making task. To obtain a good driving policy, we argue that the desirable model should concentrate only on particular parts/patterns of the visual input, i.e., those with a direct or deterministic relation to decision making, e.g., the traffic signals in Fig. 1(b). However, current pre-training approaches fail to fulfill such a requirement. There thus comes a natural and necessary demand to formulate a pre-training scheme curated for end-to-end autonomous driving. We attempt to pre-train a visual encoder with a massive amount of driving data crawled freely from the web, such that given limited labeled data, downstream applications could generalize well and quickly adapt to various driving environments, as depicted in Fig. 1(c). The pivotal question is how to introduce driving-decision awareness into the pre-training process to help the visual encoder concentrate on crucial visual cues for the driving policy. One may resort to directly predicting ego-motion based on single-frame sensor input, constraining the network to learn policy-related features. Previous literature tackles the supervision problem with pseudo-labeling, training on either an open dataset (Zhang et al., 2022b) or the target domain data (Zhang et al., 2022a).
However, pseudo-labeling approaches suffer from noisy predictions from poorly calibrated models; this is especially true when there exists a distinct domain gap, such as geographical locations and traffic complexities (Rizve et al., 2020).

To address the aforementioned bottleneck, we propose PPGeo (Policy Pre-training via Geometric modeling), a fully self-supervised driving policy pre-training framework that learns from unlabeled and uncalibrated driving videos. It models the 3D geometric scene by jointly predicting ego-motion, depth, and camera intrinsics. Since directly learning ego-motion from single-frame input while training depth and intrinsics from scratch is too difficult, it is necessary to separate the visual encoder pre-training from the depth and intrinsics learning in two stages. In the first stage, the ego-motion is predicted from consecutive frames, as is done in conventional depth estimation frameworks (Godard et al., 2017; 2019). In the second stage, the future ego-motion is estimated from a single frame by a visual encoder and optimized with the depth and camera intrinsics networks well learned in the first stage. As such, the visual encoder is capable of inferring future ego-motion based on the current input alone. The pre-trained visual encoder can be well adopted for downstream driving tasks since it captures driving-policy-related information. As a side product, the depth and pose networks could be utilized as new initial weights for depth and odometry estimation tasks, bringing an additional performance gain.

¹We use end-to-end autonomous driving and visuomotor autonomous driving interchangeably in this paper.
Figure 2: Overview of PPGeo. (a) We focus on pre-training an effective visual encoder to encode driving-policy-related information by predicting ego-motion from single-frame input (a.2, Stage Two). As achieving such a goal without labels is non-trivial, the visual encoder is obtained with the aid of a preceding procedure (a.1, Stage One) with temporal inputs and two sub-networks (pose and depth). In this illustrative example, the ego-vehicle needs to take the STOP action. The ego-motion in (a.1) is inferred by judging that two consecutive frames barely change, whilst the ego-motion in (a.2) is predicted from a single visual input, focusing on driving-policy-related information. As such, the visual encoder can be fine-tuned and applied to a wide span of downstream tasks in (b).
To sum up, our key contributions are three-fold:

- We propose a pre-training paradigm curated for various visuomotor driving tasks. To the best of our knowledge, this is the first attempt to achieve a fully self-supervised framework without any need for pseudo-labels², leveraging the effect of pre-training on large-scale data to the full extent.
- We devise a visual encoder capable of predicting ego-motion from a single visual input, able to extract feature representations closely related to the driving policy. Such a design of the visual encoder is flexible to extend to various downstream applications.
- We demonstrate the superiority of our approach on a set of end-to-end driving scenarios covering different types and difficulty levels. The performance in terms of various metrics is improved by 2% to over 100% in challenging cases with very limited data.

²Pseudo-labels here mean using another model trained on additional labeled data to create "artificial" labels for the unlabeled dataset.

2 METHODOLOGY

2.1 OVERVIEW

Visuomotor policy learning for autonomous driving targets generating a policy π that makes driving decisions, e.g., control actions or a planned trajectory, from a visual observation x. Our goal is to pre-train a visual encoder φ(x), which maps the raw image input to a compact representation containing the information important for driving decision making. The representation is then utilized by the policy π(φ(x)) to perform driving tasks. As shown in Fig. 2, our method pre-trains the visual encoder on unlabeled driving videos via two stages in a self-supervised manner.
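To make this interface concrete, below is a minimal PyTorch sketch (not the authors' released code) of how a pre-trained encoder φ(x) and a downstream policy head π could be composed. The ResNet-34 backbone matches the encoder used in the experiments later in the paper; the waypoint-style output head, its dimensions, and the checkpoint file name are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class VisualEncoder(nn.Module):
    """phi(x): maps an RGB observation to a compact feature vector.
    ResNet-34 is used because the paper's experiments adopt it as the encoder."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet34()  # random init; PPGeo pre-trained weights would be loaded instead
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the classification fc
        self.out_dim = 512

    def forward(self, x):                       # x: (B, 3, H, W)
        return self.features(x).flatten(1)      # (B, 512)

class PolicyHead(nn.Module):
    """pi(phi(x)): maps the representation to a driving decision.
    A short waypoint trajectory is one possible output format (an assumption here)."""
    def __init__(self, in_dim=512, n_waypoints=4):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.mlp = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_waypoints * 2))

    def forward(self, feat):
        return self.mlp(feat).view(-1, self.n_waypoints, 2)

encoder, policy = VisualEncoder(), PolicyHead()
# encoder.load_state_dict(torch.load("ppgeo_encoder.pth"))  # hypothetical checkpoint file
waypoints = policy(encoder(torch.randn(1, 3, 224, 224)))     # (1, 4, 2) planned waypoints
```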
2.2 TWO-STAGE SELF-SUPERVISED TRAINING

Stage One: Self-supervised Geometric Modeling. During the first stage, given a target image I_t and source images I_{t′} in a sequence, we jointly estimate the depth of the target image, the intrinsics of the camera, and the 6-DoF ego-motion between the two frames. Given these estimations, we are able to model the 3D geometry of the scene and reconstruct the target image by projecting pixels in the source images. Formally, the pixel-wise correspondence between I_t and I_{t′} is calculated as:

    p_{t′} = K T_{t→t′} D_t(p_t) K^{−1} p_t,    (1)

where p_t and p_{t′} are the homogeneous coordinates of the pixel in I_t and I_{t′} respectively, K is the predicted camera intrinsic matrix, and D_t(p_t) represents the predicted depth value at pixel p_t in I_t. With this relationship, the target image I_{t′→t} can be reconstructed from pixels in I_{t′} and optimized with the photometric reconstruction error. Following Godard et al. (2019), we choose the two images adjacent to the current frame as the source images, i.e., t′ ∈ {t − 1, t + 1}. The DepthNet consists of a common encoder-decoder structure (Godard et al., 2019) and estimates the depth map of the input image. The two images are stacked together and fed into the encoder of the PoseNet, whose bottleneck feature is then utilized to predict the camera intrinsics and the ego-motion via two separate MLP-based heads. For camera intrinsics estimation, the optical center (cx, cy) and focal lengths fx, fy are regressed similarly as in Gordon et al. (2019); Chanduri et al. (2021).
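The view synthesis implied by Eq. (1) can be sketched as a generic back-project / transform / re-project routine, shown below in PyTorch. It assumes the predicted per-pixel depth map, a 4x4 ego-motion matrix, and a 3x3 pinhole intrinsic matrix from the networks described above, and follows standard practice in this family of methods (e.g., Godard et al., 2019) rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def reconstruct_target(source_img, depth, T, K):
    """Warp a source frame into the target view via Eq. (1):
    p_t' = K T_{t->t'} D_t(p_t) K^{-1} p_t  (homogeneous pixel coordinates).
    source_img: (B,3,H,W), depth: (B,1,H,W), T: (B,4,4), K: (B,3,3)."""
    B, _, H, W = source_img.shape
    device = source_img.device

    # Homogeneous pixel grid p_t, shape (B, 3, H*W)
    ys, xs = torch.meshgrid(torch.arange(H, device=device),
                            torch.arange(W, device=device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().view(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D points in the target camera: D_t(p_t) K^{-1} p_t
    cam_pts = depth.view(B, 1, -1) * (torch.inverse(K) @ pix)                   # (B,3,H*W)
    cam_pts = torch.cat([cam_pts, torch.ones(B, 1, H * W, device=device)], 1)   # homogeneous

    # Rigid transform into the source frame, then project with K
    proj = K @ (T @ cam_pts)[:, :3]                                             # (B,3,H*W)
    uv = (proj[:, :2] / (proj[:, 2:3] + 1e-7)).view(B, 2, H, W)

    # Normalize to [-1, 1] and bilinearly sample the source image
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)                    # (B,H,W,2)
    return F.grid_sample(source_img, grid, padding_mode="border", align_corners=True)
```

The photometric error between this reconstruction and the real target frame (Section 2.3) then supervises depth, intrinsics, and pose jointly.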
Stage Two: Visuomotor Policy Pre-training. After the first stage of training, the DepthNet and PoseNet are well trained and fitted to the driving video data. In the second stage, we replace the PoseNet for ego-motion estimation with the visual encoder φ(x) prepared for downstream driving policy learning tasks. The visual encoder now takes only a single-frame image as input and predicts the ego-motion between the current frame and the subsequent frame. Specifically, the visual encoder estimates the ego-motion T_{t→t+1} from I_t alone, and T_{t→t−1} from I_{t−1} followed by an inverse operation. The visual encoder is optimized with the photometric reconstruction error as in the first stage, with the modification that the DepthNet and the intrinsics estimation are frozen and not backpropagated through, which we empirically observe to yield better performance. By doing so, the visual encoder is enforced to learn the actual driving policy, since the ego-motion between two consecutive frames is straightforwardly related to the driving decision or action taken at the current timestamp.

One might argue that the PoseNet trained in the first stage could provide pseudo motion labels with which the visual encoder could be directly supervised. However, the ego-motion predicted by the PoseNet is too sparse a supervision signal compared with the geometric projection approach. In our pipeline, every pixel provides supervision for the visual encoder, so that inaccurate depth estimates at some pixels can be mitigated by accurate ones, i.e., it constructs a "global" optimization. In contrast, direct supervision from the PoseNet would be greatly affected by undesirable prediction inaccuracy and noise, which is especially true for diverse uncalibrated online videos (Zhang et al., 2022a).

Thus far, the backbone of the visual encoder φ(x) has gained knowledge about the driving policy from the diverse driving videos. It can then be applied to downstream visuomotor autonomous driving tasks as the initial weights. Besides, the DepthNet and PoseNet trained on this large corpus of uncalibrated video data could also be utilized in depth and odometry estimation tasks.
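A rough sketch of a single stage-two optimization step under the description above is given next; it is an illustrative reconstruction, not the released training loop. The encoder's pose head is assumed to output a 6-DoF vector (axis-angle rotation plus translation), `reconstruct_target` is the Eq. (1) warping helper sketched earlier, and `photometric_loss` is the Eq. (2) loss transcribed in Section 2.3 below.

```python
import torch

def se3_exp(vec):
    """Turn a 6-DoF vector (axis-angle | translation), shape (B,6), into a 4x4 transform
    using the Rodrigues formula. A compact generic helper, not the authors' code."""
    rot, trans = vec[:, :3], vec[:, 3:]
    theta = rot.norm(dim=1, keepdim=True).clamp(min=1e-7)
    k = rot / theta                                           # unit rotation axis
    skew = torch.zeros(vec.shape[0], 3, 3, device=vec.device)
    skew[:, 0, 1], skew[:, 0, 2] = -k[:, 2], k[:, 1]
    skew[:, 1, 0], skew[:, 1, 2] = k[:, 2], -k[:, 0]
    skew[:, 2, 0], skew[:, 2, 1] = -k[:, 1], k[:, 0]
    s, c = torch.sin(theta)[..., None], torch.cos(theta)[..., None]
    R = torch.eye(3, device=vec.device) + s * skew + (1 - c) * (skew @ skew)
    T = torch.eye(4, device=vec.device).repeat(vec.shape[0], 1, 1)
    T[:, :3, :3], T[:, :3, 3] = R, trans
    return T

def stage_two_step(encoder, pose_head, depth_net, K, frames, optimizer):
    """One optimization step of stage two. Only the visual encoder (and its pose head)
    receives gradients; the stage-one DepthNet and the intrinsics K stay frozen."""
    I_prev, I_t, I_next = frames["prev"], frames["cur"], frames["next"]

    with torch.no_grad():                                     # frozen geometry from stage one
        depth_t = depth_net(I_t)                              # (B,1,H,W)

    # Single-frame ego-motion: T_{t->t+1} from I_t; T_{t->t-1} from I_{t-1} via an inverse
    T_fwd = se3_exp(pose_head(encoder(I_t)))
    T_bwd = torch.inverse(se3_exp(pose_head(encoder(I_prev))))

    # Photometric reconstruction of I_t from both neighbouring frames (Eq. (1) warping)
    loss = (photometric_loss(I_t, reconstruct_target(I_next, depth_t, T_fwd, K)) +
            photometric_loss(I_t, reconstruct_target(I_prev, depth_t, T_bwd, K)))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```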
2.3 LOSS FUNCTION

Following Godard et al. (2019), the loss function comprises a photometric loss and a smoothness loss. The photometric error consists of an ℓ1 term and an SSIM (structural similarity index measure) term (Wang et al., 2004):

    ℓ_{pe} = (α/2)(1 − SSIM(I_t, I_{t′→t})) + (1 − α) ℓ_1(I_t, I_{t′→t}),    (2)

where we set α = 0.85 following common practice (Godard et al., 2017; 2019). The smoothness loss is:

    ℓ_s = |∂_x d*_t| e^{−|∂_x I_t|} + |∂_y d*_t| e^{−|∂_y I_t|},    (3)

where d*_t is the mean-normalized inverse depth map. We also adopt the minimum reprojection loss and auto-masking scheme (Godard et al., 2019) to improve self-supervised depth estimation.
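A direct PyTorch transcription of Eqs. (2) and (3) is sketched below. The 3x3 average-pooling SSIM window and the constants C1, C2 follow common Monodepth2-style practice and are assumptions, not details reported in the paper.

```python
import torch
import torch.nn.functional as F

def ssim_dissimilarity(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Returns (1 - SSIM)/2 per pixel, computed with a 3x3 average-pooling window."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(target, recon, alpha=0.85):
    """Eq. (2): (alpha/2) * (1 - SSIM) + (1 - alpha) * L1, averaged over all pixels."""
    l1 = (target - recon).abs().mean(1, keepdim=True)
    dssim = ssim_dissimilarity(target, recon).mean(1, keepdim=True)  # already (1 - SSIM)/2
    return (alpha * dssim + (1 - alpha) * l1).mean()

def smoothness_loss(disp, img):
    """Eq. (3): edge-aware smoothness on the mean-normalized inverse depth d*_t."""
    d = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)
    dx = (d[:, :, :, :-1] - d[:, :, :, 1:]).abs()
    dy = (d[:, :, :-1, :] - d[:, :, 1:, :]).abs()
    ix = (img[:, :, :, :-1] - img[:, :, :, 1:]).abs().mean(1, keepdim=True)
    iy = (img[:, :, :-1, :] - img[:, :, 1:, :]).abs().mean(1, keepdim=True)
    return (dx * torch.exp(-ix)).mean() + (dy * torch.exp(-iy)).mean()
```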
3 EXPERIMENTS

All pre-training experiments are conducted on the hours-long unlabeled YouTube driving videos (Zhang et al., 2022b), which cover diverse driving conditions, e.g., geographical locations and weather. We sample 0.8 million frames in total at 1 Hz for training. For the first stage of the PPGeo pipeline, we train the model for 30 epochs with the Adam (Kingma & Ba, 2015) optimizer and a learning rate of 10^{−4}, which drops to 10^{−5} after 25 epochs. For the second stage, the encoder is trained for 20 epochs using the AdamW (Loshchilov & Hutter, 2017) optimizer, with a cyclic learning rate scheduler ranging from 10^{−6} to 10^{−4}. The batch size for both stages is 128. We use data augmentations including ColorJitter, RandomGrayscale, and GaussianBlur.
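For reference, the listed augmentations map directly onto standard torchvision transforms; the jitter magnitudes, grayscale probability, and blur kernel below are illustrative guesses rather than values reported in the paper.

```python
from torchvision import transforms

# Photometric-only augmentations matching the three listed above (no crops or flips).
# Parameter values are assumptions for illustration.
pretrain_augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])
```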
3.1 DESCRIPTION OF COMPARED BASELINES

We use ResNet-34 (He et al., 2016) as the encoder and load different pre-trained weights as the initialization for downstream tasks. We compare PPGeo with the following pre-training methods:

- Random. We use the default Kaiming initialization (He et al., 2015) for convolution layers and constant initialization for batch norms.
- ImageNet. We use the model weights provided by Torchvision (Marcel & Rodriguez, 2010), pre-trained with the classification task on ImageNet (Deng et al., 2009).
- MIM. The model is pre-trained with a masked image modeling method on the YouTube driving videos, which reconstructs images with randomly masked-out patches. SimMIM (Xie et al., 2022) is adopted as it is suitable for convolutional networks.
- MoCo. We pre-train the model using MoCo-v2 (Chen et al., 2020c) on the YouTube driving videos. We exclude the RandomResizedCrop and RandomHorizontalFlip augmentations as they are not suitable for the driving task.
- ACO. Following Zhang et al. (2022b), it is pre-trained using action-conditioned contrastive learning on the YouTube driving videos. ACO trains an inverse dynamics model to generate pseudo steering labels for the driving videos, based on which steer-based discrimination is added on top of MoCo-v2.
- SelfD. SelfD (Zhang et al., 2022a) is strictly speaking not a pre-training method, since it needs to train the whole policy model on the driving videos for each task, while the other pre-training methods above provide one general pre-trained visual model for all tasks. We still include it for comparison due to its close relationship to our target. Specifically, we follow Zhang et al. (2022a) to train the model for each task with the following pipeline: training on the task data → training on the YouTube data with pseudo-labels → fine-tuning on the task data.

3.2 DESCRIPTION OF DOWNSTREAM AUTONOMOUS DRIVING TASKS

We carry out experiments on (1) three imitation-learning-based closed-loop driving tasks in CARLA (Dosovitskiy et al., 2017), (2) one reinforcement-learning-based driving task in CARLA, and (3) an open-loop planning task on the real-world autonomous driving dataset nuScenes (Caesar et al., 2020), to fully validate the effectiveness of PPGeo. We briefly describe each task below.

Navigation. This corresponds to the goal-conditioned navigation task in the CoRL2017 benchmark (Dosovitskiy et al., 2017). The agent is trained in Town01 and tested in Town02 with unseen weather, and there are no other traffic participants. We use different sizes of training data (from 4K to 40K samples) following Zhang et al. (2022b)
to evaluate the generalization ability of pre-trained visual encoders when labeled data is limited, and conduct closed-loop evaluation. The evaluation metric is success rate, denoting the portion of 50 pre-defined routes finished without any collision; traffic lights are ignored in this task. CILRS (Codevilla et al., 2019), a classic image-based end-to-end autonomous driving model, is adopted for training and evaluation.

Navigation Dynamic. This is the navigation dynamic task in the CoRL2017 benchmark (Dosovitskiy et al., 2017). The setting differs from Navigation in that there are other dynamic objects, such as randomly generated vehicles, which substantially increases the difficulty of driving safely.

Leaderboard Town05-long. This challenging and realistic benchmark corresponds to the CARLA LeaderBoard benchmark (CARLA, 2022). We collect 40K training samples in Town01, 03, 04, and 06, and evaluate on 10 routes in the unseen Town05 (Prakash et al., 2021). Due to the challenging scenarios in this task, we evaluate the different pre-training approaches with the state-of-the-art image-based autonomous driving model TCP (Wu et al., 2022). The major metrics of this task are Driving Score, Route Completion, and Infraction Score (all higher is better).
Route Completion denotes the portion of the route completed by the agent. Infraction Score reflects the infractions made along the route, including pedestrian collisions, vehicle collisions, red light infractions, etc. The main metric, Driving Score, is the product of Route Completion and Infraction Score.

Table 1: The Success Rate results of the closed-loop Navigation task (mean over 3 random trials). Columns correspond to the number of training samples.
Pre-train Method | 10% (4K)   | 20% (8K)   | 40% (16K)  | 100% (40K)
Random           | 0.0 ± 0.0  | 9.6 ± 5.2  | 15.3 ± 4.5 | 73.3 ± 2.3
ImageNet         | 24.7 ± 2.0 | 42.0 ± 2.0 | 69.3 ± 6.4 | 87.3 ± 4.6
MIM              | 4.7 ± 1.2  | 8.0 ± 0.0  | 31.3 ± 2.3 | 57.3 ± 3.1
MoCo             | 7.7 ± 2.1  | 39.3 ± 9.2 | 48.7 ± 4.2 | 69.3 ± 1.2
ACO              | 24.0 ± 2.0 | 44.0 ± 1.2 | 71.3 ± 1.2 | 92.0 ± 3.5
SelfD            | 12.0 ± 0.0 | 32.0 ± 0.0 | 50.7 ± 2.3 | 62.7 ± 1.2
PPGeo (ours)     | 42.0 ± 2.0 | 73.3 ± 6.1 | 91.3 ± 1.2 | 96.7 ± 1.2

Reinforcement Learning. Proximal Policy Optimization (PPO) (Schulman et al., 2017) is used to train the CILRS (Codevilla et al., 2019) model, initialized with different pre-trained weights, in the CARLA Town01 environment. The reward shaping details follow Roach (Zhang et al., 2021). We also conduct experiments in which the pre-trained visual encoder is frozen during training, to further study the effectiveness of the pre-trained feature representations.
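To make the frozen-encoder setting concrete, below is a minimal sketch (not the released code) of how a pre-trained visual encoder could be loaded and optionally frozen before PPO fine-tuning; the ResNet-34 backbone, checkpoint name, and helper function are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

# Hypothetical sketch: initialize the driving model's visual encoder from a
# pre-trained checkpoint and optionally freeze it, mirroring the two RL
# settings (jointly fine-tuned vs. frozen encoder).

def build_encoder(ckpt_path: str, freeze: bool) -> nn.Module:
    encoder = resnet34(weights=None)              # backbone assumed; adjust to the actual model
    encoder.fc = nn.Identity()                    # expose the feature vector instead of logits
    state = torch.load(ckpt_path, map_location="cpu")
    encoder.load_state_dict(state, strict=False)  # pre-trained weights (e.g., from PPGeo)
    if freeze:
        for p in encoder.parameters():            # frozen variant: only the policy head is trained
            p.requires_grad = False
        encoder.eval()
    return encoder

# encoder = build_encoder("ppgeo_encoder.pth", freeze=True)  # hypothetical checkpoint name
```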
nuScenes Planning. This task involves trajectory planning on the real-world dataset nuScenes (Caesar et al., 2020). Given the current visual input, the model plans a 3-second trajectory (0.5 Hz), and the planned trajectory is compared with the ground-truth log. We also calculate the collision rate, where a collision is defined as an overlap with future vehicles or pedestrians based on the planned waypoints. The metrics of this task are (1) the L2 distance between the predicted trajectory and the ground-truth trajectory, and (2) the collision rate, both measured at time horizons from 1s to 3s. The planning model used here comprises a visual encoder and a GRU-based planner that predicts each waypoint auto-regressively. We use the official train-val split for training and evaluation.
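As a rough illustration of the planner just described, the sketch below shows one way a GRU can decode waypoints auto-regressively from an image feature; the feature dimension, horizon length, and layer sizes are assumptions, not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

# Minimal GRU-based auto-regressive waypoint planner, assuming a 512-d image
# feature and a 6-step horizon for illustration.

class GRUPlanner(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 256, steps: int = 6):
        super().__init__()
        self.steps = steps
        self.init_h = nn.Linear(feat_dim, hidden)       # image feature -> initial hidden state
        self.cell = nn.GRUCell(input_size=2, hidden_size=hidden)
        self.head = nn.Linear(hidden, 2)                # per-step (dx, dy) offset

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        h = self.init_h(feat)
        wp = feat.new_zeros(feat.size(0), 2)            # start from the ego position
        outputs = []
        for _ in range(self.steps):                     # feed the last waypoint back in
            h = self.cell(wp, h)
            wp = wp + self.head(h)                      # accumulate offsets into waypoints
            outputs.append(wp)
        return torch.stack(outputs, dim=1)              # (B, steps, 2) trajectory

# traj = GRUPlanner()(torch.randn(4, 512))  # toy usage; L2 to the logged trajectory is the metric
```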
3.3 NUMERIC COMPARISON ON DOWNSTREAM TASKS

For the imitation learning based closed-loop driving tasks, the evaluation results are shown in Tables 1-3. For the reinforcement learning experiments, we plot the episode return against environment steps for each method in Fig. 3. The open-loop nuScenes planning results are provided in Table 4. We observe that PPGeo outperforms the other baselines by a large margin in all tasks. Note that the model is tested under different numbers of fine-tuning samples, from 10% (4K) to the full 40K, in the Navigation and Navigation Dynamic tasks. With particularly small training sets, PPGeo still demonstrates competitive performance, with an improvement gap of over 100%. This validates the generalization ability of the pre-trained visual encoder, which is important when adapting to a new environment with very limited labeled data. In the more challenging and realistic Leaderboard Town05-long task in Table 3, the model pre-trained with our method achieves the highest Driving Score and Infraction Score. PPGeo handles the cases where the agent needs to stop particularly well, leading to much fewer vehicle collisions and red light infractions. Since ACO considers steering angles only during pre-training, its performance degrades in more challenging scenarios where brake and throttle are also important. SelfD performs slightly better than ACO in complex cases, while it degenerates significantly when the task data is limited, as it is affected by the unsatisfactory pseudo-labeling model. ImageNet pre-training also shows competitive performance, which might be credited to its ability to find salient objects in the scene when the input contains little irrelevant information (see examples in Sec. 3.5).

3.4 DEPTH AND ODOMETRY ESTIMATION

In this part, we explore whether large-scale training on uncalibrated data could also benefit depth and odometry estimation models, and thereby validate the effectiveness of the first-stage training. Specifically, we employ the DepthNet and PoseNet trained after the first stage as initial weights for Monodepthv2 (Godard et al., 2019), and conduct experiments on KITTI (Geiger et al., 2012). Results in Table 5 indicate that pre-training on large-scale driving videos brings performance improvements to both the depth and odometry estimation tasks, which is an additional harvest of our pre-training framework. We refer readers to Godard et al. (2019) for details about the metrics of these tasks.
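For readers who want the metric definitions at a glance, the snippet below sketches the standard monocular depth evaluation measures reported in Table 5, following the conventions used by Monodepth2; it assumes the usual masking and median scaling have already been applied to the depth maps.

```python
import numpy as np

# Standard monocular depth metrics (abs rel, sq rel, rmse, rmse log, and the
# delta-threshold accuracies a1/a2/a3); `pred` and `gt` are matching depth maps.

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": np.mean(np.abs(gt - pred) / gt),
        "sq_rel": np.mean(((gt - pred) ** 2) / gt),
        "rmse": np.sqrt(np.mean((gt - pred) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        "a1": np.mean(thresh < 1.25),        # delta < 1.25
        "a2": np.mean(thresh < 1.25 ** 2),   # delta < 1.25^2
        "a3": np.mean(thresh < 1.25 ** 3),   # delta < 1.25^3
    }
```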
Table 2: The Success Rate results of the closed-loop Navigation Dynamic task (mean over 3 random trials). Columns correspond to the number of training samples.
Pre-train Method | 10% (4K)   | 20% (8K)   | 40% (16K)  | 100% (40K)
Random           | 0.0 ± 0.0  | 2.0 ± 0.0  | 10.0 ± 0.0 | 32.0 ± 8.0
ImageNet         | 10.7 ± 1.2 | 28.7 ± 5.0 | 64.7 ± 2.3 | 72.7 ± 1.2
MIM              | 7.3 ± 1.2  | 10.3 ± 2.5 | 14.7 ± 3.1 | 58.7 ± 1.2
MoCo             | 4.7 ± 1.2  | 12.0 ± 4.0 | 28.0 ± 5.3 | 66.7 ± 2.3
ACO              | 8.0 ± 1.2  | 12.0 ± 0.0 | 22.0 ± 2.0 | 47.3 ± 5.0
SelfD            | 8.0 ± 0.0  | 29.3 ± 1.2 | 38.0 ± 1.6 | 59.3 ± 6.4
PPGeo (ours)     | 23.3 ± 1.2 | 34.0 ± 5.3 | 71.3 ± 1.2 | 84.0 ± 5.3

Table 3: Closed-loop Leaderboard Town05-long task results. Besides the three main metrics, infraction details are also reported (all the lower the better). Evaluation repeats 3 times with the mean reported.
Pre-train Method | Driving Score | Infraction Score | Route Completion | Collisions pedestrian | Collisions vehicle | Collisions layout | Off-road violations | Agent blocked | Red light violations
Random   | 33.50±1.67 | 0.65±0.02 | 60.49±2.93 | 0.09±0.07 | 1.16±0.40 | 0.00±0.00 | 0.44±0.13 | 0.97±0.09 | 0.53±0.12
ImageNet | 41.29±3.20 | 0.77±0.03 | 57.52±4.87 | 0.00±0.00 | 0.71±0.20 | 0.11±0.15 | 0.15±0.01 | 1.01±0.16 | 0.29±0.10
MIM      | 36.39±0.21 | 0.72±0.04 | 61.75±2.26 | 0.14±0.11 | 0.91±0.12 | 0.04±0.07 | 0.18±0.17 | 0.87±0.03 | 0.14±0.11
MoCo     | 32.10±2.04 | 0.65±0.02 | 64.09±4.01 | 0.13±0.11 | 0.79±0.16 | 0.00±0.00 | 0.49±0.07 | 0.81±0.15 | 0.45±0.13
ACO      | 33.05±3.05 | 0.67±0.06 | 59.52±3.21 | 0.00±0.00 | 0.69±0.28 | 0.05±0.07 | 0.54±0.05 | 0.94±0.08 | 0.73±0.10
SelfD    | 38.76±3.02 | 0.65±0.03 | 68.72±7.36 | 0.17±0.07 | 0.84±0.18 | 0.00±0.00 | 0.32±0.03 | 0.75±0.15 | 0.12±0.08
PPGeo    | 47.44±5.63 | 0.79±0.08 | 65.05±5.11 | 0.04±0.05 | 0.54±0.29 | 0.00±0.00 | 0.16±0.11 | 0.76±0.10 | 0.04±0.05

Figure 3: Learning curves of the RL agents using PPGeo and the three other best pre-training baselines. Left: the pre-trained visual encoder is jointly fine-tuned during RL training; Right: the visual encoder is frozen during RL training. The episode return is the mean, with the standard deviation shaded, across three runs with different random seeds. [Panels: "Visual Encoder Fine-tuning" and "Visual Encoder Frozen"; axes: environment steps (K) vs. episode return; legend: ImageNet, MoCo, ACO, PPGeo.]
Table 4: Open-loop nuScenes planning results. We evaluate the ℓ2 distance between model predictions and the ground-truth trajectory, and the collision rate, at horizons from 1 second to 3 seconds.
Pre-train Method | L2 (m) ↓ 1s / 2s / 3s | Collision Rate (%) ↓ 1s / 2s / 3s
Random       | 1.621 / 2.722 / 3.851 | 0.550 / 1.779 / 3.375
ImageNet     | 1.331 / 2.202 / 3.086 | 0.315 / 0.550 / 1.366
MIM          | 1.412 / 2.357 / 3.331 | 0.297 / 0.622 / 1.507
MoCo         | 1.528 / 2.545 / 3.585 | 0.560 / 1.235 / 2.390
ACO          | 1.496 / 2.496 / 3.519 | 0.446 / 1.178 / 2.223
SelfD        | 1.419 / 2.359 / 3.316 | 0.353 / 0.923 / 2.044
PPGeo (ours) | 1.302 / 2.154 / 3.018 | 0.270 / 0.425 / 0.941

Table 5: Improvement from our pre-training method on the depth and odometry estimation tasks.
Pre-train Method | Depth Estimation: abs rel ↓ / sq rel ↓ / rmse ↓ / rmse log ↓ / a1 ↑ / a2 ↑ / a3 ↑ | Odometry Estimation: Sequence 09 ↓ / Sequence 10 ↓
ImageNet | 0.118 / 0.902 / 4.873 / 0.196 / 0.871 / 0.958 / 0.981 | 0.017±0.010 / 0.015±0.010
PPGeo    | 0.114 / 0.805 / 4.599 / 0.186 / 0.874 / 0.962 / 0.984 | 0.016±0.009 / 0.013±0.009

Figure 4: Eigen-Cam (Muhammad & Yeasin, 2020) activation maps of the learned representations from different pre-training methods on the driving video data. [Column labels: Ours, ACO, ImageNet, MoCo, Origin.]
Table 6: Ablation study on key designs of PPGeo on the Navigation task. Columns correspond to the number of training samples.
# | Experiment                 | 10% (4K)   | 20% (8K)   | 40% (16K)  | 100% (40K)
1 | Single stage               | 24.2 ± 2.0 | 53.3 ± 1.2 | 79.3 ± 4.2 | 92.7 ± 2.3
2 | No frozen in 2nd stage     | 32.7 ± 1.2 | 58.0 ± 2.0 | 86.0 ± 2.1 | 92.0 ± 2.0
3 | PoseNet direct supervision | 18.0 ± 2.0 | 52.0 ± 2.0 | 76.7 ± 1.2 | 90.0 ± 0.0
4 | PPGeo                      | 42.0 ± 2.0 | 73.3 ± 6.1 | 91.3 ± 1.2 | 96.7 ± 1.2

3.5 VISUALIZATION RESULTS

Here we provide heatmaps of the feature representations learned by different pre-training methods, obtained with Eigen-Cam (Muhammad & Yeasin, 2020), to show the attended regions in Fig. 4. In many cases (Rows 1&2), our model mainly concentrates on the lane in front of the ego vehicle, which is highly related to driving. PPGeo also captures well the specific cues that cause the brake action, including front vehicles (Rows 3&4) and traffic lights (Row 5). We further observe that the model pre-trained with ImageNet classification tends to capture salient objects in the image. This is helpful when the salient objects are straightforwardly related to the driving decision (Row 4), but it may focus on the wrong objects when the input contains other irrelevant information (Rows 2&3).
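Since Eigen-Cam is the only tool used to produce these maps, a brief sketch of how such a map can be computed may be helpful: the saliency is the projection of the encoder's last convolutional activation onto its first principal component, so no gradients or labels are needed. The function below is an illustrative implementation under that assumption, not the paper's visualization code, and expects a single (C, H, W) feature tensor.

```python
import numpy as np
import torch

# Illustrative Eigen-CAM: project the feature map onto its first principal
# component (via SVD) and keep the positive part as a saliency map.

def eigen_cam(features: torch.Tensor) -> np.ndarray:
    c, h, w = features.shape
    flat = features.reshape(c, h * w).T.cpu().numpy()   # (H*W, C) activation matrix
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = flat @ vt[0]                                   # projection onto 1st component
    cam = np.maximum(cam, 0).reshape(h, w)               # sign may need flipping depending on the SVD
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1] for overlaying on the image
```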
3.6 ABLATIVE STUDY
We conduct an ablation study on different design choices of PPGeo on the Navigation task in Table 6. Training the visual encoder and the DepthNet simultaneously in a single stage (Row 1) leads to inferior performance, indicating that it is quite challenging for the visual encoder to learn the correct ego-motion if depth estimation is also trained from scratch. Moreover, jointly optimizing the DepthNet in the second stage (Row 2, not frozen) degrades the depth estimation quality and harms the performance. In Row 3, we observe that directly using the PoseNet obtained in the first stage to provide pseudo-label supervision leads to inferior results, since inaccurate pseudo labels impair the learning process to a great extent.
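To make the ablated designs concrete, the following is a minimal PyTorch sketch of the second-stage objective: the frozen first-stage DepthNet provides depth, the visual encoder (together with its ego-motion head, denoted encoder below) predicts a 6-DoF motion from the current frame only, and a Monodepth2-style view-synthesis (photometric) error supervises it. The plain L1 photometric term, the (3, 3) intrinsics argument, and all names are simplifications for illustration, not the exact implementation.

import torch
import torch.nn.functional as F

def axis_angle_to_matrix(v):
    # Rodrigues' formula: (B, 3) axis-angle vectors -> (B, 3, 3) rotation matrices.
    theta = v.norm(dim=1, keepdim=True).clamp(min=1e-8)
    k = v / theta
    skew = torch.zeros(v.shape[0], 3, 3, device=v.device, dtype=v.dtype)
    skew[:, 0, 1], skew[:, 0, 2] = -k[:, 2], k[:, 1]
    skew[:, 1, 0], skew[:, 1, 2] = k[:, 2], -k[:, 0]
    skew[:, 2, 0], skew[:, 2, 1] = -k[:, 1], k[:, 0]
    eye = torch.eye(3, device=v.device, dtype=v.dtype).expand_as(skew)
    s, c = torch.sin(theta)[..., None], torch.cos(theta)[..., None]
    return eye + s * skew + (1 - c) * (skew @ skew)

def photometric_loss(img_t, img_tp1, depth_t, motion, intrinsics):
    # Warp img_tp1 into frame t using depth_t and the predicted motion, then compare with img_t.
    B, _, H, W = img_t.shape
    R = axis_angle_to_matrix(motion[:, :3])                             # (B, 3, 3)
    trans = motion[:, 3:].unsqueeze(-1)                                 # (B, 3, 1)
    ys, xs = torch.meshgrid(torch.arange(H, device=img_t.device),
                            torch.arange(W, device=img_t.device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().view(1, 3, -1)
    cam = torch.linalg.inv(intrinsics) @ pix * depth_t.view(B, 1, -1)   # back-project pixels
    proj = intrinsics @ (R @ cam + trans)                               # transform and re-project
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    u = uv[:, 0] / (W - 1) * 2 - 1                                      # normalise for grid_sample
    v = uv[:, 1] / (H - 1) * 2 - 1
    grid = torch.stack([u, v], -1).view(B, H, W, 2)
    img_t_hat = F.grid_sample(img_tp1, grid, padding_mode="border", align_corners=True)
    return F.l1_loss(img_t_hat, img_t)                                  # SSIM term omitted for brevity

def second_stage_step(encoder, depth_net, img_t, img_tp1, intrinsics):
    with torch.no_grad():
        depth_t = depth_net(img_t)           # (B, 1, H, W); frozen DepthNet (ablated in Row 2)
    motion = encoder(img_t)                  # (B, 6) ego-motion predicted from a single frame
    return photometric_loss(img_t, img_tp1, depth_t, motion, intrinsics)
    # The Row-3 variant instead regresses `motion` onto the frozen PoseNet's two-frame
    # estimate with an L1 loss (pseudo-label supervision) and drops the photometric term.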
4 RELATED WORK
Pre-training for NLP and General Vision. Pre-training, or representation learning, has proved to be key to the success of artificial intelligence. In the field of Natural Language Processing (NLP), with the powerful capability of the Transformer (Vaswani et al., 2017), pre-training large models on large-scale datasets and then fine-tuning on downstream tasks has become the dominant paradigm (Kenton & Toutanova, 2019; Brown et al., 2020). In Computer Vision, initializing the visual encoder with weights obtained from supervised ImageNet classification pre-training is widely adopted for specific downstream tasks. Recently, unsupervised and self-supervised learning methods such as contrastive learning (He et al., 2020; Chen et al., 2020c;b) and masked image modeling (Bao et al., 2021; He et al., 2022; Xie et al., 2022; Peng et al., 2022) have achieved impressive improvements over ImageNet pre-training on various vision benchmarks. Very recent vision-language co-training approaches (Radford et al., 2021; Wang et al., 2022) demonstrate extraordinary potential for multi-modal learning and applications. Yet, these generic representation learning methods adopt various data augmentation techniques to achieve translation and view invariance, whereas visuomotor driving is set in a highly dynamic environment. In this work, we show that these widely successful pre-training methods may not be the optimal choice, and we introduce a curated paradigm for visuomotor driving policy learning.

Pre-training for Visuomotor Applications.
Learning a control policy directly from raw visual input is challenging, since the model needs to reason about visual pixels and dynamic behaviors simultaneously. Moreover, training visuomotor models from scratch usually requires large amounts of labeled data or environment interactions. To this end, Shah & Kumar (2021) show that feature representations from a ResNet (He et al., 2016) pre-trained on ImageNet classification are helpful for RL-based dexterous manipulation tasks. Parisi et al. (2022) conduct extensive experiments on applying "off-the-shelf" pre-trained vision models in diverse control domains and validate their benefits for training control policies. CLIP (Radford et al., 2021) is also adopted in some embodied AI and robot navigation problems (Shah et al., 2022). Besides borrowing pre-trained weights for visuomotor tasks, researchers in robotics now desire a paradigm that learns policy representations directly from raw data. Xiao et al. (2022); Radosavovic et al. (2022); Seo et al. (2022); Gupta et al. (2022) inherit the spirit of masked image modeling to realize visual pre-training for control tasks.
Yang & Nachum (2021) investigate unsupervised representation learning objectives in D4RL environments (Fu et al., 2020), and Yamada et al. (2022) further adopt task-induced approaches to learn from prior tasks. However, compared with visuomotor driving, the visual inputs of such control tasks are less diverse: they usually concentrate on objects and are much more compact. To the best of our knowledge, ACO (Zhang et al., 2022b) is the only pre-training method customized for autonomous driving. By first training an inverse dynamics model on nuScenes (Caesar et al., 2020), they obtain pseudo steering labels for the driving videos and then construct a steering-conditioned discrimination task for contrastive learning following MoCo. However, ACO ignores other crucial driving factors such as throttle and brake, and its performance is largely limited by the inverse dynamics model. SelfD (Zhang et al., 2022a) is not strictly designed for pre-training, but it also makes use of vast amounts of videos to learn driving policies via semi-supervised learning; it acquires the pseudo-labeling knowledge from the target domain. Both methods depend on the accuracy of pseudo labeling. In contrast, we realize fully self-supervised learning through dense geometric reconstruction, avoiding such possible adverse effects.

Policy Learning for Autonomous Driving.
Visuomotor autonomous driving learns a driving policy directly from sensor inputs in an end-to-end manner (Codevilla et al., 2018; 2019; Liang et al., 2018; Chen et al., 2020a; Prakash et al., 2021; Chen et al., 2021; Wu et al., 2022; Shao et al., 2022). In essence, the inherent difficulty of urban-style autonomous driving tasks makes such methods data-hungry. Interfuser (Shao et al., 2022), currently the top-ranked method on the CARLA Leaderboard (CARLA, 2022), requires 3 million labeled data samples for imitation learning (behavior cloning specifically). The RL-based model MaRLn (Toromanoff et al., 2020) needs 20 million environment steps of interaction.
The sample efficiency problem greatly impedes the real-world application of such approaches. In this work, we propose a self-supervised pre-training pipeline to learn driving policy related representations on unlabeled driving videos, paving the way for visuomotor autonomous driving models to further achieve satisfying performance.

5 CONCLUSION AND DISCUSSION
In this work, we have proposed PPGeo, a fully self-supervised visuomotor driving policy pre-training paradigm that models the 3D geometry of large-scale unlabeled driving videos. Taking a direct approach to infer the ego-motion and benefiting from the two-stage pre-training pipeline, we enable the visual encoder to learn driving policies based on a single visual input. Our method outperforms peer pre-training approaches by a large margin on a series of visuomotor driving tasks. As for limitations, our method currently only considers the ego-motion for a single time step; a future direction is to extend the framework to multi-step motion prediction, which contains more information about driving decisions.

REFERENCES
Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. In ICLR, 2021.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020.
Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving. In CVPR, 2020.
CARLA. CARLA autonomous driving leaderboard. https://leaderboard.carla.org/, 2022.
Sai Shyam Chanduri, Zeeshan Khan Suri, Igor Vozniak, and Christian Müller. CamLessMonoDepth: Monocular depth estimation with unknown camera parameters. arXiv preprint arXiv:2110.14347, 2021.
Annie S Chen, Suraj Nair, and Chelsea Finn. Learning generalizable robotic reward functions from "in-the-wild" human videos. In RSS, 2021.
Dian Chen, Brady Zhou, Vladlen Koltun, and Philipp Krähenbühl. Learning by cheating. In CoRL, 2020a.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020b.
Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020c.
Felipe Codevilla, Matthias Müller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning. In ICRA, 2018.
Felipe Codevilla, Eder Santana, Antonio M López, and Adrien Gaidon. Exploring the limitations of behavior cloning for autonomous driving. In ICCV, 2019.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. In CoRL, 2017.
Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In ICML, 2018.
Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In CVPR, 2012.
Clément Godard, Oisin Mac Aodha, and Gabriel J. Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017.
Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J. Brostow. Digging into self-supervised monocular depth prediction. In ICCV, 2019.
Ariel Gordon, Hanhan Li, Rico Jonschkowski, and Anelia Angelova. Depth from videos in the wild: Unsupervised monocular depth learning from unknown cameras. In ICCV, 2019.
Agrim Gupta, Stephen Tian, Yunzhi Zhang, Jiajun Wu, Roberto Martín-Martín, and Li Fei-Fei. MaskViT: Masked visual pre-training for video prediction. arXiv preprint arXiv:2206.11894, 2022.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In CVPR, 2022.
Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In AAAI, 2018.
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, and Aravind Srinivas. Reinforcement learning with augmented data. In NeurIPS, 2020.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. JMLR, 2016.
Xiaodan Liang, Tairui Wang, Luona Yang, and Eric Xing. CIRL: Controllable imitative reinforcement learning for vision-based self-driving. In ECCV, 2018.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Sébastien Marcel and Yann Rodriguez. Torchvision: The machine-vision package of Torch. In ACMMM, 2010.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 2015.
Mohammed Bany Muhammad and Mohammed Yeasin. Eigen-CAM: Class activation map using principal components. In IJCNN, 2020.
Simone Parisi, Aravind Rajeswaran, Senthil Purushwalkam, and Abhinav Kumar Gupta. The unsurprising effectiveness of pre-trained vision models for control. In ICML, 2022.
Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, and Furu Wei. BEiT v2: Masked image modeling with vector-quantized visual tokenizers. arXiv preprint arXiv:2208.06366, 2022.
Aditya Prakash, Kashyap Chitta, and Andreas Geiger. Multi-modal fusion transformer for end-to-end autonomous driving. In CVPR, 2021.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021.
Ilija Radosavovic, Tete Xiao, Stephen James, Pieter Abbeel, Jitendra Malik, and Trevor Darrell. Real-world robot learning with masked visual pre-training. In CoRL, 2022.
Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. In ICLR, 2020.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Younggyo Seo, Danijar Hafner, Hao Liu, Fangchen Liu, Stephen James, Kimin Lee, and Pieter Abbeel. Masked world models for visual control. arXiv preprint arXiv:2206.14244, 2022.
Dhruv Shah, Blazej Osinski, Brian Ichter, and Sergey Levine. LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action. In CoRL, 2022.
Rutav Shah and Vikash Kumar. RRL: ResNet as representation for reinforcement learning. In ICML, 2021.
Hao Shao, Letian Wang, Ruobing Chen, Hongsheng Li, and Yu Liu. Safety-enhanced autonomous driving using interpretable sensor fusion transformer. In CoRL, 2022.
Marin Toromanoff, Emilie Wirbel, and Fabien Moutarde. End-to-end model-free reinforcement learning for urban driving using implicit affordances. In CVPR, 2020.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, and Furu Wei. Image as a foreign language: BEiT pretraining for all vision and vision-language tasks. arXiv preprint arXiv:2208.10442, 2022.
Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: From error visibility to structural similarity. TIP, 2004.
Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra. DD-PPO: Learning near-perfect PointGoal navigators from 2.5 billion frames. arXiv preprint arXiv:1911.00357, 2019.
Penghao Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, and Yu Qiao. Trajectory-guided control prediction for end-to-end autonomous driving: A simple yet strong baseline. In NeurIPS, 2022.
Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018.
Tete Xiao, Ilija Radosavovic, Trevor Darrell, and Jitendra Malik. Masked visual pre-training for motor control. arXiv preprint arXiv:2203.06173, 2022.
Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. SimMIM: A simple framework for masked image modeling. In CVPR, 2022.
Jun Yamada, Karl Pertsch, Anisha Gunjal, and Joseph J Lim. Task-induced representation learning. In ICLR, 2022.
Mengjiao Yang and Ofir Nachum. Representation matters: Offline pretraining for sequential decision making. In ICML, 2021.
Denis Yarats, Ilya Kostrikov, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In ICLR, 2020.
Jimuyang Zhang, Ruizhao Zhu, and Eshed Ohn-Bar. SelfD: Self-learning large-scale driving policies from the web. In CVPR, 2022a.
Qihang Zhang, Zhenghao Peng, and Bolei Zhou. Learning to drive by watching YouTube videos: Action-conditioned contrastive policy pretraining. In ECCV, 2022b.
Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In ECCV, 2016.
Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu, and Luc Van Gool. End-to-end urban driving by imitating a reinforcement learning coach. In ICCV, 2021.

POLICY PRE-TRAINING FOR AUTONOMOUS DRIVING VIA SELF-SUPERVISED GEOMETRIC MODELING
Supplementary Materials

In this supplementary document, we first provide detailed network structures in Sec. A. More description and visual illustrations of the downstream tasks are discussed in Sec. B. Last, we discuss limitations and common failure cases in Sec. C.

A NETWORK DETAILS
For all experiments, the backbone of the visual encoder is ResNet-34 (He et al., 2016), and its detailed structure is provided in Table 7. For the DepthNet and PoseNet, we follow the same model structure as Godard et al. (2019), with a two-layer MLP focal-length head and a two-layer MLP optical-center head added to the bottleneck of the PoseNet to predict the intrinsic matrix; please refer to Godard et al. (2019) for model details.
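The learned-intrinsics heads described above can be sketched as follows. The bottleneck feature width, the hidden width, and the softplus/sigmoid scalings that keep the focal lengths positive and the principal point inside the image are assumptions in the spirit of learned-intrinsics depth estimation (e.g., Gordon et al., 2019), not the exact released design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class IntrinsicsHead(nn.Module):
    # Two two-layer MLP heads on the (globally pooled) PoseNet bottleneck feature that
    # predict the focal lengths and the optical center of the intrinsic matrix K.
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        self.focal = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))
        self.center = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, feat, height, width):
        scale = feat.new_tensor([width, height])
        f = F.softplus(self.focal(feat)) * scale      # fx, fy > 0, expressed in pixels
        c = torch.sigmoid(self.center(feat)) * scale  # cx, cy constrained inside the image
        K = torch.zeros(feat.shape[0], 3, 3, device=feat.device)
        K[:, 0, 0], K[:, 1, 1] = f[:, 0], f[:, 1]
        K[:, 0, 2], K[:, 1, 2] = c[:, 0], c[:, 1]
        K[:, 2, 2] = 1.0
        return K

K = IntrinsicsHead()(torch.randn(4, 256), height=192, width=640)   # (4, 3, 3) intrinsic matrices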
For the Navigation, Navigation Dynamic, and Reinforcement Learning tasks, we use CILRS (Codevilla et al., 2019); the model details are provided in Table 8. For the Leaderboard Town05-long task, TCP (Wu et al., 2022) is chosen as our agent, and we refer readers to Wu et al. (2022) for model details. For the nuScenes Planning task, the trajectory planning model structure is shown in Table 9.

Table 7: Detailed structure of the visual encoder.

Layer Type            Channels    Stride   Kernel Size   Activation Function
Image Encoder         ResNet-34
Measurement Encoder
  Conv                256         1        1             ReLU
  Conv                256         3        1             ReLU
  Conv                256         3        1             ReLU
  Conv                6           1        1             ReLU
  Average Pooling

Table 8: Detailed structure of the CILRS model.

Layer Type            Dims in     Dims out   Activation Function
Image Encoder         ResNet-34   512
Speed Encoder
  FC                  1           256        ReLU
  FC                  256         512
Speed Pred Head
  FC                  512         256        ReLU
  FC                  256         256        ReLU
  FC                  256         256        ReLU
Control Pred Head
  FC                  512         256        ReLU
  FC                  256         256        ReLU
  FC                  256         3          Sigmoid

Table 9: Detailed structure of the trajectory planning model.

Image Encoder         ResNet-34

Bottleneck
Layer Type            Dims in     Dims out   Activation Function
FC                    512         256        ReLU
FC                    256         256

Decoder
Layer Type            Hidden Dim  Input Dim  Output Dim
GRU                   256         2          2
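To make Table 8 concrete, a rough PyTorch sketch of a CILRS-style model is given below. The additive fusion of the 512-d image and speed features, the final scalar layer of the speed-prediction head, and the reading of the three sigmoid outputs as control values are assumptions consistent with the table rather than the released implementation; the command-conditioned branching of the original CILRS is also omitted here.

```python
import torch.nn as nn
import torchvision

class CILRSSketch(nn.Module):
    """Rough sketch of the CILRS-style agent in Table 8 (not the released code)."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet34(weights=None)
        backbone.fc = nn.Identity()            # 512-d image feature
        self.image_encoder = backbone
        self.speed_encoder = nn.Sequential(    # measured speed -> 512-d feature
            nn.Linear(1, 256), nn.ReLU(inplace=True), nn.Linear(256, 512)
        )
        self.speed_head = nn.Sequential(       # speed prediction from the image feature
            nn.Linear(512, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),                 # assumed final scalar regression layer
        )
        self.control_head = nn.Sequential(     # 3 control outputs (presumably steer/throttle/brake)
            nn.Linear(512, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 3), nn.Sigmoid(),
        )

    def forward(self, image, speed):
        img_feat = self.image_encoder(image)            # (B, 512)
        fused = img_feat + self.speed_encoder(speed)    # assumed additive fusion
        return self.control_head(fused), self.speed_head(img_feat)
```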
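Similarly, Table 9 can be read as a feature bottleneck followed by a GRU that autoregressively rolls out 2-D waypoints. The sketch below reflects this reading; the number of waypoints, the zero-initialized first waypoint, and the linear layer mapping the 256-d hidden state to a 2-D offset are assumptions, not details taken from the table.

```python
import torch
import torch.nn as nn
import torchvision

class TrajectoryPlannerSketch(nn.Module):
    """Sketch of the nuScenes planning model in Table 9 (assumptions noted inline)."""
    def __init__(self, num_waypoints=6):
        super().__init__()
        backbone = torchvision.models.resnet34(weights=None)
        backbone.fc = nn.Identity()                        # 512-d image feature
        self.image_encoder = backbone
        self.bottleneck = nn.Sequential(                   # bottleneck MLP from Table 9
            nn.Linear(512, 256), nn.ReLU(inplace=True), nn.Linear(256, 256)
        )
        self.gru = nn.GRUCell(input_size=2, hidden_size=256)
        self.to_offset = nn.Linear(256, 2)                 # 256-d hidden -> 2-D offset (assumed)
        self.num_waypoints = num_waypoints                 # horizon length is an assumption

    def forward(self, image):
        feat = self.bottleneck(self.image_encoder(image))  # (B, 256), initial hidden state
        wp = image.new_zeros(image.shape[0], 2)            # start at the ego origin (assumed)
        h = feat
        waypoints = []
        for _ in range(self.num_waypoints):                # autoregressive waypoint roll-out
            h = self.gru(wp, h)
            wp = wp + self.to_offset(h)                    # accumulate predicted offsets
            waypoints.append(wp)
        return torch.stack(waypoints, dim=1)               # (B, num_waypoints, 2)
```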
B DOWNSTREAM TASKS DETAILS

For Navigation and Navigation Dynamic, training data is collected in Town01, and the closed-loop testing is conducted in Town02. The maps of Town01 and Town02 are shown in Fig. 5. The agent needs to follow a series of sparse waypoints to navigate from the start point to the end point while avoiding collisions. The difference between Navigation and Navigation Dynamic is that in the latter, other dynamic vehicles and pedestrians are present in the town. Examples are provided in Fig. 6. The Leaderboard Town05-long task is closer to real-world urban driving, with various challenging scenarios added along the route. The map of Town05 is shown in Fig. 5.

Figure 5: Maps of Town01, Town02, and Town05.

Figure 6: Examples of the front-view image for the Navigation and Navigation Dynamic tasks.

C LIMITATIONS

In this part, we analyze some failure cases and limitations of our method. Since the visual encoder needs to predict the future motion from a single front-view image, some factors that directly influence the driving decision may not be visible in the image (e.g., vehicles behind the ego vehicle, factors related to the driver, navigation information). Some such cases are provided in Fig. 7. In these cases, the visual encoder does not have enough information to make the correct prediction, and such samples may hamper the learning process during training. After training, one may use the disagreement between the prediction from the PoseNet and that from the visual encoder to filter out these samples and re-train the visual encoder.
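As a minimal sketch of this filtering step, assuming both networks output a 6-DoF ego-motion vector per sample and using a simple L1 disagreement threshold (the threshold value and pose parameterization are placeholders, not values from the paper):

```python
import torch

def filter_ambiguous_samples(posenet_motion, encoder_motion, threshold=0.5):
    """Keep only samples where the two-frame PoseNet and the single-frame visual
    encoder roughly agree on the ego-motion; large disagreement suggests the
    future motion cannot be inferred from the current frame alone.

    posenet_motion, encoder_motion: (N, 6) ego-motion predictions (e.g. axis-angle + translation).
    threshold: hypothetical L1 disagreement cut-off, to be tuned on held-out data.
    Returns a boolean mask over the N training samples.
    """
    disagreement = (posenet_motion - encoder_motion).abs().mean(dim=1)  # (N,)
    return disagreement < threshold

# Example usage: keep the consistent samples and re-train the visual encoder on them.
# keep_mask = filter_ambiguous_samples(posenet_preds, encoder_preds)
# clean_dataset = [s for s, keep in zip(dataset, keep_mask.tolist()) if keep]
```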
Figure 7: Failure cases where the driving decision/future motion cannot be inferred from I_t. For the cases in Row 1 and Row 2, by comparing I_t and I_{t+1}, we know that the ego vehicle stops; however, there is no clear cue in I_t indicating that it should stop. For the case in Row 3, the ego vehicle is turning left, yet the turning direction can hardly be told from I_t alone.