diff --git "a/D9FJT4oBgHgl3EQfBywq/content/tmp_files/load_file.txt" "b/D9FJT4oBgHgl3EQfBywq/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/D9FJT4oBgHgl3EQfBywq/content/tmp_files/load_file.txt" @@ -0,0 +1,1570 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf,len=1569 +page_content='Model-based Offline Reinforcement Learning with Local Misspecification Kefan Dong*, Yannis Flet-Berliac*, Allen Nie*, Emma Brunskill Stanford University {kefandong,yfletberliac,anie,ebrun}@stanford.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='edu Abstract We present a model-based offline reinforcement learning pol- icy performance lower bound that explicitly captures dynam- ics model misspecification and distribution mismatch and we propose an empirical algorithm for optimal offline policy se- lection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Theoretically, we prove a novel safe policy improve- ment theorem by establishing pessimism approximations to the value function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Our key insight is to jointly consider se- lecting over dynamics models and policies: as long as a dy- namics model can accurately represent the dynamics of the state-action pairs visited by a given policy, it is possible to approximate the value of that particular policy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' We analyze our lower bound in the LQR setting and also show compet- itive performance to previous lower bounds on policy selec- tion across a set of D4RL tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Introduction Offline reinforcement learning (RL) could leverage histor- ical decisions made and their outcomes to improve data- driven decision-making in areas like marketing (Thomas et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2017), robotics (Quillen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2018;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Yu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020, 2021;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Swazinna, Udluft, and Runkler 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Singh et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020), recommendation systems (Swaminathan and Joachims 2015), etc.' 
Offline RL is particularly useful when it is possible to deploy context-specific decision policies, but it is costly or infeasible to do online reinforcement learning.

Prior work on offline RL for large state and/or action spaces has primarily focused on one of two extreme settings. One line of work makes minimal assumptions on the underlying stochastic process, requiring only no confounding, and leverages importance-sampling estimators of potential policies (e.g., Thomas, Theocharous, and Ghavamzadeh (2015); Thomas et al. (2019)). Unfortunately, such estimators have a variance that scales exponentially with the horizon (Liu et al. 2018b) and are often ill-suited to long-horizon problems.[1] An alternative, which constitutes the majority of work in offline RL, is to make a number of assumptions on the domain, the behavior data generation process, and the expressiveness of the function classes employed.

* These authors contributed equally.
[1] Marginalized importance sampling (MIS) methods (Liu et al. 2018a; Xie, Ma, and Wang 2019; Yin and Wang 2020; Liu, Bacon, and Brunskill 2020) help address this but rely on the system being Markov in the underlying state space.
The work in this space typically assumes the domain satisfies the Markov assumption, which has recently been shown in the off-policy evaluation setting to enable provably more efficient policy value estimation (Kallus and Uehara 2020). Historically, most work (e.g., Munos (2003); Farahmand, Munos, and Szepesvári (2010); Xie and Jiang (2020); Chen and Jiang (2019)) assumes the batch dataset has coverage of any state-action pairs that could be visited under any possible policy. More recent work relaxes this strong requirement using a pessimism-under-uncertainty approach that is model-based (Yu et al. 2020, 2021; Kidambi et al. 2020), model-free (Liu et al. 2020), or uses policy search (Curi, Berkenkamp, and Krause 2020; van Hasselt, Hessel, and Aslanides 2019). Such work still relies on realizability / lack-of-misspecification assumptions.
For model-free approaches, a common assumption is that the value function class can represent all policies. Liu et al. (2020) assume that the value function class is closed under (modified) Bellman backups. A recent exception is Xie and Jiang (2020), which only requires the optimal Q-function to be representable by the value function class. However, their sample complexity scales non-optimally (Xie and Jiang 2020, Theorem 2), and they also make strong assumptions on the data coverage: essentially, the dataset must visit all states with sufficient probability. Model-based approaches such as Malik et al. (2019); Yu et al. (2020) assume the dynamics model class has no misspecification.

These two lines of work hint at possibilities in the middle: can we leverage the sample-efficiency benefits of Markov structure while allowing for minimal assumptions on the data-gathering process and potential model misspecification? This can be viewed as one step towards more best-in-class results for offline RL. Such results are relatively rare in RL, which tends to focus on obtaining optimal or near-optimal policies for the underlying domain. Yet in many important applications, it may be much more practical to aim to identify a strong policy within a particular policy class. Our insight is that an algorithm may be able to use misspecified models and still exploit the Markov assumption for increased data efficiency.
In particular, we take a model-based offline RL approach that leverages dynamics models which can accurately fit the space of state-action pairs visited under a particular policy (small local misspecification), rather than being a good model of the entire possible state-action space (small global misspecification). Our work is most closely related to the recently proposed Minimax Model Learning (MML) algorithm (Voloshin, Jiang, and Yue 2021): MML optimizes for the model that minimizes a value-aware error which upper bounds the difference in policy value between the learned and real models. If the considered model class includes the true model, this can work very well, but when the models are misspecified, it can become overly conservative since it optimizes with respect to a worst-case potential state-action distribution shift.

The key feature of our algorithm is that it jointly optimizes the policy and the dynamics. Prior model-based offline RL algorithms typically estimate the dynamics first and then optimize a policy w.r.t. the learned dynamics (Yu et al. 2020, 2021; Voloshin, Jiang, and Yue 2021). But when the dynamics model class is misspecified, there may not exist a single "good dynamics" model that can approximate the value of every policy. As a result, the learned policy may have a good estimated value under the learned dynamics but poor performance in the real environment, or the learned policy may be overly conservative due to the misestimated dynamics.

Our paper makes the following contributions.
First, we provide a finite-sample bound that assumes a Markov model, leverages the pessimism principle to work with many data-gathering distributions, accounts for estimation error in the behavior policy and, most importantly, directly accounts for dynamics and value function model misspecification (see Lemma 3). We prove the misspecification error of our method is much tighter than that of other approaches because we only consider the models' ability to represent the state-action pairs visited by a particular policy. In that sense, we say our algorithm relies on small local model dynamics misspecification. In Theorem 6, we show that when the dynamics model class does not satisfy realizability, decoupling the learning of the policy and the dynamics is suboptimal. This motivates our algorithm, which jointly optimizes the policy and the model dynamics across a finite set. Because of the tighter pessimistic estimation, we can prove a novel safe policy improvement theorem (see Theorem 4) for offline policy optimization (OPO). While our primary contribution is theoretical, our proposed method for policy selection improves over the state-of-the-art MML (Voloshin, Jiang, and Yue 2021) in a simple linear Gaussian setting, and has solid performance on policy selection on a set of D4RL benchmarks.

Related Works

There is an extensive and growing body of research on offline RL, and we focus here on methods that also assume a Markov domain. Many papers focus on model-free methods (e.g., Fujimoto et al. (2018); Kumar et al. (2019, 2020)).
Nachum et al. (2019) and their follow-ups (Zhang et al. 2019; Zhang, Liu, and Whiteson 2020) learn a distribution correction term, on top of which they perform evaluation or policy optimization tasks. Uehara, Huang, and Jiang (2020); Jiang and Huang (2020) study the duality between learning Q-functions and learning importance weights. Liu et al. (2020) explicitly consider the distribution shift in offline RL and propose conservative Bellman equations. Another line of research uses model-based methods (Kidambi et al. 2020; Yu et al. 2020, 2021; Matsushima et al. 2020; Swazinna, Udluft, and Runkler 2020; Fu and Levine 2021; Farahmand, Barreto, and Nikovski 2017).
Gelada et al. (2019); Delgrange, Nowe, and Pérez (2022); Voloshin, Jiang, and Yue (2021) learn the dynamics using different loss functions. Yu et al. (2020) build an uncertainty quantification on top of the learned dynamics and select a policy that optimizes the lower confidence bound. Argenson and Dulac-Arnold (2020); Zhan, Zhu, and Xu (2021) focus on policy optimization instead of model learning.

In Table 1, we compare our error bounds with existing results. Our statistical error (introduced by the finite dataset) is comparable with VAML (Farahmand, Barreto, and Nikovski 2017), MBS-PI (Liu et al. 2020) and MML (Voloshin, Jiang, and Yue 2021). In addition, we consider misspecification errors and safe policy improvement (SPI).

Algorithm | Statistical Error                | Misspecification | SPI
VAML      | Õ(p/√n) [2]                      | ✓ (global)       | ✗
MBS-PI    | Õ(Vmax ζ / ((1−γ)² √n))          | ✓ (global)       | ✓
MML       | Rn [3]                           | ✓ (global)       | ✗
Ours      | Õ((Vmax/(1−γ)) √(ζ/n))           | ✓ (local)        | ✓

Table 1: Comparison of error bounds with prior works.

[2] VAML only considers linear function approximation, and p is the dimension of the feature vector.
[3] The Rademacher complexity. For finite hypothesis classes, the best-known upper bound is of the same order as ours.

Problem Setup

A Markov Decision Process (MDP) is defined by a tuple ⟨T, r, S, A, γ⟩. S and A denote the state and action spaces. T : S × A → ∆(S) is the transition function and r : S × A → R+ is the reward function. γ ∈ [0, 1) is the discount factor.
For a policy π : S → ∆(A), the value function is defined as

V^\pi_T(s) = \mathbb{E}_{s_0 = s,\, a_t \sim \pi(s_t),\, s_{t+1} \sim T(s_t, a_t)}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right].

Let Rmax ≜ max_{s,a} r(s, a) be the maximal reward and Vmax ≜ Rmax/(1 − γ). Without loss of generality, we assume that the initial state is fixed as s0. We use η(T, π) ≜ V^π_T(s0) to denote the expected value of policy π. Let

\rho^\pi_T(s, a) \triangleq (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t \Pr{}^\pi_T(s_t = s, a_t = a \mid s_0)

be the normalized state-action distribution when we execute policy π in a domain with dynamics model T. For simplicity, in this paper we assume the reward function is known.

An offline RL algorithm takes a dataset D = {(s_i, a_i, s′_i)}_{i=1}^n as input, where n is the size of the dataset. Each (s_i, a_i, s′_i) tuple is drawn independently from a behavior distribution µ. We assume that µ is consistent with the MDP in the sense that µ(· | s, a) = T(s, a) for all (s, a). For simplicity, we use Ê to denote the empirical distribution over the dataset D. In this paper, we assume that the algorithm has access to an estimated behavior distribution ˆµ such that TV(µ, ˆµ) is small. This estimate can easily be obtained using a separate dataset (e.g., Liu et al. (2020)).
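For intuition only, a simple count-based estimator of ˆµ on a finite state-action space is sketched below; this is our own illustration of one possible estimator, not the paper's procedure, and the function name and smoothing constant are assumed.

```python
import numpy as np

def estimate_behavior_distribution(dataset, num_states, num_actions, smoothing=1e-3):
    """Count-based estimate of the behavior distribution mu(s, a) on a finite MDP.

    dataset: iterable of (s, a, s_next) index triples drawn i.i.d. from mu.
    Returns a (num_states x num_actions) array that sums to 1.
    """
    counts = np.full((num_states, num_actions), smoothing)
    for s, a, _ in dataset:
        counts[s, a] += 1.0
    return counts / counts.sum()

# Example usage on a toy dataset of (s, a, s') index triples.
toy_data = [(0, 1, 2), (0, 1, 2), (1, 0, 3), (2, 1, 0)]
mu_hat = estimate_behavior_distribution(toy_data, num_states=4, num_actions=2)
```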
The algorithm can access three (finite) function classes G, T, and Π. G is a class of value functions, T a class of dynamics models, and Π a class of policies. We assume that g(s, a) ∈ [0, Vmax] for all g ∈ G. We use T⋆ to denote the ground-truth dynamics. Note that T⋆ may not be in T. Our goal is to return a policy π ∈ Π that maximizes η(T⋆, π).

Main Results

A standard model-based RL algorithm learns the dynamics model first, and then uses the learned dynamics to estimate the value of a policy, or to optimize it. In this approach, it is crucial to link the estimation error of the dynamics to the estimation error of the value. Therefore, as a starting point, we invoke the simulation lemma.

Lemma 1 (Simulation Lemma (Yu et al. 2020; Kakade and Langford 2002)). Consider two MDPs with dynamics T and T⋆, respectively, and the same reward function. Then,

\eta(T, \pi) - \eta(T^\star, \pi) = \frac{\gamma}{1-\gamma}\, \mathbb{E}_{(s,a)\sim\rho^\pi_T}\Big[\mathbb{E}_{s'\sim T(s,a)}[V^\pi_{T^\star}(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[V^\pi_{T^\star}(s')]\Big].   (1)

For a fixed ground-truth dynamics T⋆, we define G^π_T(s, a) = E_{s′∼T(s,a)}[V^π_{T⋆}(s′)] − E_{s′∼T⋆(s,a)}[V^π_{T⋆}(s′)].
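For intuition, the quantities in Lemma 1 can be computed exactly in a small tabular MDP. The sketch below is our own illustration (the function and variable names are assumed, not from the paper): it solves for V^π_T and the normalized occupancy ρ^π_T in closed form and evaluates the model error G^π_T for two tabular dynamics.

```python
import numpy as np

def policy_value(T, r, pi, gamma):
    """Solve V^pi_T = r_pi + gamma * P_pi V^pi_T for a tabular MDP.
    T: (S, A, S) transition tensor, r: (S, A) rewards, pi: (S, A) policy."""
    S = r.shape[0]
    P_pi = np.einsum('sa,sat->st', pi, T)          # state-to-state transitions under pi
    r_pi = (pi * r).sum(axis=1)                    # expected per-state reward under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

def occupancy(T, pi, gamma, s0):
    """Normalized state-action occupancy rho^pi_T(s, a) for start state s0."""
    S, A = pi.shape
    P_pi = np.einsum('sa,sat->st', pi, T)
    d0 = np.zeros(S); d0[s0] = 1.0
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, d0)  # state occupancy
    return d[:, None] * pi                          # rho(s, a) = d(s) * pi(a | s)

def model_error(T, T_star, r, pi, gamma):
    """G^pi_T(s, a) = E_{s'~T}[V^pi_{T*}(s')] - E_{s'~T*}[V^pi_{T*}(s')]."""
    V_star = policy_value(T_star, r, pi, gamma)
    return T @ V_star - T_star @ V_star            # (S, A) array
```

One can check Eq. (1) numerically with these helpers: the gap policy_value(T, r, pi, gamma)[s0] − policy_value(T_star, r, pi, gamma)[s0] equals gamma/(1 − gamma) times (occupancy(T, pi, gamma, s0) * model_error(T, T_star, r, pi, gamma)).sum().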
The simulation lemma states that the dynamics model will provide an accurate estimate of the policy value if E_{s′∼T(s,a)}[V^π_{T⋆}(s′)] matches E_{s′∼T⋆(s,a)}[V^π_{T⋆}(s′)]. In other words, to obtain a good estimate of a policy's value, it is sufficient to minimize the model error G^π_T(s, a). Since the value function V^π_{T⋆} is unknown, Yu et al. (2020) upper bound the model error by introducing a class of test functions G : S → R. When V^π_{T⋆} ∈ G, we have

|G^\pi_T(s, a)| \le \sup_{g\in\mathcal{G}} \big|\mathbb{E}_{s'\sim T(s,a)}[g(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[g(s')]\big|.

In an offline dataset D, we can typically observe only one sample from T⋆(s, a) per state-action pair, so the algorithm cannot compute this upper bound exactly. In addition, the distribution of the dataset D is also different from the one required by the simulation lemma, ρ^π_T. To address these issues, we explicitly introduce a density ratio w : S × A → R+. For a test function g ∈ G and a dynamics model T, let f^g_T(s, a) ≜ E_{s′∼T(s,a)}[g(s′)]. Recall that Ê denotes the empirical expectation over the dataset D. Then our model loss is defined as

\ell_w(T, g) = \big|\hat{\mathbb{E}}[\, w(s, a)\, (f^g_T(s, a) - g(s'))\, ]\big|.   (2)
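As a sketch of how the empirical loss in Eq. (2) might be computed for a tabular problem (our own illustration; the variable names and data layout are assumed), given a candidate dynamics model T, a test function g on states, and importance weights w:

```python
import numpy as np

def model_loss(dataset, T, g, w):
    """Empirical loss ell_w(T, g) = | E_hat[ w(s, a) * (E_{s'~T(s,a)}[g(s')] - g(s')) ] |.

    dataset: list of (s, a, s_next) index triples.
    T: (S, A, S) candidate transition tensor, g: (S,) test-function values,
    w: (S, A) importance weights (e.g., a truncated density ratio)."""
    f_g = T @ g                                   # f^g_T(s, a) = E_{s'~T(s,a)}[g(s')]
    terms = [w[s, a] * (f_g[s, a] - g[s_next]) for s, a, s_next in dataset]
    return abs(np.mean(terms))
```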
Distribution mismatch. We aim to upper bound the policy evaluation error by the loss function even if there are state-action pairs with small probability mass under the behavior distribution µ (i.e., the offline dataset does not have perfect coverage). Following Liu et al. (2020), we treat the unknown state-action pairs pessimistically. Let ζ be a fixed cutoff threshold. Recall that ˆµ is an estimate of the behavior distribution. For a policy π and dynamics T, we define the truncated density ratio

w_{\pi,T}(s, a) \triangleq \mathbb{I}\left[\frac{\rho^\pi_T(s,a)}{\hat{\mu}(s,a)} \le \zeta\right] \frac{\rho^\pi_T(s,a)}{\hat{\mu}(s,a)}.

For a fixed policy π, when w = w_{π,T},

\Big|\mathbb{E}_{(s,a)\sim\rho^\pi_T}\big[G^\pi_T(s,a)\big]\Big|
\le \Big|\mathbb{E}_{(s,a)\sim\rho^\pi_T}\Big[\mathbb{I}\Big[\tfrac{\rho^\pi_T(s,a)}{\hat{\mu}(s,a)} \le \zeta\Big] G^\pi_T(s,a)\Big]\Big| + \Big|\mathbb{E}_{(s,a)\sim\rho^\pi_T}\Big[\mathbb{I}\Big[\tfrac{\rho^\pi_T(s,a)}{\hat{\mu}(s,a)} > \zeta\Big] G^\pi_T(s,a)\Big]\Big|
\le \Big|\mathbb{E}_{(s,a)\sim\hat{\mu}}\big[w(s,a)\, G^\pi_T(s,a)\big]\Big| + V_{\max}\, \mathbb{E}_{(s,a)\sim\rho^\pi_T}\Big[\mathbb{I}\Big[\tfrac{\rho^\pi_T(s,a)}{\hat{\mu}(s,a)} > \zeta\Big]\Big]
\le \Big|\mathbb{E}_{(s,a)\sim\mu}\big[w(s,a)\, G^\pi_T(s,a)\big]\Big| + \zeta V_{\max}\, \mathrm{TV}(\hat{\mu}, \mu) + V_{\max}\, \mathbb{E}_{(s,a)\sim\rho^\pi_T}\Big[\mathbb{I}\Big[\tfrac{\rho^\pi_T(s,a)}{\hat{\mu}(s,a)} > \zeta\Big]\Big].

Hence, ignoring the statistical error due to the finite dataset, we can upper bound the estimation error |η(T⋆, π) − η(T, π)| by

\frac{\gamma}{1-\gamma}\left[\sup_{g\in\mathcal{G}} \big|\ell_{w_{\pi,T}}(g, T)\big| + \zeta V_{\max}\, \mathrm{TV}(\hat{\mu}, \mu) + V_{\max}\, \mathbb{E}_{(s,a)\sim\rho^\pi_T}\Big[\mathbb{I}\Big[\tfrac{\rho^\pi_T(s,a)}{\hat{\mu}(s,a)} > \zeta\Big]\Big]\right].   (3)

Intuitively, the first term measures the error caused by the imperfect dynamics T, the second term captures the estimation error of the behavior distribution, and the last term comes from truncating the density ratios.
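The truncated ratio and the truncation penalty in Eq. (3) are straightforward to compute once ρ^π_T and ˆµ are available as arrays; the minimal sketch below is our own illustration (function names and the small division guard are assumptions).

```python
import numpy as np

def truncated_ratio(rho, mu_hat, zeta):
    """w_{pi,T}(s, a) = I[rho/mu_hat <= zeta] * rho/mu_hat, with rho = rho^pi_T."""
    ratio = rho / np.maximum(mu_hat, 1e-12)        # guard against division by zero
    return np.where(ratio <= zeta, ratio, 0.0)

def truncation_penalty(rho, mu_hat, zeta, v_max):
    """The last term of Eq. (3): V_max * P_{(s,a)~rho}[rho/mu_hat > zeta]."""
    ratio = rho / np.maximum(mu_hat, 1e-12)
    return v_max * rho[ratio > zeta].sum()
```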
Pessimistic Policy Optimization with Model Misspecification

In this section, we explicitly consider misspecification of the function classes used for representing the value function and the dynamics models (G and T, respectively). Most prior theoretical work on model-based RL makes strong assumptions on the realizability of the dynamics model class. For example, in the offline setting, Voloshin, Jiang, and Yue (2021) focus on exact realizability of the dynamics model (that is, T⋆ ∈ T). In the online setting, Jin et al. (2020) provide bounds with a linear regret term due to global model misspecification. Their bounds require a T ∈ T such that TV(T(s, a), T⋆(s, a)) ≤ ϵ for all (s, a), even if the state-action pair (s, a) is only visited under some poorly performing policies. We now show that offline RL tasks can require much weaker realizability assumptions on the dynamics model class. Our key observation is that for a given dynamics model T and policy π, computing the density ratio w_{π,T} is statistically efficient. Note that to compute w_{π,T} we do not need any samples from the true dynamics: instead, we only need to be able to estimate the state-action density under a dynamics model T for policy π. This allows us to explicitly utilize the density ratio to get a relaxed realizability assumption.

Definition 2. The local value function error for a particular dynamics model T and policy π is defined as

\epsilon_V(T, \pi) \triangleq \inf_{g\in\mathcal{G}} \Big|\mathbb{E}_{(s,a)\sim\mu}\Big[w_{\pi,T}(s,a)\Big(\mathbb{E}_{s'\sim T(s,a)}[(g - V^\pi_{T^\star})(s')] + \mathbb{E}_{s'\sim T^\star(s,a)}[(g - V^\pi_{T^\star})(s')]\Big)\Big]\Big|.

The term ϵV measures the local misspecification of the value function class, that is, the error between the true value of the policy V^π_{T⋆} and the best-fitting value function in the class G, but only on the state-action pairs that policy π visits under a particular potential dynamics model T. In contrast, previous results (Jin et al. 2020; Nachum et al. 2019; Voloshin, Jiang, and Yue 2021) take the global maximum error over all (reachable) (s, a), which can be much larger than the local misspecification error ϵV(T, π).
With this local misspecification error, we can establish a pessimistic estimate of the true reward. Let E be a high-probability event under which the loss function ℓ_{w_{π,T}}(T, g) is close to its expectation (the randomness comes from the dataset D). In the Appendix, we define this event formally and prove that Pr(E) ≥ 1 − δ. The following lemma gives a lower bound on the true reward. Proofs, when omitted, are in the Appendix.

Lemma 3. Let ι = log(2|G||T||Π|/δ). For any dynamics model T and policy π, we define

\mathrm{lb}(T, \pi) = \eta(T, \pi) - \frac{1}{1-\gamma}\left[\sup_{g\in\mathcal{G}} \ell_{w_{\pi,T}}(g, T) + V_{\max}\, \mathbb{E}_{(s,a)\sim\rho^\pi_T}\Big[\mathbb{I}\Big[\tfrac{\rho^\pi_T(s,a)}{\hat{\mu}(s,a)} > \zeta\Big]\Big]\right].   (4)

Then, under the event E, we have

\eta(T^\star, \pi) \ge \mathrm{lb}(T, \pi) - \frac{1}{1-\gamma}\left[\epsilon_V(T, \pi) + 2V_{\max}\sqrt{\zeta\iota/n} + \zeta V_{\max}\, \mathrm{TV}(\hat{\mu}, \mu)\right].   (5)

We use this lower bound to define our offline policy selection Alg. 1.

Algorithm 1: Model-based Offline RL with Local Misspecification Error
Require: estimated behavior distribution ˆµ, truncation threshold ζ.
for π ∈ Π, T ∈ T do
    Compute w_{π,T}(s, a) = I[ρ^π_T(s, a)/ˆµ(s, a) ≤ ζ] · ρ^π_T(s, a)/ˆµ(s, a).
    Compute lb(T, π) by Eq. (4).
end for
π ← argmax_{π∈Π} max_{T∈T} lb(T, π).
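A minimal sketch of Algorithm 1 for finite policy and dynamics classes on a tabular MDP is given below. It is our own illustration under simplifying assumptions (exact occupancies computed from each candidate model, a finite test-function class passed as a list of arrays, and assumed function and variable names); it is not the authors' implementation.

```python
import numpy as np

def occupancy(T, pi, gamma, s0):
    """Normalized occupancy rho^pi_T(s, a) under candidate dynamics T."""
    S, A = pi.shape
    P_pi = np.einsum('sa,sat->st', pi, T)
    d0 = np.zeros(S); d0[s0] = 1.0
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, d0)
    return d[:, None] * pi

def policy_value(T, r, pi, gamma, s0):
    """eta(T, pi) = V^pi_T(s0), solved exactly for a tabular model."""
    P_pi = np.einsum('sa,sat->st', pi, T)
    r_pi = (pi * r).sum(axis=1)
    return np.linalg.solve(np.eye(len(r_pi)) - gamma * P_pi, r_pi)[s0]

def lower_bound(dataset, T, pi, r, gamma, s0, mu_hat, zeta, g_class, v_max):
    """lb(T, pi) from Eq. (4), using the empirical loss of Eq. (2)."""
    rho = occupancy(T, pi, gamma, s0)
    ratio = rho / np.maximum(mu_hat, 1e-12)
    w = np.where(ratio <= zeta, ratio, 0.0)              # truncated density ratio
    losses = []
    for g in g_class:                                     # g_class: list of (S,) arrays
        f_g = T @ g
        losses.append(abs(np.mean([w[s, a] * (f_g[s, a] - g[sn]) for s, a, sn in dataset])))
    truncation = v_max * rho[ratio > zeta].sum()          # mass lost to truncation
    return policy_value(T, r, pi, gamma, s0) - (max(losses) + truncation) / (1 - gamma)

def select_policy(dataset, policies, models, r, gamma, s0, mu_hat, zeta, g_class, v_max):
    """Algorithm 1: jointly search over (pi, T) and return argmax_pi max_T lb(T, pi)."""
    best = max(((pi, T) for pi in policies for T in models),
               key=lambda pt: lower_bound(dataset, pt[1], pt[0], r, gamma, s0,
                                          mu_hat, zeta, g_class, v_max))
    return best[0]
```

In this sketch the occupancies are computed exactly from each candidate tabular model; with continuous states one would typically estimate ρ^π_T by rolling out π in the model T, which, as noted above, requires no new samples from the real environment.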
In contrast to existing offline model-based algorithms (Yu et al. 2020; Voloshin, Jiang, and Yue 2021), our algorithm optimizes the dynamics and the policy jointly. For a given dynamics model and policy pair, Alg. 1 computes the truncated density ratio w_{π,T}, which does not require collecting new samples, and then uses it to compute a lower bound lb(T, π) (Eq. (4)). Finally, it outputs a policy that maximizes the lower bound. We will shortly show that this joint optimization can lead to better offline learning.

The parameter ζ controls the truncation of the stationary importance weights. Increasing ζ decreases the last term in the lower bound objective lb(T, π), but it may also increase the variance given the finite dataset size. Note that by setting ζ = log(n) and letting n → ∞ (i.e., with infinite data), the last term in Eq. (4) and the statistical error converge to zero.

Safe Policy Improvement

We now derive a novel safe policy improvement result, up to the error terms given below. Intuitively, this guarantees that the policy returned by Alg. 1 will improve over the behavior policy when possible, which is an attractive property in many applied settings.
Note that recent work on model-based offline RL (Voloshin, Jiang, and Yue 2021; Yu et al. 2020) does not provide this guarantee when the dynamics model class is misspecified. For a fixed policy π, define

\epsilon_\rho(\pi) \triangleq \inf_{T\in\mathcal{T}} \mathbb{E}_{(s,a)\sim\rho^\pi_{T^\star}}\big[\mathrm{TV}(T(s,a), T^\star(s,a))\big],   (6)
\epsilon_\mu(\pi) \triangleq \mathbb{E}_{(s,a)\sim\rho^\pi_{T^\star}}\Big[\mathbb{I}\Big[\tfrac{\rho^\pi_{T^\star}(s,a)}{\hat{\mu}(s,a)} > \zeta/2\Big]\Big].   (7)

The term ϵρ measures the local misspecification error of the dynamics model class: its ability to represent the dynamics for the state-action pairs encountered by policy π. The term ϵµ represents the overlap of the dataset with an alternate policy π; such a quantity is common in much of offline RL. In the following theorem, we prove that the true value of the policy computed by Alg. 1 is lower bounded by that of the optimal policy in the function class, up to some error terms.

Theorem 4. Consider a fixed parameter ζ. Let ˆπ be the policy computed by Alg. 1 and ˆT = argmax_T lb(T, ˆπ). Let ι = log(2|G||T||Π|/δ). Then, with probability at least 1 − δ, we have

\eta(T^\star, \hat{\pi}) \ge \sup_\pi\left[\eta(T^\star, \pi) - \frac{6V_{\max}\epsilon_\rho(\pi)}{(1-\gamma)^2} - \frac{V_{\max}\epsilon_\mu(\pi)}{1-\gamma}\right] - \frac{\epsilon_V(\hat{T}, \hat{\pi})}{1-\gamma} - \frac{4V_{\max}}{1-\gamma}\sqrt{\frac{\zeta\iota}{n}} - \frac{2\zeta V_{\max}\,\mathrm{TV}(\hat{\mu}, \mu)}{1-\gamma}.   (8)

To prove Theorem 4, we establish the tightness of lb(T, π): the lower bound max_T lb(T, π) is at least as high as the true value of the policy, up to some error terms. Consequently, maximizing the lower bound also maximizes the true value of the policy.
Formally, we have the following lemma.

Lemma 5. For any policy π ∈ Π, under the event E we have

\max_{T\in\mathcal{T}} \mathrm{lb}(T, \pi) \ge \eta(T^\star, \pi) - \frac{6V_{\max}\epsilon_\rho(\pi)}{(1-\gamma)^2} - \frac{1}{1-\gamma}\left[V_{\max}\epsilon_\mu(\pi) + 2V_{\max}\sqrt{\zeta\iota/n} + \zeta V_{\max}\,\mathrm{TV}(\hat{\mu}, \mu)\right].

In the sequel, we present a proof sketch for Lemma 5, hiding 1/(1 − γ) factors in the big-O notation. For a fixed policy π, let ˆT be the minimizer of Eq. (6). We prove Lemma 5 by analyzing the terms in the definition of lb(ˆT, π) (Eq. (4)) separately.

i. Following the definition in Eq. (6), we can show that ∥ρ^π_{ˆT} − ρ^π_{T⋆}∥₁ ≤ O(ϵρ(π)). Consequently, we get η(ˆT, π) ≥ η(T⋆, π) − O(ϵρ(π)).

ii. Recall that 0 ≤ g(s, a) ≤ Vmax for all g ∈ G. Then for any (s, a) we have sup_{g∈G} |E_{s′∼ˆT(s,a)}[g(s′)] − E_{s′∼T⋆(s,a)}[g(s′)]| ≤ Vmax TV(ˆT(s, a), T⋆(s, a)). Combining the definition of ℓ_w(g, T), Eq. (6), and the statistical error, we get sup_{g∈G} ℓ_{w_{π,T}}(g, T) ≤ Õ(ϵρ(π) + 1/√n + Vmax TV(ˆµ, µ)) under event E.
iii. For the last term, regarding distribution mismatch, we combine Eq. (7) and Lemma 8, and can upper bound it by O(ϵρ(π) + ϵµ(π)).

iv. The final term arises from the potential estimation error in the behavior policy distribution.

Theorem 4 follows directly from combining Lemma 3 and Lemma 5. Note that Theorem 4 accounts for estimation error in the behavior policy, misspecification in the dynamics model class, and misspecification in the value function class, the latter two in a more local, tighter form than prior work.

Illustrative Example

To build intuition for where our approach may yield benefits, we provide an illustrative example in which Alg. 1 has better performance than existing approaches: an MDP whose state space is partitioned into several parts. The model class is restricted so that every model can only be accurate on one part of the state space. When each deterministic policy only visits one part of the state space, the local misspecification error is small — for each policy, there exists a dynamics model in the set which can accurately estimate the distribution of states and actions visited under that policy. In contrast, if the dynamics are learned to fit the whole state space, the estimation error will be large. More precisely, for a fixed parameter d, consider an MDP with S = {s0, · · · , sd} ∪ {sg, sb}. There are d actions, denoted a1, · · · , ad.
The true dynamics are deterministic and given by
$$T^\star(s_0, a_i) = s_i, \qquad T^\star(s_i, a_j) = \begin{cases} s_g, & \text{if } i = j, \\ s_b, & \text{if } i \neq j, \end{cases} \qquad (9)$$
$$T^\star(s_g, a_i) = s_g, \qquad T^\star(s_b, a_i) = s_b, \qquad \forall i \in [d]. \qquad (10)$$
The reward is $r(s, a_i) = \mathbb{I}[s = s_g]$ for all $i \in [d]$. The transition function class $\mathcal{T}$ is parameterized by $\theta \in \mathbb{R}^d$. For a fixed θ, the transition for states $s_1, \ldots, s_d$ is
$$T_\theta(s_i, a_j) = \begin{cases} s_g, & \text{w.p. } \tfrac{1}{2}\big(1 + e_j^\top\theta\big), \\ s_b, & \text{w.p. } \tfrac{1}{2}\big(1 - e_j^\top\theta\big), \end{cases} \qquad (11)$$
where $e_j$ is the $j$-th standard basis vector of $\mathbb{R}^d$. The transitions for states $s_0, s_g, s_b$ are identical to the true dynamics $T^\star$. But the transition model $T_\theta$ in the function class must use the same parameter θ to approximate the dynamics in states $s_1, \cdots, s_d$, which makes it misspecified.
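For concreteness, the construction in Eqs. (9)-(11) can be written out as the following small sketch (our own illustration; state indices 0, ..., d stand for s_0, ..., s_d, and d+1, d+2 stand for s_g, s_b; θ is assumed to have entries in [−1, 1] so the probabilities are valid).

```python
import numpy as np

def make_example_mdp(d):
    S_G, S_B = d + 1, d + 2                  # goal state s_g and bad state s_b
    n_states = d + 3

    def T_star(s, a):                        # true deterministic dynamics, a in {1, ..., d}
        if s == 0:
            return a                         # s_0 --a_i--> s_i
        if 1 <= s <= d:
            return S_G if s == a else S_B    # reach s_g only with the matching action
        return s                             # s_g and s_b are absorbing

    def T_theta(theta, s, a):
        """Misspecified model class (Eq. 11): one shared theta for all of s_1, ..., s_d."""
        p = np.zeros(n_states)
        if 1 <= s <= d:
            p_good = 0.5 * (1.0 + theta[a - 1])   # prob. (1 + e_a^T theta) / 2 of reaching s_g
            p[S_G], p[S_B] = p_good, 1.0 - p_good
        else:
            p[T_star(s, a)] = 1.0            # identical to T* on s_0, s_g, s_b
        return p

    reward = lambda s, a: float(s == S_G)    # r(s, a) = I[s = s_g]
    return T_star, T_theta, reward
```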
Decoupling learning the dynamics model and policy is suboptimal. Most prior algorithms first learn a dynamics model and then plan with that model. However, note that here the optimal action induced by MDP planning given a particular Tθ is suboptimal (assuming uniformly random tie-breaking). This is because, for any given θ, that dynamics model estimates the dynamics of states s1, · · · , sd as being identical, with identical resulting value functions. Note that this suboptimality occurs in this example even if the dataset is large and covers the state-action pairs visited by any possible policy (ϵµ(π) = 0), the value function class is tabular and can represent any value function (ϵV = 0), and the behavior policy is known or the resulting estimation error is small (TV(μ̂, µ) = 0 and ζ = 0). In such a case, Theorem 4 guarantees that with high probability our algorithm will learn the optimal policy, because there exist couplings of the dynamics models and optimal policies such that the local misspecification error ϵρ = 0. This demonstrates that prior algorithms (including MML (Voloshin, Jiang, and Yue 2021)) that decouple the learning of dynamics and policy can be suboptimal. We now state this more formally:

Theorem 6. Consider any (possibly stochastic) algorithm that outputs an estimated dynamics $T_\theta \in \mathcal{T}$. Let $\pi_\theta$ be the greedy policy w.r.t. $T_\theta$ (with ties broken uniformly at random). Then
$$\max_\pi \eta(T^\star, \pi) - \eta(T^\star, \pi_\theta) \geq \frac{(A-1)\gamma^2}{A(1-\gamma)}. \qquad (12)$$
As a side point, we also show in Proposition 7 that the off-policy estimation error in Voloshin, Jiang, and Yue (2021) is large when the dynamics model class is misspecified. We defer this result to the Appendix.

Experiments

While our primary contribution is theoretical, we now investigate how our method can be used for offline model-based policy selection under dynamics model misspecification.
We first empirically evaluate our method on the Linear-Quadratic Regulator (LQR), a commonly used environment in optimal control theory (Bertsekas et al. 2000), in order to assess: Can Algorithm 1 return the optimal policy when we have both model and distribution mismatch? We also evaluate our approach using D4RL (Fu et al. 2020), a standard offline RL benchmark for continuous control tasks. Here we consider: Given policy and dynamics pairs obtained using state-of-the-art offline model-based RL methods with ensemble dynamics, does Alg. 1 allow picking the best policy, outperforming previous methods?

[Figure 1. Left: Visualization of the true policy value η(T⋆, π); our algorithm picks the optimal policy, whereas MML picks a suboptimal policy. Middle: Visualization of the negative lower bounds lb(T, π) for different policies and models (indexed by the values of (v, u)). Right: Interquartile mean (IQM) normalized scores of two model-based lower bounds (MML and MBLB) and a recent model-based policy learning algorithm (MOPO) on D4RL.]

Linear-Quadratic Regulator (LQR)

LQR is defined by linear transition dynamics $s_{t+1} = A s_t + B a_t + \eta$, where $s_t \in \mathbb{R}^n$ and $a_t \in \mathbb{R}^m$ are the state and action at time step t, respectively, and $\eta \sim \mathcal{N}(0, \sigma^2 I)$ is random noise. LQR has a quadratic reward function $R(s, a) = -(s^\top Q s + a^\top R a)$ with $Q \in \mathbb{R}^{n\times n}$ and $R \in \mathbb{R}^{m\times m}$ positive semi-definite matrices, $Q, R \succeq 0$. The optimal controller maximizing the sum of future rewards $\sum_{t=1}^{H} -(s_t^\top Q s_t + a_t^\top R a_t)$ until the end of horizon H has the form $a_t = -K s_t$ with $K \in \mathbb{R}^{m\times n}$ (Bertsekas et al. 2000). The value function is also quadratic, $V(s) = s^\top U s + q$, for some constant q and positive semi-definite matrix $U \succeq 0$. In the experiment, the state space is [−1, 1].

Misspecified transition classes. Consider a 1D version of LQR with A(x) = (1 + x/10), B(x) = (−0.5 − x/10), Q = 1, R = 1, and noise η ∼ N(0, 0.05). The true dynamics are given by x⋆ = 6, and the corresponding optimal policy has K = −1.1. The function classes used by Alg. 1 are finite and computed as follows: (i) the value function class G contains the value functions of 1D LQR with parameters x ∈ {2, 4, 10} and K ∈ {−1.1, −0.9, −0.7}; (ii) the transition class T is misspecified.
We use the following transition class, with each $T_u \in \mathcal{T}$ parameterized by u:
$$T_u: \quad s_{t+1} = \begin{cases} A(x^\star)\, s_t - B(x^\star)\, a_t, & s_t \in [u, u+1], \\ s_t, & \text{otherwise}, \end{cases}$$
with u ∈ {−0.75, −0.5, −0.25, 0, 0.25}. In other words, the capacity of the transition class is limited: each function can only model the true dynamics on a part of the state space; (iii) the policy class is given by πv, parameterized by v, with πv(s) = −1.1(s − v) + N(0, 0.01) and v ∈ {−0.6, −0.4, −0.2, 0, 0.2, 0.4, 0.6}. Intuitively, πv tries to push the state toward s = v. Since the state and action spaces are one-dimensional, we can compute the density ratio wπ,T efficiently by discretization. The implementation details are deferred to the Appendix.
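As a rough illustration (ours, not the released code), the 1D LQR instance, the misspecified transition class T_u, the policy class π_v, and the discretized density-ratio estimate can be sketched as follows; reading the noise values 0.05 and 0.01 as variances is our assumption.

```python
import numpy as np

A = lambda x: 1.0 + x / 10.0                      # A(x*) = 1.6 for the true x* = 6
B = lambda x: -0.5 - x / 10.0                     # B(x*) = -1.1 for the true x* = 6
X_STAR = 6.0
U_GRID = [-0.75, -0.5, -0.25, 0.0, 0.25]          # transition-class parameters u
V_GRID = [-0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6]   # policy-class parameters v

def T_u(u, s, a):
    # Misspecified class: true dynamics only on [u, u + 1], identity map elsewhere
    # (the sign on B follows the definition of T_u in the text).
    if u <= s <= u + 1.0:
        return A(X_STAR) * s - B(X_STAR) * a
    return s

def pi_v(v, s, rng):
    # Policy class: pushes the state toward s = v, with small Gaussian noise.
    return -1.1 * (s - v) + rng.normal(0.0, np.sqrt(0.01))

def density_ratio(sa_from_pi_T, sa_from_mu, bins=50):
    # w_{pi,T}(s, a) ~= rho_{pi,T}(s, a) / mu(s, a), estimated on a 2D grid
    # from sampled (s, a) pairs under (pi, T) and under the behavior data.
    edges = [np.linspace(-1.5, 1.5, bins + 1)] * 2
    p, _, _ = np.histogram2d(sa_from_pi_T[:, 0], sa_from_pi_T[:, 1], bins=edges, density=True)
    q, _, _ = np.histogram2d(sa_from_mu[:, 0], sa_from_mu[:, 1], bins=edges, density=True)
    return p / np.maximum(q, 1e-8)
```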
Baseline. We compare our algorithm to minimizing the MML loss, as described in the OPO algorithm of Voloshin, Jiang, and Yue (2021, Algorithm 2). MML strictly outperformed VAML (Farahmand, Barreto, and Nikovski 2017) in the experiments of Voloshin, Jiang, and Yue (2021); hence, we only compare to MML in our experiments.

Results. Figure 1 (Left) shows the return of different policies under the true environment. Our method picks the optimal policy for the true model, whereas MML picks the wrong policy. In Figure 1 (Middle), we also visualize the different terms in the definition of lb(T, π) (Eq. (5)). Note that the model loss differs across policies (the model loss for (v, u) = (0, 0) is significantly larger than for (0.0, −0.25), even though the dynamics are the same). This is because the model loss is evaluated with a different density ratio. This highlights the main benefit of our method over the baseline: since the model class is misspecified, maximizing over the weight function w in the MML loss results in an unrealistically large loss value for some models. However, if the chosen policy does not visit the part of the state space with a large error, there is no need to incur a high penalty.

D4RL

D4RL (Fu et al. 2020) is a standardized offline RL benchmark designed and commonly used to evaluate the progress of offline RL algorithms.
This benchmark is standard for evaluating offline policy learning algorithms. Here, we use a state-of-the-art policy learning algorithm, MOPO (Yu et al. 2020), to propose a set of policy-transition model tuples: for M policy hyperparameters and K transition models, we obtain M × K tuples {(π1, T1), (π1, T2), ..., (πM, TK)}. The MOPO algorithm learns an ensemble of transition models and randomly chooses one to sample trajectories during each episode of training. Instead, we choose one transition model to generate trajectories for the policy throughout the entire training. In our experiment, we choose M = 1 and K = 5, and train each tuple with 5 random seeds on the Hopper and HalfCheetah tasks (see Appendix). We then compute the model-based lower bound for each (πi, Tj) and select the policy with the highest lower bound. We learn the dynamics using 300k iterations and train each policy using 100k gradient steps with SAC (Haarnoja et al. 2018) as the policy gradient algorithm, imitating the MOPO (Yu et al. 2020) policy update.
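The selection step can be summarized by the following sketch (our own scaffolding; `lower_bound` stands in for the computation of lb(T, π) described below).

```python
def select_policy(policies, models, lower_bound):
    """Return the policy from the pair (pi, T) with the largest lower bound lb(T, pi)."""
    best_pi, best_lb = None, float("-inf")
    for pi in policies:
        for T in models:
            lb = lower_bound(T, pi)
            if lb > best_lb:
                best_pi, best_lb = pi, lb
    return best_pi, best_lb
```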
MML. Voloshin, Jiang, and Yue (2021) recommended two practical implementations for computing MML lower bounds. The implementation parametrizes w(s, a)V(s′) jointly via a new function h(s, a, s′); we refer readers to Prop. 3.5 of Voloshin, Jiang, and Yue (2021) for a detailed explanation. We parametrize this function as follows:

Linear: Voloshin, Jiang, and Yue (2021) showed that if T, V, µ are all from linear function classes, then a model T that minimizes the MML loss is both unique and identifiable. This provides a linear parametrization h(s, a, s′) = ψ(s, a, s′)⊤θ, where ψ is a basis function. We choose ψ to be either a squared basis function or a polynomial basis function of degree 2.

Kernel: Using a radial basis function (RBF) kernel over S × A × S and computing K((s, a, s′), (s̃, ã, s̃′)), Voloshin, Jiang, and Yue (2021) showed that there exists a closed-form solution to compute the maximum of the MML loss (RKHS). Here, there is no need for any gradient update; we only sample s′ from T.

Dataset Type   Env          MOPO            MML (Squared)    MML (Polynomial)   MML (RKHS)       MBLB (Linear)    MBLB (Quad)
medium         hopper       175.4 (95.3)    379.4 (466.4)    375.6 (459.5)      375.0 (459.9)    591.7 (523.1)    808.5 (502.7)
med-expert     hopper       183.8 (94.4)    160.9 (131.5)    116.5 (148.4)      61.4 (35.0)      261.1 (157.9)    242.5 (134.0)
expert         hopper       80.4 (63.4)     93.8 (87.9)      61.6 (61.9)        70.0 (56.2)      118.2 (61.6)     121.0 (72.5)
medium         halfcheetah  599.8 (668.4)   1967.6 (1707.5)  2625.1 (937.2)     3858.2 (1231.1)  3290.4 (1753.1)  2484.2 (1526.8)
med-expert     halfcheetah  486.6 (48.1)    188.5 (137.2)    77.0 (252.5)       343.2 (225.2)    207.4 (509.5)    192.8 (432.0)

Table 2: Mean and (standard deviation) of the selected policy's simulator environment performance across 5 random seeds. MML and MBLB are used as model-selection procedures, where they select the best policy for each seed. Our method chooses the most near-optimal policy across the datasets.

MBLB (Ours). For a continuous control task, we compute our model-based lower bound (MBLB) as follows.

Compute η(T, π). Although it is reasonable to directly use a value function $V^\pi_T$ trained during policy learning to compute η(T, π), Paine et al. (2020) and Kumar et al. (2021) point out that this value function often severely over-estimates the actual discounted return. Therefore, we estimate the expected value of policy π using the generalized advantage estimator (GAE) (Schulman et al. 2016). For a sequence of transitions $\{s_t, a_t, r(s_t, a_t), s_{t+1}\}_{t \in [0, N]}$, it is defined as
$$A_t = \sum_{t'=t}^{t+N} (\gamma\lambda)^{t'-t}\big( r(s_{t'}, a_{t'}) + \gamma V_\phi(s_{t'+1}) - V_\phi(s_{t'}) \big),$$
with λ a fixed hyperparameter and $V_\phi$ the value function estimator from the previous optimization iteration. Then, to estimate the value function, we solve the non-linear regression problem $\operatorname{minimize}_\phi \sum_{t'=t}^{t+N} (V_\phi(s_{t'}) - \hat V_{t'})^2$, where $\hat V_{t'} = A_{t'} + V_\phi(s_{t'})$. We also provide a comparison to using the standard TD-1 Fitted Q Evaluation (FQE) (Le, Voloshin, and Yue 2019) instead, in Table A1 in the Appendix. We find that using GAE provides better policy evaluation estimates.
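A minimal sketch of the GAE-based estimate described above (our own implementation of the standard estimator; rewards has length N and values has length N + 1, including the value at the final state).

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    # A_t = sum_{t' >= t} (gamma * lam)^(t' - t) * delta_{t'},
    # with delta_t = r_t + gamma * V_phi(s_{t+1}) - V_phi(s_t).
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

def value_targets(rewards, values, gamma=0.99, lam=0.95):
    # Regression targets V_hat_t = A_t + V_phi(s_t) used to refit V_phi.
    return gae_advantages(rewards, values, gamma, lam) + values[:-1]
```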
Behavior density modeling. We use a state-of-the-art normalizing flow probability model to estimate the density of state-action pairs (Papamakarios et al. 2021). For $\rho^\pi_T$, we sample 10,000 trajectories from T and π and estimate the corresponding density; for the behavior distribution µ, we use the given dataset D. We empirically choose the number of training epochs that gives the model the best fit.

Compute $\sup_{g\in\mathcal{G}} |\ell_{w_{\pi,T}}(g, T)|$. We parametrize g either as a linear function of the state, g(s) = m⊤s, or as a quadratic function of the state, g(s) = s⊤Ms + b. We use gradient ascent on $\ell_{w_{\pi,T}}(g, T)$ to maximize this objective.
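A sketch of this inner maximization (our own PyTorch scaffolding; `ell_w` is assumed to evaluate the loss ℓ_{w_{π,T}}(g, T) for a given callable test function g, e.g. from sampled transitions and the estimated density ratio).

```python
import torch

def max_over_quadratic_g(ell_w, state_dim, steps=500, lr=1e-2):
    """Gradient ascent on |ell_{w_{pi,T}}(g, T)| over g(s) = s^T M s + b."""
    M = torch.zeros(state_dim, state_dim, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([M, b], lr=lr)
    for _ in range(steps):
        g = lambda s: (s @ M * s).sum(dim=-1) + b    # batched s^T M s + b
        objective = ell_w(g).abs()
        opt.zero_grad()
        (-objective).backward()                      # ascend by minimizing the negative
        opt.step()
    with torch.no_grad():
        g = lambda s: (s @ M * s).sum(dim=-1) + b
        return ell_w(g).abs().item()
```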
Results. We report the results in Table 2. There is general overlap across seeds in the performance of the various methods, but our approach has the best average performance or is within one standard deviation of the best. We also show that for the different choices of how we parametrize the w(s, a)V(s′) function (MML) and how we choose the family of test functions g (MBLB), we select different final policies. Overall, however, MBLB picks better-performing final policies with its two parametrizations, while MML chooses lower-performing policies with its three parametrizations. We find that our approach of selecting among the set of policies computed from each of the models used by MOPO consistently outperforms the policy produced by MOPO in the considered tasks. To summarize these results, we report the interquartile mean (IQM) scores of each method in Figure 1 (Right). IQM is an outlier-robust metric proposed by Agarwal et al. (2021) to compare deep RL algorithms. We create the plot by sampling with replacement over all runs on all datasets 50,000 times. Though there is significant overlap, our method generally outperforms the policies learned by MOPO.
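A sketch of the bootstrapped IQM computation (ours, following the description above): pooled normalized scores are resampled with replacement and the interquartile mean of each resample is recorded.

```python
import numpy as np

def interquartile_mean(x):
    # Mean of the scores between the 25th and 75th percentiles.
    lo, hi = np.percentile(x, [25, 75])
    return x[(x >= lo) & (x <= hi)].mean()

def bootstrap_iqm(scores, n_resamples=50_000, seed=0):
    """scores: 1D array of normalized scores pooled over all runs and datasets."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    stats = np.array([interquartile_mean(rng.choice(scores, size=scores.size, replace=True))
                      for _ in range(n_resamples)])
    return stats.mean(), np.percentile(stats, [2.5, 97.5])   # point estimate and 95% CI
```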
Conclusion

There are many directions for future work. The current lb(T, π) implementation with the density ratio wπ,T(s, a) is not differentiable: an interesting question is to make it differentiable so that we can directly optimize a policy. Another interesting question would be to construct estimators for the local misspecification errors ϵρ, ϵµ, and ϵV, which could be used to refine the model class to optimize performance. To conclude, this paper studies model-based offline reinforcement learning with local model misspecification errors and proves a novel safe policy improvement theorem. Our theoretical analysis shows the benefit of this tighter analysis and approach. We illustrate the advantage of our method over prior work in a small linear-quadratic example and also demonstrate that it is competitive with, or stronger than, recent model-based offline RL methods on policy selection in a set of D4RL tasks.

Acknowledgment

Research reported in this paper was sponsored in part by NSF grant #2112926, the DEVCOM Army Research Laboratory under Cooperative Agreement W911NF-17-2-0196 (ARL IoBT CRA), and a Stanford Hoffman-Yee grant. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

References

Agarwal, R.; Schwarzer, M.; Castro, P. S.; Courville, A. C.; and Bellemare, M. 2021. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34: 29304–29320.

Argenson, A.; and Dulac-Arnold, G. 2020. Model-based offline planning. arXiv preprint arXiv:2008.05556.
Bertsekas, D. P.; et al. 2000. Dynamic Programming and Optimal Control: Vol. 1. Athena Scientific, Belmont.

Chen, J.; and Jiang, N. 2019. Information-Theoretic Considerations in Batch Reinforcement Learning. In International Conference on Machine Learning, 1042–1051.

Curi, S.; Berkenkamp, F.; and Krause, A. 2020. Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning. Advances in Neural Information Processing Systems, 33.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Advances in Neural Informa- tion Processing Systems, 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Delgrange, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Nowe, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and P´erez, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Distilla- tion of RL Policies with Formal Guarantees via Variational Abstraction of Markov Decision Processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Farahmand, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='-m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Barreto, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Nikovski, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Value-aware loss function for model-based reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In Artificial Intelligence and Statistics, 1486–1494.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Farahmand, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Munos, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Szepesv´ari, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Er- ror propagation for approximate policy and value iteration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In Advances in Neural Information Processing Systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Fu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kumar, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Nachum, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Tucker, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' D4rl: Datasets for deep data-driven reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' arXiv preprint arXiv:2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='07219.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Fu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Offline Model-Based Opti- mization via Normalized Maximum Likelihood Estimation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' arXiv preprint arXiv:2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='07970.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Fujimoto, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' van Hoof, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Meger, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Ad- dressing function approximation error in actor-critic meth- ods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Proceedings of Machine Learning Research, 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Gelada, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kumar, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Buckman, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Nachum, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Belle- mare, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Deepmdp: Learning continuous latent space models for representation learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In International Conference on Machine Learning, 2170–2179.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Haarnoja, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Zhou, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Abbeel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Soft actor-critic: Off-policy maximum entropy deep rein- forcement learning with a stochastic actor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' arXiv preprint arXiv:1801.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='01290.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Jiang, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Minimax confidence inter- val for off-policy evaluation and policy optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' arXiv preprint arXiv:2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='02081.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Jin, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Yang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Jordan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Provably efficient reinforcement learning with linear function approx- imation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In Conference on Learning Theory, 2137–2143.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kakade, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Langford, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Approximately Optimal Approximate Reinforcement Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In Proceedings of the Nineteenth International Conference on Machine Learn- ing, 267–274.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Morgan Kaufmann Publishers Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kallus, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Uehara, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov De- cision Processes.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=', 21: 167–1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kidambi, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Rajeswaran, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Netrapalli, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Joachims, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Morel: Model-based offline reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' arXiv preprint arXiv:2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='05951.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kumar, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Fu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Soh, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Tucker, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Advances in Neural Information Processing Sys- tems, 32: 11784–11794.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kumar, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Singh, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Tian, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Finn, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' A Workflow for Offline Model-Free Robotic Rein- forcement Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In 5th Annual Conference on Robot Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kumar, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Zhou, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Tucker, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Con- servative Q-Learning for Offline Reinforcement Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In Larochelle, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Ranzato, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Hadsell, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Balcan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Lin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=', eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=', Advances in Neural Information Process- ing Systems, volume 33, 1179–1191.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Le, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Voloshin, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Yue, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Batch policy learning under constraints.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In International Conference on Machine Learning, 3703–3712.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Liu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Tang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Zhou, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2018a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Breaking the curse of horizon: infinite-horizon off-policy estimation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 5361–5371.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Bacon, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Brunskill, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Understanding the curse of horizon in off-policy evaluation via conditional importance sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In International Conference on Ma- chine Learning, 6184–6193.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Gottesman, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Raghu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Komorowski, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Faisal, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Doshi-Velez, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Brunskill, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2018b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Representa- tion Balancing MDPs for Off-Policy Policy Evaluation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Ad- vances in neural information processing systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Swaminathan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Agarwal, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Brunskill, E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Provably Good Batch Off-Policy Reinforcement Learning Without Great Exploration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Malik, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Kuleshov, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Song, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Nemer, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Seymour, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Ermon, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Calibrated model-based deep rein- forcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In International Conference on Machine Learning, 4314–4323.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Matsushima, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Furuta, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Matsuo, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Nachum, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Gu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Deployment-efficient reinforcement learn- ing via model-based offline optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' arXiv preprint arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='03647.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Munos, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Error bounds for approximate policy itera- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In ICML, volume 3, 560–567.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Nachum, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Chow, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Dai, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' DualDICE: Behavior-Agnostic Estimation of Discounted Stationary Distribution Corrections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Paine, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Paduraru, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Michi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Gulcehre, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Zolna, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Novikov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and de Freitas, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Hyper- parameter selection for offline reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' arXiv preprint arXiv:2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content='09055.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Papamakarios, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Nalisnick, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Rezende, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Mo- hamed, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Lakshminarayanan, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Normalizing Flows for Probabilistic Modeling and Inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=', 22(57): 1–64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Quillen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Jang, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Nachum, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Finn, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Ibarz, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2018.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Deep reinforcement learning for vision- based robotic grasping: A simulated comparative evaluation of off-policy methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In 2018 IEEE International Con- ference on Robotics and Automation (ICRA), 6284–6291.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' IEEE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Schulman, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Moritz, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Levine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Jordan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' and Abbeel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' High-dimensional continuous control using gener- alized advantage estimation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' In International Conference on Learning Representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Singh, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Yu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FJT4oBgHgl3EQfBywq/content/2301.11426v1.pdf'} +page_content=' Zhang, J.' 
Kumar, A.; and Levine, S. 2020. COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning. arXiv preprint arXiv:2010.14500.
Swaminathan, A.; and Joachims, T. 2015. Batch learning from logged bandit feedback through counterfactual risk minimization. The Journal of Machine Learning Research, 16(1): 1731–1755.
Swazinna, P.; Udluft, S.; and Runkler, T. 2020. Overcoming Model Bias for Robust Offline Deep Reinforcement Learning. arXiv preprint arXiv:2008.05533.
Thomas, P.; Theocharous, G.; and Ghavamzadeh, M. 2015. High confidence policy improvement. In International Conference on Machine Learning, 2380–2388. PMLR.
Thomas, P. S.; da Silva, B. C.; Barto, A. G.; Giguere, S.; Brun, Y.; and Brunskill, E. 2019. Preventing undesirable behavior of intelligent machines. Science, 366(6468): 999–1004.
Thomas, P. S.; Theocharous, G.; Ghavamzadeh, M.; Durugkar, I.; and Brunskill, E. 2017. Predictive Off-Policy Policy Evaluation for Nonstationary Decision Problems, with Applications to Digital Marketing. In AAAI, 4740–4745.
Uehara, M.; Huang, J.; and Jiang, N. 2020. Minimax weight and q-function learning for off-policy evaluation. In International Conference on Machine Learning, 9659–9668. PMLR.
van Hasselt, H. P.; Hessel, M.; and Aslanides, J. 2019. When to use parametric models in reinforcement learning? In Wallach, H.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Voloshin, C.; Jiang, N.; and Yue, Y. 2021. Minimax Model Learning. In International Conference on Artificial Intelligence and Statistics, 1612–1620. PMLR.
Xie, T.; and Jiang, N. 2020. Batch value-function approximation with only realizability. arXiv preprint arXiv:2008.04990.
Xie, T.; Ma, Y.; and Wang, Y. 2019. Towards optimal off-policy evaluation for reinforcement learning with marginalized importance sampling. In Advances in Neural Information Processing Systems.
Yin, M.; and Wang, Y.-X. 2020. Asymptotically efficient off-policy evaluation for tabular reinforcement learning. In International Conference on Artificial Intelligence and Statistics, 3948–3958. PMLR.
Yu, T.; Kumar, A.; Rafailov, R.; Rajeswaran, A.; Levine, S.; and Finn, C. 2021. COMBO: Conservative Offline Model-Based Policy Optimization. arXiv preprint arXiv:2102.08363.
Yu, T.; Thomas, G.; Yu, L.; Ermon, S.; Zou, J.; Levine, S.; Finn, C.; and Ma, T. 2020. MOPO: Model-based Offline Policy Optimization. arXiv preprint arXiv:2005.13239.
Zhan, X.; Zhu, X.; and Xu, H. 2021. Model-Based Offline Planning with Trajectory Pruning. arXiv preprint arXiv:2105.07351.
Zhang, R.; Dai, B.; Li, L.; and Schuurmans, D. 2019. GenDICE: Generalized Offline Estimation of Stationary Values. In International Conference on Learning Representations.
Zhang, S.; Liu, B.; and Whiteson, S. 2020. GradientDICE: Rethinking generalized offline estimation of stationary values. In International Conference on Machine Learning, 11194–11203. PMLR.
Missing Proofs

High Probability Events
In this section, we introduce the concentration inequalities and define the high-probability event. Define the following quantities:
\[
L(\pi, g, T) = \mathbb{E}_{(s,a,s')\sim\mu}\left[w_{\pi,T}(s,a)\left(\mathbb{E}_{x\sim T(s,a)}[g(x)] - \mathbb{E}_{x\sim T^\star(s,a)}[g(x)]\right)\right], \tag{13}
\]
\[
l(\pi, g, T) = \mathbb{E}_{(s,a,s')\sim D}\left[w_{\pi,T}(s,a)\left(f^g_T(s,a) - g(s')\right)\right]. \tag{14}
\]
Recall that ι = log(2|G||T||Π|/δ). Consider the event
\[
\mathcal{E} = \left\{ |L(\pi, g, T) - l(\pi, g, T)| \le 2V_{\max}\sqrt{\frac{\zeta\iota}{n}}, \;\; \forall \pi \in \Pi,\; g \in \mathcal{G},\; T \in \mathcal{T} \right\}. \tag{15}
\]
In the following, we show that
\[
\Pr(\mathcal{E}) \ge 1 - \delta. \tag{16}
\]
Recall that D = {(s_i, a_i, s'_i)}_{i=1}^n, where the (s_i, a_i, s'_i) ∼ µ are i.i.d. samples from the distribution µ. For fixed π ∈ Π, g ∈ G, T ∈ T, the empirical quantity l(π, g, T) has expectation E[l(π, g, T)] = L(π, g, T). Meanwhile, note that
\[
|w_{\pi,T}(s,a)(f^g_T(s,a) - g(s'))| \le \zeta V_{\max}, \tag{17}
\]
\[
\begin{aligned}
\mathbb{E}_{(s,a,s')\sim\mu}\left[w_{\pi,T}(s,a)^2 \left(f^g_T(s,a) - g(s')\right)^2\right]
&\le \mathbb{E}_{(s,a,s')\sim\rho^\pi_T}\left[w_{\pi,T}(s,a)\left(f^g_T(s,a) - g(s')\right)^2\right] && (18)\\
&\le V_{\max}^2\,\zeta. && (19)
\end{aligned}
\]
By Bernstein's inequality, with probability at least 1 − δ/(|G||T||Π|),
\[
|L(\pi, g, T) - l(\pi, g, T)| \le \sqrt{\frac{2V_{\max}^2\,\zeta \log(2|\mathcal{G}||\mathcal{T}||\Pi|/\delta)}{n}} + \frac{\zeta V_{\max}}{3n}\log(2|\mathcal{G}||\mathcal{T}||\Pi|/\delta). \tag{20}
\]
Recall that ι = log(2|G||T||Π|/δ). When n ≥ ζ we have
\[
|L(\pi, g, T) - l(\pi, g, T)| \le 2V_{\max}\sqrt{\frac{\zeta\iota}{n}}. \tag{21}
\]
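As a quick sanity check of this last step (a worked bound added here for illustration only, under the slightly stronger assumption n ≥ ζι, so that ζι/n ≤ 1):
\[
\frac{\zeta V_{\max}\iota}{3n} \le \frac{V_{\max}}{3}\sqrt{\frac{\zeta\iota}{n}},
\qquad\text{hence}\qquad
\sqrt{\frac{2V_{\max}^2\zeta\iota}{n}} + \frac{\zeta V_{\max}\iota}{3n}
\le \left(\sqrt{2} + \tfrac{1}{3}\right) V_{\max}\sqrt{\frac{\zeta\iota}{n}}
\le 2 V_{\max}\sqrt{\frac{\zeta\iota}{n}}.
\]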
Note that when n < ζ, the event E trivially holds. As a result, applying a union bound proves Eq. (16).

Proof of Lemma 3
Proof. In the following, we consider a fixed policy π and dynamics T ∈ T. We use w to denote w_{π,T} when the context is clear. By basic algebra we get
\[
\begin{aligned}
&\left|\mathbb{E}_{(s,a)\sim\rho^\pi_T}[G^\pi_T(s,a)]\right| && (22)\\
&\le \left|\mathbb{E}_{(s,a)\sim\rho^\pi_T}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_T(s,a)}{\hat\mu(s,a)} \le \zeta\right] G^\pi_T(s,a)\right]\right| + \mathbb{E}_{(s,a)\sim\rho^\pi_T}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_T(s,a)}{\hat\mu(s,a)} > \zeta\right] |G^\pi_T(s,a)|\right] && (23)\\
&\le \left|\mathbb{E}_{(s,a)\sim\hat\mu}[w(s,a)\, G^\pi_T(s,a)]\right| + V_{\max}\,\mathbb{E}_{(s,a)\sim\rho^\pi_T}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_T(s,a)}{\hat\mu(s,a)} > \zeta\right]\right]. && (24)
\end{aligned}
\]
Note that
\[
\begin{aligned}
\mathbb{E}_{(s,a)\sim\hat\mu}[w(s,a)\, G^\pi_T(s,a)] &= \sum_{s,a} \hat\mu(s,a)\, w(s,a)\, G^\pi_T(s,a) && (25)\\
&= \sum_{s,a} \big(\hat\mu(s,a) - \mu(s,a) + \mu(s,a)\big)\, w(s,a)\, G^\pi_T(s,a) && (26)\\
&= \sum_{s,a} \mu(s,a)\, w(s,a)\, G^\pi_T(s,a) + \sum_{s,a} \big(\hat\mu(s,a) - \mu(s,a)\big)\, w(s,a)\, G^\pi_T(s,a) && (27)\\
&\le \mathbb{E}_{(s,a)\sim\mu}[w(s,a)\, G^\pi_T(s,a)] + \sum_{s,a} |\hat\mu(s,a) - \mu(s,a)|\, \zeta V_{\max} && (28)\\
&\le \mathbb{E}_{(s,a)\sim\mu}[w(s,a)\, G^\pi_T(s,a)] + \zeta V_{\max}\, \mathrm{TV}(\hat\mu, \mu). && (29)
\end{aligned}
\]
Continuing Eq. (24) we get
\[
\begin{aligned}
&\left|\mathbb{E}_{(s,a)\sim\rho^\pi_T}[G^\pi_T(s,a)]\right| && (30)\\
&\le \left|\mathbb{E}_{(s,a)\sim\mu}[w(s,a)\, G^\pi_T(s,a)]\right| + V_{\max}\,\mathbb{E}_{(s,a)\sim\rho^\pi_T}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_T(s,a)}{\hat\mu(s,a)} > \zeta\right]\right] + \zeta V_{\max}\, \mathrm{TV}(\hat\mu, \mu). && (31)
\end{aligned}
\]
Consequently, in the following we prove
\[
\left|\mathbb{E}_{(s,a)\sim\mu}[w(s,a)\, G^\pi_T(s,a)]\right| \le \sup_{g\in\mathcal{G}} \ell_w(g, T) + \epsilon_V(T, \pi) + 2V_{\max}\sqrt{\frac{\zeta\iota}{n}}.
\]
Let L_w(g, T) = |E_{(s,a,s')∼µ}[w(s,a)(E_{x∼T(s,a)}[g(x)] − E_{x∼T⋆(s,a)}[g(x)])]| be the population error. Recall that under the high-probability event E in Eq. (15), for any g ∈ G and T ∈ T,
\[
|L_w(g, T) - \ell_w(g, T)| \le 2V_{\max}\sqrt{\frac{\zeta\iota}{n}}. \tag{32}
\]
Now by the definition of G^π_T(s,a), for any g ∈ G we have
\[
\begin{aligned}
&\left|\mathbb{E}_{(s,a)\sim\mu}[w(s,a)\, G^\pi_T(s,a)]\right| && (33)\\
&= \left|\mathbb{E}_{(s,a)\sim\mu}\left[w(s,a)\left(\mathbb{E}_{s'\sim T(s,a)}[V^\pi_{T^\star}(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[V^\pi_{T^\star}(s')]\right)\right]\right| && (34)\\
&\le \left|\mathbb{E}_{(s,a)\sim\mu}\left[w(s,a)\left(\mathbb{E}_{s'\sim T(s,a)}[g(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[g(s')]\right)\right]\right| && (35)\\
&\quad + \left|\mathbb{E}_{(s,a)\sim\mu}\left[w(s,a)\left(\mathbb{E}_{s'\sim T(s,a)}[g(s') - V^\pi_{T^\star}(s')] + \mathbb{E}_{s'\sim T^\star(s,a)}[g(s') - V^\pi_{T^\star}(s')]\right)\right]\right|. && (36)
\end{aligned}
\]
Define
\[
\hat g = \operatorname*{argmin}_{g\in\mathcal{G}} \left|\mathbb{E}_{(s,a)\sim\mu}\left[w(s,a)\left(\mathbb{E}_{s'\sim T(s,a)}[g(s') - V^\pi_{T^\star}(s')] + \mathbb{E}_{s'\sim T^\star(s,a)}[g(s') - V^\pi_{T^\star}(s')]\right)\right]\right|.
\]
Since g is arbitrary, continuing Eq. (36) and recalling Definition 2 we get
\[
\begin{aligned}
&\left|\mathbb{E}_{(s,a)\sim\mu}[w(s,a)\, G^\pi_T(s,a)]\right| && (37)\\
&\le \left|\mathbb{E}_{(s,a)\sim\mu}\left[w(s,a)\left(\mathbb{E}_{s'\sim T(s,a)}[\hat g(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[\hat g(s')]\right)\right]\right| + \epsilon_V(T, \pi) && (38)\\
&\le \sup_{g\in\mathcal{G}} \left|\mathbb{E}_{(s,a)\sim\mu}\left[w(s,a)\left(\mathbb{E}_{s'\sim T(s,a)}[g(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[g(s')]\right)\right]\right| + \epsilon_V(T, \pi). && (39)
\end{aligned}
\]
Combining Eq. (39) and Eq. (32) we get
\[
\begin{aligned}
\left|\mathbb{E}_{(s,a)\sim\mu}[w(s,a)\, G^\pi_T(s,a)]\right| &\le \sup_{g\in\mathcal{G}} L_w(g, T) + \epsilon_V(T, \pi) && (40)\\
&\le \sup_{g\in\mathcal{G}} \ell_w(g, T) + \epsilon_V(T, \pi) + 2V_{\max}\sqrt{\frac{\zeta\iota}{n}}. && (41)
\end{aligned}
\]
Now plugging into Eq. (31) we get
\[
\left|\mathbb{E}_{(s,a)\sim\rho^\pi_T}[G^\pi_T(s,a)]\right| \le \sup_{g\in\mathcal{G}} \ell_w(g, T) + \epsilon_V(T, \pi) + 2V_{\max}\sqrt{\frac{\zeta\iota}{n}} + V_{\max}\,\mathbb{E}_{(s,a)\sim\rho^\pi_T}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_T(s,a)}{\hat\mu(s,a)} > \zeta\right]\right] + \zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu).
\]
Finally, combining with the simulation lemma (Lemma 1) finishes the proof.

Proof of Lemma 5
Proof. Consider a fixed π ∈ Π. When the context is clear, we use ϵ_ρ and ϵ_µ to denote ϵ_ρ(π) and ϵ_µ(π) respectively. Consider the dynamics
\[
\hat T = \operatorname*{argmin}_{T\in\mathcal{T}} \mathbb{E}_{(s,a)\sim\rho^\pi_{T^\star}}\left[\mathrm{TV}\big(T(s,a), T^\star(s,a)\big)\right]. \tag{42}
\]
By the definition of ϵ_ρ we get
\[
\mathbb{E}_{(s,a)\sim\rho^\pi_{T^\star}}\left[\mathrm{TV}\big(\hat T(s,a), T^\star(s,a)\big)\right] \le \epsilon_\rho.
\]
Applying Lemma 9 we get
\[
\big\|\rho^\pi_{\hat T} - \rho^\pi_{T^\star}\big\|_1 \le \frac{\epsilon_\rho}{1-\gamma}. \tag{43}
\]
The rest of the proof is organized in the following way. We bound the three terms on the RHS of Eq. (4) respectively as follows:
\[
\eta(\hat T, \pi) \ge \eta(T^\star, \pi) - \frac{V_{\max}}{1-\gamma}\,\epsilon_\rho, \tag{44}
\]
\[
\sup_{g\in\mathcal{G}} \ell_w(g, \hat T) \le \frac{2V_{\max}\epsilon_\rho}{1-\gamma} + 2V_{\max}\sqrt{\frac{\zeta\iota}{n}} + \zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu), \tag{45}
\]
\[
\mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_{\hat T}(s,a)}{\hat\mu(s,a)} > \zeta\right]\right] \le \epsilon_\mu + \frac{3\epsilon_\rho}{1-\gamma}. \tag{46}
\]
Then we combine these inequalities together to prove Lemma 5.

Step 1: Proving Eq. (44). Note that for every T and π, η(T, π) = (1/(1−γ)) ⟨ρ^π_T, r⟩, where r is the reward function. Then we have
\[
\eta(T^\star, \pi) - \eta(\hat T, \pi) = \frac{1}{1-\gamma}\left\langle \rho^\pi_{T^\star} - \rho^\pi_{\hat T},\; r\right\rangle \le \frac{1}{1-\gamma}\big\|\rho^\pi_{T^\star} - \rho^\pi_{\hat T}\big\|_1 \,\|r\|_\infty. \tag{47}
\]
Combining with Eq. (43) we get Eq. (44).

Step 2: Proving Eq. (45). Fix any function g ∈ G and let w = w_{π,T̂} be a shorthand. Define L_w(g, T) = |E_{(s,a,s')∼µ}[w(s,a)(f^g_T(s,a) − g(s'))]| to be the population error.
Then we have
\[
\begin{aligned}
L_w(g, \hat T) &= \left|\mathbb{E}_{(s,a)\sim\mu}\left[w(s,a)\left(\mathbb{E}_{s'\sim\hat T(s,a)}[g(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[g(s')]\right)\right]\right| \\
&\le \left|\mathbb{E}_{(s,a)\sim\hat\mu}\left[w(s,a)\left(\mathbb{E}_{s'\sim\hat T(s,a)}[g(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[g(s')]\right)\right]\right| + \zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu) \\
&= \left|\mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_{\hat T}(s,a)}{\hat\mu(s,a)} \le \zeta\right]\left(\mathbb{E}_{s'\sim\hat T(s,a)}[g(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[g(s')]\right)\right]\right| + \zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu) \\
&\le V_{\max}\,\mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_{\hat T}(s,a)}{\hat\mu(s,a)} \le \zeta\right]\mathrm{TV}\big(\hat T(s,a), T^\star(s,a)\big)\right] + \zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu) \\
&\le V_{\max}\,\mathbb{E}_{(s,a)\sim\rho^\pi_{T^\star}}\left[\mathrm{TV}\big(\hat T(s,a), T^\star(s,a)\big)\right] + \frac{V_{\max}\epsilon_\rho}{1-\gamma} + \zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu) && \text{(by Eq. (43))}\\
&\le V_{\max}\left(\epsilon_\rho + \frac{\epsilon_\rho}{1-\gamma}\right) + \zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu) \le \frac{2V_{\max}\epsilon_\rho}{1-\gamma} + \zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu).
\end{aligned}
\]
Under event E we have
\[
\ell_w(g, \hat T) \le L_w(g, \hat T) + 2V_{\max}\sqrt{\frac{\zeta\iota}{n}}. \tag{48}
\]
Because g is arbitrary, we get Eq. (45).

Step 3: Proving Eq. (46). Note that
\[
\begin{aligned}
&\mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_{\hat T}(s,a)}{\hat\mu(s,a)} > \zeta\right]\right] && (49)\\
&= \mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\frac{\rho^\pi_{\hat T}(s,a)}{\rho^\pi_{T^\star}(s,a)}\cdot\frac{\rho^\pi_{T^\star}(s,a)}{\hat\mu(s,a)} > \zeta\right]\right] && (50)\\
&\le \mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\frac{\rho^\pi_{\hat T}(s,a)}{\rho^\pi_{T^\star}(s,a)} > 2\right]\right] + \mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\frac{\rho^\pi_{T^\star}(s,a)}{\hat\mu(s,a)} > \zeta/2\right]\right]. && (51)
\end{aligned}
\]
With the help of Lemma 8, we can upper bound the first term of Eq. (51) by the total variation between ρ^π_{T̂} and ρ^π_{T⋆}. Combining Lemma 8 and Eq. (43) we get
\[
\mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\frac{\rho^\pi_{\hat T}(s,a)}{\rho^\pi_{T^\star}(s,a)} > 2\right]\right] \le \frac{2\epsilon_\rho}{1-\gamma}. \tag{52}
\]
On the other hand, by combining Eq. (43) and the definition of ϵ_µ we get
\[
\mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\frac{\rho^\pi_{T^\star}(s,a)}{\hat\mu(s,a)} > \zeta/2\right]\right] \le \mathbb{E}_{(s,a)\sim\rho^\pi_{T^\star}}\left[\mathbb{I}\!\left[\frac{\rho^\pi_{T^\star}(s,a)}{\hat\mu(s,a)} > \zeta/2\right]\right] + \frac{\epsilon_\rho}{1-\gamma} \le \epsilon_\mu + \frac{\epsilon_\rho}{1-\gamma}.
\]
Consequently, we get Eq. (46).

Now we stitch Eq. (43), Eq. (44) and Eq. (45) together. Combining with the definition of lb(T̂, π) in Eq. (4), we have
\[
\begin{aligned}
\operatorname{lb}(\hat T, \pi) &= \eta(\hat T, \pi) - \frac{1}{1-\gamma}\left(\sup_{g\in\mathcal{G}} \big|\ell_{w_{\pi,\hat T}}(g, \hat T)\big| + V_{\max}\,\mathbb{E}_{(s,a)\sim\rho^\pi_{\hat T}}\left[\mathbb{I}\!\left[\tfrac{\rho^\pi_{\hat T}(s,a)}{\hat\mu(s,a)} > \zeta\right]\right] + 2\zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu)\right) \\
&\ge \eta(T^\star, \pi) - \frac{V_{\max}\epsilon_\rho}{1-\gamma} - \frac{2V_{\max}\epsilon_\rho}{(1-\gamma)^2} - \frac{2V_{\max}}{1-\gamma}\sqrt{\frac{\zeta\iota}{n}} - \frac{V_{\max}}{1-\gamma}\left(\frac{3\epsilon_\rho}{1-\gamma} + \epsilon_\mu\right) - \frac{2\zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu)}{1-\gamma} \\
&\ge \eta(T^\star, \pi) - \frac{6V_{\max}\epsilon_\rho}{(1-\gamma)^2} - \frac{V_{\max}\epsilon_\mu}{1-\gamma} - \frac{2V_{\max}}{1-\gamma}\sqrt{\frac{\zeta\iota}{n}} - \frac{2\zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu)}{1-\gamma}.
\end{aligned}
\]
Since T̂ ∈ T, we have
\[
\max_{T\in\mathcal{T}} \operatorname{lb}(T, \pi) \ge \operatorname{lb}(\hat T, \pi), \tag{53}
\]
which finishes the proof.

Proof of Theorem 4
Proof. Let T̂, π̂ ← argmax_{T∈T, π∈Π} lb(T, π) be the dynamics and policy that maximize the lower bound. Note that π̂ is the output of Algorithm 1. Now under the event E, by Lemma 5, for any policy π we have
\[
\max_{T\in\mathcal{T}} \operatorname{lb}(T, \pi) \ge \eta(T^\star, \pi) - \frac{6V_{\max}\epsilon_\rho(\pi)}{(1-\gamma)^2} - \frac{V_{\max}\epsilon_\mu(\pi)}{1-\gamma} - \frac{2V_{\max}}{1-\gamma}\sqrt{\frac{\zeta\iota}{n}} - \frac{2\zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu)}{1-\gamma}. \tag{54}
\]
On the other hand, under the event E, by Lemma 3 we get
\[
\eta(T^\star, \hat\pi) \ge \operatorname{lb}(\hat T, \hat\pi) - \frac{\epsilon_V(\hat T, \hat\pi)}{1-\gamma} - \frac{2V_{\max}}{1-\gamma}\sqrt{\frac{\zeta\iota}{n}}. \tag{55}
\]
By the optimality of T̂, π̂, we have lb(T̂, π̂) ≥ sup_{T∈T} lb(T, π) for any π. As a result, combining Eq. (54) and Eq. (55) we get
\[
\begin{aligned}
\eta(T^\star, \hat\pi) &\ge \operatorname{lb}(\hat T, \hat\pi) - \frac{\epsilon_V(\hat T, \hat\pi)}{1-\gamma} - \frac{2V_{\max}}{1-\gamma}\sqrt{\frac{\zeta\iota}{n}} && (56)\\
&\ge \sup_{\pi\in\Pi}\sup_{T\in\mathcal{T}} \operatorname{lb}(T, \pi) - \frac{\epsilon_V(\hat T, \hat\pi)}{1-\gamma} - \frac{2V_{\max}}{1-\gamma}\sqrt{\frac{\zeta\iota}{n}} && (57)\\
&\ge \sup_{\pi}\left(\eta(T^\star, \pi) - \frac{6V_{\max}\epsilon_\rho(\pi)}{(1-\gamma)^2} - \frac{V_{\max}\epsilon_\mu(\pi)}{1-\gamma}\right) - \frac{\epsilon_V(\hat T, \hat\pi)}{1-\gamma} - \frac{4V_{\max}}{1-\gamma}\sqrt{\frac{\zeta\iota}{n}} - \frac{2\zeta V_{\max}\,\mathrm{TV}(\hat\mu, \mu)}{1-\gamma}. && (58)
\end{aligned}
\]

Proof of Theorem 6
Proof. Note that for any fixed θ ∈ R^d, the transition functions for the states s_1, ..., s_d are identical. As a result, Q^π_{T_θ}(s_i, a_j) = Q^π_{T_θ}(s_{i'}, a_j) for all i, i' ∈ [d] and any policy π. Recall that π_θ is the optimal policy of T_θ (with ties broken uniformly at random). Therefore, π_θ(s_0) = 1/A and π_θ(s_i) = π_θ(s_{i'}) for all i, i' ∈ [d]. By the definition of the ground-truth dynamics T⋆ in Eqs. (9)-(10), we have Q^{π_θ}_{T⋆}(s_i, a_j) = I[i = j] γ/(1−γ). Therefore,
\[
\eta(T^\star, \pi_\theta) = \frac{\gamma}{A}\sum_{i=1}^d Q^{\pi_\theta}_{T^\star}(s_i, \pi_\theta(s_i)) \le \frac{\gamma}{A}\max_a \sum_{i=1}^d Q^{\pi_\theta}_{T^\star}(s_i, a) \le \frac{\gamma^2}{A(1-\gamma)}. \tag{59}
\]
Since max_π η(T⋆, π) = γ²/(1−γ), we have
\[
\max_\pi \eta(T^\star, \pi) - \eta(T^\star, \pi_\theta) \ge \frac{(A-1)\gamma^2}{A(1-\gamma)}.
\]

OPE Error of MML
In this section, we show that the off-policy estimation error bound of Voloshin, Jiang, and Yue (2021) can be large when the dynamics model class is misspecified (Proposition 7). The MML algorithm requires a density ratio class W : S × A → R+ and proves that when w_{π,T} ∈ W and V^π_{T⋆} ∈ G,
\[
|\eta(T, \pi) - \eta(T^\star, \pi)| \le \gamma \min_{T\in\mathcal{T}} \max_{w\in\mathcal{W}, g\in\mathcal{G}} |\ell_w(g, T)|. \tag{60}
\]
Unfortunately, this is suboptimal since the error may not converge to zero even given infinite data:

Proposition 7. Consider the dynamics class T = {T_θ : θ ∈ S^{d−1}, θ_i ≥ 0, ∀i ∈ [d]}.
Let Π = {π_x : x ∈ [d]} where π_x(s_i) = a_x for 0 ≤ i ≤ d and π_x(s_g) = π_x(s_b) = a_1. Let W be the density ratio class induced by the policies π ∈ Π running on {T⋆} ∪ T. Even with G = {V^{π_x}_{T⋆} : x ∈ [d]} and an infinite amount of data, we have
\[
\min_{T\in\mathcal{T}} \max_{w\in\mathcal{W}, g\in\mathcal{G}} |\ell_w(g, T)| \ge \frac{\gamma}{8(1-\gamma)}. \tag{61}
\]
In contrast, the error terms in Theorem 4 converge to 0 when ζ > poly(d, 1/(1−γ)) and n → ∞ in the same setting.

Proof of Proposition 7. Recall that we set the dynamics class T = {T_θ : θ ∈ S^{d−1}}. Let Π = {π_x : x ∈ [d]} where π_x(s_i) = a_x for 0 ≤ i ≤ d and π_x(s_g) = π_x(s_b) = a_1. Let W be the density ratio class induced by π. For any x ∈ [d], we can compute
\[
\rho^{\pi_x}_{T^\star}(s_0, a_i) = (1-\gamma)\,\mathbb{I}[i = x], \qquad \rho^{\pi_x}_{T^\star}(s_i, a_j) = \gamma(1-\gamma)\,\mathbb{I}[i = x, j = x], \tag{62}
\]
\[
\rho^{\pi_x}_{T^\star}(s_g, a_j) = \gamma^2(1-\gamma)\,\mathbb{I}[j = 1], \qquad \rho^{\pi_x}_{T^\star}(s_b, a_j) = 0. \tag{63}
\]
Let µ be the uniform distribution over the 3d + d² state-action pairs. Then we can define W = {w_x : x ∈ [d]} where w_x(s, a) ≜ (1/(1−γ)) ρ^{π_x}_{T⋆}(s, a)/µ(s, a). Now for any fixed θ ∈ S^{d−1} with θ ≥ 0, consider
\[
\max_{w\in\mathcal{W}, g\in\mathcal{G}} |\ell_w(g, T_\theta)|. \tag{64}
\]
Let x = argmin_i θ_i. We claim that ℓ_{w_x}(V^{π_x}_{T⋆}, T_θ) ≥ γ/(8(1−γ)). Indeed, with infinite data we have
\[
\begin{aligned}
\ell_{w_x}(V^{\pi_x}_{T^\star}, T_\theta) &= \left|\mathbb{E}_{(s,a)\sim\mu}\left[w_x(s,a)\left(\mathbb{E}_{s'\sim T_\theta(s,a)}[V^{\pi_x}_{T^\star}(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[V^{\pi_x}_{T^\star}(s')]\right)\right]\right| \\
&= \frac{1}{1-\gamma}\left|\mathbb{E}_{(s,a)\sim\rho^{\pi_x}_{T^\star}}\left[\mathbb{E}_{s'\sim T_\theta(s,a)}[V^{\pi_x}_{T^\star}(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[V^{\pi_x}_{T^\star}(s')]\right]\right|.
\end{aligned}
\]
Recall that T_θ = T⋆ for the states s_0, s_g, s_b.
As a result, we continue the equation by
\[
\begin{aligned}
&\frac{1}{1-\gamma}\left|\mathbb{E}_{(s,a)\sim\rho^{\pi_x}_{T^\star}}\left[\mathbb{E}_{s'\sim T_\theta(s,a)}[V^{\pi_x}_{T^\star}(s')] - \mathbb{E}_{s'\sim T^\star(s,a)}[V^{\pi_x}_{T^\star}(s')]\right]\right| \\
&= \gamma\left|\mathbb{E}_{s'\sim T_\theta(s_x,a_x)}[V^{\pi_x}_{T^\star}(s')] - \mathbb{E}_{s'\sim T^\star(s_x,a_x)}[V^{\pi_x}_{T^\star}(s')]\right| && \text{(by the definition of } \rho\text{)}\\
&= \gamma\left|\tfrac{1}{2}(1+\theta_x)\, V^{\pi_x}_{T^\star}(s_g) + \tfrac{1}{2}(1-\theta_x)\, V^{\pi_x}_{T^\star}(s_b) - V^{\pi_x}_{T^\star}(s_g)\right| && \text{(by the definition of } T_\theta\text{)}\\
&= \frac{\gamma}{2}(1-\theta_x)\left(V^{\pi_x}_{T^\star}(s_g) - V^{\pi_x}_{T^\star}(s_b)\right).
\end{aligned}
\]
By basic algebra, V^{π_x}_{T⋆}(s_g) = (1−γ)^{−1} and V^{π_x}_{T⋆}(s_b) = 0. As a result, we get
\[
\ell_{w_x}(V^{\pi_x}_{T^\star}, T_\theta) \ge \frac{\gamma}{2(1-\gamma)}(1-\theta_x). \tag{65}
\]
Recall that x = argmin_i θ_i. Since θ ∈ S^{d−1} and θ_i ≥ 0 for all i, we have 1 = Σ_{i=1}^d θ_i² ≥ dθ_x². As a result, when d > 2 we have θ_x ≤ 1/√2. Therefore
\[
\ell_{w_x}(V^{\pi_x}_{T^\star}, T_\theta) \ge \frac{\gamma}{2(1-\gamma)}(1-\theta_x) \ge \frac{\gamma}{8(1-\gamma)}. \tag{66}
\]

Helper Lemmas
In this section, we present several helper lemmas used in this appendix.

Lemma 8. For two distributions p, q over x ∈ X, if ∥p − q∥₁ ≤ ϵ, then for any ζ > 1,
\[
\mathbb{E}_{x\sim p}\left[\mathbb{I}\!\left[\frac{p(x)}{q(x)} > \zeta\right]\right] \le \frac{\zeta}{\zeta-1}\,\epsilon.
\]
Proof. Define E(x) = I[p(x)/q(x) > ζ]. Note that under the event E(x) we have
\[
p(x) > q(x)\,\zeta \;\Longrightarrow\; p(x) - q(x) > q(x)(\zeta - 1). \tag{67}
\]
As a result,
\[
\begin{aligned}
\epsilon \ge \|p - q\|_1 &\ge \int |p(x) - q(x)|\, E(x)\, \mathrm{d}x && (68)\\
&\ge \int (\zeta-1)\, q(x)\, E(x)\, \mathrm{d}x = \mathbb{E}_{x\sim q}[E(x)]\,(\zeta-1) && (69)\\
&\ge \left(\mathbb{E}_{x\sim p}[E(x)] - \epsilon\right)(\zeta-1). && (70)
\end{aligned}
\]
By algebraic manipulation we get E_{x∼p}[E(x)] ≤ (ζ/(ζ−1)) ϵ.
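A quick numerical check of Lemma 8 with hypothetical distributions (chosen here only for illustration): take p = (0.6, 0.4), q = (0.5, 0.5), so ϵ = ∥p − q∥₁ = 0.2, and ζ = 1.1. Then p(x)/q(x) exceeds ζ only at the first point, so
\[
\mathbb{E}_{x\sim p}\left[\mathbb{I}\!\left[\tfrac{p(x)}{q(x)} > 1.1\right]\right] = 0.6 \le \frac{1.1}{0.1}\cdot 0.2 = 2.2,
\]
consistent with (though far below) the bound.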
Lemma 9. Consider a fixed policy π and two dynamics models T, T̄. Suppose E_{(s,a)∼ρ^π_T}[TV(T(s,a), T̄(s,a))] ≤ ϵ; then
\[
\big\|\rho^\pi_T - \rho^\pi_{\bar T}\big\|_1 \le \frac{\epsilon}{1-\gamma}. \tag{71}
\]
Proof. First, let G, Ḡ be the transition kernels from S × A to S × A induced by (T, π) and (T̄, π) respectively. Then for any distribution ρ ∈ ∆(S × A) we have
\[
\big\|G\rho - \bar G\rho\big\|_1 \le \mathbb{E}_{(s,a)\sim\rho}\left[\mathrm{TV}\big(\bar T(s,a), T(s,a)\big)\right]. \tag{72}
\]
Let ρ_h (resp. ρ̄_h) be the state-action distribution at step h under dynamics T (resp. T̄). Then we have
\[
\rho_h - \bar\rho_h = \left(G^h - \bar G^h\right)\rho_0 = \sum_{h'=0}^{h-1} \bar G^{h-h'-1}\left(G - \bar G\right) G^{h'}\rho_0. \tag{73}
\]
As a result,
\[
\begin{aligned}
\|\rho_h - \bar\rho_h\|_1 &\le \sum_{h'=0}^{h-1} \left\|\bar G^{h-h'-1}\left(G - \bar G\right) G^{h'}\rho_0\right\|_1 && (74)\\
&\le \sum_{h'=0}^{h-1} \left\|\left(G - \bar G\right) G^{h'}\rho_0\right\|_1 \le \sum_{h'=0}^{h-1} \mathbb{E}_{(s,a)\sim\rho_{h'}}\left[\mathrm{TV}\big(\bar T(s,a), T(s,a)\big)\right]. && (75)
\end{aligned}
\]
It follows that
\[
\begin{aligned}
\big\|\rho^\pi_T - \rho^\pi_{\bar T}\big\|_1 &\le (1-\gamma)\sum_{h=0}^{\infty} \gamma^h \|\rho_h - \bar\rho_h\|_1 && (76)\\
&\le (1-\gamma)\sum_{h=0}^{\infty} \gamma^h \sum_{h'=0}^{h-1} \mathbb{E}_{(s,a)\sim\rho_{h'}}\left[\mathrm{TV}\big(\bar T(s,a), T(s,a)\big)\right] && (77)\\
&\le (1-\gamma)\sum_{h=0}^{\infty} \frac{\gamma^h}{1-\gamma}\, \mathbb{E}_{(s,a)\sim\rho_h}\left[\mathrm{TV}\big(\bar T(s,a), T(s,a)\big)\right] && (78)\\
&= \sum_{h=0}^{\infty} \gamma^h\, \mathbb{E}_{(s,a)\sim\rho_h}\left[\mathrm{TV}\big(\bar T(s,a), T(s,a)\big)\right] && (79)\\
&= \frac{1}{1-\gamma}\, \mathbb{E}_{(s,a)\sim\rho^\pi_T}\left[\mathrm{TV}\big(\bar T(s,a), T(s,a)\big)\right]. && (80)
\end{aligned}
\]

LQR Experimental Details

Data generation
The offline dataset is generated by running several behavior policies π_v under the true dynamics, with v ∈ {−1, −0.75, −0.5, −0.25, 0, 0.25, 0.5, 0.75} and noise N(0, 0.5) added to the policy. As a result, the behavior dataset covers most of the state-action space. The dataset contains 2000 trajectories of length 20 from each policy.
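The following is a minimal sketch of this data-generation loop. The true LQR dynamics, the exact form of π_v, and whether the noise scales denote standard deviations are not specified in this section, so the dynamics constants and the linear-gain form a = v·s below are illustrative assumptions only.

```python
import numpy as np

# Hypothetical 1-D LQR dynamics s' = A*s + B*a + state_noise; A and B are placeholders.
A, B = 1.0, 1.0
policy_gains = [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75]
n_traj, horizon = 2000, 20
rng = np.random.default_rng(0)

dataset = []  # (s, a, s') transitions pooled over all behavior policies
for v in policy_gains:
    for _ in range(n_traj):
        s = 0.5 + 0.2 * rng.standard_normal()               # initial state: mean 0.5, noise 0.2
        for _ in range(horizon):
            a = v * s + rng.normal(0.0, 0.5)                 # behavior policy pi_v plus N(0, 0.5) noise
            s_next = A * s + B * a + rng.normal(0.0, 0.05)   # state noise N(0, 0.05)
            dataset.append((s, a, s_next))
            s = s_next
```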
Implementation

We compute the density ratio by approximating the behavior distribution $\mu$ and the state-action distribution $\rho^\pi_T$ respectively. By discretizing the state-action space into $10 \times 10$ bins uniformly, the distribution $\mu(s, a)$ is approximated by the frequency of the corresponding bin. For $\rho^\pi_T$, we first collect 2000 trajectories of policy $\pi$ under $T$ and compute the distribution similarly. Because all the function classes are finite, we enumerate over the function classes to compute $\mathrm{lb}(T, \pi)$ for every pair of dynamics and policy.
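For concreteness, the histogram-based density-ratio estimate can be written as the following numpy sketch (ours, not the paper's implementation); the state-action bounds of the grid and the placeholder data are assumptions.

```python
import numpy as np

def binned_density(sa, lo=-2.0, hi=2.0, n_bins=10):
    """Approximate a state-action density by a uniform 10 x 10 histogram."""
    s, a = sa[:, 0], sa[:, 1]
    hist, s_edges, a_edges = np.histogram2d(
        s, a, bins=n_bins, range=[[lo, hi], [lo, hi]])
    hist = hist / hist.sum()                       # empirical frequency of each bin
    return hist, s_edges, a_edges

def density_ratio(sa_query, mu_hist, rho_hist, edges, zeta=50.0, eps=1e-8):
    """Clipped ratio w(s, a) = min(rho(s, a) / mu(s, a), zeta) on the bin grid."""
    s_edges, a_edges = edges
    i = np.clip(np.digitize(sa_query[:, 0], s_edges) - 1, 0, mu_hist.shape[0] - 1)
    j = np.clip(np.digitize(sa_query[:, 1], a_edges) - 1, 0, mu_hist.shape[1] - 1)
    ratio = rho_hist[i, j] / (mu_hist[i, j] + eps)
    return np.clip(ratio, 0.0, zeta)

# Usage with placeholder data: sa_behavior comes from the offline dataset, and
# sa_rollout from 2000 trajectories of pi simulated under the candidate model T.
rng = np.random.default_rng(0)
sa_behavior = rng.normal(size=(5000, 2))
sa_rollout = rng.normal(loc=0.3, size=(5000, 2))
mu_hist, s_e, a_e = binned_density(sa_behavior)
rho_hist, _, _ = binned_density(sa_rollout)
w = density_ratio(sa_behavior, mu_hist, rho_hist, (s_e, a_e))
print(w.shape, w.min(), w.max())
```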
Hyperparameters

In the experiments, we use the following hyperparameters:
- Cutoff threshold in Line 3 of Alg. 1: ζ = 50.
- Random seeds for three runs: 1, 2, 3.
- State noise: η ∼ N(0, 0.05).
- Policy noise: N(0, 0.01).
- Discount factor: γ = 0.9.
- Mean of initial state: 0.5.
- Noise added to initial state: 0.2.
- Number of trajectories per policy: 2000.

We do not require parameter tuning for the optimization procedures. We tried cutoff thresholds ζ ∈ {10, 20, 50} and numbers of trajectories in {20, 500, 2000}. A smaller cutoff leads to an over-pessimistic lower bound, and fewer trajectories introduce variance into the final result.

Computing resources

These experiments run on a machine with 2 CPUs, 4GB RAM, and Ubuntu 20.04. We do not require GPU resources. We use Python 3.9.5 and numpy 1.20.2.

D4RL Experimental Details

Tasks

Hopper. The Hopper task is to make a hopper with three joints and four body parts hop forward as fast as possible. The state space is 11-dimensional and the action space is a 3-dimensional continuous space.

HalfCheetah. The HalfCheetah task is to make a 2D robot with 7 rigid links, including 2 legs and a torso, run forward as fast as possible. The state space is 17-dimensional and the action space is a 6-dimensional continuous space.
Model Choice and Hyperparameters

For all the dynamics, each model is parametrized as a 4-layer feedforward neural network with 200 hidden units. For the SAC (Haarnoja et al. 2018) updates (serving as the policy gradient subroutine), the function approximators used for the policy and value function are 2-layer feedforward neural networks with 256 hidden units. The hyperparameter choices for behavior density modeling are based on the training progress of the normalizing flow model: we pre-select a few (fewer than 10) combinations of hyperparameters and pick the set that gives us the lowest training loss. This is usually not the best practice; however, the small number of combinations (a non-exhaustive search) and the small model size reduce our concern about overfitting to the training set.
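As an illustration of the model class described above, a 4-layer, 200-unit dynamics network could look like the following PyTorch sketch. The Gaussian output head, the clamping range, and the choice of PyTorch are our assumptions; the paper does not specify the output parameterization or training loop here.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """4-layer feedforward dynamics model with 200 hidden units per layer.

    Predicts a Gaussian over the next state given (s, a); the output head is an
    illustrative assumption, not the paper's exact design.
    """
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, state_dim)
        self.log_std = nn.Linear(hidden, state_dim)

    def forward(self, state, action):
        h = self.backbone(torch.cat([state, action], dim=-1))
        return self.mean(h), self.log_std(h).clamp(-5.0, 2.0)

# Example shapes for Hopper: 11-dimensional states, 3-dimensional actions.
model = DynamicsModel(state_dim=11, action_dim=3)
mu, log_std = model(torch.zeros(32, 11), torch.zeros(32, 3))
print(mu.shape, log_std.shape)   # torch.Size([32, 11]) torch.Size([32, 11])
```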
MOPO (Yu et al. 2020):
- Batch size: 100.
- Rollout horizon: 5.
- Lambda: 1.

MBLB:
- Random seeds for five runs: 1, 2, 3, 4, 5.
- Number of trajectories to sample: 100.
- Rollout horizon: 5.
- Batch size: 32.
- Cutoff threshold in Line 3 of Alg. 1: ζ = 5.
- Discount factor γ: 0.99.
- GAE λ: 0.95.
- g function latent size: 8.

MML:
- Random seeds for five runs: 1, 2, 3, 4, 5.
- Batch size: 32.
- Basis function class: square, polynomial.
- Ratio-value function parametrization: linear, reproducing kernel Hilbert space (RKHS).

For MML, we first need to decide how to parametrize $h(s, a, s')$. If we choose a linear parametrization such as $h(s, a, s') = \psi(s, a, s')^\top\theta$, we need to decide what $\psi$ is. There are two obvious choices: $\psi(x) = [x, x^2, 1]$ (square basis function), or a polynomial basis function of degree 2: given $x = [x_1, x_2, \ldots, x_d]$, $\psi(x) = [x_1^2, x_1x_2, x_1x_3, \ldots, x_2^2, x_2x_3, \ldots, x_d^2]$, which can be efficiently computed as the upper-triangular entries of $xx^\top$. If we choose the ratio-value function parametrization to be an RKHS, then we use a radial basis function (RBF) kernel as $K((s, a, s'), (\tilde{s}, \tilde{a}, \tilde{s}'))$.
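The two feature maps can be implemented in a few lines of numpy; the sketch below (ours) takes the concatenated vector x = (s, a, s') as input and reads the degree-2 monomials off the upper triangle of the outer product, as described above.

```python
import numpy as np

def psi_square(x):
    """Square basis: [x, x^2, 1]."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, x ** 2, [1.0]])

def psi_poly2(x):
    """Degree-2 polynomial basis: all monomials x_i * x_j with i <= j,
    i.e. the upper-triangular entries of the outer product x x^T."""
    x = np.asarray(x, dtype=float)
    outer = np.outer(x, x)
    iu = np.triu_indices(len(x))
    return outer[iu]

x = np.array([1.0, 2.0, 3.0])      # stands in for the concatenation of (s, a, s')
print(psi_square(x))               # [1. 2. 3. 1. 4. 9. 1.]
print(psi_poly2(x))                # [1. 2. 3. 4. 6. 9.]
```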
Computing resources

These experiments run on a machine with 4 CPUs, 10GB RAM, and Ubuntu 20.04. We do not require GPU resources. We use Python 3.9.5 and numpy 1.20.2.

Algorithms

We describe the MML and MBLB algorithms in this section. Algorithm 2 describes how we compute MBLB; note that we compute the three components of the lower bound explicitly. Algorithm 3 describes how we compute MML with linear parametrization. Algorithm 4 describes how we compute MML with RKHS parametrization.
Algorithm 2: MBLB: Model-based Lower Bound
Input: offline RL data D; set of dynamics, policy pairs [(π1, T1), ..., (πK, TK)]; Vmax; γ; ζ.
Output: optimal policy π*
  µ̂(·, ·) = trainFlow(D)
  scores = []
  for i ← 1 ... K do
    Q^{πi} = trainFQE(Sample(D, Ti, πi), πi)
    ρ^{Ti}_{πi}(·, ·) = trainFlow(Sample(D, Ti, πi))
    η = E_{(s,a)∼D}[Q^{πi}(s, πi(s))]
    Initialize(θ); L = 0; ∆ = 0
    for (s, a, s′) ∈ D do
      w = max(min(ρ^{Ti}_{πi}(s, a) / µ̂(s, a), ζ), 0)
      ℓ = −| w · (E_{x∼Ti(s)}[g_θ(x)] − g_θ(s′)) |
      θ = θ + ∇_θ ℓ
      ∆ = ∆ − Vmax · I[ ρ^{Ti}_{πi}(s, a) / µ̂(s, a) > ζ ]
      L = L + ℓ
    end
    score = (1 / |D|) · (η + (1 / (1 − γ)) · (∆ + L))
    scores ← score
  end
  i = argmax(scores)
  return πi

Algorithm 3: MML-Linear: Minimax Model Learning Bound
Input: offline RL data D; set of dynamics, policy pairs [(π1, T1), ..., (πK, TK)].
Output: optimal policy π*
  scores = []
  for i ← 1 ... K do
    Initialize(θ); L = 0
    for (s, a, s′) ∈ D do
      ℓ = −( E_{x∼Ti(s)}[ψ(s, a, x)^⊤ θ] − ψ(s, a, s′)^⊤ θ )
      θ = θ + ∇_θ ℓ
      L = L + ℓ
    end
    score = L / |D|
    scores ← score
  end
  i = argmax(scores)
  return πi

Algorithm 4: MML-RKHS: Minimax Model Learning Bound
Input: offline RL data D; set of dynamics, policy pairs [(π1, T1), ..., (πK, TK)]; kernel K.
Output: optimal policy π*
  scores = []
  for i ← 1 ... K do
    L = 0
    for (s, a, s′), (s̃, ã, s̃′) ∈ D do
      ℓ1 = E_{x∼T(s), x̃∼T(s̃)}[K((s, a, x), (s̃, ã, x̃))]
      ℓ2 = −2 · E_{x∼T(s)}[K((s, a, x), (s̃, ã, s̃′))]
      ℓ3 = K((s, a, s′), (s̃, ã, s̃′))
      L = L + ℓ1 + ℓ2 + ℓ3
    end
    score = L / |D|
    scores ← score
  end
  i = argmax(scores)
  return πi
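A direct numpy transcription of Algorithm 3 (the simplest of the three) for a single candidate pair (πi, Ti) is sketched below. The learning rate, the number of model samples used to approximate the expectation over Ti, and the sample_T interface are our assumptions, not part of the paper's code.

```python
import numpy as np

def mml_linear_score(D, sample_T, psi, theta_dim, lr=1e-3, n_model_samples=8, seed=0):
    """Sketch of Algorithm 3 for one candidate (policy, dynamics) pair.

    D        : list of (s, a, s_next) transitions from the offline dataset
    sample_T : function (s, a, n) -> n sampled next states from T_i (assumed interface)
    psi      : feature map psi(s, a, x) for the test function h = psi^T theta
    """
    rng = np.random.default_rng(seed)
    theta = rng.normal(scale=0.01, size=theta_dim)             # Initialize(theta)
    L = 0.0
    for s, a, s_next in D:
        xs = sample_T(s, a, n_model_samples)
        feat_model = np.mean([psi(s, a, x) for x in xs], axis=0)  # approximates E_{x~T_i(s,a)}[psi(s,a,x)]
        diff = feat_model - psi(s, a, s_next)
        loss = -float(diff @ theta)                            # l = -(E[psi]^T theta - psi(s,a,s')^T theta)
        theta = theta - lr * diff                              # ascent step on l, since grad_theta l = -diff
        L += loss
    return L / max(len(D), 1)

# The candidate with the largest score is selected: i* = argmax_i score_i.
```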
D4RL Additional Experiments

Ablation Study

We conduct an ablation study in Table A1 where we evaluate the final performance of the policies selected using either FQE with TD-1 estimation or FQE with GAE estimation. We observe that using GAE for offline policy selection allows for picking better policies on average.

Dataset Type | Environment | FQE (TD-1)      | FQE (GAE)
medium       | hopper      | 507.8 (549.6)   | 533.5 (532.6)
med-expert   | hopper      | 149.3 (146.2)   | 261.1 (157.9)
expert       | hopper      | 39.0 (34.6)     | 120.7 (78.7)
medium       | halfcheetah | 1802.5 (1011.9) | 2117.4 (1215.6)
med-expert   | halfcheetah | 302.1 (605.2)   | 394.9 (632.0)

Table A1: We report the mean and (standard deviation) of the selected policy's environment performance across 3 random seeds using different variants of FQE.

MBLB with RKHS

In this section, we derive the closed-form solution to $\sup_{g\in\mathcal{G}} \ell_w(g, T)$ when the test function $g$ belongs to a reproducing kernel Hilbert space (RKHS), and empirically evaluate the MBLB method with RKHS parameterization. Let $K : \mathcal{S}\times\mathcal{S} \to \mathbb{R}$ be a symmetric and positive definite kernel and $\mathcal{H}_K$ its corresponding RKHS with inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_K}$. Then we have the following lemma.

Lemma 10. When $\mathcal{G} = \{g \in \mathcal{H}_K : \langle g, g\rangle_{\mathcal{H}_K} \le 1\}$, we have
\[
\sup_{g\in\mathcal{G}} \ell_w(g, T)^2 = \mathbb{E}_{s,a,s'\sim\mathcal{D},\,x\sim T(s,a)}\,\mathbb{E}_{\tilde{s},\tilde{a},\tilde{s}'\sim\mathcal{D},\,\tilde{x}\sim T(\tilde{s},\tilde{a})}\left[w(s,a)\,w(\tilde{s},\tilde{a})\left(K(x,\tilde{x}) + K(s',\tilde{s}') - K(x,\tilde{s}') - K(\tilde{x},s')\right)\right]. \tag{81}
\]

Proof. Let $K_x \triangleq K(x,\cdot) \in \mathcal{H}_K$. By the reproducing property, we have $\langle K_x, K_y\rangle_{\mathcal{H}_K} = K(x, y)$ and $\langle K_x, g\rangle_{\mathcal{H}_K} = g(x)$. As a result,
\[
\sup_{g\in\mathcal{G}} \ell_w(g, T)^2 = \sup_{g:\langle g,g\rangle_{\mathcal{H}_K}\le 1} \mathbb{E}_{s,a,s'\sim\mathcal{D},\,x\sim T(s,a)}\left[w(s,a)\left(\langle K_x, g\rangle_{\mathcal{H}_K} - \langle K_{s'}, g\rangle_{\mathcal{H}_K}\right)\right]^2 \tag{82}
\]
\[
= \sup_{g:\langle g,g\rangle_{\mathcal{H}_K}\le 1} \left\langle \mathbb{E}_{s,a,s'\sim\mathcal{D},\,x\sim T(s,a)}\left[w(s,a)\left(K_x - K_{s'}\right)\right],\ g\right\rangle_{\mathcal{H}_K}^2 \tag{83}
\]
\[
= \left\|\mathbb{E}_{s,a,s'\sim\mathcal{D},\,x\sim T(s,a)}\left[w(s,a)\left(K_x - K_{s'}\right)\right]\right\|_{\mathcal{H}_K}^2 \quad \text{(Cauchy-Schwarz)}
\]
\[
= \left\langle \mathbb{E}_{s,a,s'\sim\mathcal{D},\,x\sim T(s,a)}\left[w(s,a)\left(K_x - K_{s'}\right)\right],\ \mathbb{E}_{\tilde{s},\tilde{a},\tilde{s}'\sim\mathcal{D},\,\tilde{x}\sim T(\tilde{s},\tilde{a})}\left[w(\tilde{s},\tilde{a})\left(K_{\tilde{x}} - K_{\tilde{s}'}\right)\right]\right\rangle_{\mathcal{H}_K} \tag{84}
\]
\[
= \mathbb{E}_{s,a,s'\sim\mathcal{D},\,x\sim T(s,a)}\,\mathbb{E}_{\tilde{s},\tilde{a},\tilde{s}'\sim\mathcal{D},\,\tilde{x}\sim T(\tilde{s},\tilde{a})}\left[\left\langle w(s,a)\left(K_x - K_{s'}\right),\ w(\tilde{s},\tilde{a})\left(K_{\tilde{x}} - K_{\tilde{s}'}\right)\right\rangle_{\mathcal{H}_K}\right] \tag{85}
\]
\[
= \mathbb{E}_{s,a,s'\sim\mathcal{D},\,x\sim T(s,a)}\,\mathbb{E}_{\tilde{s},\tilde{a},\tilde{s}'\sim\mathcal{D},\,\tilde{x}\sim T(\tilde{s},\tilde{a})}\left[w(s,a)\,w(\tilde{s},\tilde{a})\left(K(x,\tilde{x}) + K(s',\tilde{s}') - K(x,\tilde{s}') - K(\tilde{x},s')\right)\right]. \tag{86}
\]
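Equation (81) suggests a simple plug-in estimator: replace both expectations with a double sum over a minibatch of transitions and model samples. The numpy sketch below (ours) does this with an RBF kernel; the bandwidth and the use of a single model sample per transition are illustrative assumptions.

```python
import numpy as np

def rbf(x, y, bandwidth=1.0):
    """RBF kernel K(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    d = np.asarray(x) - np.asarray(y)
    return np.exp(-np.dot(d, d) / (2.0 * bandwidth ** 2))

def rkhs_sup_sq(batch, w, model_next, kernel=rbf):
    """Empirical version of Eq. (81): a double sum over pairs of transitions.

    batch      : list of (s, a, s_next) transitions sampled from D
    w          : array of density-ratio weights w(s, a), one per transition
    model_next : array of next states x ~ T(s, a), one sample per transition
                 (a single model sample per transition is a simplifying assumption)
    """
    n = len(batch)
    total = 0.0
    for i in range(n):
        _, _, sp_i = batch[i]
        for j in range(n):
            _, _, sp_j = batch[j]
            k = (kernel(model_next[i], model_next[j]) + kernel(sp_i, sp_j)
                 - kernel(model_next[i], sp_j) - kernel(model_next[j], sp_i))
            total += w[i] * w[j] * k
    # The double sum is a squared RKHS norm, hence nonnegative;
    # sup_g l_w(g, T) is its square root.
    return total / (n * n)
```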
Table A2 presents the performance of the MBLB algorithm with RKHS parameterization. On most of the environments, MBLB-RKHS performs better than or comparably to MML-RKHS. However, MBLB-Quad consistently outperforms MBLB-RKHS on all of the environments. We suspect that MBLB-RKHS could outperform MBLB-Quad with different choices of kernels, because the quadratic parameterization can be seen as a special case of RKHS parameterization (with quadratic kernels).

Dataset Type | Env         | MOPO          | MML (Squared)   | MML (Polynomial) | MML (RKHS)      | MBLB (Linear)   | MBLB (Quad)     | MBLB (RKHS)
medium       | hopper      | 175.4 (95.3)  | 379.4 (466.4)   | 375.6 (459.5)    | 375.0 (459.9)   | 591.7 (523.1)   | 808.5 (502.7)   | 317.8 (476.4)
med-expert   | hopper      | 183.8 (94.4)  | 160.9 (131.5)   | 116.5 (148.4)    | 61.4 (35.0)     | 261.1 (157.9)   | 242.5 (134.0)   | 208.1 (144.3)
expert       | hopper      | 80.4 (63.4)   | 93.8 (87.9)     | 61.6 (61.9)      | 70.0 (56.2)     | 118.2 (61.6)    | 121.0 (72.5)    | 120.9 (61.8)
medium       | halfcheetah | 599.8 (668.4) | 1967.6 (1707.5) | 2625.1 (937.2)   | 3858.2 (1231.1) | 3290.4 (1753.1) | 2484.2 (1526.8) | 2229.7 (1949.8)
med-expert   | halfcheetah | 486.6 (48.1)  | 188.5 (137.2)   | 77.0 (252.5)     | 343.2 (225.2)   | 207.4 (509.5)   | 192.8 (432.0)   | 2.1 (690.6)

Table A2: We report the mean and (standard deviation) of the selected policy's simulator environment performance across 5 random seeds. MML and MBLB are used as model-selection procedures where they select the best policy for each seed. Our method selects the most near-optimal policy across the datasets.

[Figure A1: Performance profile between the three methods. The x-axis is the normalized score τ (0.0 to 1.0) and the y-axis is the fraction of runs with score > τ, plotted for MBLB, MML, and MOPO.]