diff --git "a/49FKT4oBgHgl3EQfSC3u/content/tmp_files/load_file.txt" "b/49FKT4oBgHgl3EQfSC3u/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/49FKT4oBgHgl3EQfSC3u/content/tmp_files/load_file.txt" @@ -0,0 +1,756 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf,len=755 +page_content='Reinforcement Learning from Diverse Human Preferences Wanqi Xue * 1 Bo An 1 Shuicheng Yan 2 Zhongwen Xu 2 Abstract The complexity of designing reward functions has been a major obstacle to the wide application of deep reinforcement learning (RL) techniques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' Describing an agent’s desired behaviors and prop- erties can be difficult, even for experts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' A new paradigm called reinforcement learning from hu- man preferences (or preference-based RL) has emerged as a promising solution, in which reward functions are learned from human preference la- bels among behavior trajectories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' However, ex- isting methods for preference-based RL are lim- ited by the need for accurate oracle preference labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' This paper addresses this limitation by de- veloping a method for crowd-sourcing preference labels and learning from diverse human prefer- ences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' The key idea is to stabilize reward learning through regularization and correction in a latent space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' To ensure temporal consistency, a strong constraint is imposed on the reward model that forces its latent space to be close to the prior distri- bution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' Additionally, a confidence-based reward model ensembling method is designed to generate more stable and reliable predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' The pro- posed method is tested on a variety of tasks in DMcontrol and Meta-world and has shown con- sistent and significant improvements over existing preference-based RL algorithms when learning from diverse feedback, paving the way for real- world applications of RL methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/49FKT4oBgHgl3EQfSC3u/content/2301.11774v1.pdf'} +page_content=' Introduction Recent advances in reinforcement learning (RL) have achieved remarkable success in simulated environments such as board games (Silver et al.' 
1. Introduction
Recent advances in reinforcement learning (RL) have achieved remarkable success in simulated environments such as board games (Silver et al., 2016; 2018; Moravčík et al., 2017) and video games (Mnih et al., 2015; Vinyals et al., 2019; Wurman et al., 2022). However, the application

*This work was done during an internship at Sea AI Lab, Singapore. 1Nanyang Technological University, Singapore. 2Sea AI Lab, Singapore. Correspondence to: Wanqi Xue