diff --git "a/DtFKT4oBgHgl3EQfZS4v/content/tmp_files/load_file.txt" "b/DtFKT4oBgHgl3EQfZS4v/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/DtFKT4oBgHgl3EQfZS4v/content/tmp_files/load_file.txt" @@ -0,0 +1,600 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf,len=599 +page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content='11802v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content='LG] 27 Jan 2023 Decentralized Online Bandit Optimization on Directed Graphs with Regret Bounds Johan ¨Ostman 1 Ather Gattami 1 Daniel Gillblad 1 Abstract We consider a decentralized multiplayer game, played over T rounds, with a leader-follower hi- erarchy described by a directed acyclic graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' For each round, the graph structure dictates the order of the players and how players observe the actions of one another.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' By the end of each round, all players receive a joint bandit-reward based on their joint action that is used to update the player strategies towards the goal of minimiz- ing the joint pseudo-regret.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' We present a learn- ing algorithm inspired by the single-player multi- armed bandit problem and show that it achieves sub-linear joint pseudo-regret in the number of rounds for both adversarial and stochastic ban- dit rewards.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' Furthermore, we quantify the cost incurred due to the decentralized nature of our problem compared to the centralized setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' Introduction Decentralized multi-agent online learning concerns agents that, simultaneously, learn to behave over time in order to achieve their goals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' Compared to the single-agent setup, novel challenges are present as agents may not share the same objectives, the environment becomes non- stationary, and information asymmetry may exist between agents (Yang & Wang, 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' Traditionally, the multi- agent problem has been addressed by either relying on a central controller to coordinate the agents’ actions or to let the agents learn independently.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFKT4oBgHgl3EQfZS4v/content/2301.11802v1.pdf'} +page_content=' However, access to a central controller may not be realistic and indepen- dent learning suffers from convergence issues (Zhang et al.' 
1. Introduction

Decentralized multi-agent online learning concerns agents that simultaneously learn to behave over time in order to achieve their goals. Compared to the single-agent setup, novel challenges are present: agents may not share the same objectives, the environment becomes non-stationary, and information asymmetry may exist between agents (Yang & Wang, 2020). Traditionally, the multi-agent problem has been addressed either by relying on a central controller to coordinate the agents' actions or by letting the agents learn independently. However, access to a central controller may not be realistic, and independent learning suffers from convergence issues (Zhang et al., 2019). To circumvent these issues, a common approach is to drop the central coordinator and allow information exchange between agents (Zhang et al., 2018; 2019; Cesa-Bianchi et al., 2021).

Decision-making that involves multiple agents is often

1 AI Sweden, Gothenburg, Sweden. Correspondence to: Johan Östman