diff --git "a/5NA0T4oBgHgl3EQfNv96/content/tmp_files/load_file.txt" "b/5NA0T4oBgHgl3EQfNv96/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/5NA0T4oBgHgl3EQfNv96/content/tmp_files/load_file.txt" @@ -0,0 +1,829 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf,len=828 +page_content='Beyond spectral gap (extended): The role of the topology in decentralized learning Thijs Vogels* thijs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='vogels@epfl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='ch Hadrien Hendrikx* hadrien.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='hendrikx@epfl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='ch Martin Jaggi martin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='jaggi@epfl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='ch Machine Learning and Optimization Laboratory EPFL Lausanne, Switzerland Abstract In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In the decentralized setting, in which workers communicate over a sparse graph, current theory fails to capture important aspects of real-world behavior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' First, the ‘spectral gap’ of the communication graph is not predictive of its empirical performance in (deep) learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Second, current theory does not explain that collaboration enables larger learning rates than training alone.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In fact, it prescribes smaller learning rates, which further decrease as graphs become larger, failing to explain convergence dynamics in infinite graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' This paper aims to paint an accurate picture of sparsely-connected distributed optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' We quantify how the graph topology influences convergence in a quadratic toy problem and provide theoretical results for general smooth and (strongly) convex objectives.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies.' 
This paper is an extension of the conference paper by Vogels et al. (2022). Code: github.com/epfml/topology-in-decentralized-learning.

Keywords: Decentralized Learning, Convex Optimization, Stochastic Gradient Descent, Gossip Algorithms, Spectral Gap

1. Introduction

Distributed data-parallel optimization algorithms help us tackle the increasing complexity of machine learning models and of the data on which they are trained. We can classify those training algorithms as either centralized or decentralized, and we often consider those settings to have different benefits over training 'alone'. In the centralized setting, workers compute gradients on independent mini-batches of data, and they average those gradients between all workers. The resulting lower variance in the updates enables larger learning rates and faster training. In the decentralized setting, workers average their models with only a sparse set of 'neighbors' in a graph instead of all-to-all, and they may have private datasets sampled from different distributions. The benefit of decentralized learning is then usually framed as (indirect) access to other workers' datasets, rather than as faster training.
Homogeneous (i.i.d.) setting. While decentralized learning is typically studied with heterogeneous datasets across workers, sparse (decentralized) averaging between workers is also useful when their data is identically distributed (i.i.d.) (Lu and Sa, 2021). As an example, sparse averaging is used in data centers to mitigate communication bottlenecks (Assran et al., 2019). When the data is i.i.d. (or heterogeneity is mild), the goal of sparse averaging is to optimize faster, just like in centralized (all-to-all) graphs. Yet, current decentralized learning theory poorly explains this speed-up. Analyses typically show that, for small enough learning rates, training with sparse averaging behaves the same as with all-to-all averaging (Lian et al., 2017; Koloskova et al., 2020), and so it reduces the gradient variance by the number of workers compared to training alone with the same small learning rate. In practice, however, such small learning rates would never be used. In fact, a reduction in variance should allow us to use a larger learning rate than training alone, rather than imposing a smaller one. Contrary to current theory, we show that (sparse) averaging lowers the variance throughout all phases of training (both initially and asymptotically), allowing workers to take higher learning rates, which directly speeds up convergence. We characterize how much averaging with various communication graphs reduces the variance, and show that centralized performance (variance divided by the number of workers) is not always achieved when using optimal large learning rates. The behavior we explain is illustrated in Figure 1.

©2022 Thijs Vogels, Hadrien Hendrikx and Martin Jaggi. *: Equal contribution. Preprint. Under review. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. arXiv:2301.02151v1 [cs.LG] 5 Jan 2023.

[Figure 1. Axes: steps until loss < 0.01 (y) vs. learning rate (x). Left panel legend: fully connected, ring, alone (disconnected); annotation: 'Current theory uses lower learning rates, but decentralized averaging enables higher learning rates.' Right panel legend: 1-ring (spectral gap 1), 2-ring (spectral gap 1), 4-ring (spectral gap 0.67), 8-ring (s.g. 0.20), ∞-ring (s.g. 0); annotation: 'Instead of a speedup, current theory predicts a slowdown with ring size.']
Figure 1: 'Time to target' for D-SGD (Lian et al., 2017) with constant learning rates on an i.i.d. isotropic quadratic dataset (Section 3.1). The noise disappears at the optimum. Compared to optimizing alone, 32 workers in a ring (left) are faster for any learning rate, but the largest improvement comes from being able to use a large learning rate. This benefit is not captured by current theory, which prescribes a smaller learning rate than training alone. On the right, we see that rings of increasing size enable larger learning rates and faster optimization. Because a ring's spectral gap goes to zero with the size of the ring, this cannot be explained by current theory.
Heterogeneous (non-i.i.d.) setting. In standard analyses, heterogeneity affects convergence in a very worst-case manner. Standard guarantees intuitively correspond to the pessimistic case in which the most distant workers have the most different functions. These guarantees are typically loose in settings where workers have different finite datasets sampled i.i.d. from the same distribution, or where each worker has a lot of diversity among its close neighbors. In this work, we characterize the impact of heterogeneity together with the communication graph, enabling non-trivial guarantees even for infinite graphs under non-adversarial heterogeneity patterns.

Spectral gap. In both the homogeneous and heterogeneous settings, the graph topology appears in current convergence rates through the spectral gap of its averaging (gossip) matrix. The spectral gap poses a conservative lower bound on how much one averaging step brings all workers' models closer together. The larger, the better. If the spectral gap is small, a significantly smaller learning rate is required to make the algorithm behave close to SGD with all-to-all averaging with the same learning rate. Unfortunately, we experimentally observe that, both in deep learning and in convex optimization, the spectral gap of the communication graph is not predictive of its performance under tuned learning rates.
The problem with the spectral gap quantity is clearly illustrated by a simple example. Let the communication graph be a ring of varying size. As the size of the ring increases to infinity, its spectral gap goes to zero, since it becomes harder and harder to achieve consensus between all the workers. As a result, the optimization progress predicted by current theory goes to zero as well. In some cases, when the workers' objectives are adversarially heterogeneous in a way that requires workers to obtain information from all others, this is indeed what happens. In typical cases, however, this view is overly pessimistic. In particular, it does not match the empirical behavior with i.i.d. data. With i.i.d. data, as the size of the ring increases, the convergence rate actually improves (Figure 1), until it saturates at a point that depends on the problem.

In this work, we aim to accurately describe the behavior of distributed learning algorithms with sparse averaging, both in theory and in practice. We aim to do so both in the high learning rate regime, which was previously studied in the conference version of this paper (Vogels et al., 2022),
as well as in the small learning rate regime, in which we characterize the interplay between topology and data heterogeneity, as well as stochastic noise. We quantify the role of the graph in a quadratic toy problem designed to mimic the initial phase of deep learning (Section 3.1), showing that averaging enables a larger learning rate. From these insights, we derive a problem-independent notion of 'effective number of neighbors' in a graph that is consistent with time-varying topologies and infinite graphs, and is predictive of a graph's empirical performance in both convex and deep learning. We provide convergence proofs for (strongly) convex objectives that do not depend on the spectral gap of the graph (Section 4), and consider finer spectral quantities instead. Our rates disentangle the homogeneous and heterogeneous settings, and highlight that all problems behave as if they were homogeneous when the iterates are far from the optimum. At its core, our analysis does not enforce global consensus, but only consensus between workers that are close to each other in the graph. Our theory shows that sparse averaging provably enables larger learning rates and thus speeds up optimization. These insights prove to be relevant in deep learning, where we accurately describe the performance of a variety of topologies, while their spectral gap does not (Section 5).

2. Related work

Decentralized SGD. This paper studies decentralized SGD. Koloskova et al. (2020) obtain the tightest bounds for this algorithm in the general setting where workers optimize heterogeneous objectives.
They show that gossip averaging reduces the asymptotic variance suffered by the algorithm, at the cost of a degradation (depending on the spectral gap of the gossip matrix) of the initial linear convergence term. This key term does not improve through collaboration and gives rise to a smaller learning rate than training alone. Besides, as discussed above, this implies that optimization is not possible in the limit of large graphs, even in the absence of heterogeneity: for instance, the spectral gap of an infinite ring is zero, which would lead to a learning rate of zero as well. These rates suggest that decentralized averaging speeds up the last part of training (dominated by variance), at the cost of slowing down the initial (linear convergence) phase. Beyond the work of Koloskova et al. (2020), many papers focus on linear speedup (in the variance phase) over optimizing alone, and prove similar results in a variety of settings (Lian et al., 2017; Tang et al., 2018; Lian et al., 2018). All these results rely on the following insight: while linear speedup is only achieved for small learning rates, SGD eventually requires such small learning rates anyway (because of, e.g., stochastic noise or non-smoothness). This observation leads these works to argue that "topology does not matter". This is indeed the case, but only for very small learning rates, as shown in Figure 1.
Besides, while linear speedup might indeed be achievable for very small learning rates, some level of variance reduction should be obtained by averaging for any learning rate. In practice, averaging speeds up both the initial and the last part of training, possibly in a non-linear way. This is what we show in this work, both in theory and in practice. Another line of work studies decentralized SGD under statistical assumptions on the local data. In particular, Richards and Rebeschini (2020) show favorable properties for D-SGD with graph-dependent implicit regularization and attain optimal statistical rates. Their suggested learning rate does depend on the spectral gap of the communication network, and it goes to zero when the spectral gap shrinks. Richards and Rebeschini (2019) also show that larger (constant) learning rates can be used in decentralized GD, but their analysis focuses on decentralized kernel regression. Their analysis relies on statistical concentration of local objectives, whereas the analysis in this paper relies on the notion of local neighborhoods.

Gossiping in infinite graphs. An important feature of our results is that they do not depend on the spectral gap, and so they apply independently of the size of the graph. Instead, our results rely on new quantities that involve a combination of the graph topology and the heterogeneity pattern. These may depend on the spectral gap in extreme cases, but are much better in general.
Berthier et al. (2020) study acceleration of gossip averaging in infinite graphs, and obtain the same conclusions as we do: although the spectral gap is useful for asymptotics (how long information takes to spread in the whole graph), it fails to accurately describe the transient regime of gossip averaging, i.e., how quickly information spreads over local neighborhoods in the first few gossip rounds. This is especially limiting for optimization (compared to just averaging), as new local updates need to be averaged at every step. The averaging of the latest gradient updates always starts in the transient regime, implying that the transient regime of gossip averaging deeply affects the asymptotic regime of decentralized SGD. In this work, we build on tools from Berthier et al. (2020) to show how the effective number of neighbors, a key quantity we introduce, is related to the graph's spectral dimension.

The impact of the graph topology. Lian et al. (2017) argue that the topology of the graph does not matter. This is only true for asymptotic rates in specific settings, as illustrated in Figure 1. Neglia et al. (2020) investigate the impact of the graph on decentralized optimization, and contradict this claim. Similarly to us, they show that the graph has an impact in the early phases of training. In their analysis of the heterogeneous setting, the dependence is on how gradient heterogeneity spans the eigenspace of the Laplacian.
Their assumptions, however, differ from ours, and they retain an unavoidable dependence on the spectral gap of the graph. Our results are different in nature: they show the benefits of averaging and the impact of the graph through the choice of large learning rates, and through a better dependence on the noise and the heterogeneity for a given learning rate. Even et al. (2021) also consider the impact of the graph on decentralized learning. They focus on non-worst-case dependence on heterogeneous delays, and still obtain spectral-gap-like quantities, but on a reweighted gossip matrix. Another line of work studies the interaction of topology with particular patterns of data heterogeneity (Le Bars et al., 2022; Dandi et al., 2022), and how to optimize graphs with this heterogeneity in mind. Our analysis highlights the role of heterogeneity through a different quantity than these works, which we believe is tight. Besides, both works either try to reduce this heterogeneity all along the trajectory, or optimize for both the spectral gap of the graph and the heterogeneity term. Instead, we show that heterogeneity changes the fixed point of the algorithm but not the global dynamics.

Time-varying topologies. Time-varying topologies are popular for decentralized deep learning in data centers due to their strong mixing (Assran et al., 2019; Wang et al., 2019).
The benefit of varying the communication topology over time is not easily explained through standard theory, but requires dedicated analysis (Ying et al., 2021). While our proofs only cover static topologies, the quantities that appear in our analysis can be computed for time-varying schemes too. With these quantities, we can empirically study static and time-varying schemes in the same framework.

Conference version. This paper is an extension of Vogels et al. (2022), which focused on the homogeneous setting where all workers share the same global optimum. In this extension, we introduce a simpler analysis that strictly improves and generalizes the previous one, extending the results to the important heterogeneous setting. In the conference version, it remained unclear whether larger learning rates could only be achieved thanks to homogeneity. We also connect the quantities we introduce to the spectral dimension of a graph, and use this connection to derive explicit formulas for the optimal learning rates based on the spectral dimension. This allows us to accurately compare with previous bounds (for instance, Koloskova et al. (2020)) and show that we improve on them in all settings.

3. Measuring collaboration in decentralized learning

Both this paper's analysis of decentralized SGD for general convex objectives and its deep learning experiments revolve around a notion of 'effective number of neighbors' that we introduce in Section 3.2.
The aim of this section is to motivate this quantity based on a simple toy model for which we can exactly characterize the convergence (Section 3.1). We then connect this quantity to typical graph metrics such as the spectral gap and the spectral dimension in Section 3.3.

3.1 A toy problem: D-SGD on isotropic random quadratics

The aim of this section is to provide intuition while avoiding the complexities of general analysis. To keep this section light, we omit any derivations. The appendix of Vogels et al. (2022) contains a longer version of this section that includes derivations and proofs. We consider n workers that jointly optimize an isotropic quadratic $\mathbb{E}_{d \sim \mathcal{N}^d(0,1)}\, \tfrac{1}{2}(d^\top x)^2 = \tfrac{1}{2}\|x\|^2$, with a unique global minimum x⋆ = 0. The workers access the quadratic through stochastic gradients of the form g(x) = d d⊤ x, with d ∼ N^d(0, 1). This corresponds to a linear model with infinite data, where the model can fit the data perfectly, so that the stochastic noise goes to zero close to the optimum. We empirically find that this simple model is a meaningful proxy for the initial phase of (over-parameterized) deep learning (Section 5). A benefit of this model is that we can compute exact rates for it. These rates illustrate the behavior that we capture more generally in the theory of Section 4.
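To make the toy model concrete, the following is a minimal NumPy sketch (our own illustration, with hypothetical helper names, not code from the paper) of the stochastic gradient oracle g(x) = d d⊤ x, together with a quick Monte-Carlo check that it is unbiased: since E[d d⊤] = I, we have E[g(x)] = x.

```python
import numpy as np

def toy_gradient(x, rng):
    """Stochastic gradient g(x) = d d^T x of the isotropic quadratic,
    with d drawn from a standard normal distribution."""
    d = rng.standard_normal(x.shape)
    return d * (d @ x)

rng = np.random.default_rng(0)
dim = 20
x = rng.standard_normal(dim)

# Monte-Carlo check of unbiasedness: the sample mean of g(x) should be close to x.
grads = np.stack([toy_gradient(x, rng) for _ in range(200_000)])
print(np.allclose(grads.mean(axis=0), x, atol=0.05))  # True, up to Monte-Carlo error
```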
The stochasticity in this toy problem can be quantified by the noise level

\[ \zeta = \sup_{x \in \mathbb{R}^d} \frac{\mathbb{E}_d\,\|g(x)\|^2}{\|x\|^2} = \sup_{x \in \mathbb{R}^d} \frac{\mathbb{E}_d\,\|d d^\top x\|^2}{\|x\|^2}, \tag{1} \]

which is equal to ζ = d + 2, due to the random normal distribution of d. The workers run the D-SGD algorithm (Lian et al., 2017). Each worker i has its own copy x_i ∈ R^d of the model, and they alternate between local model updates x_i ← x_i − η g(x_i) and averaging their models with others: x_i ← Σ_{j=1}^n w_{ij} x_j. The averaging weights w_{ij} are summarized in the gossip matrix W ∈ R^{n×n}. A non-zero weight w_{ij} indicates that i and j are directly connected. In the following, we assume that W is symmetric and doubly stochastic: Σ_{j=1}^n w_{ij} = 1 for all i.

On our objective, D-SGD either converges or diverges linearly. Whenever it converges, i.e., when the learning rate is small enough, there is a convergence rate r such that $\mathbb{E}\|x_i^{(t)}\|^2 \le (1-r)\,\|x_i^{(t-1)}\|^2$, with equality as t → ∞. When the workers train alone (W = I), the convergence rate for a given learning rate η reads

\[ r_{\text{alone}} = 1 - (1-\eta)^2 - (\zeta - 1)\eta^2. \tag{2} \]

The optimal learning rate η⋆ = 1/ζ balances the optimization term (1 − η)² and the stochastic term (ζ − 1)η². In the centralized (fully connected) setting (w_{ij} = 1/n for all i, j), the rate is simple as well:

\[ r_{\text{centralized}} = 1 - (1-\eta)^2 - \frac{(\zeta - 1)\eta^2}{n}. \tag{3} \]

Averaging between n workers reduces the impact of the gradient noise, and the optimal learning rate grows to η⋆ = n / (n + ζ − 1).
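These rates can be checked numerically. Below is a minimal, self-contained sketch (again our own illustration, not code from the paper; the learning-rate grid and the target are arbitrary choices) of D-SGD on this toy problem: each worker takes a local stochastic gradient step and then averages its model with its neighbors according to a gossip matrix W. Sweeping the learning rate for W = I (alone), a ring, and the fully connected matrix should qualitatively reproduce the behavior of Figure 1: collaboration helps most by allowing a larger learning rate.

```python
import numpy as np

def ring_gossip_matrix(n):
    """Symmetric, doubly stochastic gossip matrix of a ring: every worker
    averages uniformly with itself and its two direct neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

def steps_to_target(W, lr, dim=50, target=0.01, max_steps=10_000, seed=0):
    """D-SGD on the isotropic quadratic: local step x_i <- x_i - lr * g(x_i),
    then gossip averaging x_i <- sum_j w_ij x_j. Returns the number of steps
    until the mean squared norm of the iterates drops below `target`."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    X = np.ones((n, dim))                               # one model (row) per worker
    for step in range(1, max_steps + 1):
        D = rng.standard_normal((n, dim))               # one sample d per worker
        G = D * np.sum(D * X, axis=1, keepdims=True)    # g(x_i) = d_i d_i^T x_i
        X = W @ (X - lr * G)                            # local update, then averaging
        norm = np.mean(np.sum(X**2, axis=1))
        if norm < target:
            return step
        if not np.isfinite(norm) or norm > 1e12:        # diverged for this learning rate
            break
    return max_steps

n = 32
topologies = {"alone": np.eye(n),
              "ring": ring_gossip_matrix(n),
              "fully connected": np.full((n, n), 1 / n)}
for name, W in topologies.items():
    steps, lr = min((steps_to_target(W, lr), lr)
                    for lr in [0.005, 0.01, 0.02, 0.05, 0.1, 0.2])
    print(f"{name:>16}: {steps:5d} steps at learning rate {lr}")
```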
We find that D-SGD with a general gossip matrix W interpolates those results.

3.2 The effective number of neighbors

To quantify the reduction of the (ζ − 1)η² term in general, we introduce the problem-independent notion of the effective number of neighbors n_W(γ) of the gossip matrix W with decay parameter γ.

Definition 1 (Effective number of neighbors) The effective number of neighbors $n_W(\gamma) = \lim_{t \to \infty} \frac{\sum_{i=1}^n \mathrm{Var}[y_i^{(t)}]}{\sum_{i=1}^n \mathrm{Var}[z_i^{(t)}]}$ measures the ratio of the asymptotic variances of the processes

\[ y^{(t+1)} = \sqrt{\gamma}\, y^{(t)} + \xi^{(t)}, \quad \text{where } y^{(t)} \in \mathbb{R}^n \text{ and } \xi^{(t)} \sim \mathcal{N}^n(0, 1), \tag{4} \]

and

\[ z^{(t+1)} = W\big(\sqrt{\gamma}\, z^{(t)} + \xi^{(t)}\big), \quad \text{where } z^{(t)} \in \mathbb{R}^n \text{ and } \xi^{(t)} \sim \mathcal{N}^n(0, 1). \tag{5} \]

We call y and z random walks because workers repeatedly add noise to their state, somewhat like SGD's parameter updates. This should not be confused with a 'random walk' over the nodes of the graph. Since averaging with W decreases the variance of the random walk by at most n, the effective number of neighbors is a number between 1 and n. The decay γ modulates the sensitivity to communication delays. If γ = 0, workers only benefit from averaging with their direct neighbors. As γ increases, multi-hop connections play an increasingly important role. As γ approaches 1, delayed and undelayed noise contributions become equally weighted, and the reduction tends to n for any connected topology.

Proposition 2 For regular doubly stochastic symmetric gossip matrices W with eigenvalues λ_1, …, λ_n, n_W(γ) has the closed-form expression

\[ n_W(\gamma) = \frac{\frac{1}{1-\gamma}}{\frac{1}{n}\sum_{i=1}^{n} \frac{\lambda_i^2}{1 - \gamma \lambda_i^2}}. \tag{6} \]
This follows from unrolling the recursions for y and z, using the eigendecomposition of W, and the limit $\lim_{t \to \infty} \sum_{k=1}^t x^k = \frac{x}{1-x}$. While this closed-form expression only covers a restricted set of gossip matrices, the notion of variance reduction in random walks naturally extends to infinite topologies and time-varying averaging schemes. Figure 2 illustrates n_W for various topologies.

In our exact characterization of the convergence of D-SGD on the isotropic quadratic toy problem, we find that the effective number of neighbors appears in place of the number of workers n in the fully-connected rate of Equation (3). The rate r is the unique solution to

\[ r = 1 - (1-\eta)^2 - \frac{(\zeta - 1)\eta^2}{n_W\!\left(\frac{(1-\eta)^2}{1-r}\right)}. \tag{7} \]

For fully-connected and disconnected W, n_W(γ) = n or 1 respectively, irrespective of γ, and Equation (7) recovers Equations (2) and (3). For other graphs, the effective number of workers depends on the learning rate. Current theory only considers the case where n_W ≈ n, but the small learning rates this requires can make the term (1 − η)² too large, defeating the purpose of collaboration.

[Figure 2. Axes: effective number of neighbors (variance reduction in a 'random walk'), from 1 to 32, vs. the decay γ of the 'random walk' (think "lower learning rate" or "iterates moving slower"), from 0 to 0.9999. Curves: fully connected, two cliques, time-varying exponential, ring, alone (disconnected).]
Figure 2: The effective number of neighbors for several topologies, measured by their variance reduction in (5). The point γ on the x-axis that matters depends on the learning rate and the task. Which topology is 'best' varies from problem to problem. For large decay rates γ (corresponding to small learning rates), all connected topologies achieve a variance reduction close to that of a fully connected graph. For small decay rates (large learning rates), workers only benefit from their direct neighbors (e.g. 3 in a ring). These curves can be computed explicitly for constant topologies, and simulated efficiently for the time-varying exponential scheme (Assran et al., 2019).
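As an illustration (our own sketch, not code from the paper), the closed form of Proposition 2 can be evaluated directly from the eigenvalues of W and checked against a Monte-Carlo simulation of the two random walks in Definition 1; sweeping γ for different gossip matrices produces curves like those in Figure 2. The example below reuses the hypothetical ring_gossip_matrix helper from the earlier sketch.

```python
import numpy as np

def effective_neighbors_closed_form(W, gamma):
    """n_W(gamma) from Proposition 2 (Equation 6), using the eigenvalues of W."""
    eigs = np.linalg.eigvalsh(W)
    return (1 / (1 - gamma)) / np.mean(eigs**2 / (1 - gamma * eigs**2))

def effective_neighbors_monte_carlo(W, gamma, steps=4000, repeats=400, seed=0):
    """Estimate n_W(gamma) as the ratio of the asymptotic variances of the
    'random walks' y and z from Definition 1 (Equations 4 and 5)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    y = np.zeros((repeats, n))
    z = np.zeros((repeats, n))
    for _ in range(steps):
        xi = rng.standard_normal((repeats, n))
        y = np.sqrt(gamma) * y + xi              # Equation (4)
        z = (np.sqrt(gamma) * z + xi) @ W.T      # Equation (5)
    return y.var(axis=0).sum() / z.var(axis=0).sum()

# Example: a 32-worker ring. The two estimates should agree up to Monte-Carlo error;
# for gamma = 0, a ring gives roughly 3 effective neighbors (itself and its two neighbors).
W = ring_gossip_matrix(32)
for gamma in [0.0, 0.9, 0.99]:
    print(gamma,
          round(effective_neighbors_closed_form(W, gamma), 2),
          round(effective_neighbors_monte_carlo(W, gamma), 2))
```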
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' For large decay rates γ (corresponding small learning rates), all connected topologies achieve variance reduction close to a fully connected graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' For small decay rates (large learning rates), workers only benefit from their direct neighbors (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' 3 in a ring).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' These curves can be computed explicitly for constant topologies, and simulated efficiently for the time-varying exponential scheme (Assran et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=', 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' the small learning rates this requires can make the term (1 − η)2 too large, defeating the purpose of collaboration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Beyond this toy problem, we find that the proposed notion of effective number of neighbors is also meaningful in the analysis of general objectives (Section 4) and in deep learning (Section 5).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='3 Links between the effective number of neighbors and other graph quantities In general, the effective number of neighbors function nW(γ) cannot be summarized by a single scalar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Figure 2 demonstrates that the behavior of this function varies from graph to graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' We can, however, bound the effective number of neighbors by known graph quantities such as its spectral gap or spectral dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' We aim to create bounds for both finite and infinite graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' To allow for this, we introduce a generalization of Proposition 2 as an integral over the spectral measure dσ of the gossip matrix, instead of a sum over its eigenvalues: nW(γ)−1 = (1 − γ) � 1 0 λ2 1 − γλ2 dσ(λ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' (8) For finite graphs, dσ is a sum of Dirac deltas of mass 1 n at each eigenvalue of matrix W, recovering Equation (6).' 
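Because (8) reduces to a finite sum over eigenvalues for finite graphs, curves like those of Figure 2 are cheap to compute for any constant topology. The sketch below is illustrative only; it assumes NumPy, and the specific gossip weights (uniform fully-connected averaging, a lazy ring, a uniformly weighted 5-dimensional hypercube) are our own choices.

```python
import numpy as np

def n_effective(W, gamma):
    # Equations (6)/(8): discrete spectral measure with mass 1/n per eigenvalue.
    lam = np.linalg.eigvalsh(W)
    return (1.0 / (1.0 - gamma)) / np.mean(lam**2 / (1.0 - gamma * lam**2))

n = 32
topologies = {}
topologies["fully connected"] = np.full((n, n), 1.0 / n)
topologies["disconnected"] = np.eye(n)

ring = 0.5 * np.eye(n)
for i in range(n):
    ring[i, (i - 1) % n] += 0.25
    ring[i, (i + 1) % n] += 0.25
topologies["ring"] = ring

cube = np.eye(n)
for bit in range(5):                     # 5-dimensional hypercube on 32 nodes
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i ^ (1 << bit)] = 1.0
    cube = cube + P
topologies["hypercube"] = cube / 6.0     # uniform weight over self and the 5 neighbours

for name, W in topologies.items():
    values = [round(n_effective(W, g), 2) for g in (0.0, 0.9, 0.99, 0.999)]
    print(name, values)
```

As expected, the fully-connected and disconnected matrices stay at n and 1 for every γ, while the ring and hypercube interpolate between those extremes as γ grows.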
3.3.1 Upper and lower bounds

We can use the fact that all eigenvalues satisfy λ ≤ 1, leading to:

    n_W(γ)^{-1} ≤ (1 − γ) ∫_0^1 1/(1 − γ) dσ(λ) = 1.    (9)

This lower bound on the 'effective number of neighbors' corresponds to a disconnected graph. On the other hand, for finite graphs, we can use the fact that σ(λ) consists of a series of n Diracs. The peak at λ = 1, corresponding to the fully-averaged state, has mass 1/n, while the other peaks have mass ≥ 0. Using this, we obtain

    n_W(γ)^{-1} ≥ [(1 − γ)/(1 − γ)] · (1/n) = 1/n.    (10)

This upper bound on the 'effective number of neighbors' is tight for a fully-connected graph.

3.3.2 Bounding by spectral gap

If the graph has a spectral gap α, this means that σ(λ) contains a Dirac delta with mass 1/n at λ = 1, corresponding to the fully-averaged state. The rest of σ(λ) has mass (n − 1)/n and is contained in the subdomain λ ∈ [0, 1 − α]. In this setting, we obtain

    n_W(γ)^{-1} ≤ 1/n + [(n − 1)/n] · (1 − γ)(1 − α)² / (1 − γ(1 − α)²).    (11)

This lower bound on the 'effective number of neighbors' is typically pessimistic, but it is tight for the finite gossip matrix W = (1 − α)I + (α/n)11⊤.
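To see how loose (11) can be, one can compare it with the exact value for a concrete graph. A small numerical sketch, illustrative only (it assumes NumPy and a 32-worker ring gossip matrix of our own choosing):

```python
import numpy as np

n = 32
W = 0.5 * np.eye(n)                      # ring gossip: 1/2 self, 1/4 per neighbour
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25
lam = np.linalg.eigvalsh(W)
alpha = 1.0 - np.sort(lam)[-2]           # spectral gap alpha = 1 - lambda_2(W)

def n_exact(gamma):
    # Closed form (6).
    return (1.0 / (1.0 - gamma)) / np.mean(lam**2 / (1.0 - gamma * lam**2))

def n_gap_bound(gamma):
    # Lower bound on n_W(gamma) implied by (11).
    inv = 1.0 / n + (n - 1) / n * (1.0 - gamma) * (1.0 - alpha) ** 2 / (1.0 - gamma * (1.0 - alpha) ** 2)
    return 1.0 / inv

for gamma in [0.9, 0.99, 0.999]:
    print(gamma, round(n_exact(gamma), 2), round(n_gap_bound(gamma), 2))
```

For moderate γ, the exact effective number of neighbors of the ring is much larger than what the spectral-gap bound guarantees.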
3.3.3 Bounding by spectral dimension

Next, we link the notion of 'effective number of neighbors' to the spectral dimension d_s of the graph (Berthier, 2021, e.g. Definition 1.9), which controls the decay of eigenvalues near 1. This notion is usually linked with the spectral measure of the Laplacian of the graph. However, to avoid introducing too many graph-related quantities, we define the spectral dimension with respect to the gossip matrix W; standard definitions using the Laplacian L_W = I − W are equivalent. In the remainder of this paper, the 'graph' will always refer to the communication graph implicitly induced by W, with Laplacian L_W.

Definition 3 (Spectral Dimension) A gossip matrix has spectral dimension at least d_s if there exists c_s > 0 such that for all λ ∈ [0, 1], the density of its eigenvalues is bounded by

    σ((λ, 1)) ≤ c_s^{-1} (1 − λ)^{d_s/2}.    (12)

The notation σ((λ, 1)) here refers to the mass of the open interval (λ, 1), i.e., the integral ∫_λ^1 dσ(l). The spectral dimension of a graph has a natural geometric interpretation. For instance, the line (or the ring) is of spectral dimension d_s = 1, whereas 2-dimensional grids are of spectral dimension 2. More generally, a d-dimensional torus is of spectral dimension d. Besides, the spectral dimension describes macroscopic topological features and is robust to microscopic changes. For instance, random geometric graphs are of spectral dimension 2. Note that since finite graphs have a spectral gap, σ((λ_2(W), 1)) = 0, and so finite graphs verify (12) for any spectral dimension d_s.
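This geometric interpretation can be probed empirically by counting eigenvalues near 1. The sketch below is illustrative only; it assumes NumPy, and the ring and torus gossip matrices are our own constructions. An approximately constant ratio σ((λ, 1)) / (1 − λ)^{d_s/2} as λ → 1 is consistent with Definition 3.

```python
import numpy as np

def ring_gossip(n):
    W = 0.5 * np.eye(n)
    for i in range(n):
        W[i, (i - 1) % n] += 0.25
        W[i, (i + 1) % n] += 0.25
    return W

def torus_gossip(side):
    # 2-D torus built by averaging a ring gossip step along each axis.
    R = ring_gossip(side)
    I = np.eye(side)
    return 0.5 * (np.kron(R, I) + np.kron(I, R))

def tail_mass(W, lam):
    # Empirical sigma((lam, 1)): fraction of eigenvalues strictly between lam and 1.
    e = np.linalg.eigvalsh(W)
    return np.mean((e > lam) & (e < 1.0 - 1e-9))

for name, W, ds in [("ring, 1024 nodes", ring_gossip(1024), 1),
                    ("torus, 32 x 32", torus_gossip(32), 2)]:
    ratios = [tail_mass(W, lam) / (1.0 - lam) ** (ds / 2) for lam in (0.9, 0.95, 0.99)]
    print(name, [round(r, 2) for r in ratios])  # roughly constant ratios match d_s
```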
However, the notion of spectral dimension is still relevant for finite graphs, since the constant c_s blows up when d_s is larger than the actual spectral dimension of an infinite graph with similar topology. Alternatively, it is sometimes helpful to explicitly take the spectral gap into account in (12), as in Berthier et al. (2020, Section 6). We now proceed to bounding n_W(γ) using the spectral dimension. Since λ ↦ λ²(1 − γλ²)^{-1} is a non-negative non-decreasing function on [0, 1], we can use Berthier et al. (2020, Lemma C.1) to obtain that:

    n_W(γ)^{-1} ≤ 1/n + c_s^{-1}(1 − γ) ∫_0^1 [λ²/(1 − γλ²)] (1 − λ)^{d_s/2 − 1} dλ.    (13)

The term 1/n comes from the fact that for finite graphs, the density dσ includes a Dirac delta with mass 1/n at eigenvalue 1. This Dirac is not affected by the spectral dimension, and is required for consistency, as it ensures that n_W(γ) ≤ n for any finite graph. To evaluate the integral, we then distinguish three cases.

Case d_s > 2. Since γλ ≤ 1, we have 1 − λ ≤ 1 − γλ². In particular, using λ² ≤ λ on [0, 1] and integration by parts, we get:

    n_W(γ)^{-1} − n^{-1} ≤ c_s^{-1}(1 − γ) ∫_0^1 λ²(1 − γλ²)^{d_s/2 − 2} dλ
                          ≤ −[(1 − γ)c_s^{-1} / (2γ(d_s/2 − 1))] ∫_0^1 −2γλ(d_s/2 − 1)(1 − γλ²)^{d_s/2 − 2} dλ
                          = [(1 − γ)c_s^{-1} / (γ(d_s − 2))] [1 − (1 − γ)^{d_s/2 − 1}].

This leads to a scaling of:

    n_W(γ) ≥ [1/n + (1 − γ)/(γ(d_s − 2)c_s)]^{-1}.    (14)

For large enough n, we obtain the same scaling of (1 − γ)^{-1} as in the previous section, thus indicating that for networks that are well-enough connected (d_s > 2), the spectral dimension only affects the constants, and not the scaling in γ.
Case d_s = 2. When d_s = 2, only the primitive of the integrand changes, leading to:

    n_W(γ) ≥ [1/n − (1 − γ) ln(1 − γ) / (2γc_s)]^{-1}.    (15)

Case d_s < 2. In this case, we start by splitting the integral as:

    (1 − γ) ∫_0^1 λ²(1 − λ)^{d_s/2 − 1}/(1 − γλ²) dλ
      = (1 − γ) ∫_0^γ λ²(1 − λ)^{d_s/2 − 1}/(1 − γλ²) dλ + (1 − γ) ∫_γ^1 λ²(1 − λ)^{d_s/2 − 1}/(1 − γλ²) dλ.

For the first term, note that γλ ≤ 1, so (1 − γλ²)^{-1} ≤ (1 − λ)^{-1}, leading to:

    (1 − γ) ∫_0^γ λ²(1 − λ)^{d_s/2 − 1}/(1 − γλ²) dλ ≤ (1 − γ) ∫_0^γ (1 − λ)^{d_s/2 − 2} dλ
      = [2(1 − γ)/(2 − d_s)] [(1 − γ)^{d_s/2 − 1} − 1] ≤ [2/(2 − d_s)] (1 − γ)^{d_s/2}.

For the second term, note that λ² ≤ 1, so (1 − γλ²)^{-1} ≤ (1 − γ)^{-1}, leading to:

    (1 − γ) ∫_γ^1 λ²(1 − λ)^{d_s/2 − 1}/(1 − γλ²) dλ ≤ ∫_γ^1 (1 − λ)^{d_s/2 − 1} dλ = (2/d_s)(1 − γ)^{d_s/2}.    (16)

In the end, we obtain that n_W(γ)^{-1} − 1/n ≤ (2/c_s)[1/(2 − d_s) + 1/d_s](1 − γ)^{d_s/2}, and so:

    n_W(γ) ≥ [1/n + 4(1 − γ)^{d_s/2}/(d_s(2 − d_s)c_s)]^{-1}.    (17)

In this case, the scaling in γ is impacted by the spectral dimension. Better-connected graphs benefit more from higher γ.

4. Convergence analysis

4.1 Notations and Definitions

In the previous section, we have derived exact rates for a specific function. Now we present convergence rates for general (strongly) convex functions that are consistent with our observations in the previous section. We obtain rates that depend on the level of noise, the hardness of the objective, and the topology of the graph. More formally, we assume that we would like to solve the following problem:

    min_{θ ∈ R^d} Σ_{i=1}^n f_i(θ) = min_{x ∈ R^{nd}, x_i = x_j} Σ_{i=1}^n f_i(x_i).    (18)

In this case, x_i ∈ R^d represents the local variable of node i, and x ∈ R^{nd} the stacked variables of all nodes.
We will assume the following iterations for D-SGD:

    (D-SGD):   x_i(t+1) = Σ_{j=1}^n w_ij x_j(t) − η ∇f_{ξ_i(t)}(x_i(t)),    (19)

where the f_{ξ_i(t)} correspond to sampled data points and the gossip weights w_ij are the entries of W. Denoting L_W = I − W, we rewrite this expression in matrix form as:

    x(t+1) = x(t) − [η ∇F_{ξ(t)}(x(t)) + L_W x(t)],    (20)

where (∇F_{ξ(t)}(x(t)))_i = ∇f_{ξ_i(t)}(x_i(t)). We abuse notation in the sense that W ∈ R^{nd×nd} is now the Kronecker product of the standard n×n gossip matrix and the d×d identity matrix. This definition is a slight departure from the conference version of this work (Vogels et al., 2022), which alternated randomly between gossip steps and gradient updates instead of performing them in turns. The analysis of the randomized setting is still possible, but with heterogeneous objectives the fixed points of D-SGD (19) satisfy x_i ≠ Σ_{j=1}^n w_ij x_j, and randomizing the updates adds undesirable variance. Similarly, it is also possible to analyze the popular variant x(t+1) = W[x(t) − η∇F_{ξ(t)}(x(t))], which locally averages the stochastic gradients before they are applied. Yet, the D-SGD algorithm in (19) allows communications and computations to be performed in parallel, and leads to a simpler analysis.

We analyze this model under the following assumptions, where D_f(x, y) = f(x) − f(y) − ∇f(y)⊤(x − y) denotes the Bregman divergence of f between points x and y.

Assumption 4 The stochastic gradients are such that:
(i) the sampled data points ξ_i(t) and ξ_j(ℓ) are independent across times t, ℓ and nodes i ≠ j;
(ii) stochastic gradients are locally unbiased: E[f_{ξ_i(t)}] = f_i for all t, i;
(iii) the objectives f_{ξ_i(t)} are convex and ζ_ξ-smooth for all t, i, with E[ζ_ξ D_{f_ξ}(x, y)] ≤ ζ D_f(x, y) for all x, y;
(iv) all local objectives f_i are µ-strongly convex for some µ ≥ 0 and L-smooth.
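As an illustration of the update (19), here is a minimal simulation on a synthetic, homogeneous least-squares problem. It is a sketch under our own assumptions (NumPy, a 32-worker ring gossip matrix, and i.i.d. Gaussian data shared across workers), not a reproduction of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta, steps = 32, 10, 0.05, 500

# Ring gossip matrix W (1/2 self-weight, 1/4 per neighbour).
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

# Homogeneous toy objective: every worker samples f_xi(x) = 0.5 (a^T x - b)^2
# with a ~ N(0, I_d) and b = a^T x_star, so all local objectives share the minimizer x_star.
x_star = rng.standard_normal(d)
x = np.zeros((n, d))                       # one parameter vector per worker

for t in range(steps):
    a = rng.standard_normal((n, d))        # independent samples per worker, cf. Assumption 4 (i)
    b = a @ x_star
    residual = (x * a).sum(axis=1) - b
    grads = residual[:, None] * a          # stochastic gradients at the current local iterates
    x = W @ x - eta * grads                # D-SGD update (19): gossip + local gradient step

print("mean distance to x_star:", np.linalg.norm(x - x_star, axis=1).mean())
```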
Large learning rates. The smoothness constant ζ of the stochastic functions f_ξ defines the level of noise in the problem (the lower, the better) in the transient regime. The ratio ζ/L compares the difficulty of optimizing with stochastic gradients to the difficulty with the true global gradient, before reaching the 'variance region' in which the iterates of D-SGD with a constant learning rate lie almost surely as t → ∞. This ratio is thus especially important in interpolating settings where all f_{ξ_i(t)} have the same minimum, so that the 'variance region' is reduced to the optimum x⋆. Assuming better smoothness for the global average objective than for the local functions is key to showing that averaging between workers allows for larger learning rates. Without communication, convergence to the 'variance region' is ensured for learning rates η ≤ 1/ζ. If ζ ≈ L, there is little noise and cooperation only helps to reduce the final variance, and to get closer to the global minimum (instead of just your own). Yet, in noisy regimes (ζ ≫ L), such as in Section 3.1 in which ζ = d + 2 ≫ 1 = L, averaging enables larger learning rates up to min(1/L, n/ζ), greatly speeding up the initial training phase. This is precisely what we will prove in Theorem 6. If the workers always remain close (x_i ≈ (1/n)(x_1 + ... + x_n) for all i, or equivalently (1/n)11⊤x ≈ x), D-SGD behaves the same as SGD on the average parameter (1/n) Σ_{i=1}^n x_i, and the learning rate depends on max(ζ/n, L), showing a reduction of variance by n.
Maintaining "(1/n)11⊤x ≈ x", however, requires a small learning rate. This is a common starting point for the analysis of D-SGD, in particular for the proofs in Koloskova et al. (2020). On the other extreme, if we do not assume closeness between workers, "Ix ≈ x" always holds. In this case, there is no variance reduction, but no requirement for a small learning rate either. In Section 3.1, we found that, at the optimal learning rate, workers are not close to all other workers, but they are close to others that are not too far away in the graph. We capture this concept of 'local closeness' by defining a neighborhood matrix M ∈ R^{n×n}. It allows us to consider semi-local averaging beyond direct neighbors, but without fully averaging over the whole graph. We ensure that "Mx ≈ x", leading to an improvement in the smoothness somewhere between ζ (achieved alone) and ζ/n (achieved when global consensus is maintained). Each neighborhood matrix M implies a requirement on the learning rate, as well as an improvement in smoothness. While we can conduct our analysis with any M, those matrices that strike a good balance between the learning rate requirement and the improved smoothness are most interesting. Based on Section 3.1, we therefore focus on a specific construction: we choose M as the covariance of a decay-γ 'random walk process' over the graph, as in (5), meaning that

    M = (1 − γ) Σ_{k=1}^∞ γ^{k−1} W^{2k} = (1 − γ) W² (I − γW²)^{-1}.    (21)
Varying γ induces a spectrum of averaging neighborhoods, from M = W² (γ = 0) to M = (1/n)11⊤ (γ → 1). γ also implies an effective number of neighbors n_W(γ): the larger γ, the larger n_W(γ). We make the following assumption on the neighborhood matrix M:

Assumption 5 The neighborhood matrix M is of the form (21), and all its diagonal elements have the same value, i.e., M_ii = M_jj for all i, j.

Assumption 5 implies that M_ii^{-1} = n_W(γ): the effective number of neighbors defined in (6) is equal to the inverse of the self-weights of M. This comes from the fact that the trace of M is equal to the sum of its eigenvalues. Otherwise, all results that require Assumption 5 hold by replacing n_W(γ) with min_i M_ii^{-1}.

Besides this interesting relationship with the effective number of neighbors n_W(γ), we will be interested in another spectral property of M, namely the constant β(γ) (which only depends on γ through M, but we make this dependence explicit), which is such that

    L_M ≼ β(γ)^{-1} L_W W,    (22)

where L_M = I − M. This constant can be interpreted as the strong convexity of the semi-norm defined by L_W W relative to the one defined by L_M. Due to the form of M, we have 1 − λ_2(W) ≤ β(γ) ≤ 1, and the lower bound is tight for γ → 1. However, the specific form of M (involving neighborhoods as defined by W) and the use of γ < 1 ensure a much larger constant β(γ) in general.
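The quantities M, n_W(γ) and β(γ) are easy to compute explicitly for a concrete gossip matrix. The following sketch is illustrative only; it assumes NumPy and a 32-worker ring with a heavy self-weight of 3/4 (our own choice) so that all eigenvalues of W lie in [1/2, 1]. It builds M from (21), checks that M_ii^{-1} matches n_W(γ), and computes β(γ) as the largest constant satisfying (22).

```python
import numpy as np

n, gamma = 32, 0.9

# Ring with heavy self-weight (3/4) so that all eigenvalues of W lie in [1/2, 1].
W = 0.75 * np.eye(n)
for i in range(n):
    W[i, (i - 1) % n] += 0.125
    W[i, (i + 1) % n] += 0.125
lam = np.linalg.eigvalsh(W)

# Neighborhood matrix (21): M = (1 - gamma) W^2 (I - gamma W^2)^{-1}.
W2 = W @ W
M = (1.0 - gamma) * W2 @ np.linalg.inv(np.eye(n) - gamma * W2)

# Effective number of neighbors from the closed form (6).
n_eff = (1.0 / (1.0 - gamma)) / np.mean(lam**2 / (1.0 - gamma * lam**2))
print("1 / M_ii =", 1.0 / M[0, 0], "   n_W(gamma) =", n_eff)   # equal under Assumption 5

# beta(gamma) from (22): the largest beta with L_M <= beta^{-1} L_W W; since M is a
# polynomial in W, this is the minimum over eigenvalues lambda != 1 of
# lambda (1 - gamma lambda^2) / (1 + lambda).
mask = lam < 1.0 - 1e-9
beta = np.min(lam[mask] * (1.0 - gamma * lam[mask] ** 2) / (1.0 + lam[mask]))
print("beta(gamma) =", beta, "   1 - lambda_2(W) =", 1.0 - np.sort(lam)[-2])
```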
Fixed points of D-(S)GD. In Vogels et al. (2022), we consider a homogeneous setting, in which E[f_{ξ_i(t)}] = f for all i. We now go beyond this analysis, and consider a setting in which the local functions f_i might be different. In this case, constant-learning-rate Decentralized Gradient Descent (the deterministic version of D-SGD) does not converge to the minimizer of the average function, but to a different point. Let us now consider this fixed point x⋆_η, which verifies:

    η ∇F(x⋆_η) + L_W x⋆_η = 0.    (23)

Note that x⋆_η crucially depends on the learning rate η (which we emphasize in the notation) and that it is generally not at consensus (L_W x⋆_η ≠ 0). In the presence of stochastic noise, D-SGD will oscillate in a neighborhood (proportional to the gradients' variance) of this fixed point x⋆_η, and so from now on we will refer to x⋆_η as the fixed point of D-SGD. In the remainder of this section, we show that the results from Vogels et al. (2022) still hold as long as we replace the global minimizer x⋆ (the solution of Problem (18)) by this fixed point x⋆_η. More specifically, we measure convergence by ensuring the decrease of the following Lyapunov function:

    L_t = ∥x(t) − x⋆_η∥²_M + ω ∥x(t) − x⋆_η∥²_{L_M} = (1 − ω) ∥x(t) − x⋆_η∥²_M + ω ∥x(t) − x⋆_η∥²,    (24)

for some parameter ω ∈ [0, 1], and where L_M = I − M. Then, we will show how these results imply convergence to a neighborhood of x⋆_η, and that this neighborhood shrinks with smaller learning rates η. More specifically, the section unrolls as follows:

1. Theorem 6 first proves a general convergence result to x⋆_η, the fixed point of D-(S)GD.
2. Theorem 9 then bounds the distance to the true optimum for general learning rates.
3. Corollary 10 finally gives a full convergence result with optimized learning rates. Readers interested in quickly comparing our results with state-of-the-art ones can skip to this result.

4.2 General convergence result

Theorem 6 provides convergence rates for any choice of the parameter γ that determines the neighborhood matrix M, and for any Lyapunov parameter ω. The best rates are obtained for specific γ and ω that balance the benefit of averaging with the constraint it imposes on closeness between neighbors. We will discuss these choices more in depth in the next section.

Theorem 6 If Assumptions 4 and 5 hold and if η is such that

    η ≤ min( β(γ)ω/L , 1 / (4[(n_W(γ)^{-1} + ω)ζ + L]) ),    (25)

then the Lyapunov function defined in (24) verifies the following:

    L_{t+1} ≤ (1 − ηµ) L_t + η² σ²_M,   where   σ²_M = 2[(1 − ω) n_W(γ)^{-1} + ω] E[∥∇F_{ξ_t}(x⋆_η) − ∇F(x⋆_η)∥²].

This theorem shows convergence (up to a variance region) to the fixed point x⋆_η of D-SGD, regardless of the 'true' minimizer x⋆. Although converging to x⋆_η might not be ideal depending on the use case (but do keep in mind that x⋆_η → x⋆ as η shrinks), this is what D-SGD does, and so we believe it is important to start by stating this clearly. The homogeneous case did not have this problem, since x⋆_η = x⋆ for all η that imply convergence. The parameter ω ∈ [0, 1] is free, and it is often convenient to choose it as ω = ηL/β(γ) to get rid of the first condition on η. However, we present the result with a free parameter ω since, as we will see in the remainder of this section, setting ω = n_W(γ)^{-1} allows for simple corollaries.
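The trade-off between the two conditions in (25) can be explored numerically. The sketch below is illustrative only; it assumes NumPy, the choice ω = n_W(γ)^{-1}, the Figure 3 constants L = 1 and ζ = 2000, and a 32-worker ring with heavy self-weight (our own construction) so that the eigenvalues of W lie in [1/2, 1].

```python
import numpy as np

# Ring with heavy self-weight so that the eigenvalues of W lie in [1/2, 1].
n = 32
W = 0.75 * np.eye(n)
for i in range(n):
    W[i, (i - 1) % n] += 0.125
    W[i, (i + 1) % n] += 0.125
lam = np.linalg.eigvalsh(W)

L, zeta = 1.0, 2000.0                     # the constants used in Figure 3

def n_eff(gamma):
    return (1.0 / (1.0 - gamma)) / np.mean(lam**2 / (1.0 - gamma * lam**2))

def beta(gamma):
    m = lam < 1.0 - 1e-9
    return np.min(lam[m] * (1.0 - gamma * lam[m] ** 2) / (1.0 + lam[m]))

def max_learning_rate(gamma):
    # The two conditions of (25) with omega = n_W(gamma)^{-1}.
    omega = 1.0 / n_eff(gamma)
    consensus = beta(gamma) * omega / L
    noise = 1.0 / (4.0 * ((1.0 / n_eff(gamma) + omega) * zeta + L))
    return min(consensus, noise)

gammas = 1.0 - np.logspace(-4, 0, 200)    # gamma from 0 up to 0.9999
etas = np.array([max_learning_rate(g) for g in gammas])
best = int(np.argmax(etas))
print("best gamma:", round(gammas[best], 4),
      "  n_W:", round(n_eff(gammas[best]), 1),
      "  max eta:", etas[best])
```

For small γ the noise condition dominates and the admissible learning rate grows with n_W(γ); for large γ the consensus condition takes over, which is the shape discussed around Figure 3 below.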
Proof We now detail the proof, which is both a simplification and a generalization of Theorem IV from Vogels et al. (2022).

1 - General decomposition. We first analyze the first term of the Lyapunov (24), and use the fixed-point condition (23) to write:

    E[∥x(t+1) − x⋆_η∥²_M] = ∥x(t) − x⋆_η∥²_M + E[∥η ∇F_{ξ_t}(x(t)) + L_W x(t)∥²_M]
                            − 2η (∇F(x(t)) − ∇F(x⋆_η))⊤ M (x(t) − x⋆_η) − 2 ∥x(t) − x⋆_η∥²_{L_W M}.    (26)

The second term of the Lyapunov is handled identically, with I in place of M.

2 - Error terms. We start by bounding the error terms, and use the optimality conditions to obtain:

    E[∥η ∇F_{ξ_t}(x(t)) + L_W x(t)∥²_M]
      = E[∥η (∇F_{ξ_t}(x(t)) − ∇F(x⋆_η)) + L_W (x(t) − x⋆_η)∥²_M]
      = E[∥η (∇F_{ξ_t}(x(t)) − ∇F_{ξ_t}(x⋆_η)) + (η (∇F_{ξ_t}(x⋆_η) − ∇F(x⋆_η)) + L_W (x(t) − x⋆_η))∥²_M]
      ≤ 2η² E[∥∇F_{ξ_t}(x(t)) − ∇F_{ξ_t}(x⋆_η)∥²_M] + 2η² E[∥∇F_{ξ_t}(x⋆_η) − ∇F(x⋆_η)∥²_M] + 2 ∥x(t) − x⋆_η∥²_{L_W M L_W},

where the last inequality comes from the bias-variance decomposition. The second term corresponds to variance, whereas the first and last ones will be canceled by descent terms.

Stochastic gradient noise. To bound the first term, we crucially use that the stochastic noises are independent for two different nodes, so in particular:

    E[∥∇F_{ξ_t}(x(t)) − ∇F_{ξ_t}(x⋆_η)∥²_M]
      = n_W(γ)^{-1} E[∥∇F_{ξ_t}(x(t)) − ∇F_{ξ_t}(x⋆_η)∥²] + ∥∇F(x(t)) − ∇F(x⋆_η)∥²_{M − n_W(γ)^{-1} I}
      ≤ 2 n_W(γ)^{-1} E[ζ_{ξ_t} D_{F_{ξ_t}}(x⋆_η, x(t))] + ∥∇F(x(t)) − ∇F(x⋆_η)∥²
      ≤ 2 (n_W(γ)^{-1} ζ + L) D_F(x(t), x⋆_η),

where we used that M ≼ I, the L-cocoercivity of F, and the noise assumption, i.e., E[ζ_{ξ_t} D_{F_{ξ_t}}] ≤ ζ D_F. The effective number of neighbors n_W(γ) kicks in since Assumption 5 implies that the diagonal of M is equal to n_W(γ)^{-1} I.
Using independence again, we obtain:

    E[∥∇F_{ξ_t}(x⋆_η) − ∇F(x⋆_η)∥²_M] = n_W(γ)^{-1} E[∥∇F_{ξ_t}(x⋆_η) − ∇F(x⋆_η)∥²].    (27)

Performing the same computations for the E[∥∇F_{ξ_t}(x(t)) − ∇F(x⋆_η)∥²] term and adding the consensus error leads to:

    E[∥η ∇F_{ξ_t}(x(t)) + L_W x(t)∥²_{(1−ω)M + ωI}]
      ≤ 4η² [((1 − ω) n_W(γ)^{-1} + ω) ζ + (1 − ω) L] D_F(x(t), x⋆_η)
        + 2η² ((1 − ω) n_W(γ)^{-1} + ω) E[∥∇F_{ξ_t}(x⋆_η) − ∇F(x⋆_η)∥²]
        + 2 ∥x(t) − x⋆_η∥²_{L_W[M + ωL_M]L_W}.    (28)

Here, the first term will be controlled by the descent obtained through the gradient terms, and the last one through the communication terms.

3 - Descent terms.

Gradient terms. We first analyze the effect of all gradient terms. In particular, we use that (1 − ω)M + ωI = I − (1 − ω)L_M. Then, we use that (∇F(x(t)) − ∇F(x⋆_η))⊤(x(t) − x⋆_η) = D_F(x(t), x⋆_η) + D_F(x⋆_η, x(t)), and:

    2 (∇F(x(t)) − ∇F(x⋆_η))⊤ L_M (x(t) − x⋆_η)
      ≤ 2 ∥∇F(x(t)) − ∇F(x⋆_η)∥ ∥L_M (x(t) − x⋆_η)∥
      ≤ (1/(2L)) ∥∇F(x(t)) − ∇F(x⋆_η)∥² + 2L ∥x(t) − x⋆_η∥²_{L_M²}
      ≤ D_F(x(t), x⋆_η) + 2L ∥x(t) − x⋆_η∥²_{L_M²}.

Overall, the gradient terms sum to:

    − 2 (∇F(x(t)) − ∇F(x⋆_η))⊤ (x(t) − x⋆_η) + 2(1 − ω) (∇F(x(t)) − ∇F(x⋆_η))⊤ L_M (x(t) − x⋆_η)
      ≤ − 2 D_F(x⋆_η, x(t)) − (1 + ω) D_F(x(t), x⋆_η) + 2(1 − ω) L ∥x(t) − x⋆_η∥²_{L_M²}
      ≤ − µ ∥x(t) − x⋆_η∥² − D_F(x(t), x⋆_η) + 2L ∥x(t) − x⋆_η∥²_{L_M²}
      ≤ − (1 − ω) µ ∥x(t) − x⋆_η∥²_M − ωµ ∥x(t) − x⋆_η∥² − D_F(x(t), x⋆_η) + 2β(γ)^{-1} L ∥x(t) − x⋆_η∥²_{L_M L_W W},    (29)

where we used that L_M ≼ β(γ)^{-1} L_W W.

Gossip terms. We simply recall the gossip terms we use for descent here, which write:

    − 2 ∥x(t) − x⋆_η∥²_{L_W M} − 2ω ∥x(t) − x⋆_η∥²_{L_W L_M}.    (30)

4 - Putting everything together. We now add all the descent and error terms together.
More specifically, using Equations (28), (29) and (30), we obtain:

    L_{t+1} ≤ (1 − ηµ) L_t − 2 ∥x(t) − x⋆_η∥²_{L_W M (I − L_W)}
              − 2ω [1 − ηL/(ωβ(γ))] ∥x(t) − x⋆_η∥²_{L_W L_M W}
              − η [1 − 4η(((1 − ω) n_W(γ)^{-1} + ω) ζ + (1 − ω) L)] D_F(x(t), x⋆_η)
              + 2η² ((1 − ω) n_W(γ)^{-1} + ω) E[∥∇F_{ξ_t}(x⋆_η) − ∇F(x⋆_η)∥²].

The conditions in the theorem are chosen so that the bracketed factors on the second and third lines are non-negative, which lets us drop those terms (the consensus term on the first line is non-positive automatically), and using that 1 − ω ≤ 1 (since ω is small anyway).

[Figure 3 about here. Two panels plot the maximum learning rate prescribed by the theorem (with L = 1.0 and ζ = 2000) against the effective number of neighbors n_W(γ), from 1 to 32. The left panel details a 32-worker ring, showing the regimes 'restricted by noise' and 'restricted by consensus' together with heatmaps of the averaging matrix M; the right panel compares the ring, a 4×8 torus, and a hypercube.]

Figure 3: Maximum learning rates prescribed by Theorem 6, varying the parameter γ that implies an effective neighborhood size (x-axis) and an averaging matrix M (drawn as heatmaps). On the left, we show the details for a 32-worker ring topology, and on the right, we compare it to more connected topologies.
Increasing γ (and with it n_W(γ)) initially leads to larger learning rates thanks to noise reduction. At the optimum, the cost of consensus exceeds the benefit of further reduced noise.

4.3 Main corollaries

4.3.1 Large learning rate: speeding up convergence for large errors

We now investigate Theorem 6 in the case in which both the noise σ² and the heterogeneity ∥∇F(x⋆)∥²_{L_W†} are small (compared to L_0), and so we would like to use the highest possible learning rate in order to ensure a fast decrease of the objective (which is consistent with Figure 1). Using (25), we obtain a rate for each parameter γ that controls the local neighborhood size (remember that β(γ) depends on γ). The task that remains is to find the γ parameter that gives the best convergence guarantees (the largest learning rate). As explained before, one should never reduce the learning rate in order to be close to others, because the goal of collaboration (in this regime in which we are not affected by variance and heterogeneity) is to increase the learning rate. We illustrate this in Figure 3, which we obtain by choosing ω = n_W(γ)^{-1} and evaluating the two terms of (25) for different values of γ. The expression for the linear part of the curve (before consensus dominates) is given in Corollary 7.
Corollary 7 Assume that Assumptions 4 and 5 hold. Then the largest (up to constants) learning rate is obtained as

    η = (8ζ/n_W(γ) + 4L)^{-1},   for γ such that   4 n_W(γ)^{-1} β(γ) (2 n_W(γ)^{-1} ζ + L) ≥ L.    (31)

We see that the learning rate scales linearly with the effective number of neighbors in this case (which is equivalent to taking a mini-batch of size linear in n_W(γ)), until either a certain number of neighbors is reached (the condition on the right) or centralized performance is achieved (ζ = n_W(γ)L). The condition on γ always has a solution since, when γ ≈ 0, both β(γ) and n_W(γ)^{-1} are close to 1, and they both decrease when γ grows. This corollary directly follows from taking ω = n_W(γ)^{-1} in Theorem 6. Note that a slightly tighter choice could be obtained by setting ω = ηL/β(γ).

Investigating β(γ). We now evaluate β(γ) in order to obtain more precise bounds. In particular, choosing M as in (21), the eigenvalues of L_M are equal to

    λ_i^{L_M} = (1 − λ_i²) / (1 − γλ_i²),    (32)

where the λ_i are the eigenvalues of W. In particular, β(γ) L_M ≼ L_W W translates into the requirement that, for all i such that λ_i ≠ 1 (the inequality is automatically verified when λ_i = 1),

    β(γ) ≤ [(1 − γλ_i²)/(1 − λ_i²)] (1 − λ_i) λ_i = λ_i (1 − γλ_i²) / (1 + λ_i).    (33)

We now make the simplifying assumption that λ_min(W) ≥ 1/2 (which we can always enforce by taking W′ = (I + W)/2), but note that the theory holds regardless.
We motivate this simplifying assumption by the fact that, for arbitrarily small spectral gaps, the right-hand side of (33) will always be minimized at λ_2(W) provided γ is large enough, so the actual value of λ_min(W) < 1 does not matter. In particular, in this case, neglecting the effect of the spectral gap, we can just take

    β(γ) = (1 − γλ_2(W)²)/4 ≥ (1 − γ)/4.    (34)

Note that β(γ) allows for large γ when the spectral gap 1 − λ_2(W) is large, but we allow non-trivial learning rates η > 0 even when λ_2(W) = 1 (infinite graphs), as long as γ < 1.

Optimal choice of n_W(γ). Leveraging the spectral dimension results from Section 3.3, we obtain the following corollary:

Corollary 8 Under Assumptions 4 and 5, and assuming that λ_min(W) ≥ 1/2, that the communication graph has spectral dimension d_s > 2, and that ζ ≫ L, the highest possible learning rate is

    η = (1/8) (c_s(d_s − 2) / (ζ²L))^{1/3},   obtained for   n_W(γ) = (c_s(d_s − 2) ζ/L)^{1/3}.    (35)

This result follows from Corollary 7 which, if ζ ≫ L, writes:

    L/ζ ≤ 8 n_W(γ)^{-2} β(γ) = n_W(γ)^{-3} c_s(d_s − 2),    (36)

where the right part is obtained by plugging the expression for β(γ) from (34) into n_W(γ)^{-1} ≤ 2(1 − γ)/(c_s(d_s − 2)) from (14) (assuming γ ≥ 1/2). Then, one can solve for 1 − γ.
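This last step is simple arithmetic. The short sketch below uses hypothetical constants (c_s and d_s are properties of the graph family and are not specified in this excerpt) to compute the optimal n_W(γ), the corresponding 1 − γ, and the resulting learning rate from Corollary 8.

```python
# Corollary 8 with hypothetical constants (c_s and d_s describe the graph family).
c_s, d_s, zeta, L = 1.0, 3.0, 2000.0, 1.0

n_w = (c_s * (d_s - 2) * zeta / L) ** (1 / 3)            # optimal n_W(gamma), Equation (35)
eta = (1 / 8) * (c_s * (d_s - 2) / (zeta**2 * L)) ** (1 / 3)
one_minus_gamma = c_s * (d_s - 2) / (2 * n_w)            # from n_W(gamma)^{-1} ~= 2(1 - gamma) / (c_s (d_s - 2))
print("n_W* =", round(n_w, 1), "  eta* =", eta, "  1 - gamma* =", round(one_minus_gamma, 3))
```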
The assumptions beyond Assumption 4 allow us to give a simple result in this specific case, but similar expressions can easily be obtained for $d_s \le 2$ and $\zeta < L\, n_W(\gamma)$.

4.3.2 Small learning rate: approaching the optimum arbitrarily closely

Theorem 6 gives a convergence result to $x^\star_\eta$, the fixed point of D-SGD, and we have investigated in the previous section the behavior of D-SGD for large learning rates. In Theorem 9, we focus on small error levels, for which the variance and heterogeneity terms dominate, and we would like to take small learning rates $\eta$. In this setting, we bound the distance between the current iterate and the true minimizer $x^\star$ instead of $x^\star_\eta$. We also provide a result that gets rid of all dependence on $x^\star_\eta$, and only explicitly depends on the learning rate $\eta$.

Theorem 9 Under the same assumptions and conditions on the learning rate as Theorem 6 and Corollary 8, we have that:
$$\|x^{(t)} - x^\star\|_M^2 \le 2(1 - \eta\mu)^t \mathcal{L}^{(0)} + \frac{2\eta\sigma_M^2}{\mu} + 2\eta^2(1 + \kappa)\,\|L_W^\dagger \nabla F(x^\star_\eta)\|^2. \quad (37)$$
We can further remove $x^\star_\eta$ from the bound, and obtain:
$$\|x^{(t)} - x^\star\|_M^2 \le 2(1 - \eta\mu)^t \mathcal{L}^{(0)} + \frac{6\eta\sigma_{M,\star}^2}{\mu} + 6\eta^2\kappa\, p^{-1}\Delta_W^2,$$
where $\sigma_{M,\star}^2 = (n_W(\gamma)^{-1} + \omega)\, \mathbb{E}\big[\|\nabla F_\xi(x^\star) - \nabla F(x^\star)\|^2\big]$, $p^{-1} = \max_\eta \|L_W^\dagger \nabla F(x^\star_\eta)\|^2 \,/\, \|\nabla F(x^\star_\eta)\|^2_{L_W^\dagger}$, so that $p \ge 1 - \lambda_2(W)$, and $\Delta_W^2 = \|\nabla F(x^\star)\|^2_{L_W^\dagger}$.

The norm $\|x^{(t)} - x^\star\|_M^2$ considers convergence of locally averaged neighborhoods, but $\|x^{(t)} - x^\star\|_M^2 \ge \|\bar{x}^{(t)} - x^\star\|^2$ since $\mathbf{1}$ is an eigenvector of $M$ with eigenvalue 1.
eigenvalue 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' We now briefly discuss the various terms in this corollary, and then prove it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Heterogeneity term.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' The term due to heterogeneity only depends on the distance between the true optimum x⋆ and the fixed point x⋆ η, which we then transform into a condition on ∥∇F(x⋆)∥2 LW†.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In particular, it is not influenced by the choice of M (and thus of γ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Constant p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' We introduce constant p to get rid of the explicit dependence on x⋆ η.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Indeed, p−1 intuitively denotes how large LW† is in the direction of ∇F(x⋆ η).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' For instance, if ∇F(x⋆ η) is an eigenvector of W associated with eigenvalue λ, then we have p = 1−λ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In the worst case, we have that p = 1 − λ2(W), but p can be much better in general, when the heterogeneity is spread evenly, instead of having very different functions on distant nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Variance term.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In this case, the largest variance reduction (of order n) is obtained by taking ω and nW(γ)−1 as small as possible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' For learning rates that are too large to imply nW(γ)−1 ≈ n−1, decreasing it decreases the variance term in two ways: (i) directly, through the η term, (ii) indirectly, by allowing to take smaller values of nW(γ)−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' For very large (infinite) graphs, we can take ω = nW(γ)−1, and in this case Theorem 6 gives that the smallest nW(γ)−1 is given by nW(γ)−1β(γ) = ηL.' 
Variance term. In this case, the largest variance reduction (of order $n$) is obtained by taking $\omega$ and $n_W(\gamma)^{-1}$ as small as possible. For learning rates that are too large to allow $n_W(\gamma)^{-1} \approx n^{-1}$, decreasing the learning rate decreases the variance term in two ways: (i) directly, through the $\eta$ factor, and (ii) indirectly, by allowing smaller values of $n_W(\gamma)^{-1}$. For very large (infinite) graphs, we can take $\omega = n_W(\gamma)^{-1}$, and in this case Theorem 6 gives that the smallest $n_W(\gamma)^{-1}$ is given by $n_W(\gamma)^{-1}\beta(\gamma) = \eta L$. Using spectral dimension results (for instance with $d_s > 2$), we obtain (similarly to Corollary 8) that we can take $\beta(\gamma) = n_W(\gamma)^{-1} c_s(d_s - 2)/8$, and so:
$$n_W(\gamma)^{-1} = \sqrt{\frac{8\eta L}{c_s(d_s - 2)}}, \quad (38)$$
so the residual variance term for this choice of $n_W(\gamma)^{-1}$ is of order
$$O\!\left(\frac{\eta^{\frac{3}{2}}}{\mu}\sqrt{\frac{L}{c_s(d_s - 2)}}\; \mathbb{E}\big[\|\nabla F_\xi(x^\star) - \nabla F(x^\star)\|^2\big]\right). \quad (39)$$
In particular, we obtain super-linear scaling when reducing the learning rate $\eta$, thanks to the added benefit of gaining more effective neighbors. Note that, again, the cases $d_s \le 2$ can be treated in the same way.

Proof [Theorem 9] We start by writing:
$$\|x^{(t)} - x^\star\|^2_M \le 2\|x^{(t)} - x^\star_\eta\|^2_M + 2\|x^\star_\eta - x^\star\|^2_M \le 2\mathcal{L}^{(t)} + 2\|x^\star_\eta - x^\star\|^2. \quad (40)$$
Theorem 6 ensures that $\mathcal{L}^{(t)}$ becomes small, and so we are left with bounding the distance between $x^\star_\eta$ and $x^\star$.

1 - Distance to the global minimizer. We define $\bar{x}^\star_\eta = \frac{1}{n}\mathbf{1}\mathbf{1}^\top x^\star_\eta$. Using the fact that both $\bar{x}^\star_\eta$ and $x^\star$ are at consensus, and $\mathbf{1}^\top \nabla F(x^\star_\eta) = 0$ (immediate from (23)), we write:
$$D_F(x^\star, x^\star_\eta) = F(x^\star) - F(x^\star_\eta) - \nabla F(x^\star_\eta)^\top(x^\star - x^\star_\eta) = F(\bar{x}^\star_\eta) - F(x^\star_\eta) - \nabla F(x^\star_\eta)^\top(\bar{x}^\star_\eta - x^\star_\eta) + F(x^\star) - F(\bar{x}^\star_\eta) \le D_F(\bar{x}^\star_\eta, x^\star_\eta), \quad (41)$$
where the last inequality comes from the fact that $x^\star$ is the minimizer of $F$ on the consensus space. Therefore:
$$\|x^\star_\eta - x^\star\|^2 = \|\bar{x}^\star_\eta - x^\star\|^2 + \|x^\star_\eta - \bar{x}^\star_\eta\|^2 \le \frac{1}{\mu} D_F(x^\star, x^\star_\eta) + \|x^\star_\eta - \bar{x}^\star_\eta\|^2 \le \frac{1}{\mu} D_F(\bar{x}^\star_\eta, x^\star_\eta) + \|x^\star_\eta - \bar{x}^\star_\eta\|^2 \le \left(1 + \frac{L}{\mu}\right)\|\bar{x}^\star_\eta - x^\star_\eta\|^2 = \eta^2\left(1 + \frac{L}{\mu}\right)\|L_W^\dagger \nabla F(x^\star_\eta)\|^2.$$
Note that the result depends on the heterogeneity pattern of the gradients at the fixed point, and might be bounded (and even small) even when $W$ has no spectral gap.
However, this quantity is proportional to the squared inverse spectral gap in the worst case.

2 - Monotonicity in $\eta$. We now prove that $\|\nabla F(x^\star_\eta)\|^2_{L_W^\dagger}$ decreases when $\eta$ increases, and so is maximal for $\eta = 0$, corresponding to $x^\star_\eta = x^\star$. More specifically:
$$\frac{d\|\nabla F(x^\star_\eta)\|^2_{L_W^\dagger}}{d\eta} = \frac{d\left(\eta^{-2}\|x^\star_\eta\|^2_{L_W}\right)}{d\eta} = -\frac{2\|x^\star_\eta\|^2_{L_W}}{\eta^3} + 2\eta^{-2}(x^\star_\eta)^\top L_W \frac{dx^\star_\eta}{d\eta}.$$
Differentiating the fixed-point conditions, we obtain that
$$\eta\nabla^2 F(x^\star_\eta)\frac{dx^\star_\eta}{d\eta} + \nabla F(x^\star_\eta) + L_W\frac{dx^\star_\eta}{d\eta} = 0, \quad (42)$$
so that:
$$\frac{dx^\star_\eta}{d\eta} = -\left(\eta\nabla^2 F(x^\star_\eta) + L_W\right)^{-1}\nabla F(x^\star_\eta) = \eta^{-1}\left(\eta\nabla^2 F(x^\star_\eta) + L_W\right)^{-1} L_W\, x^\star_\eta. \quad (43)$$
Plugging this into the previous expression and using that $\nabla^2 F(x^\star_\eta)$ is positive semi-definite, we obtain:
$$\frac{d\|\nabla F(x^\star_\eta)\|^2_{L_W^\dagger}}{d\eta} = -\frac{2}{\eta^3}(x^\star_\eta)^\top\left[L_W - L_W\left(L_W + \eta\nabla^2 F(x^\star_\eta)\right)^{-1} L_W\right]x^\star_\eta \le -\frac{2}{\eta^3}(x^\star_\eta)^\top\left[L_W - L_W L_W^\dagger L_W\right]x^\star_\eta = 0.$$

3 - Getting rid of $x^\star_\eta$. By definition of $p$, we can write:
$$\|L_W^\dagger\nabla F(x^\star_\eta)\|^2 \le p^{-1}\|\nabla F(x^\star_\eta)\|^2_{L_W^\dagger} \le p^{-1}\|\nabla F(x^\star)\|^2_{L_W^\dagger}. \quad (44)$$
Note that we have to bound this constant $p$ in order to use the monotonicity in $\eta$ of $\|\nabla F(x^\star_\eta)\|^2_{L_W^\dagger}$, since this result does not hold for $\|L_W^\dagger\nabla F(x^\star_\eta)\|^2$. For the variance, we write that:
$$\mathbb{E}\big[\|\nabla F_{\xi_t}(x^\star_\eta) - \nabla F(x^\star_\eta)\|^2\big] \le 3\,\mathbb{E}\big[\|\nabla F_{\xi_t}(x^\star_\eta) - \nabla F_{\xi_t}(x^\star)\|^2\big] + 3\,\mathbb{E}\big[\|\nabla F_{\xi_t}(x^\star) - \nabla F(x^\star)\|^2\big] + 3\|\nabla F(x^\star_\eta) - \nabla F(x^\star)\|^2 \le 3\sigma^2_{M,\star} + 3(\zeta + L)\, D_F(x^\star, x^\star_\eta).$$
From here, we use Equation (41) and obtain that:
$$\mathbb{E}\big[\|\nabla F_{\xi_t}(x^\star_\eta) - \nabla F(x^\star_\eta)\|^2\big] \le 3\sigma^2_{M,\star} + 3L(\zeta + L)\,\eta^2\|L_W^\dagger\nabla F(x^\star_\eta)\|^2. \quad (45)$$
To obtain the final result, we use that $\eta\,(n_W(\gamma)^{-1} + \omega)(\zeta + L) \le 1/4$ thanks to the conditions on the learning rate.
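The monotonicity argument in Step 2 is easy to check numerically. The sketch below does so on an illustrative toy problem (one scalar parameter per worker, local quadratics, and a hypothetical ring gossip matrix); it is not one of the paper's experiments, and it uses the fixed-point relation $\eta\nabla F(x^\star_\eta) + L_W x^\star_\eta = 0$ implicit in (42)-(43).

```python
import numpy as np

# Toy problem: worker i holds F_i(x_i) = a_i (x_i - b_i)^2 / 2 over a scalar x_i.
n = 32
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 1 / 2, 1 / 4, 1 / 4
L_W = np.eye(n) - W
L_pinv = np.linalg.pinv(L_W)

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=n)      # hypothetical local curvatures
b = rng.standard_normal(n)             # heterogeneous local minimizers

def grad_norm_sq(eta):
    # Solve (eta * diag(a) + L_W) x = eta * diag(a) b for the fixed point x*_eta.
    x = np.linalg.solve(eta * np.diag(a) + L_W, eta * a * b)
    g = a * (x - b)                     # grad F(x*_eta), orthogonal to the all-ones vector
    return g @ L_pinv @ g               # ||grad F(x*_eta)||^2 in the L_W^† norm

print([grad_norm_sq(eta) for eta in (0.01, 0.1, 0.5, 1.0, 5.0)])
# The printed values are non-increasing in eta, matching the monotonicity proved above.
```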
4.3.3 Comparison with existing work

Expressed in the form of Koloskova et al. (2020), we can summarize the previous corollaries into the following result by taking either $\eta$ as the largest possible constant (as indicated in Corollary 8) or $\eta = \tilde{O}(1/(\mu T))$. Here, $\tilde{O}$ denotes inequality up to logarithmic factors, and recall that $\|x^{(t)} - x^\star\|^2_M \ge \|\bar{x}^{(t)} - x^\star\|^2$. We recall that $L$ is the smoothness of the global objective $f$, $\zeta$ is the smoothness of the stochastic functions $f_\xi$, $\mu$ is the strong convexity parameter, $d_s$ is the spectral dimension of the gossip matrix $W$ (and we assume $d_s > 2$), and $c_s$ is the associated constant.

Corollary 10 (Final result) Under the same assumptions as Corollary 8, there exists a choice of learning rate (and, equivalently, of decay parameters $\gamma^*_{\text{large}}$ and $\gamma^*_{\text{small}}$) such that the expected squared distance to the global optimum after $T$ steps of D-SGD, $\|\bar{x}^{(t)} - x^\star\|^2$, is of order:
$$\tilde{O}\!\left(\frac{\sigma^2}{\mu^2 T\, n_W(\gamma^*_{\text{small}})} + \frac{L\Delta_W^2}{\mu^3 p\, T^2} + \exp\!\left(-\frac{n_W(\gamma^*_{\text{large}})\,\mu}{\zeta}\, T\right)\right), \quad (46)$$
where $\Delta_W^2$ and $p$ are defined in Theorem 9, and $\bar{x}^{(t)}$ is the average parameter. The optimal effective numbers of neighbors in respectively the small and large learning rate settings are:
$$n_W(\gamma^*_{\text{small}}) = \min\left(\sqrt{\frac{d_s T}{L c_s}},\; n\right) \quad \text{and} \quad n_W(\gamma^*_{\text{large}}) = \min\left(\left(\frac{c_s d_s \zeta}{L}\right)^{\frac{1}{3}},\; n\right). \quad (47)$$
This result can be contrasted with the result from Koloskova et al. (2020), which writes:
$$\tilde{O}\!\left(\frac{\sigma^2}{\mu^2 T}\left(\frac{1}{n} + \frac{L}{\mu(1 - \lambda_2(W))T}\right) + \frac{L\Delta^2}{\mu^3(1 - \lambda_2(W))^2 T^2} + \exp\!\left(-\frac{(1 - \lambda_2(W))\,\mu}{\zeta}\, T\right)\right). \quad (48)$$
We can now make the following observations.

Scheduling the learning rate. Here, the learning rate is either chosen as $\eta_{\text{large}} = n_W(\gamma^*_{\text{large}})/\zeta$, or as $\eta_{\text{small}} = \tilde{O}((\mu T)^{-1})$. In practice, one would start with the large learning rate, and switch to the small one when training does not improve anymore (when the heterogeneity/variance terms dominate).
Exponential decrease term. We first show a significant improvement in the exponential decrease term. Indeed, $n_W(\gamma^*_{\text{large}})/(1 - \lambda_2(W))$, the ratio between the largest learning rate permitted in our analysis and the one permitted by existing analyses, is always large, since $n_W(\gamma^*_{\text{large}}) \ge 1$ and $1 - \lambda_2(W) \le 1$. Besides, the exponential decrease term is no longer affected by the spectral gap in our analysis; the spectral gap only affects how big $n_W(\gamma)$ can be. This improvement holds even when $\zeta = L$ (in this case $n_W(\gamma) = 1$ is enough), and is due to the fact that heterogeneity only affects lower-order terms, so that when cooperation brings nothing, it does not hurt convergence either.

Impact of heterogeneity. The improvement in the heterogeneous case does not depend on some $\gamma$, and relies on bounding heterogeneity in a non-worst-case fashion. Indeed, $\Delta_W^2$ and $p$ capture the interplay between how heterogeneity is distributed among nodes and the actual topology of the graph. Note that this does not contradict the lower bound from Koloskova et al. (2020), since $\Delta_W^2/p = \Delta^2/(1 - \lambda_2(W))^2$ in the worst case. In the worst case, the heterogeneity pattern of $\nabla F(x^\star)$ is aligned with the eigenvector associated with the smallest non-zero eigenvalue of $L_W$, i.e., very distant nodes have very different objectives. The quantity $p$, however, gives more fine-grained bounds that depend on the actual heterogeneity pattern in general.

Variance term. One key difference between the analyses is the variance term that involves $\sigma^2$.
Both analyses depend on the variance of a single node, $\sigma^2/(\mu T)$, which is then multiplied by a 'variance reduction' term. In both cases, this term is of the form $n_W(\gamma)^{-1} + \eta L\,\beta(\gamma)^{-1}$. However, the standard analysis implicitly uses $\gamma = 1$, and so $n_W(\gamma) = n$ and $\beta(\gamma) = 1 - \lambda_2(W)$. Then, the form from (48) follows from taking $\eta = \tilde{O}(1/(\mu T))$. Our analysis, on the other hand, relies on tuning $\gamma$ such that $n_W(\gamma)^{-1} + \eta L\,\beta(\gamma)^{-1}$ is the smallest possible, and is therefore strictly better than just considering $\gamma = 1$. Assuming a given spectral dimension $d_s > 2$ for the graph leads to (46), but any assumption that precisely relates $n_W(\gamma)$ and $\gamma$ would allow similar results. While the $\tilde{O}(T^{-2})$ in the variance term of Koloskova et al. (2020) seems better than our $\tilde{O}(T^{-3/2})$ term, this is misleading because constants are very important in this case. Our rate is optimized over $\gamma$, which accounts for the fact that if the $\tilde{O}(T^{-2})$ term dominates, then it is better to just consider a smaller neighborhood. In that case, we would not benefit from $n^{-1}$ variance reduction anyway. Our result optimally balances the two variance terms from (48) instead. Thanks to this balancing, we obtain that in graphs of spectral dimension $d_s > 2$, the variance decreases as $\tilde{O}(T^{-\frac{3}{2}})$ with a learning rate of $\tilde{O}(T^{-1})$, due to the combined effect of a smaller learning rate and adding more effective neighbors. In finite graphs, this effect caps at $n_W(\gamma) = n$. Finally, note that our analysis and the analysis of Koloskova et al. (2020) allow for different generalizations of the standard framework: our analysis applies to arbitrarily large (infinite) graphs, while Koloskova et al. (2020) can handle time-varying graphs with weak (multi-round) connectivity assumptions.
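To see the effect of tuning $\gamma$, here is a small sketch that minimizes the variance-reduction factor $n_W(\gamma)^{-1} + \eta L\,\beta(\gamma)^{-1}$ using the spectral-dimension approximations $n_W(\gamma)^{-1} \approx 2(1-\gamma)/(c_s(d_s-2))$ from (14) and $\beta(\gamma) \approx (1-\gamma)/4$ from (34). All numerical constants below are made up for illustration.

```python
import numpy as np

# Hypothetical constants, for illustration only.
c_s, d_s, L, eta, n, lambda_2 = 1.0, 3.0, 1.0, 1e-3, 10_000, 1 - 1e-4

def variance_factor(gamma):
    inv_n_w = 2 * (1 - gamma) / (c_s * (d_s - 2))   # approximation from (14)
    beta = (1 - gamma) / 4                          # approximation from (34)
    return inv_n_w + eta * L / beta

gammas = np.linspace(0.5, 1 - 1e-6, 100_000)
tuned = min(variance_factor(g) for g in gammas)
baseline = 1 / n + eta * L / (1 - lambda_2)         # gamma = 1: n_W = n, beta = 1 - lambda_2(W)

print(tuned, baseline)
# The tuned factor is attained at the balance point of (38),
# n_W(gamma)^{-1} = sqrt(8 * eta * L / (c_s * (d_s - 2))), and it is far smaller
# than the gamma = 1 baseline when the spectral gap 1 - lambda_2(W) is tiny.
```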
5. Empirical relevance in deep learning

While the theoretical results in this paper are for convex functions, the initial motivation for this work comes from observations in deep learning. First, it is crucial in deep learning to use a large learning rate in the initial phase of training (Li et al., 2019). Contrary to what current theory prescribes, we do not use smaller learning rates in decentralized optimization than when training alone (even when data is heterogeneous). And second, we find that the spectral gap of a topology is not predictive of the performance of that topology in deep learning experiments.

In this section, we experiment with a variety of 32-worker topologies on Cifar-10 (Krizhevsky et al.) with a VGG-11 model (Simonyan and Zisserman, 2015). Like other recent works (Lin et al., 2021; Vogels et al., 2021), we opt for this older model because it does not include BatchNorm (Ioffe and Szegedy, 2015), which forms an orthogonal challenge for decentralized SGD. Please refer to Appendix E of Vogels et al. (2022) for full details on the experimental setup.
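For readers unfamiliar with the algorithm being benchmarked, the sketch below spells out one generic D-SGD step: a local stochastic gradient update followed by gossip averaging with the mixing matrix $W$. It is a minimal stand-in that simulates all workers in one process with one gossip round per SGD step; it is not the exact training loop used for these experiments.

```python
import torch

def dsgd_step(models, optimizers, batches, W, loss_fn):
    """One simplified D-SGD step for len(models) simulated workers: a local SGD
    update on each worker, then gossip averaging of the parameters with the
    weights of the mixing matrix W (a sketch only)."""
    # Local stochastic gradient step on each worker's private batch.
    for model, opt, (x, y) in zip(models, optimizers, batches):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # Gossip averaging: worker i replaces its parameters by sum_j W[i, j] * (params of j).
    with torch.no_grad():
        flat = [torch.nn.utils.parameters_to_vector(m.parameters()) for m in models]
        for i, model in enumerate(models):
            mixed = sum(W[i][j] * flat[j] for j in range(len(models)))
            torch.nn.utils.vector_to_parameters(mixed, model.parameters())
```

Time-varying schemes, such as the exponential one mentioned below, simply substitute a different $W$ at every step.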
Our set of topologies includes regular graphs like rings and toruses, but also irregular graphs such as a binary tree (Vogels et al., 2021) and a social network (Davis et al., 1930), and a time-varying exponential scheme (Assran et al., 2019). We focus on the initial phase of training, 25k steps in our case, where both train and test loss converge close to linearly. Using a large learning rate in this phase is found to be important for good generalization (Li et al., 2019). Figure 4 shows the loss reached after the first 2.5k SGD steps for all topologies and for a dense grid of learning rates. The curves have the same global structure as those for isotropic quadratics (Figure 1): (sparse) averaging yields a small increase in speed for small learning rates, but a large gain over training alone comes from being able to increase the learning rate.
[Figure 4 plots the Cifar-10 training loss after 2.5k steps (≈25 epochs) against the learning rate, with one curve per topology: binary tree, fully connected, hypercube, ring, social network, solo, star, time-varying exponential, torus (4x8), and two cliques.]

Figure 4: Training loss reached after 2.5k SGD steps with a variety of graph topologies. In all cases, averaging yields a small increase in speed for small learning rates, but a large gain over training alone comes from being able to increase the learning rate. While the star has a better spectral gap (0.031) than the ring (0.013), it performs worse, and does not allow large learning rates. For reference, similar curves for fully-connected graphs of varying sizes are in the appendix of Vogels et al. (2022).

The best schemes support almost the same learning rate as 32 fully-connected workers, and get close in performance. We also find that the random walks introduced in Section 3.1 are a good model for variance between workers in deep learning. Figure 5 shows the empirical covariance between the workers after 100 SGD steps. Just like for isotropic quadratics, the covariance is accurately modeled by the covariance in the random walk process for a certain decay rate γ. Finally, we observe that the effective number of neighbors computed by the variance reduction in a random walk (Section 3.1)
accurately describes the relative performance of graph topologies on our task under tuned learning rates, including for irregular and time-varying topologies. This is in contrast to the topologies' spectral gaps, which we find to be not predictive. We fit a decay rate γ = 0.951 that seems to capture the specifics of our problem, and show the correlation in Figure 6. Appendix F of Vogels et al. (2022) replicates the same experiments in a different setting. There, we use larger graphs (of 64 workers), a different model and data set (an MLP on Fashion MNIST, Xiao et al. (2017)), and no momentum or weight decay. The results in this setting are qualitatively comparable to the ones presented above.
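The covariance matrices in Figure 5 (below) compare, for each topology, the empirical covariance between workers with the covariance predicted by the random walk model. One plausible way to compute such a normalized covariance from repeated short training runs is sketched here; only the aggregation is shown, and the data-collection helpers are hypothetical.

```python
import numpy as np

def normalized_worker_covariance(runs):
    """runs: array of shape (num_runs, num_workers, num_params), where runs[r, i]
    holds worker i's parameters after the r-th repetition of a short training run
    from a common checkpoint. Returns a (num_workers, num_workers) matrix."""
    deviations = runs - runs.mean(axis=0, keepdims=True)   # center over repetitions
    cov = np.einsum('rip,rjp->ij', deviations, deviations) / runs.shape[0]
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)                            # normalize to unit diagonal
```

With 100 repetitions of 100 SGD steps and 32 workers, as described in the caption of Figure 5, `runs` would have shape (100, 32, num_params).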
[Figure 5. Fitted values per topology: two cliques n_W(γ := 0.948) = 17.8; torus (4x8) n_W(γ := 0.993) = 29.4; star n_W(γ := 0.986) = 5.1; social network n_W(γ := 0.992) = 27.3; ring n_W(γ := 0.983) = 13.9; hypercube n_W(γ := 0.997) = 31.3; binary tree n_W(γ := 0.984) = 12.3.]

Figure 5: Measured covariance in Cifar-10 (second row) between workers using various graphs (top row). After 10 epochs, we store a checkpoint of the model and train repeatedly for 100 SGD steps, yielding 100 models for 32 workers. We show normalized covariance matrices between the workers. These are very well approximated by the covariance in the random walk process of Section 3.1 (third row). We print the fitted decay parameters and corresponding 'effective number of neighbors'.
[Figure 6 contains two scatter plots of the Cifar-10 training loss after 2.5k steps (≈25 epochs): one against the effective number of neighbors (from 1 to 32, with γ = 0.951, tuned) and one against the spectral gap (from 0 to 1); × marks fully-connected graphs of varying sizes.]

Figure 6: Cifar-10 training loss after 2.5k steps for all studied topologies with their optimal learning rates. Colors match Figure 4, and × indicates fully-connected graphs with varying numbers of workers. After fitting a decay parameter γ = 0.951 that captures problem specifics, the effective number of neighbors (left), as measured by variance reduction in a random walk (like in Section 3.1), explains the relative performance of these graphs much better than the spectral gap of these topologies (right).
6. Conclusion

We have shown that the sparse averaging in decentralized learning allows larger learning rates to be used, and that it speeds up training. With the optimal large learning rate, the workers' models are not guaranteed to remain close to their global average. Enforcing global consensus is often unnecessary, and the small learning rates it requires can be counter-productive. Indeed, models do remain close to some local average in a weighted neighborhood around them, even with high learning rates. The workers benefit from a number of 'effective neighbors', potentially smaller than the whole graph, that allow them to use larger learning rates while retaining sufficient consensus within the 'local neighborhood'. Similar insights apply when nodes have heterogeneous local functions: there is no need to enforce global averaging over the whole network when heterogeneity is small across local neighborhoods. Besides, there is no need to compensate for heterogeneity in the early phases of training, when models are all far from the global optimum.

Based on our insights, we encourage practitioners of sparse distributed learning algorithms to look beyond the spectral gap of graph topologies, and to investigate the actual 'effective number of neighbors' that is used. We also hope that our insights motivate theoreticians to be mindful of assumptions that artificially limit the learning rate, even though they are tight in worst cases. Indeed, the spectral gap is omnipresent in the decentralized learning literature, and it sometimes hides subtle phenomena such as the superlinear decrease of the variance in the learning rate that we highlight. We show experimentally that our conclusions hold in deep learning, but extending our theory to the non-convex setting is an important open direction that could reveal interesting new phenomena. Another interesting direction would be to better understand (beyond the worst case) the effective number of neighbors for irregular graphs.

Acknowledgments and Disclosure of Funding

This project was supported by SNSF grant 200020_200342.
We thank Lie He for valuable conversations and for identifying the discrepancy between a topology's spectral gap and its empirical performance. We also thank Raphaël Berthier for helpful discussions that allowed us to clarify the links between the effective number of neighbors and the spectral dimension. We also thank Aditya Vardhan Varre, Yatin Dandi and Mathieu Even for their feedback on the manuscript.

References

Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, and Michael G. Rabbat. Stochastic gradient push for distributed deep learning. In Proc. ICML, volume 97, pages 344–353, 2019.

Raphaël Berthier. Analysis and acceleration of gradient descents and gossip algorithms. PhD Thesis, Université Paris Sciences & Lettres, 2021.

Raphaël Berthier, Francis R. Bach, and Pierre Gaillard. Accelerated gossip in networks of given dimension using Jacobi polynomial iterations. SIAM J. Math. Data Sci., 2(1):24–47, 2020.

Yatin Dandi, Anastasia Koloskova, Martin Jaggi, and Sebastian U. Stich. Data-heterogeneity-aware mixing for decentralized learning. CoRR, abs/2204.06477, 2022.
Allison Davis, Burleigh Bradford Gardner, and Mary R. Gardner. Deep South: A social anthropological study of caste and class. Univ of South Carolina Press, 1930.

Mathieu Even, Hadrien Hendrikx, and Laurent Massoulie. Decentralized optimization with heterogeneous delays: a continuous-time approach. arXiv preprint arXiv:2106.03585, 2021.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, volume 37, pages 448–456, 2015.

Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian U. Stich. A unified theory of decentralized SGD with changing topology and local updates. In Proc. ICML, volume 119, pages 5381–5393, 2020.
Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (Canadian Institute for Advanced Research).

B. Le Bars, Aurélien Bellet, Marc Tommasi, and Anne-Marie Kermarrec. Yes, topology matters in decentralized optimization: Refined convergence and topology learning under heterogeneous data. CoRR, abs/2204.04452, 2022.

Yuanzhi Li, Colin Wei, and Tengyu Ma. Towards explaining the regularization effect of initial large learning rate in training neural networks. In NeurIPS, pages 11669–11680, 2019.

Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In NeurIPS, pages 5330–5340, 2017.

Xiangru Lian, Wei Zhang, Ce Zhang, and Ji Liu. Asynchronous decentralized parallel stochastic gradient descent. In Proc. ICML, volume 80, pages 3049–3058, 2018.

Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, and Martin Jaggi. Quasi-global momentum: Accelerating decentralized deep learning on heterogeneous data. In Proc. ICML, volume 139, pages 6654–6665, 2021.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Stich, and Martin Jaggi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Quasi-global momentum: Accelerating decentralized deep learning on heterogeneous data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' ICML, volume 139, pages 6654–6665, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Yucheng Lu and Christopher De Sa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Optimal complexity in decentralized training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' ICML, volume 139, pages 7111–7123, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Giovanni Neglia, Chuan Xu, Don Towsley, and Gianmarco Calbi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Decentralized gradient methods: does topology matter?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In AISTATS,, volume 108, pages 2348–2358, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Dominic Richards and Patrick Rebeschini.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Optimal statistical rates for decentralised non- parametric regression with linear speed-up.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In NeurIPS, pages 1214–1225, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Dominic Richards and Patrick Rebeschini.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Graph-dependent implicit regularisation for distributed stochastic subgradient descent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Res.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=', 21:34:1–34:44, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Karen Simonyan and Andrew Zisserman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Very deep convolutional networks for large-scale image recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In ICLR, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, and Ji Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' d2: Decentralized training over decentralized data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' ICML, volume 80, pages 4855–4863, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' 27 Vogels, Hendrikx, Jaggi Thijs Vogels, Lie He, Anastasia Koloskova, Sai Praneeth Karimireddy, Tao Lin, Sebastian U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Stich, and Martin Jaggi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Relaysum for decentralized deep learning on heterogeneous data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In NeurIPS, pages 28004–28015, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Thijs Vogels, Hadrien Hendrikx, and Martin Jaggi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Beyond spectral gap: the role of topology in decentralized learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In NeurIPS, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, and Soummya Kar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' MATCHA: speeding up decentralized SGD via matching decomposition sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' CoRR, abs/1905.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='09435, 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Han Xiao, Kashif Rasul, and Roland Vollgraf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' CoRR, abs/1708.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content='07747, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, and Wotao Yin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' Exponential graph is provably efficient for decentralized deep training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' In NeurIPS, pages 13975–13987, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'} +page_content=' 28' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NA0T4oBgHgl3EQfNv96/content/2301.02151v1.pdf'}