diff --git "a/FtE1T4oBgHgl3EQf-waO/content/tmp_files/load_file.txt" "b/FtE1T4oBgHgl3EQf-waO/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/FtE1T4oBgHgl3EQf-waO/content/tmp_files/load_file.txt" @@ -0,0 +1,2397 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf,len=2396 +page_content='Balance is Essence: Accelerating Sparse Training via Adap- tive Gradient Correction Bowen Lei bowenlei@stat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='tamu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='edu Texas A&M University Dongkuan Xu dxu27@ncsu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='edu North Carolina State University Ruqi Zhang ruqiz@purdue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='edu Purdue University Shuren He dri.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='tea@tamu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='edu Texas A&M University Bani K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Mallick bmallick@stat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='tamu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='edu Texas A&M University Abstract Despite impressive performance on a wide variety of tasks, deep neural networks re- quire significant memory and computation costs, prohibiting their application in resource- constrained scenarios.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Sparse training is one of the most common techniques to reduce these costs, however, the sparsity constraints add difficulty to the optimization, resulting in an increase in training time and instability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In this work, we aim to overcome this problem and achieve space-time co-efficiency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' To accelerate and stabilize the convergence of sparse train- ing, we analyze the gradient changes and develop an adaptive gradient correction method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Specifically, we approximate the correlation between the current and previous gradients, which is used to balance the two gradients to obtain a corrected gradient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Our method can be used with most popular sparse training pipelines under both standard and adversarial se- tups.' 
Theoretically, we prove that our method can accelerate the convergence rate of sparse training. Extensive experiments on multiple datasets, model architectures, and sparsities demonstrate that our method outperforms leading sparse training methods by up to 5.0% in accuracy given the same number of training epochs, and reduces the number of training epochs by up to 52.1% to achieve the same accuracy.

1 Introduction

With the development of deep neural networks (DNNs), there is a trend towards larger and more computationally intensive models to enhance task performance. Despite their good performance, such large models are not applicable when memory or computational resources are limited (Bellec et al., 2017; Evci et al., 2020; Liu et al., 2022). In addition, these large models consume a considerable amount of energy and produce a large carbon footprint (Thompson et al., 2021; Patterson et al., 2021; Matus & Veale, 2022). As a result, more research effort is being devoted to finding resource-efficient ways
(e.g., less memory and less compute) to train DNNs while maintaining results comparable to the state of the art (Yu & Li, 2021; Rock et al., 2021; Leite & Xiao, 2021).

Sparse training (Mocanu et al., 2018; Evci et al., 2020; Liu et al., 2022) is one of the most popular classes of methods to improve efficiency in terms of space (e.g., memory storage) and is receiving increasing attention. During sparse training, a certain percentage of connections are removed to save memory (Bellec et al., 2017; Evci et al., 2020). Sparse patterns, which describe where connections are retained or removed, are iteratively updated with various criteria (Dettmers & Zettlemoyer, 2019; Evci et al.,
2020; Liu et al., 2021; Özdenizci & Legenstein, 2021). The goal is to find a resource-efficient sparse neural network (i.e., removing some connections) with comparable or even higher performance than the original dense model (i.e., keeping all connections).

However, sparse training can bring some side effects to the training process, especially in the case of high sparsity (e.g., 99% of weights are zero). First, sparsity can increase the variance of stochastic gradients, leading the model to move in a sub-optimal direction and hence converge slowly (Hoefler et al., 2021; Graesser et al., 2022). As shown in Figure 1 (a), we empirically see that the gradient variance grows with increasing sparsity (more details in Section C.1). Second, it can result in training instability (i.e.,
a noisy trajectory of test accuracy w.r.t. iterations) (Sehwag et al., 2020; Bartoldson et al., 2020), which requires additional time to compensate for the accuracy drop, resulting in slow convergence (Xiao et al., 2019). Additionally, model robustness needs to be considered during sparse training in order to apply it to a wide range of real-world scenarios, where dataset shifts are a common challenge (Ye et al., 2019; Hoefler et al., 2021; Kundu et al., 2021; Özdenizci & Legenstein, 2021). To address these issues, we raise the following questions:

Question 1. How to simultaneously improve the convergence speed and training stability of sparse training?

Prior gradient correction methods, such as variance reduction (Zou et al., 2018; Chen et al.,
2019; Gorbunov et al., 2020), are used to accelerate and stabilize dense training, but we find that they fail in sparse training. They usually assume that the current and previous gradients are highly correlated, and therefore add a large, constant amount of the previous gradients to correct the gradient (Dubey et al., 2016; Chatterji et al., 2018; Chen et al., 2019). However, this assumption does not hold in sparse training. Figure 1 (b) shows the gradient correlation at different sparsities: the gradient correlation decreases with increasing sparsity (more details in Section C.1), which breaks the balance between the current and previous gradients. Therefore, we propose to adaptively change the weights of the previous and current gradients based on their correlation, adding an appropriate amount of the previous gradients.

Question 2. How to design an accelerated and stabilized sparse training method that is effective in real-world scenarios with dataset shifts?

Real-world applications are under-studied in sparse training. Prior methods use adversarial training to improve model robustness and address the challenge of dataset shifts, which usually introduces additional bias beyond the variance in the gradient estimation (Li et al.,
2020), increasing the difficulty of gradient correction (more details in Section 4.2). Thus, to more accurately approximate the full gradient, especially in the adversarial setup, we design a scaling strategy that controls the weights of the two gradients, determining the amount of previous gradient information to be added to the current gradient, which helps the balance and further accelerates convergence.

In this work, we propose an adaptive gradient correction (AGENT) method to accelerate and stabilize sparse training for both standard and adversarial setups. Theoretically, we prove that our method can accelerate the convergence rate of sparse training. Empirically, we perform extensive experiments on multiple benchmark datasets, model architectures, and sparsities. In both standard and adversarial setups, our method improves the accuracy by up to 5.0% given the same number of epochs and reduces the number of epochs by up to 52.1% to achieve the same performance compared to the leading sparse training methods. In contrast to previous efforts on sparse training acceleration, which mainly focus on structured sparse patterns, our method is compatible with both unstructured and structured sparse training pipelines (Hubara et al., 2021; Chen et al., 2021).

2 Related Work

2.1 Sparse Training

Interest in sparse DNNs has been on the rise recently, especially when dealing with resource constraints.
The goal is to achieve comparable performance with sparse weights to satisfy the constraints. Different sparse training methods have emerged, where sparse weights are maintained throughout the training process.

[Figure 1: Gradient variance (a) and gradient correlation (b) of models obtained by RigL and SET at different sparsities, including 0% (dense), 50%, 80%, 90%, and 95%. Gradient variance grows with increasing sparsity. Gradient correlation drops with increasing sparsity. The sparse models have larger gradient variance and smaller gradient correlation compared to dense models.]
Various pruning and growth criteria have been proposed, such as weight/gradient magnitude, random selection, and weight sign (Mocanu et al., 2018; Bellec et al., 2018; Frankle & Carbin, 2019; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2020; Jayakumar et al., 2020; Liu et al., 2021; Özdenizci & Legenstein, 2021; Zhou et al., 2021b; Schwarz et al., 2021; Huang et al., 2022; Liu et al., 2022).
However, the aforementioned studies focus on improving the performance of sparse training while neglecting its side effects. Sparsity not only increases gradient variance, thus delaying convergence (Hoefler et al., 2021; Graesser et al., 2022), but also leads to training instability (Bartoldson et al., 2020). It is a challenge to achieve both space and time efficiency. Additionally, sparse training can exacerbate models' vulnerability to adversarial samples, which is one of the weaknesses of DNNs (Özdenizci & Legenstein, 2021). When the model encounters intentionally manipulated data, its performance may deteriorate rapidly, raising security concerns (Rakin et al., 2019; Akhtar & Mian, 2018). In this paper, we focus on sparse training. In general, our method can be applied to any SGD-based sparse training pipeline.

2.2 Accelerating Training

Studies have been conducted in recent years on how to achieve time efficiency in DNNs, and one popular direction is to obtain a more accurate gradient estimate to update the model (Gorbunov et al., 2020), such as variance reduction. SGD is the most common training method, where one uses small batches of data to approximate the full gradient.
In standard training, the batch estimator is unbiased but can have a large variance that misguides the model, which has motivated studies on variance reduction (Johnson & Zhang, 2013; Xiao & Zhang, 2014; Shang et al., 2018; Zou et al., 2018; Chen et al., 2019; Gorbunov et al., 2020). Adversarial training, in contrast, introduces bias into the gradient estimation (Li et al., 2020), so gradient correction must also face a bias-variance tradeoff. A shared idea is to balance the gradient noise with a less-noisy old gradient (Nguyen et al., 2017; Fang et al., 2018; Chen et al., 2019). Some other momentum-based methods adopt a similar strategy of using old information (Cutkosky & Orabona, 2019; Chayti & Karimireddy, 2022). However, all the above work considers acceleration only in the non-sparse case.
Acceleration is more challenging in sparse training, and previous research on it has focused on structured sparse training (Hubara et al., 2021; Chen et al., 2021; Zhou et al., 2021a). First, sparse training induces larger variance (Hoefler et al., 2021). In addition, some key assumptions of gradient correction methods do not hold under the sparsity constraint. In the non-sparse case, the old and new gradients are assumed to be highly correlated, so a large amount of knowledge can be collected from the old gradients (Chen et al., 2019; Chatterji et al., 2018; Dubey et al., 2016). However, sparsity tends to lead to lower correlations, and this irrelevant information can be harmful, making previous methods no longer applicable to sparse training and requiring a finer balance between the new and old gradients. Furthermore, structured sparsity patterns are not flexible enough, which can lead to lower model accuracy. In contrast, our method accelerates sparse training from an optimization perspective and is compatible with both unstructured and structured sparse training pipelines.
3 Preliminaries: Stochastic Variance Reduced Gradient

Stochastic variance reduced gradient (SVRG) (Johnson & Zhang, 2013; Allen-Zhu & Hazan, 2016; Dubey et al., 2016) is a widely-used gradient correction method designed to obtain more accurate gradient estimates, which has been followed by many studies (Zou et al., 2018; Baker et al., 2019; Chen et al., 2019). Specifically, after each epoch of training, we evaluate the full gradient g̃ based on the parameters θ̃ at that time and store it for later use. In the next epoch, the batch gradient estimate on B_t is updated using the stored old gradients via Eq. (1):

    \hat{g}(\theta_t) = \frac{1}{n} \sum_{i \in B_t} \left( g_i(\theta_t) - g_i(\tilde{\theta}) \right) + \tilde{g},    (1)

where g_i(\theta_t) = \nabla G(x_i \mid \theta_t), G(\theta_t) = \frac{1}{N} \sum_{i=1}^{N} G(x_i \mid \theta_t) is the loss function, \tilde{g} = \nabla G(\tilde{\theta}), θ_t denotes the current parameters, n is the number of samples in each mini-batch, and N is the total number of samples. SVRG successfully accelerates many training tasks in the non-sparse case but, like many other gradient correction methods, does not work well in sparse training.
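To make Eq. (1) concrete, the following is a minimal, self-contained sketch of the SVRG inner loop on a least-squares problem; the quadratic model, data, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d, n = 1000, 10, 32                      # dataset size, dimension, batch size
    X, y = rng.normal(size=(N, d)), rng.normal(size=N)

    def grad(theta, idx):
        """Mean squared-loss gradient over the samples in idx."""
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ theta - yb) / len(idx)

    theta = np.zeros(d)
    eta, epochs, steps_per_epoch = 0.01, 5, N // n
    for _ in range(epochs):
        theta_snap = theta.copy()               # snapshot parameters (theta tilde)
        g_full = grad(theta_snap, np.arange(N)) # stored full gradient (g tilde)
        for _ in range(steps_per_epoch):
            idx = rng.choice(N, size=n, replace=False)
            # Eq. (1): batch gradient corrected by the snapshot gradients
            g_hat = grad(theta, idx) - grad(theta_snap, idx) + g_full
            theta -= eta * g_hat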
4 Method

We propose an adaptive gradient correction (AGENT) method and integrate it with recent sparse training pipelines to achieve acceleration and improve training stability. Specifically, to accomplish this goal, our AGENT filters out less relevant information and obtains a well-controlled, time-varying amount of knowledge from the old gradients. Our method overcomes the limitations of previous acceleration methods such as SVRG (Allen-Zhu & Hazan, 2016; Dubey et al., 2016; Elibol et al., 2020), and successfully accelerates and stabilizes sparse training. We illustrate each part of our method in the following sections. Our AGENT method is outlined in Algorithm 1.

4.1 Adaptive Control over Old Gradients

In AGENT, we design an adaptive addition of old gradients to new gradients to filter out less relevant information and achieve a balance between the new and old gradients. Specifically, we add an adaptive weight c_t ∈ [0, 1] to the old gradient as shown in Eq. (2), where we use g_{new} = \frac{1}{n} \sum_{i \in B_t} g_i(\theta_t) and g_{old} = \frac{1}{n} \sum_{i \in B_t} g_i(\tilde{\theta}) to denote the gradients at the current parameters θ_t and the previous parameters θ̃ for a random subset B_t, respectively. When the old and new gradients are highly correlated, we need a large c_t to get more useful information from the old gradient. Conversely, when the relevance is low, we need a smaller c_t so that we do not let irrelevant information corrupt the new gradient:

    \hat{g}(\theta_t) = \frac{1}{n} \sum_{i \in B_t} \left( g_i(\theta_t) - c_t \cdot g_i(\tilde{\theta}) \right) + c_t \cdot \tilde{g} = g_{new} - c_t \cdot g_{old} + c_t \cdot \tilde{g}.    (2)

A suitable c_t should effectively reduce the variance of \hat{g}(\theta_t).
To understand how c_t influences the variance of the updated gradient, we decompose the variance of \hat{g}(\theta_t) in Eq. (3) with some abuse of notation, where the variance of the updated gradient is a quadratic function of c_t. For simplicity, considering the case where \hat{g}(\theta_t) is a scalar, the optimal c_t^* takes the form in Eq. (3):

    \mathrm{Var}(\hat{g}(\theta_t)) = \mathrm{Var}(g_{new}) + c_t^2 \cdot \mathrm{Var}(g_{old}) - 2 c_t \cdot \mathrm{Cov}(g_{new}, g_{old}), \qquad c_t^* = \frac{\mathrm{Cov}(g_{new}, g_{old})}{\mathrm{Var}(g_{old})}.    (3)

As we can see, c_t^* is not close to 1 when the new gradient is not highly correlated with the old gradient. Since low correlation between g_{new} and g_{old} is more common in sparse training, directly setting c_t = 1 as in previous methods is not appropriate, and we need to estimate adaptive weights c_t^*. In support of this claim, we include a discussion and empirical analysis in Appendix B.6 to demonstrate that as sparsity increases, the gradient changes faster, leading to lower correlations between g_{new} and g_{old}.

We find it impractical to compute the exact c_t^* and thus propose an approximation algorithm for it to obtain a balance between the new and old gradients. There are two challenges in calculating the exact c_t^*. On the one hand, to approach the exact value, we would need to calculate the gradients on every batch of data, which is too expensive to do in each iteration. On the other hand, the gradients are often high-dimensional, and the exact optimal c_t^* differs across gradients. Thus, inspired by Deng et al. (2020), we design an approximation algorithm that makes good use of the loss information and leads to only a small increase in computational effort.
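As a quick numerical sanity check on Eq. (3), the snippet below draws correlated (g_{new}, g_{old}) samples and compares the variance of the corrected estimator at c = 0, c = c_t^*, and c = 1; the correlation value 0.3 is an illustrative stand-in for the low correlations observed under sparsity.

    import numpy as np

    # Var(g_new - c * g_old + c * E[g_old]) is quadratic in c, minimized at
    # c* = Cov(g_new, g_old) / Var(g_old). All numbers here are illustrative.
    rng = np.random.default_rng(1)
    rho = 0.3                                   # low correlation, as in sparse training
    cov = np.array([[1.0, rho], [rho, 1.0]])    # covariance of (g_new, g_old)
    g = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
    g_new, g_old = g[:, 0], g[:, 1]

    c_star = np.cov(g_new, g_old)[0, 1] / np.var(g_old)
    for c in [0.0, c_star, 1.0]:
        # subtracting the sample mean stands in for adding back c * E[g_old]
        corrected = g_new - c * (g_old - g_old.mean())
        print(f"c = {c:.2f}, Var = {corrected.var():.3f}")
    # With rho = 0.3, c = 1 (the plain SVRG choice) *increases* the variance
    # above Var(g_new), while c = c* reduces it; hence the adaptive weight.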
More specifically, we estimate c_t^* according to the changes of the loss as shown in Eq. (4) and update c̃_t^* adaptively before each epoch using momentum. The loss is a scalar, which makes it possible to estimate a shared correlation for all current and previous gradients. In addition, the loss is intuitively related to the gradients, and the correlation between losses can give us some insight into that of the gradients (some empirical analyses are included in Appendix B.7):

    \tilde{c}_t^* = \frac{\mathrm{Cov}(G(B \mid \theta_t), G(B \mid \tilde{\theta}))}{\mathrm{Var}(G(B \mid \tilde{\theta}))},    (4)

where B denotes a subset of samples used to estimate the gradients.

Algorithm 1 Adaptive Gradient Correction
Input: θ̃ = θ_0, epoch length m, step size η_t, c_0 = 0, scaling parameter γ, smoothing factor α
for t = 0 to T - 1 do
    if t mod m = 0 then
        θ̃ = θ_t
        g̃ = (1/N) Σ_{i=1}^{N} ∇G(x_i | θ̃)
        if t > 0 then
            Calculate ĉ_t^* via Eq. (4)
            c_t = (1 - α) c_{t-1} + α ĉ_t^*
        end if
    else
        c_t = c_{t-1}
    end if
    Sample a mini-batch B_t of size n
    θ_{t+1} = θ_t - η_t · [ (1/n) Σ_{i ∈ B_t} ( g_i(θ_t) - γ c_t · g_i(θ̃) ) + γ c_t · g̃ ]
end for

We empirically justify the loss-based approximation in Eq. (4); experimental details are included in Appendix B.7. We compare the approximation c̃_t^* with the correlation between the gradient at the current weights and the gradient at the previous epoch's weights. We find that c̃_t^* and the correlation have similar up-and-down patterns, indicating that our approximation captures the dynamic patterns of the correlation. The differences in magnitude can be matched by the scaling strategy we describe in Section 4.2.
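The following NumPy sketch instantiates Algorithm 1 end-to-end on the same least-squares setup as the SVRG sketch above. The placement of the Eq. (4) estimate (computed against the previous snapshot, before it is overwritten) and the clipping of c_t to [0, 1] are our reading of the algorithm, not the authors' exact implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d, n = 1000, 10, 32
    X, y = rng.normal(size=(N, d)), rng.normal(size=N)

    def grad(theta, idx):
        """Mean squared-loss gradient over the samples in idx."""
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ theta - yb) / len(idx)

    def per_sample_loss(theta, idx):
        """Per-sample losses G(x_i | theta), used for Eq. (4)."""
        return 0.5 * (X[idx] @ theta - y[idx]) ** 2

    theta = np.zeros(d)
    eta, gamma, alpha = 0.01, 0.1, 0.5     # step size, scaling gamma, smoothing alpha
    m = N // n                             # epoch length (iterations per epoch)
    c, theta_snap, g_full = 0.0, theta.copy(), grad(theta, np.arange(N))
    for t in range(5 * m):
        if t % m == 0 and t > 0:
            # Eq. (4): loss-based estimate of c_t^* against the old snapshot
            B = rng.choice(N, size=n, replace=False)
            cv = np.cov(per_sample_loss(theta, B), per_sample_loss(theta_snap, B))
            c = (1 - alpha) * c + alpha * np.clip(cv[0, 1] / cv[1, 1], 0.0, 1.0)
            theta_snap = theta.copy()              # new snapshot (theta tilde)
            g_full = grad(theta_snap, np.arange(N))
        idx = rng.choice(N, size=n, replace=False)
        # AGENT update: adaptive weight c, scaled by gamma (Algorithm 1)
        g_hat = grad(theta, idx) - gamma * c * grad(theta_snap, idx) + gamma * c * g_full
        theta -= eta * g_hat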
4.2 Additional Scaling Parameter is Important

To guarantee successful acceleration in sparse and adversarial training, we further propose a scaling strategy that multiplies the estimated c_t^* by a small scaling parameter γ. There are two main benefits of using a scaling parameter.

First, the scaling parameter γ can reduce the bias of the gradient estimates in adversarial training (Li et al., 2020). In standard training, the batch gradient estimator is an unbiased estimator of the full gradient. However, in adversarial training, we perturb the mini-batch of samples B_t into B̄_t. The old gradients g_{old} are calculated on the perturbed batch B̄_t, but the stored old gradients g̃ are obtained from the original data including B_t, which makes E[g_{old} - g̃] nonzero. Consequently, as shown in Eq. (5), the corrected estimator of the full gradient is no longer unbiased. It may have a small variance but a large bias, resulting in poor performance. Therefore, we propose a scaling parameter γ between 0 and 1 to reduce the bias from c_t (g_{old} - g̃) to γ c_t (g_{old} - g̃):

    \mathbb{E}[\hat{g}(\theta_t)] = \mathbb{E}[g_{new} - c_t (g_{old} - \tilde{g})] \neq \mathbb{E}[g_{new}] = \frac{1}{N} \sum_{i=1}^{N} g_i(\theta_t).    (5)
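To make the bias argument of Eq. (5) concrete, the toy simulation below gives the correction term a nonzero mean gap, standing in for E[g_{old} - g̃] ≠ 0 under adversarial perturbation, and shows that scaling by γ shrinks the resulting bias by exactly a factor of γ; all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    true_grad = 1.0                 # the full gradient, i.e. E[g_new]
    mean_gap = 0.5                  # E[g_old - g_tilde] != 0 under perturbation
    c, gamma, trials = 0.8, 0.1, 100_000

    g_new = true_grad + rng.normal(0.0, 1.0, trials)
    gap = mean_gap + rng.normal(0.0, 1.0, trials)   # samples of g_old - g_tilde

    for scale, name in [(c, "c_t"), (gamma * c, "gamma * c_t")]:
        est = g_new - scale * gap                   # corrected estimator, Eq. (5)
        print(f"{name:12s} bias = {est.mean() - true_grad:+.3f}")
    # The bias is -scale * mean_gap, so gamma = 0.1 cuts it from -0.4 to -0.04.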
The key idea is illustrated in Figure 2, where the x- and y-axis correspond to the weight $c_t$ and the gradient variance, respectively. The blue curve is a quadratic function representing the relationship between $c_t$ and the variance. Suppose the true optimum is $c^*$, and we approximate it. In the worst case, this approximation may be as bad as $\hat{c}_1$, making the variance even larger than $a_3$ (the variance of SGD) and slowing down training. If we instead replace $\hat{c}_1$ with $\gamma \hat{c}_1$, we reduce the variance and accelerate the training.

[Figure 2 omitted: plot of the quadratic $y = a_1 c^2 - 2 a_2 c + a_3$ with the points $c^*$, $\hat{c}_1$, and $\gamma \hat{c}_1$ marked, and $a_3$ as the SGD baseline.]
Figure 2: Illustration of how the scaling parameter $\gamma = 0.1$ ensures acceleration in the face of a worst-case estimate of $c_t^*$. The blue curve is a quadratic function representing the relationship between $c_t$ and the variance; $c^*$ is the optimal value; $\hat{c}_1$ is a poor estimate that makes the variance larger than $a_3$ (the variance of SGD); $\gamma \hat{c}_1$ reduces the variance.
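A quick evaluation of that quadratic makes the worst-case argument concrete; the coefficients and the poor estimate below are arbitrary assumptions, chosen only so that $\hat{c}_1$ overshoots the optimum.

a1, a2, a3 = 1.0, 0.4, 1.0                     # assumed curve coefficients
y = lambda c: a1 * c**2 - 2 * a2 * c + a3      # variance as a function of c

c_star = a2 / a1                               # minimizer of the quadratic
c1_hat, gamma = 1.2, 0.1                       # worst-case estimate and scaling

print(f"SGD baseline     y(0)          = {y(0.0):.3f}")
print(f"optimal          y(c*)         = {y(c_star):.3f}")
print(f"poor estimate    y(c1)         = {y(c1_hat):.3f}  # worse than SGD")
print(f"scaled estimate  y(gamma * c1) = {y(gamma * c1_hat):.3f}  # below SGD again")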
4.3 Connection to Momentum-Based Methods

To some extent, our AGENT is designed with an idea similar to momentum-based methods (Qian, 1999; Ruder, 2016), where old gradients are used to improve the current batch gradient. However, momentum-based methods still suffer from optimization difficulties under sparsity constraints. The reason is that they do not take into account characteristics of sparse and adversarial training, such as the reduced correlation between current and previous gradients and the potential bias of the gradient estimator, and so they fail to provide an adaptive balance between old and new information. When the correlation is low, a momentum-based method can still incorporate too much of the old information and increase the gradient variance or bias. In contrast, our AGENT is designed for sparse and adversarial training and establishes finer adaptive control over how much information we should take from the old gradients to help the new ones.

4.4 Connection to Adaptive Gradient Methods

Our AGENT can be viewed as a new type of adaptive gradient method, in that it adaptively adjusts the amount of gradient information used to update parameters, as methods such as Adam do (Kingma & Ba, 2014). However, previous adaptive gradient methods are not designed for sparse training: although they also provide adaptive gradients, their adaptivity is of a different kind and does not take the reduced correlation into account. On the contrary, our AGENT is tailored to the characteristics of sparse training. When old information is used to correct the gradients, the main problem is the reduced correlation between the old and new gradients. Therefore, our AGENT approximates this correlation and adds an adaptive weight to the old gradient to establish a balance between the old and new gradients.

5 Theoretical Justification

Theoretically, we provide a convergence analysis for our AGENT and compare it to SVRG (Reddi et al., 2016). We use $G(\cdot)$ to denote the loss function and $g$ to denote the gradient.
Our proof is based on Assumptions 1–2; the detailed derivation is included in Appendix A.

Assumption 1. (L-smooth): The differentiable loss function $G : \mathbb{R}^n \to \mathbb{R}$ is $L$-smooth, i.e., for all $x, y \in \mathbb{R}^n$ it satisfies $\|\nabla G(x) - \nabla G(y)\| \leq L \|x - y\|$. An equivalent definition is, for all $x, y \in \mathbb{R}^n$:
$$-\frac{L}{2} \|x - y\|^2 \leq G(x) - G(y) - \langle \nabla G(x), x - y \rangle \leq \frac{L}{2} \|x - y\|^2.$$

Assumption 2. ($\sigma$-bounded): The loss function $G$ has a $\sigma$-bounded gradient, i.e., $\|\nabla G_i(x)\| \leq \sigma$ for all $i \in [N]$ and $x \in \mathbb{R}^n$.

Our convergence analysis proceeds in four steps. First, we show that an appropriate choice of $c_t$ results in a smaller variance of our gradient estimates compared to SVRG. Next, we derive the convergence rate of one arbitrary training epoch. We then extend the one-epoch result to the whole training process. Finally, we bring the resulting rate to the practical setting of sparse learning and find that our method indeed yields a tighter bound.
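Before stating the main result, a quick numerical check of Assumption 1 on a toy quadratic loss may help build intuition; the matrix and test points below are arbitrary assumptions, and for a quadratic $G(x) = \frac{1}{2} x^\top A x$ the smoothness constant is the largest eigenvalue of $A$.

import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = M @ M.T                                # symmetric PSD Hessian
L = np.linalg.eigvalsh(A).max()            # smoothness constant of G

G = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

for _ in range(3):
    x, y = rng.normal(size=5), rng.normal(size=5)
    # first characterization: the gradient is L-Lipschitz
    assert np.linalg.norm(grad(x) - grad(y)) <= L * np.linalg.norm(x - y) + 1e-9
    # equivalent characterization: |G(x) - G(y) - <grad G(x), x - y>| <= (L/2)||x - y||^2
    gap = abs(G(x) - G(y) - grad(x) @ (x - y))
    assert gap <= 0.5 * L * np.dot(x - y, x - y) + 1e-9
print(f"Assumption 1 holds on the toy quadratic with L = {L:.3f}")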
Given Assumptions 1–2, we follow the analysis framework above and establish Theorem 1 to show the convergence rate of our AGENT.

Theorem 1. Under Assumptions 1–2, with a proper choice of step size $\eta_t$ and $c_t$, the gradient $\mathbb{E}[\|g(\theta_\pi)\|^2]$ using AGENT after $T$ training epochs can be bounded by
$$\mathbb{E}[\|g(\theta_\pi)\|^2] \leq \frac{\big(G(\theta_0) - G(\theta^*)\big) L N^{\alpha}}{T n \nu} + \frac{2 \kappa \mu^2 \sigma^2}{N^{\alpha} m \nu},$$
where $\theta_\pi$ is sampled uniformly from $\{\{\theta_t^s\}_{t=0}^{m-1}\}_{s=0}^{T-1}$, $N$ denotes the data size, $n$ the mini-batch size, and $m$ the epoch length; $\theta_0$ is the initial point and $\theta^*$ is the optimal solution; and $\nu, \mu, \kappa, \alpha > 0$ are constants depending on $\eta_t$ and $c_t$, $N$, and $n$.

In regard to Theorem 1, we make the following remarks to justify the acceleration achieved by our AGENT.

Remark 1. (Faster Gradient Change Speed) An influential difference between sparse and dense training is the speed at which gradients change, which is reflected in Assumption 1 (L-smooth). Typically, $L$ in sparse training is larger than $L$ in dense training.

Remark 2. (First Term Analysis) In Theorem 1, the first term of the bound measures the error introduced by deviations from the optimal parameters, and it goes to zero as the number of epochs $T$ goes to infinity. However, in real sparse training applications $T$ is finite, and this term is inflated by the larger $L$ of sparse training, which implies that optimization under sparsity constraints is more challenging.
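To get a feel for the relative size of the two terms (anticipating Remark 3 below), the following arithmetic plugs purely illustrative constants into the bound; none of these values come from the paper.

N, n, T = 50_000, 128, 200            # data size, batch size, training epochs
m = N // n                            # epoch length (updates per epoch)
alpha, nu, kappa, mu, sigma = 0.5, 1.0, 1.0, 1.0, 0.1
gap = 10.0                            # assumed initial gap G(theta_0) - G(theta_*)

def bound_terms(L):
    first = gap * L * N**alpha / (T * n * nu)                      # deviation error
    second = 2 * kappa * mu**2 * sigma**2 / (N**alpha * m * nu)    # noise error
    return first, second

for label, L in (("dense-like L = 10", 10.0), ("sparse-like L = 50", 50.0)):
    first, second = bound_terms(L)
    print(f"{label}: first term = {first:.3g}, second term = {second:.3g}")

Under these assumed constants the second term is on the order of $10^{-7}$, while the first term is of order one and grows linearly with $L$, matching Remarks 1–2.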
Remark 3. (Second Term Analysis) In Theorem 1, the second term measures the error introduced by the noisy gradient and the finite data during optimization. Since $\sigma^2$ is relatively small and $N$ is usually large when training DNNs, the second term is negligible or much smaller than the first term when $T$ is finite.

From the above analysis, we can compare the bounds of AGENT and SVRG and find that, in the case of sparse training, an appropriate choice of $c_t$ makes the bound for our AGENT tighter than the bound for SVRG through well-corrected gradients.

Remark 4. (Comparison with SVRG) Under Assumptions 1–2, the gradient $\mathbb{E}[\|g(\theta_\pi)\|^2]$ using SVRG after $T$ training epochs can be bounded by (Reddi et al., 2016):
$$\mathbb{E}[\|g(\theta_\pi)\|^2] \leq \frac{\big(G(\theta_0) - G(\theta^*)\big) L N^{\alpha}}{T n \nu^*}.$$
This bound has the same form as the first term in Theorem 1. Since the second term of Theorem 1 is negligible or much smaller than the first, we only need to compare the first terms. With a proper choice of $c_t$, the variance of $\hat{g}(\theta_t)$ decreases, which leads to a smaller $\nu$ for AGENT than $\nu^*$ for SVRG (a detailed proof is included in Appendix A, Remark 6). Thus, AGENT yields a smaller first term than SVRG, which indicates that AGENT effectively reduces the error due to the deviations and enjoys a tighter bound than SVRG.

6 Experiments

We add our AGENT to four recent sparse training pipelines, namely SET (Mocanu et al., 2018), RigL (Evci et al., 2020), BSR-Net (Özdenizci & Legenstein, 2021), and ITOP (Liu et al., 2021). SET is a broadly used sparse training method that prunes and regrows connections by examining the magnitude of the weights. RigL is another popular dynamic sparse training method, which uses weight and gradient magnitudes to learn the connections. BSR-Net is a recent sparse training method that updates connections by Bayesian sampling and also includes adversarial setups for model robustness. ITOP is another recent pipeline for dynamic sparse training, which uses sufficient and reliable parameter exploration to achieve in-time over-parameterization and find well-performing sparse models.
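As a concrete illustration of the prune-and-regrow cycle these pipelines share, here is a minimal sketch of one magnitude-based update on a single weight matrix in the spirit of SET (random regrowth); the drop fraction, sizes, and zero-valued regrown weights are illustrative assumptions, and RigL would instead regrow at the positions with the largest gradient magnitudes.

import torch

torch.manual_seed(0)
W = torch.randn(64, 64)
mask = (torch.rand_like(W) < 0.1).float()        # ~90% sparse connectivity
W = W * mask
drop_frac = 0.3                                  # fraction of active weights to drop

# Prune: remove the smallest-magnitude fraction of the active weights.
active = mask.bool()
k = int(drop_frac * active.sum().item())
thresh = W[active].abs().kthvalue(k).values
mask[active & (W.abs() <= thresh)] = 0.0

# Regrow: reactivate the same number of currently inactive positions at random.
inactive = (mask == 0).nonzero()
pick = inactive[torch.randperm(len(inactive))[:k]]
mask[pick[:, 0], pick[:, 1]] = 1.0
W = W * mask                                     # regrown connections start at zero here
print(f"sparsity after update: {(mask == 0).float().mean():.2%}")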
Table 1: Testing accuracy (%) of BSR-Net-based models. Sparse VGG-16 models are learned in standard and adversarial setups. Results are presented as clean/robust accuracy (%). For the same number of training epochs, our method has higher accuracy compared to BSR-Net in almost all cases.

                           90% Sparsity                          99% Sparsity
          Epoch     BSR-Net            Ours               BSR-Net            Ours
AT        20-th     55.0 (1.59)/38.2   63.6 (1.31)/37.3   49.8 (1.46)/31.0   56.4 (1.39)/31.4
          40-th     62.2 (1.88)/39.2   64.9 (0.81)/37.9   54.1 (1.72)/33.9   57.7 (0.39)/34.5
          70-th     73.1 (0.39)/37.8   75.1 (0.27)/45.2   64.7 (0.30)/34.9   66.0 (0.23)/39.4
          90-th     73.2 (0.29)/33.6   74.1 (0.25)/44.8   63.7 (0.25)/35.8   65.8 (0.24)/39.8
          140-th    76.7 (0.27)/46.5   77.4 (0.26)/43.8   68.4 (0.20)/40.8   69.8 (0.14)/41.2
          200-th    76.6 (0.25)/43.3   78.1 (0.24)/44.6   69.0 (0.15)/42.2   70.7 (0.06)/42.0
TRADES    20-th     62.0 (0.82)/33.3   65.0 (0.61)/37.6   55.7 (0.76)/25.5   57.6 (0.45)/31.6
          40-th     65.4 (0.97)/35.3   66.0 (0.34)/37.2   60.6 (0.69)/28.9   58.4 (0.34)/33.4
          70-th     73.4 (0.52)/34.8   73.5 (0.33)/45.4   66.3 (0.35)/33.5   67.3 (0.30)/39.0
          90-th     73.0 (0.36)/36.8   73.6 (0.28)/44.8   66.2 (0.33)/31.7   67.5 (0.24)/39.1
          140-th    76.4 (0.25)/45.1   76.8 (0.25)/46.3   70.0 (0.29)/38.2   69.9 (0.21)/41.5
          200-th    75.6 (0.23)/47.2   77.0 (0.24)/46.2   70.8 (0.19)/39.3   70.9 (0.25)/41.2
Standard  20-th     70.4 (2.50)/0.0    81.8 (0.62)/0.0    60.6 (1.26)/0.0    69.8 (1.45)/0.0
          40-th     77.6 (1.39)/0.0    82.4 (0.47)/0.0    62.6 (2.47)/0.0    73.7 (0.36)/0.0
          70-th     86.8 (0.78)/0.0    89.7 (0.38)/0.0    79.7 (0.72)/0.0    83.7 (0.24)/0.0
          90-th     87.6 (0.63)/0.0    89.3 (0.22)/0.0    80.5 (0.55)/0.0    83.9 (0.42)/0.0
          140-th    91.7 (0.44)/0.0    92.5 (0.06)/0.0    85.7 (0.42)/0.0    86.9 (0.07)/0.0
          200-th    91.8 (0.23)/0.0    92.6 (0.12)/0.0    85.8 (0.12)/0.0    87.1 (0.25)/0.0

Detailed information about the datasets, model architectures, and other training and evaluation setups is provided below.

Datasets & Model Architectures: The datasets we use include CIFAR-10, CIFAR-100 (Krizhevsky et al.
, 2009), SVHN (Netzer et al., 2011), and ImageNet-2012 (see Appendix) (Russakovsky et al., 2015). For model architectures, we use VGG-16 (Simonyan & Zisserman, 2015), ResNet-18, ResNet-50 (He et al., 2016), and Wide-ResNet-28-4 (Zagoruyko & Komodakis, 2016).

Training Settings: For sparse training, we choose two sparsity levels, namely 90% and 99%. For BSR-Net, we consider both standard and adversarial setups; in RigL and ITOP, we focus on standard training. In standard training, we use only the original data to update the parameters, rather than perturbed samples. For the adversarial part, we use the perturbed data with two popular objectives, AT and TRADES (Madry et al., 2018; Zhang et al., 2019). Following Özdenizci & Legenstein (2021), we evaluate robust accuracy against PGD attacks with random starts using 50 iterations (PGD50) (Madry et al., 2018; Brendel et al., 2019).
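For reference, a PGD50-style robust evaluation can be sketched as follows; the epsilon, step size, and dummy model below are common CIFAR-style assumptions rather than the paper's exact settings.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=50):
    # random start inside the L_inf ball, then iterated sign-gradient ascent
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project
    return x_adv.detach()

# usage sketch with a dummy model and one random batch
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
robust = (model(pgd_attack(model, x, y)).argmax(1) == y).float().mean()
print(f"robust accuracy on this batch: {robust.item():.2f}")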
(2020); Sundar & Dwaraknath (2021); Özdenizci & Legenstein (2021), the parameters of the model are optimized by SGD with momentum. Thus, the comparison between the popular sparse training pipelines can be viewed as a comparison between AGENT and momentum-based SGD.

6.1 Convergence Speed & Stability Comparisons

We compare the convergence speed using two criteria: (a) the test accuracy after the same number of training epochs, and (b) the number of training epochs required to achieve the same test accuracy. Both criteria are widely used to compare the speed of optimization algorithms (Allen-Zhu & Hazan, 2016; Chatterji et al., 2018; Zou et al., 2018; Cutkosky & Orabona, 2019).

[Figure 3: Testing accuracy vs. number of epochs for ITOP-based models at 99% sparsity on CIFAR-10; panels (a) VGG-C (RigL), (b) ResNet-34 (RigL), (c) VGG-C (SET), (d) ResNet-34 (SET). A-RigL-ITOP and A-SET-ITOP (blue curves) converge faster than RigL-ITOP and SET-ITOP (pink curves).]

[Figure 4: Number of training epochs required to achieve a given testing accuracy at 99% sparsity; panels (a) VGG-16, Standard; (b) WRN-28-4, Standard; (c) VGG-16, AT; (d) WRN-28-4, AT. Our A-BSR-Net (blue curves) needs fewer epochs to reach the same accuracy than BSR-Net (pink curves).]

For BSR-Net-based results using criterion (a), Table 1 lists the accuracies on both clean and adversarial samples after 20, 40, 70, 90, 140, and 200 epochs of training, where the higher accuracies are bolded. Sparse VGG-16 models are learned on CIFAR-10 in both standard and adversarial setups. For the standard setup, we only present the clean accuracy. As we can see, our method maintains higher clean and robust accuracies for almost all training epochs and setups, which demonstrates the successful acceleration achieved by our method. In particular, under limited time budgets such as 20 epochs, our A-BSR-Net usually shows dramatic improvements, with clean accuracy gains as high as 11.4%, indicating a significant reduction in early search time. In addition, considering the average accuracy improvement over the 6 time budgets, our method outperforms BSR-Net in accuracy by up to 5.0%.
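Both criteria can be read directly off per-epoch test-accuracy logs. A minimal bookkeeping sketch follows; the epoch budgets and target accuracy shown are illustrative, not values from the paper:

```python
import numpy as np

def compare_speed(acc_a, acc_b, budgets=(20, 40, 70, 90, 140, 200), target=0.90):
    """Compare two per-epoch test-accuracy curves.

    Criterion (a): accuracy reached after the same number of epochs.
    Criterion (b): first epoch at which a given target accuracy is reached.
    """
    acc_a, acc_b = np.asarray(acc_a), np.asarray(acc_b)
    n = min(len(acc_a), len(acc_b))

    # Criterion (a): accuracy at a set of fixed epoch budgets.
    crit_a = {e: (acc_a[e - 1], acc_b[e - 1]) for e in budgets if e <= n}

    # Criterion (b): first epoch whose accuracy meets the target (None if never).
    def first_hit(acc):
        return next((i + 1 for i, a in enumerate(acc) if a >= target), None)

    return crit_a, (first_hit(acc_a), first_hit(acc_b))
```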
Table 2: Final accuracy (%) of RigL-based models at 0% (dense), 90% and 99% sparsity. AGENT + RigL (A-RigL) maintains or even improves the accuracy compared to that of RigL.

                       Dense        90%          99%
CIFAR-10   A-RigL      95.2 (0.24)  95.0 (0.21)  93.1 (0.25)
           RigL        95.0 (0.26)  94.2 (0.22)  92.5 (0.33)
CIFAR-100  A-RigL      72.9 (0.19)  72.1 (0.20)  66.4 (0.14)
           RigL        73.1 (0.17)  71.6 (0.26)  66.0 (0.19)

For ITOP-based results using criterion (a), as shown in Figure 3, the blue curves (A-RigL-ITOP and A-SET-ITOP) are always higher than the pink curves (RigL-ITOP and SET-ITOP), indicating faster training when using our AGENT. In addition, we can see that the pink curves experience severe up-and-down fluctuations, especially in the early stages of training. In contrast, the blue curves are more stable in all the settings, which indicates that AGENT is effective in stabilizing sparse training.

For BSR-Net-based results using criterion (b), Figure 4 depicts the number of training epochs required to achieve a certain accuracy. We can see that the blue curves (A-BSR-Net) are always lower than the pink curves (BSR-Net), and on average our method reduces the number of training epochs by up to 52.1%, indicating faster training when using our proposed A-BSR-Net.

[Figure 5: Testing accuracy vs. number of epochs for ITOP-based models at 99% sparsity on CIFAR-10; panels (a) VGG-C and (b) ResNet-34 compare A-RigL-ITOP, RigL-ITOP + SVRG, and RigL-ITOP; panels (c) VGG-C and (d) ResNet-34 compare A-SET-ITOP, SET-ITOP + SVRG, and SET-ITOP. SVRG (green curves) slows down the training compared to SGD (pink curves), while our AGENT (blue curves) accelerates it.]

6.2 Final Accuracy Comparisons

In addition, we compare the final accuracy after sufficient training. RigL-based results on CIFAR-10/100 are shown in Table 2. Our method A-RigL tends to be the best in almost all the scenarios. For BSR-Net-based results in Table 3, we compare our A-BSR-Net with BSR-Net on SVHN using VGG-16 and WideResNet-28-4 (WRN-28-4), and our method is often the best again. This shows that our AGENT can accelerate sparse training while maintaining or even improving the accuracy.

6.3 Comparison with Other Gradient Correction Methods

Table 3: Final accuracy (%) of BSR-Net-based models at 90% and 99% sparsity on SVHN with adversarial training objectives (TRADES). Our AGENT maintains or even improves the accuracy.

                BSR-Net      Ours
90%  VGG-16     89.4 (0.29)  94.4 (0.25)
     WRN-28-4   92.8 (0.24)  95.5 (0.23)
99%  VGG-16     86.4 (0.25)  90.9 (0.26)
     WRN-28-4   89.5 (0.22)  92.2 (0.19)

We also compare our AGENT with SVRG (Baker et al., 2019), a popular gradient correction method in the non-sparse case. The presented ITOP-based results are based on sparse (99%) VGG-C and ResNet-34 on CIFAR-10. Figure 5 (a)-(b) shows the testing accuracy of A-RigL-ITOP (blue), RigL-ITOP (pink), and RigL-ITOP+SVRG (green) at different epochs. We can see that the green curve for RigL-ITOP+SVRG is often lower than the other two curves, indicating that model convergence is slowed down by SVRG. The blue curve for our A-RigL-ITOP is always on top of the pink curve for RigL-ITOP and also smoother than the green curve for RigL-ITOP+SVRG, indicating successful acceleration and stabilization. The SET-ITOP-based results depicted in Figure 5 (c)-(d) show a similar pattern: the green curve (SET-ITOP+SVRG) is often lower than the blue (A-SET-ITOP) and pink (SET-ITOP) curves. This demonstrates that SVRG does not work for sparse training, while our AGENT overcomes its limitations, leading to accelerated and stabilized sparse training.
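For reference, SVRG corrects each minibatch gradient using gradients evaluated at periodically stored snapshot weights; in AGENT's terms it roughly corresponds to fixing the weight on the old-gradient term to 1. A minimal PyTorch-style sketch of the baseline estimator (helper and variable names are ours, not the authors' implementation):

```python
import torch

def batch_grad(model, x, y, loss_fn):
    """Gradient of the minibatch loss w.r.t. all model parameters."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return [p.grad.detach().clone() for p in model.parameters()]

def svrg_gradient(model, snapshot, mu, x, y, loss_fn):
    """SVRG-corrected minibatch gradient:
        g = g_i(w) - g_i(w_snap) + mu,
    where `snapshot` holds the periodically stored weights and `mu` is the
    full-data gradient computed at those weights. In AGENT's notation this
    is the special case where the weight c_t on the old-gradient term is 1.
    """
    g_new = batch_grad(model, x, y, loss_fn)     # current weights, current batch
    g_old = batch_grad(snapshot, x, y, loss_fn)  # snapshot weights, same batch
    return [gn - go + m for gn, go, m in zip(g_new, g_old, mu)]
```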
6.4 Comparison with Other Adaptive Gradient Methods

We also compare our AGENT with other adaptive gradient methods, where we take Adam (Kingma & Ba, 2014) as an example. As shown in Figure 6, AGENT-RigL-ITOP and AGENT-SET-ITOP (blue curves) are usually above Adam-RigL-ITOP and Adam-SET-ITOP (pink curves), indicating that our AGENT converges faster than Adam. This demonstrates the importance of using correlation in sparse training to balance old and new information.

[Figure 6: Testing accuracy vs. number of epochs for ITOP-based models at 99% sparsity on CIFAR-10; panels (a) VGG-C (RigL), (b) ResNet-34 (RigL), (c) VGG-C (SET), (d) ResNet-34 (SET). AGENT-based training (blue curves) converges faster than Adam-based training (pink curves).]

6.5 Combination with Other Gradient Correction Methods

In addition to working with SVRG, our AGENT can be combined with other gradient correction methods to achieve sparse training acceleration, such as the momentum-based variance reduction method (MVR) (Cutkosky & Orabona, 2019).
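For context, MVR corrects the current minibatch gradient with the previous step's estimate rather than with a stored snapshot. A minimal sketch of the estimator itself (the momentum parameter value is illustrative):

```python
def mvr_gradient(g_new, g_prev, d_prev, a=0.1):
    """Momentum-based variance reduction (MVR/STORM-style) estimator:
        d_t = g(w_t) + (1 - a) * (d_{t-1} - g(w_{t-1})),
    where g(w_t) and g(w_{t-1}) are evaluated on the *same* minibatch.
    a = 1 recovers plain SGD; smaller a retains more of the corrected
    history. AGENT reweights such correction terms analogously.
    """
    return [gn + (1.0 - a) * (d - gp) for gn, gp, d in zip(g_new, g_prev, d_prev)]
```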
We train a 99% sparse SET-ITOP-based VGG-C using MVR and AGENT+MVR, respectively. As shown in Table 4, AGENT+MVR usually achieves higher test accuracy than MVR across different numbers of training epochs (20, 40, 70, 90, 140, and 200), which demonstrates both the acceleration effect and the generality of our AGENT.

Table 4: Testing accuracy (%) comparisons between MVR and AGENT+MVR. AGENT can accelerate MVR in sparse training.

             20-th  40-th  70-th  90-th  140-th  200-th
MVR          62.6   66.8   69.8   71.2   73.5    74.4
AGENT+MVR    71.6   75.7   77.9   79.1   82.3    82.3

6.6 Ablation Studies

We demonstrate the importance of each component of our AGENT by removing them one at a time and comparing the results. Specifically, we examine the contributions of the time-varying weight c_t on the old gradients and of the scaling parameter γ.
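The exact corrected gradient is given by Eq. (4) earlier in the paper; to make the roles of c_t and γ concrete, the following is one illustrative reading with our own variable names, showing an SVRG-style combination whose old-gradient term is weighted adaptively rather than fixed to 1:

```python
def agent_gradient(g_new, g_old, mu, c_hat, gamma=0.1):
    """Illustrative AGENT-style corrected gradient (assumed form):
        g = g_new + (gamma * c_hat) * (mu - g_old)
    g_new : minibatch gradient at the current weights
    g_old : same-minibatch gradient at the old (snapshot) weights
    mu    : stored average of the old gradients
    c_hat : adaptively estimated weight (approximate current/old
            gradient correlation)
    gamma : small scaling factor guarding against estimation error.
    Setting the overall weight to 1 recovers plain SVRG, which the text
    notes can diverge under sparsity constraints.
    """
    c_t = gamma * c_hat
    return [gn + c_t * (m - go) for gn, go, m in zip(g_new, g_old, mu)]
```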
The term "Fixed c_t" corresponds to fixing the weight c_t = 0.1 during training, and "No γ" represents directly using ĉ*_t in Eq. (4) and the momentum scheme without the scaling parameter γ. Table 5 shows the clean and robust accuracies of standard and adversarial (AT or TRADES) training at 90% and 99% sparsity on CIFAR-10 using VGG-16 under different training epoch budgets. In adversarial training (AT and TRADES), we can see that "No γ" is poorly learned and has the worst results, while our method outperforms "Fixed c_t" and "No γ" in almost all cases, especially in highly sparse tasks (i.e., 99% sparsity). For standard training, "No γ" can learn some information, but still performs worse than the other two methods. "Fixed c_t" provides convergence speed similar to ours, while our method tends to reach a better final score. From the above discussion, both the adaptive update of c_t and the multiplication by the scaling parameter γ are important for the acceleration. On the one hand, the traditional choice of c_t = 1 is not desirable in sparse training and can cause model divergence under sparsity constraints. Fixing it at a smaller value, such as 0.1, can sometimes work in standard training, but updating c_t adaptively with loss-dependent information usually provides additional benefits, such as a better final score.
These benefits become more significant in sparse and adversarial training, which are more challenging and of great value. On the other hand, we recommend adding a scaling parameter γ (e.g., γ = 0.1) to c_t to avoid increasing the variance and to reduce the potential bias in adversarial training, which helps the balance and further accelerates the convergence.

6.7 Scaling Parameter Setting

The scaling parameter γ serves to avoid introducing large variance due to error in approximating c*_t and bias due to adversarial training. The choice of γ is important and can be seen as a hyper-parameter tuning process. Our results are based on γ = 0.1, and the best value of γ depends on many factors such as the dataset, architecture, and sparsity. Therefore, if we tune the value of γ according to the gradient correlation of each setting, it is possible to obtain a faster convergence rate than the reported results.
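One simple way to make this tuning process concrete is a small grid search scored on a validation set; a minimal sketch under that assumption (the `train_fn` interface, grid, and budget are hypothetical):

```python
def tune_gamma(train_fn, grid=(0.05, 0.1, 0.2, 0.5), budget_epochs=20):
    """Pick the scaling parameter gamma via short validation-scored runs.

    `train_fn(gamma, epochs)` is assumed to run sparse training with the
    given gamma and return validation accuracy; grid and budget are
    illustrative, not values from the paper.
    """
    scores = {g: train_fn(gamma=g, epochs=budget_epochs) for g in grid}
    return max(scores, key=scores.get)
```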
Table 5: Ablation studies: testing accuracy (%) comparisons with Fixed c_t and No γ on sparse VGG-16. Results are presented as clean/robust accuracy (%). For the same number of training epochs, our method has higher accuracy compared to Fixed c_t and No γ in almost all cases.

                          90% Sparsity                          99% Sparsity
               Fixed c_t   No γ        Ours        Fixed c_t   No γ        Ours
AT       20-th   54.1/36.2   28.6/20.1   63.6/37.3   10.0/10.0   10.0/10.0   56.4/31.4
         40-th   58.9/37.1   20.4/13.0   64.9/37.9   10.0/10.0   10.0/10.0   57.7/34.5
         70-th   66.8/41.6   19.9/14.7   75.1/45.2   10.0/10.0   10.0/10.0   66.0/39.4
         90-th   67.7/43.3   21.8/15.6   74.1/44.8   10.0/10.0   10.0/10.0   65.8/39.8
         140-th  71.4/43.4   20.0/12.1   77.4/43.8   10.0/10.0   10.0/10.0   69.8/41.2
         200-th  71.7/43.0   20.5/9.5    78.1/44.6   10.0/10.0   10.0/10.0   70.7/42.0
TRADES   20-th   62.6/35.2   38.5/21.8   65.0/37.6   54.5/31.2   35.2/21.6   57.6/31.6
         40-th   65.0/38.0   34.7/20.2   66.0/37.2   56.0/30.5   21.5/10.0   58.4/33.4
         70-th   73.9/44.5   28.8/18.4   73.5/45.4   62.5/36.8   18.8/16.2   67.3/39.0
         90-th   75.1/44.4   25.8/15.9   73.6/44.8   63.9/37.4   16.9/15.9   67.5/39.1
         140-th  76.7/46.5   28.6/14.1   76.8/46.3   65.5/39.0   19.7/14.4   69.9/41.5
         200-th  76.8/46.1   30.7/12.7   77.0/46.2   70.3/38.5   20.1/13.2   70.9/41.2
Standard 20-th   80.9/0.0    70.6/0.0    81.8/0.0    73.7/0.0    51.8/0.0    69.8/0.0
         40-th   83.3/0.0    68.0/0.0    82.4/0.0    74.9/0.0    55.2/0.0    73.7/0.0
         70-th   90.2/0.0    77.3/0.0    89.7/0.0    84.1/0.0    65.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='9/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='7/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 90-th 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='8/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='8/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='3/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='5/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='8/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='9/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 140-th 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='4/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='7/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='5/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='2/0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='0 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='9/0.' 
We examine values of γ from 0 to 1 and find that it is generally better not to set γ too close to either 1 or 0. When γ is close to 1, the increase in variance cannot be fully avoided, which leads to a performance drop similar to "No γ" in Table 5. When γ is too small, such as 0.01, the old gradients receive too little weight and have limited influence on the model update, which brings back the slowdown and training instability of SGD. More detailed experimental results with different scaling parameters γ are included in Appendix B.3.
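To make the role of γ concrete, the snippet below gives a minimal NumPy sketch of an SVRG-style corrected gradient with an adaptive weight c_t and scaling parameter γ. It is an illustration rather than our exact implementation: the function name corrected_gradient is ours, and c_t is assumed to be supplied by the adaptive-weight estimator described earlier.

```python
import numpy as np

def corrected_gradient(grad_new, grad_old_batch, grad_old_full, c_t, gamma):
    """SVRG-style corrected mini-batch gradient.

    grad_new:       gradient of the current mini-batch at the new parameters
    grad_old_batch: gradient of the same mini-batch at the stored old parameters
    grad_old_full:  full-data (or running-average) gradient at the old parameters
    c_t:            adaptive weight balancing current and previous gradients
    gamma:          scaling parameter in (0, 1); gamma near 0 recovers plain SGD,
                    gamma near 1 keeps the full correction and its extra variance
    """
    return grad_new - gamma * c_t * (grad_old_batch - grad_old_full)

# Toy check with random vectors standing in for gradients.
rng = np.random.default_rng(0)
g_new, g_old_b, g_old_f = rng.normal(size=(3, 10))
v = corrected_gradient(g_new, g_old_b, g_old_f, c_t=0.8, gamma=0.5)
print(v.shape)  # (10,)
```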
7 Discussion and Conclusion

We develop an adaptive gradient correction (AGENT) method for sparse training that improves time efficiency and reduces training instability from an optimization perspective; it can be incorporated into any SGD-based sparse training pipeline and works in both standard and adversarial setups. To achieve fine-grained control over the balance between current and previous gradients, we use loss information to analyze gradient changes and add an adaptive weight to the old gradients. In addition, we design a scaling parameter to reduce the bias that adversarial samples introduce into the gradient estimator and to improve the worst case of the adaptive weight estimate. In theory, we show that AGENT can accelerate the convergence rate of sparse training. Experimental results on multiple datasets, model architectures, and sparsities demonstrate that our method outperforms state-of-the-art sparse training methods by up to 5.0% in accuracy and reduces the number of training epochs by up to 52.1% to reach the same accuracy.

Several techniques can be employed to reduce the FLOPs of AGENT. Like SVRG, AGENT increases the training FLOPs in each iteration because of the extra forward and backward passes used to compute the old gradients. The first way to reduce this cost is to use sparse gradients (Elibol et al., 2020), which effectively reduces the cost of the backward pass in sparse training and can easily be applied to our method. The second is parallel computing (Allen-Zhu & Hazan, 2016): since the additional forward and backward passes over the old model parameters are fully parallelizable, we can view them as doubling the mini-batch size.
Third, we can follow the idea of SAGA (Defazio et al., 2014) and store the gradient of each individual sample. In this way, no extra forward and backward passes are needed, which saves computation, although it requires extra memory to hold the stored gradients; a sketch of this option is given below.
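The following is a minimal sketch of such a SAGA-style gradient table, under simplifying assumptions (single-sample updates and dense stored gradients); the class name, the interface, and the way the adaptive weight c_t and scaling γ enter are illustrative, not a description of our implementation.

```python
import numpy as np

class SagaGradientTable:
    """Per-sample gradient storage in the spirit of SAGA (Defazio et al., 2014).

    Replaces the extra forward/backward over the old parameters with a lookup:
    the 'old' gradient for sample i is whatever was stored the last time i was
    visited. Trades computation for O(n_samples * dim) memory.
    """

    def __init__(self, n_samples, dim):
        self.table = np.zeros((n_samples, dim))  # stored per-sample gradients
        self.avg = np.zeros(dim)                 # running average of the table
        self.n = n_samples

    def correct(self, idx, grad_new, c_t, gamma):
        grad_old = self.table[idx].copy()        # stored gradient, no recompute
        v = grad_new - gamma * c_t * (grad_old - self.avg)
        # Refresh the table and its running average in O(dim).
        self.avg += (grad_new - grad_old) / self.n
        self.table[idx] = grad_new
        return v

# Toy usage: one corrected step for sample 3.
rng = np.random.default_rng(1)
tab = SagaGradientTable(n_samples=100, dim=10)
v = tab.correct(idx=3, grad_new=rng.normal(size=10), c_t=0.8, gamma=0.5)
```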
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Deep rewiring: Training very sparse deep networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:1711.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='05136, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Guillaume Bellec, David Kappel, Wolfgang Maass, and Robert Legenstein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Deep rewiring: Training very sparse deep networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' International Conference on Learning Representations (ICLR), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, and Matthias Bethge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Accurate, reliable and fast robustness evaluation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:1907.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='01003, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Niladri Chatterji, Nicolas Flammarion, Yian Ma, Peter Bartlett, and Michael Jordan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' On the theory of variance reduction for stochastic gradient monte carlo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 764–773.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' PMLR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' El Mahdi Chayti and Sai Praneeth Karimireddy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Optimization with access to auxiliary information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='00395, 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Beidi Chen, Tri Dao, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, and Christopher Re.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Pixelated butterfly: Simple and efficient sparse training for neural network models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='00029, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, and Lawrence Carin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' A convergence analysis for a class of practical variance-reduction stochastic gradient mcmc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Science China Information Sciences, 62 (1):1–13, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Ashok Cutkosky and Francesco Orabona.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Momentum-based variance reduction in non-convex sgd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Advances in neural information processing systems, 32, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Aaron Defazio, Francis Bach, and Simon Lacoste-Julien.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Saga: A fast incremental gradient method with support for non-strongly convex composite objectives.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Advances in neural information processing systems, 27, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Wei Deng, Qi Feng, Georgios Karagiannis, Guang Lin, and Faming Liang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Accelerating convergence of replica exchange stochastic gradient mcmc via variance reduction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='01084, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Tim Dettmers and Luke Zettlemoyer.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Sparse networks from scratch: Faster training without losing perfor- mance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:1907.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='04840, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Kumar Avinava Dubey, Sashank J Reddi, Sinead A Williamson, Barnabas Poczos, Alexander J Smola, and Eric P Xing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Variance reduction in stochastic gradient langevin dynamics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Advances in neural information processing systems, 29:1154–1162, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Melih Elibol, Lihua Lei, and Michael I Jordan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Variance reduction with sparse gradients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='09623, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Rigging the lottery: Making all tickets winners.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 2943–2952.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 13 Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 31, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Jonathan Frankle and Michael Carbin.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' The lottery ticket hypothesis: Finding sparse, trainable neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' International Conference on Learning Representations (ICLR), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Eduard Gorbunov, Filip Hanzely, and Peter Richtárik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' A unified theory of sgd: Variance reduction, sampling, quantization and coordinate descent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Artificial Intelligence and Statistics, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 680–690.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Laura Graesser, Utku Evci, Erich Elsen, and Pablo Samuel Castro.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' The state of sparse training in deep reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 7766–7792.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' PMLR, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Delving deep into rectifiers: Surpassing human- level performance on imagenet classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In Proceedings of the IEEE international conference on computer vision, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 1026–1034, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Deep residual learning for image recognition.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 770–778, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='00554, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, and Caiwen Ding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Dynamic sparse training via balancing the exploration-exploitation trade-off.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='16667, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, and Daniel Soudry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Accelerated sparse neural training: A provable and efficient method to find n: m transposable masks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 34, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Siddhant Jayakumar, Razvan Pascanu, Jack Rae, Simon Osindero, and Erich Elsen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Top-kast: Top-k always sparse training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 33:20744–20754, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Rie Johnson and Tong Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Accelerating stochastic gradient descent using predictive variance reduction.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Advances in neural information processing systems, 26:315–323, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Diederik P Kingma and Jimmy Ba.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Adam: A method for stochastic optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='6980, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Alex Krizhevsky, Geoffrey Hinton, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Learning multiple layers of features from tiny images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Technical report, University of Toronto, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Souvik Kundu, Mahdi Nazemi, Peter A Beerel, and Massoud Pedram.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Dnr: A tunable robust pruning framework through dynamic network rewiring of dnns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In Proceedings of the 26th Asia and South Pacific Design Automation Conference, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 344–350, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Clayton Frederick Souza Leite and Yu Xiao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Optimal sensor channel selection for resource-efficient deep activity recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In Proceedings of the 20th International Conference on Information Processing in Sensor Networks (co-located with CPS-IoT Week 2021), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 371–383, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Yan Li, Ethan Fang, Huan Xu, and Tuo Zhao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Implicit bias of gradient descent based adversarial training on separable data.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Learning Representations, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Chuang Liu, Xueqi Ma, Yinbing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, and Danilo Mandic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Comprehensive graph gradual pruning for sparse training in graph neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='08629, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 14 Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, and Mykola Pechenizkiy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Do we actually need dense over-parameterization?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' in-time over-parameterization in sparse training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 6989–7000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' PMLR, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Ilya Loshchilov and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Decoupled weight decay regularization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Learning Representations (ICLR), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Towards deep learning models resistant to adversarial attacks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Learn- ing Representations (ICLR), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Kira JM Matus and Michael Veale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Certification systems for machine learning: Lessons from sustainability.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Regulation & Governance, 16(1):177–196, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and An- tonio Liotta.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Nature communications, 9(1):1–12, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Hesham Mostafa and Xin Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 4646–4655.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' PMLR, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Reading digits in natural images with unsupervised feature learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Sarah: A novel method for machine learning problems using stochastic recursive gradient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 2613– 2621.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' PMLR, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Ozan Özdenizci and Robert Legenstein.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Training adversarially robust sparse networks via bayesian connec- tivity sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 8314–8324.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' PMLR, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Carbon emissions and large neural network training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='10350, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Ning Qian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' On the momentum term in gradient descent learning algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Neural networks, 12(1):145–151, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, and Deliang Fan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Robust sparse regularization: Simultaneously optimizing neural network robustness and compactness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' arXiv preprint arXiv:1905.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='13074, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczos, and Alex Smola.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Stochastic variance reduction for nonconvex optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In International conference on machine learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' 314–323.' 
References

Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410–14430, 2018.
Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In International Conference on Machine Learning, pp. 699–707. PMLR, 2016.
Jack Baker, Paul Fearnhead, Emily B Fox, and Christopher Nemeth. Control variates for stochastic gradient MCMC. Statistics and Computing, 29(3):599–615, 2019.
Brian Bartoldson, Ari Morcos, Adrian Barbu, and Gordon Erlebacher. The generalization-stability tradeoff in neural network pruning. Advances in Neural Information Processing Systems, 33:20852–20864, 2020.
Guillaume Bellec, David Kappel, Wolfgang Maass, and Robert Legenstein. Deep rewiring: Training very sparse deep networks. arXiv preprint arXiv:1711.05136, 2017.
Guillaume Bellec, David Kappel, Wolfgang Maass, and Robert Legenstein. Deep rewiring: Training very sparse deep networks. International Conference on Learning Representations (ICLR), 2018.
Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, and Matthias Bethge. Accurate, reliable and fast robustness evaluation. arXiv preprint arXiv:1907.01003, 2019.
Niladri Chatterji, Nicolas Flammarion, Yian Ma, Peter Bartlett, and Michael Jordan. On the theory of variance reduction for stochastic gradient Monte Carlo. In International Conference on Machine Learning, pp. 764–773. PMLR, 2018.
El Mahdi Chayti and Sai Praneeth Karimireddy. Optimization with access to auxiliary information. arXiv preprint arXiv:2206.00395, 2022.
Beidi Chen, Tri Dao, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, and Christopher Re. Pixelated butterfly: Simple and efficient sparse training for neural network models. arXiv preprint arXiv:2112.00029, 2021.
Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, and Lawrence Carin. A convergence analysis for a class of practical variance-reduction stochastic gradient MCMC. Science China Information Sciences, 62(1):1–13, 2019.
Ashok Cutkosky and Francesco Orabona. Momentum-based variance reduction in non-convex SGD. Advances in Neural Information Processing Systems, 32, 2019.
Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Advances in Neural Information Processing Systems, 27, 2014.
Wei Deng, Qi Feng, Georgios Karagiannis, Guang Lin, and Faming Liang. Accelerating convergence of replica exchange stochastic gradient MCMC via variance reduction. arXiv preprint arXiv:2010.01084, 2020.
Tim Dettmers and Luke Zettlemoyer. Sparse networks from scratch: Faster training without losing performance. arXiv preprint arXiv:1907.04840, 2019.
Kumar Avinava Dubey, Sashank J Reddi, Sinead A Williamson, Barnabas Poczos, Alexander J Smola, and Eric P Xing. Variance reduction in stochastic gradient Langevin dynamics. Advances in Neural Information Processing Systems, 29:1154–1162, 2016.
Melih Elibol, Lihua Lei, and Michael I Jordan. Variance reduction with sparse gradients. arXiv preprint arXiv:2001.09623, 2020.
Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In International Conference on Machine Learning, pp. 2943–2952. PMLR, 2020.
Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. SPIDER: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. Advances in Neural Information Processing Systems, 31, 2018.
Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. International Conference on Learning Representations (ICLR), 2019.
Eduard Gorbunov, Filip Hanzely, and Peter Richtárik. A unified theory of SGD: Variance reduction, sampling, quantization and coordinate descent. In International Conference on Artificial Intelligence and Statistics, pp. 680–690. PMLR, 2020.
Laura Graesser, Utku Evci, Erich Elsen, and Pablo Samuel Castro. The state of sparse training in deep reinforcement learning. In International Conference on Machine Learning, pp. 7766–7792. PMLR, 2022.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. arXiv preprint arXiv:2102.00554, 2021.
Shaoyi Huang, Bowen Lei, Dongkuan Xu, Hongwu Peng, Yue Sun, Mimi Xie, and Caiwen Ding. Dynamic sparse training via balancing the exploration-exploitation trade-off. arXiv preprint arXiv:2211.16667, 2022.
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, and Daniel Soudry. Accelerated sparse neural training: A provable and efficient method to find N:M transposable masks. Advances in Neural Information Processing Systems, 34, 2021.
Siddhant Jayakumar, Razvan Pascanu, Jack Rae, Simon Osindero, and Erich Elsen. Top-KAST: Top-K always sparse training. Advances in Neural Information Processing Systems, 33:20744–20754, 2020.
Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. Advances in Neural Information Processing Systems, 26:315–323, 2013.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
Souvik Kundu, Mahdi Nazemi, Peter A Beerel, and Massoud Pedram. DNR: A tunable robust pruning framework through dynamic network rewiring of DNNs. In Proceedings of the 26th Asia and South Pacific Design Automation Conference, pp. 344–350, 2021.
Clayton Frederick Souza Leite and Yu Xiao. Optimal sensor channel selection for resource-efficient deep activity recognition. In Proceedings of the 20th International Conference on Information Processing in Sensor Networks (co-located with CPS-IoT Week 2021), pp. 371–383, 2021.
Yan Li, Ethan Fang, Huan Xu, and Tuo Zhao. Implicit bias of gradient descent based adversarial training on separable data. In International Conference on Learning Representations, 2020.
Chuang Liu, Xueqi Ma, Yinbing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, and Danilo Mandic. Comprehensive graph gradual pruning for sparse training in graph neural networks. arXiv preprint arXiv:2207.08629, 2022.
Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, and Mykola Pechenizkiy. Do we actually need dense over-parameterization? In-time over-parameterization in sparse training. In International Conference on Machine Learning, pp. 6989–7000. PMLR, 2021.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR), 2019.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
Kira JM Matus and Michael Veale. Certification systems for machine learning: Lessons from sustainability. Regulation & Governance, 16(1):177–196, 2022.
Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature Communications, 9(1):1–12, 2018.
Hesham Mostafa and Xin Wang. Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. In International Conference on Machine Learning, pp. 4646–4655. PMLR, 2019.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. SARAH: A novel method for machine learning problems using stochastic recursive gradient. In International Conference on Machine Learning, pp. 2613–2621. PMLR, 2017.
Ozan Özdenizci and Robert Legenstein. Training adversarially robust sparse networks via Bayesian connectivity sampling. In International Conference on Machine Learning, pp. 8314–8324. PMLR, 2021.
David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350, 2021.
Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145–151, 1999.
Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, and Deliang Fan. Robust sparse regularization: Simultaneously optimizing neural network robustness and compactness. arXiv preprint arXiv:1905.13074, 2019.
Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In International Conference on Machine Learning, pp. 314–323. PMLR, 2016.
Johanna Rock, Wolfgang Roth, Mate Toth, Paul Meissner, and Franz Pernkopf. Resource-efficient deep neural networks for automotive radar interference mitigation. IEEE Journal of Selected Topics in Signal Processing, 15(4):927–940, 2021.
Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
Jonathan Schwarz, Siddhant Jayakumar, Razvan Pascanu, Peter E Latham, and Yee Teh. Powerpropagation: A sparsity inducing weight reparameterisation. Advances in Neural Information Processing Systems, 34:28889–28903, 2021.
Vikash Sehwag, Shiqi Wang, Prateek Mittal, and Suman Jana. HYDRA: Pruning adversarially robust neural networks. Advances in Neural Information Processing Systems, 33:19655–19666, 2020.
Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W Tsang, Lijun Zhang, Dacheng Tao, and Licheng Jiao. VR-SGD: A simple stochastic variance reduction method for machine learning. IEEE Transactions on Knowledge and Data Engineering, 32(1):188–202, 2018.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR), 2015.
Varun Sundar and Rajat Vadiraj Dwaraknath. [Reproducibility report] Rigging the lottery: Making all tickets winners. arXiv preprint arXiv:2103.15767, 2021.
Neil C Thompson, Kristjan Greenewald, Keeheon Lee, and Gabriel F Manso. Deep learning's diminishing returns: The cost of improvement is becoming unsustainable. IEEE Spectrum, 58(10):50–55, 2021.
Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
Xia Xiao, Zigeng Wang, and Sanguthevar Rajasekaran. AutoPrune: Automatic network pruning by regularizing auxiliary parameters. Advances in Neural Information Processing Systems, 32, 2019.
Shaokai Ye, Kaidi Xu, Sijia Liu, Hao Cheng, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, and Xue Lin. Adversarial robustness vs. model compression, or both? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 111–120, 2019.
Rong Yu and Peichun Li. Toward resource-efficient federated learning in mobile edge computing. IEEE Network, 35(1):148–155, 2021.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. British Machine Vision Conference, 2016.
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pp. 7472–7482. PMLR, 2019.
Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, and Tong Zhang. Efficient neural network training via forward and backward propagation sparsification. Advances in Neural Information Processing Systems, 34:15216–15229, 2021a.
Xiao Zhou, Weizhong Zhang, Hang Xu, and Tong Zhang. Effective sparsification of neural networks with global sparsity constraint. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3599–3608, 2021b.
Difan Zou, Pan Xu, and Quanquan Gu. Subsampled stochastic variance-reduced gradient Langevin dynamics. In International Conference on Uncertainty in Artificial Intelligence, 2018.

A Appendix: Theoretical Proof of Convergence Rate

In this section, we provide a detailed proof of the convergence rate of our AGENT method. We start with some assumptions, based on which we derive several useful lemmas, and then establish the convergence rate of AGENT from these lemmas.
A.1 Algorithm Reformulation

We reformulate our Adaptive Gradient Correction (AGENT) method into a math-friendly version, shown in Algorithm 2.

Algorithm 2: Adaptive Gradient Correction
Input: initialization $\theta_0^0$ and $\tilde{c}_{-1} = 0$; number of epochs $S$, epoch length $m$, step sizes $h_t$, scaling parameter $\gamma$, and smoothing factor $\alpha$
for $s = 0$ to $S-1$ do
    $\tilde{\theta} = \theta_0^s$
    $\tilde{g} = \frac{1}{N} \sum_{i=1}^{N} \nabla G(x_i; \tilde{\theta})$
    Calculate $\tilde{c}_s^*$ via Eq. (4)
    $\tilde{c}_s = (1-\alpha)\,\tilde{c}_{s-1} + \alpha\,\tilde{c}_s^*$
    $c_s = \gamma\,\tilde{c}_s$
    for $t = 0$ to $m-1$ do
        Sample a mini-batch $B_t$ of size $n$
        $\theta_{t+1}^s = \theta_t^s - \eta_t \left[ \frac{1}{n} \sum_{i \in B_t} \big( g_i(\theta_t^s) - c_s \cdot g_i(\tilde{\theta}) \big) + c_s \cdot \tilde{g} \right]$
    end for
    $\theta_0^{s+1} = \theta_m^s$
end for
Output: iterate $\theta_\pi$ chosen uniformly at random from $\{\{\theta_t^s\}_{t=0}^{m-1}\}_{s=0}^{S-1}$
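To make the update loop concrete, here is a minimal NumPy sketch of Algorithm 2. It is an illustration under simplifying assumptions rather than the released implementation: the per-sample gradient oracle grad_i and the correlation estimate c_star (a hypothetical stand-in for the Eq. (4) computation of $\tilde{c}_s^*$) are placeholders, and the sparse-mask updates of the full training pipeline are omitted.

```python
import numpy as np

def agent_sgd(theta0, grad_i, N, S=10, m=100, n=32, lr=0.01,
              gamma=1.0, alpha=0.1, c_star=None, rng=None):
    """Minimal sketch of the AGENT loop (Algorithm 2), without sparse masks.

    grad_i(i, theta): gradient of the i-th per-sample loss (ndarray).
    c_star(theta, theta_snap): hypothetical stand-in for the Eq. (4)
    correlation estimate; defaults to 1, i.e. no adaptivity.
    lr plays the role of the step size eta_t (kept constant here).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    theta = theta0.copy()
    c_smooth = 0.0                                 # \tilde{c}_{-1} = 0
    for s in range(S):
        theta_snap = theta.copy()                  # snapshot \tilde{\theta}
        g_snap = np.mean([grad_i(i, theta_snap) for i in range(N)], axis=0)
        c_raw = 1.0 if c_star is None else c_star(theta, theta_snap)
        c_smooth = (1 - alpha) * c_smooth + alpha * c_raw  # smoothing by alpha
        c = gamma * c_smooth                               # scaling by gamma
        for t in range(m):
            batch = rng.choice(N, size=n, replace=False)
            corr = np.mean([grad_i(i, theta) - c * grad_i(i, theta_snap)
                            for i in batch], axis=0)
            theta = theta - lr * (corr + c * g_snap)       # corrected update
    return theta
```

Fixing $c_s = 1$ throughout turns the inner update into exactly the minibatch SVRG step; the smoothed, scaled, adaptively estimated $c_s$ is what distinguishes AGENT.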
A.2 Assumptions

L-smooth: A differentiable function $G: \mathbb{R}^n \to \mathbb{R}$ is said to be $L$-smooth if for all $x, y \in \mathbb{R}^n$ it satisfies $\|\nabla G(x) - \nabla G(y)\| \le L \|x - y\|$. An equivalent definition is that, for all $x, y \in \mathbb{R}^n$:
$$ -\frac{L}{2}\|x - y\|^2 \le G(x) - G(y) - \langle \nabla G(x),\, x - y \rangle \le \frac{L}{2}\|x - y\|^2. $$

σ-bounded: We say a function $G$ has a $\sigma$-bounded gradient if $\|\nabla G_i(x)\| \le \sigma$ for all $i \in [N]$ and $x \in \mathbb{R}^n$.

A.3 Analysis Framework

Under the above assumptions, we are ready to analyze the convergence rate of AGENT in Algorithm 2. To present the convergence analysis more clearly, we provide a brief analytical framework for the proof.

First, we show that the variance of our gradient estimator is smaller than that of minibatch SVRG under a proper choice of $c_s$. Since the gradient estimators of both AGENT and minibatch SVRG are unbiased in standard training, it suffices to show that our bound on $\mathbb{E}[\|u_t\|^2]$ is smaller than that of minibatch SVRG (see Lemma 1).

Based on this fact, we next apply a Lyapunov function to prove the convergence rate of AGENT within one arbitrary epoch (see Lemma 3).

Then, we extend the previous results to all epochs (from epoch 0 to epoch $S$) and derive the convergence rate of the output $\theta_\pi$ of Algorithm 2 (see Lemma 4).

Finally, we compare the convergence rate of AGENT with that of minibatch SVRG. Setting the parameters in Lemma 4 according to the actual situation of sparse learning, we obtain a bound tighter than that of minibatch SVRG.

A.4 Lemmas

We first denote the step length $\eta_t = N \cdot h_t$. Since we mainly focus on a single epoch, we drop the superscript $s$ and write
$$ u_t = \frac{1}{n} \sum_{i \in B_t} \big( g_i(\theta_t) - c \cdot g_i(\tilde{\theta}) \big) + c \cdot \tilde{g}, $$
the gradient estimator used in our algorithm, and
$$ \tau_t = \frac{1}{n} \sum_{i \in B_t} \big( g_i(\theta_t) - c \cdot g_i(\tilde{\theta}) \big), $$
so that the update procedure in Algorithm 2 can be written as $\theta_{t+1} = \theta_t - \eta_t \cdot u_t$.
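To build intuition for the bound that follows, the short simulation below (a toy least-squares problem invented for illustration, not an experiment from the paper) empirically estimates the variance of the estimator $u_t$ for a few fixed values of $c$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 1000, 32, 10
A = rng.normal(size=(N, d))            # hypothetical least-squares data
b = rng.normal(size=N)

def grad_i(i, theta):
    # gradient of the i-th loss 0.5 * (a_i . theta - b_i)^2
    return (A[i] @ theta - b[i]) * A[i]

theta_snap = rng.normal(size=d)                  # snapshot \tilde{\theta}
theta = theta_snap + 0.1 * rng.normal(size=d)    # current iterate, nearby
g_snap = np.mean([grad_i(i, theta_snap) for i in range(N)], axis=0)

def u_t(c):
    # one draw of the corrected estimator u_t for a fixed c
    batch = rng.choice(N, size=n, replace=False)
    tau = np.mean([grad_i(i, theta) - c * grad_i(i, theta_snap)
                   for i in batch], axis=0)
    return tau + c * g_snap

for c in (0.0, 0.5, 1.0):                        # c=0: plain SGD; c=1: SVRG
    draws = np.stack([u_t(c) for _ in range(2000)])
    print(f"c={c}: total variance of u_t = {draws.var(axis=0).sum():.4f}")
```

In this regime, where $\theta_t$ stays close to $\tilde{\theta}$, values of $c$ near 1 cancel most of the sampling noise; increasing the drift between the two iterates shifts the best choice toward smaller $c$, mirroring the $c^2$ and $(1-c)^2$ terms in the bound below.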
A.4.1 Lemma 1

For the $u_t$ defined above and a function $G$ that is L-smooth and λ-strongly convex with a σ-bounded gradient, we have the following result:
$$\mathbb{E}\big[\|u_t\|^2\big] \le 2\,\mathbb{E}\big[\|g(\theta_t)\|^2\big] + \frac{4c^2L^2}{n}\,\mathbb{E}\big[\|\theta_t - \tilde{\theta}\|^2\big] + \frac{4(1-c)^2}{n}\sigma^2 \quad (6)$$

Proof:
$$\begin{aligned}
\mathbb{E}\big[\|u_t\|^2\big] &= \mathbb{E}\big[\|\tau_t + c \cdot \tilde{g}\|^2\big] = \mathbb{E}\big[\|\tau_t + c \cdot \tilde{g} - g(\theta_t) + g(\theta_t)\|^2\big] \\
&\le 2\,\mathbb{E}\big[\|g(\theta_t)\|^2\big] + 2\,\mathbb{E}\big[\|\tau_t - \mathbb{E}[\tau_t]\|^2\big] \\
&\le 2\,\mathbb{E}\big[\|g(\theta_t)\|^2\big] + \frac{2}{n}\,\mathbb{E}\big[\|c\,(g_i(\theta_t) - g_i(\tilde{\theta})) + (1-c)\,g_i(\theta_t)\|^2\big] \\
&\le 2\,\mathbb{E}\big[\|g(\theta_t)\|^2\big] + \frac{4}{n}\,\mathbb{E}\big[\|c\,(g_i(\theta_t) - g_i(\tilde{\theta}))\|^2\big] + \frac{4(1-c)^2}{n}\,\mathbb{E}\big[\|g_i(\theta_t)\|^2\big] \\
&\le 2\,\mathbb{E}\big[\|g(\theta_t)\|^2\big] + \frac{4c^2L^2}{n}\,\mathbb{E}\big[\|\theta_t - \tilde{\theta}\|^2\big] + \frac{4(1-c)^2}{n}\sigma^2.
\end{aligned}$$
The first and third inequalities hold because $\|a + b\|^2 \le 2\|a\|^2 + 2\|b\|^2$; the second inequality follows from $\mathbb{E}[\|\tau - \mathbb{E}[\tau]\|^2] \le \mathbb{E}[\|\tau\|^2]$ applied to the mini-batch average; and the last inequality follows from the L-smoothness and the σ-bounded gradient of the functions $G_i$.

Remark 5. Compared with the gradient estimator of minibatch SVRG, the bound on $\mathbb{E}[\|u_t\|^2]$ is smaller when $L$ is large, $\sigma$ is relatively small, and $c$ is properly chosen.

A.4.2 Lemma 2

$$\mathbb{E}[G(\theta_{t+1})] \le \mathbb{E}\Big[ G(\theta_t) - \eta_t\|g(\theta_t)\|^2 + \frac{L\eta_t^2}{2}\|u_t\|^2 \Big] \quad (7)$$

Proof: By the L-smoothness of the function $G$, we have
$$\mathbb{E}[G(\theta_{t+1})] \le \mathbb{E}\Big[ G(\theta_t) + \langle g(\theta_t), \theta_{t+1} - \theta_t \rangle + \frac{L}{2}\|\theta_{t+1} - \theta_t\|^2 \Big].$$
By the update procedure in Algorithm 2 and the unbiasedness of $u_t$, the right-hand side can be further upper bounded by
$$\mathbb{E}\Big[ G(\theta_t) - \eta_t\|g(\theta_t)\|^2 + \frac{L\eta_t^2}{2}\|u_t\|^2 \Big].$$
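Remark 5 is easy to probe numerically. The toy least-squares script below is entirely our own construction (the problem sizes and candidate values of $c$ are arbitrary); it estimates $\mathbb{E}[\|u_t\|^2]$ by Monte Carlo for several choices of $c$, where $c = 0$ gives the plain minibatch gradient and $c = 1$ gives minibatch SVRG. How the best $c$ shifts with the correlation between the current and anchor gradients is exactly the quantity that Eq. (4) adapts to.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, n = 1000, 20, 32
A = rng.normal(size=(N, d))
y = rng.normal(size=N)

def grad(idx, th):                         # average gradient over a batch
    r = A[idx] @ th - y[idx]               # residuals of 0.5*(a_i'th - y_i)^2
    return A[idx].T @ r / len(idx)

theta_snap = rng.normal(size=d)            # anchor point theta~
theta = theta_snap + 0.1 * rng.normal(size=d)   # current iterate theta_t
g_snap = grad(np.arange(N), theta_snap)    # full gradient at the anchor

def second_moment(c, trials=5000):         # Monte-Carlo estimate of E||u_t||^2
    total = 0.0
    for _ in range(trials):
        idx = rng.choice(N, size=n, replace=False)
        u = grad(idx, theta) - c * grad(idx, theta_snap) + c * g_snap
        total += u @ u
    return total / trials

for c in (0.0, 0.1, 0.5, 1.0):             # c = 1 is minibatch SVRG
    print(f"c = {c:.1f}:  E||u_t||^2 ~= {second_moment(c):.4f}")
```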
A.4.3 Lemma 3

For $b_t, b_{t+1}, \zeta_t > 0$ such that $b_t$ and $b_{t+1}$ satisfy the relationship
$$b_t = b_{t+1}\Big(1 + \eta_t\zeta_t + \frac{4c^2\eta_t^2L^2}{n}\Big) + \frac{2c^2\eta_t^2L^3}{n},$$
define
$$\Phi_t := \eta_t - \frac{b_{t+1}\eta_t}{\zeta_t} - \eta_t^2L - 2b_{t+1}\eta_t^2, \qquad \Psi_t := \mathbb{E}\big[ G(\theta_t) + b_t\|\theta_t - \tilde{\theta}\|^2 \big]. \quad (8)$$
Here $\eta_t$, $\zeta_t$ and $b_{t+1}$ can be chosen such that $\Phi_t > 0$. Then the iterates $\theta_t$ in Algorithm 2 satisfy the bound:
$$\mathbb{E}[\|g(\theta_t)\|^2] \le \frac{\Psi_t - \Psi_{t+1} + \frac{2(L\eta_t^2 + 2b_{t+1}\eta_t^2)(1-c)^2}{n}\sigma^2}{\Phi_t}.$$

Proof: We apply the Lyapunov function $\Psi_t = \mathbb{E}\big[ G(\theta_t) + b_t\|\theta_t - \tilde{\theta}\|^2 \big]$. We then need to bound $\mathbb{E}[\|\theta_{t+1} - \tilde{\theta}\|^2]$:
$$\begin{aligned}
\mathbb{E}\big[\|\theta_{t+1} - \tilde{\theta}\|^2\big] &= \mathbb{E}\big[\|\theta_{t+1} - \theta_t + \theta_t - \tilde{\theta}\|^2\big] \\
&= \mathbb{E}\big[\|\theta_{t+1} - \theta_t\|^2 + \|\theta_t - \tilde{\theta}\|^2 + 2\langle \theta_{t+1} - \theta_t, \theta_t - \tilde{\theta} \rangle\big] \\
&= \mathbb{E}\big[\eta_t^2\|u_t\|^2 + \|\theta_t - \tilde{\theta}\|^2\big] - 2\eta_t\,\mathbb{E}\big[\langle g(\theta_t), \theta_t - \tilde{\theta} \rangle\big] \\
&\le \mathbb{E}\big[\eta_t^2\|u_t\|^2 + \|\theta_t - \tilde{\theta}\|^2\big] + 2\eta_t\,\mathbb{E}\Big[\frac{1}{2\zeta_t}\|g(\theta_t)\|^2 + \frac{\zeta_t}{2}\|\theta_t - \tilde{\theta}\|^2\Big] \quad (9)
\end{aligned}$$
The third equality is due to the unbiasedness of the update, and the last inequality follows from the Cauchy-Schwarz and Young's inequalities. Plugging Equations (6), (7) and (9) into Equation (8), we get the following bound:
$$\begin{aligned}
\Psi_{t+1} &\le \mathbb{E}[G(\theta_t)] + \Big[ b_{t+1}\Big(1 + \eta_t\zeta_t + \frac{4c^2\eta_t^2L^2}{n}\Big) + \frac{2c^2\eta_t^2L^3}{n} \Big]\,\mathbb{E}[\|\theta_t - \tilde{\theta}\|^2] \\
&\quad - \Big( \eta_t - \frac{b_{t+1}\eta_t}{\zeta_t} - L\eta_t^2 - 2b_{t+1}\eta_t^2 \Big)\,\mathbb{E}\big[\|g(\theta_t)\|^2\big] + 4\Big(\frac{L\eta_t^2}{2} + b_{t+1}\eta_t^2\Big)\frac{(1-c)^2}{n}\sigma^2 \\
&= \Psi_t - \Big( \eta_t - \frac{b_{t+1}\eta_t}{\zeta_t} - L\eta_t^2 - 2b_{t+1}\eta_t^2 \Big)\,\mathbb{E}\big[\|g(\theta_t)\|^2\big] + 4\Big(\frac{L\eta_t^2}{2} + b_{t+1}\eta_t^2\Big)\frac{(1-c)^2}{n}\sigma^2,
\end{aligned}$$
which rearranges to the claimed bound.
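The premise of Lemma 3 that $\eta_t$, $\zeta_t$ and $b_{t+1}$ can be chosen with $\Phi_t > 0$ can be checked directly for the parameter scalings used later in Theorem 1. The sketch below (with our own illustrative constants for $L$, $\mu$, $c$ and $n$; a feasibility check, not a proof) runs the backward recursion for $b_t$ from $b_m = 0$ and reports $\min_t \Phi_t$.

```python
def min_phi(L, n, m, eta, zeta, c):
    """Backward recursion for b_t (b_m = 0) and min_t Phi_t from Lemma 3.
    Toy feasibility check with illustrative constants, not a proof."""
    b = 0.0                                 # holds b_{t+1}, starting at b_m
    phi_min = float("inf")
    for _ in range(m):                      # t = m-1 down to 0
        phi_t = eta - b * eta / zeta - L * eta**2 - 2 * b * eta**2
        phi_min = min(phi_min, phi_t)
        b = b * (1 + eta * zeta + 4 * c**2 * eta**2 * L**2 / n) \
            + 2 * c**2 * eta**2 * L**3 / n  # recursion for b_t
    return phi_min

# Theorem 1 scalings with made-up constants (L, mu, c, alpha are our choices):
N, alpha, mu, c, L, n = 50000, 2 / 3, 0.5, 0.05, 10.0, 128
eta = mu * n / (L * N**alpha)
zeta = L / N**(alpha / 2)
m = int(N**(1.5 * alpha) / (mu * n))
print(f"min_t Phi_t = {min_phi(L, n, m, eta, zeta, c):.3e}")   # positive here
```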
A.4.4 Lemma 4

Now we consider the effect of the epochs and use $s$ to denote the epoch number. Let $b_m^s = 0$, $\eta_t^s = \eta$, $\zeta_t^s = \zeta$, and
$$b_t^s = b_{t+1}^s\Big(1 + \eta\zeta + \frac{4c_s^2\eta^2L^2}{n}\Big) + \frac{2c_s^2\eta^2L^3}{n}, \qquad \Phi_t^s = \eta - \frac{b_{t+1}^s\eta}{\zeta} - \eta^2L - 2b_{t+1}^s\eta^2.$$
Define $\varphi := \min_{t,s} \Phi_t^s$. Then we can conclude that:
$$\mathbb{E}[\|g(\theta_\pi)\|^2] \le \frac{G(\theta_0) - G(\theta^*)}{T\varphi} + \sum_{s=0}^{S-1}\sum_{t=0}^{m-1} \frac{2(L + 2b_{t+1}^s)(1-c_s)^2\eta^2\sigma^2}{Tn\varphi}.$$

Proof: Under the condition $\eta_t^s = \eta$, we apply a telescoping sum to Lemma 3 and get:
$$\sum_{t=0}^{m-1} \mathbb{E}[\|g(\theta_t^s)\|^2] \le \frac{\Psi_0^s - \Psi_m^s}{\varphi} + \sum_{t=0}^{m-1} \frac{2(L + 2b_{t+1}^s)(1-c_s)^2\eta^2\sigma^2}{n\varphi}.$$
From the previous definitions, we know $\Psi_0^s = G(\tilde{\theta}^s)$ and $\Psi_m^s = G(\tilde{\theta}^{s+1})$; plugging these into the previous equation, we obtain:
$$\sum_{t=0}^{m-1} \mathbb{E}[\|g(\theta_t^s)\|^2] \le \frac{G(\tilde{\theta}^s) - G(\tilde{\theta}^{s+1})}{\varphi} + \sum_{t=0}^{m-1} \frac{2(L + 2b_{t+1}^s)(1-c_s)^2\eta^2\sigma^2}{n\varphi}.$$
Taking the summation over all epochs and using the facts that $\tilde{\theta}^0 = \theta_0$ and $G(\tilde{\theta}^S) \ge G(\theta^*)$, we immediately obtain:
$$\frac{1}{T}\sum_{s=0}^{S-1}\sum_{t=0}^{m-1} \mathbb{E}[\|g(\theta_t^s)\|^2] \le \frac{G(\theta_0) - G(\theta^*)}{T\varphi} + \sum_{s=0}^{S-1}\sum_{t=0}^{m-1} \frac{2(L + 2b_{t+1}^s)(1-c_s)^2\eta^2\sigma^2}{Tn\varphi}. \quad (10)$$

A.5 Theorem

A.5.1 Theorem 1

Define $\xi_s = \sum_{t=0}^{m-1}(L + 2b_{t+1}^s)$ and $\xi := \max_s \xi_s$. Let $\eta = \frac{\mu n}{LN^\alpha}$ (with $0 < \mu < 1$ and $0 < \alpha \le 1$), $\zeta = \frac{L}{N^{\alpha/2}}$, and $m = \frac{N^{3\alpha/2}}{\mu n}$. Then there exist constants $\nu, \mu, \alpha, \kappa > 0$ such that $\varphi \ge \frac{n\nu}{LN^\alpha}$ and $\xi \le \kappa L$, and $\mathbb{E}[\|g(\theta_\pi)\|^2]$ can be further bounded by:
$$\mathbb{E}[\|g(\theta_\pi)\|^2] \le \frac{\big(G(\theta_0) - G(\theta^*)\big)\,LN^\alpha}{Tn\nu} + \frac{2\kappa\mu^2\sigma^2}{N^\alpha\nu m}.$$

Proof: Applying the summation formula for a geometric progression to the relation $b_t^s = b_{t+1}^s\big(1 + \eta_t\zeta_t + \frac{4c_s^2\eta_t^2L^2}{n}\big) + \frac{2c_s^2\eta_t^2L^3}{n}$, we have
$$b_t^s = \frac{2c_s^2\eta^2L^3}{n} \cdot \frac{(1+\omega_s)^{m-t} - 1}{\omega_s}, \quad \text{where} \quad \omega_s = \eta\zeta + \frac{4c_s^2\eta^2L^2}{n} = \frac{\mu n}{N^{3\alpha/2}} + \frac{4c_s^2\mu^2 n}{N^{2\alpha}} \le \frac{(4c_s^2+1)\mu n}{N^{3\alpha/2}}.$$
This bound holds because $\mu \le 1$ and $N \ge 1$, and thus $\frac{4c_s^2\mu^2 n}{N^{2\alpha}} = \frac{4c_s^2\mu n}{N^{3\alpha/2}} \times \frac{\mu}{N^{\alpha/2}} \le \frac{4c_s^2\mu n}{N^{3\alpha/2}}$.
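The geometric-progression step can be confirmed numerically before continuing: under the recursion $b_t^s = b_{t+1}^s(1+\omega_s) + \frac{2c_s^2\eta^2L^3}{n}$ with $b_m^s = 0$, the closed form above matches the recursion up to floating-point round-off. The constants below are arbitrary toy choices of our own.

```python
import numpy as np

L, n, m, eta, zeta, c = 10.0, 128, 50, 1e-3, 0.05, 0.05
beta = 2 * c**2 * eta**2 * L**3 / n                 # additive term
w = eta * zeta + 4 * c**2 * eta**2 * L**2 / n       # omega_s

b = np.zeros(m + 1)                                 # b[m] = 0
for t in range(m - 1, -1, -1):                      # backward recursion
    b[t] = b[t + 1] * (1 + w) + beta

closed = beta * ((1 + w) ** (m - np.arange(m + 1)) - 1) / w
print("max |recursion - closed form| =", np.abs(b - closed).max())
```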
Using this bound on $\omega_s$, we obtain:
$$\begin{aligned}
b_0^s &= \frac{2\eta^2 c_s^2 L^3}{n} \cdot \frac{(1+\omega_s)^m - 1}{\omega_s} = \frac{2\mu^2 n c_s^2 L}{N^{2\alpha}} \cdot \frac{(1+\omega_s)^m - 1}{\omega_s} \\
&\le \frac{2\mu n c_s^2 L\big((1+\omega_s)^m - 1\big)}{N^{\alpha/2}(4c_s^2 + 1)} \le \frac{2\mu n c_s^2 L\Big(\big(1 + \frac{(4c_s^2+1)\mu n}{N^{3\alpha/2}}\big)^{\frac{N^{3\alpha/2}}{\mu n}} - 1\Big)}{N^{\alpha/2}(4c_s^2 + 1)} \le \frac{2\mu n c_s^2 L\big(e^{\frac{1}{4c_s^2+1}} - 1\big)}{N^{\alpha/2}(4c_s^2 + 1)}.
\end{aligned}$$
The last inequality holds because $\big(1 + \frac{1}{x}\big)^x$ is a monotone increasing function of $x$ for $x > 0$; thus $\big(1 + \frac{(4c_s^2+1)\mu n}{N^{3\alpha/2}}\big)^{\frac{N^{3\alpha/2}}{\mu n}} \le e^{\frac{1}{4c_s^2+1}}$ in the third inequality. We can then obtain the lower bound for $\varphi$:
$$\varphi = \min_{t,s} \Phi_t^s \ge \min_s \Big( \eta - \frac{b_0^s\eta}{\zeta} - \eta^2 L - 2b_0^s\eta^2 \Big) \ge \frac{n\nu}{LN^\alpha}.$$
The first inequality holds since $b_t^s$ is a decreasing function of $t$. Meanwhile, the second inequality holds because there exists a uniform constant $\nu$ such that $\mu\big(1 - \frac{b_0^s}{\zeta} - L\eta - 2b_0^s\eta\big) \ge \nu$ for all $s$.

Remark 6. In practice, $b_0^s \approx 0$ because $\gamma$ and $c_s$ are both smaller than 0.1, which leads to $\mu\big(1 - \frac{b_0^s}{\zeta} - L\eta - 2b_0^s\eta\big) \approx \mu(1 - L\eta)$, and this value is usually much bigger than the $\nu^*$ in the bound of minibatch SVRG.
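To put Remark 6 in numbers, one can plug representative values into the closed form for $b_0^s$; with $c_s \le 0.1$ (as stated in the remark) and the Theorem 1 scalings, $b_0^s$ comes out orders of magnitude below $\zeta$ and $L\eta$. All constants below are our own illustrative choices.

```python
L, n, N, mu, alpha, c = 10.0, 128, 50000, 0.5, 2 / 3, 0.01
eta = mu * n / (L * N**alpha)
zeta = L / N**(alpha / 2)
m = int(N**(1.5 * alpha) / (mu * n))
w = eta * zeta + 4 * c**2 * eta**2 * L**2 / n       # omega_s
b0 = (2 * c**2 * eta**2 * L**3 / n) * ((1 + w)**m - 1) / w
print(f"b_0 = {b0:.2e}, b_0/zeta = {b0 / zeta:.2e}, L*eta = {L * eta:.2e}")
# b_0/zeta is negligible, so mu*(1 - b_0/zeta - L*eta - 2*b_0*eta) ~ mu*(1 - L*eta)
```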
We also need to find the upper bound for $\xi$:
$$\begin{aligned}
\xi_s &= \sum_{t=0}^{m-1}(L + 2b_{t+1}^s) = mL + 2\sum_{t=0}^{m-1} b_{t+1}^s = mL + 2\sum_{t=0}^{m-1} \frac{2c_s^2\eta^2L^3}{n} \cdot \frac{(1+\omega_s)^{m-t} - 1}{\omega_s} \\
&= mL + \frac{2c_s^2\eta^2L^3}{n\omega_s}\Big[ \frac{(1+\omega_s)^{m+1} - (1+\omega_s)}{\omega_s} - m \Big] \\
&\le mL + \frac{2c_s^2\eta^2L^3}{n}\Big[ \frac{1+\omega_s}{\omega_s^2}\big(e^{\frac{1}{4c_s^2+1}} - 1\big) - m \Big] \\
&\le mL + \frac{2c_s^2LN^\alpha}{n}\Big(1 + \frac{\mu n}{N^{3\alpha/2}}\Big)\big(e^{\frac{1}{4c_s^2+1}} - 1\big) - \frac{2c_s^2\mu^2 n m L}{N^{2\alpha}} \\
&= L\Big[ \Big(1 - \frac{2c_s^2\mu^2 n L}{N^{2\alpha}}\Big)m + \frac{2c_s^2 N^\alpha}{n}\Big(1 + \frac{\mu n}{N^{3\alpha/2}}\Big)\big(e^{\frac{1}{4c_s^2+1}} - 1\big) \Big].
\end{aligned}$$
The first inequality holds for the reason explained before, and the second inequality holds because $\frac{1+x}{x^2}$ is a monotone decreasing function of $x$ for $x > 0$, $\omega_s = \frac{\mu n}{N^{3\alpha/2}} + \frac{4c_s^2\mu^2 n}{N^{2\alpha}} \ge \frac{\mu n}{N^{3\alpha/2}}$, and $\eta = \frac{\mu n}{LN^\alpha}$. Then $\xi = \max_s \xi_s \le \kappa L$, where $\kappa \ge \max_s \big( (1 - \frac{2c_s^2\mu^2 n L}{N^{2\alpha}})m + \frac{2c_s^2 N^\alpha}{n}(1 + \frac{\mu n}{N^{3\alpha/2}})(e^{\frac{1}{4c_s^2+1}} - 1) \big)$. When $c_s \approx 0$, this quantity is approximately $m$.

Now that we have the lower bound for $\varphi$ and the upper bound for $\xi$, plugging them into Equation (10), we have:
$$\begin{aligned}
\mathbb{E}[\|g(\theta_\pi)\|^2] &\le \frac{G(\theta_0) - G(\theta^*)}{T\varphi} + \sum_{s=0}^{S-1}\sum_{t=0}^{m-1} \frac{2(L + 2b_{t+1}^s)(1-c_s)^2\eta^2\sigma^2}{Tn\varphi} \\
&\le \frac{\big(G(\theta_0) - G(\theta^*)\big)\,LN^\alpha}{Tn\nu} + \sum_{s=0}^{S-1}\sum_{t=0}^{m-1} \frac{2(L + 2b_{t+1}^s)\eta^2\sigma^2}{Tn\varphi} \\
&\le \frac{\big(G(\theta_0) - G(\theta^*)\big)\,LN^\alpha}{Tn\nu} + \sum_{s=0}^{S-1} \frac{2\eta^2\sigma^2}{Tn\varphi}\sum_{t=0}^{m-1}(L + 2b_{t+1}^s) \\
&\le \frac{\big(G(\theta_0) - G(\theta^*)\big)\,LN^\alpha}{Tn\nu} + \frac{2\kappa\mu^2\sigma^2}{N^\alpha\nu m}.
\end{aligned}$$

Remark 7. In our theoretical analysis above, we treat $c$ as a constant within each epoch, which is still consistent with our practical algorithm for the following reasons. (i) In our Algorithm 1, $\tilde{c}_t^*$ is in fact held fixed within each epoch, and it can differ across epochs. Since it is too expensive to compute the exact $\tilde{c}_t^*$ at every iteration, we compute it at the beginning of each epoch and use it as an approximation throughout the following epoch.
(ii) As for our proof, we first show the convergence rate for one arbitrary training epoch; in this step, treating $c$ as a constant is aligned with our practical algorithm. (iii) Then, when we extend the result for one epoch to all epochs, we establish an upper bound that allows a different $c$ in each epoch. Thus, the bound can be applied when $c$ differs across epochs, which keeps our theoretical analysis consistent with our practical algorithm.

A.6 Real Case Analysis for Sparse Training

A.6.1 CIFAR-10/100 Datasets

In our experiments, we apply both SVRG and AGENT to the CIFAR-10 and CIFAR-100 datasets with $\eta = 0.1$, $\gamma = 0.1$, batch size $m = 128$, and 50,000 training samples in total. Under this parameter setting, the $\nu$ and $\nu^*$ in Theorem 1 and Remark 4 are about 0.1 and 0.06, respectively, while $\frac{2\kappa\mu^2\sigma^2}{N^\alpha\nu m}$ is around $10^{-5}$, which is negligible. We therefore expect AGENT to have a tighter bound than SVRG in this situation, which matches the experimental results shown in Figure 6.
A.6.2 SVHN Dataset

Meanwhile, on the SVHN dataset, we train our model with $\eta = 0.1$, $\gamma = 0.1$, batch size $m = 573$, and sample size $N = 73{,}257$. Here $\nu$ and $\nu^*$ equal 0.4 and 0.06, respectively, and $\frac{2\kappa\mu^2\sigma^2}{N^\alpha\nu m}$ is around $10^{-4}$. Although this second term in Theorem 1 is bigger, $\nu$ here is much larger than $\nu^*$, which makes the first term in Theorem 1 much smaller than that of Remark 4. So we still obtain a more stringent bound than SVRG, which also matches the outcome presented in Figure 9.
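Since the leading terms of the two bounds differ only through $\nu$ (AGENT) versus $\nu^*$ (minibatch SVRG), the comparisons in A.6.1 and A.6.2 reduce to the ratio $\nu^*/\nu$. The short computation below restates the numbers quoted above; the second-term magnitudes are copied from the text for context only.

```python
# nu, nu_star and the second-term sizes are the values quoted in A.6.1/A.6.2.
cases = {
    "CIFAR-10/100": {"nu": 0.1, "nu_star": 0.06, "second_term": 1e-5},
    "SVHN":         {"nu": 0.4, "nu_star": 0.06, "second_term": 1e-4},
}
for name, p in cases.items():
    ratio = p["nu_star"] / p["nu"]   # AGENT-to-SVRG ratio of the leading term
    print(f"{name}: AGENT's leading term is {ratio:.2f}x SVRG's "
          f"(second term ~ {p['second_term']:.0e})")
```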
B Additional Experimental Results

We summarize additional experimental results for the BSR-Net-based (Özdenizci & Legenstein, 2021), RigL-based (Evci et al., 2020), and ITOP-based (Liu et al., 2021) models.

B.1 Accuracy Comparisons in Different Epochs

Aligned with the main manuscript, we compare the accuracy for a given number of epochs to assess both the speed of convergence and the training stability. We first show the BSR-Net-based results in this section. Since our approach converges faster and does not require a long warm-up period, the dividing points of the decay scheduler are set to the 50th and 100th epochs. In the manuscript, we also use this schedule for BSR-Net for an accurate comparison; in this appendix, we additionally include the results using its original schedule. BSR-Net and BSR-Net (ori) denote the results learned using our learning rate schedule and the original schedule in Özdenizci & Legenstein (2021), respectively. As shown in Figures 7, 8, 9, 10, 11, 12, and 13, the blue curves (A-BSR-Net) are always higher and much smoother than the yellow curves (BSR-Net and BSR-Net (ori)), indicating faster and more stable training when using our proposed A-BSR-Net.

Figure 7: Comparisons (accuracy given the number of epochs) with BSR-Net (Özdenizci & Legenstein, 2021). We evaluate sparse networks (99% or 90%) learned with natural training on CIFAR-10 using VGG-16. [Panels (a)-(b): 90% sparsity; panels (c)-(d): 99% sparsity; axes: testing accuracy vs. number of epochs; plots omitted.]

Figure 8: Comparisons (accuracy given the number of epochs) with BSR-Net (Özdenizci & Legenstein, 2021). We evaluate sparse networks (99% or 90%) learned with adversarial training (objective: AT) on CIFAR-10 using VGG-16. [Panels as in Figure 7; plots omitted.]
Figure 9: Comparisons (accuracy given the number of epochs) with BSR-Net (Özdenizci & Legenstein, 2021). We evaluate sparse networks (99% or 90%) learned with natural training on CIFAR-10 using Wide-ResNet-28-4. [Panels (a)-(b): 90% sparsity; panels (c)-(d): 99% sparsity; plots omitted.]

Figure 10: Comparisons (accuracy given the number of epochs) with BSR-Net (Özdenizci & Legenstein, 2021). We evaluate sparse networks (99% or 90%) learned with adversarial training (objective: AT) on CIFAR-10 using Wide-ResNet-28-4. [Panels as in Figure 9; plots omitted.]

Figure 11: Comparisons (accuracy given the number of epochs) with BSR-Net (Özdenizci & Legenstein, 2021). We evaluate sparse networks (99% or 90%) learned with natural training on SVHN using VGG-16. [Panels as in Figure 9; plots omitted.]
Figure 12: Comparisons (accuracy given the number of epochs) with BSR-Net (Özdenizci & Legenstein, 2021). We evaluate sparse networks (99% or 90%) learned with adversarial training (objective: TRADES) on SVHN using VGG-16. [Panels (a)-(b): 90% sparsity; panels (c)-(d): 99% sparsity; plots omitted.]

Figure 13: Training curves (accuracy given the number of epochs) of BSR-Net-based models (Özdenizci & Legenstein, 2021). Sparse networks (99%) are learned in standard setups on (a) CIFAR-100 using VGG-16, (b) SVHN using VGG-16, (c) CIFAR-100 using WRN-28-4, and (d) SVHN using WRN-28-4. [Plots omitted.]

Figure 14: Training curves (required epochs to reach a given accuracy) of BSR-Net-based models (Özdenizci & Legenstein, 2021). Dense networks are learned in (a) standard and (b) adversarial (AT) setups on CIFAR-10 using VGG-16. [Plots omitted.]
Figure 15: Training curves (required epochs to reach a given accuracy) of ITOP-based models (Liu et al., 2021) at (a) 80% and (b) 90% sparsity. Sparse networks are learned in a standard setup on ImageNet-2012 using ResNet-50.

In Figure 14, we also compare the convergence speed without sparsity. We show a BSR-Net-based result, where a dense network is learned by adversarial training (AT) and by standard training on CIFAR-10 using VGG-16. The blue curve of our A-BSR-Net tends to lie above the yellow curve of BSR-Net, indicating successful acceleration. This demonstrates the broad applicability of our method. We then show ITOP-based results on ImageNet-2012. As shown in Figure 15, the red and blue curves represent AGENT + RigL-ITOP and RigL-ITOP on 80% and 90% sparse ResNet-50, respectively. For 80% sparsity, the red curve is above the blue curve, demonstrating the acceleration effect of our AGENT, especially in the early stages. For 90% sparsity, the red curve is more stable than the blue curve, which shows the stabilizing effect of our AGENT on large datasets, a slightly different manifestation of its strengths.
If we used SVRG in this case, training would not only be unstable but also slower. In contrast, our AGENT overcomes this limitation of SVRG. For other sparsity levels, we can likewise expect advantages from our AGENT in terms of acceleration or stability. Moreover, we can expect more significant speedups at different sparsity levels with more hyperparameter tuning, as the speedups are guaranteed by theoretical proofs.

B.2 Number of Training Epoch Comparisons

We also compare the number of training epochs required to reach the same accuracy in BSR-Net-based results. In Figures 16, 17, 18, 19, 20, 21, and 22, the blue curves (A-BSR-Net) are always lower than the yellow curves (BSR-Net and BSR-Net (ori)), indicating faster convergence of A-BSR-Net.

Figure 16: Comparisons (required hours to reach a given accuracy). We evaluate sparse networks (99%) learned with natural training on CIFAR-100 using (a) Wide-ResNet-28-4 and (b) ResNet-18.

Figure 17: Comparisons (required hours to reach a given accuracy). We evaluate sparse networks (99% or 90%) learned with natural training on CIFAR-10 using VGG-16.

Figure 18: Comparisons (required hours to reach a given accuracy). We evaluate sparse networks (99% or 90%) learned with adversarial training (objective: AT) on CIFAR-10 using VGG-16.
Figure 19: Comparisons (required hours to reach a given accuracy). We evaluate sparse networks (99% or 90%) learned with natural training on CIFAR-10 using Wide-ResNet-28-4.

Figure 20: Comparisons (required hours to reach a given accuracy). We evaluate sparse networks (99% or 90%) learned with adversarial training (objective: AT) on CIFAR-10 using Wide-ResNet-28-4.

Figure 21: Comparisons (required hours to reach a given accuracy). We evaluate sparse networks (99% or 90%) learned with natural training on SVHN using VGG-16.

Figure 22: Comparisons (required hours to reach a given accuracy). We evaluate sparse networks (99% or 90%) learned with adversarial training (objective: TRADES) on SVHN using VGG-16.

B.3 Scaling Parameter Setting

The choice of the scaling parameter γ is important to the acceleration and can be seen as a hyperparameter tuning process. We experiment with different values of γ and find that setting γ = 0.1 is a good choice for effective acceleration of training.
The presented results are based on sparse networks (99%) learned with adversarial training (objective: AT) on CIFAR-10 using VGG-16. As shown in Figure 23 (a), we compare the training curves (testing accuracy at different epochs) of A-BSR-Net (γ = 0.1), A-BSR-Net (γ = 0.5), and BSR-Net. The yellow curve for A-BSR-Net (γ = 0.5) collapses after around 40 epochs of training, indicating model divergence. The reason is that setting γ closer to 1, e.g., 0.5, does not completely avoid the increase in variance, and the increased variance leads to a decrease in performance, similar to "No γ" in Section 5.4 of the manuscript. As shown in Figure 23 (b), we compare the training curves (testing accuracy at different epochs) of A-BSR-Net (γ = 0.1), A-BSR-Net (γ = 0.01), and BSR-Net. The yellow curve for A-BSR-Net (γ = 0.01) is below the blue curve for A-BSR-Net (γ = 0.1), indicating slower convergence. The reason is that if γ is set small, such as 0.01, the weight of the old gradients will be small. The old gradients then have limited influence on the update direction of the model, which tends to slow down convergence and can sometimes lead to more training instability.
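To make the role of γ and c_t concrete, below is a minimal sketch of the damped gradient correction described above. The function and variable names (corrected_gradient, g_old_avg, and so on) are our own illustration, not the released implementation; the stored quantities follow the control-variate form used throughout the paper.

```python
def corrected_gradient(g_new, g_old_batch, g_old_avg, c_t, gamma=0.1):
    """Sketch of the damped gradient correction (illustrative names).

    g_new       : gradient of the current mini-batch at the current weights
    g_old_batch : stored gradient of the same mini-batch at the old weights
    g_old_avg   : stored average gradient at the old weights
    c_t         : approximation of Cov(g_new, g_old) / Var(g_old)
    gamma       : scaling parameter; damping c_t limits the extra variance
                  introduced when c_t is over-estimated
    """
    c = gamma * c_t
    # Control-variate update: gamma = 1 would trust c_t fully, while a very
    # small gamma (e.g., 0.01) nearly ignores the old-gradient information.
    return g_new - c * (g_old_batch - g_old_avg)
```

Under this form, the collapse at γ = 0.5 and the slowdown at γ = 0.01 correspond to trusting the old-gradient correction too much or too little, respectively.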
Figure 23: Comparisons (testing accuracy given the number of epochs) with different scaling parameters in BSR-Net-based models (Özdenizci & Legenstein, 2021). We evaluate sparse networks (99%) learned with adversarial training (objective: AT) on CIFAR-10 using VGG-16. (a) Scaling parameter 0.1 or 0.5; (b) scaling parameter 0.1 or 0.01.

B.4 Other Variance Reduction Method Comparisons

We also include more results comparing our AGENT with stochastic variance reduced gradient (SVRG) (Baker et al., 2019; Chen et al., 2019; Zou et al., 2018), a popular variance reduction method in the non-sparse case, to show the limitations of previous methods.

B.4.1 BSR-Net-based Results

The presented results are based on sparse networks (99%) learned with adversarial training (objective: AT) on CIFAR-10 using VGG-16.
As presented in Figure 24, we show the training curves (testing accuracy at different epochs) of A-BSR-Net, BSR-Net, and BSR-Net using SVRG. The yellow curve for BSR-Net using SVRG rises to around 0.4 and then rapidly decreases to a small value around 0.1, indicating model divergence. This demonstrates that SVRG does not work for sparse training. The blue curve for our A-BSR-Net is always above the green curve for BSR-Net, indicating successful acceleration.

Figure 24: Comparisons (testing accuracy given the number of epochs) with different variance reduction methods in BSR-Net-based models (Özdenizci & Legenstein, 2021). We evaluate sparse networks (99%) learned with adversarial training (objective: AT) on CIFAR-10 using VGG-16.
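For reference, the SVRG baseline in this comparison follows the standard two-loop scheme, in which the coefficient on the old-gradient correction is implicitly fixed to c = 1 (exactly the choice Section B.6 argues is too aggressive under sparsity). A minimal PyTorch-style sketch is given below; model, loss_fn, and train_loader are placeholders, not names from the released code.

```python
import copy
import torch

def svrg_outer_iteration(model, loss_fn, train_loader, lr=0.01):
    # Snapshot the weights and accumulate the average gradient over all batches.
    snapshot = copy.deepcopy(model)
    avg_grad = [torch.zeros_like(p) for p in snapshot.parameters()]
    n_batches = 0
    for x, y in train_loader:
        snapshot.zero_grad()
        loss_fn(snapshot(x), y).backward()
        for g, p in zip(avg_grad, snapshot.parameters()):
            g += p.grad
        n_batches += 1
    for g in avg_grad:
        g /= n_batches

    # Inner loop: variance-reduced steps with the coefficient fixed to 1.
    for x, y in train_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        snapshot.zero_grad()
        loss_fn(snapshot(x), y).backward()
        with torch.no_grad():
            for p, p_old, mu in zip(model.parameters(),
                                    snapshot.parameters(), avg_grad):
                p -= lr * (p.grad - p_old.grad + mu)
```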
B.4.2 RigL-based Results

The presented results are based on sparse networks (90%) learned with standard training on CIFAR-100 using ResNet-50. As presented in Figure 25, we show the training curves (testing accuracy at different epochs) of A-RigL, RigL, and RigL using SVRG. The yellow curve for RigL using SVRG is always below the other two curves, indicating slower model convergence. This again demonstrates that SVRG does not work for sparse training. The blue curve for our A-RigL is always on top of the green curve for RigL, indicating that the speedup is successful.

Figure 25: Comparisons (testing accuracy given the number of epochs) with different variance reduction methods in RigL-based models (Evci et al., 2020). We evaluate sparse networks (90%) learned with standard training on CIFAR-100 using ResNet-50.

Table 6: Comparison with BSR-Net (Özdenizci & Legenstein, 2021) and HYDRA (Sehwag et al., 2020). Evaluations of sparse networks learned with robust training objectives (TRADES) on SVHN using VGG-16 and WideResNet-28-4.
Evaluations are after full training (200 epochs) and presented as clean/robust accuracy (%). Robust accuracy is evaluated via PGD50 with 10 restarts and ϵ = 8/255.

                          BSR-Net      HYDRA        Ours
90% Sparsity  VGG-16      89.4/53.7    89.2/52.8    94.4/51.9
              WRN-28-4    92.8/55.6    94.4/43.9    95.5/46.2
99% Sparsity  VGG-16      86.4/48.7    84.4/47.8    90.9/47.9
              WRN-28-4    89.5/52.7    88.9/39.1    92.2/51.1

B.5 Final Accuracy Comparisons

We also provide additional BSR-Net-based results for the final accuracy comparison. In addition to the BSR-Net and A-BSR-Net in the manuscript, we also include HYDRA, another SOTA sparse and adversarial training pipeline. The models are trained on SVHN using VGG-16 and WideResNet-28-4 (WRN-28-4). The final results for BSR-Net and HYDRA are obtained from Özdenizci & Legenstein (2021) using their original learning rate schedules. As shown in Table 6, it is encouraging to note that our method tends to be the best in all cases when given clean test samples. In terms of robustness, our A-BSR-Net beats HYDRA in most cases, while experiencing a performance degradation compared to BSR-Net.

B.6 Gradient Change Speed & Sparsity Level

In sparse training, when there is a small change in the weights, the gradient changes faster than in dense training. This phenomenon can be expressed as a low correlation between the current and previous gradients, making existing variance reduction methods ineffective. We first demonstrate this lower correlation from an intuitive point of view. Considering the weights on which the current and previous gradients were calculated, there are three cases to discuss in sparse training when the masks of the current and previous gradients are different.
First, if the current weights are pruned, we do not need to consider their correlation, because we do not need to update the current weights using the corresponding previous weights. Second, if the current weights are not pruned but the previous weights are pruned, the previous weights are zero and the difference between the two weights is relatively large, leading to a lower correlation. Third, even if neither the current nor the previous weights are pruned, which weights are pruned can still change significantly, leading to large changes between the current and previous models, so the correlation between the current and previous gradients of the weights will be relatively small. Thus, it is not a good idea to set c = 1 directly in sparse training, which can even increase the variance and slow down the convergence. When the masks of the current and previous gradients are the same, the correlation still tends to be weaker. As we know, c*_t = Cov(g_new, g_old) / Var(g_old). Even if Cov(g_new, g_old) does not decrease, the variance Var(g_old) increases in sparse training, leading to a decrease in c*_t.
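As a concrete illustration of this quantity, the following sketch estimates c*_t from two flattened gradient vectors. Treating the per-coordinate values as samples for the covariance and variance is our own simplification for illustration, not necessarily the estimator used in the released code.

```python
import torch

def estimate_c(g_new, g_old, eps=1e-12):
    """Empirical estimate of c*_t = Cov(g_new, g_old) / Var(g_old)."""
    g_new, g_old = g_new.flatten(), g_old.flatten()
    cov = ((g_new - g_new.mean()) * (g_old - g_old.mean())).mean()
    var = g_old.var(unbiased=False)
    return (cov / (var + eps)).item()  # eps guards against an all-zero g_old
```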
Apart from the analysis above, we also run experiments to demonstrate that the gradient changes faster as the sparsity increases. To measure the rate of change, we proceed as follows. We begin with fully-trained checkpoints from ResNet-50 on CIFAR-100 with RigL and SET at 0%, 50%, 80%, 90%, and 95% sparsity. We calculate and store the gradient of each weight on all training data. Then, we add Gaussian perturbations (std = 0.015) to all the weights and calculate the gradients again. Lastly, we calculate the correlation between the gradients of the new perturbed weights and those of the old original weights. As we know, there is always a difference between the old and new weights. If the gradients become very different after adding some small noise to the weights, the new and old gradients will tend to have a smaller correlation; if the gradients do not change much, they will have a higher correlation. Thus, we add Gaussian noise to the weights to simulate the difference between the new and old gradients. As shown in Table 7, the correlation decreases with increasing sparsity, which indicates a weaker correlation in sparse training and supports our claim.

Table 7: Correlation between the gradients of the new perturbed weights and those of the old original weights from ResNet-50 on CIFAR-100 produced by RigL and SET at different sparsities (0%, 50%, 80%, 90%, and 95%).

Sparsity                        0%       50%      80%      90%      95%
ResNet-50, CIFAR-100 (RigL)     0.6005   0.4564   0.3217   0.1886   0.1590
ResNet-50, CIFAR-100 (SET)      0.6005   0.4535   0.2528   0.1763   0.1195
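The correlations in Table 7 can be reproduced along the following lines. This is a sketch in which checkpoint loading and the data pipeline are elided, and gradient_correlation is a hypothetical helper name of our own.

```python
import torch

def gradient_correlation(model, loss_fn, data_loader, std=0.015):
    """Correlation between full-data gradients before and after perturbing the weights."""
    def full_gradient(m):
        grads = [torch.zeros_like(p) for p in m.parameters()]
        for x, y in data_loader:
            m.zero_grad()
            loss_fn(m(x), y).backward()
            for g, p in zip(grads, m.parameters()):
                g += p.grad
        return torch.cat([g.flatten() for g in grads])

    g_old = full_gradient(model)
    with torch.no_grad():
        for p in model.parameters():
            p += std * torch.randn_like(p)  # Gaussian perturbation, std = 0.015
    g_new = full_gradient(model)
    # Pearson correlation between the two flattened gradient vectors
    return torch.corrcoef(torch.stack([g_old, g_new]))[0, 1].item()
```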
B.7 Comparison between True Correlation & Our Approximation

In this section, to test how well our approximation estimates the true optimal c, we empirically compare the approximation c* in Eq. (4) (in the main manuscript) with the correlation between the gradient of the current weights and the gradient of the previous epoch's weights. As shown in Figure 26, the yellow and blue curves represent the approximation c* and the correlation, respectively. The two curves tend to have similar up-and-down patterns, and the yellow curves usually have a larger magnitude. This suggests that our approximation of c captures the dynamic patterns of the correlation; the larger magnitude can be matched by our scaling parameter.
Figure 26: Comparisons between the approximation c* and the correlation between the gradient of the current weights and the gradient of the previous epoch's weights. We evaluate sparse networks learned with RigL-based standard training on CIFAR-10 using ResNet-50 with (a) 90% sparsity and (b) 99% sparsity.

B.8 Variants of RigL

RigL is one of the most popular dynamic sparse training pipelines; it uses weight magnitude for pruning and gradient magnitude for growing. Our method adaptively updates the new batch gradient using the old stored gradient, which usually has less noise. As a result, the variance of the new batch gradient is reduced, leading to faster convergence. Currently, we only use gradients with corrected variance in the weight updates. A natural question is how the method performs if we also use this variance-corrected gradient for weight growth in RigL. We run experiments on RigL-based models trained on CIFAR-10. As shown in Figure 27, the blue curves (RigL-ITOP-G) and the yellow curves (RigL-ITOP) correspond to weight growth with and without the variance-corrected gradient, respectively. In the initial stage, the blue curves are higher than the yellow curves, but after the first learning rate decay, they tend to fall below the yellow curves. This suggests that weight growth using a variance-corrected gradient at the beginning of training can help the model improve accuracy faster.
However, it may lead to a slight decrease in accuracy in the later training stages. This may be because some variance in the gradient can help the model explore local regions better and find better masks as it approaches its optimal point.

Figure 27: Comparisons (testing accuracy given the number of epochs) between weight growth with (RigL-ITOP-G) and without (RigL-ITOP) the variance-corrected gradient (Liu et al., 2021). We evaluate sparse networks (99%) learned with standard training on CIFAR-10 using (a) VGG-C and (b) ResNet-34.
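To clarify the variant, a sketch of the modified growth step is given below: vanilla RigL ranks inactive connections by the magnitude of the batch gradient, and the RigL-ITOP-G variant simply feeds the variance-corrected gradient into the same criterion. The function name and the 0/1 mask convention are illustrative assumptions.

```python
import torch

def grow_connections(mask, grad, n_grow):
    """RigL-style growth: activate the n_grow currently inactive connections
    with the largest gradient magnitude. Passing the variance-corrected
    gradient as `grad` gives the RigL-ITOP-G variant discussed above."""
    scores = grad.abs() * (mask == 0)        # only inactive connections compete
    _, idx = torch.topk(scores.flatten(), n_grow)
    new_mask = mask.clone()
    new_mask.view(-1)[idx] = 1.0             # newly grown weights become active
    return new_mask
```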
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' As shown in Table 8, reducing the learning rate can lead to a comparable convergence rate in the early stage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' However, it slows down the later stages of training and leads to sub-optimal final accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' The reason is that it reduces both signal and noise, and therefore does not improve the signal-to-noise ratio or speed up the sparse training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' The motivation of γ is to avoid introducing large variance due to error in approximating ct and bias due to the adversarial training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' The true correlation depends on many factors such as the dataset, architecture, and sparsity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In some cases, it can be greater or smaller than 10%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' For the value of γ, it is a hyperparameter and we can choose different values for different settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' In our case, for simplicity, we choose γ = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='1 for all the settings, and find that it works well and accelerates the convergence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' If we tune the value of γ for different settings according to their corresponding correlations, it is possible to obtain faster convergence rates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Table 8: Testing accuracy (%) of SET-ITOP-based models for AGENT (ours) and "Reduce LR".' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Sparse VGG-C and ResNet-34 are learned in standard setups.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content=' Epoch 20 80 130 180 240 Reduce LR (VGG-C, SET-ITOP) 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='5 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='3 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='6 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQf-waO/content/2301.03573v1.pdf'} +page_content='5 85.' 
B.10 Comparison with Momentum-based Methods

Momentum-based approaches work well in general, but they still suffer from optimization difficulties under sparsity constraints. For example, in our baseline SGD, following the original code base, we also add momentum to the optimizer. However, as shown by the pink curves in Figure 2, it still exhibits training instability and convergence problems. The reason is that these methods do not take into account the characteristics of sparse and adversarial training and cannot provide an adaptive balance between old and new information.

Our method AGENT is designed for sparse and adversarial training and establishes finer control over how much information should be taken from the old gradient to help the new one. To demonstrate the importance of this fine-grained adaptive balance, we conduct ablation studies in Section 6.4. In "Fixed c_t", we set c_t = 0.1 and test the convergence rate without the adaptive control. We find that the adaptive balance (ours) outperforms "Fixed c_t" in almost all cases, especially in adversarial training. For standard training, "Fixed c_t" provides convergence rates similar to our method, while ours tends to achieve better final scores.
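To make the distinction concrete, the following sketch contrasts the adaptive balance with the "Fixed c_t" ablation. The estimator and the toy correction term are illustrative assumptions, not the paper's exact formulas:

    import torch

    def estimate_correlation(g_new, g_old, eps=1e-12):
        # One plausible estimator (an assumption, not the authors' exact
        # one): cosine-style correlation between flattened gradients.
        return (g_new * g_old).sum() / (g_new.norm() * g_old.norm() + eps)

    def blend(g_new, g_correction, c_t, gamma=0.1):
        # Mix a gamma * c_t fraction of the old information into the new gradient.
        return g_new - gamma * c_t * g_correction

    g_new, g_old = torch.randn(1000), torch.randn(1000)  # toy gradients
    g_correction = g_old - g_old.mean()                  # stand-in for the SVRG term

    # Adaptive (AGENT): c_t follows the measured gradient correlation.
    g_adaptive = blend(g_new, g_correction, estimate_correlation(g_new, g_old))

    # Ablation ("Fixed c_t"): the balance is frozen at 0.1 for every step.
    g_fixed = blend(g_new, g_correction, c_t=0.1)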
C Additional Details about Experiment Settings

C.1 Gradient Variance and Correlation Calculation

We calculate the gradient variance and correlation of ResNet-50 on CIFAR-100 for RigL (Evci et al., 2020) and SET (Mocanu et al., 2018) at different sparsities, including 0%, 50%, 80%, 90%, and 95%. The calculation is based on the checkpoints from Sundar & Dwaraknath (2021).

Gradient variance: We first load fully trained checkpoints for the 0%, 50%, 80%, 90%, and 95% sparse models. Then, to examine the gradient variance around the converged optimum, we add small perturbations to the weights and compute the mean of the gradient variance. For each checkpoint, we run three replicates.

Gradient correlation: We begin with fully trained checkpoints at 0%, 50%, 80%, 90%, and 95% sparsity. We calculate and store the gradient of each weight on all training data. Then, we add Gaussian perturbations to all the weights and calculate the gradients again. Lastly, we calculate the correlation between the gradients at the new perturbed weights and at the old original weights. For each checkpoint, we run three replicates.
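A minimal sketch of this measurement loop is shown below; `model`, `loss_fn`, and `batches` (a list of (x, y) pairs) are assumed to exist, and the perturbation scale sigma is our own illustrative choice:

    import torch

    def flat_grad(model, loss_fn, x, y):
        # Gradient of the loss on one batch, flattened to a single vector.
        model.zero_grad()
        loss_fn(model(x), y).backward()
        return torch.cat([p.grad.detach().flatten()
                          for p in model.parameters() if p.grad is not None])

    def perturb(model, sigma=0.01):
        # Small Gaussian perturbation of all weights (sigma is illustrative).
        with torch.no_grad():
            for p in model.parameters():
                p.add_(sigma * torch.randn_like(p))

    # Gradient variance: spread of per-batch gradients around their mean.
    grads = torch.stack([flat_grad(model, loss_fn, x, y) for x, y in batches])
    grad_variance = grads.var(dim=0).mean()

    # Gradient correlation: gradients before vs. after perturbing the weights.
    g_old = grads.mean(dim=0)
    perturb(model)
    g_new = torch.stack([flat_grad(model, loss_fn, x, y) for x, y in batches]).mean(dim=0)
    grad_corr = torch.corrcoef(torch.stack([g_old, g_new]))[0, 1]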
C.2 Implementations

In the BSR-Net-based results, aligned with the choices of Özdenizci & Legenstein (2021), the gradients for all models are calculated by SGD with momentum and decoupled weight decay (Loshchilov & Hutter, 2019). All models are trained for 200 epochs with a batch size of 128.

In the RigL-based results, we follow the settings in Evci et al. (2020) and Sundar & Dwaraknath (2021). We train all models for 250 epochs with a batch size of 128, and parameters are optimized by SGD with momentum.

In the ITOP-based results, we follow the settings in Liu et al. (2021). For CIFAR-10 and CIFAR-100, we train all models for 250 epochs with a batch size of 128. For ImageNet-2012, we train all models for 100 epochs with a batch size of 64. Parameters are optimized by SGD with momentum.

C.3 Learning Rate

Aligned with popular sparse training methods (Evci et al., 2020; Özdenizci & Legenstein, 2021; Liu et al., 2021), we choose piecewise constant decay schedules for the learning rate and weight decay. In our A-BSR-Net, we use the 50th and 100th epochs as the dividing points of the learning rate decay schedule, since our approach converges faster and does not require a long warm-up period. In the evaluation shown in the manuscript, we also use this schedule for BSR-Net for a more accurate and fair comparison.
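In PyTorch, this optimizer-plus-schedule setup could be sketched as follows. Only the milestones at epochs 50 and 100 come from the text; the initial learning rate, momentum, weight-decay value, and decay factor are illustrative assumptions, and PyTorch's SGD couples weight decay with the gradient, so this only approximates decoupled weight decay:

    import torch

    model = torch.nn.Linear(10, 10)  # placeholder for the actual sparse network

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=5e-4)

    # Piecewise constant decay with dividing points at epochs 50 and 100.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[50, 100], gamma=0.1)

    for epoch in range(200):
        # ... run one training epoch here ...
        scheduler.step()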
C.4 Initialization (BSR-Net-based results)

Consistent with Özdenizci & Legenstein (2021), we use Kaiming initialization (He et al., 2015) to initialize the network weights.
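For reference, this corresponds to the standard PyTorch initializer below; the fan mode and ReLU nonlinearity are the common defaults, assumed here rather than taken from the paper:

    import torch.nn as nn

    def init_weights(m):
        # Kaiming (He) initialization for convolutional and linear layers.
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
            if m.bias is not None:
                nn.init.zeros_(m.bias)

    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(),
                          nn.Flatten(), nn.Linear(16 * 30 * 30, 10))
    model.apply(init_weights)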
C.5 Benchmark Datasets (BSR-Net-based results)

For a fair comparison, we choose the same benchmark datasets as Özdenizci & Legenstein (2021). Specifically, we use CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) and SVHN (Netzer et al., 2011) in our experiments. The CIFAR-10 and CIFAR-100 datasets each include 50,000 training and 10,000 test images. The SVHN dataset includes 73,257 training and 26,032 test samples.

C.6 Data Augmentation

We follow the popular data augmentation scheme used in Özdenizci & Legenstein (2021) and He et al. (2016). In particular, we randomly shift the images to the left or right, crop them back to their original size, and flip them horizontally. In addition, all pixel values are normalized to the range [0, 1].
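In torchvision, this scheme can be written as follows; the 4-pixel padding is the conventional choice for 32x32 CIFAR/SVHN images and is an assumption here:

    from torchvision import transforms

    # Random shift + crop back to 32x32, horizontal flip, and scaling of
    # pixel values to [0, 1] (ToTensor divides by 255).
    train_transform = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])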
D Sparse Training Method Description

D.1 Bayesian Sparse Robust Training

Bayesian Sparse Robust Training (BSR-Net) (Özdenizci & Legenstein, 2021) is a Bayesian sparse and robust training pipeline. Based on a Bayesian posterior sampling principle, a network rewiring process simultaneously learns the sparse connectivity structure and the robustness-accuracy trade-off under the adversarial learning objective. More specifically, regarding its mask update, it prunes all negative weights and grows new weights at random locations.
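A schematic of this prune-and-grow mask update might look as follows. This is a simplification of the actual BSR-Net rewiring step; the tensor shapes and the uniform sampling of growth candidates are assumptions:

    import torch

    def bsr_mask_update(weight, mask):
        # Prune: deactivate every connection whose weight is negative.
        new_mask = mask & (weight > 0)
        # Grow: reactivate an equal number of inactive connections chosen
        # uniformly at random, keeping the overall sparsity level fixed.
        flat = new_mask.view(-1)
        n_grow = int(mask.sum()) - int(flat.sum())
        inactive = torch.nonzero(~flat).flatten()
        chosen = inactive[torch.randperm(inactive.numel())[:n_grow]]
        flat[chosen] = True
        return new_mask

    weight = torch.randn(64, 64)       # toy layer weights
    mask = torch.rand(64, 64) < 0.1    # 90% sparse mask
    mask = bsr_mask_update(weight, mask)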
E Limitations of Our Adaptive Gradient Correction Method

E.1 Extra FLOPs

Similar to SVRG, our ADSVRG increases the training FLOPs in each iteration because of the extra forward and backward passes used to compute the old gradients. However, the true computational difference can be smaller, and the GPU-based running time of SVRG is not affected that much. For example, in the adversarial setting, additional computation is needed to generate the adversarial samples, which is time-consuming and only needs to be done once per iteration for both our AVR and SGD. For BSR-Net, we empirically find that the ratio of time required per iteration between our AVR and SGD is about 1.2.

There are also several methods to reduce the extra computation caused by SVRG. The first is to use the sparse gradients proposed by Elibol et al. (2020), which can effectively reduce the computational cost of SVRG and can easily be applied to our method. The second is suggested by Allen-Zhu & Hazan (2016): the extra cost of computing the batch gradient at the old model parameters is fully parallelizable, so SVRG can be viewed as doubling the mini-batch size. Third, we can follow the idea of SAGA (Defazio et al., 2014) and store gradients for individual samples. In this way, the extra forward and backward pass is no longer needed, which saves computation, but extra memory is required to store the gradients.

In the main manuscript, we compare the convergence speed of our ADSVRG and SGD for the same number of data passes (epochs), which is widely used as a criterion for comparing SVRG-based optimization with SGD (Allen-Zhu & Hazan, 2016; Chatterji et al., 2018; Zou et al., 2018; Cutkosky & Orabona, 2019). A comparison of this kind demonstrates the accelerating effect of the optimization method and provides inspiration for future work.

E.2 Scaling Parameter Tuning

In our adaptive variance reduction method (AVR), we add a scaling parameter γ that needs to be adjusted. We find that setting γ = 0.1 is a good choice for BSR-Net, RigL, and ITOP. However, the best value can differ for other sparse training pipelines.

E.3 Robust Accuracy Degradation

For the final accuracy results of BSR-Net-based models, there is a small decrease in robust accuracy after using our AVR. It remains an open question how to further improve robust accuracy when using adaptive variance reduction in sparse and adversarial training.