Efficient On-device Training via Gradient Filtering

Yuedong Yang, Guihong Li, Radu Marculescu
The University of Texas at Austin
{albertyoung, lgh, radum}@utexas.edu

arXiv:2301.00330v1 [cs.CV] 1 Jan 2023

Abstract

Despite its importance for federated learning, continuous learning and many other applications, on-device training remains an open problem for EdgeAI. The problem stems from the large number of operations (e.g., floating point multiplications and additions) and memory consumption required during training by the back-propagation algorithm. Consequently, in this paper, we propose a new gradient filtering approach which enables on-device DNN model training.
More precisely, our approach creates a special structure with fewer unique elements in the gradient map, thus significantly reducing the computational complexity and memory consumption of back propagation during training. Extensive experiments on image classification and semantic segmentation with multiple DNN models (e.g., MobileNet, DeepLabV3, UPerNet) and devices (e.g., Raspberry Pi and Jetson Nano) demonstrate the effectiveness and wide applicability of our approach. For example, compared to SOTA, we achieve up to 19× speedup and 77.1% memory savings on ImageNet classification with only 0.1% accuracy loss.
Finally, our method is easy to implement and deploy; over 20× speedup and 90% energy savings have been observed compared to highly optimized baselines in MKLDNN and CUDNN on NVIDIA Jetson Nano. Consequently, our approach opens up a new direction of research with a huge potential for on-device training.

1. Introduction

Existing approaches for on-device training are neither efficient nor practical enough to satisfy the resource constraints of edge devices (Figure 1). This is because these methods do not properly address a fundamental problem in on-device training, namely the computational and memory complexity of the back-propagation (BP) algorithm. More precisely, although architecture modification [6] and layer freezing [19, 21] can help skip BP for some layers, the complexity remains high for the remaining layers.
Gradient quantization [4, 7] can reduce the cost of arithmetic operations but cannot reduce the number of operations (e.g., multiplications); thus, the speedup in training remains limited. Moreover, gradient quantization is not supported by existing deep-learning frameworks (e.g., CUDNN [9], MKLDNN [1], PyTorch [26] and TensorFlow [2]). To enable on-device training, two important questions must be addressed:

- How can we reduce the computational complexity of back propagation through the convolution layers?
- How can we reduce the data required by the gradient computation during back propagation?
In this paper, we propose gradient filtering, a new research direction, to address both questions. By addressing the first question, we reduce the computational complexity of training; by addressing the second question, we reduce the memory consumption. In general, the gradient propagation through a convolution layer involves multiplying the gradient of the output variable with a Jacobian matrix constructed with data from either the input feature map or the convolution kernel. We aim at simplifying this process with the new gradient filtering approach proposed in Section 3. Intuitively, if the gradient map w.r.t.
the output has the same value for all entries, then the computation-intensive matrix multiplication can be greatly simplified, and the data required to construct the Jacobian matrix can be significantly reduced. Thus, our gradient filtering can approximate the gradient w.r.t. the output by creating a new gradient map with a special (i.e., spatial) structure and fewer unique elements. By doing so, the gradient propagation through the convolution layers reduces to cheaper operations, while the data required (hence memory) for the forward propagation also lessens. Through this filtering process, we trade off the gradient precision against the computation complexity during BP.
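To make the intuition concrete, here is a minimal NumPy sketch of such a patch-wise gradient filter; the function name and the plain mean-pooling formulation are ours, for illustration only, not the paper's released code:

```python
import numpy as np

def gradient_filter(g_y: np.ndarray, r: int) -> np.ndarray:
    """Approximate a gradient map g_y (H x W) by replacing each
    non-overlapping r x r patch with its mean, so the result has only
    (H/r) * (W/r) unique entries instead of H * W."""
    H, W = g_y.shape
    assert H % r == 0 and W % r == 0, "H and W must be divisible by r"
    # Group the map into r x r patches and average each one...
    patches = g_y.reshape(H // r, r, W // r, r)
    means = patches.mean(axis=(1, 3))            # (H/r, W/r) patch averages
    # ...then broadcast each average back over its patch.
    return np.repeat(np.repeat(means, r, axis=0), r, axis=1)

g = np.arange(16, dtype=np.float64).reshape(4, 4)
g_tilde = gradient_filter(g, 2)
# The top-left 2x2 patch [[0, 1], [4, 5]] becomes 2.5 everywhere.
```

With r = 2, a 4 × 4 gradient map collapses to four unique values, which is exactly the structure the approximation exploits.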
We note that gradient filtering does not necessarily lead to worse precision, i.e., models sometimes perform better with filtered gradients when compared against models trained with vanilla BP.

In summary, our contributions are as follows:

- We propose gradient filtering, which reduces the computation and memory required for BP by more than two orders of magnitude compared to the exact gradient calculation.
- We provide a rigorous error analysis which shows that the errors introduced by the gradient filtering have only a limited influence on model accuracy.
- Our experiments with multiple DNN models and computer vision tasks show that we can train a neural network with significantly less computation and memory costs, with only a marginal accuracy loss compared to baseline methods.
Side-by-side comparisons against other training acceleration techniques also suggest the effectiveness of our method.

- Our method is easy to deploy with highly optimized deep learning frameworks (e.g., MKLDNN [1] and CUDNN [9]). Evaluations on resource-constrained edge devices (Raspberry Pi and Jetson Nano) and high-performance devices (CPU/GPU) show that our method is highly suitable for real-life deployment.

The paper is organized as follows. Section 2 reviews relevant work. Section 3 presents our method in detail. Section 4 discusses error analysis, computation and memory consumption. Experimental results are presented in Section 5.
Finally, Section 6 summarizes our main contributions.

2. Related Work

Architecture Modification: The authors of [6] propose to attach small branches to the original neural network. During training, the attached branches and the biases in the original model are updated. Though memory consumption is reduced, updating these branches still needs gradient propagation through the entire network; moreover, a large computational overhead for inference is introduced.

Layer Freezing: The authors of [19, 21] propose to train only parts of the model. [19] makes the layer selection based on layer importance metrics, while [21] uses evolutionary search. However, the layers selected by all these methods are typically computationally heavy layers (e.g.,
the last few layers in ResNet [15]) which consume most of the resources. Thus, the speedup achieved by these approaches is limited.

Gradient Quantization: [3, 5] quantize gradients after back-propagation, which means these methods cannot accelerate the training on a single device. Work in [4, 7, 16, 18, 29, 30, 34] accelerates training by reducing the cost of every arithmetic operation. However, these methods do not reduce the number of operations, which is typically huge for SOTA CNNs, so their achievable speedup is limited. Also, none of these methods is supported by the popular deep learning frameworks [1, 2, 9, 26].

In contrast to the prior work, our method opens up a new research direction. More precisely, we reduce the number of
computations and memory consumption required for training a single layer via gradient filtering.

Figure 1. Matrix of orthogonal directions for on-device training ("Arch." is short for "architecture"). Our approach opens up a new direction of research for on-device training for EdgeAI. The figure contrasts four directions along the efficiency and applicability axes:
- Arch. Modification (example: [6]). Drawbacks: large overhead; limited to specific models.
- Layer/Channel Freezing (examples: [19, 21]). Drawbacks: high search cost; limited to simple models.
- Gradient Quantization (examples: [4, 7, 16]). Drawbacks: not supported by existing DL frameworks.
- Gradient Filtering [Ours]. Advantages: very fast and accurate; well supported by existing DL frameworks.
Thus, our method can be combined with any of the methods mentioned above. For example, in Section G of the Supplementary, we illustrate how our method can work together with the gradient quantization methods to enable a higher speedup.

3. Proposed Method

In this section, we introduce our gradient filtering approach to accelerate BP. To this end, we target the most computation- and memory-heavy operation, i.e., convolution (Figure 2(a)). Table 1 lists the symbols we use.

Table 1. Symbols we use.

  Cx                Number of channels of x
  Wx, Hx            Width and height of x
  θ                 Convolution kernel
  θ′                Rotated θ, i.e., θ′ = rot180(θ)
  r                 Patch size (r × r)
  gx, gy, gθ        Gradients w.r.t. x, y, θ
  ˜gy               Approximated gradient gy
  ˜x, ˜θ′           Sums of x and θ′ over the spatial dimensions (height and width)
  x[n, ci, h, w]    Element of feature map x at batch n, channel ci, pixel (h, w)
  θ[co, ci, u, v]   Element of convolution kernel θ at output channel co, input channel ci, position (u, v)
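As a quick illustration of the θ′ = rot180(θ) entry in Table 1, the rotation simply flips the kernel along both spatial axes; a small NumPy sketch (variable names are ours):

```python
import numpy as np

def rot180(theta: np.ndarray) -> np.ndarray:
    """Rotate kernel theta[co, ci, u, v] by 180 degrees in the spatial
    dimensions (u, v); this mirrored kernel appears when back-propagating
    the gradient through a convolution layer."""
    return theta[..., ::-1, ::-1]

theta = np.arange(9.0).reshape(1, 1, 3, 3)   # one 3x3 kernel, values 0..8
theta_p = rot180(theta)
# The top-left spatial entry of theta' equals the bottom-right of theta.
```

Applying the rotation twice recovers the original kernel, as expected of a 180-degree rotation.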
Figure 2. (a) Computation procedures for the vanilla training method (upper) and our method (lower). Legend: ⊛ convolution; Ⓕ Frobenius inner product; ⊙ element-wise product; Ⓐ average filter (spatial sum); patch size r × r = 2 × 2.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' (b) Example of gradient propagation with gradient filtering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Numbers in this example are chosen randomly for illustration purposes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In this case, the patch size selected for the gradient filter is 2 × 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Thus, the 4 × 4 gradient map gy is approximated by ˜gy, which has four 2 × 2 patches with one unique value for each patch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Also, input feature map x and mirrored convolution kernel θ′ are spatial summed to ˜x and ˜θ′.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Since ˜x has fewer unique values than x, memory consumption is reduced.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Finally, with ˜gy, ˜x and ˜θ, we compute the gradient w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' kernel and input feature map with much fewer operations than the standard back propagation method.' 
3.1. Problem Setup

The computations for both the forward and backward paths are shown in Figure 2(a). In the standard (vanilla) approach (upper part of Figure 2(a)), starting with input $x$, the forward propagation convolves the input feature map $x$ with the kernel $\theta$ and returns the output $y$, which is further processed by the other layers in the neural network (dotted arrow) until the loss value $l$ is calculated. As shown in Figure 2(a), the back propagation (BP) of the convolution layer starts with the gradient map w.r.t. the output $y$, denoted $g_y$. The gradient w.r.t. the input ($g_x$) is calculated by convolving $g_y$ with the rotated convolution kernel $\theta'$, i.e., $g_x = g_y \circledast \mathrm{rot180}(\theta) = g_y \circledast \theta'$. The gradient w.r.t. the convolution kernel, namely $g_\theta$, is calculated with the Frobenius inner product [17] between $x$ and $g_y$, i.e., $g_\theta = g_y \circledast_F x$.

The lower half of Figure 2(a) shows our method, where several changes are made: we introduce the gradient filter Ⓐ after $g_y$ to generate the approximate gradient for BP. Also, instead of using the exact $x$ and $\theta'$ values for the gradient computation, we sum them over the spatial dimensions (height and width), obtaining $\tilde{x}$ and $\tilde{\theta}'$, respectively. Finally, the convolution layer now multiplies the approximate gradient $\tilde{g}_y$ with the spatially summed kernel $\tilde{\theta}'$, instead of convolving with it, to calculate $\tilde{g}_x$. Figure 2(b) shows an example of gradient propagation with our gradient filter.

3.2. Preliminary Analysis

Consider the vanilla BP for convolution in Figure 2(a). Equation (1) gives the number of operations (#FLOPs) required to calculate $g_x$ given $g_y$:

  #FLOPs $= 2 C_x C_y \cdot W_y H_y \cdot W_\theta H_\theta$   (1)

The factors in Equation (1) fall into three categories: the number of channels, the number of unique elements per channel in the gradient map, and the kernel size.
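To make Equation (1) concrete, here is a minimal sketch; the function name and the example dimensions (taken from the U-Net layer used later in Figure 3) are ours:

```python
def bp_input_grad_flops(c_x, c_y, h_y, w_y, h_theta, w_theta):
    """#FLOPs to compute g_x from g_y with vanilla BP, per Equation (1):
    one multiply and one add per kernel tap, per gradient-map position,
    per input/output channel pair."""
    return 2 * c_x * c_y * h_y * w_y * h_theta * w_theta

# Example: a layer with 192 input channels, 64 output channels,
# a 120 x 160 gradient map, and a 3 x 3 kernel.
print(bp_input_grad_flops(192, 64, 120, 160, 3, 3))  # 4246732800, i.e. ~4.2 GFLOPs
```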
Our method focuses on the last two categories.

i. Unique elements: $W_y H_y$ is the number of unique elements per channel in the gradient w.r.t. the output variable $y$ ($g_y$). Given the high-resolution images we use, this term is huge, so if we manage to reduce the number of unique elements in the spatial dimensions (height and width), the required computations are greatly reduced too.

ii. Kernel size: $W_\theta H_\theta$ is the number of unique elements in the convolution kernel. If the gradient $g_y$ has some special structure, for example $g_y = \mathbf{1}_{H_y \times W_y} \cdot v$ (i.e., every element in $g_y$ has the same value $v$), then the convolution can be simplified to $(\sum \theta')\, v\, \mathbf{1}_{H_y \times W_y}$ (with boundary elements ignored). With such a special structure, only one multiplication and $(W_\theta H_\theta - 1)$ additions are required. Moreover, $\sum \theta'$ is independent of the data, so the result can be shared across multiple images until $\theta$ gets updated.

3.3. Gradient Filtering

To reduce the number of unique elements and create the special structure in the gradient map, we apply the gradient filter after the gradient w.r.t. the output ($g_y$) is provided.
During the backward propagation, the gradient filter Ⓐ approximates the gradient $g_y$ by spatially cutting the gradient map into $r \times r$-pixel patches and then replacing all elements in each patch with their average value (Figure 2(b)):

  $\tilde{g}_y[n, c_o, h, w] = \frac{1}{r^2} \sum_{i=\lfloor h/r \rfloor r}^{\lceil h/r \rceil r} \sum_{j=\lfloor w/r \rfloor r}^{\lceil w/r \rceil r} g_y[n, c_o, i, j]$   (2)

For instance, in Figure 2(b) we replace the 16 distinct values in the gradient map $g_y$ with 4 average values in $\tilde{g}_y$. So, given a gradient map $g_y$ with $N$ images per batch, $C$ channels, and $H \times W$ pixels per channel, the gradient filter returns a structured approximation of the gradient map containing only $N \times C \times \lceil H/r \rceil \times \lceil W/r \rceil$ blocks, with one unique value per patch. We use this matrix of unique values to represent the approximate gradient map $\tilde{g}_y$, as shown in Figure 2(b).

3.4. Back Propagation with Gradient Filtering

We now describe the computation procedure used after applying the gradient filter. Detailed derivations are provided in Supplementary Section A.
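The patch averaging in Equation (2) can be sketched in pure Python for a single image and channel (batch and channel loops omitted; boundary patches smaller than $r \times r$ are here averaged over the elements they actually contain):

```python
def gradient_filter(g, r):
    """Approximate a 2D gradient map (list of lists) per Equation (2):
    split it into r x r patches and replace every element in a patch
    with the patch average."""
    h, w = len(g), len(g[0])
    out = [[0.0] * w for _ in range(h)]
    for ph in range(0, h, r):           # patch top-left corners
        for pw in range(0, w, r):
            patch = [g[i][j]
                     for i in range(ph, min(ph + r, h))
                     for j in range(pw, min(pw + r, w))]
            avg = sum(patch) / len(patch)
            for i in range(ph, min(ph + r, h)):
                for j in range(pw, min(pw + r, w)):
                    out[i][j] = avg
    return out
```

In a framework such as PyTorch, this roughly corresponds to average pooling with kernel and stride $r$ followed by nearest-neighbor upsampling; storing only the one value per patch is what yields the memory savings.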
Gradient w.r.t. input: The gradient w.r.t. the input is calculated by convolving $\theta'$ with $g_y$ (Figure 2(a)). With the approximate gradient $\tilde{g}_y$, this convolution simplifies to:

  $\tilde{g}_x[n, c_i, h, w] = \sum_{c_o} \tilde{g}_y[n, c_o, h, w] \odot \tilde{\theta}'[c_o, c_i]$   (3)

where $\tilde{\theta}'[c_o, c_i] = \sum_{u,v} \theta'[c_o, c_i, u, v]$ is the spatial sum of the convolution kernel $\theta$, as shown in Figure 2(b).

Gradient w.r.t. kernel: The gradient w.r.t. the kernel is calculated by taking the Frobenius inner product between $x$ and $g_y$, i.e., $g_\theta = x \circledast_F g_y$, namely:

  $g_\theta[c_o, c_i, u, v] = \sum_{n,i,j} x[n, c_i, i+u, j+v]\, g_y[n, c_o, i, j]$   (4)

With the approximate gradient $\tilde{g}_y$, this operation can be simplified to:

  $\tilde{g}_\theta[c_o, c_i, u, v] = \sum_{n,i,j} \tilde{x}[n, c_i, i, j]\, \tilde{g}_y[n, c_o, i, j]$   (5)

with $\tilde{x}[n, c_i, i, j] = \sum_{h=\lfloor i/r \rfloor r}^{\lceil i/r \rceil r} \sum_{w=\lfloor j/r \rfloor r}^{\lceil j/r \rceil r} x[n, c_i, h, w]$. As shown in Figure 2(b), $\tilde{x}[n, c_i, i, j]$ is the spatial sum of the elements of $x$ in the patch containing pixel $(i, j)$.

4. Analyses of Proposed Approach

In this section, we analyze our method from three perspectives: gradient filtering approximation error, computation reduction, and memory cost reduction.

4.1. Error Analysis of Gradient Filtering

We prove that the approximation error introduced by our gradient filtering is bounded during the gradient propagation. Without loss of generality, we consider that all variables have only one channel, i.e., $C_{x_0} = C_{x_1} = 1$.

Proposition 1: For any input-output channel pair $(c_o, c_i)$ in the convolution kernel $\theta$, assuming the DC component has the largest energy compared to all components in the spectrum¹, the signal-to-noise ratio (SNR) of $\tilde{g}_x$ is greater than the SNR of $\tilde{g}_y$.

Proof: We use $G_x$, $G_y$ and $\Theta$ to denote the gradients $g_x$, $g_y$ and the convolution kernel $\theta$ in the frequency domain; $G_x[u, v]$ is the spectrum value at frequency $(u, v)$ and $\delta$ is the 2D discrete Dirichlet function.
To simplify the discussion, we consider only one patch of size $r \times r$. The gradient returned by the gradient filtering can be written as:

  $\tilde{g}_y = \frac{1}{r^2} \mathbf{1}_{r \times r} \circledast g_y$   (6)

where $\circledast$ denotes convolution. By applying the discrete Fourier transform, Equation (6) can be rewritten in the frequency domain as:

  $\tilde{G}_y[u, v] = \frac{1}{r^2} \delta[u, v] G_y[u, v]$   (7)

$\tilde{g}_y$ is the approximation of $g_y$ (i.e., the ground truth for $\tilde{g}_y$ is $g_y$), so the SNR of $\tilde{g}_y$ equals:

  $\mathrm{SNR}_{\tilde{g}_y} = \frac{\sum_{(u,v)} G_y^2[u, v]}{\sum_{(u,v)} \left( G_y[u, v] - \frac{1}{r^2}\delta[u, v] G_y[u, v] \right)^2} = \left( 1 - \frac{2r^2 - 1}{r^4} \frac{G_y^2[0, 0]}{\sum_{(u,v)} G_y^2[u, v]} \right)^{-1}$   (8)

For the convolution layer, the gradient w.r.t. the approximate variable $\tilde{x}$ in the frequency domain is²:

  $\tilde{G}_x[u, v] = \Theta[-u, -v]\, \tilde{G}_y[u, v] = \frac{1}{r^2} \Theta[-u, -v]\, \delta[u, v]\, G_y[u, v]$   (9)

and its ground truth is:

  $G_x[u, v] = \Theta[-u, -v]\, G_y[u, v]$   (10)

Similar to Equation (8), the SNR of $\tilde{g}_x$ is:

  $\mathrm{SNR}_{\tilde{g}_x} = \left( 1 - \frac{2r^2 - 1}{r^4} \frac{(\Theta[0, 0]\, G_y[0, 0])^2}{\sum_{(u,v)} (\Theta[-u, -v]\, G_y[u, v])^2} \right)^{-1}$   (11)

Equation (11) can be rewritten as:

  $\frac{r^4 (1 - \mathrm{SNR}_{\tilde{g}_x}^{-1})}{2r^2 - 1} = \frac{(\Theta[0, 0]\, G_y[0, 0])^2}{\sum_{(u,v)} (\Theta[-u, -v]\, G_y[u, v])^2} = \frac{G_y^2[0, 0]}{\sum_{(u,v)} \left( \frac{\Theta[-u, -v]}{\Theta[0, 0]}\, G_y[u, v] \right)^2}$   (12)

Furthermore, the main assumption (i.e., the DC component dominates the frequency spectrum of $\Theta$) can be written as:

  $\Theta^2[0, 0] \,/\, \max_{(u,v) \neq (0,0)} \Theta^2[u, v] \geq 1$   (13)

that is, $\forall (u, v),\ \Theta^2[-u, -v] / \Theta^2[0, 0] \leq 1$; thus, by combining Equation (12) and Equation (13), we have:

  $\frac{G_y^2[0, 0]}{\sum_{(u,v)} \left( \frac{\Theta[-u, -v]}{\Theta[0, 0]}\, G_y[u, v] \right)^2} \geq \frac{G_y^2[0, 0]}{\sum_{(u,v)} G_y^2[u, v]} \;\Leftrightarrow\; \frac{r^4 (1 - \mathrm{SNR}_{\tilde{g}_x}^{-1})}{2r^2 - 1} \geq \frac{r^4 (1 - \mathrm{SNR}_{\tilde{g}_y}^{-1})}{2r^2 - 1}$   (14)

which means that $\mathrm{SNR}_{\tilde{g}_x} \geq \mathrm{SNR}_{\tilde{g}_y}$. This completes the proof of the error analysis. ■

¹As a reminder, the energy of a signal is the sum of the energy of its DC component and the energy of its AC components.
²Because $g_y$ is convolved with the rotated kernel $\theta'$, in the frequency domain we use $\Theta[-u, -v]$ instead of $\Theta[u, v]$.

[Figure 3: plot of #FLOPs (log scale, 1M–1G) vs. patch size $r \times r$ (1 × 1 to 30 × 30), showing the baseline, reduced unique elements, reduced unique elements + structured gradient, and the minimum achievable computation.]

Figure 3. Computation analysis for a specific convolution layer³. The minimum achievable computation is given in Equation (16). By reducing the number of unique elements, the computations required by our approach drop to about $1/r^2$ of the standard BP method. By combining this with the structured gradient map, the computations required by our approach drop further, getting very close to the theoretical limit.

In conclusion, as the gradient propagates through the network, the noise introduced by our gradient filter becomes weaker relative to the real gradient signal. This property ensures that the error in the gradient has only a limited influence on the quality of BP. We validate Proposition 1 in the experimental section.
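As a sanity check (ours, not part of the paper), the closed forms (8) and (11) can be evaluated numerically on a small example spectrum; with a DC-dominant kernel spectrum, the SNR inequality of Proposition 1 holds. For simplicity, this sketch takes Θ symmetric, so Θ[−u, −v] is replaced by Θ[u][v]:

```python
def snr_gy(Gy, r):
    """SNR of the filtered gradient per Equation (8).
    Gy is a 2D spectrum (list of lists) with the DC term at Gy[0][0]."""
    total = sum(v * v for row in Gy for v in row)
    return 1.0 / (1.0 - (2 * r * r - 1) / r**4 * Gy[0][0] ** 2 / total)

def snr_gx(Gy, Theta, r):
    """SNR after propagating through the convolution, per Equation (11),
    assuming a symmetric kernel spectrum Theta."""
    total = sum((Theta[u][v] * Gy[u][v]) ** 2
                for u in range(len(Gy)) for v in range(len(Gy[0])))
    dc = (Theta[0][0] * Gy[0][0]) ** 2
    return 1.0 / (1.0 - (2 * r * r - 1) / r**4 * dc / total)

# DC-dominant kernel spectrum (the assumption of Proposition 1)
# and an arbitrary gradient spectrum:
Gy = [[4.0, 1.0], [2.0, 0.5]]
Theta = [[3.0, 0.5], [1.0, 0.2]]
assert snr_gx(Gy, Theta, 2) >= snr_gy(Gy, 2)  # Proposition 1
```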
4.2. Computation and Overhead Analysis

In this section, we analyze the computation required to compute $g_x$, the gradient w.r.t. the input $x$. Figure 3 compares the computation required to propagate the gradient through a convolution layer under different patch sizes $r \times r$. A patch size of 1 × 1 corresponds to the vanilla BP algorithm, which we use as the baseline. As discussed in the preliminary analysis (Section 3.2), two factors contribute to the computation savings: fewer unique elements in the gradient map and the structured gradient map.

Fewer unique elements: In vanilla BP, there are $H_y W_y$ unique elements in the gradient map.
After applying gradient filtering with a patch size r × r, the number of unique elements reduces to only ⌈Hy/r⌉⌈Wy/r⌉. This reduction contributes the most to the savings in computation (orange line in Figure 3).

Structured Gradient Map: By creating the structured gradient map, the convolution over the gradient map g̃y is simplified to an element-wise multiplication and channel-wise addition. Computation is thus reduced to (HθWθ)⁻¹ of its original value.

³The layer is from U-Net [27]. The size of the input is assumed to be 120 × 160 pixels with 192 channels; the output has the same resolution, but with only 64 channels. The kernel size of the convolution layer is 3 × 3. Analysis for ResNet is included in the supplementary material.
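The patch-wise structure described above can be sketched numerically. The snippet below is our own minimal NumPy illustration, not the paper's code: it collapses every r × r patch of a gradient map to a single representative value (here the patch mean, which is an assumption) and broadcasts it back, so each channel keeps only ⌈Hy/r⌉⌈Wy/r⌉ unique elements; the name `gradient_filter` is likewise hypothetical.

```python
import numpy as np

def gradient_filter(g, r):
    """Collapse each r x r patch of a (C, H, W) gradient map to one value
    (here: the patch mean), leaving (H/r) * (W/r) unique elements per
    channel, then broadcast it back to form the structured gradient map."""
    C, H, W = g.shape
    assert H % r == 0 and W % r == 0, "sketch assumes H, W divisible by r"
    # one value per patch: shape (C, H/r, W/r)
    patch_vals = g.reshape(C, H // r, r, W // r, r).mean(axis=(2, 4))
    # repeat each value over its r x r patch -> structured gradient map
    structured = np.repeat(np.repeat(patch_vals, r, axis=1), r, axis=2)
    return structured, patch_vals

# 64-channel, 120 x 160 gradient map (the U-Net example resolution)
g = np.random.randn(64, 120, 160)
structured, patch_vals = gradient_filter(g, r=4)
print(patch_vals.shape)  # (64, 30, 40): 1200 unique elements per channel
```

Because every value inside a patch is identical, convolving a kernel with the structured map degenerates into scaling the per-patch values, which is intuitively where the (HθWθ)⁻¹ reduction above comes from.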
For instance, the example convolution layer in Figure 3 uses a 3 × 3 convolution kernel, so around 89% of the computations are removed. The blue line in Figure 3 shows the #FLOPs after combining both methods. Greater reduction is expected when applying our method with larger convolution kernels. For instance, FastDepth [31] uses a 5 × 5 convolution kernel, so as much as a 96% reduction in computation can be achieved, in principle.

Minimum Achievable Computation: With the two reductions mentioned above, the computation required to propagate the gradient through the convolution layer is:

#FLOPs(r) = ⌈Hy/r⌉⌈Wy/r⌉ · Cx(2Cy − 1) + o(HyWy)    (15)

where o(HyWy) is a constant term which is independent of r and negligible compared to HyWy. When the patch is as large as the feature map, our method reaches the minimum achievable computation (blue dashed line in Figure 3):

min_r #FLOPs(r) = 2CxCy − Cx + o(HyWy)    (16)

In this case, each channel of the gradient map is represented with a single value, so the computation is controlled by the number of input and output channels.
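Equations (15) and (16) can be checked directly. The helper below (a sketch; the name `bp_flops` is ours) evaluates Equation (15) with the constant o(HyWy) term dropped, using the example layer from Figure 3 (Hy × Wy = 120 × 160, Cx = 192, Cy = 64); at the largest patch size it lands exactly on the Equation (16) limit of 2CxCy − Cx.

```python
import math

def bp_flops(r, Hy, Wy, Cx, Cy):
    # Equation (15), dropping the constant o(HyWy) term: FLOPs to propagate
    # the gradient through a conv layer with an r x r gradient-filter patch.
    return math.ceil(Hy / r) * math.ceil(Wy / r) * Cx * (2 * Cy - 1)

Hy, Wy, Cx, Cy = 120, 160, 192, 64            # U-Net example layer (Figure 3)
baseline = bp_flops(1, Hy, Wy, Cx, Cy)        # vanilla BP: HyWy unique elements
r4 = bp_flops(4, Hy, Wy, Cx, Cy)              # 4 x 4 patches
limit = bp_flops(max(Hy, Wy), Hy, Wy, Cx, Cy) # one value per channel
assert limit == 2 * Cx * Cy - Cx              # matches Equation (16)
print(baseline // r4)  # 16: a 4 x 4 patch cuts this term 16-fold
```

As the patch size grows past the feature-map size, the ceilings saturate at 1 and #FLOPs stops decreasing, which is the "minimum achievable computation" plateau.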
Overhead: The overhead of our approach comes from approximating the feature map x, the gradient gy, and the kernel θ. As the lower part of Figure 2(a) shows, the approximation for x is considered part of forward propagation, while the other two are part of back propagation. Indeed, with the patch size r, the ratio of forward propagation overhead is about 1/(2CoWθHθ), while the ratio of backward propagation overhead is about (r² − 1)/(2Cx). Given the large number of channels and spatial dimensions in typical neural networks, both overhead values take less than 1% of the computation in the U-Net example above.

4.3. Memory Analysis

As Figure 2(a) shows, the standard back propagation for a convolution layer relies on the input feature map x, which needs to be stored in memory during forward propagation.
Since every convolution layer requiring a gradient for its kernel needs to save a copy of the feature map x, the memory consumption for storing x is huge. With our method, we simplify the feature map x to the approximation x̃, which has only ⌈Hx/r⌉⌈Wx/r⌉ unique elements for every channel. Thus, by saving only these unique values, our method achieves around (1 − 1/r²) memory savings, overall.

MobileNetV2 [28]

| Strategy      | #Layers | Accuracy | FLOPs   | Mem      |
|---------------|---------|----------|---------|----------|
| No Finetuning | 0       | 4.2      | 0       | 0        |
| Vanilla BP    | All     | 75.1     | 1.13G   | 24.33MB  |
| Vanilla BP    | 2       | 63.1     | 113.68M | 245.00KB |
| Vanilla BP    | 4       | 62.2     | 160.00M | 459.38KB |
| TinyTL [6]    | N/A     | 60.2     | 663.51M | 683.00KB |
| Ours          | 2       | 63.1     | 39.27M  | 80.00KB  |
| Ours          | 4       | 63.4     | 53.96M  | 150.00KB |

ResNet-18 [15]

| Strategy      | #Layers | Accuracy | FLOPs   | Mem      |
|---------------|---------|----------|---------|----------|
| No Finetuning | 0       | 4.7      | 0       | 0        |
| Vanilla BP    | All     | 73.1     | 5.42G   | 8.33MB   |
| Vanilla BP    | 2       | 70.4     | 489.20M | 196.00KB |
| Vanilla BP    | 4       | 72.3     | 1.14G   | 490.00KB |
| TinyTL [6]    | N/A     | 69.2     | 3.88G   | 1.76MB   |
| Ours          | 2       | 68.6     | 28.32M  | 64.00KB  |
| Ours          | 4       | 68.5     | 61.53M  | 112.00KB |

MCUNet [20]

| Strategy      | #Layers | Accuracy | FLOPs   | Mem      |
|---------------|---------|----------|---------|----------|
| No Finetuning | 0       | 4.1      | 0       | 0        |
| Vanilla BP    | All     | 68.5     | 231.67M | 9.17MB   |
| Vanilla BP    | 2       | 62.1     | 18.80M  | 220.50KB |
| Vanilla BP    | 4       | 64.9     | 33.71M  | 312.38KB |
| TinyTL [6]    | N/A     | 53.1     | 148.01M | 571.5KB  |
| Ours          | 2       | 61.8     | 6.34M   | 72.00KB  |
| Ours          | 4       | 64.4     | 11.01M  | 102.00KB |

ResNet-34 [15]

| Strategy      | #Layers | Accuracy | FLOPs   | Mem      |
|---------------|---------|----------|---------|----------|
| No Finetuning | 0       | –        | 0       | 0        |
| Vanilla BP    | All     | 70.8     | 11.17G  | 13.11MB  |
| Vanilla BP    | 2       | 69.6     | 489.20M | 196.00KB |
| Vanilla BP    | 4       | 72.3     | 1.21G   | 392.00KB |
| TinyTL [6]    | N/A     | 72.9     | 8.03G   | 2.95MB   |
| Ours          | 2       | 68.6     | 28.32M  | 64.00KB  |
| Ours          | 4       | 70.6     | 64.07M  | 128.00KB |

Table 2. Experimental results for ImageNet classification with four neural networks (MobileNet-V2, ResNet18/34, MCUNet). "#Layers" is short for "the number of active convolutional layers". For example, #Layers equal to 2 means that only the last two convolutional layers are trained. For memory consumption, we only consider the memory for the input feature x. The "No Finetuning" strategy shows the accuracy on new datasets without finetuning the pretrained model. Since TinyTL [6] changes the architecture, "#Layers" is not applicable (N/A).

PSPNet [33]

| Strategy    | #Layers | GFLOPs | mIoU  | mAcc  |
|-------------|---------|--------|-------|-------|
| Calibration | 0       | 0      | 12.86 | 19.74 |
| Vanilla BP  | All     | 166.5  | 55.01 | 68.02 |
| Vanilla BP  | 5       | 15.0   | 39.54 | 51.86 |
| Vanilla BP  | 10      | 110.6  | 53.15 | 67.10 |
| Ours        | 5       | 0.14   | 39.34 | 51.86 |
| Ours        | 10      | 0.79   | 50.88 | 64.73 |

PSPNet-M [33]

| Strategy    | #Layers | GFLOPs | mIoU  | mAcc  |
|-------------|---------|--------|-------|-------|
| Calibration | 0       | 0      | 14.20 | 20.46 |
| Vanilla BP  | All     | 42.4   | 48.48 | 61.48 |
| Vanilla BP  | 5       | 12.22  | 36.35 | 47.09 |
| Vanilla BP  | 10      | 22.46  | 46.01 | 58.70 |
| Ours        | 5       | 0.11   | 36.14 | 46.86 |
| Ours        | 10      | 0.76   | 44.90 | 57.50 |

FCN [22]

| Strategy    | #Layers | GFLOPs | mIoU  | mAcc  |
|-------------|---------|--------|-------|-------|
| Calibration | 0       | 0      | 10.95 | 15.69 |
| Vanilla BP  | All     | 170.3  | 45.22 | 58.80 |
| Vanilla BP  | 5       | 59.5   | 27.41 | 37.90 |
| Vanilla BP  | 10      | 100.9  | 43.87 | 57.58 |
| Ours        | 5       | 0.58   | 27.42 | 37.88 |
| Ours        | 10      | 0.96   | 36.30 | 48.82 |

DLV3 [8]

| Strategy    | #Layers | GFLOPs | mIoU  | mAcc  |
|-------------|---------|--------|-------|-------|
| Calibration | 0       | 0      | 13.95 | 20.62 |
| Vanilla BP  | All     | 151.2  | 58.32 | 71.72 |
| Vanilla BP  | 5       | 18.0   | 40.85 | 53.16 |
| Vanilla BP  | 10      | 102.0  | 54.65 | 68.64 |
| Ours        | 5       | 0.31   | 33.09 | 44.33 |
| Ours        | 10      | 2.96   | 47.11 | 60.28 |

DLV3-M [8]

| Strategy    | #Layers | GFLOPs | mIoU  | mAcc  |
|-------------|---------|--------|-------|-------|
| Calibration | 0       | 0      | 21.96 | 36.15 |
| Vanilla BP  | All     | 54.4   | 55.66 | 68.95 |
| Vanilla BP  | 5       | 14.8   | 38.21 | 49.35 |
| Vanilla BP  | 10      | 33.1   | 47.95 | 61.49 |
| Ours        | 5       | 0.26   | 35.47 | 46.35 |
| Ours        | 10      | 1.40   | 45.53 | 58.99 |

UPerNet [32]

| Strategy    | #Layers | GFLOPs | mIoU  | mAcc  |
|-------------|---------|--------|-------|-------|
| Calibration | 0       | 0      | 14.71 | 21.82 |
| Vanilla BP  | All     | 541.0  | 64.88 | 77.13 |
| Vanilla BP  | 5       | 503.9  | 47.93 | 61.67 |
| Vanilla BP  | 10      | 507.6  | 48.83 | 63.02 |
| Ours        | 5       | 1.97   | 47.04 | 60.44 |
| Ours        | 10      | 2.22   | 48.00 | 62.07 |

Table 3. Experimental results for the semantic segmentation task on the augmented Pascal VOC12 dataset [8]. A model name with the postfix "M" means the model uses MobileNetV2 as the backbone; otherwise ResNet18 is used. "#Layers" is short for "the number of active convolutional layers" that are trained. All models are pretrained on the Cityscapes dataset [11].
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Strategy “Calibration” shows the accuracy when only the classifier and normalization statistics are updated to adapt different numbers of classes between augmented Pascal VOC12 and Cityscapes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Experiments In this section, we first present our experimental results on ImageNet classification [12] and semantic segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Then, we study the impact of different hyper-parameter se- lections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Furthermore, we present the evaluation result run- ning our method on real hardware.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Lastly, we empirically validate the assumption in Section 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Experimental Setup Classification: Following [25], we split every dataset into two highly non-i.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' partitions with the same size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Then, we pretrain our models on the first partition with a vanilla training strategy, and finetune the model on the other par- tition with different configurations for the training strat- egy (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=', with/without gradient filtering, hyper-parameters, number of convolution layers to be trained).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' More details (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=', hyper-parameters) are in the Supplementary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Segmentation: Models are pretrained on Cityscapes [11] by MMSegmentation [10].' 
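The two-partition non-i.i.d. split used in the classification setup above can be sketched as follows. This is a hypothetical label-sharded scheme for illustration only (the exact protocol follows [25]), and `non_iid_split` is a name introduced here, not from the paper:

```python
import random
from collections import defaultdict

def non_iid_split(labels, seed=0):
    """Split sample indices into two same-size, highly non-i.i.d. partitions
    by assigning each class (mostly) to one partition.

    Hypothetical scheme for illustration; the paper follows [25] and the
    exact protocol may differ.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)
    rng.shuffle(classes)
    half = len(classes) // 2
    part_a = [i for c in classes[:half] for i in by_class[c]]
    part_b = [i for c in classes[half:] for i in by_class[c]]
    # Re-balance so both partitions have exactly the same size.
    while len(part_a) > len(part_b):
        part_b.append(part_a.pop())
    while len(part_b) > len(part_a):
        part_a.append(part_b.pop())
    return part_a, part_b
```

Because each class lands (almost) entirely in one partition, the label distributions of the pretraining and finetuning halves differ sharply, which is what makes the split "highly non-i.i.d."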
Then, we calibrate and finetune these models with different training strategies on the augmented Pascal-VOC12 dataset following [8], which is the combination of Pascal-VOC12 [13] and SBD [14]. More details are included in the supplementary material.

On-device Performance Evaluation: For CPU performance evaluation, we implement our method with MKLDNN [1] (a.k.a. OneDNN) v2.6.0 and compare it with the convolution BP method provided by MKLDNN. We test on three CPUs, namely the Intel 11900KF, the quad-core Cortex-A72 (Jetson Nano), and the quad-core Cortex-A53 (Raspberry Pi 3b). For GPU performance evaluation, we implement our method on CUDNN v8.2 [9] and compare it with the BP method provided by CUDNN. We test on two GPUs: an RTX 3090Ti and the edge GPU of the Jetson Nano. Since both MKLDNN and CUDNN support only float32 BP, we test float32 BP only. Additionally, for the experiments on the Jetson Nano, we record the energy consumption of the CPU and GPU with the embedded power meter. More details (e.g., frequency) are included in the supplementary material.

5.2. ImageNet Classification

Table 2 shows our evaluation results on the ImageNet classification task.
As shown, our method significantly reduces the FLOPs and memory required for BP, with very little accuracy loss. For example, for ResNet34, our method achieves an 18.9× speedup with 1.7% accuracy loss when training four layers; for MobileNetV2, we get 1.2% better accuracy with a 3.0× speedup and 3.1× memory savings. These results illustrate the effectiveness of our method. On most networks, TinyTL has lower accuracy while consuming more resources than the baseline methods.

5.3. Semantic Segmentation

Table 3 shows our evaluation results on the augmented Pascal-VOC12 dataset. On a wide range of networks, our method consistently achieves significant speedups with marginal accuracy loss. For the large network UPerNet, our method achieves a 229× speedup with only 1% mIoU loss. For the small network PSPNet, our method speeds up training by 140× with only 2.27% mIoU loss. This shows the effectiveness of our method on a dense prediction task.

5.4. Hyper-Parameter Selection

Figure 4 shows our experimental results for ResNets under different hyper-parameter selections, i.e., the number of convolution layers and the patch size r × r of the gradient filter. Of note, the y-axis (#MFLOPs) in Figure 4 is log scale. More results are included in Supplementary Section F. We highlight three qualitative findings from Figure 4:

a. For a similar accuracy, our method greatly reduces the number of operations (by 1 to 2 orders of magnitude), while for a similar number of computations, our method achieves a higher accuracy (2% to 5% better). This finding proves the effectiveness of our method.

b. Given the number of convolution layers to be trained, the more accurate method returns the better accuracy. Baseline (i.e., standard BP) uses the most accurate gradient and Ours-R4 (BP with a gradient filter of patch size 4 × 4) uses the least accurate gradient; thus, Baseline > Ours-R2 > Ours-R4. This finding is intuitive, since the more accurate method introduces less noise into BP; e.g., gradient filtering with patch size 2 × 2 (Ours-R2) introduces less noise than with patch size 4 × 4 (Ours-R4). In Figure 5, we evaluate the relationship between accuracy and the noise level introduced by gradient filtering. With a higher SNR (i.e., a lower noise level), a better accuracy is achieved.

[Figure 4: accuracy [%] vs. #MFLOPs (log scale) for ResNet18-CIFAR10 and ResNet34-CIFAR100]
Figure 4. Computation (#MFLOPs, log scale) and model accuracy [%] under different hyper-parameter selections. "Baseline" means vanilla BP; "Ours-R2/R4" uses gradient filtering with patch size 2 × 2 / 4 × 4 during BP.

[Figure 5: Top-1 accuracy [%] vs. SNR [dB]]
Figure 5. Relationship between accuracy and the noise level introduced by the gradient filtering. As shown, accuracy increases as the SNR increases, i.e., as the noise level decreases.

c. Given the number of computations, the less accurate method returns the better accuracy by training more layers, i.e., Ours-R4 > Ours-R2 > Baseline. This finding suggests that, for neural network training with relatively low computational resources, training more layers with less accurate gradients is preferable to training fewer layers with more accurate gradients.

5.5. On-device Performance Evaluation

Figure 6 and Table 4 show our evaluation results on real devices. More results are included in Supplementary Section H. As Figure 6 shows, on CPU, most convolution layers achieve speedups of over 20× with less than 50% of the memory consumption for gradient filtering with patch size 2 × 2; for gradient filtering with patch size 4 × 4, the speedups are much higher, namely over 60×. On GPU, the speedups are somewhat lower, but still over 10× and 25×, respectively.
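As a concrete illustration of the operation whose patch size r drives these speedups: the gradient filter replaces each non-overlapping r × r patch of a gradient map with its mean, leaving far fewer unique elements for BP to process. The sketch below is an illustrative pure-Python version, not the paper's optimized MKLDNN/CUDNN kernels; `filter_gradient` is a name introduced for this example:

```python
def filter_gradient(g, r):
    """Replace each non-overlapping r x r patch of a 2-D gradient map g
    (a list of lists; both dimensions divisible by r) with the patch mean,
    so the filtered map has only (H/r)*(W/r) unique values.

    Minimal sketch of the patch-wise gradient filter (Ours-R2: r=2,
    Ours-R4: r=4); the paper's kernels apply this per channel during BP.
    """
    h, w = len(g), len(g[0])
    out = [[0.0] * w for _ in range(h)]
    for bi in range(0, h, r):            # top-left row of each patch
        for bj in range(0, w, r):        # top-left column of each patch
            total = sum(g[bi + a][bj + b] for a in range(r) for b in range(r))
            mean = total / (r * r)
            for a in range(r):
                for b in range(r):
                    out[bi + a][bj + b] = mean
    return out
```

A larger r collapses more elements into one shared value, which is why Ours-R4 is faster but noisier than Ours-R2.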
Furthermore, as Table 4 shows, our method saves over 95%

[Figure 6: speedup and normalized memory cost per test case; CPU panels (baseline MKLDNN, up to 114×) for Jetson, 11900KF, and RPi3 with R2/R4 filters; GPU panels (baseline CUDNN) for Jetson and RTX3090Ti with R2/R4 filters]
Figure 6. Speedup and normalized memory consumption results on multiple CPUs and GPUs under different test cases (i.e., different input sizes, numbers of channels, etc.). Detailed configurations of these test cases are included in the supplementary material.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' “R2”, “R4” mean using gradient filtering with 2 × 2 and 4 × 4 patch sizes, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Our method achieves significant speedup with low memory consumption compared to all baseline methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' For example, on Jetson CPU with patch size 4 × 4 (“Jetson-R4” in left top figure), our method achieves 114× speedup with only 33% memory consumption for most test cases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Device Patch Size Normalized Energy Cost [STD] Edge CPU 2 × 2 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='13% [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='61%] 4 × 4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='15% [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='18%] Edge GPU 2 × 2 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='80% [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='73%] 4 × 4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='22% [1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10%] Table 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Normalized energy consumption for BP with gradient filtering for different patch sizes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Results are normalized w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' the energy cost of standard BP methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' For instance, for edge CPU with a 4 × 4 patch, only 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='15% of energy in standard BP is used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Standard deviations are shown within brackets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' energy for both CPU and GPU scenarios, which largely re- solves one of the most important constraints on edge de- vices.' 
All these experiments on real devices show that our method is practical for real deployments of both high-performance and IoT applications.

Model                 Ratio    Model                Ratio
(Wide)ResNet18-152    1.462    VGG(bn)11-19         1.497
DenseNet121-201       2.278    EfficientNet b0-b7   1.240

Table 5. Evaluation of the energy ratio defined in Equation (13) on models published in Torchvision. A ratio greater than 1 empirically verifies our assumption.
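Equation (13) itself is not reproduced in this excerpt; assuming the ratio compares the DC spectral energy of a convolution kernel against its total AC energy (as the surrounding text describes), a minimal check can be sketched as follows. Loading the Torchvision models is omitted here, and a handcrafted low-pass kernel stands in for a trained one:

```python
import numpy as np

def dc_ac_energy_ratio(kernel):
    """DC spectral energy over total AC spectral energy of a 2-D kernel."""
    energy = np.abs(np.fft.fft2(kernel)) ** 2   # power spectrum (unnormalized DFT)
    dc = energy[0, 0]                           # zero-frequency (DC) component
    ac = energy.sum() - dc                      # all remaining (AC) components
    return dc / ac

# A 3x3 Gaussian-like smoothing kernel: low-pass, so the DC term should dominate.
smooth = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
print(dc_ac_energy_ratio(smooth) > 1)  # True: ratio > 1, DC dominates
```

A ratio above one for the pretrained kernels, averaged per architecture family, is what Table 5 reports in aggregate.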
5.6. Main Assumption Verification

We now empirically verify the assumption that the DC component dominates the frequency spectrum of the convolution kernel (Section 4.1). To this end, we collect the energy ratio shown in Equation (13) from trained models published in Torchvision [24]. As Table 5 shows, the convolution kernels in all these networks yield a ratio greater than one, which means that the energy of the DC component is larger than the energy of all AC components combined. Thus, our assumption in Section 4.1 holds true in practice.

6. Conclusions

In this paper, we have addressed on-device model training for resource-constrained edge devices.
To this end, a new gradient filtering method has been proposed in Section 3 to systematically reduce the computation and memory consumption of the back-propagation algorithm, which is the key bottleneck for efficient model training, by cutting the computation required to propagate gradients through the convolutional layers. Gradient filtering creates an approximate gradient feature map with fewer unique elements and a special structure; this reduces the computation by more than two orders of magnitude. Furthermore, we proved that the error introduced during back-propagation by our gradient filter is bounded, so the influence of the gradient approximation is limited. Extensive experiments in Section 5 have demonstrated the efficiency and wide applicability of our method: models can be fine-tuned with orders of magnitude fewer computations, while incurring only a marginal accuracy loss compared to popular baseline methods.
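As a concrete illustration of the approximation summarized above, here is a minimal numpy sketch under our reading of the method (not the authors' implementation): replacing each r × r patch of the gradient map by its mean leaves only one unique value per patch, which is what makes the subsequent gradient computations cheap.

```python
import numpy as np

def filter_gradient(grad, r):
    """Approximate a gradient map by averaging over r x r patches and
    broadcasting each patch mean back, so every patch holds one unique value.
    Assumes H and W are divisible by r; a real implementation would pad."""
    h, w = grad.shape
    patches = grad.reshape(h // r, r, w // r, r)
    means = patches.mean(axis=(1, 3), keepdims=True)   # one value per patch
    return np.broadcast_to(means, patches.shape).reshape(h, w)

g = np.arange(16, dtype=float).reshape(4, 4)
fg = filter_gradient(g, 2)
print(np.unique(fg).size)  # 4 unique values instead of 16
```

Note that the patch means preserve the total gradient mass, so the approximation keeps the overall update direction while shrinking the number of distinct values that back-propagation must process.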
References

[1] Intel® oneAPI Deep Neural Network Library (oneDNN). https://www.intel.com/content/www/us/en/developer/tools/oneapi/onednn.html. 1, 2, 6
[2] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. 1, 2
[3] Dan Alistarh, Demjan Grubic, Jerry Z. Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 1707–1718, Red Hook, NY, USA, 2017. Curran Associates Inc. 2
[4] Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. Scalable methods for 8-bit training of neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, pages 5151–5159, Red Hook, NY, USA, 2018. Curran Associates Inc. 1, 2, 11, 17, 19
[5] Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar. signSGD: Compressed optimisation for non-convex problems. In International Conference on Machine Learning, pages 560–569. PMLR, 2018. 2
[6] Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. TinyTL: Reduce activations, not trainable parameters for efficient on-device learning. arXiv preprint arXiv:2007.11622, 2020. 1, 2, 6
[7] Jianfei Chen, Yu Gai, Zhewei Yao, Michael W. Mahoney, and Joseph E. Gonzalez. A statistical framework for low-bitwidth training of deep neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 883–894. Curran Associates, Inc., 2020. 1, 2, 11, 15, 17, 19
[8] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017. 6
[9] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014. 1, 2, 6
[10] MMSegmentation Contributors. MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark. https://github.com/open-mmlab/mmsegmentation, 2020. 6
[11] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 6
[12] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. 6
[13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, June 2010. 6
[14] Bharath Hariharan, Pablo Arbelaez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic contours from inverse detectors. In International Conference on Computer Vision (ICCV), 2011. 6
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. 2, 6
[16] Ziyang Hong and C. Patrick Yue. Efficient-Grad: Efficient training deep convolutional neural networks on edge devices with gradient optimizations. ACM Trans. Embed. Comput. Syst., 21(2), Feb. 2022. 2
[17] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 2012. 3
[18] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. J. Mach. Learn. Res., 18(1):6869–6898, Jan. 2017. 2
[19] Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, and Chelsea Finn. Surgical fine-tuning improves adaptation to distribution shifts. arXiv preprint arXiv:2210.11466, 2022. 1, 2
[20] Ji Lin, Wei-Ming Chen, John Cohn, Chuang Gan, and Song Han. MCUNet: Tiny deep learning on IoT devices. In Annual Conference on Neural Information Processing Systems (NeurIPS), 2020. 6
[21] Ji Lin, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, and Song Han. On-device training under 256KB memory. arXiv preprint arXiv:2206.15472, 2022. 1, 2
[22] Jonathan Long, Evan Shelhamer, and Trevor Darrell.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Fully convolutional networks for semantic segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In Pro- ceedings of the IEEE conference on computer vision and pat- tern recognition, pages 3431–3440, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 6 [23] Ilya Loshchilov and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' SGDR: Stochastic gradi- ent descent with warm restarts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In International Conference on Learning Representations, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 15 [24] S´ebastien Marcel and Yann Rodriguez.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Torchvision the machine-vision package of torch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In Proceedings of the 18th ACM international conference on Multimedia, pages 1485– 1488, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 8 [25] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Communication- efficient learning of deep networks from decentralized data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 9 In Artificial intelligence and statistics, pages 1273–1282.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' PMLR, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 6, 14 [26] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zem- ing Lin, Natalia Gimelshein, Luca Antiga, Alban Desmai- son, Andreas Kopf, Edward Yang, Zachary DeVito, Mar- tin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Pytorch: An imperative style, high-performance deep learning library.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In Advances in Neural Information Processing Systems 32, pages 8024–8035.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 1, 2 [27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' U- net: Convolutional networks for biomedical image segmen- tation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In International Conference on Medical image com- puting and computer-assisted intervention, pages 234–241.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Springer, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 5 [28] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zh- moginov, and Liang-Chieh Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Mobilenetv2: Inverted residuals and linear bottlenecks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 4510–4520, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 6 [29] Xiao Sun, Naigang Wang, Chia-Yu Chen, Jiamin Ni, Ankur Agrawal, Xiaodong Cui, Swagath Venkataramani, Kaoutar El Maghraoui, Vijayalakshmi (Viji) Srinivasan, and Kailash Gopalakrishnan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Ultra-low precision 4-bit training of deep neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Larochelle, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Ranzato, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Hadsell, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Balcan, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Lin, editors, Advances in Neural Infor- mation Processing Systems, volume 33, pages 1796–1807.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=', 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 2 [30] Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, and Zhangyang Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' E2-train: Train- ing state-of-the-art cnns with over 80% energy savings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Ad- vances in Neural Information Processing Systems, 32, 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 2 [31] Diana Wofk, Fangchang Ma, Tien-Ju Yang, Sertac Karaman, and Vivienne Sze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Fastdepth: Fast monocular depth esti- mation on embedded systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In 2019 International Confer- ence on Robotics and Automation (ICRA), pages 6101–6108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' IEEE, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 5 [32] Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Unified perceptual parsing for scene understand- ing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' In Proceedings of the European conference on computer vision (ECCV), pages 418–434, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 6 [33] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Pyramid scene parsing network.' 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2881–2890, 2017. 6
[34] Kang Zhao, Sida Huang, Pan Pan, Yinghan Li, Yingya Zhang, Zhenyu Gu, and Yinghui Xu. Distribution adaptive INT8 quantization for training CNNs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 3483–3491, 2021. 2

In this supplementary material, we present:
A: Detailed derivation for the gradient filtering described in Section 3.
B: Detailed proof for Proposition 1 in Section 4.1.
C: Visualized computation analysis for ResNet18.
D: Detailed experimental setup for Section 5.1.
E: More experimental results for semantic segmentation in Section 5.3.
F: More experimental results for the hyper-parameter exploration on CIFAR datasets in Section 5.4.
G: Experimental results for combining gradient filtering (our method) with existing INT8 gradient quantization approaches [4, 7].
H: More experimental results for the on-device performance evaluation in Section 5.5.

A. Gradient Filtering Derivation

In this section, we present the complete derivations for Equation (3) and Equation (5) in Section 3, namely back-propagation with gradient filtering.
For convenience, Table 6 (reproduced from Table 1 in the paper) lists the commonly used symbols.

C_x: number of channels of x
W_x, H_x: width and height of x
θ: convolution kernel
θ′: rotated θ, i.e., θ′ = rot180(θ)
r: patch size (r × r)
g_x, g_y, g_θ: gradients w.r.t. x, y, θ
˜g_y: approximated gradient g_y
˜x, ˜θ′: sums of x and θ′ over the spatial dimensions (height and width)
x[n, c_i, h, w]: element of feature map x at batch n, channel c_i, pixel (h, w)
θ[c_o, c_i, u, v]: element of convolution kernel θ at output channel c_o, input channel c_i, position (u, v)

Table 6. Table of symbols we use.

A.1. Gradient Filtering

We have:

\tilde{g}_y[n, c_o, h, w] = \frac{1}{r^2} \sum_{i=\lfloor h/r \rfloor r}^{\lceil h/r \rceil r} \sum_{j=\lfloor w/r \rfloor r}^{\lceil w/r \rceil r} g_y[n, c_o, i, j]    (17)

Thus, for any entry in the approximated gradient ˜g_y, the value equals the average of all neighboring elements within the same r × r patch, as shown in Figure 2 of the main manuscript. For an approximated gradient ˜g_y with batch size n, c channels, and resolution (H_y, W_y), there are only n × c × ⌈H_y/r⌉ × ⌈W_y/r⌉ unique values in ˜g_y. To simplify the following derivations, we rewrite the approximated gradient ˜g_y as follows:

\tilde{g}^p_y[n, c_o, h_p, w_p, i, j] = \tilde{g}_y[n, c_o, h_p r + i, w_p r + j]    (18)

where (h_p, w_p) is the position of the patch and (i, j) is the offset within the patch. Since every element in the same patch has exactly the same value, we denote this unique value by ˜g^u_y, i.e.,

\tilde{g}^u_y[n, c_o, h_p, w_p] = \tilde{g}^p_y[n, c_o, h_p, w_p, i, j], \quad \forall\, 0 \le i, j < r    (19)

A.2. Approximation for the Rotated Convolution Kernel θ′

\tilde{\theta}'[c_o, c_i] = \sum_{u,v} \theta'[c_o, c_i, u, v] = \sum_{u,v} \mathrm{rot180}(\theta)[c_o, c_i, u, v] = \sum_{u,v} \theta[c_o, c_i, u, v]    (20)

A.3. Approximation for the Input Feature x

\tilde{x}[n, c_i, h, w] = \sum_{i=\lfloor h/r \rfloor r}^{\lceil h/r \rceil r} \sum_{j=\lfloor w/r \rfloor r}^{\lceil w/r \rceil r} x[n, c_i, i, j]    (21)

Thus, for every entry in the approximated feature map ˜x, the value equals the sum of all neighboring elements within the same r × r patch. Following the definitions of the gradient filter in Section A.1, we use the following symbols to simplify the derivation:

\tilde{x}^p[n, c_i, h_p, w_p, i, j] = \tilde{x}[n, c_i, h_p r + i, w_p r + j]    (22)

and

\tilde{x}^u[n, c_i, h_p, w_p] = \tilde{x}^p[n, c_i, h_p, w_p, i, j], \quad \forall\, 0 \le i, j < r    (23)

A.4. Boundary Elements

As mentioned in Section 3, given the structure created by the gradient filters, the gradient propagation in a convolution layer can be simplified to a weight summation and a multiplication with a few unique gradient values.
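The three approximations above are simple tensor reductions. The sketch below is a minimal NumPy rendering of Equations (17), (20), and (21), not the paper's released implementation; it assumes H and W are divisible by r, and all function names are illustrative.

```python
import numpy as np

def filter_gradient(g_y, r):
    """Eq. (17): replace each r x r patch of the gradient map with its mean.
    g_y has shape (N, C, H, W); H and W are assumed divisible by r."""
    n, c, h, w = g_y.shape
    patches = g_y.reshape(n, c, h // r, r, w // r, r)
    means = patches.mean(axis=(3, 5), keepdims=True)   # one unique value per patch
    return np.broadcast_to(means, patches.shape).reshape(n, c, h, w)

def sum_kernel(theta):
    """Eq. (20): spatial sum of the (rotated) kernel; rotation leaves the sum unchanged."""
    return theta.sum(axis=(2, 3))                      # shape (C_out, C_in)

def sum_patches(x, r):
    """Eq. (21): replace each r x r patch of the input feature map with its sum."""
    n, c, h, w = x.shape
    patches = x.reshape(n, c, h // r, r, w // r, r)
    sums = patches.sum(axis=(3, 5), keepdims=True)
    return np.broadcast_to(sums, patches.shape).reshape(n, c, h, w)
```

After filtering, a map of shape (N, C, H, W) carries only N·C·(H/r)·(W/r) distinct numbers, which is the structure the simplified back-propagation exploits.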
This holds for all elements far away from the patch boundary because, for these elements, the rotated kernel θ′ covers only elements from the same patch, which all have the same value, so the computation can be saved. However, for the elements close to the boundary this is not true: when convolving with boundary gradient elements, the kernel may cover multiple patches with multiple unique values instead of just one. To eliminate the extra computation introduced by the boundary elements, we pad each patch sufficiently such that every element is far away from the boundary:

\tilde{g}^p_y[n, c_i, h_p, w_p, i, j] = \tilde{g}^u_y[n, c_i, h_p, w_p], \quad \forall\, i, j \in \mathbb{Z}    (24)

For example, with patch size 4 × 4, the element at spatial position (3, 3) is on the boundary, so when we calculate ˜g_x[n, c_i, 3, 3] by convolving the rotated kernel θ′ with the approximated gradient ˜g_y:

\tilde{g}_x[n, c_i, 3, 3] = \sum_{i,j} \theta'[c_o, c_i, i, j]\, \tilde{g}_y[n, c_o, 3 + i, 3 + j]    (25)

the values of ˜g_y come from multiple patches and therefore differ (e.g., ˜g_y[n, c_o, 3, 3] is from patch (0, 0) while ˜g_y[n, c_o, 4, 4] is from patch (1, 1); they have different values).
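The failure at the boundary can be checked numerically: after patch-wise averaging, a 3 × 3 window centred on a boundary element straddles several patches and thus sees several distinct values. A single-channel sketch with r = 4 (shapes and variable names are illustrative, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal((8, 8))

# Patch-wise averaging with r = 4 (Eq. (17), one channel for brevity).
r = 4
means = g.reshape(2, r, 2, r).mean(axis=(1, 3))            # one value per patch
g_f = np.repeat(np.repeat(means, r, axis=0), r, axis=1)    # filtered gradient map

# The 3x3 window centred on boundary element (3, 3) spans rows/cols 2..4,
# touching patches (0,0), (0,1), (1,0), and (1,1): four distinct values.
window = g_f[2:5, 2:5]
n_unique = len(np.unique(window))
```

An interior element such as (1, 1) would instead see a single unique value under its window, which is the case the padding of Equation (24) extends to the whole patch.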
In our method, we simplify Equation (25) by rewriting it in the following way:

\tilde{g}_x[n, c_i, 3, 3] \approx \sum_{i,j=-1}^{1} \theta'[c_o, c_i, i, j]\, \tilde{g}^p_y[n, c_o, \lfloor 3/4 \rfloor, \lfloor 3/4 \rfloor, 3 + i, 3 + j]    (26)
= \sum_{i,j=-1}^{1} \theta'[c_o, c_i, i, j]\, \tilde{g}^u_y[n, c_o, \lfloor 3/4 \rfloor, \lfloor 3/4 \rfloor]    (27)
= \sum_{i,j=-1}^{1} \theta'[c_o, c_i, i, j]\, \tilde{g}^u_y[n, c_o, 0, 0]    (28)

where Equation (26) is derived from Equation (25) by considering that patch (0, 0) is sufficiently padded, so the elements at all offsets (3 + i, 3 + j) have the same value, namely the unique value ˜g^u_y[n, c_o, 0, 0]. For the approximated input feature map ˜x, we apply the same approximation to the boundary elements.

A.5. Gradient w.r.t. the Input (Equation (3) in the Paper)

\tilde{g}_x[n, c_i, h, w]    (29)
= \sum_{c_o, u, v} \theta[c_o, c_i, -u, -v]\, \tilde{g}_y[n, c_o, h + u, w + v]    (30)
≈
co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v θ[co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ci,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' −u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' −v]· ˜gp y[n,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ⌊h r ⌋,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ⌊w r ⌋,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' (h mod r) + u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' (w mod r) + v] (31) = � co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v θ[co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ci,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' −u,' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' −v]˜gu y [n,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ⌊h r ⌋,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ⌊w r ⌋] (32) = � co ˜gu y [n,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ⌊h r ⌋,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ⌊w r ⌋] � u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v θ[co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ci,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' −u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' −v] (33) = � co ˜gu y [n,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ⌊h r ⌋,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} 
+page_content=' ⌊w r ⌋]˜θ′[co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ci] (34) By expanding ˜gu y to ˜gy,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' we have: ˜gx[n,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ci,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' h,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' w] = � co ˜gy[n,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' h,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' w] ⊙ ˜θ′[co,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' ci] (35) which is the Equation (3) in Section 3 in the paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' From Equation (30) to Equation (32), we consider that the patch in the approximated gradient ˜gy is padded suffi- ciently so they have the same value for all possible offsets ((h mod r) + u, (w mod r) + v).' 
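As a sanity check on Equation (35), the NumPy sketch below (not part of the original derivation; the layer sizes and the patch size $r = 4$ are arbitrary choices for illustration) builds a patch-wise constant $\tilde{g}_y$, applies the element-wise rule of Equation (35), and verifies that it matches the exact input gradient of Equation (30) at every pixel whose $3 \times 3$ window lies inside a single patch:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Ci, Co, H, W, r = 1, 2, 3, 8, 8, 4

theta = rng.standard_normal((Co, Ci, 3, 3))
gy = rng.standard_normal((N, Co, H, W))

# Gradient filtering: average g_y over each r x r patch, then broadcast the
# unique value back, so the filtered gradient is constant within each patch.
gy_u = gy.reshape(N, Co, H // r, r, W // r, r).mean(axis=(3, 5))
gy_f = np.repeat(np.repeat(gy_u, r, axis=2), r, axis=3)

theta_sum = theta.sum(axis=(2, 3))  # theta'[co, ci]: kernel summed over (u, v)

# Eq. (35): gx~[n, ci, h, w] = sum_co gy~[n, co, h, w] * theta'[co, ci]
gx_filt = np.einsum('nohw,oi->nihw', gy_f, theta_sum)

# Eq. (30): exact input gradient, gx[h, w] = sum_{u,v} theta[-u, -v] gy~[h+u, w+v]
gy_pad = np.pad(gy_f, ((0, 0), (0, 0), (1, 1), (1, 1)))
gx_exact = np.zeros((N, Ci, H, W))
for u in (-1, 0, 1):
    for v in (-1, 0, 1):
        shifted = gy_pad[:, :, 1 + u:1 + u + H, 1 + v:1 + v + W]
        gx_exact += np.einsum('nohw,oi->nihw', shifted, theta[:, :, 1 - u, 1 - v])

# Where the 3x3 window stays inside one patch, the two coincide exactly.
inner = [1, 2, 5, 6]
assert np.allclose(gx_filt[:, :, inner][:, :, :, inner],
                   gx_exact[:, :, inner][:, :, :, inner])
```

Near patch borders the two differ, which is exactly the boundary approximation discussed above.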
If the convolutional layer has only one input channel and one output channel, as Figure 2 in the paper shows, then Equation (34) becomes an element-wise multiplication, which is Equation (35) (also Equation (3) in the paper).

A.6. Gradient w.r.t. Convolution Kernel (Equation (5) in the Paper)

$$\tilde{g}_\theta[c_o, c_i, u, v] \tag{36}$$
$$= \sum_{n, h, w} x[n, c_i, h+u, w+v]\,\tilde{g}_y[n, c_o, h, w] \tag{37}$$
$$\approx \sum_{n, h, w} \tilde{x}^p\!\left[n, c_i, \left\lfloor \tfrac{h}{r} \right\rfloor, \left\lfloor \tfrac{w}{r} \right\rfloor, (h \bmod r)+u, (w \bmod r)+v\right] \tilde{g}_y^u\!\left[n, c_o, \left\lfloor \tfrac{h}{r} \right\rfloor, \left\lfloor \tfrac{w}{r} \right\rfloor\right] \tag{38}$$
$$= \sum_{n, h, w} \tilde{x}^u\!\left[n, c_i, \left\lfloor \tfrac{h}{r} \right\rfloor, \left\lfloor \tfrac{w}{r} \right\rfloor\right] \tilde{g}_y^u\!\left[n, c_o, \left\lfloor \tfrac{h}{r} \right\rfloor, \left\lfloor \tfrac{w}{r} \right\rfloor\right] \tag{39}$$
$$= \sum_{n, h, w} \tilde{x}^u\!\left[n, c_i, \left\lfloor \tfrac{h}{r} \right\rfloor, \left\lfloor \tfrac{w}{r} \right\rfloor\right] \tilde{g}_y^u\!\left[n, c_o, \left\lfloor \tfrac{h}{r} \right\rfloor, \left\lfloor \tfrac{w}{r} \right\rfloor\right] \tag{40}$$

By expanding $\tilde{x}^u$ and $\tilde{g}_y^u$ to $\tilde{x}$ and $\tilde{g}_y$, respectively, we have:

$$\tilde{g}_\theta[c_o, c_i, u, v] = \sum_{n, i, j} \tilde{x}[n, c_i, i, j]\,\tilde{g}_y[n, c_o, i, j] \tag{41}$$

which is precisely Equation (5) in Section 3. From Equation (37) to Equation (39), we consider that each patch in the approximated input feature map $\tilde{x}$ is padded sufficiently, so its elements share the same value at all possible offsets $((h \bmod r)+u, (w \bmod r)+v)$.
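A notable consequence of Equation (41) is that the right-hand side does not depend on the kernel position $(u, v)$: every kernel entry of a given channel pair receives the same gradient value. The hypothetical NumPy sketch below (layer sizes chosen arbitrarily for illustration) computes that shared value and cross-checks the vectorized form against an explicit per-channel sum:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Ci, Co, H, W, r = 2, 2, 3, 8, 8, 4

def patch_average(t, r):
    """Replace each r x r patch by its mean (the structure of x~ and gy~)."""
    n, c, h, w = t.shape
    u = t.reshape(n, c, h // r, r, w // r, r).mean(axis=(3, 5))
    return np.repeat(np.repeat(u, r, axis=2), r, axis=3)

x_f = patch_average(rng.standard_normal((N, Ci, H, W)), r)    # x~
gy_f = patch_average(rng.standard_normal((N, Co, H, W)), r)   # gy~

# Eq. (41): one inner product per channel pair (co, ci) ...
g_shared = np.einsum('nihw,nohw->oi', x_f, gy_f)

# ... shared by every kernel position (u, v), so a 3 x 3 kernel gradient
# carries only Co * Ci unique values.
g_theta = np.broadcast_to(g_shared[:, :, None, None], (Co, Ci, 3, 3))

# Cross-check the einsum against an explicit per-channel sum over (n, i, j).
for o in range(Co):
    for i in range(Ci):
        assert np.isclose(g_shared[o, i], (x_f[:, i] * gy_f[:, o]).sum())
```

This is the structural property that lets the method skip most of the multiply-accumulate work in the weight-gradient computation.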
For every given input/output channel pair $(c_o, c_i)$, Equation (40) represents the Frobenius inner product between $\tilde{x}^u$ and $\tilde{g}_y^u$.

B. Detailed Proof for Proposition 1

In this section, we provide more details for the proof in Section 4.1. We use $G_x$, $G_y$ and $\Theta$ to denote the gradients $g_x$, $g_y$ and the convolution kernel $\theta$ in the frequency domain, respectively. $G_x[u, v]$ is the spectrum value at frequency $(u, v)$, and $\delta$ is the 2D discrete Dirac delta function. Without loss of generality, and to simplify the proof, we assume the batch size is 1, the number of input/output channels is 1 (namely, $C_x = C_y = 1$), and there is only one patch in $\tilde{g}_y$. The gradient returned by gradient filtering can then be written as:

$$\tilde{g}_y = \frac{1}{r^2}\,\mathbf{1}_{r \times r} \circledast g_y \tag{42}$$

where $\circledast$ denotes convolution.
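The single-patch case of Equation (42) is easy to check numerically. In the sketch below (an illustration, not part of the proof; note that the exact scale of the DC bin, and hence the $1/r^2$ factor in the spectrum, depends on the DFT normalization convention, here NumPy's unnormalized FFT), filtering an $r \times r$ gradient map reduces it to its mean, and only the DC bin of its spectrum survives:

```python
import numpy as np

rng = np.random.default_rng(2)
r = 8
gy = rng.standard_normal((r, r))      # one patch: N = Cx = Cy = 1

# Eq. (42) with a single patch: convolving with (1/r^2) * ones(r, r)
# reduces the whole map to its mean value.
gy_f = np.full((r, r), gy.mean())

Gy, Gy_f = np.fft.fft2(gy), np.fft.fft2(gy_f)

# All AC bins of the filtered spectrum vanish (cf. Eq. (43)).
ac = np.ones((r, r), dtype=bool)
ac[0, 0] = False
assert np.allclose(Gy_f[ac], 0)
# Under this convention the DC bin itself is preserved.
assert np.isclose(Gy_f[0, 0], Gy[0, 0])
```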
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' By applying the discrete Fourier transformation,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Equation (42) can be rewritten in the frequency domain as: ˜Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] = 1 r2 δ[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v]Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] (43) ˜gy is the approximation for gy(so the ground truth for ˜gy is gy),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' and the SNR of ˜gy equals to: SNR˜gy = ( � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v)(Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' − ˜Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v])2 � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v) G2y[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] )−1 = ( � (u,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v)(Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] − 1 r2 δ[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v]Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v])2 � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v) G2y[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] )−1 (44) where the numerator can be written as: � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v) (Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] − 1 r2 δ[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v]Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v])2 = � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v)̸=(0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='0) (Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] − 1 r2 δ[u,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v]Gy[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v])2 + (Gy[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0] − 1 r2 δ[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0]Gy[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0])2 (45) Because δ[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] = � 1 (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v) = (0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0) 0 (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v) ̸= (0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Equation (45) can be written as: � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v)̸=(0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='0) G2 y[u,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] + (r2 − 1)2 r4 G2 y[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0] = � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v)̸=(0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='0) G2 y[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] + G2 y[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0] − G2 y[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0] + (r2 − 1)2 r4 G2 y[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0] = � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v) G2 y[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] − 2r2 − 1 r4 G2 y[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0] (46) By substituting the numerator in Equation (44) with Equa- tion (46),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' we have: SNR˜gy = ( � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v) G2 y[u,' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] − 2r2−1 r4 G2 y[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0] � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v) G2y[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v] )−1 = (1 − 2r2 − 1 r4 G2 y[0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 0] � (u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='v) G2y[u,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' v])−1 = (1 − 2r2 − 1 r4 Energy of DC Component in Gy Total Energy4in Gy )−1 (47) For the convolution layer,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' the gradient w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='t.' 
approximated variable \tilde{x} in the frequency domain is:

\tilde{G}_x[u, v] = \Theta[-u, -v]\,\tilde{G}_y[u, v] = \frac{1}{r^2}\,\Theta[-u, -v]\,\delta[u, v]\,G_y[u, v]    (48)

and its ground truth is:

G_x[u, v] = \Theta[-u, -v]\,G_y[u, v]    (49)

Similar to Equation (47), the SNR of \tilde{g}_x is:

\mathrm{SNR}_{\tilde{g}_x} = \left(1 - \frac{2r^2 - 1}{r^4}\,\frac{\Theta^2[0, 0]\,G_y^2[0, 0]}{\sum_{(u,v)} \Theta^2[u, v]\,G_y^2[u, v]}\right)^{-1} = \left(1 - \frac{2r^2 - 1}{r^4}\,\frac{G_x^2[0, 0]}{\sum_{(u,v)} G_x^2[u, v]}\right)^{-1} = \left(1 - \frac{2r^2 - 1}{r^4}\,\frac{\text{energy of the DC component of } G_x}{\text{total energy of } G_x}\right)^{-1}    (50)

(As a reminder, the total energy of a signal is the sum of the energy in its DC component and the energy in its AC components.)

Equation (50) can be rewritten as:

\frac{r^4\,(1 - \mathrm{SNR}_{\tilde{g}_x}^{-1})}{2r^2 - 1} = \frac{(\Theta[0, 0]\,G_y[0, 0])^2}{\sum_{(u,v)} (\Theta[-u, -v]\,G_y[u, v])^2} = \frac{G_y^2[0, 0]}{\sum_{(u,v)} \left(\frac{\Theta[-u, -v]}{\Theta[0, 0]}\,G_y[u, v]\right)^2}    (51)

Besides, the proposition's assumption (the DC component dominates the frequency spectrum of \Theta) can be written as:

\frac{\Theta^2[0, 0]}{\max_{(u,v) \neq (0,0)} \Theta^2[u, v]} \geq 1    (52)

which is:

\forall (u, v), \quad \frac{\Theta^2[-u, -v]}{\Theta^2[0, 0]} \leq 1    (53)

Thus, by combining Equation (51) and Equation (53), we have:

\frac{r^4\,(1 - \mathrm{SNR}_{\tilde{g}_x}^{-1})}{2r^2 - 1} = \frac{G_y^2[0, 0]}{\sum_{(u,v)} \left(\frac{\Theta[-u, -v]}{\Theta[0, 0]}\,G_y[u, v]\right)^2} \geq \frac{G_y^2[0, 0]}{\sum_{(u,v)} G_y^2[u, v]} = \frac{r^4\,(1 - \mathrm{SNR}_{\tilde{g}_y}^{-1})}{2r^2 - 1}    (54)

which means that \mathrm{SNR}_{\tilde{g}_x} \geq \mathrm{SNR}_{\tilde{g}_y}. This completes our proof for the error analysis. ■

In conclusion, as the gradient propagates backward, the noise introduced by the gradient filter becomes weaker and weaker compared to the real gradient signal.
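This monotonicity can be checked numerically. The sketch below is our illustration, not code from the paper: it builds a toy gradient map g_y, a synthetic DC-dominated filter Θ, forms G_x per Equation (49), and evaluates Equation (50); the helper `approx_snr`, the map size `n`, and the particular filter are all assumptions made for the demonstration.

```python
import numpy as np

def approx_snr(spectrum, r):
    # Eq. (50): SNR^{-1} = 1 - (2r^2 - 1)/r^4 * (DC energy) / (total energy)
    energy = np.abs(spectrum) ** 2
    return 1.0 / (1.0 - (2 * r**2 - 1) / r**4 * energy[0, 0] / energy.sum())

rng = np.random.default_rng(0)
r, n = 4, 16

g_y = rng.standard_normal((n, n))   # toy incoming gradient map
theta = np.full((n, n), 0.01)
theta[0, 0] = 1.0                   # impulse + small constant: DC-dominated

G_y = np.fft.fft2(g_y)
Theta = np.fft.fft2(theta)
assert (np.abs(Theta[0, 0]) >= np.abs(Theta)).all()  # assumption (52)/(53)

# Eq. (49); for a real-valued theta, Theta[-u, -v] = conj(Theta[u, v]).
G_x = np.conj(Theta) * G_y

# The SNR does not decrease as the gradient propagates one layer back.
assert approx_snr(G_x, r) >= approx_snr(G_y, r) >= 1.0
```

Any filter whose spectrum violates the DC-dominance assumption (52) would not carry this guarantee, which is why the proposition states it explicitly.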
This property ensures that the error in the gradient has only a limited influence on the quality of BP. This proof can be extended to the more general case where the batch size and the number of channels are greater than 1 by introducing more dimensions (i.e., the batch dimension and the channel dimension) into all equations listed above.

C. Computation Analysis for ResNet18

In this section, we provide two more examples for the computation analysis in Section 4.2. Figure 7 shows the computation required by the convolutional layers from ResNet18 with different patch sizes for gradient filtering.
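As a reminder of the operation being analyzed: the gradient filter replaces each r × r patch of the gradient map with the patch average, so only about 1/r² unique elements remain. A minimal NumPy sketch (our illustration; the map size is assumed divisible by r):

```python
import numpy as np

def gradient_filter(g, r):
    """Replace each r x r patch of gradient map g with its patch average,
    leaving H*W/r^2 unique values (H and W assumed divisible by r)."""
    h, w = g.shape
    patches = g.reshape(h // r, r, w // r, r)
    means = patches.mean(axis=(1, 3))                 # one value per patch
    return np.repeat(np.repeat(means, r, axis=0), r, axis=1)

g = np.arange(16.0).reshape(4, 4)
f = gradient_filter(g, 2)
# Each 2x2 patch becomes constant, so only 4 unique values survive.
assert len(np.unique(f)) == 4
assert np.isclose(f[0, 0], g[:2, :2].mean())
```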
With fewer unique elements, our approach reduces the number of computations to 1/r^2 of the standard BP method; with the structured gradient, our approach further reduces the number of computations to about 1/(r^2 H_\theta W_\theta) of the standard BP method.

D. Detailed Experimental Setup

In this supplementary section, we extend the experimental setup in Section 5.1.

D.1. ImageNet Classification

D.1.1 Environment

ImageNet-related experiments are conducted on an IBM Power System AC922, which is equipped with a 40-core IBM Power 9 CPU, 256 GB DRAM and 4 NVIDIA Tesla V100 16GB GPUs.
We use PyTorch 1.9.0 compiled with CUDA 10.1 as the deep learning framework.

[Figure 7: FLOPs vs. patch size r × r for (a) the last convolutional layer in block 4 of ResNet18 (512 input/output channels, 7 × 7 input feature map), (b) the last convolutional layer in block 3 (256 input/output channels, 14 × 14 input feature map), and (c) the last convolutional layer in block 2 (128 input/output channels, 28 × 28 input feature map); each panel compares the baseline, reduced unique elements, +structured gradient, and the actual minimum achievable computation.]

Figure 7. Computation analysis for three convolutional layers of the ResNet18 model. Since the convolutional layers in every block of ResNet18 are similar, we use the last convolutional layer as the representative of all convolutional layers in the block. The minimum achievable computation is presented in Equation (16) in the paper. By reducing the number of unique elements, the computations required by our approach drop to about 1/r^2 compared with the standard BP method. By combining it ("+" in the figure) with the structured gradient map, the computations required by our approach drop further.
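The trend in Figure 7 can be reproduced with a back-of-the-envelope FLOPs count. The sketch below is a simplified model, not Equation (16) from the paper: `conv_bp_flops` and its 2× factor (one conv-sized pass each for the input gradient and the weight gradient) are our assumptions for illustration only.

```python
def conv_bp_flops(c_in, c_out, h, w, kh, kw):
    # Rough MAC count for standard BP through a conv layer: one conv-sized
    # pass for the input gradient plus one for the weight gradient.
    return 2 * c_in * c_out * h * w * kh * kw

def filtered_bp_flops(c_in, c_out, h, w, kh, kw, r, structured=False):
    # With r x r gradient filtering, the gradient map keeps ~1/r^2 unique
    # elements; exploiting the structured gradient removes the remaining
    # kh x kw factor, matching the ~1/(r^2 * H_theta * W_theta) reduction.
    base = conv_bp_flops(c_in, c_out, h, w, kh, kw) / r**2
    return base / (kh * kw) if structured else base

# Last conv layer in block 4 of ResNet18: 512 channels, 7x7 map, 3x3 kernel.
full = conv_bp_flops(512, 512, 7, 7, 3, 3)
assert filtered_bp_flops(512, 512, 7, 7, 3, 3, r=7) == full / 49
assert filtered_bp_flops(512, 512, 7, 7, 3, 3, r=7, structured=True) == full / 49 / 9
```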
D.1.2 Dataset Split

We split the dataset into two non-i.i.d. partitions following the FedAvg method [25]. The label distribution is shown in Figure 8. Among all 1000 ImageNet classes, the pretraining and finetuning partitions overlap on only 99 classes, which suggests that our method can efficiently adapt the DNN model to data collected from new environments. For each partition, we randomly select 80% of the data as training data and 20% as validation data.

Model | Accuracy | Model | Accuracy
ResNet-18 | 73.5% | MobileNet-V2 | 74.3%
ResNet-34 | 76.4% | MCUNet | 71.4%

Table 7. Model pretraining accuracy on ImageNet.

[Figure 8: per-class image counts for the pretraining and finetuning partitions split from the ImageNet dataset.]

Figure 8. Label distribution for the pretraining and finetuning datasets. The pretraining and finetuning partitions are split from the ImageNet dataset.

D.1.3 Pretraining

We pretrain ResNet-18, ResNet-34, MobileNet-V2 and MCUNet with the same configuration.
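The non-i.i.d. split described in Section D.1.2 can be realized in the usual FedAvg style: sort sample indices by label, cut them into contiguous shards, and deal disjoint shards to the two partitions so each partition sees a skewed subset of classes. The helper `non_iid_split` and the shard count below are our illustrative sketch, not the authors' exact script.

```python
import numpy as np

def non_iid_split(labels, num_shards=20, rng=None):
    """FedAvg-style non-i.i.d. split: sort sample indices by label, cut
    them into contiguous shards, and deal the shards into two partitions
    so each partition sees only a subset of the classes."""
    rng = rng or np.random.default_rng(0)
    order = np.argsort(labels)
    shards = np.array_split(order, num_shards)
    picks = rng.permutation(num_shards)
    part_a = np.concatenate([shards[i] for i in picks[: num_shards // 2]])
    part_b = np.concatenate([shards[i] for i in picks[num_shards // 2 :]])
    return part_a, part_b

labels = np.repeat(np.arange(10), 100)     # toy dataset: 10 classes x 100
a, b = non_iid_split(labels, num_shards=20)
# Partitions are disjoint and together cover the whole dataset.
assert len(set(a) & set(b)) == 0
assert len(a) + len(b) == len(labels)
```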
We use the SGD optimizer. The learning rate starts at 0.05 and decays according to the cosine annealing method [23] during training. Additionally, the weight decay is set to 1 × 10−4 and the momentum is set to 0.9. We set the batch size to 64. We randomly resize, randomly flip and normalize the images for data augmentation. We use cross entropy as the loss function. Models are trained for 200 epochs and the model with the highest validation accuracy is kept for finetuning. Table 7 shows the pretraining accuracy.

D.1.4 Finetuning

We adopt the hyper-parameters (e.g., momentum, weight decay, etc.) from pretraining, with several changes: models are finetuned for 90 epochs instead of 200; we apply L2 gradient clipping with threshold 2.0; and a linear learning rate warm-up over 4 epochs is introduced at the beginning of finetuning, i.e., for the first 4 epochs the learning rate grows linearly up to 0.05, then decays according to the cosine annealing method in the following epochs. Of note, to ensure a fair comparison, we use the same hyper-parameters for all experiments, regardless of the model type and training strategy.

D.2. CIFAR Classification

D.2.1 Environment

CIFAR-related experiments are conducted on a GPU workstation with a 64-core AMD Ryzen Threadripper PRO 3995WX CPU, 512 GB DRAM and 4 NVIDIA RTX A6000 GPUs. We use PyTorch 1.12.0 compiled with CUDA 11.6 as the deep learning framework.
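For reference, the finetuning schedule of Section D.1.4 (4-epoch linear warm-up to the peak rate 0.05, cosine annealing afterwards, L2 gradient clipping at threshold 2.0) can be sketched as follows. `finetune_lr` and `l2_clip` are our illustrative helpers; in PyTorch the clipping step corresponds to `torch.nn.utils.clip_grad_norm_`.

```python
import math

def finetune_lr(epoch, peak=0.05, warmup=4, total=90):
    # Linear warm-up to `peak` over the first `warmup` epochs, then cosine
    # annealing toward 0 over the remaining epochs (Section D.1.4).
    if epoch < warmup:
        return peak * (epoch + 1) / warmup
    t = (epoch - warmup) / (total - warmup)
    return 0.5 * peak * (1.0 + math.cos(math.pi * t))

def l2_clip(grad_norm, threshold=2.0):
    # Scale factor applied to the gradients for L2 clipping at `threshold`.
    return min(1.0, threshold / grad_norm)

lrs = [finetune_lr(e) for e in range(90)]
assert math.isclose(max(lrs), 0.05)            # peak hit at end of warm-up
assert lrs[0] < lrs[3] and lrs[-1] < lrs[4]    # warm-up rises, cosine decays
assert l2_clip(8.0) == 0.25 and l2_clip(1.0) == 1.0
```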
D.2.2 Dataset Split

We split each dataset into two non-i.i.d. partitions following the FedAvg method. The label distribution is shown in Figure 9. For CIFAR10, the pretraining and finetuning partitions overlap on 2 of the 10 classes; for CIFAR100, they overlap on 6 of the 100 classes.

[Figure 9: per-class image counts for the pretraining and finetuning partitions split from CIFAR10 and CIFAR100.]

Figure 9. Label distribution for the pretraining and finetuning datasets on CIFAR10 and CIFAR100. The pretraining and finetuning partitions are split from CIFAR10/100, respectively.

D.2.3 Pretraining

We pretrain ResNet18 and ResNet34 with the same configuration. We use the ADAM optimizer with a learning rate of 3 × 10−4 and a weight decay of 1 × 10−4, with no learning rate scheduling. We use cross entropy as the loss function. We set the batch size to 128 and normalize the data before feeding it to the model. Models are trained for 30 epochs on CIFAR10 and 50 epochs on CIFAR100. Then, the model with the highest accuracy is kept for finetuning.
Table 8 shows the pretraining accuracy.

 | ResNet18 | ResNet34
CIFAR10 | 95.1% | 97.6%
CIFAR100 | 75.5% | 83.5%

Table 8. Model pretraining accuracy on CIFAR10/100.

D.2.4 Finetuning

We adopt the training configuration from PSQ [7] with some changes. We use cross entropy loss with the SGD optimizer for training. The learning rate starts at 0.05 and decays according to the cosine annealing method during training. The momentum is set to 0 and the weight decay is set to 1 × 10−4. We apply L2 gradient clipping with a threshold of 2.0. Batch normalization layers are fused with the convolution layers before training, which is a common technique for inference acceleration.

D.3. Semantic Segmentation

D.3.1 Environment

Semantic segmentation experiments are conducted on an IBM Power System AC922, which is equipped with a 40-core IBM Power 9 CPU, 256 GB DRAM and 4 NVIDIA Tesla V100 16GB GPUs.
We use PyTorch 1.9.0 compiled with CUDA 10.1 as the deep learning framework. We implement our method based on MMSegmentation 0.27.0.

D.3.2 Pretraining

We use models pretrained by MMSegmentation. Considering that the number of classes, the image statistics, and the model hyper-parameters may differ when applied to different datasets, we calibrate the model before finetuning.
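One way such calibration can be wired up is to train only the classifier (or, when the decoder widths differ, the whole decode head) while batch normalization layers update their running statistics simply by being kept in training mode. The sketch below selects the trainable parameter names; the MMSegmentation-style names (`decode_head.conv_seg`) are our assumption, not verified against the released code.

```python
def calibration_params(param_names, decoder_differs=False):
    """Select which parts of a pretrained segmentation model to calibrate:
    the classifier (last layer) plus batch-norm parameters by default, or
    the whole decode head when its channel widths differ across datasets.
    (In PyTorch, BN running statistics update whenever the layer is in
    training mode, independently of which parameters receive gradients.)"""
    def trainable(name):
        if decoder_differs:
            return name.startswith("decode_head")
        return name.startswith("decode_head.conv_seg") or ".bn." in name
    return [n for n in param_names if trainable(n)]

names = [
    "backbone.layer1.conv.weight",
    "backbone.layer1.bn.weight",
    "decode_head.bottleneck.weight",
    "decode_head.conv_seg.weight",
]
assert calibration_params(names) == [
    "backbone.layer1.bn.weight",
    "decode_head.conv_seg.weight",
]
assert "decode_head.bottleneck.weight" in calibration_params(names, True)
```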
We use the SGD optimizer. The learning rate starts at 0.01 and decays exponentially during training. Additionally, the weight decay is set to 5 × 10−4 and the momentum is set to 0.9. We set the batch size to 8. We randomly crop, flip, photo-metrically distort and normalize the images for data augmentation. We use cross entropy as the loss function. For DeepLabV3, FCN, PSPNet and UPerNet, we calibrate the classifier (i.e., the last layer) and the statistics in the batch normalization layers for 1000 steps on the finetuning dataset. For DeepLabV3-MobileNetV2 and PSPNet-MobileNetV2, because the numbers of channels of the convolutional layers in the decoder differ for models applied to different datasets, we calibrate the decoder and the statistics in the batch normalization layers for 5000 steps on the finetuning dataset.

D.3.3 Finetuning

We finetune all models with the same configuration. We use the SGD optimizer. The learning rate starts at 0.01 and decays according to the cosine annealing method during training. Additionally, the weight decay is set to 5 × 10−4 and the momentum is set to 0.9. We set the batch size to 8. We randomly crop, flip, photo-metrically distort and normalize the images for data augmentation. We use cross entropy as the loss function. Models are finetuned for 20000 steps. Experiments are repeated three times with random seeds 233, 234 and 235.

D.4. On-device Performance Evaluation

D.4.1 NVIDIA Jetson Nano

We use an NVIDIA Jetson Nano with a quad-core Cortex-A57, 4 GB DRAM and a 128-core Maxwell edge GPU for performance evaluation on both the edge CPU and the edge GPU.
We use the aarch64 OS Ubuntu 18.04.6 provided by NVIDIA. During evaluation, the frequencies of the CPU and GPU are 1.5 GHz and 921 MHz, respectively. Our code and the MKLDNN library (a.k.a. OneDNN) are compiled on the Jetson Nano with GCC 7.5.0, while the CUDA and CUDNN libraries are compiled by NVIDIA. For CPU evaluations, our code and the baseline are implemented with MKLDNN v2.6. For GPU evaluations, our code and the baseline are implemented with CUDA 10.2 and CUDNN 8.2.1. Before evaluating each test case, we warm up the device by running the test once. Then we repeat the test 10 times and report the average value for latency, energy consumption, etc. Energy consumption is obtained by reading the embedded power meter of the Jetson Nano every 20 ms.

D.4.2 Raspberry Pi 3b

We use a Raspberry Pi 3b with a quad-core Cortex-A53 CPU and 1 GB DRAM for performance evaluation on CPU.
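The warm-up-then-average protocol described for the Jetson Nano (and reused on the other platforms) is straightforward to reproduce. The harness below is a hypothetical sketch of such a loop, not the released benchmarking code:

```python
import time

def benchmark(fn, warmup=1, repeats=10):
    """Run fn unmeasured `warmup` times, then return the mean latency
    in seconds over `repeats` measured runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats
```

On the Jetson Nano, energy would additionally be obtained by sampling the board's embedded power meter every 20 ms during the measured runs and integrating the readings over time.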
We use the aarch64 Raspberry Pi OS. During evaluation, the CPU frequency is 1.2 GHz. Our code and the MKLDNN library are compiled on the Raspberry Pi with GCC 10.2. Our code and the baseline are implemented with MKLDNN v2.6. Before evaluating each test case, we warm up the device by running the test once. Then we repeat the test 10 times and report the average value for latency, etc.

D.4.3 Desktop

We use a desktop PC with an Intel 11900KF CPU, 32 GB DRAM, and an RTX 3090 Ti GPU for performance evaluation on both desktop CPU and desktop GPU. We use x86_64 Ubuntu 20.04. During evaluation, the frequencies of the CPU and GPU are 4.7 GHz and 2.0 GHz, respectively. Our code is compiled with GCC 9.4.0. MKLDNN is compiled by Anaconda (tag omp h13be974 0). CUDA and CUDNN are compiled by NVIDIA.
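The per-layer test cases evaluated on these platforms (configured in Table 10) are plain 3×3 convolutions with stride and padding 1, so their forward cost follows directly from the layer shape. The helper below is our own illustrative estimate using the common convention of 2 FLOPs per multiply-accumulate; it is not taken from the paper's code:

```python
def conv2d_flops(c_in, c_out, height, width, kernel=3, batch=32):
    """Forward FLOPs of a stride-1, 'same'-padded convolution
    (2 FLOPs per multiply-accumulate; output size equals input size)."""
    macs = batch * c_in * c_out * kernel * kernel * height * width
    return 2 * macs

# Test case 0 from Table 10: 128 -> 128 channels on a 120x160 feature map.
flops_case0 = conv2d_flops(128, 128, 120, 160)
```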
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' For CPU evaluations, our code and baseline are implemented with MKLDNN v2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' For GPU evaluations, our code and baseline are implemented with CUDA 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 and CUDNN 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 16 Pretrain: ADE20K Finetune: VOC12Aug UPerNet #Layers GFLOPs mIoU mAcc PSPNet-M #Layers GFLOPs mIoU mAcc DLV3-M #Layers GFLOPs mIoU mAcc Calibration 0 0 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='66 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='03 Calibration 0 0 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='93 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='01 Calibration 0 0 35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='28 56.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='98 Vanilla BP All 541.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='0 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='23[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='24] 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='79[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='45] Vanilla BP All 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='41 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='51[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='27] 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='01[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='19] Vanilla BP All 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='35 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='78[0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21] 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='40] 5 503.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='01[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='09] 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='97[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='30] 5 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='22 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='88[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='11] 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='67[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='31] 5 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='77 51.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='51[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='09] 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='08[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='44] 10 507.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='01[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='19] 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='83[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='44] 10 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='46 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='71[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='29] 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='93[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='32] 10 33.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='63[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10] 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='93[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='41] Ours 5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='97 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='76[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='11] 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='57[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='07] Ours 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='11 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='59[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='08] 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='28[0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='30] Ours 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='26 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='40[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='00] 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='13[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='54] 10 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='22 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='78[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='23] 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='55[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='38] 10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='76 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='77[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='37] 66.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='82[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='47] 10 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='40 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='14[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='15] 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='48[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='26] Pretrain: ADE20K Finetune: Cityscapes UPerNet #Layers GFLOPs mIoU mAcc PSPNet-M #Layers GFLOPs mIoU mAcc DLV3-M #Layers GFLOPs mIoU mAcc Calibration 0 0 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='15 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='45 Calibration 0 0 28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='83 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='85 Calibration 0 0 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='33 48.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='65 Vanilla BP All 1082.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='1 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='02[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='14] 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='01[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='20] Vanilla BP All 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='82 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='40] 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='72[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='68] Vanilla BP All 108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='12[0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='14] 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='81[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='04] 5 1007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='46[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='19] 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='62[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='27] 5 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='43 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='09[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='43] 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='70[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='49] 5 29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 51.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='00[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='05] 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='20[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='03] 10 1015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='3 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='01[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21] 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='11[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='32] 10 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='90 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='03[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='24] 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='48[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10] 10 66.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='02[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='14] 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='80[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='06] Ours 5 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='94 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='58[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='25] 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='67[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='32] Ours 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='22 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='59[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='38] 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10[0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='41] Ours 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='50 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='83[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='07] 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='87[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='08] 10 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='43 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='14[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='24] 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='41[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='27] 10 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='51 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10[0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='49] 56.' 
93[1.43] 10 2.74 50.22[1.01] 59.99[0.31]

Table 9. Experimental results for the semantic segmentation task with UPerNet, DeepLabV3-MobileNetV2 (DLV3-M) and PSPNet-MobileNetV2 (PSPNet-M). Models are pretrained on the ADE20K dataset and finetuned on the augmented Pascal VOC12 dataset and the Cityscapes dataset, respectively. “#Layers” is short for “the number of active convolutional layers” that are trained. The “Calibration” strategy reports the accuracy when only the classifier and the normalization statistics are updated, to adapt to differences (e.g., a different number of classes) between the pretraining and finetuning datasets.

No.  #Input Channels  #Output Channels  Input Width  Input Height
 0        128               128             120          160
 1        256               256              60           80
 2        512               512              30           40
 3        512               512              14           14
 4        256               256              14           14
 5        128               128              28           28
 6         64                64              56           56

Table 10. Layer configurations for the test cases in Figure 6 in Section 5.5 of the paper. Before evaluating each test case, we warm up the device by running it 10 times; we then repeat the test 200 times and report the average latency, etc.
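The per-layer compute implied by these configurations can be estimated directly. Below is a minimal sketch, not code from the paper: the helper name and the 2-FLOPs-per-multiply-accumulate convention are our assumptions, and batch size 32, 3 × 3 kernels, stride 1 and padding 1 follow the test-case settings stated in this appendix.

```python
# Rough forward-FLOPs estimate for the 3x3 convolution test cases in Table 10.
# Assumptions (ours): 2 FLOPs per multiply-accumulate; stride 1 and padding 1,
# so the output spatial size equals the input spatial size.

def conv_forward_flops(c_in, c_out, width, height, k=3, batch=32):
    """FLOPs for one forward pass of a k x k convolution layer."""
    macs_per_output = c_in * k * k            # multiply-accumulates per output element
    outputs = batch * c_out * width * height  # number of output elements
    return 2 * macs_per_output * outputs      # 2 FLOPs per MAC

# (c_in, c_out, width, height) rows taken from Table 10
cases = [(128, 128, 120, 160), (256, 256, 60, 80), (512, 512, 30, 40),
         (512, 512, 14, 14), (256, 256, 14, 14), (128, 128, 28, 28),
         (64, 64, 56, 56)]

for i, cfg in enumerate(cases):
    print(f"case {i}: {conv_forward_flops(*cfg) / 1e9:.2f} GFLOPs")
```

The same counting convention is what makes the FLOPs columns in the result tables comparable across layers of different sizes.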
D.4.4 Test Case Configurations

Table 10 lists the configurations for the test cases shown in Figure 6 of the paper. In addition to the parameters shown in the table, for all test cases we set the batch size to 32, the kernel size to 3 × 3, and the padding and stride to 1.

E. More Results for Semantic Segmentation

In this section, we extend the experimental results shown in Section 5.3 (Table 3). Table 9 shows the experimental results for UPerNet, PSPNet-MobileNetV2 (PSPNet-M) and DeepLabV3-MobileNetV2 (DLV3-M) on two pairs of pretraining and finetuning datasets. These results further demonstrate the effectiveness of our method on a dense prediction task.

F. More Results for CIFAR10/100 with Different Hyper-Parameter Selections

In this section, we extend the experimental results shown in Section 5.4 (Figure 4). Table 11 (page 18) shows the experimental results for ResNet18 and ResNet34 on the CIFAR datasets. For every model, we test our method with different patch sizes for gradient filtering and different numbers of active convolutional layers (#Layers in Table 11; e.g., if #Layers equals 2, the last two convolutional layers are trained while all other layers are frozen). These results further support the qualitative findings in Section 5.4.

G. Results for Combining Gradient Filtering with Gradient Quantization

In this section, we provide experimental results for combining our method, i.e., gradient filtering, with gradient quantization. Table 12 (page 19) shows experimental results for ResNet18 and ResNet34 with the gradient quantization methods PTQ [4] and PSQ [7] under different hyper-parameters. Both forward propagation and backward propagation are quantized to INT8. These results support the wide applicability of our method.

H. More Results for On-device Performance Evaluation

In this section, we extend the experimental results shown in Section 5.5. Figure 10 shows the energy savings and the overhead of our method. For most test cases with patch size 4 × 4, we achieve over 80× energy savings with less than 20% overhead on both CPU and GPU. Moreover, for test case 1 on the Raspberry Pi CPU, forward propagation is even faster when our method is applied (which results in negative overhead). These results further show that our method is practical for real deployment in both high-performance and IoT applications.

                      CIFAR10                             CIFAR100
                 ResNet18          ResNet34          ResNet18          ResNet34
        #Layers  ACC[%]  FLOPs     ACC[%]  FLOPs     ACC[%]  FLOPs     ACC[%]  FLOPs
Vanilla BP  1    91.7    128.25M   94.2    128.25M   73.8    128.39M   76.9    128.39M
            2    93.6    487.68M   96.6    487.68M   77.6    487.82M   82.0    487.82M
            3    93.7    847.15M   96.6    847.13M   77.6    847.29M   82.1    847.27M
            4    94.4    1.14G     96.8    1.21G     78.0    1.14G     83.0    1.21G
+Gradient   1    91.5    8.18M     94.2    8.18M     73.7    8.31M     77.0    8.31M
 Filter R2  2    92.7    26.80M    96.6    26.80M    75.6    26.94M    81.1    26.94M
            3    92.8    45.45M    96.5    45.44M    75.6    45.59M    81.1    45.58M
            4    93.9    60.01M    96.6    64.07M    76.4    60.15M    82.0    64.21M
+Gradient   1    91.4    1.88M     94.3    1.88M     73.7    2.02M     76.9    2.02M
 Filter R4  2    92.7    7.93M     96.4    7.93M     74.9    8.07M     80.4    8.07M
            3    92.8    13.99M    96.4    13.98M    74.9    14.12M    80.4    14.12M
            4    93.3    19.12M    96.1    20.04M    75.2    19.26M    80.5    20.17M
+Gradient   1    91.5    303.10K   94.2    303.10K   73.7    441.34K   76.9    441.34K
 Filter R7  2    91.5    3.21M     95.8    3.21M     74.1    3.35M     80.4    3.35M
            3    91.7    6.12M     96.0    6.12M     74.1    6.26M     80.3    6.26M
            4    92.6    8.90M     96.0    9.03M     75.4    9.04M     80.3    9.17M

Table 11. Experimental results on the CIFAR10 and CIFAR100 datasets for ResNet18 and ResNet34 with different hyper-parameter selections. “ACC” is short for accuracy. “#Layers” is short for “the number of active convolutional layers”; for example, #Layers equal to 2 means that only the last two convolutional layers are trained. “Gradient Filter R2/R4/R7” apply the proposed gradient filtering method with patch sizes 2 × 2, 4 × 4 and 7 × 7, respectively.
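The gradient filtering evaluated above replaces each r × r patch of a gradient map with the patch mean, which is what produces the small number of unique elements and the reduced FLOPs. A minimal NumPy sketch of that patch-wise averaging (the function name is ours; we assume the spatial size is divisible by r, with r = 2, 4, 7 corresponding to R2/R4/R7):

```python
import numpy as np

def filter_gradient(grad, r):
    """Replace each r x r spatial patch of a (C, H, W) gradient map with its
    mean, leaving only (H/r) * (W/r) unique values per channel."""
    c, h, w = grad.shape
    patches = grad.reshape(c, h // r, r, w // r, r)   # split H and W into patches
    means = patches.mean(axis=(2, 4))                 # (C, H/r, W/r) patch means
    # Broadcast each mean back over its r x r patch.
    return np.repeat(np.repeat(means, r, axis=1), r, axis=2)

g = np.random.randn(8, 28, 28).astype(np.float32)
gf = filter_gradient(g, 4)
# Each channel now has at most (28/4) * (28/4) = 49 unique values.
print(len(np.unique(gf[0])))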
[Figure 10: Energy savings and overhead of our method on test cases 0–6. Panels show CPU and GPU energy savings for Jetson-R2 and Jetson-R4, and normalized overhead (as a percentage of the forward cost, with a 20% overhead reference line) for Jetson-R2/R4, 11900KF-R2/R4 and RPi3-R2/R4. Baselines: MKLDNN (CPU) and CUDNN (GPU).]
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='Forward Cost ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='20% Overhead ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='Normalized GPU Overhead ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='Jetson-R2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='Jetson-R4 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='RTX3090Ti-R2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='RTX3090Ti-R4 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='Figure 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' Energy savings and overhead resuls on multiple CPUs and GPUs under different test cases (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=', different input sizes, number of channels, etc.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='.).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' For test case 4 and 5 with patch size 4 × 4 (Jetson-R4) on GPU, the latency of our method is too small to be captured by the power meter with a 20 ms sample rate so the energy savings data is not available.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' For most test cases with patch size 4 × 4, our method achieves over 80× energy savings with less than 20% overhead.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content=' 18 CIFAR10 CIFAR100 ResNet18 ResNet34 ResNet18 ResNet34 Strategy #Layers ACC[%] #OPs Strategy #Layers ACC[%] #OPs Strategy #Layers ACC[%] #OPs Strategy #Layers ACC[%] #OPs PTQ 1 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='25M PTQ 1 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='25M PTQ 1 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='0 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='39M PTQ 1 76.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='39M 2 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2 487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='68M 2 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2 487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='68M 2 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='8 487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='82M 2 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='3 487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='82M 3 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 847.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='15M 3 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2 847.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='13M 3 77.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 847.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='29M 3 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 847.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='27M 4 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='14G 4 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21G 4 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='14G 4 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21G PTQ +Gradient Filter R2 1 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 8.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='18M PTQ +Gradient Filter R2 1 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='18M PTQ +Gradient Filter R2 1 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='31M PTQ +Gradient Filter R2 1 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='31M 2 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='80M 2 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='80M 2 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='94M 2 80.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='0 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='94M 3 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='45M 3 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='0 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='44M 3 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='59M 3 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='1 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='58M 4 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='01M 4 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='07M 4 76.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='3 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='15M 4 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21M PTQ +Gradient Filter R4 1 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='3 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='88M PTQ +Gradient Filter R4 1 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='88M PTQ +Gradient Filter R4 1 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='02M PTQ +Gradient Filter R4 1 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='02M 2 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 7.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='93M 2 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='93M 2 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='1 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='07M 2 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='07M 3 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='99M 3 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='98M 3 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='12M 3 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 14.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='12M 4 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='12M 4 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='04M 4 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='1 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='26M 4 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='17M PTQ +Gradient Filter R7 1 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2 303.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10K PTQ +Gradient Filter R7 1 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 303.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='10K PTQ +Gradient Filter R7 1 73.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 441.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='34K PTQ +Gradient Filter R7 1 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 441.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='34K 2 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21M 2 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21M 2 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='35M 2 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='35M 3 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='12M 3 95.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='12M 3 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='26M 3 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='26M 4 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='90M 4 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='03M 4 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='3 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='04M 4 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='17M PSQ 1 91.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='25M PSQ 1 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='6 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='25M PSQ 1 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='39M PSQ 1 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='39M 2 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='3 487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='68M 2 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='1 487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='68M 2 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='7 487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='82M 2 80.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='3 487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='82M 3 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 847.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='15M 3 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='2 847.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='13M 3 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='9 847.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='29M 3 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 847.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='27M 4 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='14G 4 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfe_gi/content/2301.00330v1.pdf'} +page_content='21G 4 78.' 
…0 1.14G    4   82.2   1.21G

Method                     #Layers | ResNet18/CIFAR10 | ResNet34/CIFAR10 | ResNet18/CIFAR100 | ResNet34/CIFAR100
                                   | ACC    Mem       | ACC    Mem       | ACC    Mem        | ACC    Mem
PSQ + Gradient Filter R2      1    | 91.3   8.18M     | 93.5   8.18M     | 73.8   8.31M      | 76.4   8.31M
                              2    | 92.6   26.80M    | 96.0   26.80M    | 76.0   26.94M     | 80.1   26.94M
                              3    | 92.8   45.45M    | 96.1   45.44M    | 75.9   45.59M     | 80.0   45.58M
                              4    | 93.7   60.01M    | 96.1   64.07M    | 76.3   60.15M     | 80.9   64.21M
PSQ + Gradient Filter R4      1    | 91.4   1.88M     | 93.6   1.88M     | 73.5   2.02M      | 76.5   2.02M
                              2    | 92.6   7.93M     | 95.6   7.93M     | 75.3   8.07M      | 79.5   8.07M
                              3    | 92.7   13.99M    | 95.6   13.98M    | 75.1   14.12M     | 79.6   14.12M
                              4    | 93.2   19.12M    | 95.5   20.04M    | 76.2   19.26M     | 80.2   20.17M
PSQ + Gradient Filter R7      1    | 91.2   303.10K   | 93.6   303.10K   | 73.5   441.34K    | 76.5   441.34K
                              2    | 91.4   3.21M     | 95.5   3.21M     | 74.4   3.35M      | 79.5   3.35M
                              3    | 91.6   6.12M     | 95.4   6.12M     | 74.5   6.26M      | 79.6   6.26M
                              4    | 92.7   8.90M     | 95.5   9.03M     | 75.5   9.04M      | 79.6   9.17M

Table 12. Experimental results for ResNet18 and ResNet34 with different gradient quantization methods (i.e., PTQ [4] and PSQ [7]) and hyper-parameter selections on CIFAR10/100. Feature maps, activations, weights, and gradients are quantized to INT8. "ACC" is short for accuracy; "#Layers" is short for "the number of active convolutional layers" (e.g., #Layers = 2 means the last two convolutional layers are trained). "Gradient Filter R2/R4/R7" use the proposed gradient filtering method with patch sizes 2 × 2, 4 × 4, and 7 × 7, respectively.
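The "Gradient Filter Rr" variants in Table 12 differ only in the patch size used when approximating the gradient map. A minimal, framework-free sketch of the underlying patch-averaging idea is shown below: each r × r patch of a 2-D gradient map is replaced by its mean, so the patch contributes a single unique value (this is an illustration of the concept, not the paper's actual implementation; the function name and list-based representation are assumptions).

```python
def filter_gradient(grad, r):
    """Replace every r x r patch of a 2-D gradient map (list of lists)
    with the patch mean, so each patch holds one unique value."""
    h, w = len(grad), len(grad[0])
    out = [[0.0] * w for _ in range(h)]
    for i0 in range(0, h, r):
        for j0 in range(0, w, r):
            # Clip the patch at the map boundary.
            rows = range(i0, min(i0 + r, h))
            cols = range(j0, min(j0 + r, w))
            patch = [grad[i][j] for i in rows for j in cols]
            mean = sum(patch) / len(patch)
            for i in rows:
                for j in cols:
                    out[i][j] = mean
    return out

# "R2" corresponds to 2 x 2 patches: a 2 x 4 map collapses to 2 unique values.
g = [[1.0, 3.0, 5.0, 7.0],
     [1.0, 3.0, 5.0, 7.0]]
fg = filter_gradient(g, 2)  # -> [[2.0, 2.0, 6.0, 6.0], [2.0, 2.0, 6.0, 6.0]]
```

Larger patches (R4, R7) leave fewer unique elements per gradient map, which is consistent with the memory columns in Table 12 shrinking as the patch size grows.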