Theoretical Characterization of How Neural Network Pruning Affects its Generalization

Hongru Yang∗   Yingbin Liang†   Xiaojie Guo‡   Lingfei Wu§   Zhangyang Wang¶

∗Department of Computer Science, The University of Texas at Austin; e-mail: hy6385@utexas.edu
†Department of Electrical and Computer Engineering, The Ohio State University; e-mail: liang.889@osu.edu
‡IBM Thomas J. Watson Research Center; e-mail: xguo7@gmu.edu
§Pinterest; e-mail: lwu@email.wm.edu
¶Department of Electrical and Computer Engineering, The University of Texas at Austin; e-mail: atlaswang@utexas.edu

Abstract

It has been observed in practice that applying pruning-at-initialization methods to neural networks and then training the sparsified networks can not only retain the test performance of the original dense models, but also sometimes slightly boost generalization. A theoretical understanding of such experimental observations is yet to be developed. This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization. Specifically, this work considers a classification task for overparameterized two-layer neural networks, where the network is randomly pruned at different rates at initialization. It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero and the network exhibits good generalization performance. More surprisingly, the generalization bound improves as the pruning fraction gets larger. To complement this positive result, this work further shows a negative result: there exists a large pruning fraction such that, while gradient descent is still able to drive the training loss toward zero (by memorizing noise), the generalization performance is no better than random guessing. This further suggests that pruning can change the feature learning process, which leads to the performance drop of the pruned neural network. To the best of our knowledge, this is the first generalization result for pruned neural networks, suggesting that pruning can improve a neural network's generalization.

1 Introduction

Neural network pruning dates back to the early stage of the development of neural networks (LeCun et al., 1989). Since then, many research works have focused on using neural network pruning as a model compression technique, e.g., (Molchanov et al., 2019; Luo and Wu, 2017; Ye et al., 2020; Yang et al., 2021). However, all of these works focused on pruning neural networks after training to reduce inference time, and thus the efficiency gain from pruning cannot be directly transferred to the training phase. It was not until recently that Frankle and Carbin (2018) showed a surprising phenomenon: a neural network pruned at initialization can be trained to achieve performance competitive with the dense model. They called this phenomenon the lottery ticket hypothesis. The lottery ticket hypothesis states that there exists a sparse subnetwork inside
a dense network at the random initialization stage such that, when trained in isolation, it can match the test accuracy of the original dense network after training for at most the same number of iterations. On the other hand, the algorithm that Frankle and Carbin (2018) proposed to find the lottery ticket requires many rounds of pruning and retraining, which is computationally expensive. Many subsequent works focused on developing new methods to reduce the cost of finding such a network at initialization (Lee et al., 2018; Wang et al., 2019; Tanaka et al., 2020; Liu and Zenke, 2020; Chen et al., 2021a). A further investigation by Frankle et al. (2020) showed that some of these methods merely discover the layer-wise pruning ratio rather than the sparsity pattern.

The discovery of the lottery ticket hypothesis sparked further interest in understanding this phenomenon. Another line of research focused on finding a subnetwork inside a dense network at random initialization such that the subnetwork can achieve good performance (Zhou et al., 2019; Ramanujan et al., 2020). Shortly after that, Malach et al. (2020) formalized this phenomenon as the strong lottery ticket hypothesis: under certain assumptions on the weight initialization distribution, a sufficiently overparameterized neural network at initialization contains a subnetwork with roughly the same accuracy as the target network. Later, Pensia et al. (2020) improved the overparameterization parameters, and Sreenivasan et al. (2021) showed that this type of result holds even if the weights are binary. Unsurprisingly, as pointed out by Malach et al. (2020), finding such a subnetwork is computationally hard. Nonetheless, all of these analyses take a function approximation perspective, and none of the aforementioned works have considered the effect of pruning on gradient descent dynamics, let alone on the neural network's generalization.

Interestingly, empirical studies have found that sparsity can further improve generalization in certain scenarios (Chen et al., 2021b; Ding et al., 2021; He et al., 2022). There have also been empirical works showing that random pruning can be effective (Frankle et al., 2020; Su et al., 2020; Liu et al., 2021b). However, theoretical understanding of such benefits of pruning is still limited. In this work, we take the first step toward answering the following important open question from a theoretical perspective:

How does the pruning fraction affect the training dynamics and the model's generalization, if the model is pruned at initialization and trained by gradient descent?

We study this question using random pruning. We consider a classification task where the input data consist of a class-dependent sparse signal and random noise. We analyze the training dynamics of a two-layer convolutional neural network pruned at initialization. Specifically, this work makes the following contributions:

• Mild pruning. We prove that there indeed exists a range of small pruning fractions in which the generalization error bound gets better as the pruning fraction gets larger.
In this case, the signal in the feature is well-preserved and due to the effect of +pruning purifying the feature, the effect from noise is reduced. We provide detailed explana- +tion in Section 3. Up to our knowledge, this is the first theoretical result on generalization for +pruned neural networks, which suggests that pruning can improve generalization under some +setting. Further, we conduct experiments to verify our results. +• Over pruning. To complement the above positive result, we also show a negative result: if +the pruning fraction is larger than a certain threshold, then the generalization performance +is no better than a simple random guessing, although gradient descent is still able to drive +the training loss toward zero. This further suggests that the performance drop of the pruned +2 + +Probability Density +Signal Strength +μ +Mild Pruning +Full model +Over Pruning +Figure 1: A pictorial demonstration of our results. The bell-shaped curves model the distribution of +the signal in the features, where the mean represents the signal strength and the width of the curve +indicates the variance of noise. Our results show that mild pruning preserves the signal strength +and reduces the noise variance (and hence yields better generalization), whereas over pruning lowers +signal strength albeit reducing noise variance. +neural network is not solely caused by the pruned network’s own lack of trainability or ex- +pressiveness, but also by the change of gradient descent dynamics due to pruning. +• Technically, we develop novel analysis to bound pruning effect to weight-noise and weight- +signal correlation. +Further, in contrast to many previous works that considered only the +binary case, our analysis handles multi-class classification with general cross-entropy loss. +Here, a key technical development is a gradient upper bound for multi-class cross-entropy +loss, which might be of independent interest. +Pictorially, our result is summarized in Figure 1. We point out that the neural network training we +consider is in the feature learning regime, where the weight parameters can go far away from their +initialization. This is fundamentally different from the popular neural tangent kernel regime, +where the neural networks essentially behave similar to its linearization. +1.1 +Related Works +The Lottery Ticket Hypothesis and Sparse Training. The discovery of the lottery ticket +hypothesis (Frankle and Carbin, 2018) has inspired further investigation and applications. One line +of research has focused on developing computationally efficient methods to enable sparse training: +the static sparse training methods are aiming at identifying a sparse mask at the initialization stage +based on different criterion such as SNIP (loss-based) (Lee et al., 2018), GraSP (gradient-based) +(Wang et al., 2019), SynFlow (synaptic strength-based) (Tanaka et al., 2020), neural tangent kernel +based method (Liu and Zenke, 2020) and one-shot pruning (Chen et al., 2021a). Random pruning +has also been considered in static sparse training such as uniform pruning (Mariet and Sra, 2015; +He et al., 2017; Gale et al., 2019; Suau et al., 2018), non-uniform pruning (Mocanu et al., 2016), +expander-graph-related techniques (Prabhu et al., 2018; Kepner and Robinett, 2019) Erd¨os-R´enyi +(Mocanu et al., 2018) and Erd¨os-R´enyi-Kernel (Evci et al., 2020). 
On the other hand, dynamic +sparse training allows the sparse mask to be updated (Mocanu et al., 2018; Mostafa and Wang, +2019; Evci et al., 2020; Jayakumar et al., 2020; Liu et al., 2021c,d,a; Peste et al., 2021). The sparsity +pattern can also be learned by using sparsity-inducing regularizer (Yang et al., 2020). Recently, He +et al. (2022) discovered that pruning can exhibit a double descent phenomenon when the data-set +labels are corrupted. +Another line of research has focused on studying pruning the neural networks at its random +initialization to achieve good performance (Zhou et al., 2019; Ramanujan et al., 2020). In particular, +3 + +Ramanujan et al. (2020) showed that it is possible to prune a randomly initialized wide ResNet-50 +to match the performance of a ResNet-34 trained on ImageNet. This phenomenon is named the +strong lottery ticket hypothesis. Later, Malach et al. (2020) proved that under certain assumption +on the initialization distribution, a target network of width d and depth l can be approximated by +pruning a randomly initialized network that is of a polynomial factor (in d, l) wider and twice deeper +even without any further training. However finding such a network is computationally hard, which +can be shown by reducing the pruning problem to optimizing a neural network. Later, Pensia et al. +(2020) improved the widening factor to being logarithmic and Sreenivasan et al. (2021) proved that +with a polylogarithmic widening factor, such a result holds even if the network weight is binary. A +follow-up work shows that it is possible to find a subnetwork achieving good performance at the +initialization and then fine-tune (Sreenivasan et al., 2022). Our work, on the other hand, analyzes +the gradient descent dynamics of a pruned neural network and its generalization after training. +Analyses of Training Neural Networks by Gradient Descent. A series of work (Allen- +Zhu et al., 2019; Du et al., 2019; Lee et al., 2019; Zou et al., 2020; Zou and Gu, 2019; Ji and +Telgarsky, 2019; Chen et al., 2020b; Song and Yang, 2019; Oymak and Soltanolkotabi, 2020) has +proved that if a deep neural network is wide enough, then (stochastic) gradient descent provably can +drive the training loss toward zero in a fast rate based on neural tangent kernel (NTK) (Jacot et al., +2018). Further, under certain assumption on the data, the learned network is able to generalize +(Cao and Gu, 2019; Arora et al., 2019). However, as it is pointed out by Chizat et al. (2019), in the +NTK regime, the gradient descent dynamics of the neural network essentially behaves similarly to +its linearization and the learned weight is not far away from the initialization, which prohibits the +network from performing any useful feature learning. In order to go beyond NTK regime, one line +of research has focused on the mean field limit (Song et al., 2018; Chizat and Bach, 2018; Rotskoff +and Vanden-Eijnden, 2018; Wei et al., 2019; Chen et al., 2020a; Sirignano and Spiliopoulos, 2020; +Fang et al., 2021). Recently, people have started to study the neural network training dynamics in +the feature learning regime where data from different class is defined by a set of class-related signals +which are low rank (Allen-Zhu and Li, 2020, 2022; Cao et al., 2022; Shi et al., 2021; Telgarsky, +2022). However, all previous works did not consider the effect of pruning. 
Our work also focuses +on the aforementioned feature learning regime, but for the first time characterizes the impact of +pruning on the generalization performance of neural networks. +2 +Preliminaries and Problem Formulation +In this section, we introduce our notation, data generation process, neural network architecture +and the optimization algorithm. +Notations. We use lower case letters to denote scalars and boldface letters and symbols (e.g. +x) to denote vectors and matrices. We use ⊙ to denote element-wise product. For an integer n, we +use [n] to denote the set of integers {1, 2, . . . , n}. We use x = O(y), x = Ω(y), x = Θ(y) to denote +that there exists a constant C such that x ≤ Cy, x ≥ Cy, x = Cy respectively. We use �O, �Ω and +�Θ to hide polylogarithmic factor in these notations. +Finally, we use x = poly(y) if x = O(yC) for +some positive constant C, and x = poly log y if x = poly(log y). +2.1 +Settings +Definition 2.1 (Data distribution of K classes). Consider we are given the set of signal vectors +{µei}K +i=1, where µ > 0 denotes the strength of the signal, and ei denotes the i-th standard basis +4 + +vector with its i-th entry being 1 and all other coordinates being 0. Each data point (x, y) with +x = [x⊤ +1 , x⊤ +2 ]⊤ ∈ R2d and y ∈ [K] is generated from the following distribution D: +1. The label y is generated from a uniform distribution over [K]. +2. A noise vector ξ is generated from the Gaussian distribution N(0, σ2 +nI). +3. With probability 1/2, assign x1 = µy, x2 = ξ; with probability 1/2, assign x2 = µy, x1 = ξ +where µy = µey. +The sparse signal model is motivated by the empirical observation that during the process of +training neural networks, the output of each layer of ReLU is usually sparse instead of dense. This +is partially due to the fact that in practice the bias term in the linear layer is used (Song et al., +2021). For samples from different classes, usually a different set of neurons fire. Our study can be +seen as a formal analysis on pruning the second last layer of a deep neural network in the layer- +peeled model as in Zhu et al. (2021); Zhou et al. (2022). We also point out that our assumption on +the sparsity of the signal is necessary for our analysis. If we don’t have this sparsity assumption +and only make assumption on the ℓ2 norm of the signal, then in the extreme case, the signal is +uniformly distributed across all coordinate and the effect of pruning to the signal and the noise will +be essentially the same: their ℓ2 norm will both be reduced by a factor of √p. +Network architecture and random pruning. We consider a two-layer convolutional neural +network model with polynomial ReLU activation σ(z) = (max{0, z})q, where we focus on the case +when q = 3 1 The network is pruned at the initialization by mask M where each entry in the mask +M is generated i.i.d. from Bernoulli(p). Let mj,r denotes the r-th row of Mj. Given the data (x, y), +the output of the neural network can be written as F(W ⊙ M, x) = (F1(W1 ⊙ M1, x), F2(W2 ⊙ +M2, x), . . . , Fk(Wk ⊙ Mk, x)) where the j-th output is given by +Fj(Wj ⊙ Mj, x) = +m +� +r=1 +[σ(⟨wj,r ⊙ mj,r, x1⟩) + σ(⟨wj,r ⊙ mj,r, x2⟩)] += +m +� +r=1 +[σ(⟨wj,r ⊙ mj,r, µ⟩) + σ(⟨wj,r ⊙ mj,r, ξ⟩)]. +The mask M is only sampled once at the initialization and remains fixed through the entire training +process. From now on, we use tilde over a symbol to denote its masked version, e.g., +� +W = W ⊙ M and �wj,r = wj,r ⊙ mj,r. 
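To make the setup concrete, the following minimal NumPy sketch instantiates the data distribution of Definition 2.1 and the pruned two-layer network F(W ⊙ M, x) defined above. It is an illustrative reimplementation rather than code from the paper; the function names and the specific values of K, m, d, p, µ and σn are our own choices.

import numpy as np

def sample_data(n, K, d, mu, sigma_n, rng):
    # Draw n samples (x, y) from the distribution D of Definition 2.1.
    X = np.zeros((n, 2, d))
    y = rng.integers(0, K, size=n)                     # label uniform over [K]
    for i in range(n):
        signal = np.zeros(d)
        signal[y[i]] = mu                              # sparse signal mu * e_y
        noise = rng.normal(0.0, sigma_n, size=d)       # xi ~ N(0, sigma_n^2 I)
        if rng.random() < 0.5:                         # signal lands in patch 1 or patch 2
            X[i, 0], X[i, 1] = signal, noise
        else:
            X[i, 0], X[i, 1] = noise, signal
    return X, y

def network_output(W, M, x, q=3):
    # F_j(W_j ⊙ M_j, x) = sum_r [sigma(<w_{j,r} ⊙ m_{j,r}, x_1>) + sigma(<w_{j,r} ⊙ m_{j,r}, x_2>)]
    # with polynomial ReLU sigma(z) = max(0, z)^q.
    Wm = W * M                                         # the mask is sampled once and kept fixed
    inner = np.einsum('jrd,pd->jrp', Wm, x)            # inner products with both patches
    return (np.maximum(inner, 0.0) ** q).sum(axis=(1, 2))   # one output per class j

rng = np.random.default_rng(0)
K, m, d, p, mu = 3, 20, 400, 0.8, 1.0
sigma_n = 1.0 / np.sqrt(d)                             # illustrative noise scale
W = rng.normal(0.0, 0.01, size=(K, m, d))              # w_{j,r} ~ N(0, sigma_0^2 I)
M = rng.binomial(1, p, size=(K, m, d)).astype(float)   # Bernoulli(p) mask
X, y = sample_data(5, K, d, mu, sigma_n, rng)
print(network_output(W, M, X[0]), "label:", y[0])

Since the output sums the activations over both patches, it does not depend on which patch carries the signal, matching the symmetric assignment in Definition 2.1.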
+Since µj ⊙ mj,r = 0 with probability 1 − p, some neurons will not receive the corresponding +signal at all and will only learn noise. Therefore, for each class j ∈ [k], we split the neurons into +two sets based on whether it receives its corresponding signal or not: +Sj +signal = {r ∈ [m] : µj ⊙ mj,r ̸= 0}, +Sj +noise = {r ∈ [m] : µj ⊙ mj,r = 0}. +Gradient descent algorithm. We consider the network is trained by cross-entropy loss with +softmax. We denote by logiti(F, x) := +eFi(x) +� +j∈[k] eFj(x) and the cross-entropy loss can be written as +1We point out that as many previous works (Allen-Zhu and Li, 2020; Zou et al., 2021; Cao et al., 2022), polynomial +ReLU activation can help us simplify the analysis of gradient descent, because polynomial ReLU activation can give +a much larger separation of signal and noise (thus, cleaner analysis) than ReLU. Our analysis can be generalized to +ReLU activation by using the arguments in (Allen-Zhu and Li, 2022). +5 + +ℓ(F(x, y)) = − log logity(F, x). The convolutional neural network is trained by minimizing the +empirical cross-entropy loss given by +LS(W) = 1 +n +n +� +i=1 +ℓ[F(W ⊙ M; xi, yi)] = E +S ℓ[F(W ⊙ M; xi, yi)], +where S = {(xi, yi)}n +i=1 is the training data set. Similarly, we define the generalization loss as +LD := E +(x,y)[ℓ(F(W ⊙ M; x, y))]. +The model weights are initialized from a i.i.d. Gaussian N(0, σ2 +0). The gradient of the cross-entropy +loss is given by ℓ′ +j,i := ℓ′ +j(xi, yi) = logitj(F, xi) − I(j = yi). Since +∇wj,rLS(W ⊙ M) = ∇wj,r⊙mj,rLS(W ⊙ M) ⊙ mj,r = ∇ �wj,rLS(� +W) ⊙ mj,r, +we can write the full-batch gradient descent update of the weights as +�w(t+1) +j,r += �w(t) +j,r − η∇ �wj,rLS(� +W) ⊙ mj,r += �w(t) +j,r − η +n +n +� +i=1 +ℓ′(t) +j,i · σ′ �� +�w(t) +j,r, ξi +�� +· �ξj,r,i − η +n +n +� +i=1 +ℓ′(t) +j,i σ′ �� +�w(t) +j,r, µyi +�� +µyi ⊙ mj,r, +for j ∈ [K] and r ∈ [m], where �ξj,r,i = ξi ⊙ mj,r. +Condition 2.2. We consider the parameter regime described as follows: (1) Number of classes +K = O(log d). (2) Total number of training samples n = poly log d. (3) Dimension d ≥ Cd for +some sufficiently large constant Cd. (4) Relationship between signal strength and noise strength: +µ = Θ(σn +√ +d log d) = Θ(1). (5) The number of neurons in the network m = Ω(poly log d). (6) +Initialization variance: σ0 = �Θ(m−4n−1µ−1). (7) Learning rate: Ω(1/ poly(d)) ≤ η ≤ �O(1/µ2). +(8) Target training loss: ϵ = Θ(1/ poly(d)). +Conditions (1) and (2) ensure that there are enough samples in each class with high probability. +Condition (3) ensures that our setting is in high-dimensional regime. Condition (4) ensures that +the full model can be trained to exhibit good generalization. Condition (5), (6) and (7) ensures that +the neural network is sufficiently overparameterized and can be optimized efficiently by gradient +descent. Condition (7) and (8) further ensures that training time is polynomial in d. We further +discuss the practical consideration of η and ϵ to justify their condition in Remark D.9. +3 +Mild Pruning +3.1 +Main result +The first main result shows that there exists a threshold on the pruning fraction p such that pruning +helps the neural network’s generalization. +Theorem 3.1 (Main Theorem for Mild Pruning, Informal). Under Condition 2.2, if p ∈ [C1 +log d +m , 1] +for some constant C1, then with probability at least 1 − O(d−1) over the randomness in the data, +network initialization and pruning, there exists T = �O(Kη−1σ2−q +0 +µ−q +K2m4µ−2η−1ϵ−1) such that +6 + +1. The training loss is below ϵ: LS(� +W(T)) ≤ ϵ. +2. 
The generalization loss can be bounded by LD(� +W(T)) ≤ O(Kϵ) + exp(−n2/p). +Theorem 3.1 indicates that there exists a threshold in the order of Θ(log d +m ) such that if p is +above this threshold (i.e., the fraction of the pruned weights is small), gradient descent is able to +drive the training loss towards zero (as item 1 claims) and the overparameterized network achieves +good testing performance (as item 2 claims). In the next subsection, we explain why pruning can +help generalization via an outline of our proof, and we defer all the detailed proofs in Appendix D. +3.2 +Proof Outline +Our proof contains the establishment of the following two properties: +• First we show that after mild pruning the network is still able to learn the signal, and the +magnitude of the signal in the feature is preserved. +• Then we show that given a new sample, pruning reduces the noise effect in the feature which +leads to the improvement of generalization. +We first show the above properties for three stages of gradient descent: initialization, feature +growing phase, and converging phase, and then establish the generalization property. +Initialization. First of all, readers might wonder why pruning can even preserve signal at all. +Intuitively, a network will achieve good performance if its weights are highly correlated with the +signal (i.e., their inner product is large). Two intuitive but misleading heuristics are given by the +following: +• Consider a fixed neuron weight. +At the random initialization, in expectation, the signal +correlation with the weights is given by Ew,m[| ⟨w ⊙ m, µ⟩ |] ≤ pσ0µ and the noise correlation +with the weights is given by Ew,m,ξ[| ⟨w ⊙ m, ξ⟩ |] ≤ +� +Ew,m,ξ[⟨w ⊙ m, ξ⟩2] = σ0σn +√pd by +Jensen’s inequality. Based on this argument, taking a sum over all the neurons, pruning will +hurt weight-signal correlation more than weight-noise correlation. +• Since we are pruning with Bernoulli(p), a given neuron will not receive signal at all with +probability 1 − p. Thus, there is roughly p fraction of the neurons receiving the signal and +the rest 1 − p fraction will be purely learning from noise. Even though for every neuron, +roughly √p portion of ℓ2 mass from the noise is reduced, at the same time, pruning also +creates 1 − p fraction of neurons which do not receive signals at all and will purely output +noise after training. Summing up the contributions from every neuron, the signal strength +is reduced by a factor of p while the noise strength is reduced by a factor of √p. We again +reach the conclusion of pruning under any rate will hurt the signal more than noise. +The above analysis shows that under any pruning rate, it seems pruning can only hurt the signal +more than noise at the initialization. Such analysis would be indicative if the network training is +under the neural tangent kernel regime, where the weight of each neuron does not travel far from its +initialization so that the above analysis can still hold approximately after training. However, when +the neural network training is in the feature learning regime, this average type analysis becomes +misleading. Namely, in such a regime, the weights with large correlation with the signal at the +initialization will quickly evolve into singleton neurons and those weights with small correlation +7 + +will remain small. In our proof, we focus on the featuring learning regime, and analyze how the +network weights change and what are the effect of pruning during various stages of gradient descent. 
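The gap between the average-case heuristics above and the maximum correlations that actually drive feature learning can be seen in a quick Monte Carlo check. The sketch below (our own illustration, with arbitrarily chosen sizes) estimates max_r ⟨w_r ⊙ m_r, µ⟩ and max_r ⟨w_r ⊙ m_r, ξ⟩ at random initialization for several pruning rates; the former should stay roughly constant while the latter should shrink at roughly the √p rate, which is the content of Lemma 3.2 below.

import numpy as np

rng = np.random.default_rng(1)
d, m, sigma0, mu = 2000, 200, 0.01, 1.0
sigma_n = 1.0 / np.sqrt(d)
signal = np.zeros(d)
signal[0] = mu                                   # sparse signal mu * e_1
noise = rng.normal(0.0, sigma_n, size=d)         # one fixed training noise vector xi

for p in [1.0, 0.5, 0.25]:
    max_sig, max_noi = [], []
    for _ in range(200):                         # average over draws of (W, M)
        W = rng.normal(0.0, sigma0, size=(m, d))
        Mask = (rng.random((m, d)) < p).astype(float)
        Wm = W * Mask
        max_sig.append((Wm @ signal).max())      # max_r <w_r ⊙ m_r, mu>
        max_noi.append((Wm @ noise).max())       # max_r <w_r ⊙ m_r, xi>
    print(f"p={p:4.2f}   max signal corr {np.mean(max_sig):.4f}   max noise corr {np.mean(max_noi):.4f}")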
+We now analyze the effect of pruning on weight-signal correlation and weight-noise correlation at +the initialization. Our first lemma leverages the sparsity of our signal and shows that if the pruning +is mild, then it will not hurt the maximum weight-signal correlation much at the initialization. On +the other hand, the maximum weight-noise correlation is reduced by a factor of √p. +Lemma 3.2 (Initialization). With probability at least 1 − 2/d, for all i ∈ [n], +σ0σn +� +pd ≤ max +r +� +�w(0) +j,r , ξi +� +≤ +� +2 log(Kmd)σ0σn +� +pd. +Further, suppose pm ≥ Ω(log(Kd)), with probability 1 − 2/d, for all j ∈ [K], +σ0 ∥µj∥2 ≤ +max +r∈Sj +signal +� +�w(0) +j,r , µj +� +≤ +� +2 log(8pmKd)σ0 ∥µj∥2 . +Given this lemma, we now prove that there exists at least one neuron that is heavily aligned +with the signal after training. Similarly to previous works (Allen-Zhu and Li, 2020; Zou et al., 2021; +Cao et al., 2022), the analysis is divided into two phases: feature growing phase and converging +phase. +Feature Growing Phase. In this phase, the gradient of the cross-entropy is large and the +weight-signal correlation grows much more quickly than weight-noise correlation thanks to the +polynomial ReLU. We show that the signal strength is relatively unaffected by pruning while the +noise level is reduced by a factor of √p. +Lemma 3.3 (Feature Growing Phase, Informal). Under Condition 2.2, there exists time T1 such +that +1. The max weight-signal correlation is large: maxr +� +�w(T1) +j,r , µj +� +≥ m−1/q for j ∈ [K]. +2. The weight-noise and cross-class weight-signal correlations are small: if j ̸= yi, then maxj,r,i +��� +� +�w(T1) +j,r , ξi +���� ≤ +O(σ0σn +√pd) and maxj,r,k +��� +� +�w(T1) +j,r , µk +���� ≤ �O(σ0µ). +Converging Phase. We show that gradient descent can drive the training loss toward zero +while the signal in the feature is still large. An important intermediate step in our argument is +the development of the following gradient upper bound for multi-class cross-entropy loss which +introduces an extra factor of K in the gradient upper bound. +Lemma 3.4 (Gradient Upper Bound, Informal). Under Condition 2.2, we have +���∇LS(� +W(t)) ⊙ M +��� +2 +F ≤ O(Km2/qµ2)LS(� +W(t)). +Proof Sketch. To prove this upper bound, note that for a given input (xi, yi), ℓ′(t) +yi,i∇Fyi(xi) should +make major contribution to +���∇ℓ(� +W; xi, yi) +��� +F . +Further note that |ℓ′(t) +yi,i| = 1 − logityi(F; xi) = +� +j̸=yi eFj(xi) +� +j eFj(xi) +≤ +� +j̸=yi eFj(xi) +eFyi (xi) +. Now, apply the property that Fj(xi) is small for j ̸= yi (which we +prove in the appendix), the numerator will contribute a factor of K. To bound the rest, we utilize +8 + +the special property of multi-class cross-entropy loss: |ℓ′(t) +j,i | ≤ |ℓ′(t) +yi,i| ≤ ℓ(t) +i . +However, a naive +application of this inequality will result in a factor of K3 instead K in our bound. The trick is to +further use the fact that � +j̸=yi |ℓ′(t) +j,i | = |ℓ′(t) +yi,i|. +Using the above gradient upper bound, we can show that the objective can be minimized. +Lemma 3.5 (Converging Phase, Informal). Under Condition 2.2, there exists T2 such that for +some time t ∈ [T1, T2] we have +1. The results from the feature growing phase (Lemma 3.3) hold up to constant factors. +2. The training loss is small LS(� +W(t)) ≤ ϵ. +Notice that the weight-noise correlation still remains reduced by a factor of √p after training. +Lemma 3.5 proves the statement of the training loss in Theorem 3.1. +Generalization Analysis. 
Finally, we show that pruning can purify the feature by reducing the variance of the noise by a factor of p when a new sample is given. The lemma below shows that the variance of the weight-noise correlation for the trained weights is reduced by a factor of p.

Lemma 3.6. The neural network weights $\widetilde{\mathbf{W}}^\star$ after training satisfy
$$\mathbb{P}_{\xi}\Big[\max_{j,r} \big|\langle \widetilde{\mathbf{w}}^\star_{j,r}, \xi \rangle\big| \ge (2m)^{-2/q}\Big] \le 2Km \exp\Big(-\frac{(2m)^{-4/q}}{O(\sigma_0^2 \sigma_n^2 p d)}\Big).$$

Using this lemma, we can show that pruning yields the improved generalization bound (i.e., the bound on the generalization loss) claimed in Theorem 3.1.

4 Over Pruning

Our second result shows that there exists a relatively large pruning fraction (i.e., a small p) such that the learned model yields poor generalization, although gradient descent is still able to drive the training error toward zero. The full proof is deferred to Appendix E.

Theorem 4.1 (Main Theorem for Over Pruning, Informal). Under Condition 2.2, if $p = \Theta(\frac{1}{Km \log d})$, then with probability at least $1 - 1/\operatorname{poly}\log d$ over the randomness in the data, network initialization and pruning, there exists $T = O(\eta^{-1} n \sigma_0^{q-2} \sigma_n^{-q} (pd)^{-q/2} + \eta^{-1}\epsilon^{-1} m^4 n \sigma_n^{-2} (pd)^{-1})$ such that

1. The training loss is below $\epsilon$: $L_S(\widetilde{\mathbf{W}}^{(T)}) \le \epsilon$.

2. The generalization loss is large: $L_D(\widetilde{\mathbf{W}}^{(T)}) \ge \Omega(\log K)$.

Remark 4.2. The above theorem indicates that in the over-pruning case, the training loss can still go to zero. However, the generalization loss of the neural network is not much better than that of random guessing: given any sample, random guessing assigns each class probability $1/K$, which yields a generalization loss of $\log K$. The reader might wonder why the condition for this to happen is $p = \Theta(\frac{1}{Km \log d})$ instead of $O(\frac{1}{Km \log d})$. Indeed, the generalization will still be bad if p is even smaller. However, in that case the neural network is not only unable to learn the signal but also cannot efficiently memorize the noise via gradient descent.

Proof Outline. We now analyze the over-pruning case. We first show that there is a good chance that the model will not receive any signal after pruning, due to the sparse signal assumption and the mild overparameterization of the neural network. Then, leveraging this property, we bound the weight-signal and weight-noise correlations for the feature growing and converging phases of gradient descent, as stated in the following two lemmas, respectively. Our result indicates that the training loss can still be driven toward zero by letting the neural network memorize the noise; the proof further exploits the fact that high-dimensional Gaussian noise vectors are nearly orthogonal.

Lemma 4.3 (Feature Growing Phase, Informal). Under Condition 2.2, there exists $T_1$ such that

• Some weights have large correlation with the noise: $\max_r \langle \widetilde{\mathbf{w}}^{(T_1)}_{y_i,r}, \xi_i \rangle \ge m^{-1/q}$ for all $i \in [n]$.

• The cross-class weight-noise and weight-signal correlations are small: if $j \ne y_i$, then $\max_{j,r,i} |\langle \widetilde{\mathbf{w}}^{(T_1)}_{j,r}, \xi_i \rangle| = \widetilde{O}(\sigma_0 \sigma_n \sqrt{pd})$ and $\max_{j,r,k} |\langle \widetilde{\mathbf{w}}^{(T_1)}_{j,r}, \mu_k \rangle| \le \widetilde{O}(\sigma_0 \mu)$.

Lemma 4.4 (Converging Phase, Informal). Under Condition 2.2, there exists a time $T_2$ such that for some $t \in [T_1, T_2]$, the results from the feature growing phase still hold (up to constant factors) and $L_S(\widetilde{\mathbf{W}}^{(t)}) \le \epsilon$.

Finally, since the above lemmas show that the network is purely memorizing the noise, we further show that such a network yields poor generalization performance, as stated in Theorem 4.1.
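The signal-loss mechanism behind over pruning admits a simple back-of-the-envelope check. Because each class signal occupies a single coordinate, neuron (j, r) retains the class-j signal only with probability p, so class j keeps its signal in at least one of its m neurons with probability 1 − (1 − p)^m, and no class keeps any signal with probability (1 − p)^{Km} ≈ e^{−pKm}. The short script below evaluates these quantities in the two regimes; the constants (including the choice of 1 for the mild-pruning constant C1) are purely illustrative.

import numpy as np

d, K, m = 10_000, 10, 100
log_d = np.log(d)

regimes = {
    "mild pruning, p = log(d)/m": min(log_d / m, 1.0),
    "over pruning, p = 1/(K m log d)": 1.0 / (K * m * log_d),
}
for name, p in regimes.items():
    keep_one_class = 1.0 - (1.0 - p) ** m        # P[class j gets its signal in some neuron]
    no_signal_at_all = (1.0 - p) ** (K * m)      # P[no class gets any signal]
    print(f"{name:34s} P[class keeps signal] = {keep_one_class:.3f}   "
          f"P[no class gets signal] = {no_signal_at_all:.3f}")

In the over-pruning regime the no-signal event occurs with probability roughly 1 − 1/log d, consistent with the 1 − 1/poly log d probability in Theorem 4.1, and on that event the network can only fit the training set by memorizing noise.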
5 Experiments

5.1 Simulations to Verify Our Results

In this section, we conduct simulations to verify our results. We consider a binary classification task and show that our results hold for ReLU networks. The experiment settings are as follows: the input is x = [x1, x2] = [y e1, ξ] ∈ R^800 with x1, x2 ∈ R^400, where ξ is sampled from a Gaussian distribution, and the class labels y are {±1}. We use 100 training examples and 100 testing examples. The network has width 150 and is initialized from a random Gaussian distribution with variance 0.01; then a p fraction of the weights is randomly pruned. We use a learning rate of 0.001 and train the network for 1000 iterations by gradient descent.

The observations are summarized as follows. In Figure 2a, when the noise level is σn = 0.5, the pruned network usually performs at a level similar to the full model when p ≤ 0.5, and noticeably better when p = 0.3. When p > 0.5, the test error increases dramatically while the training accuracy still remains perfect. On the other hand, when the noise level becomes large, σn = 1 (Figure 2b), the full model can no longer achieve good testing performance, but mild pruning can improve the model's generalization. Note that the training accuracy in this case is still perfect (omitted in the figure). We observe that in both settings, when the model's test error is large, its variance is also large. However, in Figure 2b, despite the large variance, the mean curve is already smooth. In particular, Figure 2c plots the testing error over the training iterations under a pruning rate of p = 0.5. This suggests that pruning can be beneficial even when the input noise is large.

Figure 2: Figure (a) shows the relationship between pruning rates p and training/testing error under noise variance σn = 0.5. Figure (b) shows the relationship between pruning rates p and testing error under noise variance σn = 1. The training error is omitted since it stays effectively at zero across all pruning rates. Figure (c) shows a particular training curve under pruning rate p = 50% and noise variance σn = 1. Each data point is created by taking an average over 10 independent runs.
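For readers who wish to reproduce Figure 2 qualitatively, the following PyTorch sketch implements the simulation described above. Details that the text leaves implicit, such as treating the ±1 labels as two softmax outputs, setting the signal strength to 1, and updating only the first-layer weights with full-batch gradient descent, are our own choices, so exact numbers will differ from the figure.

import torch

torch.manual_seed(0)
d, m, n, p, lr, steps = 400, 150, 100, 0.5, 1e-3, 1000
sigma_n, mu = 0.5, 1.0                                   # mu = 1 is our assumption

def make_data(n):
    y = torch.randint(0, 2, (n,))                        # two classes standing in for labels {-1, +1}
    sig = torch.zeros(n, d)
    sig[torch.arange(n), y] = mu                         # sparse signal mu * e_y
    noi = sigma_n * torch.randn(n, d)                    # Gaussian noise patch
    swap = (torch.rand(n, 1) < 0.5)
    return torch.where(swap, sig, noi), torch.where(swap, noi, sig), y

x1, x2, y = make_data(n)
tx1, tx2, ty = make_data(n)                              # held-out test set

W = (0.1 * torch.randn(2, m, d)).requires_grad_()        # Gaussian init, variance 0.01
M = (torch.rand(2, m, d) < p).float()                    # Bernoulli(p) mask, fixed throughout

def forward(a, b):
    Wm = W * M                                           # pruned weights w ⊙ m
    s1 = torch.relu(torch.einsum('jrd,nd->njr', Wm, a)).sum(-1)
    s2 = torch.relu(torch.einsum('jrd,nd->njr', Wm, b)).sum(-1)
    return s1 + s2                                       # logits, shape (n, 2)

for _ in range(steps):                                   # full-batch gradient descent
    loss = torch.nn.functional.cross_entropy(forward(x1, x2), y)
    loss.backward()
    with torch.no_grad():
        W -= lr * W.grad                                 # pruned coordinates receive zero gradient
        W.grad.zero_()

with torch.no_grad():
    train_err = (forward(x1, x2).argmax(-1) != y).float().mean().item()
    test_err = (forward(tx1, tx2).argmax(-1) != ty).float().mean().item()
print(f"p={p}: train error {train_err:.2f}, test error {test_err:.2f}")

Sweeping p over a grid and averaging over several random seeds should qualitatively reproduce the trend in Figure 2a: the test error stays flat or dips slightly under mild pruning and then rises sharply, while the training error remains zero.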
5.2 On the Real World Dataset

To further demonstrate the mild/over-pruning phenomenon, we conduct experiments on the MNIST (Deng, 2012) and CIFAR-10 (Krizhevsky et al., 2009) datasets. We consider neural network architectures including an MLP with 2 hidden layers of width 1024, VGG, ResNets (He et al., 2016) and wide ResNet (Zagoruyko and Komodakis, 2016). In addition to random pruning, we also include iterative-magnitude-based pruning (Frankle and Carbin, 2018) in our experiments. Both pruning methods are prune-at-initialization methods. Our implementation is based on Chen et al. (2021c).

Under the real-world setting, we do not expect our theorem to hold exactly. Instead, our theorem implies that (1) there exists a threshold such that the testing performance is not much worse than (and sometimes slightly better than) that of the dense counterpart; and (2) the training error decreases later than the testing error does. Our experiments on MLP (Figure 3a) and VGG-16 (Figure 3b) show that this is the case: the test accuracy remains competitive with that of the dense counterpart when the sparsity is less than 79% for MLP and 36% for VGG-16. We further provide experiments on ResNet in the appendix as validation of our theoretical results.

Figure 3: Figure (a) shows the relationship between sparsity and accuracy for MLP-1024-1024 on MNIST. Figure (b) shows the result for VGG-16 on CIFAR-10. Each data point is created by taking an average over 3 independent runs.

6 Discussion and Future Direction

In this work, we provide a theory of the generalization performance of pruned neural networks trained by gradient descent under different pruning rates. Our results characterize the effect of pruning under different pruning rates: in the mild pruning case, the signal in the feature is well-preserved and the noise level is reduced, which leads to an improvement in the trained network's generalization; on the other hand, over pruning significantly destroys the signal strength despite reducing the noise variance. One open problem on this topic still appears challenging: in this paper, we characterize two cases of pruning, where in mild pruning the signal is preserved and in over pruning the signal is completely destroyed; however, the transition between these two cases is not well understood. Further, it would be interesting to consider more general data distributions, and to understand how pruning affects the training of multi-layer neural networks. We leave these interesting directions as future work.

References

Allen-Zhu, Z. and Li, Y. (2020). Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. arXiv preprint arXiv:2012.09816.

Allen-Zhu, Z. and Li, Y. (2022). Feature purification: How adversarial training performs robust deep learning. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE.

Allen-Zhu, Z., Li, Y. and Song, Z. (2019). A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning. PMLR.

Arora, S., Du, S., Hu, W., Li, Z. and Wang, R. (2019). Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning. PMLR.

Cao, Y., Chen, Z., Belkin, M. and Gu, Q. (2022). Benign overfitting in two-layer convolutional neural networks. arXiv preprint arXiv:2202.06526.

Cao, Y. and Gu, Q. (2019).
Generalization bounds of stochastic gradient descent for wide and +deep neural networks. Advances in neural information processing systems 32. +Chen, T., Ji, B., Ding, T., Fang, B., Wang, G., Zhu, Z., Liang, L., Shi, Y., Yi, S. and +Tu, X. (2021a). Only train once: A one-shot neural network training and pruning framework. +Advances in Neural Information Processing Systems 34. +Chen, T., Zhang, Z., Balachandra, S., Ma, H., Wang, Z., Wang, Z. et al. (2021b). +Sparsity winning twice: Better robust generalization from more efficient training. In International +Conference on Learning Representations. +Chen, X., Cheng, Y., Wang, S., Gan, Z., Liu, J. and Wang, Z. (2021c). The elastic lottery +ticket hypothesis. Github Repository, MIT License . +Chen, Z., Cao, Y., Gu, Q. and Zhang, T. (2020a). A generalized neural tangent kernel analysis +for two-layer neural networks. Advances in Neural Information Processing Systems 33 13363– +13373. +Chen, Z., Cao, Y., Zou, D. and Gu, Q. (2020b). How much over-parameterization is sufficient +to learn deep relu networks? In International Conference on Learning Representations. +12 + +Chizat, L. and Bach, F. (2018). +On the global convergence of gradient descent for over- +parameterized models using optimal transport. Advances in neural information processing sys- +tems 31. +Chizat, L., Oyallon, E. and Bach, F. (2019). On lazy training in differentiable programming. +Advances in Neural Information Processing Systems 32. +Deng, L. (2012). The mnist database of handwritten digit images for machine learning research +[best of the web]. IEEE signal processing magazine 29 141–142. +Ding, S., Chen, T. and Wang, Z. (2021). +Audio lottery: Speech recognition made ultra- +lightweight, noise-robust, and transferable. In International Conference on Learning Represen- +tations. +Du, S., Lee, J., Li, H., Wang, L. and Zhai, X. (2019). Gradient descent finds global minima of +deep neural networks. In International conference on machine learning. PMLR. +Evci, U., Gale, T., Menick, J., Castro, P. S. and Elsen, E. (2020). Rigging the lottery: +Making all tickets winners. In International Conference on Machine Learning. PMLR. +Fang, C., Lee, J., Yang, P. and Zhang, T. (2021). Modeling from features: a mean-field frame- +work for over-parameterized deep neural networks. In Conference on learning theory. PMLR. +Frankle, J. and Carbin, M. (2018). The lottery ticket hypothesis: Finding sparse, trainable +neural networks. In International Conference on Learning Representations. +Frankle, J., Dziugaite, G. K., Roy, D. and Carbin, M. (2020). Pruning neural networks +at initialization: Why are we missing the mark? +In International Conference on Learning +Representations. +Gale, T., Elsen, E. and Hooker, S. (2019). The state of sparsity in deep neural networks. +arXiv preprint arXiv:1902.09574 . +He, K., Zhang, X., Ren, S. and Sun, J. (2016). Deep residual learning for image recognition. +In Proceedings of the IEEE conference on computer vision and pattern recognition. +He, Y., Zhang, X. and Sun, J. (2017). +Channel pruning for accelerating very deep neural +networks. In Proceedings of the IEEE international conference on computer vision. +He, Z., Xie, Z., Zhu, Q. and Qin, Z. (2022). Sparse double descent: Where network pruning +aggravates overfitting. In International Conference on Machine Learning. PMLR. +Jacot, A., Gabriel, F. and Hongler, C. (2018). Neural tangent kernel: Convergence and +generalization in neural networks. Advances in neural information processing systems 31. 
+Jayakumar, S., Pascanu, R., Rae, J., Osindero, S. and Elsen, E. (2020). Top-kast: Top-k +always sparse training. Advances in Neural Information Processing Systems 33 20744–20754. +Ji, Z. and Telgarsky, M. (2019). Polylogarithmic width suffices for gradient descent to achieve +arbitrarily small test error with shallow relu networks. In International Conference on Learning +Representations. +13 + +Kepner, J. and Robinett, R. (2019). Radix-net: Structured sparse matrices for deep neural +networks. In 2019 IEEE International Parallel and Distributed Processing Symposium Workshops +(IPDPSW). IEEE. +Krizhevsky, A., Hinton, G. et al. (2009). Learning multiple layers of features from tiny images +. +LeCun, Y., Denker, J. and Solla, S. (1989). +Optimal brain damage. +Advances in neural +information processing systems 2. +Lee, J., Xiao, L., Schoenholz, S., Bahri, Y., Novak, R., Sohl-Dickstein, J. and Pen- +nington, J. (2019). Wide neural networks of any depth evolve as linear models under gradient +descent. Advances in neural information processing systems 32. +Lee, N., Ajanthan, T. and Torr, P. (2018). +Snip: Single-shot network pruning based on +connection sensitivity. In International Conference on Learning Representations. +Liu, S., Chen, T., Chen, X., Atashgahi, Z., Yin, L., Kou, H., Shen, L., Pechenizkiy, M., +Wang, Z. and Mocanu, D. C. (2021a). Sparse training via boosting pruning plasticity with +neuroregeneration. Advances in Neural Information Processing Systems 34. +Liu, S., Chen, T., Chen, X., Shen, L., Mocanu, D. C., Wang, Z. and Pechenizkiy, M. +(2021b). The unreasonable effectiveness of random pruning: Return of the most naive baseline +for sparse training. In International Conference on Learning Representations. +Liu, S., Mocanu, D. C., Matavalam, A. R. R., Pei, Y. and Pechenizkiy, M. (2021c). Sparse +evolutionary deep learning with over one million artificial neurons on commodity hardware. +Neural Computing and Applications 33 2589–2604. +Liu, S., Yin, L., Mocanu, D. C. and Pechenizkiy, M. (2021d). Do we actually need dense over- +parameterization? in-time over-parameterization in sparse training. In International Conference +on Machine Learning. PMLR. +Liu, T. and Zenke, F. (2020). Finding trainable sparse networks through neural tangent transfer. +In International Conference on Machine Learning. PMLR. +Luo, J.-H. and Wu, J. (2017). An entropy-based pruning method for cnn compression. arXiv +preprint arXiv:1706.05791 . +Malach, E., Yehudai, G., Shalev-Schwartz, S. and Shamir, O. (2020). Proving the lottery +ticket hypothesis: Pruning is all you need. In International Conference on Machine Learning. +PMLR. +Mariet, Z. and Sra, S. (2015). Diversity networks: Neural network compression using determi- +nantal point processes. arXiv preprint arXiv:1511.05077 . +Mocanu, D. C., Mocanu, E., Nguyen, P. H., Gibescu, M. and Liotta, A. (2016). +A +topological insight into restricted boltzmann machines. Machine Learning 104 243–270. +Mocanu, D. C., Mocanu, E., Stone, P., Nguyen, P. H., Gibescu, M. and Liotta, A. +(2018). Scalable training of artificial neural networks with adaptive sparse connectivity inspired +by network science. Nature communications 9 1–12. +14 + +Molchanov, P., Tyree, S., Karras, T., Aila, T. and Kautz, J. (2019). Pruning convolutional +neural networks for resource efficient inference. In 5th International Conference on Learning +Representations, ICLR 2017-Conference Track Proceedings. +Mostafa, H. and Wang, X. (2019). 
Parameter efficient training of deep convolutional neural net- +works by dynamic sparse reparameterization. In International Conference on Machine Learning. +PMLR. +Oymak, S. and Soltanolkotabi, M. (2020). Toward moderate overparameterization: Global +convergence guarantees for training shallow neural networks. IEEE Journal on Selected Areas in +Information Theory 1 84–105. +Pensia, A., Rajput, S., Nagle, A., Vishwakarma, H. and Papailiopoulos, D. (2020). +Optimal lottery tickets via subset sum: Logarithmic over-parameterization is sufficient. Advances +in Neural Information Processing Systems 33 2599–2610. +Peste, A., Iofinova, E., Vladu, A. and Alistarh, D. (2021). +Ac/dc: Alternating com- +pressed/decompressed training of deep neural networks. Advances in Neural Information Pro- +cessing Systems 34. +Prabhu, A., Varma, G. and Namboodiri, A. (2018). Deep expander networks: Efficient deep +networks from graph theory. In Proceedings of the European Conference on Computer Vision +(ECCV). +Ramanujan, V., Wortsman, M., Kembhavi, A., Farhadi, A. and Rastegari, M. (2020). +What’s hidden in a randomly weighted neural network? +In Proceedings of the IEEE CVF +Conference on Computer Vision and Pattern Recognition. +Rotskoff, G. M. and Vanden-Eijnden, E. (2018). +Neural networks as interacting particle +systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation +error. stat 1050 22. +Shi, Z., Wei, J. and Liang, Y. (2021). +A theoretical analysis on feature learning in neural +networks: Emergence from inputs and advantage over fixed features. In International Conference +on Learning Representations. +Sirignano, J. and Spiliopoulos, K. (2020). Mean field analysis of neural networks: A law of +large numbers. SIAM Journal on Applied Mathematics 80 725–752. +Song, M., Montanari, A. and Nguyen, P. (2018). +A mean field view of the landscape of +two-layers neural networks. Proceedings of the National Academy of Sciences 115 E7665–E7671. +Song, Z., Yang, S. and Zhang, R. (2021). Does preprocessing help training over-parameterized +neural networks? Advances in Neural Information Processing Systems 34. +Song, Z. and Yang, X. (2019). Quadratic suffices for over-parametrization via matrix chernoff +bound. arXiv preprint arXiv:1906.03593 . +Sreenivasan, K., Rajput, S., Sohn, J.-y. and Papailiopoulos, D. (2021). Finding everything +within random binary networks. arXiv preprint arXiv:2110.08996 . +15 + +Sreenivasan, K., Sohn, J.-y., Yang, L., Grinde, M., Nagle, A., Wang, H., Lee, K. and +Papailiopoulos, D. (2022). Rare gems: Finding lottery tickets at initialization. arXiv preprint +arXiv:2202.12002 . +Su, J., Chen, Y., Cai, T., Wu, T., Gao, R., Wang, L. and Lee, J. D. (2020). +Sanity- +checking pruning methods: Random tickets can win the jackpot. Advances in Neural Information +Processing Systems 33 20390–20401. +Suau, X., Zappella, L. and Apostoloff, N. (2018). Network compression using correlation +analysis of layer responses . +Tanaka, H., Kunin, D., Yamins, D. L. and Ganguli, S. (2020). Pruning neural networks with- +out any data by iteratively conserving synaptic flow. Advances in Neural Information Processing +Systems 33 6377–6389. +Telgarsky, M. (2022). Feature selection with gradient descent on two-layer networks in low- +rotation regimes. arXiv preprint arXiv:2208.02789 . +Wang, C., Zhang, G. and Grosse, R. (2019). Picking winning tickets before training by pre- +serving gradient flow. In International Conference on Learning Representations. +Wei, C., Lee, J. 
D., Liu, Q. and Ma, T. (2019). Regularization matters: Generalization and +optimization of neural nets vs their induced kernel. Advances in Neural Information Processing +Systems 32. +Yang, H., Wen, W. and Li, H. (2020). +Deephoyer: Learning sparser neural network with +differentiable scale-invariant sparsity measures. In International Conference on Learning Repre- +sentations. +Yang, Q., Mao, J., Wang, Z. and Hai, H. L. (2021). Dynamic regularization on activation +sparsity for neural network efficiency improvement. ACM Journal on Emerging Technologies in +Computing Systems (JETC) 17 1–16. +Ye, M., Gong, C., Nie, L., Zhou, D., Klivans, A. and Liu, Q. (2020). Good subnetworks +provably exist: Pruning via greedy forward selection. In International Conference on Machine +Learning. PMLR. +Zagoruyko, S. and Komodakis, N. (2016). Wide residual networks. In British Machine Vision +Conference 2016. British Machine Vision Association. +Zhou, H., Lan, J., Liu, R. and Yosinski, J. (2019). Deconstructing lottery tickets: Zeros, signs, +and the supermask. Advances in neural information processing systems 32. +Zhou, J., Li, X., Ding, T., You, C., Qu, Q. and Zhu, Z. (2022). On the optimization landscape +of neural collapse under mse loss: Global optimality with unconstrained features. arXiv preprint +arXiv:2203.01238 . +Zhu, Z., Ding, T., Zhou, J., Li, X., You, C., Sulam, J. and Qu, Q. (2021). A geometric anal- +ysis of neural collapse with unconstrained features. Advances in Neural Information Processing +Systems 34. +16 + +Zou, D., Cao, Y., Li, Y. and Gu, Q. (2021). +Understanding the generalization of adam in +learning neural networks with proper regularization. arXiv preprint arXiv:2108.11371 . +Zou, D., Cao, Y., Zhou, D. and Gu, Q. (2020). Gradient descent optimizes over-parameterized +deep relu networks. Machine Learning 109 467–492. +Zou, D. and Gu, Q. (2019). An improved analysis of training over-parameterized deep neural +networks. Advances in neural information processing systems 32. +17 + +A +Experiment Details +The experiments of MLP, VGG and ResNet-32 are run on NVIDIA A5000 and ResNet-50 and +ResNet-20-128 is run on 4 NIVIDIA V100s. We list the hyperparameters we used in training. All +of our models are trained with SGD and the detailed settings are summarized below. +Table 1: Summary of architectures, dataset and training hyperparameters +Model +Data +Epoch +Batch Size +LR +Momentum +LR Decay, Epoch +Weight Decay +LeNet +MNIST +120 +128 +0.1 +0 +0 +0 +VGG +CIFAR-10 +160 +128 +0.1 +0.9 +0.1 × [80, 120] +0.0001 +ResNets +CIFAR-10 +160 +128 +0.1 +0.9 +0.1 × [80, 120] +0.0001 +B +Further Experiment Results +We plot the experiment result of ResNet-20-128 in Figure 4. This figure further verifies our results +that there exists pruning rate threshold such that the testing performance of the pruned network +is on par with the testing performance of the dense model while the training accuracy remains +perfect. +0.0 +20.0 +36.0 +48.8 +59.0 +67.2 +73.8 +79.0 +83.2 +86.6 +Sparsity +95 +96 +97 +98 +99 +100 +Accuracy +ResNet-20-128 CIFAR-10 Accuracy vs Sparsity +Random (Train) +Random (Test) +IMP (Train) +IMP (Test) +Figure 4: The figure shows the experiment results of ResNet-20-128 under various sparsity by +random pruning and IMP. Each data point is averaged over 2 runs. +C +Preliminary for Analysis +In this section, we introduce the following signal-noise decomposition of each neuron weight from +Cao et al. 
(2022), and some useful properties for the terms in such a decomposition, which are +useful in our analysis. +Definition C.1 (signal-noise decomposition). For each neuron weight j ∈ [K], r ∈ [m], there exist +18 + +coefficients γ(t) +j,r,k, ζ(t) +j,r,i, ω(t) +j,r,i such that +�w(t) +j,r = �w(0) +j,r + +K +� +k=1 +γ(t) +j,r,k · ∥µk∥−2 +2 +· µk ⊙ mj,r + +n +� +i=1 +ζ(t) +j,r,i · +����ξj,r,i +��� +−2 +2 +· �ξj,r,i + +n +� +i=1 +ω(t) +j,r,i +����ξj,r,i +��� +−2 +2 +· �ξj,r,i, +where γ(t) +j,r,j ≥ 0, γ(t) +j,r,k ≤ 0, ζ(t) +j,r,i ≥ 0, ω(t) +j,r,i ≤ 0. +It is straightforward to see the following: +γ(0) +j,r,k, ζ(0) +j,r,i, ω(0) +j,r,i = 0, +γ(t+1) +j,r,j += γ(t) +j,r,j − I(r ∈ Sj +signal)η +n +n +� +i=1 +ℓ′(t) +j,i · σ′ �� +�w(t) +j,r, µyi +�� +∥µyi∥2 +2 I(yi = j), +γ(t+1) +j,r,k = γ(t) +j,r,k − I((mj,r)k = 1)η +n +n +� +i=1 +ℓ′(t) +j,i · σ′ �� +�w(t) +j,r, µyi +�� +∥µyi∥2 +2 I(yi = k), ∀j ̸= k, +ζ(t+1) +j,r,i += ζ(t) +j,r,i − η +n · ℓ′(t) +j,i · σ′ �� +�w(t) +j,r, ξi +�� ����ξj,r,i +��� +2 +2 I(j = yi), +ω(t+1) +j,r,i += ω(t) +j,r,i − η +n · ℓ′(t) +j,i · σ′ �� +�w(t) +j,r, ξi +�� ����ξj,r,i +��� +2 +2 I(j ̸= yi), +where {γ(t) +j,r,j}T +t=1, {ζ(t) +j,r,i}T +t=1 are increasing sequences and {γ(t) +j,r,k}T +t=1, {ω(t) +j,r,i}T +t=1 are decreasing se- +quences, because −ℓ′(t) +j,i ≥ 0 when j = yi, and −ℓ′(t) +j,i ≤ 0 when j ̸= yi. By Lemma D.4, we have +pd > n+K, and hence the set of vectors {µk}K +k=1 +�{�ξi}n +i=1 is linearly independent with probability +measure 1 over the Gaussian distribution for each j ∈ [K], r ∈ [m]. Therefore the decomposition is +unique. +D +Proof of Theorem 3.1 +We first formally restate Theorem 3.1. +Theorem D.1 (Formal Restatement of Theorem 3.1). Under Condition 2.2, choose initialization +variance σ0 = �Θ(m−4n−1µ−1) and learning rate η ≤ �O(1/µ2). For ϵ > 0, if p ≥ C1 +log d +m +for some +sufficiently large constant C1, then with probability at least 1 − O(d−1) over the randomness in the +data, network initialization and pruning, there exists T = �O(Kη−1σ2−q +0 +µ−q + K2m4µ−2η−1ϵ−1) +such that the following holds: +1. The training loss is below ϵ: LS(� +W(T)) ≤ ϵ. +2. The weights of the CNN highly correlate with its corresponding class signal: maxr γ(T) +j,r,j ≥ +Ω(m−1/q) for all j ∈ [K]. +3. The weights of the CNN doesn’t have high correlation with the signal from different classes: +maxj̸=k,r∈[m] |γ(T) +j,r,k| ≤ �O(σ0µ). +4. None of the weights is highly correlated with the noise: maxj,r,i ζ(T) +j,r,i = �O(σ0σn +√pd), maxj,r,i |ω(T) +j,r,i| = +�O(σ0σn +√pd). +19 + +Moreover, the testing loss is upper-bounded by +LD(� +W(T)) ≤ O(Kϵ) + exp(−n2/p). +The proof of Theorem 3.1 consists of the analysis of the pruning on the signal and noise for +three stages of gradient descent: initialization, feature growing phase, and converging phase, and the +establishment of the generalization property. We present these analysis in detail in the following +subsections. +A special note is that the constant C showing up in the following proof of each +subsequent Lemmas is defined locally instead of globally, which means the constant C within each +Lemma is the same but may be different across different Lemma. +D.1 +Initialization +We analyze the effect of pruning on weight-signal correlation and weight-noise correlation at the +initialization. We first present a few supporting lemmas, and finally provide our main result of +Lemma D.7, which shows that if the pruning is mild, then it will not hurt the max weight-signal +correlation much at the initialization. 
+On the other hand, the max weight-noise correlation is +reduced by a factor of √p. +Lemma D.2. Assume n = Ω(K2 log Kd). Then, with probability at least 1 − 1/d, +|{i ��� [n] : yi = j}| = Θ(n/K) +∀j ∈ [K]. +Proof. By Hoeffding’s inequality, with probability at least 1 − δ/2K, for a fixed j ∈ [K], we have +����� +1 +n +n +� +i=1 +I(yi = j) − 1 +K +����� ≤ +� +log(4K/δ) +2n +. +Therefore, as long as n ≥ 2K2 log(4K/δ), we have +����� +1 +n +n +� +i=1 +I(yi = j) − 1 +K +����� ≤ +1 +2K . +Taking a union bound over j ∈ [K] and making δ = 1/d yield the result. +Lemma D.3. Assume pm = Ω(log d) and m = poly log d. Then, with probability 1 − 1/d, for all +j ∈ [K], k ∈ [K], we have �m +r=1(mj,r)k = Θ(pm), which implies that |Sj +signal| = Θ(pm) for all +j ∈ [K]. +Proof. When pm = Ω(log d), by multiplicative Chernoff’s bound, for a given k ∈ [K], we have +P +������ +m +� +r=1 +(mj,r)k − pm +����� ≥ 0.5pm +� +≤ 2 exp {−Ω (pm)} . +Take a union bound over j ∈ [K], k ∈ [K], we have +P +������ +m +� +r=1 +(mj,r)k − pm +����� ≥ 0.5pm, ∀j ∈ [K], k ∈ [K] +� +≤ 2K2 exp {−Ω (pm)} ≤ 1/d. +20 + +Lemma D.4. Assume p = 1/ poly log d. Then with probability at least 1 − 1/d, for all j ∈ [K], +r ∈ [m], �d +i=1(mj,r)i = Θ(pd). +Proof. By multiplicative Chernoff’s bound, we have for a given j, r +P +������ +d +� +i=1 +(mj,r)i − pd +����� ≥ 0.5pd +� +≤ 2 exp{−Ω(pd)}. +Take a union bound over j, r, we have +P +������ +d +� +i=1 +(mj,r)i − pd +����� ≥ 0.5pd, ∀j ∈ [K], r ∈ [m] +� +≤ 2Km exp{−Ω(pd)} ≤ 1/d, +where the last inequality follows from our choices of p, K, m, d. +Lemma D.5. Suppose p = Ω(1/ poly log d), and m, n = poly log d. With probability at least 1−1/d, +we have +����ξj,r,i +��� +2 +2 = Θ(σ2 +npd), +��� +� +�ξj,r,i, ξi′ +���� ≤ O(σ2 +n +� +pd log d), +��� +� +µk, �ξj,r,i +���� ≤ | ⟨µ, ξi⟩ | ≤ O(σnµ +� +log d), +for all j ∈ {−1, 1}, r ∈ [m], i, i′ ∈ [n] and i ̸= i′. +Proof. From Lemma D.4, we have with probability at least 1 − 1/d, +d +� +k=1 +(mj,r)k = Θ(pd), +∀j ∈ [K], r ∈ [m]. +For a set of Gaussian random variable g1, . . . , gN ∼ N(0, σ2), by Bernstein’s inequality, with prob- +ability at least 1 − δ, we have +����� +N +� +i=1 +g2 +i − σ2N +����� ≲ σ2 +� +N log 1 +δ . +Thus, by a union bound over j, r, i, with probability at least 1 − 1/d, we have +����ξj,r,i +��� +2 +2 = Θ(σ2 +npd). +For i ̸= i′, again by Bernstein’s bound, we have with probability at least 1 − δ, +��� +� +�ξj,r,i, ξi′ +���� ≤ O +� +σ2 +n +� +pd log Kmn +δ +� +, +for all j, r, i. Plugging in δ = 1/d gives the result. The proof for | ⟨µ, ξi⟩ | is similar. +21 + +Lemma D.6. Suppose we have m independent Gaussian random variables g1, g2, . . . , gm ∼ N(0, σ2). +Then with probability 1 − δ, +max +i +gi ≥ σ +� +log +m +log 1/δ. +Proof. By the standard tail bound of Gaussian random variable, we have for every x > 0, +�σ +x − σ3 +x3 +� e−x2/2σ2 +√ +2π +≤ P [g > x] ≤ σ +x +e−x2/2σ2 +√ +2π +. +We want to pick a x⋆ such that +P +� +max +i +gi ≤ x⋆ +� += (P [gi ≤ x⋆])m = (1 − P [gi ≥ x⋆])m ≤ e−m P[gi≥x⋆] ≤ δ +⇒ P[gi ≥ x⋆] = Θ +�log(1/δ) +m +� +⇒ x⋆ = Θ(σ +� +log(m/(log(1/δ) log m))). +Lemma D.7 (Formal Restatement of Lemma 3.2). With probability at least 1−2/d, for all i ∈ [n], +σ0σn +� +pd ≤ max +r +� +�w(0) +j,r , ξi +� +≤ +� +2 log(Kmd)σ0σn +� +pd. +Further, suppose pm ≥ Ω(log(Kd)). Then with probability 1 − 2/d, for all j ∈ [K], +σ0 ∥µj∥2 ≤ +max +r∈Sj +signal +� +�w(0) +j,r , µj +� +≤ +� +2 log(8pmKd)σ0 ∥µj∥2 . +Proof. We first give a proof for the second inequality. From Lemma D.3, we know that |Sj +signal| = +Θ(pm). 
D.2 Supporting Properties for the Entire Training Process

This subsection establishes a few properties (summarized in Proposition D.10) that will be used in the analysis of the feature growing phase and the converging phase of gradient descent presented in the next two subsections. Define $T^\star = \eta^{-1} \mathrm{poly}(1/\epsilon, \mu, d^{-1}, \sigma_n^{-2}, \sigma_0^{-1} n, m, d)$. Denote $\alpha = \Theta(\log^{1/q}(T^\star))$ and $\beta = 2 \max_{i,j,r,k} \big\{ \big|\langle \hat w_{j,r}^{(0)}, \mu_k \rangle\big|, \big|\langle \hat w_{j,r}^{(0)}, \xi_i \rangle\big| \big\}$. We need the following bound to hold for our subsequent analysis:
\[
4 m^{1/q} \max_{j,r,i} \Big\{ \big\langle \hat w_{j,r}^{(0)}, \mu_{y_i} \big\rangle, \ \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}, \ \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle, \ 3Cn\alpha\sqrt{\frac{\log d}{pd}} \Big\} \le 1. \tag{D.1}
\]

Remark D.8. To see why Equation (D.1) can hold under Condition 2.2, we express every quantity in terms of $d$. First recall from Condition 2.2 that $m, n = \mathrm{poly}(\log d)$ and $\mu = \Theta(\sigma_n \sqrt{d}\log d) = \Theta(1)$. In both mild pruning and over-pruning we require $p \ge \Omega(1/\mathrm{poly}\log d)$. Since $\alpha = \Theta(\log^{1/q}(T^\star))$, if we assume $T^\star \le O(\mathrm{poly}(d))$ for a moment (which we justify in the next paragraph), then $\alpha = O(\log^{1/q} d)$. Then, taking $d$ large enough, we have
\[
4 m^{1/q}\, \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd} \le \frac{\mathrm{poly}\log d}{\sqrt{d}} \le 1.
\]
Finally, for the quantity $4 m^{1/q} \max_{j,r,i}\{\langle \hat w_{j,r}^{(0)}, \mu_{y_i}\rangle, \langle \hat w_{j,r}^{(0)}, \xi_i\rangle\}$, by Lemma 3.2, our assumption $K = O(\log d)$ in Condition 2.2, and our choice $\sigma_0 = \tilde\Theta(m^{-4} n^{-1} \mu^{-1})$ in Theorem 3.1 (or Theorem D.1), this quantity can also be made smaller than 1.

Now, to justify that $T^\star \le O(\mathrm{poly}(d))$, we only need to verify that every quantity $T^\star$ depends on is polynomial in $d$. First, based on Condition 2.2, $n, m = \mathrm{poly}\log d$, and $\mu = \Theta(\sigma_n\sqrt{d}\log d) = \Theta(1)$ further implies $\sigma_n^{-2} = \Theta(d\log^2 d)$. Since Theorem 3.1 only requires $\sigma_0 = \tilde\Theta(m^{-4}n^{-1}\mu^{-1})$, we have $\sigma_0^{-1} \le O(\mathrm{poly}\log d)$, and hence $\sigma_0^{-1} n = O(\mathrm{poly}\log d)$. Together with our assumption $\epsilon, \eta \ge \Omega(1/\mathrm{poly}(d))$ (equivalently $1/\epsilon, 1/\eta \le O(\mathrm{poly}(d))$), all terms involved in $T^\star$ are at most of order $\mathrm{poly}(d)$, and hence $T^\star = \mathrm{poly}(d)$.

Remark D.9. Here we comment on the assumptions on $\epsilon$ and $\eta$ in Condition 2.2. Regarding $\epsilon$: the cross-entropy loss is (1) not strongly convex and (2) attains its infimum only at infinity, so in practice it is minimized to a constant level, say 0.001. We make this assumption to rule out the pathological case where $\epsilon$ is exponentially small in $d$ (say $\epsilon = 2^{-d}$), which is unrealistic. Thus, for realistic settings, we assume $\epsilon \ge \Omega(1/\mathrm{poly}(d))$, i.e., $1/\epsilon \le O(\mathrm{poly}(d))$. Regarding $\eta$: the only restriction we impose is $\eta = O(1/\mu^2)$ in Theorem 3.1 and Theorem 4.1. In practice, however, one does not use an exponentially small learning rate such as $\eta = 2^{-d}$; thus, as with $\epsilon$, we assume $\eta \ge \Omega(1/\mathrm{poly}(d))$, i.e., $1/\eta \le O(\mathrm{poly}(d))$. These assumptions simplify the analysis of the magnitude of $F_j(X)$ for $j \ne y$ given a sample $(X, y)$.

Proposition D.10. Under Condition 2.2, for all training times $t < T^\star$ we have

1. $\gamma_{j,r,j}^{(t)}, \zeta_{j,r,i}^{(t)} \le \alpha$,
2. $\omega_{j,r,i}^{(t)} \ge -\beta - 6Cn\alpha\sqrt{\frac{\log d}{pd}}$,

3. $\gamma_{j,r,k}^{(t)} \ge -\beta - 2Cn\alpha\frac{\mu\sqrt{\log d}}{\sigma_n pd}$ for $k \ne j$.

Notice that the lower bounds have absolute value smaller than the upper bound $\alpha$.

Proof of Proposition D.10. We prove Proposition D.10 by induction.

Induction hypothesis: suppose Proposition D.10 holds for all $t < T \le T^\star$. We next show that it also holds for $t = T$ via the following lemmas.

Lemma D.11. Under Condition 2.2, for $t < T$, there exists a constant $C$ such that
\[
\big\langle \hat w_{j,r}^{(t)} - \hat w_{j,r}^{(0)}, \mu_k \big\rangle = \Big( \gamma_{j,r,k}^{(t)} \pm \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd} \Big)\, \mathbb{I}\big((m_{j,r})_k = 1\big),
\]
\[
\big\langle \hat w_{j,r}^{(t)} - \hat w_{j,r}^{(0)}, \xi_i \big\rangle = \zeta_{j,r,i}^{(t)} \pm 3Cn\alpha\sqrt{\frac{\log d}{pd}} \quad (j = y_i), \qquad
\big\langle \hat w_{j,r}^{(t)} - \hat w_{j,r}^{(0)}, \xi_i \big\rangle = \omega_{j,r,i}^{(t)} \pm 3Cn\alpha\sqrt{\frac{\log d}{pd}} \quad (j \ne y_i).
\]

Proof. From Lemma D.5, there exists a constant $C$ such that with probability at least $1 - 1/d$,
\[
\frac{|\langle \tilde\xi_{j,r,i}, \xi_{i'} \rangle|}{\|\tilde\xi_{j,r,i}\|_2^2} \le C\sqrt{\frac{\log d}{pd}}, \qquad
\frac{|\langle \tilde\xi_{j,r,i}, \mu_k \rangle|}{\|\tilde\xi_{j,r,i}\|_2^2} \le C\frac{\mu\sqrt{\log d}}{\sigma_n pd}, \qquad
\frac{|\langle \mu_k, \xi_i \rangle|}{\|\mu_k\|_2^2} \le C\frac{\sigma_n\sqrt{\log d}}{\mu}.
\]
Using the signal-noise decomposition and assuming $(m_{j,r})_k = 1$, we have
\begin{align*}
\Big| \big\langle \hat w_{j,r}^{(t)} - \hat w_{j,r}^{(0)}, \mu_k \big\rangle - \gamma_{j,r,k}^{(t)} \Big|
&= \Big| \sum_{i=1}^{n} \zeta_{j,r,i}^{(t)} \|\tilde\xi_{j,r,i}\|_2^{-2} \langle \tilde\xi_{j,r,i}, \mu_k \rangle + \sum_{i=1}^{n} \omega_{j,r,i}^{(t)} \|\tilde\xi_{j,r,i}\|_2^{-2} \langle \tilde\xi_{j,r,i}, \mu_k \rangle \Big| \\
&\le C\frac{\mu\sqrt{\log d}}{\sigma_n pd} \sum_{i=1}^{n} \big|\zeta_{j,r,i}^{(t)}\big| + C\frac{\mu\sqrt{\log d}}{\sigma_n pd} \sum_{i=1}^{n} \big|\omega_{j,r,i}^{(t)}\big|
\le 2C\frac{\mu\sqrt{\log d}}{\sigma_n pd}\, n\alpha,
\end{align*}
where the first inequality is by Lemma D.5 and the second by the induction hypothesis. To prove the second equality, for $j = y_i$,
\begin{align*}
\Big| \big\langle \hat w_{j,r}^{(t)} - \hat w_{j,r}^{(0)}, \xi_i \big\rangle - \zeta_{j,r,i}^{(t)} \Big|
&= \Big| \sum_{k=1}^{K} \gamma_{j,r,k}^{(t)} \frac{\langle \mu_k, \xi_i \rangle}{\|\mu_k\|_2^2} + \sum_{i' \ne i} \zeta_{j,r,i'}^{(t)} \frac{\langle \tilde\xi_{j,r,i'}, \xi_i \rangle}{\|\tilde\xi_{j,r,i'}\|_2^2} + \sum_{i'=1}^{n} \omega_{j,r,i'}^{(t)} \frac{\langle \tilde\xi_{j,r,i'}, \xi_i \rangle}{\|\tilde\xi_{j,r,i'}\|_2^2} \Big| \\
&\le C\frac{\sigma_n\sqrt{\log d}}{\mu} \sum_{k=1}^{K} \big|\gamma_{j,r,k}^{(t)}\big| + C\sqrt{\frac{\log d}{pd}} \sum_{i' \ne i} \big|\zeta_{j,r,i'}^{(t)}\big| + C\sqrt{\frac{\log d}{pd}} \sum_{i'=1}^{n} \big|\omega_{j,r,i'}^{(t)}\big| \\
&\le C\frac{\sigma_n\sqrt{\log d}}{\mu} K\alpha + 2Cn\alpha\sqrt{\frac{\log d}{pd}}
\le 3Cn\alpha\sqrt{\frac{\log d}{pd}},
\end{align*}
where the last inequality uses $n \gg K$ and $\mu = \Theta(\sigma_n\sqrt{d}\log d)$. The proof for the case $j \ne y_i$ is similar.

Lemma D.12 (Off-diagonal Correlation Upper Bound). Under Condition 2.2, for $t < T$ and $j \ne y_i$, we have
\[
\big\langle \hat w_{j,r}^{(t)}, \mu_{y_i} \big\rangle \le \big\langle \hat w_{j,r}^{(0)}, \mu_{y_i} \big\rangle + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}, \qquad
\big\langle \hat w_{j,r}^{(t)}, \xi_i \big\rangle \le \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle + 3Cn\alpha\sqrt{\frac{\log d}{pd}}, \qquad
F_j(\hat W_j^{(t)}, x_i) \le 1.
\]

Proof. If $j \ne y_i$, then $\gamma_{j,r,y_i}^{(t)} \le 0$ and, by Lemma D.11,
\[
\big\langle \hat w_{j,r}^{(t)}, \mu_{y_i} \big\rangle \le \big\langle \hat w_{j,r}^{(0)}, \mu_{y_i} \big\rangle + \Big( \gamma_{j,r,y_i}^{(t)} + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd} \Big)\mathbb{I}\big((m_{j,r})_{y_i} = 1\big) \le \big\langle \hat w_{j,r}^{(0)}, \mu_{y_i} \big\rangle + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}.
\]
Further, since $\omega_{j,r,i}^{(t)} \le 0$,
\[
\big\langle \hat w_{j,r}^{(t)}, \xi_i \big\rangle \le \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle + \omega_{j,r,i}^{(t)} + 3Cn\alpha\sqrt{\frac{\log d}{pd}} \le \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle + 3Cn\alpha\sqrt{\frac{\log d}{pd}}.
\]
Then we have the following bound:
\[
F_j(\hat W_j^{(t)}, x_i) = \sum_{r=1}^{m} \big[ \sigma(\langle \hat w_{j,r}, \mu_{y_i} \rangle) + \sigma(\langle \hat w_{j,r}, \xi_i \rangle) \big]
\le m\, 2^{q+1} \max_{j,r,i} \Big\{ \big\langle \hat w_{j,r}^{(0)}, \mu_{y_i} \big\rangle, \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}, \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle, 3Cn\alpha\sqrt{\frac{\log d}{pd}} \Big\}^q \le 1,
\]
where the last inequality is by Equation (D.1).

Lemma D.13 (Diagonal Correlation Upper Bound). Under Condition 2.2, for $t < T$ and $j = y_i$, we have
\[
\big\langle \hat w_{j,r}^{(t)}, \mu_j \big\rangle \le \big\langle \hat w_{j,r}^{(0)}, \mu_j \big\rangle + \gamma_{j,r,j}^{(t)} + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}, \qquad
\big\langle \hat w_{j,r}^{(t)}, \xi_i \big\rangle \le \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle + \zeta_{j,r,i}^{(t)} + 3Cn\alpha\sqrt{\frac{\log d}{pd}}.
\]
If $\max\{\gamma_{j,r,j}^{(t)}, \zeta_{j,r,i}^{(t)}\} \le m^{-1/q}$, we further have $F_j(\hat W_j^{(t)}, x_i) \le O(1)$.

Proof. The two inequalities are immediate consequences of Lemma D.11. If $\max\{\gamma_{j,r,j}^{(t)}, \zeta_{j,r,i}^{(t)}\} \le m^{-1/q}$, we have
\[
F_j(\hat W_j^{(t)}, x_i) = \sum_{r=1}^{m} \big[ \sigma(\langle \hat w_{j,r}, \mu_j \rangle) + \sigma(\langle \hat w_{j,r}, \xi_i \rangle) \big]
\le 2 \cdot 3^q m \max_{j,r,i} \Big\{ \gamma_{j,r,j}^{(t)}, \zeta_{j,r,i}^{(t)}, \big|\langle \hat w_{j,r}^{(0)}, \mu_j \rangle\big|, \big|\langle \hat w_{j,r}^{(0)}, \xi_i \rangle\big|, \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}, 3Cn\alpha\sqrt{\frac{\log d}{pd}} \Big\}^q \le O(1).
\]
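To build intuition for the induction, the following toy iteration (a sketch with hypothetical plug-in values, not the actual training dynamics) replays the recursions of Definition C.1 and shows the monotonicity pattern that Proposition D.10 and Lemmas D.14 and D.15 below exploit: the diagonal signal coefficient and the correct-class noise coefficient only grow, while the wrong-class noise coefficient decreases and its step vanishes once the corresponding correlation turns non-positive. The constant value used for the loss-derivative magnitude and the activation exponent are assumptions for illustration only.

# Toy replay of the coefficient recursions in Definition C.1 (hypothetical values).
import numpy as np

q = 3                                 # activation exponent sigma(z) = max(z,0)^q (assumed)
eta, n = 0.01, 20                     # hypothetical learning rate / sample size
mu_norm2, xi_norm2 = 1.0, 1.0         # ||mu_k||^2 and ||xi_tilde||^2 (hypothetical)
w0_mu, w0_xi = 0.01, 0.01             # <w^(0), mu>, <w^(0), xi> at initialization

def sigma_prime(z):                   # derivative of max(z, 0)^q
    return q * max(z, 0.0) ** (q - 1)

gamma_diag, zeta, omega = 0.0, 0.0, 0.0
neg_lprime = 0.5                      # stand-in for -l'^{(t)}_{j,i} in (0, 1]
for t in range(5000):
    # correct class (j = y_i): coefficients can only move up
    gamma_diag += eta * neg_lprime * sigma_prime(w0_mu + gamma_diag) * mu_norm2
    zeta       += eta / n * neg_lprime * sigma_prime(w0_xi + zeta) * xi_norm2
    # wrong class (j != y_i): omega can only move down, and the step goes to zero
    # as the correlation w0_xi + omega approaches zero (cf. Lemma D.14)
    omega      -= eta / n * neg_lprime * sigma_prime(w0_xi + omega) * xi_norm2

print(f"gamma_diag = {gamma_diag:.4f} (nondecreasing)")
print(f"zeta       = {zeta:.4f} (nondecreasing)")
print(f"omega      = {omega:.6f} (nonincreasing, bounded below by about -{w0_xi})")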
Lemma D.14. Under Condition 2.2, for $t \le T$, we have
1. $\omega_{j,r,i}^{(t)} \ge -\beta - 6Cn\alpha\sqrt{\frac{\log d}{pd}}$;
2. $\gamma_{j,r,k}^{(t)} \ge -\beta - 2Cn\alpha\frac{\mu\sqrt{\log d}}{\sigma_n pd}$ for $k \ne j$.

Proof. When $j = y_i$, we have $\omega_{j,r,i}^{(t)} = 0$, so we only need to consider the case $j \ne y_i$. If $\omega_{j,r,i}^{(T-1)} \le -0.5\beta - 3Cn\alpha\sqrt{\frac{\log d}{pd}}$, then by Lemma D.11,
\[
\big\langle \hat w_{j,r}^{(T-1)}, \xi_i \big\rangle \le \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle + \omega_{j,r,i}^{(T-1)} + 3Cn\alpha\sqrt{\frac{\log d}{pd}} \le 0.
\]
Thus,
\[
\omega_{j,r,i}^{(T)} = \omega_{j,r,i}^{(T-1)} - \frac{\eta}{n} \cdot \ell_{j,i}^{\prime(T-1)} \cdot \sigma'\big(\langle \hat w_{j,r}^{(T-1)}, \xi_i \rangle\big) \|\tilde\xi_{j,r,i}\|_2^2\, \mathbb{I}(j \ne y_i)
= \omega_{j,r,i}^{(T-1)} \ge -\beta - 6Cn\alpha\sqrt{\frac{\log d}{pd}}.
\]
If instead $\omega_{j,r,i}^{(T-1)} \ge -0.5\beta - 3Cn\alpha\sqrt{\frac{\log d}{pd}}$, then
\begin{align*}
\omega_{j,r,i}^{(T)} &= \omega_{j,r,i}^{(T-1)} - \frac{\eta}{n} \cdot \ell_{j,i}^{\prime(T-1)} \cdot \sigma'\big(\langle \hat w_{j,r}^{(T-1)}, \xi_i \rangle\big) \|\tilde\xi_{j,r,i}\|_2^2\, \mathbb{I}(j \ne y_i) \\
&\ge -0.5\beta - 3Cn\alpha\sqrt{\frac{\log d}{pd}} - \frac{\eta}{n}\, \sigma'\Big( 0.5\beta + 3Cn\alpha\sqrt{\frac{\log d}{pd}} \Big) \|\tilde\xi_{j,r,i}\|_2^2
\ge -\beta - 6Cn\alpha\sqrt{\frac{\log d}{pd}},
\end{align*}
where the last inequality follows by choosing $\eta \le n q^{-1}\big( 0.5\beta + 3Cn\alpha\sqrt{\tfrac{\log d}{pd}} \big)^{2-q} (C_2\sigma_n^2 d)^{-1}$, and $C_2$ is the constant such that $\|\tilde\xi_{j,r,i}\|_2^2 \le C_2\sigma_n^2 pd$ for all $j, r, i$ in Lemma D.5.

For $\gamma_{j,r,k}^{(t)}$ the argument is similar. Consider $(m_{j,r})_k = 1$. If $\gamma_{j,r,k}^{(T-1)} \le -0.5\beta - Cn\alpha\frac{\mu\sqrt{\log d}}{\sigma_n pd}$, then by Lemma D.11,
\[
\big\langle \hat w_{j,r}^{(T-1)}, \mu_k \big\rangle \le \big\langle \hat w_{j,r}^{(0)}, \mu_k \big\rangle + \gamma_{j,r,k}^{(T-1)} + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd} \le 0.
\]
Hence,
\[
\gamma_{j,r,k}^{(T)} = \gamma_{j,r,k}^{(T-1)} - \frac{\eta}{n} \sum_{i=1}^{n} \ell_{j,i}^{\prime(T-1)} \sigma'\big(\langle \hat w_{j,r}^{(T-1)}, \mu_k \rangle\big) \mu^2\, \mathbb{I}(y_i = k)
= \gamma_{j,r,k}^{(T-1)} \ge -\beta - \frac{2Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd}.
\]
If instead $\gamma_{j,r,k}^{(T-1)} \ge -0.5\beta - Cn\alpha\frac{\mu\sqrt{\log d}}{\sigma_n pd}$, then
\begin{align*}
\gamma_{j,r,k}^{(T)} &= \gamma_{j,r,k}^{(T-1)} - \frac{\eta}{n} \sum_{i=1}^{n} \ell_{j,i}^{\prime(T-1)} \sigma'\big(\langle \hat w_{j,r}^{(T-1)}, \mu_k \rangle\big) \mu^2\, \mathbb{I}(y_i = k) \\
&\ge -0.5\beta - \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd} - C_2 \frac{\eta}{K}\, \sigma'\Big( 0.5\beta + \frac{Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd} \Big)\mu^2
\ge -\beta - \frac{2Cn\alpha\mu\sqrt{\log d}}{\sigma_n pd},
\end{align*}
where the first inequality uses the fact that there are $\Theta(n/K)$ samples with $y_i = k$, and the last inequality follows by choosing $\eta \le K\big( 0.5\beta + Cn\alpha\frac{\mu\sqrt{\log d}}{\sigma_n pd} \big)^{2-q}\mu^{-2} q^{-1} C_2^{-1}$.

Lemma D.15. Under Condition 2.2, for $t \le T$, we have $\gamma_{j,r,j}^{(t)}, \zeta_{j,r,i}^{(t)} \le \alpha$.

Proof. If $r \notin S^j_{\mathrm{signal}}$ then $\gamma_{j,r,j}^{(t)} = 0 \le \alpha$, and if $y_i \ne j$ then $\zeta_{j,r,i}^{(t)} = 0 \le \alpha$. If $y_i = j$, then by Lemma D.12 we have
\[
\big| \ell_{j,i}^{\prime(t)} \big| = 1 - \mathrm{logit}_j(F; X) = \frac{\sum_{i \ne j} e^{F_i(X)}}{\sum_{i=1}^{K} e^{F_i(X)}} \le \frac{Ke}{e^{F_j(X)}}. \tag{D.2}
\]
Recall that
\[
\gamma_{j,r,j}^{(t+1)} = \gamma_{j,r,j}^{(t)} - \mathbb{I}(r \in S^j_{\mathrm{signal}})\, \frac{\eta}{n} \sum_{i=1}^{n} \ell_{j,i}^{\prime(t)} \cdot \sigma'\big(\langle \hat w_{j,r}^{(t)}, \mu_{y_i} \rangle\big) \|\mu_{y_i}\|_2^2\, \mathbb{I}(y_i = j), \qquad
\zeta_{j,r,i}^{(t+1)} = \zeta_{j,r,i}^{(t)} - \frac{\eta}{n} \cdot \ell_{j,i}^{\prime(t)} \cdot \sigma'\big(\langle \hat w_{j,r}^{(t)}, \xi_i \rangle\big) \|\tilde\xi_{j,r,i}\|_2^2\, \mathbb{I}(j = y_i).
\]
We first bound $\zeta_{j,r,i}^{(T)}$. Let $T_{j,r,i}$ be the last time $t < T$ such that $\zeta_{j,r,i}^{(t)} \le 0.5\alpha$. Then
\[
\zeta_{j,r,i}^{(T)} = \zeta_{j,r,i}^{(T_{j,r,i})}
\underbrace{- \frac{\eta}{n}\, \ell_{j,i}^{\prime(T_{j,r,i})} \cdot \sigma'\big(\langle \hat w_{j,r}^{(T_{j,r,i})}, \xi_i \rangle\big)\, \mathbb{I}(y_i = j)\, \|\tilde\xi_{j,r,i}\|_2^2}_{I_1}
- \sum_{T_{j,r,i} < t < T} \frac{\eta}{n}\, \ell_{j,i}^{\prime(t)} \cdot \sigma'\big(\langle \hat w_{j,r}^{(t)}, \xi_i \rangle\big)\, \mathbb{I}(y_i = j)\, \|\tilde\xi_{j,r,i}\|_2^2.
\]

E Proof of Theorem 4.1

Theorem E.1 (Formal Restatement of Theorem 4.1). Under Condition 2.2, for any $\epsilon > 0$, if $p = \Theta\big( \frac{1}{Km\log d} \big)$, then with probability at least $1 - 1/\log d$, there exists $T = O\big( \eta^{-1} n \sigma_0^{2-q} \sigma_n^{-q} (pd)^{-q/2} + \eta^{-1}\epsilon^{-1} m^4 n \sigma_n^{-2} (pd)^{-1} \big)$ such that the following holds:
1. The training loss is below $\epsilon$: $L_S(\hat W^{(T)}) \le \epsilon$.
2. The model weights do not learn their corresponding signal at all: $\gamma_{j,r,j}^{(t)} = 0$ for all $j \in [K]$, $r \in [m]$.
3. The model weights are highly correlated with the noise: $\max_{r \in [m]} \zeta_{j,r,i}^{(T)} \ge \Omega(m^{-1/q})$ if $y_i = j$.
Moreover, the testing loss is large: $L_D(\hat W^{(T)}) \ge \Omega(\log K)$.
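Before the proof, a short numerical sketch (hypothetical sizes, not from the paper) of the over-pruning regime: at $p = \Theta(1/(Km\log d))$ it is likely that no neuron of any class retains its class-signal coordinate, which Lemma E.2 below makes precise.

# Sketch of the over-pruning regime of Theorem E.1 (hypothetical sizes).
import numpy as np

d, K, m = 10**6, 10, 50               # hypothetical dimension / classes / width
p = 1.0 / (K * m * np.log(d))
prob_all_signal_pruned = (1 - p) ** (K * m)    # P[|S^j_signal| = 0 for all j]
print(f"p = {p:.2e}")
print(f"P[no class keeps any signal coordinate] = {prob_all_signal_pruned:.4f}")
print(f"1 - 1/log d reference                   = {1 - 1/np.log(d):.4f}")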
The proof of Theorem 4.1 consists of the analysis of over-pruning in three stages of gradient descent (initialization, feature growing phase, and converging phase), followed by the establishment of the generalization property. We present these analyses in detail in the following subsections.

E.1 Initialization

Lemma E.2. When $m = \mathrm{poly}\log d$ and $p = \Theta\big( \frac{1}{Km\log d} \big)$, with probability $1 - O(1/\log d)$, for all classes $j \in [K]$ we have $|S^j_{\mathrm{signal}}| = 0$.

Proof. First, the probability that a given class $j$ receives no signal is $(1 - p)^m$. Using the inequality
\[
1 + t \ge \exp\{O(t)\} \qquad \forall t \in (-1/4, 1/4),
\]
the probability that $|S^j_{\mathrm{signal}}| = 0$ for all $j \in [K]$ is
\[
(1 - p)^{Km} \ge \exp\{-O(pKm)\} \ge 1 - O\Big( \frac{1}{\log d} \Big).
\]

E.2 Feature Growing Phase

Lemma E.3 (Formal Restatement of Lemma 4.3). Under the same assumptions as Theorem E.1, there exists $T_1 < T^\star$ with $T_1 = O(\eta^{-1} n \sigma_0^{2-q} \sigma_n^{-q} (pd)^{-q/2})$ such that
• $\max_r \zeta_{y_i,r,i}^{(T_1)} \ge m^{-1/q}$ for all $i \in [n]$;
• $\max_{j,r,i} |\omega_{j,r,i}^{(t)}| = \tilde O(\sigma_0\sigma_n\sqrt{pd})$;
• $\max_{j,r,k} |\gamma_{j,r,k}^{(t)}| \le \tilde O(\sigma_0\mu)$.

Proof. First, recall from Definition C.1 that for $j = y_i$,
\[
\big\langle \hat w_{j,r}^{(t)}, \xi_i \big\rangle = \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle + \zeta_{j,r,i}^{(t)} + \sum_{k \ne j} \gamma_{j,r,k}^{(t)} \frac{\langle \mu_k, \tilde\xi_{j,r,i} \rangle}{\mu^2} + \sum_{i' \ne i} \zeta_{j,r,i'}^{(t)} \frac{\langle \tilde\xi_{j,r,i'}, \xi_i \rangle}{\|\tilde\xi_{j,r,i'}\|_2^2} + \sum_{i'=1}^{n} \omega_{j,r,i'}^{(t)} \frac{\langle \tilde\xi_{j,r,i'}, \xi_i \rangle}{\|\tilde\xi_{j,r,i'}\|_2^2}.
\]
Let
\[
B_i^{(t)} = \max_{j = y_i,\, r} \Big\{ \zeta_{j,r,i}^{(t)} + \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle - O\Big( n \log^{1/q}(T^\star) \sqrt{\tfrac{\log d}{pd}} \Big) - O\Big( n\sigma_0\sigma_n\sqrt{pd}\, \sqrt{\tfrac{\log d}{pd}} \Big) \Big\}.
\]
Since $\max_{j=y_i, r} \langle \hat w_{j,r}^{(0)}, \xi_i \rangle \ge \Omega(\sigma_0\sigma_n\sqrt{pd})$, we have
\[
B_i^{(0)} \ge \Omega(\sigma_0\sigma_n\sqrt{pd}) - O\Big( n\log^{1/q}(T^\star)\sqrt{\tfrac{\log d}{pd}} \Big) - O\Big( n\sigma_0\sigma_n\sqrt{pd}\,\sqrt{\tfrac{\log d}{pd}} \Big) \ge \Omega(\sigma_0\sigma_n\sqrt{pd}).
\]
Let $T_i$ be the last time such that $\zeta_{j,r,i}^{(t)} \le m^{-1/q}$. We can lower bound the growth of $B_i^{(t)}$ as
\[
B_i^{(t+1)} \ge B_i^{(t)} + \Theta\Big( \frac{\eta\sigma_n^2 pd}{n} \Big)\big[ B_i^{(t)} \big]^{q-1}
\ge B_i^{(t)} + \Theta\Big( \frac{\eta\sigma_n^2 pd}{n} \Big)\big[ B_i^{(0)} \big]^{q-2} B_i^{(t)}
\ge \Big( 1 + \Theta\Big( \frac{\eta\sigma_0^{q-2}\sigma_n^{q} p^{q/2} d^{q/2}}{n} \Big) \Big) B_i^{(t)}.
\]
Therefore, $B_i^{(t)}$ reaches $2m^{-1/q}$ within $\tilde O\big(\eta^{-1} n \sigma_0^{2-q}\sigma_n^{-q}(pd)^{-q/2}\big)$ iterations. On the other hand, by Proposition D.10 we have $|\omega_{j,r,i}^{(t)}| \le \beta + 6Cn\alpha\sqrt{\frac{\log d}{pd}} = \tilde O(\sigma_0\sigma_n\sqrt{pd})$.

E.3 Converging Phase

From the first stage we know that
\[
\hat w_{j,r}^{(T_1)} = \hat w_{j,r}^{(0)} + \sum_{k \ne j} \gamma_{j,r,k}^{(T_1)} \frac{\mu_k \odot m_{j,r}}{\mu^2} + \sum_{i=1}^{n} \zeta_{j,r,i}^{(T_1)} \frac{\tilde\xi_{j,r,i}}{\|\tilde\xi_{j,r,i}\|_2^2} + \sum_{i=1}^{n} \omega_{j,r,i}^{(T_1)} \frac{\tilde\xi_{j,r,i}}{\|\tilde\xi_{j,r,i}\|_2^2}.
\]
Now we define $\hat W^\star$ as follows:
\[
\hat w_{j,r}^\star = \hat w_{j,r}^{(0)} + \Theta(m\log(1/\epsilon)) \Big( \sum_{i=1}^{n} \mathbb{I}(j = y_i) \frac{\tilde\xi_{j,r,i}}{\|\tilde\xi_{j,r,i}\|_2^2} \Big).
\]

Lemma E.4. Based on the result from the feature growing phase,
\[
\big\| \hat W^{(T_1)} - \hat W^\star \big\|_F \le \tilde O\big( m^2 n^{1/2} \log(1/\epsilon)\, \sigma_n^{-1} (pd)^{-1/2} \big).
\]

Proof. We derive the following bound:
\begin{align*}
\big\| \hat W^{(T_1)} - \hat W^\star \big\|_F
&\le \big\| \hat W^{(T_1)} - \hat W^{(0)} \big\|_F + \big\| \hat W^{(0)} - \hat W^\star \big\|_F \\
&\le \sum_{j,r} \Big\{ \Big\| \sum_{k \ne j} \gamma_{j,r,k}^{(T_1)} \frac{\mu_k}{\mu^2} \Big\|_2 + \Big\| \sum_{i=1}^{n} \zeta_{j,r,i}^{(T_1)} \frac{\tilde\xi_{j,r,i}}{\|\tilde\xi_{j,r,i}\|_2^2} \Big\|_2 + \Big\| \sum_{i=1}^{n} \omega_{j,r,i}^{(T_1)} \frac{\tilde\xi_{j,r,i}}{\|\tilde\xi_{j,r,i}\|_2^2} \Big\|_2 \Big\} + \Theta\big( m^2 n^{1/2}\log(1/\epsilon)\sigma_n^{-1}(pd)^{-1/2} \big) \\
&\le Km\Big( O(\sqrt{K}\sigma_0) + O\big( n^{1/2}\sigma_n^{-1}(pd)^{-1/2}\log^{1/q}(T^\star) \big) \Big) + \tilde O\big( m^2 n^{1/2}\log(1/\epsilon)\sigma_n^{-1}(pd)^{-1/2} \big) \\
&\le \tilde O\big( m^2 n^{1/2}\log(1/\epsilon)\sigma_n^{-1}(pd)^{-1/2} \big),
\end{align*}
where the first inequality follows from the triangle inequality, the second from the expressions for $\hat W^{(T_1)}$ and $\hat W^\star$, and the third from Lemma D.5 and the fact that $\zeta_{j,r,i}^{(t)} > 0$ if and only if $j = y_i$.
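The following tiny iteration (hypothetical values, not the paper's parameters) replays the geometric growth used in the proof of Lemma E.3 above and compares the empirical hitting time of the threshold $2m^{-1/q}$ with the prediction obtained from the per-step growth rate.

# Sketch of the geometric growth argument in Lemma E.3 (hypothetical values).
import numpy as np

q = 3
eta, n, m = 0.05, 100, 64             # hypothetical
sigma0, sigma_n, pd = 1e-3, 1e-2, 1e5 # hypothetical sigma_0, sigma_n and p*d

B = sigma0 * sigma_n * np.sqrt(pd)    # B^(0), of order sigma0*sigma_n*sqrt(pd)
target = 2 * m ** (-1.0 / q)
rate = eta * sigma_n ** 2 * pd / n    # the Theta(eta*sigma_n^2*pd/n) factor
t = 0
while B < target and t < 10**7:
    B = B + rate * B ** (q - 1)       # B <- B + Theta(eta*sigma_n^2*pd/n) * B^(q-1)
    t += 1

predicted = n / (eta * sigma0 ** (q - 2) * sigma_n ** q * pd ** (q / 2.0))
print(f"B^(0) = {sigma0*sigma_n*np.sqrt(pd):.2e}, target 2*m^(-1/q) = {target:.3f}")
print(f"hit the threshold after t = {t} iterations")
print(f"reference n/(eta*sigma0^(q-2)*sigma_n^q*(pd)^(q/2)) = {predicted:.2e}")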
Lemma E.5. For $T_1 \le t \le T^\star$ and $j \ne y_i$, we have
\[
\big\langle \nabla F_{y_i}(\hat W_{y_i}, x_i), \hat W_{y_i}^\star \big\rangle - \big\langle \nabla F_j(\hat W_j, x_i), \hat W_j^\star \big\rangle \ge q \log\frac{2qK}{\epsilon}.
\]

Lemma E.6. For $T_1 \le t \le T^\star$ and $j = y_i$, we have
\[
\big\langle \nabla F_j(\hat W_j^{(t)}, x_i), \hat W_j^\star \big\rangle \ge \Theta(m^{1/q}\log(1/\epsilon)).
\]

Proof. By Lemma D.5, we have $\langle \tilde\xi_{j,r,i}, \hat w_{j,r}^\star \rangle = \Theta(m\log(1/\epsilon))$, and by Lemma E.3, for $j = y_i$,
\[
\max_r \big\langle \hat w_{j,r}^{(t)}, \xi_i \big\rangle \ge \max_r \zeta_{j,r,i}^{(t)} - \max_r \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle - O\Big( n\alpha\sqrt{\tfrac{\log d}{pd}} \Big) \ge \Theta(m^{-1/q}).
\]
Then we have
\[
\big\langle \nabla F_j(\hat W_j^{(t)}, x_i), \hat W_j^\star \big\rangle = \sum_{r=1}^{m} \sigma'\big( \langle \hat w_{j,r}^{(t)}, \xi_i \rangle \big)\, \big\langle \tilde\xi_{j,r,i}, \hat w_{j,r}^\star \big\rangle \ge \Theta(m^{1/q}\log(1/\epsilon)).
\]

Lemma E.7. For $T_1 \le t \le T^\star$ and $j \ne y_i$, we have $\big\langle \nabla F_j(\hat W_j^{(t)}, x_i), \hat W_j^\star \big\rangle \le O(1)$.

Proof. We first compute
\[
\big\langle \hat w_{j,r}^\star, \xi_i \big\rangle = \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle + \Theta(m\log(1/\epsilon)) \sum_{i'=1}^{n} \mathbb{I}(j = y_{i'}) \frac{\langle \tilde\xi_{j,r,i'}, \xi_i \rangle}{\|\tilde\xi_{j,r,i'}\|_2^2} = \tilde O(\sigma_0\sigma_n\sqrt{pd}).
\]
Further,
\[
\big\langle \hat w_{j,r}^{(t)}, \xi_i \big\rangle = \big\langle \hat w_{j,r}^{(0)}, \xi_i \big\rangle + \sum_{k \ne j} \gamma_{j,r,k}^{(t)} \frac{\langle \mu_k, \tilde\xi_{j,r,i} \rangle}{\mu^2} + \sum_{i'=1}^{n} \zeta_{j,r,i'}^{(t)} \frac{\langle \tilde\xi_{j,r,i'}, \xi_i \rangle}{\|\tilde\xi_{j,r,i'}\|_2^2} + \sum_{i'=1}^{n} \omega_{j,r,i'}^{(t)} \frac{\langle \tilde\xi_{j,r,i'}, \xi_i \rangle}{\|\tilde\xi_{j,r,i'}\|_2^2} \le \tilde O(\sigma_0\sigma_n\sqrt{pd}),
\]
where the inequality follows from Lemma D.5 and Lemma D.15. Thus,
\[
\big\langle \nabla F_j(\hat W_j^{(t)}, x_i), \hat W_j^\star \big\rangle = \sum_{r=1}^{m} \sigma'\big( \langle \hat w_{j,r}^{(t)}, \xi_i \rangle \big)\, \big\langle \tilde\xi_{j,r,i}, \hat w_{j,r}^\star \big\rangle \le m\, \tilde O\big( \sigma_0\sigma_n\sqrt{pd} \big)^q \le O(1),
\]
where the last inequality follows from our choice $\sigma_0 \le \tilde O(m^{-1/q}\mu^{-1})$.

Lemma E.8. Under the same assumptions as Theorem E.1, we have
\[
\big\| \hat W^{(t)} - \hat W^\star \big\|_F^2 - \big\| \hat W^{(t+1)} - \hat W^\star \big\|_F^2 \ge C\eta L_S(\hat W^{(t)}) - \eta\epsilon.
\]

Proof. To simplify notation, define $\tilde F_j^{(t)}(x_i) = \langle \nabla F_j(\hat W_j^{(t)}, x_i), \hat W_j^\star \rangle$. The proof is exactly the same as the proof of Lemma D.23:
\begin{align*}
\big\| \hat W^{(t)} - \hat W^\star \big\|_F^2 - \big\| \hat W^{(t+1)} - \hat W^\star \big\|_F^2
&= 2\eta \big\langle \nabla L_S(\hat W^{(t)}) \odot M, \hat W^{(t)} - \hat W^\star \big\rangle - \eta^2 \big\| \nabla L_S(\hat W^{(t)}) \odot M \big\|_F^2 \\
&= \frac{2\eta}{n} \sum_{i=1}^{n} \sum_{j=1}^{K} \ell_{j,i}^{\prime(t)} \Big[ q F_j(\hat W_j^{(t)}, x_i) - \big\langle \nabla F_j(\hat W_j^{(t)}, x_i), \hat W_j^\star \big\rangle \Big] - \eta^2 \big\| \nabla L_S(\hat W^{(t)}) \odot M \big\|_F^2 \\
&\ge \frac{2q\eta}{n} \sum_{i=1}^{n} \Big[ \log\Big( 1 + \sum_{j \ne y_i} e^{F_j - F_{y_i}} \Big) - \log\Big( 1 + \sum_{j \ne y_i} e^{(\tilde F_j - \tilde F_{y_i})/q} \Big) \Big] - \eta^2 \big\| \nabla L_S(\hat W^{(t)}) \odot M \big\|_F^2 \\
&\ge \frac{2q\eta}{n} \sum_{i=1}^{n} \Big[ \ell(\hat W^{(t)}; x_i, y_i) - \log\big( 1 + K e^{-\log(2qK/\epsilon)} \big) \Big] - \eta^2 \big\| \nabla L_S(\hat W^{(t)}) \odot M \big\|_F^2 \\
&\ge \frac{2q\eta}{n} \sum_{i=1}^{n} \Big[ \ell(\hat W^{(t)}; x_i, y_i) - \frac{\epsilon}{2q} \Big] - \eta^2 \big\| \nabla L_S(\hat W^{(t)}) \odot M \big\|_F^2 \\
&\ge C\eta L_S(\hat W^{(t)}) - \eta\epsilon,
\end{align*}
where the first inequality follows from the convexity of the cross-entropy loss with softmax, the second from Lemma D.20, the third because $\log(1 + x) \le x$, and the last from Lemma D.19, for some constant $C > 0$.

Lemma E.9 (Formal Restatement of Lemma 4.4). Under the same assumptions as Theorem E.1, choose
\[
T_2 = T_1 + \Big\lceil \frac{\| \hat W^{(T_1)} - \hat W^\star \|_F^2}{2\eta\epsilon} \Big\rceil = T_1 + \tilde O\big( \eta^{-1}\epsilon^{-1} m^4 n \sigma_n^{-2}(pd)^{-1} \big).
\]
Then for any time $t$ during this stage we have $\max_{j,r,i} |\omega_{j,r,i}^{(t)}| = O(\sigma_0\sqrt{pd})$ and
\[
\frac{1}{t - T_1} \sum_{s=T_1}^{t} L_S(\hat W^{(s)}) \le \frac{\| \hat W^{(T_1)} - \hat W^\star \|_F^2}{C\eta(t - T_1)} + \frac{\epsilon}{C}.
\]

Proof. We have
\[
\big\| \hat W^{(s)} - \hat W^\star \big\|_F^2 - \big\| \hat W^{(s+1)} - \hat W^\star \big\|_F^2 \ge C\eta L_S(\hat W^{(s)}) - \eta\epsilon.
\]
Taking a telescoping sum from $T_1$ to $t$ yields
\[
\sum_{s=T_1}^{t} L_S(\hat W^{(s)}) \le \frac{\| \hat W^{(T_1)} - \hat W^\star \|_F^2 + \eta\epsilon(t - T_1)}{C\eta}.
\]
Combining this with Lemma E.4, we have
\[
\sum_{s=T_1}^{t} L_S(\hat W^{(s)}) \le O\big( \eta^{-1} \| \hat W^{(T_1)} - \hat W^\star \|_F^2 \big) = \tilde O\big( \eta^{-1} m^4 n \sigma_n^{-2}(pd)^{-1} \big).
\]

E.4 Generalization Analysis

Theorem E.10 (Formal Restatement of the Generalization Part of Theorem 4.1).
Under the same assumptions as Theorem E.1, within $O\big( \eta^{-1} n \sigma_0^{2-q}\sigma_n^{-q}(pd)^{-q/2} + \eta^{-1}\epsilon^{-1} m^4 n \sigma_n^{-2}(pd)^{-1} \big)$ iterations, we can find $\hat W^{(T)}$ such that $L_S(\hat W^{(T)}) \le \epsilon$ and $L_D(\hat W^{(T)}) \ge \Omega(\log K)$.

Proof. First, from Lemma E.9 we know that there exists $t \in [T_1, T_2]$ such that $L_S(\hat W^{(t)}) \le \epsilon$. Then we can bound
\begin{align*}
\big\| \hat w_{j,r}^{(t)} \big\|_2
&= \Big\| \hat w_{j,r}^{(0)} + \sum_{k \ne j} \gamma_{j,r,k}^{(t)} \frac{\mu_k}{\mu^2} + \sum_{i=1}^{n} \zeta_{j,r,i}^{(t)} \frac{\tilde\xi_{j,r,i}}{\|\tilde\xi_{j,r,i}\|_2^2} + \sum_{i=1}^{n} \omega_{j,r,i}^{(t)} \frac{\tilde\xi_{j,r,i}}{\|\tilde\xi_{j,r,i}\|_2^2} \Big\|_2 \\
&\le \big\| \hat w_{j,r}^{(0)} \big\|_2 + \sum_{k \ne j} \big|\gamma_{j,r,k}^{(t)}\big| \frac{1}{\mu} + \sum_{i=1}^{n} \zeta_{j,r,i}^{(t)} \frac{1}{\|\tilde\xi_{j,r,i}\|_2} + \sum_{i=1}^{n} \big|\omega_{j,r,i}^{(t)}\big| \frac{1}{\|\tilde\xi_{j,r,i}\|_2} \\
&\le O(\sigma_0\sqrt{d}) + \tilde O\big( n\sigma_n^{-1}(pd)^{-1/2} \big).
\end{align*}
Consider a new example $(x, y)$. Taking a union bound over $r$, with probability at least $1 - d^{-1}$ we have
\[
\big| \big\langle \hat w_{y,r}^{(t)}, \xi \big\rangle \big| = \tilde O\big( \sigma_0\sigma_n\sqrt{d} + n(pd)^{-1/2} \big)
\]
for all $r \in [m]$. Then,
\[
F_y(x) = \sum_{r=1}^{m} \Big[ \sigma\big( \langle \hat w_{y,r}^{(t)}, \mu_y \rangle \big) + \sigma\big( \langle \hat w_{y,r}^{(t)}, \xi \rangle \big) \Big]
\le m \max_r \big| \big\langle \hat w_{y,r}^{(t)}, \xi \big\rangle \big|^q
\le m\, \tilde O\big( \sigma_0^q\sigma_n^q d^{q/2} + n^q(pd)^{-q/2} \big) \le 1,
\]
where the last inequality follows because $\sigma_0 \le \tilde O(m^{-1/q}\mu^{-1})$ and $d \ge \tilde\Omega(m^{2/q} n^2)$. Thus, with probability at least $1 - 1/d$,
\[
\ell\big( F(\hat W^{(t)}; x) \big) \ge \log\big( 1 + (K - 1)e^{-1} \big).
\]
Since the loss is nonnegative, taking expectation over the new example yields $L_D(\hat W^{(t)}) \ge (1 - 1/d)\log(1 + (K - 1)e^{-1}) = \Omega(\log K)$, which completes the proof.
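A short numerical check (not from the paper) confirms that the per-example bound $\log(1 + (K - 1)e^{-1})$ derived above is within a constant factor of $\log K$, i.e., comparable to the loss of random guessing, matching the $\Omega(\log K)$ test-loss lower bound in Theorem E.10.

# Check that log(1 + (K-1)/e) is within a constant factor of log K.
import numpy as np

for K in [2, 10, 100, 1000]:
    lower = np.log(1 + (K - 1) * np.exp(-1.0))
    print(f"K={K:5d}  log(1+(K-1)/e)={lower:6.3f}   log K={np.log(K):6.3f}   "
          f"ratio={lower/np.log(K):.3f}")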