Detection problems in the spiked matrix models

Ji Hyung Jung*, Hye Won Chung†, and Ji Oon Lee‡

January 16, 2023

Abstract

We study the statistical decision process of detecting the low-rank signal from various signal-plus-noise type data matrices, known as the spiked random matrix models. We first show that principal component analysis can be improved by entrywise pre-transforming the data matrix if the noise is non-Gaussian, generalizing the known results for the spiked random matrix models with rank-1 signals. As an intermediate step, we find sharp phase transition thresholds for the extreme eigenvalues of spiked random matrices, which generalize the Baik-Ben Arous-Péché (BBP) transition. We also prove the central limit theorem for the linear spectral statistics of the spiked random matrices and propose a hypothesis test based on it, which does not depend on the distribution of the signal or the noise. When the noise is non-Gaussian, the test can be improved by an entrywise transformation if the data matrix is of additive type. We also introduce an algorithm that estimates the rank of the signal when it is not known a priori.

1 Introduction

One of the most natural approaches to 'signal-plus-noise' type data is to consider spiked random matrices, which are low-rank deformations of large random matrices. The most notable examples of spiked random matrices are the spiked Wigner matrix and the spiked Wishart matrix, where the signal is given as a low-rank mean matrix (spiked Wigner matrix) or as a low-rank perturbation of the identity in the covariance matrix (spiked Wishart matrix). In this paper, we focus on the following three types of noisy data matrices, known as spiked random matrices, which generalize spiked Wigner/Wishart matrices:

• Spiked Wigner matrix: the data matrix is of the form

  UΛ^{1/2}U^T + W,    (1.1)

where U = [u(1), u(2), . . . , u(k)] ∈ R^{N×k} with U^T U = I_k, and W is an N×N Wigner matrix. The signal-to-noise ratio (SNR) matrix Λ = diag(λ1, λ2, . . . , λk) with λ1 ≥ λ2 ≥ · · · ≥ λk > 0 for some positive integer k, independent of N.

• Rectangular matrix with spiked mean (additive model): the data matrix is of the form

  UΛ^{1/2}V^T + X,    (1.2)

where U = [u(1), u(2), . . . , u(k)] ∈ R^{M×k} and V = [v(1), v(2), . . . , v(k)] ∈ R^{N×k} with U^T U = V^T V = I_k, and X is an M×N random i.i.d. matrix whose entries are centered with variance N^{-1}. The SNR matrix Λ is given as in the spiked Wigner matrix.

• Rectangular matrix with spiked covariance (multiplicative model): the data matrix is of the form

  (I + UΛU^T)^{1/2} X,    (1.3)

where U = [u(1), u(2), . . . , u(k)] ∈ R^{M×k} with U^T U = I_k, and X is an M×N random i.i.d. matrix whose entries are centered with variance N^{-1}. The SNR matrix Λ is given as in the spiked Wigner matrix.

Here, I_k is the k×k identity matrix, and we allow the case k = 0, where no signal is present. (A minimal sampling sketch of the three models is given below.)

----
*Department of Mathematical Sciences, KAIST, Daejeon, 34141, Korea. Email: jhjung66@kaist.ac.kr
†School of Electrical Engineering, KAIST, Daejeon, 34141, Korea. Email: hwchung@kaist.ac.kr
‡Department of Mathematical Sciences, KAIST, Daejeon, 34141, and School of Mathematics, KIAS, Seoul, 02455, Korea. Email: jioon.lee@kaist.edu
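For concreteness, the following minimal sketch samples the three models, assuming Gaussian noise and a spherical-type prior (orthonormalized Gaussian columns); the function names are ours, and other priors or noise distributions can be substituted.

```python
import numpy as np

rng = np.random.default_rng(0)

def spiked_wigner(N, lambdas):
    """Sample U Lambda^{1/2} U^T + W as in (1.1); GOE-like noise."""
    U, _ = np.linalg.qr(rng.standard_normal((N, len(lambdas))))
    G = rng.standard_normal((N, N))
    W = (G + G.T) / np.sqrt(2 * N)                 # off-diagonal variance 1/N
    return U @ np.diag(np.sqrt(lambdas)) @ U.T + W

def spiked_additive(M, N, lambdas):
    """Sample U Lambda^{1/2} V^T + X as in (1.2); entries of X have variance 1/N."""
    U, _ = np.linalg.qr(rng.standard_normal((M, len(lambdas))))
    V, _ = np.linalg.qr(rng.standard_normal((N, len(lambdas))))
    X = rng.standard_normal((M, N)) / np.sqrt(N)
    return U @ np.diag(np.sqrt(lambdas)) @ V.T + X

def spiked_multiplicative(M, N, lambdas):
    """Sample (I + U Lambda U^T)^{1/2} X as in (1.3)."""
    U, _ = np.linalg.qr(rng.standard_normal((M, len(lambdas))))
    X = rng.standard_normal((M, N)) / np.sqrt(N)
    # since U^T U = I_k, (I + U L U^T)^{1/2} = I + U ((I + L)^{1/2} - I) U^T
    D = np.diag(np.sqrt(1.0 + np.asarray(lambdas)) - 1.0)
    return (np.eye(M) + U @ D @ U.T) @ X
```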
Throughout the paper, for ease of notation, we denote by W an N×N Wigner matrix and by X an M×N random i.i.d. matrix.

To describe the detection problems we consider in this paper, we first review the known results for the simplest case of the spiked random matrix models with rank-1 spikes, i.e., k = 1 in (1.1), (1.2), and (1.3).

Signal detection problem in rank-1 spiked random matrices: Many problems concerning signal detection can be answered in the case of Gaussian noise and a rank-1 spike. In this case, the spikes U = u and V = v are vectors and the SNR λ1 = λ, hence the spiked random matrices are of the following forms:

  √λ uu^T + W    (1.4)
  √λ uv^T + X    (1.5)
  (I + λuu^T)^{1/2} X.    (1.6)

For this case, reliable detection of the signal, i.e., detection with probability 1 − o(1) as M, N → ∞, is impossible if the SNR λ is below a certain threshold [44, 47]. The threshold is 1 as N → ∞ for spiked Wigner matrices; for spiked rectangular matrices, under the additional assumption M/N → d0 as N → ∞, the threshold is √d0 for a general class of priors [50]. On the other hand, if the SNR is above the threshold, the signal can be reliably detected by principal component analysis (PCA), in which case the signal can actually be estimated [24, 41, 43].

In the subcritical case where the signal is not reliably detectable, it is natural to consider a hypothesis test on the presence of the signal between H0 : λ = 0 and H1 : λ = ω, commonly referred to as weak detection; it is also known as the sphericity test when the spike is drawn from the uniform distribution on the unit sphere (the spherical prior). By the Neyman–Pearson lemma, the likelihood ratio (LR) test is optimal in the sense that it minimizes the sum of the Type-I and Type-II errors. It was proved for several distributions of the spikes, called priors, that this sum converges to

  erfc( (1/4)√(−log(1 − λ)) )    (1.7)

for a spiked Wigner matrix when W is a Gaussian Orthogonal Ensemble (GOE), and to

  erfc( (1/4)√(−log(1 − λ²/d0)) )    (1.8)

for a spiked Wishart matrix, i.e., when XX^T is a Wishart ensemble; see, e.g., [47, 29, 28]. Here, erfc(·) is the complementary error function, defined as

  erfc(x) = (2/√π) ∫_x^∞ e^{−t²} dt.    (1.9)

Though optimal, the LR test is not efficient, and it is desirable to construct a test that does not depend on information about the prior, which is typically not known in practical applications. In [22], an optimal and universal test for spiked Wigner matrices was proposed, based on the linear spectral statistics (LSS) of the data matrix, the linear functional

  L_N(f) = Σ_{i=1}^N f(µ_i)    (1.10)

for a given function f, where µ1, . . . , µN are the eigenvalues of the data matrix. The test was extended to spiked rectangular matrices in [34], where the singular values of the data matrix are used instead of the eigenvalues.

If the noise is non-Gaussian, it is possible to improve the PCA by transforming the data matrix entrywise, for spiked Wigner matrices [42, 50] and for spiked rectangular matrices [34]. In this improved PCA, the threshold is lowered by a factor that depends on the Fisher information of the noise distribution. Below this lowered threshold, the LSS-based test proposed in [22] for spiked Wigner matrices can also be improved by applying the entrywise transformation used for the improved PCA.
It is not known whether reliable detection is impossible below the lowered threshold, except for the case of the spiked Wigner matrix with the Rademacher prior [21].

Spiked random matrices with general rank: A more relevant structure for applications is that the latent signal contains multiple spikes, or a spike with higher rank. For such models of spiked random matrices, as in the rank-1 case, it is natural to ask the following questions:

• If the noise is non-Gaussian, what is the spectral threshold for reliable detection, lower than the existing one for Gaussian noise?

• Can we design an efficient algorithm to weakly detect the presence of the signal (i.e., to do better than a random guess) when reliable detection is not feasible?

Contrary to the rank-1 case, these questions have not been answered, even in the simplest case of Gaussian noise. Furthermore, for spikes of general rank, we need to consider another important problem: finding the rank of the spike when it is not known a priori. While viable solutions in the context of community detection were suggested in [40, 16] for spiked Wigner matrices and in [49, 25] for spiked rectangular matrices, these methods are not applicable in the subcritical case. To the best of our knowledge, there are no spectral algorithms for estimating the rank of the signal in the subcritical regime. We thus aim to answer the following question as well:

• Can we design an efficient algorithm to estimate the rank of the signal when reliable detection is not feasible?

Main contributions

Our main contributions are divided into three parts:

• (Strong detection) We prove that the PCA can be improved by an entrywise transformation if the noise is non-Gaussian, under a mild assumption on the distribution (prior) of the spike.

• (Weak detection I) We propose a universal test, with low computational complexity, to detect the presence of the signal, based on the linear spectral statistics (LSS). The test does not require any prior information on the signal, and if the noise is Gaussian the error of the proposed test is optimal. For the spiked Wigner matrix and the additive model of the spiked rectangular matrix with non-Gaussian noise, we propose an improved test via an entrywise transformation.

• (Weak detection II) We present an LSS-based test for estimating the rank of the signal when Λ = λI.

Heuristically, it is possible to increase the SNR via an entrywise transformation. We illustrate the main idea for the spiked Wigner matrix of the form M = UΛ^{1/2}U^T + W. If |u_i Λ^{1/2} u_j^T| ≪ |W_ij|, where u_i denotes the i-th row of the signal matrix U, then applying a function q entrywise to √N M yields a transformed matrix whose entries are

  q(√N M_ij) = q(√N W_ij + √N u_i Λ^{1/2} u_j^T) ≈ q(√N W_ij) + √N q′(√N W_ij) u_i Λ^{1/2} u_j^T.

With negligible error, the coefficient q′(√N W_ij) in the second term on the right-hand side can be approximated by its expectation (see Appendix B.3 for the proof). Then

  q(√N M_ij) ≈ √N ( q(√N W_ij)/√N + E[q′(√N W_ij)] u_i Λ^{1/2} u_j^T ),

and after a proper normalization the transformed matrix is approximately of the form U(Λ′)^{1/2}U^T + Q, which is another spiked Wigner matrix with a different SNR; the sketch below illustrates this effect.
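The following sketch applies h0 = −g′/g entrywise to a rank-1 spiked Wigner matrix with the bimodal noise density used in Appendix A.1.1, for which −g′/g simplifies to 4x − 2√3 tanh(2√3 x) and F_g ≈ 2.508. For simplicity the diagonal is treated with the same transform, whereas the transformation (3.2) in Section 3 treats it separately; parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam, Fg = 2000, 0.7, 2.50810   # lam < 1 but lam * Fg > 1

# bimodal noise: (1/2)*Gaussian + (sqrt(3)/2)*Rademacher, unit variance
Z = rng.standard_normal((N, N)) / 2 + np.sqrt(3) / 2 * rng.choice([-1.0, 1.0], (N, N))
Z = np.triu(Z) + np.triu(Z, 1).T                   # symmetrize
u = rng.choice([-1.0, 1.0], N) / np.sqrt(N)        # Rademacher prior, rank 1
M = np.sqrt(lam) * np.outer(u, u) + Z / np.sqrt(N)

def h0(x):   # -g'(x)/g(x) for the bimodal density above
    return 4 * x - 2 * np.sqrt(3) * np.tanh(2 * np.sqrt(3) * x)

M_t = h0(np.sqrt(N) * M) / np.sqrt(Fg * N)         # cf. (3.2), diagonal simplified

print(np.linalg.eigvalsh(M)[-1])     # sticks to the bulk edge 2, since lam < 1
print(np.linalg.eigvalsh(M_t)[-1])   # ~ sqrt(lam*Fg) + 1/sqrt(lam*Fg) ≈ 2.08
```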
By optimizing the transformation q, we find that the SNR is effectively increased (or equivalently, the detection threshold is lowered) in the PCA for the transformed matrix. The change of the threshold and a BBP-type transition for the largest eigenvalues of the transformed matrix can be rigorously proved; see Theorem 3.3 for a precise statement. We remark that the same idea works even if the SNR matrix Λ is not a constant multiple of the identity, and a similar result holds for the additive model of the spiked rectangular matrix (Theorem 3.4).

For the multiplicative model of the form Y = (I + UΛU^T)^{1/2}X =: (I + UΓU^T)X, with Λ = 2Γ + Γ², the analysis is significantly more involved, for the following reason. Applying a function q entrywise to √N Y, we find that

  q(√N Y_ij) = q( √N X_ij + √N Σ_ℓ u_i Γ u_ℓ^T X_ℓj )
  ≈ q(√N X_ij) + √N q′(√N X_ij) Σ_ℓ u_i Γ u_ℓ^T X_ℓj
  ≈ √N ( q(√N X_ij)/√N + E[q′(√N X_ij)] Σ_ℓ u_i Γ u_ℓ^T X_ℓj ),

and the transformed matrix is of the form UΓ′U^T X + Q, which is not a spiked rectangular matrix anymore. Note that Q depends on X entrywise, so the transformed matrix cannot be regarded as an additive model either.

In Theorem 3.5, we prove the effective change of the SNR and the BBP-type transition for the multiplicative model. The proof of Theorem 3.5 is based on a generalized version of the BBP transition that works for matrices of the form UΓU^T X + Q. We remark that the strategy of the proof, based on recent developments in random matrix theory, can also be applied to prove BBP-type transitions for other models.

As in the rank-1 case in [34], it is notable that the optimal entrywise transform for the multiplicative model differs from that for the additive model. For the spiked Wigner matrix, the optimal transform is −g′/g for the off-diagonal entries (and −g′_d/g_d for the diagonal entries), where g and g_d are the densities of the off-diagonal and diagonal entries, respectively; the optimal transform for the additive model is also −g′/g. However, for the multiplicative model, the optimal transform is a linear combination of the function −g′/g and the identity mapping. Heuristically, this is because the effective SNRs depend not only on Γ′ but also on the correlation between X and Q; the former is maximized when the transform is −g′/g, while the latter is maximized when the transform is the identity mapping. We also remark that the effective SNRs after the optimal entrywise transform are larger in the additive model, which suggests that the detection problem is fundamentally harder for the multiplicative model.

With the BBP-type transition for the largest eigenvalues of the transformed matrices, it is also possible to improve the performance of several statistical inference procedures [13, 35, 46]. One consequence is that the corresponding eigenspace is close to the true spike U in the sense of direction of arrival (DoA) [23]. In other words, we can not only reliably estimate the number of spikes by parallel analysis (PA) [27], but also approximately recover the true spikes and the corresponding SNRs.

For the subcritical case where it is impossible to reliably detect the signal by the improved PCA, we propose algorithms for weak detection based on the central limit theorem (CLT) for the LSS (Theorems 5.2, 5.3, 5.5, and 5.6), analogous to the ones introduced in [22].
More precisely, assuming the SNRs are uniform, i.e., Λ = λI, we propose an algorithm for a hypothesis test between

  H_{k1} : k = k1,    H_{k2} : k = k2    (1.11)

for non-negative integers k1 < k2. While it may seem obvious, it has not been known, even in the simple case k1 = 0, whether the detection becomes easier as k2 increases. Our test in Algorithm 2 verifies the claim, since the error of the proposed test is a decreasing function of (k2 − k1), as shown in Theorem 4.2. As in [22], the proposed tests are universal, and the various quantities appearing in them can be estimated from the observed data. The test can be further improved by applying the same entrywise transformation we used for the PCA (Algorithm 3) if the data matrix is of additive type (a spiked Wigner matrix or a rectangular matrix with spiked mean), and it can also be adapted to the rank detection problem, where we need to estimate the rank k of the signal without knowing the candidates k1 and k2 a priori (Algorithm 4).

The main mathematical achievement of the second part is the CLT for the LSS of spiked random matrices with general ranks. For a rank-1 spiked Wigner matrix, the CLT was first proved for the special spike (1/√N)(1, 1, . . . , 1)^T in [9] and later extended to general rank-1 spikes by comparison with the special case [22]. However, the proof in [9] does not readily extend to spiked Wigner matrices with higher ranks or to spiked rectangular matrices. In this paper, we overcome the difficulty by introducing a direct interpolation between the spiked random matrix and the corresponding pure-noise matrix and tracking the change of the LSS. Furthermore, we prove that the proposed entrywise transformation for data matrices of additive type also effectively changes the SNR, and that the LSS of the transformed matrix is again asymptotically Gaussian; this was previously proved only for rank-1 spiked Wigner matrices in [22]. Thus, the error of the proposed test decreases after the transformation, as for spiked Wigner matrices in [22].

Related works

Spiked random matrix models were first introduced by Johnstone [31]. The model can be applied to various problems such as community detection [1] and submatrix localization [19]. The transition of the largest eigenvalue was proved by Baik, Ben Arous, and Péché [7] for spiked complex Wishart matrices and generalized by Benaych-Georges and Nadakuditi [14, 15]. For more results from random matrix theory on the extreme eigenvalues and the corresponding eigenvectors of spiked random matrices, we refer to [18] and references therein.

The improved PCA based on an entrywise transformation was considered for rank-1 spiked Wigner matrices in [42, 50], where the transformation is chosen to maximize the effective SNR of the transformed matrix. Detection problems for rank-1 spiked Wigner matrices have also been considered; there the analysis is typically easier due to the symmetry of the model and its canonical connection with spin glass models. For more results on rank-1 spiked Wigner matrices, we refer to [44, 50, 29, 22] and references therein.

The testing problem for rank-1 spiked Wishart matrices with the spherical prior was considered by Onatski, Moreira, and Hallin [47, 48], who proved the optimal error of the hypothesis test. It was later extended to the case where the entries of the spike are i.i.d. with bounded support (the i.i.d. prior) by El Alaoui and Jordan [28].
See also [32, 44, 24, 41, 43, 12] for more about detection limits in statistical learning theory.

Models with sparse or generative structure of the spike have been studied extensively in the literature. Various statistical and algorithmic methods are applicable in the regime where the SNR is smaller than the spectral threshold: the sparsity of the spikes, or the dimension of the latent vector generating the spike prior, effectively lowers the SNR threshold above which several algorithms are applicable; see [5, 20] and references therein.

Organization of the paper

The rest of the paper is organized as follows. In Section 2, we introduce the precise definitions of the models and the relevant previous results. In Section 3, we state our results on the improved PCA. In Section 4, we propose LSS-based tests and a test for rank estimation, and analyze their performance. In Section 5, we state general results on the CLT for the LSS. We conclude the paper in Section 6 with a summary of our work and future research directions. In Appendix A, we consider examples of spiked random matrices and provide results from numerical experiments. In Appendices B and C, we provide the technical details of the proofs.

2 Preliminaries

In this section, we introduce the precise definitions of the models and previous results for spiked random matrices.

2.1 Definitions of models

The noise matrices are defined as follows:

Definition 2.1 (Wigner matrix). An N×N symmetric matrix W = (W_ij) is a (real) Wigner matrix if W_ij (i, j = 1, 2, . . . , N) are independent real random variables such that

• for all i < j, N E[W_ij²] = 1, N^{3/2} E[W_ij³] = w3, and N² E[W_ij⁴] = w4 for some w3, w4 ∈ R;
• for all i, N E[W_ii²] = w2 for some constant w2 ≥ 0;
• for any positive integer p, there exists C_p, independent of N, such that N^{p/2} E[W_ij^p] ≤ C_p for all i ≤ j.

Definition 2.2 (Random rectangular matrix). An M×N matrix X = (X_ij) is a (real) random rectangular matrix if X_ij (1 ≤ i ≤ M, 1 ≤ j ≤ N) are independent real random variables such that

• for all i, j, E[X_ij] = 0, N E[X_ij²] = 1, N^{3/2} E[X_ij³] = w3, and N² E[X_ij⁴] = w4 for some constants w3, w4;
• for any positive integer p, there exists C_p, independent of N, such that N^{p/2} E[X_ij^p] ≤ C_p for all i, j.

The spiked random matrices are defined as follows:

Definition 2.3 (Spiked Wigner matrix). An N×N matrix M = UΛ^{1/2}U^T + W is a spiked Wigner matrix with SNR (matrix) Λ if W is a Wigner matrix and the spike U = [u(1), u(2), . . . , u(k)] ∈ R^{N×k} with U^T U = I_k.

Definition 2.4 (Spiked rectangular matrix - additive model). An M×N random matrix Y = UΛ^{1/2}V^T + X is a rectangular matrix with spiked mean U, V and SNR (matrix) Λ if X is a random rectangular matrix and the spikes U = [u(1), . . . , u(k)] ∈ R^{M×k}, V = [v(1), . . . , v(k)] ∈ R^{N×k} with U^T U = V^T V = I_k.

Definition 2.5 (Spiked rectangular matrix - multiplicative model). An M×N random matrix Y = (I + UΛU^T)^{1/2}X is a rectangular matrix with spiked covariance U and SNR (matrix) Λ if X is a random rectangular matrix and U = [u(1), . . . , u(k)] ∈ R^{M×k} with U^T U = I_k.

We assume throughout the paper that the SNR matrix Λ is a k×k diagonal matrix with Λ_ii = λ_i, λ1 ≥ λ2 ≥ · · · ≥ λk ≥ 0, and that M/N → d0 ∈ (0, ∞) as M, N → ∞.

2.2 Principal component analysis

We recall results on the principal components of the spiked models from random matrix theory.
Spiked Wigner matrix

Let M be a spiked Wigner matrix. The empirical spectral measure of M converges to Wigner's semicircle law µ_sc; i.e., if we denote by µ1 ≥ µ2 ≥ · · · ≥ µN the eigenvalues of M, then

  (1/N) Σ_{i=1}^N δ_{µi}(x) dx → dµ_sc(x)    (2.1)

weakly in probability as N → ∞, where

  dµ_sc(x) = (√(4 − x²) / 2π) 1_{(−2,2)}(x) dx.    (2.2)

The k largest eigenvalues have the following almost-sure limits: for 1 ≤ i ≤ k,

• if λ_i > 1, then µ_i → √λ_i + 1/√λ_i;
• if λ_i < 1, then µ_i → 2.

Sample covariance matrix

Let S = YY^T be the sample covariance matrix (Gram matrix) obtained from a spiked rectangular matrix Y. The empirical spectral measure of S converges to the Marchenko–Pastur law µ_MP; i.e., if we denote by µ1 ≥ µ2 ≥ · · · ≥ µM the eigenvalues of S, then

  (1/M) Σ_{i=1}^M δ_{µi}(x) dx → dµ_MP(x)    (2.3)

weakly in probability as M, N → ∞, where for M ≤ N

  dµ_MP(x) = ( √((x − d−)(d+ − x)) / (2π d0 x) ) 1_{(d−,d+)}(x) dx,    (2.4)

with d± = (1 ± √d0)². The k largest eigenvalues have the following almost-sure limits: for 1 ≤ i ≤ k,

• if λ_i > √d0, then µ_i → (1 + λ_i)(1 + d0/λ_i);
• if λ_i < √d0, then µ_i → d+ = (1 + √d0)².

This in particular shows that the detection can be done reliably by PCA if λ_i > √d0. We remark that the results above hold for both the additive model and the multiplicative model; a numerical illustration is given below.
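The following sketch checks the BBP-type limits above for the additive model with a rank-1 spike; parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
M_dim, N, lam = 1000, 2000, 1.5          # d0 = 0.5, threshold sqrt(d0) ~ 0.707
d0 = M_dim / N

u = rng.standard_normal(M_dim); u /= np.linalg.norm(u)
v = rng.standard_normal(N);     v /= np.linalg.norm(v)
X = rng.standard_normal((M_dim, N)) / np.sqrt(N)
Y = np.sqrt(lam) * np.outer(u, v) + X    # additive model with a rank-1 spike

evals = np.linalg.eigvalsh(Y @ Y.T)
print(evals[-1], (1 + lam) * (1 + d0 / lam))   # outlier ~ 3.33, since lam > sqrt(d0)
print(evals[-2], (1 + np.sqrt(d0))**2)         # next eigenvalue near the edge d_+ ~ 2.91
```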
2.3 Linear spectral statistics

We introduce the central limit theorems for the null models.

Spiked Wigner matrix

The proof of the Gaussian convergence of the LR in [8, 10] is based on the recent study of linear spectral statistics, defined as

  L_M(f) = Σ_{i=1}^N f(µ_i)    (2.5)

for a function f, where µ1 ≥ µ2 ≥ · · · ≥ µN are the eigenvalues of M. As Wigner's semicircle law (2.1) suggests, one considers the fluctuation of the LSS about

  N ∫_{−2}^{2} f(x) dµ_sc(x).

The CLT for the LSS is the statement

  ( L_M(f) − N ∫_{−2}^{2} f(x) dµ_sc(x) ) ⇒ N(m_M(f), V_M(f)),    (2.6)

where the right-hand side is a Gaussian random variable with mean m_M(f) and variance V_M(f). The CLT was proved for the null case (λ = 0). We will show that the CLT also holds under the alternative, and that the mean m_M(f) depends on λ while the variance V_M(f) does not.

Spiked rectangular matrices

The LSS of a spiked rectangular matrix is defined as

  L_Y(f) = Σ_{i=1}^M f(µ_i)    (2.7)

for a function f, where µ1 ≥ µ2 ≥ · · · ≥ µM are the eigenvalues of S = YY^T. As the Marchenko–Pastur law (2.3) suggests, one considers the fluctuation of the LSS about

  M ∫_{d−}^{d+} f(x) dµ_MP(x).

The CLT for the LSS is the statement

  ( L_Y(f) − M ∫_{d−}^{d+} f(x) dµ_MP(x) ) ⇒ N(m_Y(f), V_Y(f)),    (2.8)

where the right-hand side is a Gaussian random variable with mean m_Y(f) and variance V_Y(f). The CLT was proved for the null case (λ = 0). We will show that the CLT also holds under the alternative, and that the mean m_Y(f) depends on λ while the variance V_Y(f) does not.

3 Main result I - Improved PCA

In this section, we state our first main results, on the improvement of PCA by entrywise transformations, and provide results from numerical experiments.

3.1 Improved PCA

We introduce the following assumptions on the spike and the noise.

Assumption 3.1. For the spike U (and also V in the additive model), we assume, for some φ ≤ 1/2, that

1. the spikes are φ-localized with high probability, i.e., ∥U∥∞, ∥V∥∞ ≺ N^{−φ};
2. the spike matrix is φ-orthonormal with high probability, i.e., ∥U^T U − I_k∥_F, ∥V^T V − I_k∥_F ≺ N^{−φ}, so that the spikes are sampled from the Stiefel manifold of orthonormal k-frames in R^M or R^N with high probability.

For the noise, let P be the distribution of the normalized off-diagonal entries √N W_ij (i ≠ j) in Definition 2.1 and of √N X_ij in Definition 2.2. Further, for spiked Wigner matrices, let P_d be the distribution of the normalized diagonal entries √N W_ii in Definition 2.1. We assume the following:

1. The density functions g and g_d of P and P_d, respectively, are smooth, positive everywhere, and symmetric (about 0).
2. For any fixed (N-independent) D, the D-th moments of P and P_d are finite.
3. The functions h = −g′/g and h_d = −g′_d/g_d and all their derivatives are polynomially bounded, in the sense that |h^{(ℓ)}(w)|, |h_d^{(ℓ)}(w)| ≤ C_ℓ |w|^{C_ℓ} for some constant C_ℓ depending only on ℓ.

The first condition on the prior implies that the spike need not be delocalized, i.e., some entries of the signal can be significantly larger than N^{−1/2}. Key examples of the prior are as follows:

Example 3.2. We can consider the following examples of the spike prior:

1. the spherical prior, where u(ℓ) (and v(ℓ)) are i.i.d., drawn uniformly from the unit sphere; or
2. the i.i.d. prior, where the entries u1(ℓ), . . . , uM(ℓ) (respectively, v1(ℓ), . . . , vN(ℓ)) are i.i.d. random variables from probability measures µℓ (respectively, νℓ) with mean zero and variance M^{−1} (respectively, N^{−1}) such that for any integer p > 2,

  E|u_i(ℓ)|^p, E|v_j(ℓ)|^p ≤ C_p / M^{1+(p−2)φ}

for some (N-independent) constants C_p > 0 and φ ≤ 1/2, uniformly in i, j, and ℓ.

We remark that for spiked Wigner matrices, due to the normalization, the variance of the i.i.d. prior µℓ for u_i(ℓ) is N^{−1}.

Spiked Wigner matrix

Given a spiked Wigner matrix M, we consider a family of entrywise transformations

  h_α(x) = −g′(x)/g(x) + αx,    h_d(x) = −g′_d(x)/g_d(x)    (3.1)

for α ∈ R. We also consider the transformed matrix M̃ whose entries are

  M̃_ij = (1/√(F_g N)) h_0(√N M_ij)  (i ≠ j),    M̃_ii = √(w2/(F_{g,d} N)) h_d( √(N/w2) M_ii ),    (3.2)

where the Fisher informations F_g and F_{g,d} of g and g_d are given by

  F_g = ∫_{−∞}^{∞} (g′(x))²/g(x) dx,    F_{g,d} = ∫_{−∞}^{∞} (g′_d(x))²/g_d(x) dx.

Note that F_g ≥ 1, where the equality holds only if g is the standard Gaussian. The following theorem asserts that the effective SNRs of the transformed matrix for PCA are λℓ F_g, generalizing Theorem 4.8 in [50].

Theorem 3.3. Let M be a spiked Wigner matrix as in Definition 2.3, satisfying Assumption 3.1 with φ > 1/4. Let M̃ be the transformed matrix obtained as in (3.2), and let (µ̃ℓ, ũ(ℓ)) be the pair of the ℓ-th largest eigenvalue and the corresponding eigenvector of M̃. Then, almost surely, for 1 ≤ ℓ ≤ k:

• if λℓ > 1/F_g, then µ̃ℓ → √(λℓF_g) + 1/√(λℓF_g) and |ũ(ℓ)^T u(ℓ)|² → 1 − 1/(λℓF_g);
• if λℓ < 1/F_g, then µ̃ℓ → 2 and |ũ(ℓ)^T u(ℓ)|² → 0.

For the proof, we adapt the strategy of [50], where the key observation is that the transformed matrix is approximately equal to another spiked Wigner matrix; see Appendix B.2 for the details. We remark that h_0 is optimal (up to a constant factor) among all entrywise transformations; see Appendix B.5.1 for the proof. (A numerical evaluation of F_g for a concrete density is sketched below.)
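As a check, F_g can be evaluated numerically for a given density; the sketch below uses the bimodal density of Appendix A.1.1 and should reproduce the value F_g ≈ 2.508 quoted there.

```python
import numpy as np
from scipy.integrate import quad

a = np.sqrt(3) / 2   # the bimodal density g of (A.1)

def g(x):
    return (np.exp(-2 * (x - a)**2) + np.exp(-2 * (x + a)**2)) / np.sqrt(2 * np.pi)

def g_prime(x):
    return (-4 * (x - a) * np.exp(-2 * (x - a)**2)
            - 4 * (x + a) * np.exp(-2 * (x + a)**2)) / np.sqrt(2 * np.pi)

Fg, _ = quad(lambda x: g_prime(x)**2 / g(x), -np.inf, np.inf)
print(Fg)   # ~ 2.508; equals 1 only when g is the standard Gaussian
```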
Spiked rectangular matrices

For a spiked rectangular matrix Y, we consider the family of entrywise transformations h_α(x) defined in (3.1) and the transformed matrices Ỹ^{(α)} whose entries are

  Ỹ^{(α)}_ij = (1/√((α² + 2α + F_g)N)) h_α(√N Y_ij).    (3.3)

For the additive model, we again show that the effective SNRs of the transformed matrix for PCA are {λℓ F_g}ℓ.

Theorem 3.4. Let Y be a spiked rectangular matrix as in Definition 2.4, satisfying Assumption 3.1 with φ > 1/4. Let Ỹ ≡ Ỹ^{(0)} be the transformed matrix obtained as in (3.3) with α = 0, and let (µ̃ℓ, ũ(ℓ)) be the pair of the ℓ-th largest eigenvalue and the corresponding eigenvector of ỸỸ^T. Then, almost surely, for 1 ≤ ℓ ≤ k:

• if λℓ > √d0/F_g, then µ̃ℓ → (1 + λℓF_g)(1 + d0/(λℓF_g)) and |ũ(ℓ)^T u(ℓ)|² → 1 − d0(1 + λℓF_g)/(λℓF_g(λℓF_g + d0));
• if λℓ < √d0/F_g, then µ̃ℓ → d+ = (1 + √d0)² and |ũ(ℓ)^T u(ℓ)|² → 0.

From Theorem 3.4, if λℓ > √d0/F_g, we immediately see that the signal in the additive model can be reliably detected by the transformed PCA. Thus, the detection threshold of the PCA is lowered when the noise is non-Gaussian. We also remark that h_0 is the optimal entrywise transformation (up to a constant factor), as in the Wigner case; see Appendix B.5.2.

For the proof, we adapt the strategy of [34], where the key observation is again that the transformed matrix is approximately equal to another spiked rectangular matrix; see Appendix B.3 for the details.

For the multiplicative model, we have the following result.

Theorem 3.5. Let Y be a spiked rectangular matrix as in Definition 2.5, satisfying Assumption 3.1 with φ > 1/4. Let Ỹ ≡ Ỹ^{(α_{g,ℓ})} be the transformed matrix obtained as in (3.3) with

  α_{g,ℓ} := ( −γℓF_g + √(4F_g + 4γℓF_g + γℓ²F_g²) ) / (2(1 + γℓ)),

and let (µ̃ℓ, ũ(ℓ)) be the pair of the ℓ-th largest eigenvalue and the corresponding eigenvector of ỸỸ^T. Then, almost surely:

• if (λ_g)ℓ > √d0, then µ̃ℓ → (1 + (λ_g)ℓ)(1 + d0/(λ_g)ℓ) and

  |ũ(ℓ)^T u(ℓ)|² → 1 − ((λ_g)ℓ + d0) / ( (λ_g)ℓ · ((λ_g)ℓ + 1) );

• if (λ_g)ℓ < √d0, then µ̃ℓ → d+ = (1 + √d0)² and |ũ(ℓ)^T u(ℓ)|² → 0,

where

  (λ_g)ℓ := γℓ + γℓ²F_g/2 + (γℓ/2)√(4F_g + 4γℓF_g + γℓ²F_g²).

Note that

  (λ_g)ℓ ≥ γℓ + γℓ²F_g/2 + (γℓ/2)√(4 + 4γℓF_g + γℓ²F_g²) = 2γℓ + γℓ²F_g ≥ 2γℓ + γℓ² = λℓ,

and the first inequality is strict if F_g > 1, i.e., if g is not Gaussian.

Unlike the additive model, we cannot determine α_{g,ℓ} without prior knowledge of the SNR. Nevertheless, we can apply the transformation h_{√F_g} or h_0, which effectively increases all SNRs simultaneously; see Appendix B.5.

From Theorem 3.5, if (λ_g)ℓ > √d0, the signal can be reliably detected by the transformed PCA, and the detection threshold of the PCA is lowered if the noise is non-Gaussian. We also remark that h_{α_{g,ℓ}} is the optimal entrywise transformation (up to a constant factor) for the ℓ-th largest eigenvalue; see Appendix B.5.

We finish this section with an outline of the proof of Theorem 3.5. We begin by justifying that the transformed matrix Ỹ is approximately of the form Q + UΓ̃^{1/2}U^T X, where Γ̃ = diag(γ̃1, · · · , γ̃k). Then the largest eigenvalue of ỸỸ^T can be approximated by the largest eigenvalue of (Q + UΓ̃^{1/2}U^T X)^T(Q + UΓ̃^{1/2}U^T X), for which we consider the identity

  (Q + UΓ̃^{1/2}U^T X)^T(Q + UΓ̃^{1/2}U^T X) − zI = (Q^T Q − zI)(I + L(z)),

where

  L(z) = G(z)( X^T UΓ̃^{1/2}U^T Q + Q^T UΓ̃^{1/2}U^T X + X^T UΓ̃U^T X ),    G(z) = (Q^T Q − zI)^{−1}.

If z is an eigenvalue of (Q + UΓ̃^{1/2}U^T X)^T(Q + UΓ̃^{1/2}U^T X) but not of Q^T Q, the determinant of I + L(z) must be 0, and hence −1 is an eigenvalue of L(z). Since the rank of L(z) is at most 2k, the corresponding eigenvector of L(z) is a linear combination of the vectors G(z)Q^T u(ℓ) and G(z)X^T u(ℓ).
Further, using the facts in Example 3.2, we can see that a linear combination of the vectors G(z)Q^T u(ℓ) and G(z)X^T u(ℓ) is a candidate for the ℓ-th eigenvector of L(z), and hence of Ỹ^T Ỹ; i.e., for some aℓ, bℓ,

  L(z)( aℓ G(z)Q^T u(ℓ) + bℓ G(z)X^T u(ℓ) ) = −( aℓ G(z)Q^T u(ℓ) + bℓ G(z)X^T u(ℓ) ).    (3.4)

From the definition of L(z),

  L(z) · G(z)X^T U = G(z)X^T UΓ̃^{1/2}(U^T Q G(z) X^T U) + G(z)Q^T UΓ̃^{1/2}(U^T X G(z) X^T U) + G(z)X^T UΓ̃(U^T X G(z) X^T U),

and a similar equation holds for L(z) · G(z)Q^T U. This suggests that if U^T Q G(z) X^T U and U^T X G(z) X^T U concentrate around diagonal matrices whose entries are deterministic functions of z, then the left-hand side of (3.4) is well-approximated by a (deterministic) linear combination of G(z)Q^T u(ℓ) and G(z)X^T u(ℓ). We can then find the location of the largest eigenvalue in terms of a deterministic function of z and conclude the proof by optimizing over the transformation q.

The concentration of the random matrices U^T Q G(z) X^T U and U^T X G(z) X^T U is the biggest technical challenge in the proof, mainly due to the dependence between the matrices Q and X. We prove it by applying the technique of linearization in conjunction with resolvent identities, as well as several recent results from random matrix theory, most notably the local Marchenko–Pastur law.

Once the coefficients aℓ and bℓ in (3.4) are found, the eigenvector localization is an easy corollary, since the vector aℓ G(z)Q^T u(ℓ) + bℓ G(z)X^T u(ℓ) must be a right singular vector of Ỹ with corresponding singular value √((1 + (λ_g)ℓ)(1 + d0/(λ_g)ℓ)). In this paper, we do not go into further detail on this part. The detailed proof of Theorem 3.5 can be found in Appendix B.4.

4 Main Result II - Weak Detection

4.1 Signal detection in rank-1 spiked models

We begin by recalling the LSS-based detection algorithms for rank-1 spiked rectangular matrices in [34]. Suppose that our goal is to detect the presence of the signal by a hypothesis test between H0 : λ = 0 and H1 : λ = ω, where the SNR ω of the alternative hypothesis H1 is known. The key observation is that the variances of the limiting Gaussian distributions of the LSS in (2.7) do not depend on the SNR while the means do. If we denote by V_Y(f) the common variance, and by m_Y(f)|_{H0} and m_Y(f)|_{H1} the respective means, our goal is to find a function that maximizes the relative difference between the limiting distributions of the LSS under H0 and under H1, i.e.,

  | m_Y(f)|_{H1} − m_Y(f)|_{H0} | / √(V_Y(f)).    (4.1)

As we will see in Theorem 5.5, the optimal function f is of the form C1 φ_ω + C2 for some constants C1 and C2, where

  φ_ω(x) = (ω/d0)( 2/(w4 − 1) − 1 ) x − log( (1 + d0/ω)(1 + ω) − x ).    (4.2)

The test statistic we use is thus defined as

  L_ω = Σ_{i=1}^M φ_ω(µ_i) − M ∫_{d−}^{d+} φ_ω(x) dµ_MP(x)
     = −log det( (1 + d0/ω)(1 + ω)I − YY^T ) + (ω/d0)( 2/(w4 − 1) − 1 )(Tr YY^T − M)
       + M( ω/d0 − log(ω/d0) − ((1 − d0)/d0) log(1 + ω) ).    (4.3)

Theorem 8 in [34] asserts that L_ω converges to a Gaussian:

  L_ω ⇒ N(m(λ), V0).    (4.4)

Here, the mean of the limiting Gaussian distribution is given by

  m(λ) = −(1/2) log(1 − ω²/d0) + (ω²/2d0)(w4 − 3) − log(1 − λ²/d0) + (λ²/d0)( 2/(w4 − 1) − 1 )    (4.5)

with λ = 0 under H0 and λ = ω under H1, and the variance is

  V0 = −2 log(1 − ω²/d0) + (2ω²/d0)( 2/(w4 − 1) − 1 ).    (4.6)

Based on the asymptotic normality of L_ω, we can construct a test in which we compute the test statistic L_ω and compare it with the average of m(0) and m(ω), i.e.,

  m_ω := (m(0) + m(ω))/2 = −log(1 − ω²/d0) + (ω²/2d0)( 2/(w4 − 1) + w4 − 4 ).    (4.7)

See Algorithm 1 for the details.

Algorithm 1 Hypothesis test for a rank-1 spiked rectangular matrix
Input: data Y_ij, parameters w4, ω
 L_ω ← test statistic in (4.3)
 m_ω ← critical value in (4.7)
 if L_ω ≤ m_ω then
  Accept H0
 else
  Reject H0
 end if

The limiting error of the proposed test, Algorithm 1, is given by

  err(ω) = P(L_ω > m_ω | H0) + P(L_ω ≤ m_ω | H1) → erfc( √V0 / (4√2) ),    (4.8)

where V0 is the variance in (4.6) and erfc(·) is the complementary error function. If the noise X is Gaussian, then w4 = 3 and the limiting error in (4.8) is

  erfc( √V0 / (4√2) ) = erfc( (1/4)√(−log(1 − ω²/d0)) ),

which coincides with the error of the LR test; see Section 2.2 of [34]. This shows that our test is optimal under Gaussian noise. (A minimal implementation sketch of Algorithm 1 follows.)
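A minimal sketch of Algorithm 1 (the function name and interface are ours), assuming the subcritical regime ω < √d0 so that the log-determinant in (4.3) is well defined:

```python
import numpy as np

def algorithm1(Y, omega, w4):
    """LSS-based test of H0: lambda = 0 vs H1: lambda = omega; True accepts H0."""
    M, N = Y.shape
    d0 = M / N
    S = Y @ Y.T
    c = (1 + d0 / omega) * (1 + omega)
    _, logdet = np.linalg.slogdet(c * np.eye(M) - S)
    L = (-logdet
         + omega / d0 * (2 / (w4 - 1) - 1) * (np.trace(S) - M)
         + M * (omega / d0 - np.log(omega / d0)
                - (1 - d0) / d0 * np.log(1 + omega)))           # statistic (4.3)
    m_crit = (-np.log(1 - omega**2 / d0)
              + omega**2 / (2 * d0) * (2 / (w4 - 1) + w4 - 4))  # critical value (4.7)
    return L <= m_crit
```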
4.2 Signal detection in rank-k spiked models

When the rank of the spike is larger than 1, we first consider the simple case where the data is given as a spiked Wigner matrix, and our goal is to construct an LSS-based algorithm for a hypothesis test between H0 : Λ = 0 and Hk : Λ = ωI_k, where the rank k of the spike under the alternative hypothesis is known. Our starting point is the following test statistic, which was considered for the rank-1 spiked Wigner matrix in [22]:

  L_ω = −log det( (1 + ω)I − √ω M ) + ωN/2 + √ω(2/w2 − 1) Tr M + ω( 1/(w4 − 1) − 1/2 )(Tr M² − N).    (4.9)

If no signal is present, L_ω ⇒ N(m0, V0), where

  m0 = −(1/2) log(1 − ω) + ( (w2 − 1)/(w4 − 1) − 1/2 )ω + (w4 − 3)ω²/4,    (4.10)
  V0 = −2 log(1 − ω) + (4/w2 − 2)ω + ( 2/(w4 − 1) − 1 )ω².    (4.11)

For a rank-k spiked Wigner matrix, we can consider the same L_ω as in (4.9) and prove that it also converges to a Gaussian with the same variance V0 but an altered mean m_k. The following is the precise statement on the limiting distribution of L_ω.

Theorem 4.1. Let M be a rank-k spiked Wigner matrix with spike U as in Definition 2.3 with Λ = ωI_k for some nonnegative integer k. Then

  L_ω ⇒ N(m_k, V0),    (4.12)

where the variance V0 is as in (4.11) and the mean m_k is given by

  m_k = m0 + k( −log(1 − ω) + (2/w2 − 1)ω + ( 1/(w4 − 1) − 1/2 )ω² ) = m0 + kV0/2.    (4.13)

Proof. Theorem 4.1 follows directly from Theorem 5.2 in Section 5.

Since the mean of L_ω depends on the rank of the spike, we can construct a hypothesis test between H_{k1} and H_{k2} in (1.11), based on Theorems 4.1 and 4.4. In this test, for a given spiked Wigner matrix M, we compute L_ω and compare it with the critical value

  m_{(k1+k2)/2} := (m_{k1} + m_{k2})/2.    (4.14)

See Algorithm 2 for the details.

Algorithm 2 Hypothesis test for a spiked Wigner matrix
Input: data M_ij, parameters w2, w4, ω
 L_ω ← test statistic in (4.9)
 m_{(k1+k2)/2} ← critical value in (4.14) with (4.13)
 if L_ω ≤ m_{(k1+k2)/2} then
  Accept H_{k1}
 else
  Accept H_{k2}
 end if

In Theorems 5.2 and 5.5, we prove that the proposed test in Algorithm 2 is optimal among all CLT-based tests, in the sense that its error is minimized by the test statistic L_ω, also for spiked rectangular matrices.

Theorem 4.2. The error of the test in Algorithm 2, err(ω) = P(L_ω > m_{(k1+k2)/2} | H_{k1}) + P(L_ω ≤ m_{(k1+k2)/2} | H_{k2}), converges to

  erfc( ((k2 − k1)/4) √(V0/2) ).

Proof. Theorem 4.2 is a direct consequence of Theorems 4.1 and 4.4. (See also Section 3 of [29] and the proof of Theorem 2 of [22].)

Remark 4.3. When w4 = 3, the error err(ω) converges to

  erfc( ((k2 − k1)/4) √( −log(1 − ω) + (2/w2 − 1)ω ) ).    (4.15)

The optimal error for weak detection, achieved by the LR test, coincides with the limiting error in (4.15) when the noise is Gaussian and the SNR ω is sufficiently small; see [33]. Thus, our proposed test is optimal in this case. (A sketch of Algorithm 2 follows.)
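A corresponding sketch of Algorithm 2 for the spiked Wigner matrix (interface ours; assumes ω < 1):

```python
import numpy as np

def algorithm2(M_mat, omega, w2, w4, k1, k2):
    """Accept H_{k1} or H_{k2} from the statistic (4.9); returns the accepted rank."""
    N = M_mat.shape[0]
    _, logdet = np.linalg.slogdet((1 + omega) * np.eye(N)
                                  - np.sqrt(omega) * M_mat)
    L = (-logdet + omega * N / 2
         + np.sqrt(omega) * (2 / w2 - 1) * np.trace(M_mat)
         + omega * (1 / (w4 - 1) - 0.5) * (np.trace(M_mat @ M_mat) - N))
    m0 = (-0.5 * np.log(1 - omega)
          + ((w2 - 1) / (w4 - 1) - 0.5) * omega
          + (w4 - 3) * omega**2 / 4)                          # null mean (4.10)
    V0 = (-2 * np.log(1 - omega) + (4 / w2 - 2) * omega
          + (2 / (w4 - 1) - 1) * omega**2)                    # variance (4.11)
    crit = m0 + (k1 + k2) * V0 / 4        # (m_{k1} + m_{k2}) / 2, using (4.13)
    return k1 if L <= crit else k2
```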
The test in Algorithm 2 can be readily extended to spiked rectangular matrices by replacing the test statistic in (4.9) with the following one, introduced in [34] for rank-1 spiked rectangular matrices:

  L_ω = −log det( (1 + d0/ω)(1 + ω)I − YY^T ) + (ω/d0)( 2/(w4 − 1) − 1 )(Tr YY^T − M)
       + M( ω/d0 − log(ω/d0) − ((1 − d0)/d0) log(1 + ω) ).    (4.16)

We have the following result on the asymptotic Gaussian fluctuation of L_ω.

Theorem 4.4. Let Y be a spiked rectangular matrix as in Definition 2.4 or 2.5 with Λ = ωI_k for some nonnegative integer k, where ω ∈ (0, √d0) and w4 > 1. Then, for any spikes with U^T U = V^T V = I_k,

  L_ω ⇒ N(m_k, V0),    (4.17)

where the mean and the variance are given by

  m_k = m0 + k( −log(1 − ω²/d0) + (ω²/d0)( 2/(w4 − 1) − 1 ) )    (4.18)

and

  V0 = −2 log(1 − ω²/d0) + (2ω²/d0)( 2/(w4 − 1) − 1 ),    (4.19)

where

  m0 = −(1/2) log(1 − ω²/d0) + (ω²/2d0)(w4 − 3).    (4.20)

Theorem 4.4 follows directly from the general CLT result in Theorem 5.5; see Appendix C.4 for the detailed computation of the mean and the variance.

With Theorem 4.4, Algorithm 2 can be used for the weak detection of the signal in spiked rectangular matrices with the following changes:

• the data matrix is Y_ij (instead of M_ij);
• the test statistic L_ω is defined by (4.16) (instead of (4.9));
• the critical value m_{(k1+k2)/2} is obtained from (4.14) with (4.18) and (4.20) (instead of (4.13)).

The limiting error of the test in this case is again erfc( ((k2 − k1)/4)√(V0/2) ) as in Theorem 4.2, where V0 is now defined by (4.19).
4.3 Test with entrywise transformation for spiked matrices of additive type

The entrywise transform we applied to the PCA in Section 3.1 can also be used together with the proposed test in Algorithm 2; see also [22], where the same idea was applied to the rank-1 spiked Wigner matrix. Recall the transformation defined in (3.1) and the transformed matrix M̃ in (3.2). We consider the test statistic

  L̃_ω := −log det( (1 + ωF_g)I − √(ωF_g) M̃ ) + ωF_g N/2
        + √ω( 2√(F_{g,d})/w2 − √(F_g) ) Tr M̃ + ω( G_H/(w̃4 − 1) − F_g/2 )(Tr M̃² − N),    (4.21)

where

  G_H = (1/(2F_g)) ∫_{−∞}^{∞} (g′(w))² g″(w) / (g(w))² dw,    w̃4 = (1/F_g²) ∫_{−∞}^{∞} (g′(w))⁴ / (g(w))³ dw.

We then have the following CLT for L̃_ω, which generalizes the results in [22].

Theorem 4.5. Assume the conditions of Theorem 4.1 and Assumption 3.1 with φ > 3/8. If ωF_g < 1, then

  L̃_ω ⇒ N(m̃_k, Ṽ0),    (4.22)

where the mean and the variance are given by

  m̃_k = −(1/2) log(1 − ωF_g) + ( (w2 − 1)G_H/(w̃4 − 1) − F_g/2 )ω + ((w̃4 − 3)/4)(ωF_g)²
        + k( −log(1 − ωF_g) + ( 2F_{g,d}/w2 − F_g )ω + ( G_H²/(w̃4 − 1) − F_g²/2 )ω² ),    (4.23)

  Ṽ0 = −2 log(1 − ωF_g) + ( 4F_{g,d}/w2 − 2F_g )ω + ( 2G_H²/(w̃4 − 1) − F_g² )ω².    (4.24)

Proof. Theorem 4.5 follows directly from Theorem 5.3 in Section 5.

Based on Theorem 4.5, we can adapt the test in Algorithm 2 to a test that utilizes the entrywise transformation. In this test, we compute L̃_ω and compare it with the critical value

  m̃_{(k1+k2)/2} := (m̃_{k1} + m̃_{k2})/2.    (4.25)

See Algorithm 3 for the details.

Algorithm 3 Hypothesis test for a spiked Wigner matrix with entrywise transformation
Input: data M_ij, parameters w2, w4, ω, densities g, g_d
 M̃ ← transformed matrix in (3.2)
 L̃_ω ← test statistic in (4.21)
 m̃_{(k1+k2)/2} ← critical value in (4.25) with (4.23)
 if L̃_ω ≤ m̃_{(k1+k2)/2} then
  Accept H_{k1}
 else
  Accept H_{k2}
 end if

The limiting error of the test is given as follows.

Theorem 4.6. The error of the test in Algorithm 3 converges to

  erfc( ((k2 − k1)/4) √(Ṽ0/2) ).

Proof. Theorem 4.6 is a direct consequence of Theorem 5.6.

We also propose an analogous test for the additive model of the spiked rectangular matrices as follows. Recall the transformed matrix Ỹ ≡ Ỹ^{(0)} in (3.3). Define the test statistic L̃_ω by

  L̃_ω = −log det( (1 + d0/(ωF_g))(1 + ωF_g)I − ỸỸ^T ) + (2ω/d0)( G_H/(w̃4 − 1) − F_g/2 )(Tr ỸỸ^T − M)
       + M( ωF_g/d0 − log(ωF_g/d0) − ((1 − d0)/d0) log(1 + ωF_g) ).    (4.26)

We then have the following CLT for this test statistic.

Theorem 4.7. Assume the conditions of Theorem 4.4 and Assumption 3.1 with φ > 3/8. If ω < √d0/F_g, then

  L̃_ω ⇒ N(m̃_k, Ṽ0),    (4.27)

where the mean and the variance are given by

  m̃0 = −(1/2) log( 1 − ω²F_g²/d0 ) + (ω²F_g²/2d0)(w̃4 − 3),    (4.28)
  m̃_k = m̃0 + k( −log( 1 − ω²F_g²/d0 ) + (2ω²/d0)( G_H²/(w̃4 − 1) − F_g²/2 ) ),    (4.29)

and

  Ṽ0 = (4ω²/d0)( G_H²/(w̃4 − 1) − F_g²/2 ) − 2 log( 1 − ω²F_g²/d0 ).    (4.30)

With Theorem 4.7, we can adjust Algorithm 2 for the weak detection of the signal in the additive model of spiked rectangular matrices with the following changes:

• the data matrix is Y_ij (instead of M_ij);
• the transformed matrix is Ỹ (instead of M̃), defined by (3.3) with α = 0;
• the test statistic L̃_ω is defined by (4.26) (instead of (4.21));
• the critical value m̃_{(k1+k2)/2} is obtained from (4.25) with (4.29) (instead of (4.23)).

In Appendix A, we consider several examples of spiked Wigner matrices and spiked rectangular matrices, where we compare the errors from numerical simulations with the theoretical errors of the proposed algorithms. We find that the numerical errors of the proposed tests closely match the corresponding theoretical errors, and that the error of Algorithm 3 is lower than that of Algorithm 2. (A sketch of Algorithm 3 follows.)
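A sketch of Algorithm 3 (interface ours), taking the noise-dependent constants F_g, F_{g,d}, G_H, and w̃4 as precomputed inputs, with M_t the transformed matrix (3.2):

```python
import numpy as np

def algorithm3(M_t, omega, consts, k1, k2):
    """Test H_{k1} vs H_{k2} on a pre-transformed spiked Wigner matrix M_t.

    consts: dict with w2, Fg, Fgd, GH, w4t (the tilde-w4 of Section 4.3),
    assumed precomputed from the noise densities g and g_d."""
    N = M_t.shape[0]
    w2, Fg, Fgd, GH, w4t = (consts[s] for s in ("w2", "Fg", "Fgd", "GH", "w4t"))
    _, logdet = np.linalg.slogdet((1 + omega * Fg) * np.eye(N)
                                  - np.sqrt(omega * Fg) * M_t)
    # test statistic (4.21)
    L = (-logdet + omega * Fg * N / 2
         + np.sqrt(omega) * (2 * np.sqrt(Fgd) / w2 - np.sqrt(Fg)) * np.trace(M_t)
         + omega * (GH / (w4t - 1) - Fg / 2) * (np.trace(M_t @ M_t) - N))
    # means from (4.23): m_k = m_0 + k * V_0 / 2 with V_0 from (4.24)
    m0 = (-0.5 * np.log(1 - omega * Fg)
          + ((w2 - 1) * GH / (w4t - 1) - Fg / 2) * omega
          + (w4t - 3) * (omega * Fg) ** 2 / 4)
    V0 = (-2 * np.log(1 - omega * Fg) + (4 * Fgd / w2 - 2 * Fg) * omega
          + (2 * GH**2 / (w4t - 1) - Fg**2) * omega**2)
    crit = m0 + (k1 + k2) * V0 / 4           # (m_{k1} + m_{k2}) / 2
    return k1 if L <= crit else k2
```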
4.4 Rank estimation

The test in Algorithm 2 requires prior knowledge of k1 and k2, the candidate ranks of the planted spike. In this section, we adapt the idea of Algorithm 2 to estimate the rank of the signal when no prior information on the rank k is available. Recall that the test statistic L_ω defined in (4.9) does not depend on the rank of the spike. As proved in Theorem 4.1, L_ω converges to a Gaussian random variable with mean m_k and variance V0, where the means m_k = m0 + kV0/2 are equally spaced in k and V0 does not depend on k. It is then natural to take as the best candidate for k, which we call κ, the minimizer of the distance |L_ω − m_k|. This procedure is equivalent to finding the nearest nonnegative integer to the value

  κ′ := 2(L_ω − m0)/V0,    (4.31)

rounding half down. We describe the procedure in Algorithm 4.

Algorithm 4 Rank estimation
Input: data M_ij (or Y_ij), parameters w2, w4, ω
 L_ω ← test statistic in (4.9) or (4.16)
 m0 ← mean in (4.10) or (4.20)
 m1 ← mean in (4.13) or (4.18) with k = 1
 κ′ ← value in (4.31)
 if L_ω ≤ (m0 + m1)/2 then
  Set κ = 0
 else
  Set κ = ⌈κ′ − 0.5⌉
 end if

For a spiked Wigner matrix, for example, the probability of error of Algorithm 4 converges to

  P(k = 0) · P( Z > √V0/4 ) + Σ_{i=1}^∞ P(k = i) · P( |Z| > √V0/4 ) = ( 1 − P(k = 0)/2 ) · erfc( (1/4)√(V0/2) ),    (4.32)

where Z is a standard Gaussian random variable. Note that the error depends only on P(k = 0).

The error can be lowered if the range of k is known a priori; see Appendix A. It is also possible to improve Algorithm 4 by pre-transforming the data matrix entrywise as in Section 4.3. We omit the details. (A minimal sketch of the estimator follows.)
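A minimal sketch of Algorithm 4 (function name ours):

```python
import numpy as np

def algorithm4(L, m0, V0):
    """Rank estimate from the statistic L, with null mean m0 and variance V0.

    The means m_k = m0 + k*V0/2 are equally spaced, so kappa is the nearest
    nonnegative integer to kappa' = 2(L - m0)/V0, rounding half down."""
    kappa_prime = 2 * (L - m0) / V0                # the value (4.31)
    if L <= m0 + V0 / 4:                           # compare with (m0 + m1)/2
        return 0
    return int(np.ceil(kappa_prime - 0.5))
```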
5 Central Limit Theorems

In this section, we collect our results on general CLTs for the LSS of spiked random matrices. To state them precisely, we introduce the Chebyshev polynomials of the first kind.

Definition 5.1 (Chebyshev polynomial). The n-th Chebyshev polynomial (of the first kind) T_n is a degree-n polynomial defined by T_0(x) = 1, T_1(x) = x, and

  T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x).

We first state a CLT for the LSS of spiked Wigner matrices. Recall that we denote by µ1 ≥ µ2 ≥ · · · ≥ µN the eigenvalues of a spiked Wigner matrix M.

Theorem 5.2. Assume the conditions of Theorem 4.1. Suppose that a function f is analytic on an open interval containing [−2, 2]. Then

  ( Σ_{i=1}^N f(µ_i) − N ∫_{−2}^{2} (√(4 − z²)/2π) f(z) dz ) ⇒ N(m_k(f), V0(f)).

The mean and the variance of the limiting Gaussian distribution are given by

  m_k(f) = (1/4)(f(2) + f(−2)) − (1/2)τ0(f) + (w2 − 2)τ2(f) + (w4 − 3)τ4(f) + k Σ_{ℓ=1}^∞ √(ω^ℓ) τℓ(f),
  V0(f) = (w2 − 2)τ1(f)² + 2(w4 − 3)τ2(f)² + 2 Σ_{ℓ=1}^∞ ℓ τℓ(f)²,

where we let

  τℓ(f) = (1/π) ∫_{−2}^{2} Tℓ(x/2) f(x)/√(4 − x²) dx.

Furthermore, for m_k, m0, and V0 defined in Theorem 4.1,

  | (m_k(f) − m0(f)) / √(V0(f)) | ≤ | (m_k − m0) / √V0 |,

and the equality holds if and only if f(x) = C1 φ_ω(x) + C2 for some constants C1 and C2, where

  φ_ω(x) := log( 1 / (1 − √ω x + ω) ) + √ω(2/w2 − 1)x + ω( 1/(w4 − 1) − 1/2 )x².

We give the proof of Theorem 5.2 in Appendix C. With the entrywise transformation of Section 4.3, Theorem 5.2 changes as follows. Recall that µ̃1 ≥ µ̃2 ≥ · · · ≥ µ̃N are the eigenvalues of the transformed matrix M̃.

Theorem 5.3. Assume the conditions of Theorem 5.2 and Assumption 3.1 with φ > 3/8. If ωF_g < 1, then

  ( Σ_{i=1}^N f(µ̃_i) − N ∫_{−2}^{2} (√(4 − z²)/2π) f(z) dz ) ⇒ N(m̃_k(f), Ṽ0(f)).

The mean and the variance of the limiting Gaussian distribution are given by

  m̃_k(f) = (1/4)(f(2) + f(−2)) − (1/2)τ0(f) + k√(ωF_{g,d}) τ1(f) + (w2 − 2 + kωG_H)τ2(f)
          + (w̃4 − 3)τ4(f) + k Σ_{ℓ=3}^∞ √((ωF_g)^ℓ) τℓ(f),    (5.1)
  Ṽ0(f) = (w2 − 2)τ1(f)² + 2(w̃4 − 3)τ2(f)² + 2 Σ_{ℓ=1}^∞ ℓ τℓ(f)².

Furthermore, for m̃_k, m̃0, and Ṽ0 defined in Theorem 4.5,

  | (m̃_{k2}(f) − m̃_{k1}(f)) / √(Ṽ0(f)) | ≤ | (m̃_{k2} − m̃_{k1}) / √Ṽ0 |,

and the equality holds if and only if f(x) = C1 φ̃_ω(x) + C2 for some constants C1 and C2, with

  φ̃_ω(x) := log( 1 / (1 − √(ωF_g) x + ωF_g) ) + √ω( 2√(F_{g,d})/w2 − √(F_g) )x + ω( G_H/(w̃4 − 1) − F_g/2 )x².

We also prove Theorem 5.3 in Appendix C.

Remark 5.4. For the general case where Λ = diag(ω1, · · · , ωk) with possibly distinct ω_i's, we can prove the CLT and the transformed CLT, analogous to Theorems 5.2 and 5.3, respectively, where the means of the limiting Gaussians are given by

  m_M(f) = (1/4)(f(2) + f(−2)) − (1/2)τ0(f) + (w2 − 2)τ2(f) + (w4 − 3)τ4(f) + Σ_{s=1}^k Σ_{ℓ=1}^∞ √(ω_s^ℓ) τℓ(f),

  m̃_M(f) = (1/4)(f(2) + f(−2)) − (1/2)τ0(f) + (w2 − 2)τ2(f) + (w̃4 − 3)τ4(f)
          + Σ_{s=1}^k ( √(ω_s F_{g,d}) τ1(f) + ω_s G_H τ2(f) ) + Σ_{s=1}^k Σ_{ℓ=3}^∞ √((ω_s F_g)^ℓ) τℓ(f),

and the variances are equal to V0(f) in Theorem 5.2 and Ṽ0(f) in Theorem 5.3, respectively. Adapting the proposed tests in Algorithms 2 and 3, it is possible to construct hypothesis tests for weak detection in this case as well.

The next result is the CLT for the LSS of spiked rectangular matrices Y, where we denote by µ1 ≥ µ2 ≥ · · · ≥ µM the eigenvalues of YY^T.

Theorem 5.5. Assume the conditions of Theorem 4.4. Suppose that a function f is analytic on an open set containing the interval [d−, d+]. Then

  ( Σ_{i=1}^M f(µ_i) − M ∫_{d−}^{d+} ( √((x − d−)(d+ − x)) / (2πd0x) ) f(x) dx ) ⇒ N(m_k(f), V0(f)).    (5.2)

The mean and the variance of the limiting Gaussian distribution are given by

  m_k(f) = ( f̃(2) + f̃(−2) )/4 − τ0(f̃)/2 + (w4 − 3)τ2(f̃) + k Σ_{ℓ=1}^∞ (ω/√d0)^ℓ τℓ(f̃)

and

  V0(f) = 2 Σ_{ℓ=1}^∞ ℓ τℓ(f̃)² + (w4 − 3)τ1(f̃)²,

where we let f̃(x) = f(√d0 x + 1 + d0).

Furthermore, for m_k, m0, and V0 defined in Theorem 4.4,

  | (m_{k2}(f) − m_{k1}(f)) / √(V0(f)) | ≤ | (m_{k2} − m_{k1}) / √V0 |,

and the equality holds if and only if f(x) = C1 φ_ω(x) + C2 for some constants C1 and C2, with

  φ_ω(x) = (ω/d0)( 2/(w4 − 1) − 1 )x − log( (1 + d0/ω)(1 + ω) − x ).

Lastly, we state the pre-transformed CLT for the LSS of the additive model of spiked rectangular matrices. Let Ỹ be the transformed matrix and µ̃1 ≥ µ̃2 ≥ · · · ≥ µ̃M the eigenvalues of ỸỸ^T.

Theorem 5.6. Assume the conditions of Theorem 5.5 and Assumption 3.1 with φ > 3/8. If ω < √d0/F_g, then

  ( Σ_{i=1}^M f(µ̃_i) − M ∫_{d−}^{d+} f(x) dµ_MP(x) ) ⇒ N(m̃_k(f), Ṽ0(f)).    (5.3)

The mean and the variance of the limiting Gaussian distribution are given by

  m̃_k(f) = ( f̃(2) + f̃(−2) )/4 − (1/2)τ0(f̃) + (kω/√d0)(G_H − F_g)τ1(f̃) + (w̃4 − 3)τ2(f̃)
          + k Σ_{ℓ=1}^∞ (ωF_g/√d0)^ℓ τℓ(f̃)    (5.4)

and

  Ṽ0(f) = 2 Σ_{ℓ=1}^∞ ℓ τℓ(f̃)² + (w̃4 − 3)τ1(f̃)²,    (5.5)

where f̃(x) = f(√d0 x + 1 + d0).

Furthermore, for m̃_k, m̃0, and Ṽ0 defined in Theorem 4.7, the analogous inequality holds, and the equality holds if and only if f(x) = C1 φ̃_ω(x) + C2 for some constants C1 and C2, with

  φ̃_ω(x) = (2ω/d0)( G_H/(w̃4 − 1) − F_g/2 )x − log( (d0/(ωF_g) + 1)(ωF_g + 1) − x ).

Remark 5.7. As in Remark 5.4, for the general case with Λ = diag(ω1, · · · , ωk), the CLT and the transformed CLT hold with the adjusted means

  m_Y(f) = ( f̃(2) + f̃(−2) )/4 − τ0(f̃)/2 + (w4 − 3)τ2(f̃) + Σ_{s=1}^k Σ_{ℓ=1}^∞ (ω_s/√d0)^ℓ τℓ(f̃),
  m̃_Y(f) = ( f̃(2) + f̃(−2) )/4 − (1/2)τ0(f̃) + (w̃4 − 3)τ2(f̃) + Σ_{s=1}^k (ω_s/√d0)(G_H − F_g)τ1(f̃) + Σ_{s=1}^k Σ_{ℓ=1}^∞ (ω_sF_g/√d0)^ℓ τℓ(f̃),

where the variances are given by V0(f) and Ṽ0(f), respectively. Moreover, the corresponding optimal functions and test statistics can be computed by following the same procedure as in [34].
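The coefficients τℓ(f) can be computed numerically via the substitution x = 2cos θ. The sketch below (ours) also checks the standard generating-function expansion −log((1 + ω) − √ω x) = 2 Σ_{ℓ≥1} ω^{ℓ/2} Tℓ(x/2)/ℓ, which gives τℓ = ω^{ℓ/2}/ℓ for the log-determinant part of the test statistics.

```python
import numpy as np

def tau(ell, f, n=2000):
    """tau_ell(f) = (1/pi) * integral over [-2,2] of T_ell(x/2) f(x)/sqrt(4 - x^2) dx.

    With x = 2cos(theta) this equals (1/pi) * integral over [0,pi] of
    cos(ell*theta) f(2cos(theta)) dtheta, evaluated here by the midpoint rule."""
    theta = (np.arange(n) + 0.5) * np.pi / n
    return np.mean(np.cos(ell * theta) * f(2 * np.cos(theta)))

omega = 0.4
f = lambda x: -np.log(1 + omega - np.sqrt(omega) * x)
print(tau(3, f), omega**1.5 / 3)   # both ~ 0.0843
```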
6 Conclusion and Future Works

In this paper, we considered detection problems for spiked random matrix models with general ranks. First, we proved the sub-optimality of the PCA when the noise is non-Gaussian. Further, we proposed a hypothesis test based on the central limit theorem for the linear spectral statistics of the data matrix, and introduced a test for rank estimation that does not require any prior information on the rank of the signal. It was shown that the error of the proposed hypothesis test matches the error of the likelihood ratio test when the noise is Gaussian and the signal-to-noise ratio is small. With knowledge of the density of the noise, the test was further improved by applying an entrywise transformation.

We believe that the hypothesis test with the entrywise transformed matrix proposed in this paper can be extended to the multiplicative model of spiked rectangular matrices. This will be discussed in future work.

Acknowledgments

The work of J. H. Jung and J. O. Lee was partially supported by National Research Foundation of Korea under grant number NRF-2019R1A5A1028324. The work of H. W. Chung was partially supported by National Research Foundation of Korea under grant number 2017R1E1A1A01076340 and by the Ministry of Science and ICT, Korea, under an ITRC Program, IITP-2019-2018-0-01402.

References

[1] E. Abbe. Community detection and stochastic block models: recent developments. The Journal of Machine Learning Research, 18(1):6446–6531, 2017.
[2] O. H. Ajanki, L. Erdős, and T. Krüger. Universality for general Wigner-type matrices. Probab. Theory Related Fields, 169(3):667–727, 2017.
[3] J. Alt. Singularities of the density of states of random Gram matrices. Electron. Commun. Probab., 22:1–13, 2017.
[4] J. Alt, L. Erdős, and T. Krüger. Local law for random Gram matrices. Electron. J. Probab., 22:1–41, 2017.
[5] B. Aubin, B. Loureiro, A. Maillard, F. Krzakala, and L. Zdeborová. The spiked matrix model with generative priors. Advances in Neural Information Processing Systems, 32, 2019.
[6] Z. D. Bai and J. Yao. On the convergence of the spectral empirical process of Wigner matrices. Bernoulli, 11(6):1059–1092, 2005.
[7] J. Baik, G. B. Arous, and S. Péché. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. Ann. Probab., 33(5):1643–1697, 2005.
[8] J. Baik and J. O. Lee. Fluctuations of the free energy of the spherical Sherrington–Kirkpatrick model. J. Stat. Phys., 165(2):185–224, 2016.
[9] J. Baik and J. O. Lee. Fluctuations of the free energy of the spherical Sherrington–Kirkpatrick model with ferromagnetic interaction. Ann. Henri Poincaré, 18(6):1867–1917, 2017.
[10] J. Baik and J. O. Lee. Free energy of bipartite spherical Sherrington–Kirkpatrick model. Ann. Inst. Henri Poincaré Probab. Stat., 56(4):2897–2934, 2020.
[11] J. Baik, J. O. Lee, and H. Wu. Ferromagnetic to paramagnetic transition in spherical spin glass. J. Stat. Phys., 173(5):1484–1522, 2018.
[12] D. Banerjee and Z. Ma. Optimal hypothesis testing for stochastic block models with growing degrees. arXiv:1705.05305, 2017.
[13] Z. Bao, X. Ding, J. Wang, and K. Wang. Statistical inference for principal components of spiked covariance matrices. Ann. Stat., 50(2):1144–1169, 2022.
[14] F. Benaych-Georges and R. R. Nadakuditi. The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Adv. Math., 227(1):494–521, 2011.
[15] F. Benaych-Georges and R. R. Nadakuditi. The singular values and vectors of low rank perturbations of large rectangular random matrices. J. Multivar. Anal., 111:120–135, 2012.
[16] P. J. Bickel and P. Sarkar. Hypothesis testing for automated community detection in networks. J. R. Stat. Soc. B: Stat. Methodol., 78(1):253–273, 2016.
[17] A. Bloemendal, L. Erdős, A. Knowles, H.-T. Yau, and J. Yin. Isotropic local laws for sample covariance and generalized Wigner matrices. Electron. J. Probab., 19:no. 33, 53, 2014.
[18] A. Bloemendal, A. Knowles, H.-T. Yau, and J. Yin. On the principal components of sample covariance matrices. Probab. Theory Related Fields, 164(1-2):459–552, 2016.
[19] C. Butucea and Y. I. Ingster. Detection of a sparse submatrix of a high-dimensional noisy matrix. Bernoulli, 19(5B):2652–2688, 2013.
[20] T. Cai, Z. Ma, and Y. Wu. Optimal estimation and rank detection for sparse spiked covariance matrices. Probab. Theory Related Fields, 161(3):781–815, 2015.
[21] H. W. Chung, J. Lee, and J. O. Lee. Asymptotic normality of log likelihood ratio and fundamental limit of the weak detection for spiked Wigner matrices. arXiv:2203.00821, 2022.
[22] H. W. Chung and J. O. Lee. Weak detection of signal in the spiked Wigner model. In International Conference on Machine Learning, pages 1233–1241, 2019.
[23] R. Couillet. Robust spiked random matrices and a robust G-MUSIC estimator. J. Multivar. Anal., 140:139–161, 2015.
[24] M. Dia, N. Macris, F. Krzakala, T. Lesieur, and L. Zdeborová. Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula. Advances in Neural Information Processing Systems, 29, 2016.
[25] X. Ding. High dimensional deformed rectangular matrices with applications in matrix denoising. Bernoulli, 26(1):387–417, 2020.
[26] X. Ding and F. Yang. Tracy–Widom distribution for heterogeneous Gram matrices with applications in signal detection. IEEE Trans. Inf. Theory, 2022.
[27] E. Dobriban. Permutation methods for factor analysis and PCA. Ann. Stat., 48(5):2824–2847, 2020.
[28] A. El Alaoui and M. I. Jordan. Detection limits in the high-dimensional spiked rectangular model. In Conference On Learning Theory, pages 410–438, 2018.
[29] A. El Alaoui, F. Krzakala, and M. Jordan. Fundamental limits of detection in the spiked Wigner model. Ann. Stat., 48(2):863–885, 2020.
[30] L. Erdős, A. Knowles, H.-T. Yau, and J. Yin. Spectral statistics of Erdős–Rényi graphs I: Local semicircle law. Ann. Probab., 41(3B):2279–2375, 2013.
[31] I. M. Johnstone. On the distribution of the largest eigenvalue in principal components analysis. Ann. Stat., 29(2):295–327, 2001.
[32] I. M. Johnstone and A. Onatski. Testing in high-dimensional spiked models. Ann. Stat., 48(3):1231–1254, 2020.
[33] J. H. Jung, H. W. Chung, and J. O. Lee. Weak detection in the spiked Wigner model with general rank. arXiv:2001.05676, 2020.
[34] J. H. Jung, H. W. Chung, and J. O. Lee. Detection of signal in the spiked rectangular models. In International Conference on Machine Learning, pages 5158–5167. PMLR, 2021.
[35] Z. T. Ke, Y. Ma, and X. Lin. Estimation of the number of spiked eigenvalues in a covariance matrix by bulk eigenvalue matching analysis. J. Am. Stat. Assoc., pages 1–19, 2021.
[36] A. Knowles and J. Yin. The isotropic semicircle law and deformation of Wigner matrices. Comm. Pure Appl. Math., 66(11):1663–1750, 2013.
[37] A. Knowles and J. Yin. Anisotropic local laws for random matrices. Probab. Theory Related Fields, 169(1-2):257–352, 2017.
[39] J. O. Lee and K. Schnelli. Local law and Tracy-Widom limit for sparse random matrices. Probab. Theory Related Fields, 171(1-2):543-616, 2018.

[40] J. Lei. A goodness-of-fit test for stochastic block models. Ann. Stat., 44(1):401-424, 2016.

[41] M. Lelarge and L. Miolane. Fundamental limits of symmetric low-rank matrix estimation. Probab. Theory Related Fields, 173(3-4):859-929, 2019.

[42] T. Lesieur, F. Krzakala, and L. Zdeborová. MMSE of probabilistic low-rank matrix estimation: Universality with respect to the output channel. In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 680-687, 2015.

[43] L. Miolane. Fundamental limits of low-rank matrix estimation: the non-symmetric case. arXiv:1702.00473, 2017.

[44] A. Montanari, D. Reichman, and O. Zeitouni. On the limitation of spectral methods: From the Gaussian hidden clique problem to rank-one perturbations of Gaussian tensors. In Advances in Neural Information Processing Systems, pages 217-225, 2015.

[45] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. NIST Handbook of Mathematical Functions. U.S. Department of Commerce, National Institute of Standards and Technology, Washington, DC; Cambridge University Press, Cambridge, 2010.

[46] A. Onatski. Testing hypotheses about the number of factors in large factor models. Econometrica, 77(5):1447-1479, 2009.

[47] A. Onatski, M. J. Moreira, and M. Hallin. Asymptotic power of sphericity tests for high-dimensional data. Ann. Stat., 41(3):1204-1231, 2013.

[48] A. Onatski, M. J. Moreira, and M. Hallin. Signal detection in high dimension: The multispiked case. Ann. Stat., 42(1):225-254, 2014.

[49] D. Passemier and J.-F. Yao. On determining the number of spikes in a high-dimensional spiked population model. Random Matrices: Theory and Applications, 1(01):1150002, 2012.

[50] A. Perry, A. S. Wein, A. S. Bandeira, and A. Moitra. Optimality and sub-optimality of PCA I: Spiked random matrix models. Ann. Stat., 46(5):2416-2451, 2018.

A Examples and Simulations

In Appendix A, we consider specific examples of spiked random matrices under various settings. We first demonstrate with an example how the improved PCA in Section 3 changes the detection threshold. We then provide the details of the proposed tests in Algorithms 2 and 3 with different examples, apply the rank estimation test in Algorithm 4 to these examples, and compute the theoretical errors. We also perform numerical simulations for the proposed tests and compare the numerical errors with the theoretical errors.

A.1 Spiked Wigner matrix

A.1.1 Improved PCA with Entrywise Transformation

Our first example is a spiked Wigner matrix with non-Gaussian noise, to which we apply the entrywise transformation for the improved PCA. We let the density function of the noise be a bimodal distribution with unit variance, defined as
$$ g(x) = g_d(x) = \frac{1}{\sqrt{2\pi}} \left( e^{-2(x-\sqrt{3}/2)^2} + e^{-2(x+\sqrt{3}/2)^2} \right), \qquad (A.1) $$
which is the density function of the random variable
$$ \frac{1}{2} N + \frac{\sqrt{3}}{2} R, $$
where $N$ is a standard Gaussian random variable and $R$ is a Rademacher random variable, independent of each other.
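As a quick numerical aside (our addition; the simulations in this appendix were run in Matlab, and this Python sketch only serves as an illustration), the Fisher information $F_g = \int (g'(x))^2/g(x)\,dx$ of this density, which sets the improved detection threshold used below, can be evaluated by numerical integration:

```python
import numpy as np
from scipy.integrate import quad

c = np.sqrt(3) / 2

def g(x):   # bimodal density (A.1)
    return (np.exp(-2 * (x - c)**2) + np.exp(-2 * (x + c)**2)) / np.sqrt(2 * np.pi)

def dg(x):  # its derivative g'(x)
    return (-4 * (x - c) * np.exp(-2 * (x - c)**2)
            - 4 * (x + c) * np.exp(-2 * (x + c)**2)) / np.sqrt(2 * np.pi)

F_g, _ = quad(lambda x: dg(x)**2 / g(x), -10, 10)
print(F_g)  # should return a value close to the 2.5081 quoted below
```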
We sample $Z_{ij} = Z_{ji}$ independently from the density $g$ and let $W_{ij} = Z_{ij}/\sqrt{N}$. We let $u(\ell) = (u_1(\ell), u_2(\ell), \dots, u_N(\ell))^T$, where the $\sqrt{N}u_i(\ell)$'s are i.i.d. Rademacher random variables for $i = 1, 2, \dots, N$ and $\ell = 1, 2, 3$. The data matrix is $M = U\Lambda^{1/2}U^T + W$, where $U = [u(1), u(2), u(3)]$ and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)$. The size of the data matrix is set to be $N = 4000$. The BBP transition predicts that eigenvalues of $M$ pop up from the bulk of the spectrum if $\lambda_\ell > 1$.

With the entrywise transformation defined in (3.2), we obtain a transformed matrix
$$ \widetilde{M}_{ij} = \frac{1}{\sqrt{F_g N}}\, h(\sqrt{N} M_{ij}) \qquad (A.2) $$
where
$$ h(x) = -\frac{g'(x)}{g(x)} = \frac{2\left(\sqrt{3} - e^{4\sqrt{3}x}(\sqrt{3} - 2x) + 2x\right)}{1 + e^{4\sqrt{3}x}} \qquad (A.3) $$
and $F_g = \int_{-\infty}^{\infty} \frac{(g'(x))^2}{g(x)}\,dx \approx 2.50810$. From Theorem 3.3, it is expected that the largest eigenvalues of $\widetilde{M}$ separate from the other eigenvalues if $\lambda_\ell > \frac{1}{F_g} \approx 0.3987$.

In the numerical experiment, we set
$$ \lambda_\ell = \frac{\ell + \frac{1}{F_g}}{\ell + 1} \qquad (A.4) $$
for $\ell = 1, 2, 3$, and we compare the spectra of the matrices $M$ and $\widetilde{M}$. In Figure 1, we find three isolated eigenvalues in the spectrum of $\widetilde{M}$ (right), which are absent in that of $M$ (left).

Figure 1: The spectrum of the data matrix ($N = 4000$) with bimodal noise, before (left) and after (right) the entrywise transformation. Three eigenvalues pop up from the bulk of the spectrum after the entrywise transformation.
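A compact reimplementation of this experiment is sketched below (a minimal sketch with our own choice of size and seed; the closed form $h(x) = 4x - 2\sqrt{3}\tanh(2\sqrt{3}x)$ is an equivalent rewriting of (A.3)):

```python
import numpy as np

rng = np.random.default_rng(0)
N, F_g = 2000, 2.5081                  # Fisher information of (A.1)
lams = [(l + 1 / F_g) / (l + 1) for l in (1, 2, 3)]   # SNRs from (A.4)

# symmetric bimodal noise: Z = N(0,1)/2 + (sqrt(3)/2) * Rademacher
A = rng.standard_normal((N, N)) / 2 + np.sqrt(3) / 2 * rng.choice([-1.0, 1.0], (N, N))
Z = np.triu(A, 1); Z = Z + Z.T + np.diag(np.diag(A))
W = Z / np.sqrt(N)

U = rng.choice([-1.0, 1.0], (N, 3)) / np.sqrt(N)      # Rademacher prior
U = np.linalg.qr(U)[0]                                # orthonormalize the columns
M = U @ np.diag(np.sqrt(lams)) @ U.T + W              # spiked Wigner matrix (1.1)

h = lambda x: 4 * x - 2 * np.sqrt(3) * np.tanh(2 * np.sqrt(3) * x)  # equals (A.3)
M_t = h(np.sqrt(N) * M) / np.sqrt(F_g * N)            # transformed matrix (A.2)

print(np.linalg.eigvalsh(M)[-4:])     # no eigenvalue beyond the bulk edge 2
print(np.linalg.eigvalsh(M_t)[-4:])   # three outliers above 2 emerge
```

After the transformation, the effective SNRs become $\lambda_\ell F_g > 1$, so three outliers appear, as in Figure 1.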
A.1.2 Spiked Gaussian Wigner matrix

We consider the weak detection problem in the simplest case of the spiked Gaussian Wigner matrix, where $w_2 = 2$ (i.e., $W$ is a GOE matrix) and the signal is $u(m) = (u_1(m), u_2(m), \dots, u_N(m))$, where the $\sqrt{N}u_i(m)$'s are i.i.d. Rademacher random variables. Note that the parameters are $w_2 = 2$ and $w_4 = 3$.

In the numerical simulation, done in Matlab, we generated 10,000 independent samples of the $256 \times 256$ data matrix $M$, where we fix $k_1 = 1$ (under $H_1$) and vary $k_2$ from 2 to 5 (under $H_{k_2}$), with the SNR $\lambda$ varying from 0 to 0.7. To apply Algorithm 2, we compute
$$ L_\lambda = -\log\det\left((1+\lambda)I - \sqrt{\lambda}M\right) + \frac{\lambda N}{2}. \qquad (A.5) $$
We accept $H_1$ if
$$ L_\lambda \le \frac{m_{k_1} + m_{k_2}}{2} = -\frac{k_2+2}{2}\log(1-\lambda) $$
and reject $H_1$ otherwise. The (theoretical) limiting error of the test is
$$ \mathrm{erfc}\left(\frac{k_2-1}{4}\sqrt{-\log(1-\lambda)}\right). \qquad (A.6) $$
In Figure 2, we compare the error from the numerical simulation and the theoretical error of the proposed algorithm; the numerical errors of the test closely match the theoretical errors.

Figure 2: The errors from the simulation with Algorithm 2 (solid) versus the limiting errors (A.6) (dashed) for the setting in Section A.1.2 with $k_2 = 2, 3, 4, 5$.

A.1.3 Spiked Wigner matrix

We next consider a spiked Wigner matrix with non-Gaussian noise, where the density function of the noise entries is given by
$$ g(x) = g_d(x) = \frac{1}{2\cosh(\pi x/2)} = \frac{1}{e^{\pi x/2} + e^{-\pi x/2}}. \qquad (A.7) $$
We sample $Z_{ij} = Z_{ji}$ from the density $g$ and let $W_{ij} = Z_{ij}/\sqrt{N}$. We again let the signal be $u(m) = (u_1(m), \dots, u_N(m))$, where the $\sqrt{N}u_i(m)$'s are i.i.d. Rademacher random variables. Note that the parameters are $w_2 = 1$ and $w_4 = 5$. We again perform the numerical simulation with 10,000 samples of the $256 \times 256$ data matrix $M$ with the SNR $\lambda$ varying from 0 to 0.6, where we fix $k_1 = 1$ (under $H_1$) and $k_2 = 3$ (under $H_{k_2}$).

In Algorithm 2, we compute
$$ L_\lambda = -\log\det\left((1+\lambda)I - \sqrt{\lambda}M\right) + \frac{\lambda N}{2} + \sqrt{\lambda}\,\mathrm{Tr}\,M - \frac{\lambda}{4}\left(\mathrm{Tr}\,M^2 - N\right). \qquad (A.8) $$
We accept $H_1$ if
$$ L_\lambda \le \frac{m_{k_1}+m_{k_2}}{2} = -\frac{k_2+2}{2}\log(1-\lambda) + \frac{k_2\lambda}{2} - \frac{(k_2-3)\lambda^2}{8} $$
and accept $H_{k_2}$ otherwise. The (theoretical) limiting error of the test is
$$ \mathrm{erfc}\left(\frac{k_2-1}{4}\sqrt{-\log(1-\lambda) + \lambda - \frac{\lambda^2}{4}}\right). \qquad (A.9) $$
We can further improve the test by introducing the entrywise transformation given by
$$ h(x) = -\frac{g'(x)}{g(x)} = \frac{\pi}{2}\tanh\frac{\pi x}{2}. $$
The Fisher information is $F_g = \frac{\pi^2}{8}$, which is larger than 1. We thus construct a transformed matrix $\widetilde{M}$ by
$$ \widetilde{M}_{ij} = \frac{2\sqrt{2}}{\pi\sqrt{N}}\, h(\sqrt{N}M_{ij}) = \sqrt{\frac{2}{N}}\tanh\left(\frac{\pi\sqrt{N}}{2}M_{ij}\right). $$
If $\lambda > \frac{1}{F_g} = \frac{8}{\pi^2} \approx 0.8106$, we can apply PCA for strong detection of the signal. If $\lambda < \frac{8}{\pi^2}$, applying Algorithm 3, we compute
$$ \widetilde{L}_\lambda = -\log\det\left(\left(1 + \frac{\pi^2\lambda}{8}\right)I - \sqrt{\frac{\pi^2\lambda}{8}}\,\widetilde{M}\right) + \frac{\pi^2\lambda N}{16} + \frac{\pi\sqrt{\lambda}}{2\sqrt{2}}\,\mathrm{Tr}\,\widetilde{M} + \frac{\pi^2\lambda}{16}\left(\mathrm{Tr}\,\widetilde{M}^2 - N\right). $$
(Here, $F_g = F_{g,d} = \frac{\pi^2}{8}$, $G_H = \frac{\pi^2}{16}$, and $\widetilde{w}_4 = \frac{3}{2}$.) We accept $H_1$ if
$$ \widetilde{L}_\lambda \le -\frac{k_2+2}{2}\log\left(1 - \frac{\pi^2\lambda}{8}\right) + \frac{k_2\pi^2\lambda}{16} - \frac{3\pi^4\lambda^2}{512} $$
and accept $H_{k_2}$ otherwise. The limiting error with the entrywise transformation is
$$ \mathrm{erfc}\left(\frac{k_2-1}{4}\sqrt{-\log\left(1 - \frac{\pi^2\lambda}{8}\right) + \frac{\pi^2\lambda}{8}}\right). \qquad (A.10) $$
Since $\mathrm{erfc}(\cdot)$ is a decreasing function and $\frac{\pi^2}{8} > 1$, it is immediate to see that the limiting error in (A.10) is strictly smaller than the limiting error in (A.9).

In Figure 3, we plot the result of the simulation with $k_2 = 3$, which shows that the numerical error from Algorithm 3 is smaller than that of Algorithm 2; both errors closely match the theoretical errors in (A.10) and (A.9).

Figure 3: The errors from the simulation with Algorithm 2 (blue) and with Algorithm 3 (yellow), respectively, versus the limiting errors (A.9) of Algorithm 2 (red) and (A.10) of Algorithm 3 (purple), respectively, for the setting in Section A.1.3.

A.1.4 Rank Estimation

We again consider the example in Section A.1.2 and apply Algorithm 4 to estimate the rank of the signal. We perform the numerical simulation with 20,000 samples of the $256 \times 256$ data matrix $M$ with the SNR $\lambda$ varying from 0.025 to 0.6, choosing the rank of the signal $k$ uniformly from 0 to 4. Since we know that the range of the rank $k$ is $[0, 4]$, the (theoretical) limiting error in (4.32) changes to
$$ P(k=0)\cdot P\!\left(Z > \frac{\sqrt{V_0}}{4}\right) + \sum_{i=1}^{3} P(k=i)\cdot P\!\left(|Z| > \frac{\sqrt{V_0}}{4}\right) + P(k=4)\cdot P\!\left(Z > \frac{\sqrt{V_0}}{4}\right) $$
$$ = \left(1 - \frac{P(k=0) + P(k=4)}{2}\right) \times \mathrm{erfc}\left(\frac{1}{4}\sqrt{-\log(1-\lambda) + \left(\frac{2}{w_2}-1\right)\lambda + \left(\frac{1}{w_4-1} - \frac{1}{2}\right)\lambda^2}\right). $$
We compute the same test statistic
$$ L_\lambda = -\log\det\left((1+\lambda)I - \sqrt{\lambda}M\right) + \frac{\lambda N}{2} \qquad (A.11) $$
and find the nearest nonnegative integer to the value
$$ -\frac{L_\lambda}{\log(1-\lambda)} - \frac{1}{2}, \qquad (A.12) $$
rounding half down. Since $P(k=0) = P(k=4) = 0.2$, the limiting error of the estimation is
$$ \left(1 - \frac{P(k=0)+P(k=4)}{2}\right)\cdot\mathrm{erfc}\left(\frac{1}{4}\sqrt{-\log(1-\lambda)}\right) = 0.8\cdot\mathrm{erfc}\left(\frac{1}{4}\sqrt{-\log(1-\lambda)}\right). \qquad (A.13) $$
The result of the simulation can be found in Figure 4, where we compare the error from the estimation (Algorithm 4) and the theoretical error in (A.13). The error from the numerical simulation matches the theoretical error closely.

Figure 4: The errors from the simulation with Algorithm 4 (solid) versus the limiting error (A.13) (dashed) for the setting in Section A.1.4.
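The estimator (A.11)-(A.12) is simple to implement. The following is a minimal Python sketch for the GOE setting of Section A.1.2 (illustrative only; the function name and the example parameters are our choices):

```python
import numpy as np

def estimate_rank(M, lam):
    """Rank estimator in the spirit of Algorithm 4, GOE setting of Section A.1.2."""
    N = M.shape[0]
    # test statistic (A.11)
    _, logdet = np.linalg.slogdet((1 + lam) * np.eye(N) - np.sqrt(lam) * M)
    L = -logdet + lam * N / 2
    # nearest nonnegative integer to (A.12), rounding half down
    k_hat = int(np.ceil(-L / np.log(1 - lam) - 1.0))
    return max(k_hat, 0)

# example: rank-2 spike at a subcritical SNR
rng = np.random.default_rng(1)
N, k, lam = 256, 2, 0.5
A = rng.standard_normal((N, N))
W = (A + A.T) / np.sqrt(2 * N)                         # GOE noise, w2 = 2
U = np.linalg.qr(rng.choice([-1.0, 1.0], (N, k)))[0]   # orthonormal spike
M = np.sqrt(lam) * U @ U.T + W
print(estimate_rank(M, lam))    # noisy for small lam; accuracy improves as lam grows
```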
A.2 Spiked rectangular matrices

In this section, we check the performance of the improved PCA and of the pre-transformed LSS-based tests for spiked rectangular matrices.

A.2.1 Improved PCA with Entrywise Transformation

Additive model

We consider data with non-Gaussian noise whose density function is given by the bimodal distribution in (A.1). We sample $Z_{ij}$ independently from the density $g$ and let $X_{ij} = Z_{ij}/\sqrt{N}$. We let $u(\ell) = (u_1(\ell), u_2(\ell), \dots, u_M(\ell))^T$ and $v(\ell) = (v_1(\ell), v_2(\ell), \dots, v_N(\ell))^T$, where the $\sqrt{M}u_i(\ell)$'s and $\sqrt{N}v_j(\ell)$'s are i.i.d. Rademacher random variables for $i = 1, \dots, M$, $j = 1, \dots, N$, and $\ell = 1, 2, 3$. When we apply the entrywise transformation, defined in (3.3), with $\alpha = 0$ to the rank-3 spiked mean data matrix, we get
$$ \widetilde{Y}_{ij} = \frac{1}{\sqrt{F_g N}}\, h(\sqrt{N}Y_{ij}) \qquad (A.14) $$
where
$$ h(x) = -\frac{g'(x)}{g(x)} = \frac{2\left(\sqrt{3} - e^{4\sqrt{3}x}(\sqrt{3}-2x) + 2x\right)}{1 + e^{4\sqrt{3}x}} \qquad (A.15) $$
and $F_g = \int_{-\infty}^{\infty}\frac{(g'(x))^2}{g(x)}\,dx \approx 2.50810$. The size of the data matrix is set to be $M = 2000$, $N = 4000$, so the ratio is $d_0 = M/N = 0.5$.

Theoretically, the threshold for the BBP transition of the largest eigenvalue is $\sqrt{d_0} \approx 0.7071$ with the vanilla PCA, whereas the threshold is lowered to $\frac{\sqrt{d_0}}{F_g} \approx 0.2819$ with the improved PCA, as predicted by Theorem 3.4. For $\ell = 1, 2, 3$, we set the SNRs
$$ \lambda_\ell = \frac{\ell\sqrt{d_0} + \frac{\sqrt{d_0}}{F_g}}{\ell + 1} \qquad (A.16) $$
to observe the transitions of the largest eigenvalue after the transformation. In Figure 5, we compare the spectra of the sample covariance matrices $YY^T$ (left) and $\widetilde{Y}\widetilde{Y}^T$ (right). As in the spiked Wigner case in Section A.1.1, we again find three outlier eigenvalues only in the spectrum of $\widetilde{Y}\widetilde{Y}^T$ (right), which are absent in that of $YY^T$ (left).

Figure 5: The spectrum of the sample covariance matrix ($M = 2000$, $N = 4000$) with bimodal noise, before (left) and after (right) the entrywise transformation. Three eigenvalues pop up from the bulk of the spectrum after the entrywise transformation.

Multiplicative model

In the spiked covariance model, to clearly observe the outliers in our simulation setting, a distribution with a larger Fisher information should be used. Thus, we let the density function $g_a$ of the noise be a generalized version of the bimodal distribution with unit variance in (A.1), defined as
$$ g_a(x) = \frac{1}{2\sqrt{2(1-a^2)\pi}}\left( e^{-\frac{(x-a)^2}{2(1-a^2)}} + e^{-\frac{(x+a)^2}{2(1-a^2)}} \right), $$
which is the density function of the random variable
$$ \sqrt{1-a^2}\, N + a R. $$
We sample $Z_{ij}$ independently from the density $g_a$ and let $X_{ij} = Z_{ij}/\sqrt{N}$. We let $u(\ell) = (u_1(\ell), \dots, u_M(\ell))^T$ and $v(\ell) = (v_1(\ell), \dots, v_N(\ell))^T$, where the $\sqrt{M}u_i(\ell)$'s and $\sqrt{N}v_j(\ell)$'s are i.i.d. Rademacher random variables for $i = 1, \dots, M$, $j = 1, \dots, N$, and $\ell = 1, 2, 3$. When we apply the entrywise transformation, defined in (3.3), to the rank-3 spiked covariance data matrix, we get
$$ \widetilde{Y}_{ij} = \frac{1}{\sqrt{(\alpha^2 + 2\alpha + F_g)N}}\, h_{a,\alpha}(\sqrt{N}Y_{ij}) \qquad (A.17) $$
where
$$ h_{a,\alpha}(x) = -\frac{g_a'(x)}{g_a(x)} + \alpha x = \frac{(x-a)e^{\frac{2ax}{1-a^2}} + (x+a)}{(1-a^2)\left(1 + e^{\frac{2ax}{1-a^2}}\right)} + \alpha x \qquad (A.18) $$
and $F_g = \int_{-\infty}^{\infty}\frac{(g_a'(x))^2}{g_a(x)}\,dx \approx 5.15583$ when $a = \sqrt{21}/5$. The size of the data matrix is set to be $M = 4000$, $N = 8000$. We also use $\alpha = \sqrt{F_g}$, and the ratio is $d_0 = M/N = 0.5$. The threshold for the BBP transition of the largest eigenvalue is $\sqrt{d_0} \approx 0.7071$ for the vanilla PCA, whereas for the transformed PCA the transition occurs when the effective SNR $\lambda_{g,\ell} = \frac{1+\sqrt{F_g}}{2}\left(2\gamma_\ell + \sqrt{F_g}\,\gamma_\ell^2\right)$ crosses $\sqrt{d_0}$. (See Theorem 3.5.)
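As before, the Fisher information quoted above can be checked numerically (a hedged aside of ours; the value 5.15583 is the paper's quoted constant for $a = \sqrt{21}/5$):

```python
import numpy as np
from scipy.integrate import quad

a = np.sqrt(21) / 5
s2 = 1 - a**2                        # variance of each mixture component

def g_a(x):                          # generalized bimodal density
    return (np.exp(-(x - a)**2 / (2 * s2)) + np.exp(-(x + a)**2 / (2 * s2))) \
           / (2 * np.sqrt(2 * np.pi * s2))

def dg_a(x):                         # derivative g_a'(x)
    return -((x - a) * np.exp(-(x - a)**2 / (2 * s2))
             + (x + a) * np.exp(-(x + a)**2 / (2 * s2))) \
           / (2 * np.sqrt(2 * np.pi * s2) * s2)

F_g, _ = quad(lambda x: dg_a(x)**2 / g_a(x), -8, 8)
print(F_g)                           # expected to be close to 5.1558
```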
For $\ell = 1, 2, 3$, we set the SNRs
$$ \lambda_\ell = \frac{\ell\sqrt{d_0} + \frac{2\sqrt{d_0}}{1+\sqrt{F_g}}}{\ell+1} \qquad (A.19) $$
to observe the transitions of the largest eigenvalue after the transformation. We obtain a result analogous to the additive model; see Figure 6.

Figure 6: The spectrum of the sample covariance matrix ($M = 4000$, $N = 8000$) with bimodal noise, before (left) and after (right) the entrywise transformation. Three eigenvalues pop up from the bulk of the spectrum after the entrywise transformation.

A.2.2 Hypothesis Testing with the pre-transformed LSS estimator

We now consider an (additive) spiked rectangular matrix with non-Gaussian noise whose density function is given by (A.7). We let the signal be $u = (u_1, \dots, u_M)^T$ and $v = (v_1, \dots, v_N)^T$, where the $\sqrt{M}u_i$'s and $\sqrt{N}v_j$'s are i.i.d. Rademacher random variables for $i = 1, \dots, M$ and $j = 1, \dots, N$. Let the data matrix be
$$ Y = \sqrt{\lambda}\,uv^T + X. $$
Recall that $w_4 = 5$, $F_g = \frac{\pi^2}{8}$, $G_H = \frac{\pi^2}{16}$, and $\widetilde{w}_4 = \frac{3}{2}$. The LSS estimators are given by
$$ L_\omega = -\log\det\left(\left(1+\frac{d_0}{\omega}\right)(1+\omega)I - YY^T\right) - \frac{\omega}{2d_0}\left(\mathrm{Tr}\,YY^T - M\right) + M\left(\frac{\omega}{d_0} - \log\frac{\omega}{d_0} - \frac{1-d_0}{d_0}\log(1+\omega)\right), \qquad (A.20) $$
and
$$ \widetilde{L}_\omega = -\log\det\left(\left(1+\frac{8d_0}{\omega\pi^2}\right)\left(1+\frac{\omega\pi^2}{8}\right)I - \widetilde{Y}\widetilde{Y}^T\right) + \frac{\pi^2\omega}{8d_0}\left(\mathrm{Tr}\,\widetilde{Y}\widetilde{Y}^T - M\right) + M\left(\frac{\omega\pi^2}{8d_0} - \log\frac{\omega\pi^2}{8d_0} - \frac{1-d_0}{d_0}\log\left(1+\frac{\omega\pi^2}{8}\right)\right). \qquad (A.21) $$
With the critical values $m_\omega = -\log\left(1-\frac{\omega^2}{d_0}\right) + \frac{3\omega^2}{4d_0}$ and $\widetilde{m}_\omega = -\log\left(1-\frac{\omega^2\pi^4}{64d_0}\right) - \frac{3\pi^4\omega^2}{256d_0}$, the errors are
$$ \mathrm{erfc}\left(\frac{1}{4}\sqrt{-\log\left(1-\frac{\omega^2}{d_0}\right) - \frac{\omega^2}{2d_0}}\right) \quad\text{and}\quad \mathrm{erfc}\left(\frac{1}{4}\sqrt{-\log\left(1-\frac{\pi^4\omega^2}{64d_0}\right)}\right). $$
In Figure 7, we plot the empirical average (over 1,000 Monte Carlo simulations) of the error of the proposed test and the theoretical (limiting) error, varying the SNR $\omega$ from 0 to 0.5, with $M = 256$ and $N = 512$. The error of the proposed test closely matches the theoretical error.

Figure 7: The errors from the simulation (solid) and the theoretical limiting errors (dashed), with and without the entrywise transformation.
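The following Python sketch shows how the untransformed statistic (A.20) is evaluated in practice. It is illustrative only: for brevity it uses Gaussian noise and omits the pre-transformation (with the noise (A.7), one would evaluate $\widetilde{L}_\omega$ of (A.21) on the transformed matrix instead), and the decision rule compares the statistic against the critical value $m_\omega$ as described above.

```python
import numpy as np

def lss_statistic(Y, omega, d0):
    """The LSS estimator L_omega of (A.20) for the additive model (a sketch)."""
    Mdim = Y.shape[0]
    S = Y @ Y.T
    _, logdet = np.linalg.slogdet((1 + d0 / omega) * (1 + omega) * np.eye(Mdim) - S)
    return (-logdet - omega / (2 * d0) * (np.trace(S) - Mdim)
            + Mdim * (omega / d0 - np.log(omega / d0)
                      - (1 - d0) / d0 * np.log(1 + omega)))

# a null sample (pure noise) with M = 256, N = 512, so d0 = 0.5
rng = np.random.default_rng(2)
Mdim, N, omega = 256, 512, 0.4
d0 = Mdim / N
X = rng.standard_normal((Mdim, N)) / np.sqrt(N)
m_omega = -np.log(1 - omega**2 / d0) + 3 * omega**2 / (4 * d0)  # critical value
print(lss_statistic(X, omega, d0), m_omega)
```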
B Proof of Theorems for improved PCA

In this section, we rigorously prove Theorems 3.4 and 3.5 in Section 3, which concern the detection threshold of the improved PCA.

B.1 Preliminaries

We first introduce the following notions, which provide a simple way of making precise statements about bounds that hold, up to small powers of $N$, with probability higher than $1 - N^{-D}$ for all $D > 0$.

Definition B.1 (Overwhelming probability). We say that an event (or family of events) $\Omega$ holds with overwhelming probability if for all (large) $D > 0$ we have $P(\Omega) \ge 1 - N^{-D}$ for any sufficiently large $N$.

Definition B.2 (Stochastic domination). Let
$$ \xi = \left(\xi^{(N)}(u) : N \in \mathbb{N},\ u \in U^{(N)}\right), \qquad \zeta = \left(\zeta^{(N)}(u) : N \in \mathbb{N},\ u \in U^{(N)}\right) $$
be two families of random variables, where $U^{(N)}$ is a possibly $N$-dependent parameter set. We say that $\xi$ is stochastically dominated by $\zeta$, uniformly in $u$, if for all (small) $\varepsilon > 0$ and (large) $D > 0$,
$$ \sup_{u \in U^{(N)}} P\left(|\xi^{(N)}(u)| > N^{\varepsilon}\zeta^{(N)}(u)\right) \le N^{-D} $$
for any sufficiently large $N \ge N_0(\varepsilon, D)$. Throughout this appendix, stochastic domination will always be uniform in all parameters, including matrix indices and the spectral parameter $z$. We write $\xi \prec \zeta$ or $\xi = O_\prec(\zeta)$ if $\xi$ is stochastically dominated by $\zeta$, uniformly in $u$.

For a Wigner matrix $W$, we will use the following result for the resolvents, which is called an isotropic local semicircle law.

Lemma B.3 (Isotropic local semicircle law). Suppose that $z \in \mathbb{R}$ lies outside an open interval containing $[-2, 2]$. Let $s_{sc}(z)$ be the Stieltjes transform of the Wigner semicircle law, which is given by
$$ s_{sc}(z) = \frac{-z + \sqrt{z^2 - 4}}{2}. \qquad (B.1) $$
Then,
$$ \langle u(\ell_1), (W - zI)^{-1}u(\ell_2)\rangle = s_{sc}(z)\langle u(\ell_1), u(\ell_2)\rangle + O_\prec(N^{-\frac{1}{2}}). $$
See Theorem 2.3 of [36] (also Lemma 7.7 of [22]) for the proof of Lemma B.3.

Further, for a rectangular matrix $X$, we will use the following analogous result for the resolvents, which is called an isotropic local Marchenko-Pastur law.

Lemma B.4 (Isotropic local Marchenko-Pastur law). Suppose that $z \in \mathbb{R}$ lies outside an open interval containing $[d_-, d_+]$. Let $s(z)$ be the Stieltjes transform of the Marchenko-Pastur law, which is given by
$$ s(z) = \frac{(1-d_0-z) + \sqrt{(1-d_0-z)^2 - 4d_0 z}}{2d_0 z}. \qquad (B.2) $$
Then,
$$ \langle v(\ell_1), (X^TX - zI)^{-1}v(\ell_2)\rangle = -\left(\frac{1}{zs(z)} + 1\right)\langle v(\ell_1), v(\ell_2)\rangle + O_\prec(N^{-\frac{1}{2}}) $$
and
$$ \langle X^Tu(\ell_1), (X^TX - zI)^{-1}X^Tu(\ell_2)\rangle = (zs(z)+1)\langle u(\ell_1), u(\ell_2)\rangle + O_\prec(N^{-\frac{1}{2}}). $$
See Theorem 2.5 of [17] (also Lemma 3.7 of [18]) for the proof of Lemma B.4.

The following concentration inequality, sometimes called the large deviation estimate in random matrix theory, will be used frequently in the proof.

Lemma B.5 (Large deviation estimate). Let $\left(\xi_i^{(N)}\right)$ and $\left(\zeta_i^{(N)}\right)$ be independent families of random variables and $\left(a_{ij}^{(N)}\right)$ and $\left(b_i^{(N)}\right)$ be deterministic; here $N \in \mathbb{N}$ and $i, j = 1, \dots, N$. Suppose that the complex-valued random variables $\xi_i^{(N)}$ and $\zeta_i^{(N)}$ are independent and satisfy, for all $p \ge 2$,
$$ \mathbb{E}\xi = 0, \qquad \mathbb{E}|\xi|^p \le \frac{C_p}{NB^{p-2}} \qquad (B.3) $$
for some $B \le N^{1/2}$ and some ($N$-independent) constant $C_p$. Then we have the bounds
$$ \sum_i b_i\xi_i \prec \left(\frac{1}{N}\sum_i |b_i|^2\right)^{1/2} + \frac{\max_i |b_i|}{B}, \qquad (B.4) $$
$$ \sum_{i,j} a_{ij}\xi_i\zeta_j \prec \left(\frac{1}{N^2}\sum_{i\ne j}|a_{ij}|^2\right)^{1/2} + \frac{\max_{i\ne j}|a_{ij}|}{B} + \frac{\max_i |a_{ii}|}{B^2}, \qquad (B.5) $$
$$ \sum_{i\ne j} a_{ij}\xi_i\xi_j \prec \left(\frac{1}{N^2}\sum_{i\ne j}|a_{ij}|^2\right)^{1/2} + \frac{\max_{i\ne j}|a_{ij}|}{B}. \qquad (B.6) $$
If the coefficients $a_{ij}^{(N)}$ and $b_i^{(N)}$ depend on an additional parameter $u$, then all of these estimates are uniform in $u$, i.e., the threshold $N_0 = N_0(\varepsilon, D)$ in the definition of $\prec$ depends not on $u$ but only on the constants $C_p$ from (B.3). If $B = N^{1/2}$, the bounds can further be simplified to
$$ \sum_i b_i\xi_i \prec \left(\frac{1}{N}\sum_i |b_i|^2\right)^{1/2}, \qquad \sum_{i,j} a_{ij}\xi_i\zeta_j \prec \left(\frac{1}{N^2}\sum_{i,j}|a_{ij}|^2\right)^{1/2}, \qquad \sum_{i\ne j}a_{ij}\xi_i\xi_j \prec \left(\frac{1}{N^2}\sum_{i\ne j}|a_{ij}|^2\right)^{1/2}. \qquad (B.7) $$
Proof. These estimates are an immediate consequence of Lemma 3.8 in [30].

Finally, we recall that, for our prior, $|\langle u(\ell_1), u(\ell_2)\rangle - \delta_{\ell_1\ell_2}|,\ |\langle v(\ell_1), v(\ell_2)\rangle - \delta_{\ell_1\ell_2}| \prec N^{-\phi}$.
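As a numerical illustration of Lemma B.3 (our addition, not used in the proofs), one can check the isotropic law at a real spectral parameter outside the bulk:

```python
import numpy as np

rng = np.random.default_rng(3)
N, z = 2000, 2.5
A = rng.standard_normal((N, N))
W = (A + A.T) / np.sqrt(2 * N)                    # Wigner, off-diag variance 1/N
u = rng.choice([-1.0, 1.0], N) / np.sqrt(N)       # unit vector independent of W
lhs = u @ np.linalg.solve(W - z * np.eye(N), u)   # <u, (W - zI)^{-1} u>
s_sc = (-z + np.sqrt(z * z - 4)) / 2              # (B.1); equals -0.5 at z = 2.5
print(lhs, s_sc)                                  # agree up to O(N^{-1/2})
```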
B.2 Proof of Theorem 3.3

We first prove the behavior of the $k$ largest eigenvalues described in Section 2.2, which we will call the BBP result, in our setting, following the strategy of [14, 15]. We have
$$ M - zI = W + U\Lambda^{1/2}U^T - zI = (W - zI)\left(I + (W-zI)^{-1}(U\Lambda^{1/2}U^T)\right). \qquad (B.8) $$
Thus, if $z$ is an eigenvalue of $M$ but not of $W$, then it satisfies
$$ \det\left(I + (W-zI)^{-1}U\Lambda^{1/2}U^T\right) = 0, $$
which implies that $-1$ is an eigenvalue of
$$ T \equiv T(z) := (W-zI)^{-1}U\Lambda^{1/2}U^T. $$
We then see that
$$ T(W-zI)^{-1}u(\ell) = (W-zI)^{-1}U\Lambda^{1/2}U^T(W-zI)^{-1}u(\ell) = \sqrt{\lambda_\ell}\,\langle u(\ell), (W-zI)^{-1}u(\ell)\rangle\,(W-zI)^{-1}u(\ell) + O_\prec(N^{-\phi}), $$
i.e., $(W-zI)^{-1}u(\ell)$ must be an eigenvector of $T$ with the corresponding eigenvalue $-1$. Thus, by Lemma B.3,
$$ \sqrt{\lambda_\ell}\, s_{sc}(z) = -1 + O_\prec(N^{-\phi}). $$
It is elementary to check that the solution of the above equation is $z = \sqrt{\lambda_\ell} + \frac{1}{\sqrt{\lambda_\ell}} + O_\prec(N^{-\phi})$ if and only if $\lambda_\ell > 1$.

We now turn to the proof of Theorem 3.3. For a spike with $\|U\|_\infty \prec N^{-\phi}$, suppose that the function $q$ and all its derivatives are polynomially bounded in the sense of Assumption 3.1. Following the proof of Theorem 4.8 in [50], we have the local linear approximation
$$ q(\sqrt{N}M_{ij}) = q(\sqrt{N}W_{ij}) + \sqrt{\lambda N}\,u_iu_j^T\,\mathbb{E}[q'(\sqrt{N}W_{ij})] + R_{ij}, $$
where the error $R_{ij}$ is negligible. Set
$$ M_q := \mathbb{E}[q'(\sqrt{N}W_{ij})], \qquad V_q := \mathbb{E}[q(\sqrt{N}W_{ij})^2], \qquad \widetilde{\lambda} := \lambda M_q^2/V_q, $$
and
$$ Q_{ij} := \frac{1}{\sqrt{NV_q}}\,q(\sqrt{N}W_{ij}). $$
Then the spectrum of the transformed matrix is determined by the matrix $Q + U\widetilde{\Lambda}^{1/2}U^T$. Since $Q$ is also a Wigner matrix with $N\mathbb{E}[Q_{ij}^2] = 1$, the result follows by repeating the argument above.

B.3 Proof of Theorem 3.4

We first prove the behavior of the $k$ largest eigenvalues described in Section 2.2, which we will again call the BBP result, in our setting, following the strategy of [14, 15]. Note that the $k$ largest eigenvalues of $YY^T$ are equal to the $k$ largest eigenvalues of $Y^TY$. Consider the identity
$$ Y^TY - zI = (X + U\Lambda^{1/2}V^T)^T(X + U\Lambda^{1/2}V^T) - zI = (X^TX - zI)\left(I + T(z)\right) \qquad (B.9) $$
where
$$ T \equiv T(z) := (X^TX - zI)^{-1}\left(X^TU\Lambda^{1/2}V^T + V\Lambda^{1/2}U^TX + V\Lambda^{1/2}U^TU\Lambda^{1/2}V^T\right). $$
Thus, if $z$ is an eigenvalue of $YY^T$ but not of $XX^T$, then it satisfies
$$ \det\left(I + T(z)\right) = 0, $$
which implies that $-1$ is an eigenvalue of $T(z)$.

Note that since $\|X\|, \|(X^TX - zI)^{-1}\| \prec 1$, from Lemma B.5,
$$ \langle b, (X^TX-zI)^{-1}X^Ta\rangle = \sum_{i,j}\left[(X^TX-zI)^{-1}X^T\right]_{ij} b_i a_j \prec \left(\frac{1}{N^2}\sum_{i\ne j}\left|\left[(X^TX-zI)^{-1}X^T\right]_{ij}\right|^2\right)^{1/2} + N^{-\phi}\max_{i,j}\left|\left[(X^TX-zI)^{-1}X^T\right]_{ij}\right| $$
$$ \prec \left(\frac{1}{N}\left\|(X^TX-zI)^{-1}X^T\right\|^2\right)^{1/2} + N^{-\phi}\left\|(X^TX-zI)^{-1}X^T\right\| \prec N^{-\phi}. $$
Then the matrix $T$ satisfies
$$ T\cdot(X^TX-zI)^{-1}X^Tu(\ell) = (X^TX-zI)^{-1}X^TU\Lambda^{1/2}\left(V^T(X^TX-zI)^{-1}X^Tu(\ell)\right) + (X^TX-zI)^{-1}V\Lambda^{1/2}\left(U^TX(X^TX-zI)^{-1}X^Tu(\ell)\right) + (X^TX-zI)^{-1}V\Lambda^{1/2}(U^TU)\Lambda^{1/2}\left(V^T(X^TX-zI)^{-1}X^Tu(\ell)\right) = \sqrt{\lambda_\ell}\,\langle u(\ell), X(X^TX-zI)^{-1}X^Tu(\ell)\rangle\,(X^TX-zI)^{-1}v(\ell) + \theta_1(\ell) $$
and
$$ T\cdot(X^TX-zI)^{-1}v(\ell) = (X^TX-zI)^{-1}X^TU\Lambda^{1/2}\left(V^T(X^TX-zI)^{-1}v(\ell)\right) + (X^TX-zI)^{-1}V\Lambda^{1/2}\left(U^TX(X^TX-zI)^{-1}v(\ell)\right) + (X^TX-zI)^{-1}V\Lambda^{1/2}(U^TU)\Lambda^{1/2}\left(V^T(X^TX-zI)^{-1}v(\ell)\right) = \sqrt{\lambda_\ell}\,\langle v(\ell), (X^TX-zI)^{-1}v(\ell)\rangle\,(X^TX-zI)^{-1}X^Tu(\ell) + \lambda_\ell\langle v(\ell), (X^TX-zI)^{-1}v(\ell)\rangle\,(X^TX-zI)^{-1}v(\ell) + \theta_2(\ell), $$
where $\|\theta_1(\ell)\|, \|\theta_2(\ell)\| = O_\prec(N^{-\phi})$ since $\|U^TU - I\|_F \prec N^{-\phi}$.

In particular, the $k$ extremal eigenvectors of $T$ are linear combinations of $(X^TX-zI)^{-1}X^Tu(\ell)$ and $(X^TX-zI)^{-1}v(\ell)$.

Suppose that $a_\ell(X^TX-zI)^{-1}X^Tu(\ell) + b_\ell(X^TX-zI)^{-1}v(\ell)$ is an eigenvector of $T$ with the corresponding eigenvalue $-1$. Then, from Lemma B.4,
$$ -\left(a_\ell(X^TX-zI)^{-1}X^Tu(\ell) + b_\ell(X^TX-zI)^{-1}v(\ell)\right) = T\left(a_\ell(X^TX-zI)^{-1}X^Tu(\ell) + b_\ell(X^TX-zI)^{-1}v(\ell)\right) $$
$$ = -b_\ell\sqrt{\lambda_\ell}\left(\frac{1}{zs(z)}+1\right)(X^TX-zI)^{-1}X^Tu(\ell) + a_\ell\sqrt{\lambda_\ell}\,(zs(z)+1)(X^TX-zI)^{-1}v(\ell) - b_\ell\lambda_\ell\left(\frac{1}{zs(z)}+1\right)(X^TX-zI)^{-1}v(\ell) + \widetilde{\theta}(\ell) \qquad (B.10) $$
for some $\widetilde{\theta}(\ell)$, which is a linear combination of $(X^TX-zI)^{-1}X^Tu(\ell)$ and $(X^TX-zI)^{-1}v(\ell)$ with $\|\widetilde{\theta}(\ell)\| = O_\prec(N^{-\phi})$.

Since $U$, $V$, and $X$ are independent, $(X^TX-zI)^{-1}X^Tu(\ell)$ and $(X^TX-zI)^{-1}v(\ell)$ are linearly independent with overwhelming probability. Thus, from (B.10),
$$ -a_\ell = -b_\ell\sqrt{\lambda_\ell}\left(\frac{1}{zs(z)}+1\right) + O_\prec(N^{-\phi}), \qquad -b_\ell = a_\ell\sqrt{\lambda_\ell}\,(zs(z)+1) - b_\ell\lambda_\ell\left(\frac{1}{zs(z)}+1\right) + O_\prec(N^{-\phi}). $$
It is then elementary to check that
$$ \lambda_\ell(zs(z)+1) + 1 = O_\prec(N^{-\phi}), $$
which has the solution
$$ z = (1+\lambda_\ell)\left(1 + \frac{d_0}{\lambda_\ell}\right) + O_\prec(N^{-\phi}) $$
if and only if $\lambda_\ell > \sqrt{d_0}$. This proves the BBP result in our setting.
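The outlier location just derived is easy to verify numerically (an illustrative check of ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 1000, 2000
d0, lam = M / N, 1.2                      # lam > sqrt(d0) ~ 0.707: supercritical
X = rng.standard_normal((M, N)) / np.sqrt(N)
u = rng.choice([-1.0, 1.0], M) / np.sqrt(M)
v = rng.choice([-1.0, 1.0], N) / np.sqrt(N)
Y = np.sqrt(lam) * np.outer(u, v) + X     # rank-1 additive model (1.2)
top = np.linalg.eigvalsh(Y @ Y.T)[-1]
edge = (1 + np.sqrt(d0))**2               # bulk edge d_+
pred = (1 + lam) * (1 + d0 / lam)         # predicted outlier location
print(top, pred, edge)                    # top matches pred, above the edge
```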
+We now turn to the proof of Theorem 3.4. To simplify the exposition, we focus on the case that SNRs are +the same i.e., Λ = λI. For the spike prior in Assumption 3.1, suppose that a function q and its all derivatives +are polynomially bounded in the sense of Assumption 3.1. Following the proof of Theorem 4.8 in [50], we +define the error term from the local linear estimation of q( +√ +NYij) by +q( +√ +NYij) = q( +√ +NXij) + +√ +λNuivT +j q′( +√ +NXij) + Rij +where +Rij = 1 +2q′′( +√ +NXij + eij)λN(uivT +j )2 +for some |eij| ≤ | +√ +λNuivT +j |. The Frobenius norm of R is bounded as +∥R∥2 +F = Tr RT R = λ2N 2 +4 +M +� +i=1 +N +� +j=1 +(uivT +j )4q′′( +√ +NXij + eij)2 +≤ λ2N 2−4φ +4 +M +� +i=1 +N +� +j=1 +(uivT +j )2q′′( +√ +NXij + eij)2. +Since q′′ is polynomially bounded, q′′( +√ +NXij + eij) is uniformly bounded by an N-independent constant. +Thus, with overwhelming probability, +∥R∥2 ≤ ∥R∥2 +F ≤ Cλ2N 2−4φ. +Next, we approximate q( +√ +NXij) by its mean. Let +Eij = q′( +√ +NXij) − E[q′( +√ +NXij)], +∆ij = +√ +λNuivT +j Eij. +Then, ∥∆∥ ≺ N +1 +2 −2φ∥E∥ and, since the entries of matrix E are i.i.d., centered and with finite moments, its +norm ∥E∥ = O( +√ +N) with overwhelming probability. (See, e.g., [18].) Thus, ∥∆∥ = O≺(N 1−2φ). +Set +Mq := E[q′( +√ +NXij)], +Vq := E[q( +√ +NXij)2], +�λ := λM 2 +q /Vq, +38 + +and +Qij := +1 +� +NVq +q( +√ +NXij). +We have proved so far that the difference of the largest eigenvalue of Q + �λ +1 +2 UV T and that of the matrix +� +1 +� +NVq +q( +√ +NYij) +� +is O≺(N +1 +2 −2φ), which is o(1) with overwhelming probability for φ > 1 +4. It is directly applicable to the case +that Λ in our model with �Λℓℓ := λℓM 2 +q /Vq since the above process does not require any information of the +SNRs. The BBP result holds the matrix Q+U �Λ +1 +2 V T , which is another (additive) spiked rectangular matrix. +This shows that the BBP result also holds for �Y with SNR matrix �Λ := +M 2 +q +Vq Λ. This proves Theorem 3.4. +B.4 +Proof of Theorem 3.5 +Recall that the spike prior satisfies the technical conditions in Assumption 3.1 with φ > 1/4. For the sake +of brevity, we assume that Λ = λI. As in the additive case, we further assume that a function q and its all +derivatives are polynomially bounded and consider the local linear approximation of q( +√ +NYij), +q( +√ +NYij) = q( +√ +NXij) + γ +√ +NE[q′( +√ +NXij)] +� +ℓ +uiuT +ℓ Xℓj + Rij + γ∆ij, +(B.11) +where +Rij = 1 +2q′′�√ +NXij + θγ +� +ℓ +uiuT +ℓ +√ +NXℓj +� � +γ +� +ℓ +uiuT +ℓ +√ +NXℓj +�2 +for some θ ∈ [−1, 1] and +∆ij = +√ +NEij +� +ℓ +uiuT +ℓ Xℓj, +Eij = q′( +√ +NXij) − E[q′( +√ +NXij)]. +For any unit vectors a = (a1, a2, . . . , aM) and b = (b1, b2, . . . , bN), +aT ∆b = +� +s +� +i,j +aiui(s)Eijbj +�� +ℓ +uℓ(s) +√ +NXℓj +� += +� +s +� +i,j +aiui(s)2bjEij +√ +NXij + +� +s +� +i,j +aiui(s)Eijbj +� +�� +ℓ̸=i +uℓ(s) +√ +NXℓj +� +� +From the concentration inequalities such as Lemma B.5, +� +ℓ +uiuT +ℓ +√ +NXℓj = +� +s +� +ℓ +ui(s)uℓ(s)T √ +NXℓj ≺ +� +s +|ui(s)| +�� +ℓ +uℓ(s)2 +�1/2 +≺ N −φ. +(B.12) +Recall that ∥E∥ = O( +√ +N) with overwhelming probability. +Note that, by Assumption 3.1, the density +function q have to be an odd function. Further, since q is an odd function (hence xq′(x) is an odd function +39 + +of x), the norm of the matrix whose (i, j)-entry is Eij +√ +NXij is also O( +√ +N). Thus, +aT ∆b ≺ N −2φ + N +1 +2 −φ, +which shows that ∥∆∥ ≺ N +1 +2 −φ. Moreover, since q′′ is polynomially bounded, following the proof of Theorem +3.4 with (B.12), +∥R∥2 ≤ ∥R∥2 +F ≤ CN 2−4φ. 
+Thus, as in the additive case, the error terms Rij and ∆ij in (B.11) are negligible when finding the limit of +the extreme eigenvalues of the transformed matrix. +Set +Mq := E[q′( +√ +NXij)], +Vq := E[q( +√ +NXij)2], +Eq = E[ +√ +NXijq( +√ +NXij)], +�γ := γMq/ +� +Vq, +and +Qij := +1 +� +NVq +q( +√ +NXij). +With the approximation (B.11), we now focus on the largest eigenvalue of +(Q + �γUU T X)T (Q + �γUU T X). +Note that the assumption on the polynomial boundedness of q implies that the matrix Q is also a rectangular +matrix satisfying the assumptions in Definition 2.2. +Let G(z) and G(z) be the resolvents +G ≡ G(z) := (QQT − zI)−1, +G ≡ G(z) := (QT Q − zI)−1 +for z ∈ R outside an open interval containing [d−, d+]. We note that the following identities hold for G(z) +and G(z): +G(z)Q = QG(z), +QT G(z)Q = I + zG(z). +(B.13) +As in the proof of Theorem 3.4, we consider +(Q + �γUU T X)T (Q + �γUU T X) − zI += (QT Q − zI)(I + (QT Q − zI)−1(�γXT UU T Q + �γQT UU T X + �γ2XT UU T UU T X)). +(B.14) +Let +L ≡ L(z) = G(z)(�γXT UU T Q + �γQT UU T X + �γ2XT UU T UU T X), +Then, as in the proof of Theorem 3.4, if z is an eigenvalue of (Q + �γUU T X)T (Q + �γUU T X) (but not of +QT Q), −1 is an eigenvalue of L(z). Again, the rank of L is at most 2k, with +L · GQT U = �γGXT UU T QGQT U + �γGQT UU T XGQT U + �γ2GXT UU T UU T XGQT U, +L · GXT U = �γGXT UU T QGXT U + �γGQT UU T XGXT U + �γ2GXT UU T UU T XGXT U, +(B.15) +and an eigenvector of L is a linear combination of GQT u(ℓ) and GXT u(ℓ) for 1 ≤ ℓ ≤ k. +In the simplest case where Q is the identity mapping, Q = X, hence the rank of L is k, and the eigenvalue +equation (B.15) is simplified to +L · GQT U = �γGQT U(U T QGQT U) + �γGQT U(U T QGQT U) + �γ2GQT U(U T UU T QGQT U). +(B.16) +40 + +In this case, GQT u(ℓ) are eigenvectors of L corresponding to the eigenvalue −1, i.e., L · GQT u(ℓ) = +−GQT u(ℓ). +The right side of (B.16) can be approximated as follows, which is a direct consequence of +the isotropic local Marchenko–Pastur law (e.g., Theorem 2.5 of [17]). +With the isotropic local Marchenko–Pastur law, (B.16) can be approximated by a deterministic vector +equation on z (and s(z)), and the location of the k largest eigenvalues can be proved by solving the equation. +In a general case where Q is not a multiple of X and the vectors GQT u(ℓ) and GXT u(ℓ) are linearly +independent, however, the eigenvalue equation (B.15) contains other matrices U T QGQT U, U T QGXT U, +and U T XGQT U, which cannot be estimated by Lemma B.4. +For these matrices, we use the following +lemma. +Lemma B.6. Suppose that the assumptions in Lemma B.4 hold. Then, +⟨u(ℓ1), XGQT u(ℓ2)⟩ = ⟨u(ℓ1), QGXT u(ℓ2)⟩ = +� +Eq +� +Vq +(zs(z) + 1) +� +δℓ1ℓ2 + O≺(N −φ) +and +⟨u(ℓ1), XGXT u(ℓ2)⟩ = +� +E2 +q +Vq +zs(z) +� +d0s(z) + d0 − 1 +z +�2 ++ d0s(z) + d0 − 1 +z +� +δℓ1ℓ2 + O≺(N −φ). +We defer the proof to Appendix B.6. +With Lemma B.6, we are ready to finish the proof. From the definition of s(z) in Lemma B.4, we notice +that +s(z) = +1 +1 − d0 − d0zs(z) − z , +(B.17) +or +z +� +d0s(z) + d0 − 1 +z +� += − 1 +s(z) − z. +(B.18) +Set σ(z) := zs(z) + 1. 
By applying Lemmas B.4 and B.6 to (B.16), for 1 ≤ ℓ ≤ k +L · GQT u(ℓ) = �γ⟨u(ℓ), QGQT u(ℓ)⟩ · GXT u(ℓ) + �γ⟨u(ℓ), XGQT u(ℓ)⟩ · GQT u(ℓ) ++ ∥u(ℓ)∥2�γ2⟨u(ℓ), XGQT u(ℓ)⟩ · GXT u(ℓ) += �γσ(z)GXT u(ℓ) + �γσ(z) Eq +� +Vq +GQT u(ℓ) + �γ2 Eq +� +Vq +σ(z)GXT u(ℓ) + θ1(ℓ) , +(B.19) +and +L · GXT u(ℓ) = �γ⟨u(ℓ), QGXT u(ℓ)⟩ · GXT u(ℓ) + �γ⟨u(ℓ), XGXT u(ℓ)⟩ · GQT u(ℓ) ++ ∥u(ℓ)∥2�γ2⟨u(ℓ), XGXT u(ℓ)⟩ · GXT u(ℓ) += �γσ(z) Eq +� +Vq +GXT u(ℓ) + �γ +�� +σ(z) + +σ(z) +σ(z) − 1 +� E2 +q +Vq +− +σ(z) +σ(z) − 1 +� +GQT u(ℓ) ++ �γ2 +�� +σ(z) + +σ(z) +σ(z) − 1 +� E2 +q +Vq +− +σ(z) +σ(z) − 1 +� +GXT u(ℓ) + θ2(ℓ) , +(B.20) +for some θ1(ℓ), θ2(ℓ), which are linear combinations of GQT u(ℓ) and GXT u(ℓ), with ∥θ1(ℓ)∥, ∥θ2(ℓ)∥ = +O≺(N −φ). +Suppose that aℓGQT u(ℓ)+bℓGXT u(ℓ) is an eigenvector of L with the corresponding eigenvalue −1. From +41 + +(B.19), (B.20), and the linear independence between GQT u(ℓ) and GXT u(ℓ), we find the relation +−aℓ = aℓ�γσ(z) Eq +� +Vq ++ bℓ�γσ(z)2 +σ(z) − 1 · E2 +q +Vq +− bℓ�γσ(z) +σ(z) − 1 + O(N −φ), +−bℓ = aℓ�γσ(z) + aℓ�γ2σ(z) Eq +� +Vq ++ bℓ�γσ(z) Eq +� +Vq ++ bℓ�γ2σ(z)2 +σ(z) − 1 · E2 +q +Vq +− bℓ�γ2σ(z) +σ(z) − 1 + O(N −φ). +We then find that +bℓ +aℓ +� +1 + �γσ(z) Eq +� +Vq +� ++ �γσ(z) − �γ = O(N −φ) +and +aℓ +� +1 + �γσ(z) Eq +� +Vq +� += bℓ +� +�γσ(z) +σ(z) − 1 +� +1 − σ(z) · E2 +q +Vq +�� ++ O(N −φ), +which implies that +1 + 2�γσ(z) Eq +� +Vq ++ �γ2σ(z) = 1 + +� +2γMqEq + γ2M 2 +q +Vq +� +σ(z) = O(N −φ). +(B.21) +From the explicit formula for s, it is not hard to check that (B.21) holds if and only if +λq := 2γMqEq + γ2M 2 +q +Vq +> +� +d0 +and +z = (1 + λq) +� +1 + d0 +λq +� ++ O(N −φ). +(B.22) +We see that it is valid for general Λ in our model, since the above process also does not require any information +of the SNRs as in the additive case. Now, the desired theorem follows from the direct computation for the +case q = hαg; see also Appendix B.5.2. +B.5 +Optimal entrywise transformation +B.5.1 +Additive model +Recall that +E[q′( +√ +NWij)] = E[q′( +√ +NXij)] = Mq, +E[q( +√ +NWij)2] = E[q( +√ +NXij)2] = Vq. +Following the proof of Theorem 3.4 in Appendix B.3, it is not hard to see that the effective SNR is maximized +by optimizing M 2 +q /Vq. Such an optimization problem was already considered in [50] for the spiked Wigner +matrix. For the sake of completeness, we solve this problem by using the calculus of variations. Recall the +density of random variables +√ +NWij and +√ +NXij is g. +To optimize q, we need to maximize +�� ∞ +−∞ +q′(x)g(x)dx +�2 +/ +�� ∞ +−∞ +q(x)2g(x)dx +� += +�� ∞ +−∞ +q(x)g′(x)dx +�2 +/ +�� ∞ +−∞ +q(x)2g(x)dx +� +. +(B.23) +Putting (q + εη) in place of q in (B.23) and differentiating with respect to ε, we find that the optimal q +42 + +satisfies +�� ∞ +−∞ +η(x)g′(x)dx +� �� ∞ +−∞ +q(x)2g(x)dx +� += +�� ∞ +−∞ +q(x)η(x)g(x)dx +� �� ∞ +−∞ +q(x)g′(x)dx +� +(B.24) +for any η. It is then easy to check that q = −Cg′/g is the only solution of (B.24). Since the value in (B.23) +does not change if we replace q by Cq, and the effective SNR is increased with the entrywise transform −g′/g +is the optimal entrywise transformation for PCA. +B.5.2 +Multiplicative model +As we can see from the proof of Theorem 3.5 in Appendix B.4, we need to maximize +2 +�� ∞ +−∞ xq(x)g(x)dx +� �� ∞ +−∞ q′(x)g(x)dx +� ++ γ +�� ∞ +−∞ q′(x)g(x)dx +�2 +�� ∞ +−∞ q(x)2g(x)dx +� += +−2 +�� ∞ +−∞ xq(x)g(x)dx +� �� ∞ +−∞ q(x)g′(x)dx +� ++ γ +�� ∞ +−∞ q(x)g′(x)dx +�2 +�� ∞ +−∞ q(x)2g(x)dx +� +. 
+(B.25) +Putting (q + εη) in place of q in (B.23) and differentiating with respect to ε, we find that the optimal q +satisfies +− 2 +�� +xηg +� �� +qg′ +� �� +q2g +� +− 2 +�� +xqg +� �� +ηg′ +� �� +q2g +� ++ 2γ +�� +ηg′ +� �� +qg′ +� �� +q2g +� ++ 4 +�� +qηg +� �� +xqg +� �� +qg′ +� +− 2γ +�� +qg′ +�2 �� +qηg +� += 0 +(B.26) +which is written with slight abuse of notation such as +� +xηg = +� ∞ +−∞ xη(x)g(x)dx. Since the equation contains +the terms +� +xηg, +� +ηg′, +� +qηg, +it is natural to consider an ansatz +q(x) = −g′(x) +g(x) + αx +(B.27) +for a constant α. Collecting the terms involving +� +xηg and the terms involving +� +ηg′, we get +2(Fg + α)(Fg + 2α + α2) − 4α(1 + α)(Fg + α) − 2αγ(Fg + α)2 = 0 +and +−2(1 + α)(Fg + 2α + α2) − 2γ(Fg + α)(Fg + 2α + α2) + 4(1 + α)(Fg + α) + 2γ(Fg + α)2 = 0. +We can then check that +α = αg = +−γFg + +� +4Fg + 4γFg + γ2F 2g +2(1 + γ) +, +43 + +and hence (B.26) is satisfied with +q(x) = −g′(x) +g(x) + +−γFg + +� +4Fg + 4γFg + γ2F 2g +2(1 + γ) +x. +The corresponding effective SNR +λhαg ≡ λg = γ + γ2Fg +2 ++ +γ +� +4Fg + 4γFg + γ2F 2g +2 +. +For a general α, when the entrywise transform hα is applied, the effective SNR +λhα = 2γ(1 + α)(Fg + α) + γ2(α + Fg)2 +α2 + 2α + Fg +, +In particular, if α = +� +Fg, +λh√ +Fg = γ(1 + +� +Fg) + γ2 +2 (Fg + +� +Fg) ≥ 2γ + γ2 = λ +where the inequality is strict if Fg > 1. +B.6 +Proof of Lemma B.6 +B.6.1 +Key ingredient: Entrywise local estimates +Recall the definition of the random matrices X and Q. The couple of random matrices (X, Q) is one example +of the following concept for a coupled random matrices: +Definition B.7 (Entrywise correlated random matrices). Suppose that A and B are M × N random rectan- +gular matrices in Definition 2.2 satisfying the following conditions: +• For all 1 ≤ a, b ≤ M and 1 ≤ α, β ≤ N, Aaα and Bbβ are dependent only when a = b and α = β. +• For all a, α, E[Aaα] = E[Baα] = 0, NE[A2 +aα] = wA, NE[B2 +aα] = wB, and NE[AaαBaα] = wAB. +• For any positive integer p, there exists Cp, independent of N, such that +N +p +2 E[Ap +aα], N +p +2 E[Bp +aα] ≤ Cp +for all a, α. +A couple of random matrices (A, B) is called the entrywise correlated. +The key estimates in the proof of Lemma B.6 are the exact bounds on the entries of K := Q(QQT − +zI)−1XT and K := X(QQT − zI)−1XT . We prove the following lemma for the entrywise correlated random +matrices (A, B), which exactly contains the desired result. +Lemma B.8. Let (A, B) be the entrywise correlated random matrices with wB = 1. For z ∈ R outside an +open interval containing [d−, d+], +|(A(BT B − zI)−1AT )ij − (wAs(z) + w2 +ABzs(z)s(z)2)δij| = O≺(N −1/2), +(B.28) +44 + +|(A(BT B − zI)−1BT )ij − (wABs(z) + wABzs(z)s(z)2)δij| = O≺(N −1/2) +(B.29) +and +|(B(BT B − zI)−1AT )ij − (wABs(z) + wABzs(z)s(z)2)δij| = O≺(N −1/2). +(B.30) +Remark B.9. Recall that σ(z) = zs(z) + 1. For (X, Q), since wX = wQ = 1 and wXQ = Eq/ +� +Vq, we have +the following: +For z ∈ R outside an open interval containing [d−, d+], +|Kij − �s(z)δij| = O≺(N −1/2), +|Kij − ˇs(z)δij| = O≺(N −1/2), +(B.31) +where +�s(z) := σ(z) Eq +� +Vq +, +ˇs(z) := zs(z) +� +d0s(z) + d0 − 1 +z +�2 E2 +q +Vq ++ +� +d0s(z) + d0 − 1 +z +� +. +(B.32) +B.6.2 +Linearization +We consider +G ≡ GB(z) = (BBT − zI)−1, +G ≡ GB(z) = (BT B − zI)−1. +In the proof of Lemma B.8, we use the formalism known as the linearization to simplify the computation. 
+We define an (M + N) × (M + N) symmetric matrix HB by +HB ≡ HB(z) = +�−zIM +B +BT +−IN +� +, +(B.33) +where IM and IN are the identity matrices with size M and N, respectively. +Let RB(z) = HB(z)−1. +(For the invertibility of HB(z), we refer to Section 5.1 in [38].) +By Schur’s +complement formula, +RQ(z) = +� GB(z) +GB(z)B +BT GB(z) +zGB(z) +� +. +(B.34) +Therefore, +Rab(z) = (BBT − zI)−1 +ab = Gab(z), +Rαβ(z) = z(BT B − zI)−1 +α−M,β−M = zGα−M,β−M(z), +(B.35) +and +Rαa(z) = Raα(z) = (GB)a,α−M(z), +(B.36) +where we use lowercase Latin letters a, b, c, . . . for indices from 1 to M and Greek letters α, β, γ, . . . for +indices from (M + 1) to (M + N). We also use uppercase Latin letters A, B, C, . . . for indices from 1 to +(M + N). In the rest of Appendix B, we omit the subscript Q for brevity. +For T ⊂ {1, 2, . . . , M + N}, we define the matrix minor H(T) by +(H(T))AB := 1{A,B /∈T}HAB . +(B.37) +Moreover, for A, B /∈ T we define +R(T) +AB(z) := (H(T))−1 +AB, +(B.38) +In the definitions above, we abbreviate ({A}) by (A); similarly, we write (AB) instead of ({A, B}). +We have the following resolvent (decoupling) identities for the matrix entries of R and R(T), which are +45 + +elementary consequences of Schur’s complement formula; see e.g. Lemma 5.1 of [38]. +Lemma B.10 (Resolvent identities for R). Suppose that z ∈ R is outside an open interval containing +[d−, d+]. +- For a ̸= b, +Rab = −Raa +� +α +HaαR(a) +αb = −Rbb +� +β +R(b) +aβHβb. +- For α ̸= β, +Rαβ = −Rαα +� +a +HαaR(α) +aβ = −Rββ +� +b +R(β) +αb Hbβ. +- For any a and α, +Raα = −Raa +� +β +HaβR(a) +βα = −Rαα +� +b +R(α) +ab Hbα. +- For A, B ̸= C, +RAB = R(C) +AB + RACRCB +RCC +. +Throughout this section, we will frequently use the estimate that all entries of X and Q (and hence all +off-diagonal entries of W) are O≺(N −1/2), which holds since all moments of the entries of +√ +NQ and +√ +NX +are bounded. For the entries of R, we have the following estimates: +Lemma B.11. Let +s(z) = +� +d0s(z) + d0 − 1 +z +� +. +(B.39) +For z ∈ R outside an open interval containing [d−, d+], +|Rij(z) − s(z)δij| , |Rµν(z) − zs(z)δµν| , |Riµ(z)| ≺ N −1/2. +(B.40) +Proof of Lemma B.11. The first two estimates can be checked from Theorem 2.5 (and Remark 2.7) in [17] +with the deterministic unit vectors v = ei and w = ej where ei ∈ RN or RM is a standard basis vector +whose i-th coordinate is 1 and all other coordinates are zero. For the last estimate, we apply Lemma B.10 +to find that +Riµ(z) = −Rii +� +α +HiαR(i) +αµ. +Since Hiα and R(i) +αµ are independent, R(i) +αµ ≺ N −1/2 for α ̸= µ, and R(i) +µµ = Θ(1) with overwhelming probability, +we find from Lemma B.5 that +� +α +HiαR(i) +αµ ≺ +� +1 +N +� +α +|R(i) +αµ|2 +�1/2 +≺ N −1/2. +Proof of Lemma B.8. Throughout this section, for the sake of brevity, we will use the notation +Baα := Ba,(α−M) = Haα, +Aaα := Aa,(α−M). +We begin by estimating the diagonal entry (BGAT )ii. From Schur’s complement formula, (B.35), we can +decompose it into +(BGAT )ii = 1 +z +� +α +HiαRααAiα + 1 +z +� +α̸=β +HiαRαβAiβ. +(B.41) +46 + +From concentration inequalities it is not hard to see that +� +α +BiαAiα = E[BiαAiα] + O≺(N −1/2) = wAB + O≺(N −1/2). +Applying Lemma B.11, we find for the first term in the right side of (B.41) that +1 +z +� +α +HiαRααAiα = wABs(z) + O≺(N −1/2). +(B.42) +We next estimate the second term in the right side of (B.41). 
We expand it with the resolvent identities +in Lemma B.10 as follows: +� +α̸=β +HiαRαβAiβ = +� +α̸=β +HiαR(i) +αβAiβ + +� +α̸=β +Hiα +RαiRiβ +Rii +Aiβ += +� +α̸=β +HiαR(i) +αβAiβ + +� +α̸=β +Hiα +RαiRiβ +s(z) +Aiβ + O≺(N −1/2). +(B.43) +Here, in the estimate for the second term, we simply counted the power (of N) as it involves two indices for +the sum (hence O(N 2) terms) of Hiα, Rαi, Riβ, Aiβ ≺ N −1/2, hence � +α̸=β HiαRαiRiβAiβ = O≺(1). Applying +Lemma B.5 to the first term in the right side of (B.43), +� +α̸=β +HiαR(i) +αβAiβ ≺ +� +� 1 +N 2 +� +α,β +|R(i) +αβ|2 +� +� +1/2 +≺ N −1/2. +For the second term in the right side of (B.43), we further expand it to find +� +α̸=β +HiαRαiRiβAiβ = +� +α̸=β +HiαRαiRiβAiβ = − +� +α̸=β +Hiα +� +Rii +� +µ +R(i) +αµHµiRiβAiβ +� +Note that +� +µ +R(i) +αµHµi ≺ N −1/2, +as in the proof of Lemma B.11. Since +|Rij − s(z)| ≺ N −1/2, +Riβ = R(α) +iβ + RiαRαβ +Rαα += R(α) +iβ + N −1, +we have +− +� +α̸=β +Hiα +� +Rii +� +µ +R(i) +αµHµiRiβAiβ +� += −s(z) +� +α̸=β +Hiα +�� +µ +R(i) +αµHµiR(α) +iβ Aiβ +� ++ O≺(N −1/2) += −s(z) +� +α̸=β +Hiα +� +� � +µ:µ̸=α +R(i) +αµHµiR(α) +iβ Aiβ +� +� − s(z) +� +α̸=β +(Hiα)2R(i) +ααR(α) +iβ Aiβ + O≺(N −1/2). +(B.44) +47 + +Applying Lemma B.5 again to the first term in the right side of (B.44), +� +α̸=β +Hiα +� +� � +µ:µ̸=α +R(i) +αµHµiR(α) +iβ Aiβ +� +� ≺ +� +� +� 1 +N +� +α +������ +� +β:β̸=α +� +� � +µ:µ̸=α +R(i) +αµHµi +� +� R(α) +iβ Aiβ +������ +2� +� +� +1/2 +≺ +� +� +� 1 +N +� +α +� +� � +β:β̸=α +N −1/2 ���R(α) +iβ Aiβ +��� +� +� +2� +� +� +1/2 +≺ N −1/2. +Similarly, by expanding R(α) +iβ , we find for the second term in the right side of (B.44) that +−s(z) +� +α̸=β +(Hiα)2R(i) +ααR(α) +iβ Aiβ = zs(z)s(z) +� +α̸=β +(Hiα)2R(α) +ii +� +ν:ν̸=α +H(α) +iν R(iα) +νβ Aiβ + O≺(N −1/2) += zs(z)2s(z) +� +α̸=β +(Hiα)2 +� +ν:ν̸=α,β +HiνR(iα) +νβ Aiβ + zs(z)2s(z) +� +α̸=β +(Hiα)2HiβR(iα) +ββ Aiβ + O≺(N −1/2) += z2s(z)2s(z)2 � +α̸=β +(Hiα)2HiβAiβ + O≺(N −1/2), +where we used Lemma B.5 to find +� +ν̸=β:ν,β̸=α +HiνR(iα) +νβ Aiβ ≺ +� +� 1 +N 2 +� +ν̸=β:ν,β̸=α +���R(iα) +νβ +��� +2 +� +� +1/2 +≺ N −1/2. +Thus, since wB = 1, +� +α̸=β +HiαRαiRiβAiβ = z2s(z)2s(z)2 � +α̸=β +(Hiα)2HiβAiβ + O≺(N −1/2) += z2s(z)2s(z)2wAB + O≺(N −1/2), +and putting it back to (B.43) and (B.41), together with (B.42), we conclude that +(AGB)ii = wABs(z) + wABzs(z)s(z)2 + O≺(N −1/2) = wABσ(z) Eq +� +Vq ++ O≺(N −1/2), +(B.45) +where we used the identity zs(z)s(z) = −σ(z). In the same manner, we also find that +(AGA)ii = 1 +z +� +α +AiαRααAiα + zs(z)s(z)2 � +α̸=β +AiαHiαHiβAiβ + O≺(N −1/2) += wAs(z) + w2 +ABzs(z)s(z)2 + O≺(N −1/2). +(B.46) +We next estimate the off-diagonal entry (AGB)ij. We expand it as +(AGB)ij = 1 +z +� +α,β +HiαRαβAjβ = 1 +z +� +α,β +HiαR(i) +αβAjβ + 1 +z +� +α,β +Hiα +RαiRiβ +Rii +Ajβ += 1 +z +� +α,β +HiαR(ij) +αβ Ajβ + 1 +z +� +α,β +Hiα +R(i) +αjR(i) +jβ +R(i) +jj +Ajβ + 1 +z +� +α,β +Hiα +R(j) +αi R(j) +iβ +R(i) +jj +Ajβ + O≺(N −1/2) +(B.47) +48 + +From Lemma B.5, +� +α,β +HiαR(ij) +αβ Ajβ ≺ N −1/2. +We also have +� +α,β +Hiα +R(i) +αjR(i) +jβ +R(i) +jj +Ajβ ≺ +� +� +� 1 +N +� +α +������ +� +β +R(i) +αjR(i) +jβ +R(i) +jj +Ajβ +������ +2� +� +� +1/2 +≺ +� +� +� 1 +N +� +α +������ +� +β +N −3/2 +������ +2� +� +� +1/2 +≺ N −1/2 +and a similar estimate holds for the third term in the right side of (B.47). Thus, +(AGB)ij ≺ N −1/2 +In the same manner, we also find that (AGA)ij ≺ N −1/2. Together with (B.45) and (B.46), this proves +Lemma B.8. +B.6.3 +Isotropic local law +We also assume that wB = 1 and use the same notation in previous section. 
Then our goal is to prove the +following statement: +Lemma B.12. Let (A, B) be the entrywise correlated random matrices where wB = 1 and x, y are determin- +istic and ℓ2 - normalized vectors in RM. Then, for z ∈ R outside an open interval containing [d−, d+], +⟨x, A(BT B − zI)−1AT y⟩ = (wAs(z) + w2 +ABzs(z)s(z)2)⟨x, y⟩ + O≺(N −1/2). +Proof of Lemma B.12. Note that, due to polarization identity, we suffice to prove for ⟨x, A(BT B−zI)−1AT x⟩. +Recall that we have +(A(BT B − zI)−1AT )ij = (wAs(z) + w2 +ABzs(z)s(z)2)δij + O≺(N −1/2) +and +(A(BT B − zI)−1BT )ij = (B(BT B − zI)−1AT )ij = (wABs(z) + wABzs(z)s(z)2)δij + O≺(N −1/2). +Once the entrywise local law is given, the proof of the isotropic (or anisotropic) type law follows exactly as +in [17]. To be more precisely, we can write +⟨x, A(BT B − zI)−1AT x⟩ = +� +i +xi(AGAT )iixi + +� +i̸=j +xi(AGAT )ijxj. +Then the entrywise local law implies +� +i +xi(AGAT )iixi − (wAs(z) + w2 +ABzs(z)s(z)2)⟨x, x⟩ += +� +i +x2 +i +� +(AGAT )ii − (wAs(z) + w2 +ABzs(z)s(z)2) +� +≺ N −1/2, +49 + +and so the main difficulty is to control the off-diagonal part +ZAB := +� +α,β +� +i̸=j +xiAiαGαβAjβxj = O≺(N −1/2). +For instance, for the sample covariance matrix case +⟨x, BGBT x⟩ = (zs(z) + 1)⟨x, x⟩ + O≺(N −1/2) += (s(z) + zs(z)s(z)2)⟨x, x⟩ + O≺(N −1/2) +was proved in [17] by proving the following bound for higher moments +E|ZB|p ≺ N −p/2 +(B.48) +for any large and even p, where +ZB := +� +i̸=j +xi(BGBT )ijxj = z +� +i̸=j +xiGijxj. +In particular, the (B.48) have proved by using the standard maximal expansion method in [17] and [2], which +only requires the independence between each element, the boundedness of the moment of each entries, and +the entrywise local law. Thus, from the definition of the entrywise correlated random matrices (A, B), it can +be expected that +E|ZAB|p ≺ N −p/2 +also holds for any large and even p, by expanding maximally Gαβ instead of Gab as in (B.47). Then, we can +conclude the proof by using Markov inequality. +To prove such an argument , we only need to check what is changing. First, we express the p-th moment +of ZAB by +E|ZAB|p = E +� +b11̸=b12 +· · · +� +bp1̸=bp2 +� +� +p/2 +� +k=1 +xbk1(AGAT )bk1bk2xbk2 +� +� +� +� +p +� +k=p/2+1 +xbk1(AGAT )bk1bk2xbk2 +� +� . +(B.49) +Let T = {bk1} ∪ {bk2} be the set of indices of x appearing in the fixed summand of the representation of +the p-th moment of ZAB. Then our goal is to decompose the off-diagonal entry of the matrix (AGAT ) into the +two parts by using Lemma B.10, where one consists of the finite number of the maximally expanded term +and the other consists of the terms containing a sufficiently large number of off-diagonal entries. We note +that the latter case is small enough due to the entrywise local laws of off-diagonal entries, and so the leading +order term contained in the formal. +Step 1 : The maximal expansion for the off-diagonal entries of (AGBA). +In our case, the maximally expanded terms (cf. Definition 5.4 of [17]) refer to terms that have one of the +following forms: (AG(T\a,b)AT )ab, (AG(T\a,b)BT )ab, (BG(T\a,b)AT )ab or (BG(T\a,b)BT )ab = z(G(T\a,b))ab, for some +a ̸= b ∈ T. To proceed, we use the following operation successively : +Operation (a) +Let T ⊂ {1, . . . M} be a set of indices. 
+50 + +• For a ̸= b and c /∈ T , +(AG(T )AT )ab = (AG(T c)AT )ab + (AG(T )BT )ac(BG(T )AT )cb +z(G(T ))cc +• For a ̸= b and c /∈ T , +(AG(T )BT )ab = (AG(T c)BT )ab + (AG(T )BT )ac(BG(T )BT )cb +z(G(T ))cc += (AG(T c)BT )ab + (AG(T )BT )ac(G(T ))cb +(G(T ))cc +• For a ̸= b /∈ T and a, b ̸= c +1 +z (BG(T )BT )ab = (G(T ))ab = (G(T c))ab + (G(T ))ac(G(T ))cb +(G(T ))cc +• For a ̸= b /∈ T +1 +(G(T ))aa += +1 +(G(T b))aa +− +(G(T ))ab(G(T ))ba +(G(T ))aa(G(T b))aa(G(T ))bb +We then observe that the expanded terms contains at most two crossed terms (AG(T\a,b)BT )ab, (BG(T\a,b)AT )ab +and each expansions produce two types of terms, the first one has one more additional upper index, and +the second one at least one more additional off-diagonal entry of BGAT , AGBT or BGBT . Moreover, we also +remark that the denominators are always the diagonal entries of the resolvent G(T ). +It can be seen that the above expansion formulas eventually play the same role as operation (a) in [17]. +Therefore, to obtain the desired decomposition, we only need to iterate the operation (a) until it can no +longer be expanded or contains sufficiently many off-diagonal entries. +Step 2 : The further expansions for the maximally expanded off-diagonal entries +We further expand the maximally expanded term by using the following operations : +Operations (b) (and (c)) +• For a ̸= b ∈ T +(G(T\a,b))ab = z(G(T\a,b))aa(G(T\b))bb(BG(T)BT )ab. +• Furthermore, we use the following type expansion, which is from the above formula, to the terms +(G(T\a,b))aa and (G(T\a,b))bb +(G(T\a,b))aa = (G(T\a))aa + (G(T\a,b))ab(G(T\a,b))ba +(G(T\a,b))bb += (G(T\a))aa + z2(G(T\a,b))aa(G(T\a))aa(G(T\b))bb(BG(T)BT )2 +ab. +Then, this expansion splits such not-maximally expanded term into two parts, one is maximally ex- +panded and the other is a monomial expressed as the product of itself, the diagonal entry, and the +maximally expanded terms. In particular, it can be seen that the number of the off-diagonal entries +included in the latter monomial increases by exactly two. +51 + +• For a ̸= b ∈ T +(AG(T\a,b)AT )ab = (AG(T)AT )ab + z(G(T\b))bb(AG(T)BT )ab(BG(T)AT )bb ++ (AG(T\a,b)BT )aa(BG(T\a,b)AT )ab +z(G(T\a,b))aa += (AG(T)AT )ab + z(G(T\b))bb(AG(T)BT )ab(BG(T)AT )bb ++ z(G(T\a,b))aa(AG(T)BT )aa +� +(BG(T)AT )ab + z(G(T\b))bb(BG(T)BT )ab +� ++ z2(G(T\a,b))aa(G(T\b))bb(AG(T)BT )ab(BG(T)BT )ab× +� +(BG(T)AT )ab + z(G(T\b))bb(BG(T)BT )ab +� +. +since +(AG(T\a,b)BT )aa = −z(G(T\a,b))aa(AG(T\b)BT )aa += −z(G(T\a,b))aa +� +(AG(T)BT )aa + z(G(T\b))bb(AG(T)BT )ab(BG(T)BT )ab +� +and +(BG(T\a,b)AT )ab = −z(G(T\a,b))aa +� +(BG(T)AT )ab + z(G(T\b))bb(BG(T)BT )ab +� +. +The expansion of the first two monomials terminated since every term were maximally expanded. After +this, for any fixed positive integer ℓ, we expand the term which contains the term (G(T\a,b))aa until +the last term is a monomial containing ℓ or more off-diagonal entries by applying the first formula +recursively to the not-maximally expanded diagonal entry (G(T\a,b))aa. +• For a ̸= b ∈ T +(AG(T\a,b)BT )ab = −z(G(T\b))bb(AG(T)BT )ab + (AG(T\a,b)BT )aa(G(T\a,b))ab +(G(T\a,b))aa += −z(G(T\b))bb(AG(T)BT )ab +− z2(G(T\a,b))aa(AG(T)BT )aa(G(T\b))bb(BG(T)BT )ab +− z3(G(T\a,b))aa(G(T\b))2 +bb(AG(T)BT )ab(BG(T)BT )ab(BG(T)BT )ab +Even in this case, we also expand the second and third monomials recursively by applying the first +formula to not-maximally expanded diagonal entry (G(T\a,b))aa. +In particular, we have two observations from the above operations. 
+• The expansions of the maximally expanded off-diagonal entry consist of the monomials containing only +an odd number of off-diagonal entries: (BG(T)AT )ab, (AG(T)BT )ab and (BG(T)BT )ab. +• The diagonal entries (AG(T)BT )aa = (BG(T)AT )aa for a ∈ T, appear in the expanded term by implement- +ing operation (b) and (c). These terms can be interpreted as a loop of the vertex a in the structure of +the graph considered in [17], since like the maximally expended diagonal entry, these terms are com- +parable to wABs(z) by the entrywise local law. Therefore, similar to the maximally expanded diagonal +entry, terms of such types have no effect on the partial expectation techniques in subsection 5.13 of +[17]. This part will be explained in more detail in the next step. +As with the previous step, from the explanations depicted in each expansion formula, we can see that +the above expansions eventually play the same role as operations (b) and (c) in [17]. +52 + +Step 3 : The further expansions for the maximally expanded diagonal entries +Finally, unless we end up with an expression that includes a sufficiently large numbers of off-diagonal resolvent +entries (such trivial leaves are dealt with separately in Subsection 5.11 of [17]), we need to expand the +maximally expanded diagonal elements (AG(T)BT )aa = (BG(T)AT )aa and (G(T\a))aa for a ∈ T appearing in +the non-trivial leaves (cf. Subsection 5.12 ∼ 14 of [17]), where we need to slightly adjust the proof to the +setting. These terms corresponds to the maximally expanded diagonal G-edge in [17]. +First, for c ∈ T, +1 +(G(T\c) +B +)cc += −z − z(BG(T) +B +BT )cc. +(B.50) +We note that |(G(T))µµ − s(z)| ≺ N −1/2 by following the proof of the entrywise local law. Using (B.50) and +the facts zs(z)s(z) = −(zs(z) + 1) and |s(z)| ≍ 1, we see that +1 +(G(T\c))cc += +1 +s(z) − z +� +(BG(T)BT )cc − s(z) +� +and this implies that +(G(T\c))cc = +ℓ−1 +� +k=0 +(s(z))k+1zk � +(BG(T)BT )cc − s(z) +�k ++ O≺(N −ℓ/2) +for any integer ℓ ≥ 1 since (BG(T)BT )cc − s(z) is O≺(N −1/2), by using Lemma B.5. +Similarly, for a ∈ T, we see that +1 +(AG(T)BT )aa += +1 +wABs(z) − (AG(T) +B +BT )aa − wABs(z) +wABs(z)(AG(T) +B +BT )aa +(B.51) +and so +(AG(T)BT )aa = wABs(z) − (AG(T)BT )aa +wABs(z)−(AG(T)BT )aa +wABs(z) +1 − wABs(z)−(AG(T)BT )aa +wABs(z) +. +By using the estimate +(AG(T)BT )aa − wABs(z) = +� +µ̸=ν +Aaµ(G(T))µνBaν + +� +µ +AaµBaµ +� +(G(T))µµ − s(z) +� ++ s(z) +� +1 +N +� +µ +(NAaµBaµ − wAB) +� +≺ N −1/2, +we have the following series expansion for any integers ℓ ≥ 1, +(AG(T)BT )aa = wABs(z) − (AG(T)BT )aa1(ℓ ≥ 2) +ℓ−1 +� +k=1 +(wABs(z))−k � +wABs(z) − (AG(T)BT )aa +�k ++ O≺(N −ℓ/2) +which corresponds to the term (5.42) in [17]. +This way we end up with an expression where only contains the resolvent terms of the type (AG(T)AT )ab, +(AG(T)BT )ab, (BG(T)AT )ab or (BG(T)BT )ab = (G(T))ab, for some a ̸= b ∈ T. In other words, the x indices +and the indices of the resolvent entries are completely decoupled; only explicit products of entries of (A, B) +53 + +represent the connections between them. +Step 4 : Sketch of the rest of the proof. +Through previous steps, for our case (AGAT ), we observed the modified version of the operations, which are +done for the resolvents G and G in [17]. +After with these modifications, it can be seen that the rest procedures (Step 6 ∼ 8 in [17]) of the proof for +the non-trivial leaves with the stopping rule, which relies on the number of off-diagonal terms (cf. Definition +5.7 of [17]), are also valid for the ZAB. 
+More precisely, by using the entrywise laws and H¨older’s inequality, the same estimation also holds for +the trivial leave as in Subsection 5.11. Furthermore, the most of the finitely generated non-trivial leaves have +a decay N −p/2 also by applying the same argument in the case of the trivial leaves (Subsection 5.12 in [17]), +and the remaining leading order non-trivial leaves have the same decay by applying the partial expectation +method (Subsection 5.13 in [17]). +We conclude the proof. +Proof of Lemma B.6. From the above version of an isotropic law, we also arrive at the isotropic version of +the entrywise law in Lemma B.8 by taking A = Q + X and B = Q. Then, it is easy to check that +wA = 2 +� +1 + Eq +� +Vq +� +, +wAB = 1 + Eq +� +Vq +, +wB = 1. +Precisely, applying Lemma B.12 directly, we see that +2⟨u, X(QT Q − zI)−1QT u⟩ += ⟨u, X(QT Q − zI)−1QT u⟩ + ⟨u, Q(QT Q − zI)−1XT u⟩ += ⟨u, A(BT B − zI)−1AT u⟩ − ⟨u, B(BT B − zI)−1BT u⟩ − ⟨u, X(BT B − zI)−1XT u⟩ += 2s(z) +� +1 + Eq +� +Vq +� ++ zs(z)s(z)2 +� +1 + Eq +� +Vq +�2 +− s(z) − zs(z)s(z)2 − s(z) − zs(z)s(z)2 E2 +q +Vq += 2 Eq +� +Vq +(s(z) + zs(z)s(z)2) = 2 Eq +� +Vq +(zs(z) + 1) +with O≺(N −φ) error terms, and it exactly matches the entrywise law since correlation wXQ = +Eq +√ +Vq . Thus, +we conclude that the improved PCA via the entrywise transform holds for the spike U s.t. ∥U T U − Ik∥F , +∥U∥∞ ≺ N −φ, where φ > 1/4. +C +Proof of CLTs +In Appendix C, we prove the CLT for the LSS of spiked random matrices. The proof of the CLT for the +LSS is based on the strategy of [6] in which the LSS is first written as a contour integral of the resolvent of +a spiked Wigner matrix. Then, the averaged trace of the resolvent converges to a Gaussian process, which +also implies that the limiting distribution of the LSS is Gaussian. +It is the biggest obstacle in adapting the proof in [6] for spiked matrices that the martingale CLT and +covariance computation are hard to be reproduced with spikes; even with the special choice of rank-1 spike +the proof for the CLT is very tedious as in [9]. In [22], the interpolation between a general rank-1 spike +54 + +and the special rank-1 spiked was introduced to compare the LSS, based on an ansatz that the mean and +the variance of the LSS do not depend on the choice of the spike. In this paper, since we do not have a +reference matrix to be compared with as in the rank-1 case, we introduce a direct interpolation between a +spiked random matrices of general rank and a matrix without any spikes. With the interpolation, we find +the change of the mean in the limiting Gaussian distribution and also prove that its variance is invariant. +C.1 +Proof of CLTs for spiked random matrices +Proof of Theorem 5.2. We adapt the proof of Theorem 5 in [22] with the following change. Instead of inter- +polating the spiked Wigner matrices M with the original signal and with the signal with all 1’s considered in +[9], we directly interpolate M and W and track the change of the mean. Consider the following interpolating +matrix +M(θ) = θ +√ +λUU T + W +and the corresponding eigenvalues {µi(θ)}N +i=1 of M(θ) for θ ∈ [0, 1]. Let Γ be a rectangular contour in the +proof of Theorem 5 in [22]. 
Applying Cauchy's integral formula, we have
\[
\sum_{i=1}^N f(\mu_i(1)) - N\int_{-2}^2\frac{\sqrt{4-x^2}}{2\pi}f(x)\,dx = -\frac{N}{2\pi i}\oint_\Gamma f(z)\big(s_N(1,z)-s_{sc}(z)\big)\,dz \tag{C.1}
\]
where $s_{sc}(z) = \frac{-z+\sqrt{z^2-4}}{2}$ is the Stieltjes transform of the Wigner semicircle law and $s_N(\theta,z)$ is the Stieltjes transform of the empirical spectral distribution (ESD) of $M(\theta)$ for $\theta\in[0,1]$. Note that the normalized trace of the resolvent satisfies
\[
\frac{1}{N}\operatorname{Tr}R(\theta,z) = \frac{1}{N}\sum_{i=1}^N\frac{1}{\mu_i(\theta)-z} = s_N(\theta,z) \tag{C.2}
\]
where $R(\theta,z)$ is the resolvent of $M(\theta)$, defined as
\[
R(\theta,z) := (M(\theta)-zI)^{-1} \tag{C.3}
\]
for $z\in\mathbb{C}^+$ and $\theta\in[0,1]$.

The change of the mean between the CLT for $W$ and the CLT for $M$ can be computed by tracking the change of the resolvent in (C.3), since (C.1) can be decomposed as
\[
\sum_{i=1}^N f(\mu_i(1)) - N\int_{-2}^2\frac{\sqrt{4-x^2}}{2\pi}f(x)\,dx = -\frac{1}{2\pi i}\oint_\Gamma f(z)\big(\operatorname{Tr}R(1,z)-\operatorname{Tr}R(0,z)\big)\,dz \tag{C.4}
\]
\[
\qquad\qquad -\frac{1}{2\pi i}\oint_\Gamma f(z)\big(\operatorname{Tr}R(0,z)-Ns_{sc}(z)\big)\,dz, \tag{C.5}
\]
and the fluctuation result for (C.5) is already given in [6].

Set $\Gamma^\varepsilon = \{z\in\mathbb{C}:\min_{w\in\Gamma}|z-w|\le\varepsilon\}$. Choose $\varepsilon$ so that
\[
\min_{w\in\Gamma^\varepsilon,\,x\in[-2,2]}|x-w| > 2\varepsilon.
\]
Following the proof of Theorem 5 in [22], on $z\in\Gamma^\varepsilon_{1/2} := \Gamma^\varepsilon\cap\{z\in\mathbb{C}:|\operatorname{Im}z|>N^{-1/2}\}$, we first find that
\[
\frac{\partial}{\partial\theta}\operatorname{Tr}R(\theta,z) = -\sum_{m=1}^k\sqrt\lambda\,\frac{\partial}{\partial z}\big(u^{(m)T}R(\theta,z)u^{(m)}\big) = -k\frac{\partial}{\partial z}\Big(\frac{\sqrt\lambda s_{sc}(z)}{1+\theta\sqrt\lambda s_{sc}(z)}\Big) + O(N^{-\frac12}) = -\frac{k\sqrt\lambda s'_{sc}(z)}{(1+\theta\sqrt\lambda s_{sc}(z))^2} + O(N^{-\frac12}) \tag{C.6}
\]
with high probability. More precisely, since the elementary resolvent expansion implies
\[
R(0,z) - R(\theta,z) = \theta\sqrt\lambda\,R(\theta,z)\Big(\sum_{\ell=1}^k u^{(\ell)}u^{(\ell)T}\Big)R(0,z), \tag{C.7}
\]
we find that
\[
u^{(m)T}R(0,z)u^{(m)} = u^{(m)T}R(\theta,z)u^{(m)} + \theta\sqrt\lambda\sum_{\ell=1}^k\big(u^{(m)T}R(\theta,z)u^{(\ell)}\big)\big(u^{(\ell)T}R(0,z)u^{(m)}\big).
\]
From the rigidity of the eigenvalues, we have the deterministic bound for the resolvent
\[
\big|u^{(m)T}R(\theta,z)u^{(\ell)}\big| \le \|R(\theta,z)\| \le C. \tag{C.8}
\]
Since the columns $\{u^{(\ell)}\}_{\ell=1}^k$ of the spike are orthonormal, the isotropic local law for $R(0,z)$ implies that
\[
u^{(m)T}R(0,z)u^{(\ell)} = s_{sc}(z)\delta_{m\ell} + O(N^{-1/2}) \tag{C.9}
\]
uniformly on $z\in\Gamma^\varepsilon$. We then obtain that
\[
u^{(m)T}R(0,z)u^{(m)} = \big(u^{(m)T}R(\theta,z)u^{(m)}\big)\Big(1+\theta\sqrt\lambda\,u^{(m)T}R(0,z)u^{(m)}\Big) + O(N^{-\frac12})
\]
and so
\[
u^{(m)T}R(\theta,z)u^{(m)} = \frac{s_{sc}(z)}{1+\theta\sqrt\lambda s_{sc}(z)} + O(N^{-\frac12}).
\]
This proves (C.6).

Moreover, on $\Gamma^\varepsilon$, we can easily check that exactly the same argument holds for a finite-rank perturbation of a Wigner matrix (e.g., by the interlacing and rigidity properties). Thus, we conclude that (C.4) equals
\[
\frac{k}{2\pi i}\oint_\Gamma\frac{\sqrt\lambda s'_{sc}(z)}{1+\sqrt\lambda s_{sc}(z)}f(z)\,dz + o(1)
\]
with high probability.

Finally, following the computation in the proof of Lemma 4.4 in [9], we find that the difference between the LSS of $M$ and the LSS of $W$ is
\[
k\sum_{\ell=1}^\infty\lambda^{\ell/2}\,\tau_\ell(f). \tag{C.10}
\]
This proves the desired theorem.
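The mean shift (C.10) is easy to probe numerically. The following sketch (not part of the proof; all parameter values are arbitrary illustrative choices) samples a rank-$k$ spiked GOE matrix and compares the empirical LSS difference for $f(x)=x^2$ — for which $\tau_2(f)=1$ and all other $\tau_\ell(f)$ vanish, so (C.10) predicts a shift of $k\lambda$ — with the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, lam, trials = 800, 2, 0.25, 40

# Orthonormal spike U (k columns) via QR of a Gaussian matrix.
U, _ = np.linalg.qr(rng.standard_normal((N, k)))

shifts = []
for _ in range(trials):
    A = rng.standard_normal((N, N))
    W = (A + A.T) / np.sqrt(2 * N)        # GOE-normalized Wigner matrix
    M = np.sqrt(lam) * U @ U.T + W        # equal SNR lam for all k spikes
    # LSS difference for f(x) = x^2: Tr f(M) - Tr f(W) = sum of squared entries
    shifts.append(np.sum(M**2) - np.sum(W**2))

# For f(x) = x^2, tau_2(f) = 1 and all other tau_l vanish,
# so (C.10) predicts a mean shift of k * lam.
print("empirical mean shift:", np.mean(shifts))
print("predicted k * lam  :", k * lam)
```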
Proof of Theorem 5.5. The proof of the CLT for spiked rectangular matrices is quite similar to the spiked Wigner case. We first consider the interpolating matrix for the additive model, defined as
\[
Y(\theta) = \theta\sqrt\lambda\,UV^T + X \tag{C.11}
\]
for $\theta\in[0,1]$. Note that $Y(0)=X$ and $Y(1)=Y$. Denote by $\mu_1(\theta)\ge\mu_2(\theta)\ge\cdots\ge\mu_M(\theta)$ the eigenvalues of $Y(\theta)Y(\theta)^T$. We also define the resolvents
\[
G(\theta,z) = (Y(\theta)Y(\theta)^T - zI)^{-1}, \qquad \underline{G}(\theta,z) = (Y(\theta)^TY(\theta) - zI)^{-1} \tag{C.12}
\]
for $z\in\mathbb{C}$.

We choose ($N$-independent) constants $a_-<d_-$, $a_+>d_+$, and $v_0\in(0,1)$ so that the function $f$ is analytic on the rectangular contour $\Gamma$ whose vertices are $(a_-\pm iv_0)$ and $(a_+\pm iv_0)$. With overwhelming probability, all eigenvalues of $Y(\theta)Y(\theta)^T$ are contained inside $\Gamma$. Applying Cauchy's integral formula, we find that
\[
\sum_{i=1}^M f(\mu_i(1)) - \sum_{i=1}^M f(\mu_i(0)) = -\frac{1}{2\pi i}\oint_\Gamma f(z)\big(\operatorname{Tr}G(1,z)-\operatorname{Tr}G(0,z)\big)\,dz. \tag{C.13}
\]
To estimate the difference $\operatorname{Tr}G(1,z)-\operatorname{Tr}G(0,z)$, we consider its derivative $\frac{\partial}{\partial\theta}\operatorname{Tr}G(\theta,z)$. Note that
\[
\frac{\partial G_{ab}(\theta)}{\partial Y_{ij}(\theta)} = -G_{ai}(\theta)(Y(\theta)^TG(\theta))_{jb} - (G(\theta)Y(\theta))_{aj}G_{ib}(\theta), \qquad \frac{dY_{ij}(\theta)}{d\theta} = \sqrt\lambda\,u_iv_j^T. \tag{C.14}
\]
Thus, by the chain rule,
\[
\begin{aligned}
\frac{\partial}{\partial\theta}\operatorname{Tr}G(\theta,z) &= \sum_{a=1}^M\sum_{i=1}^M\sum_{j=1}^N\frac{\partial Y_{ij}(\theta)}{\partial\theta}\frac{\partial G_{aa}(\theta)}{\partial Y_{ij}(\theta)}\\
&= -\sum_{a=1}^M\sum_{i=1}^M\sum_{j=1}^N\sqrt\lambda\,u_iv_j^T\big[G_{ai}(\theta)(Y(\theta)^TG(\theta))_{ja}+(G(\theta)Y(\theta))_{aj}G_{ia}(\theta)\big]\\
&= -2\sum_{a=1}^M\sum_{i=1}^M\sum_{j=1}^N\sum_{b=1}^M\sqrt\lambda\,u_iv_j^T\,Y_{bj}(\theta)G_{ba}(\theta)G_{ai}(\theta). \tag{C.15}
\end{aligned}
\]
From the fact
\[
\Big(\frac{\partial}{\partial z}G(\theta)\Big)_{bi} = (G(\theta)^2)_{bi} = \sum_a G_{ba}(\theta)G_{ai}(\theta),
\]
we then find that
\[
\frac{\partial}{\partial\theta}\operatorname{Tr}G(\theta,z) = -2\sqrt\lambda\,\frac{\partial}{\partial z}\sum_{i=1}^M\sum_{j=1}^N u_iv_j^T(G(\theta)Y(\theta))_{ij} = -2\sqrt\lambda\,\frac{\partial}{\partial z}\sum_{\ell=1}^k\langle u^{(\ell)}, G(\theta)Y(\theta)v^{(\ell)}\rangle. \tag{C.16}
\]
It remains to estimate $\frac{\partial}{\partial z}\langle u^{(\ell)},G(\theta)Y(\theta)v^{(\ell)}\rangle$ for $1\le\ell\le k$. It suffices to estimate the desired term for fixed $\ell$; from now on, we omit the $\ell$-dependency. Note that
\[
\langle u, G(\theta)Y(\theta)v\rangle = \theta\sqrt\lambda\langle u,G(\theta)u\rangle + \langle u,G(\theta)Xv\rangle.
\]
We consider the resolvent expansion
\[
G(0,z)-G(\theta,z) = G(\theta,z)\big(H(\theta)-H(0)\big)G(0,z) = G(\theta,z)\big(\theta^2\lambda\,uu^T + \theta\sqrt\lambda\,Xvu^T + \theta\sqrt\lambda\,uv^TX^T\big)G(0,z). \tag{C.17}
\]
Taking inner products with $u$ and $v$, we obtain
\[
\langle u,G(0)u\rangle = \langle u,G(\theta)u\rangle + \theta^2\lambda\langle u,G(\theta)u\rangle\langle u,G(0)u\rangle + \theta\sqrt\lambda\langle u,G(\theta)Xv\rangle\langle u,G(0)u\rangle + \theta\sqrt\lambda\langle u,G(0)Xv\rangle\langle u,G(\theta)u\rangle \tag{C.18}
\]
and
\[
\langle u,G(0)Xv\rangle = \langle u,G(\theta)Xv\rangle + \theta^2\lambda\langle u,G(\theta)u\rangle\langle u,G(0)Xv\rangle + \theta\sqrt\lambda\langle u,G(\theta)Xv\rangle\langle u,G(0)Xv\rangle + \theta\sqrt\lambda\langle v,X^TG(0)Xv\rangle\langle u,G(\theta)u\rangle, \tag{C.19}
\]
where we omitted the $z$-dependence for brevity. We then use the following result to control the terms in (C.18) and (C.19). Recall the definitions of $s(z)$ and $\underline{s}(z)$ in Lemmas B.4 and B.11. Moreover, we consider the same linearization $H^X(z)$ of the matrix $X$ and its inverse $R^X(z) = H^X(z)^{-1}$ as in (B.33) and (B.34).

Lemma C.1 (Isotropic local law). For an $N$-independent constant $\varepsilon>0$, let $\Gamma^\varepsilon$ be the $\varepsilon$-neighborhood of $\Gamma$, i.e.,
\[
\Gamma^\varepsilon = \{z\in\mathbb{C}:\min_{w\in\Gamma}|z-w|\le\varepsilon\}.
\]
Choose $\varepsilon$ small enough that the distance between $\Gamma^\varepsilon$ and $[d_-,d_+]$ is larger than $2\varepsilon$, i.e.,
\[
\min_{w\in\Gamma^\varepsilon,\,x\in[d_-,d_+]}|x-w| > 2\varepsilon. \tag{C.20}
\]
Then, for any unit vectors $x,y\in\mathbb{C}^{M+N}$ independent of $X$,
\[
|\langle x,(R^X(z)-\Pi(z))y\rangle| \prec N^{-1/2} \tag{C.21}
\]
uniformly on $z\in\Gamma^\varepsilon$, where
\[
\Pi(z) = \begin{pmatrix} s(z)\cdot I_M & 0\\ 0 & z\underline{s}(z)\cdot I_N \end{pmatrix}. \tag{C.22}
\]
Proof. See Theorems 3.6 and 3.7, Corollary 3.9, and Remark 3.10 in [37]. Note that $\operatorname{Im}s(z),\operatorname{Im}\underline{s}(z) = \Theta(\eta)$ on the vertical parts of $\Gamma^\varepsilon$, i.e., the neighborhoods of the line segments joining $(a_++iv_0)$ and $(a_+-iv_0)$ (respectively $(a_-+iv_0)$ and $(a_--iv_0)$).

Set
\[
A := \langle u,G(0,z)u\rangle, \qquad B := \langle u,G(0,z)Xv\rangle, \qquad C := \langle v,X^TG(0,z)Xv\rangle.
\]
Recall that
\[
R^X(z) = \begin{pmatrix} G(0,z) & G(0,z)X\\ X^TG(0,z) & z\underline{G}(0,z) \end{pmatrix}. \tag{C.23}
\]
Then, as consequences of Lemma C.1 with appropriate choices of the deterministic vectors,
\[
A = s(z)+O_\prec(N^{-1/2}), \qquad C = \langle v,z\underline{G}(0,z)v\rangle + 1 + O_\prec(N^{-1/2}) = d_0(zs(z)+1)+O_\prec(N^{-1/2}), \tag{C.24}
\]
and $B = O_\prec(N^{-1/2})$. We thus have from (C.18) and (C.19) that
\[
\langle u,G(\theta)Xv\rangle = \frac{-\theta d_0\sqrt\lambda\,s(z)(zs(z)+1)}{\theta^2\lambda zs(z)+\theta^2\lambda+1}+O_\prec(N^{-1/2}), \qquad \langle u,G(\theta)u\rangle = \frac{s(z)}{\theta^2\lambda zs(z)+\theta^2\lambda+1}+O_\prec(N^{-1/2}), \tag{C.25}
\]
and hence
\[
\langle u,G(\theta)Y(\theta)v\rangle = \theta\sqrt\lambda\langle u,G(\theta)u\rangle + \langle u,G(\theta)Xv\rangle = \frac{\theta\sqrt\lambda zs(z)+\theta\sqrt\lambda}{\theta^2\lambda zs(z)+\theta^2\lambda+1}+O_\prec(N^{-1/2}). \tag{C.26}
\]
Note that this estimate is uniform in $\theta$.
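As an aside, the inputs (C.24) to this computation are easy to test numerically. In the sketch below (illustrative only; the matrix sizes and the evaluation point $z$ are arbitrary, and $s(z)$ is approximated by the empirical Stieltjes transform of $XX^T$), the three quantities $A$, $B$, $C$ are computed for random unit vectors $u$, $v$ and compared with their predicted limits; the discrepancies should be of order $N^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 600, 1200                      # d0 = M/N = 0.5
d0 = M / N
z = complex(4.0, 0.5)                 # a point away from the MP bulk [d-, d+]

X = rng.standard_normal((M, N)) / np.sqrt(N)
u = rng.standard_normal(M); u /= np.linalg.norm(u)
v = rng.standard_normal(N); v /= np.linalg.norm(v)

G = np.linalg.inv(X @ X.T - z * np.eye(M))
s_emp = np.trace(G) / M               # empirical Stieltjes transform ~ s(z)

A = u @ G @ u
B = u @ G @ (X @ v)
C = v @ (X.T @ G @ X) @ v

print("|A - s(z)|         :", abs(A - s_emp))               # O(N^{-1/2})
print("|B|                :", abs(B))                       # O(N^{-1/2})
print("|C - d0(z s + 1)|  :", abs(C - d0 * (z * s_emp + 1)))
```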
Differentiating (C.26) with respect to $z$ and plugging it back into (C.16), we get
\[
\frac{\partial}{\partial\theta}\operatorname{Tr}G(\theta,z) = -k\,\frac{2\theta\lambda\frac{d}{dz}(zs(z)+1)}{(\theta^2\lambda zs(z)+\theta^2\lambda+1)^2}+O_\prec(N^{-1/2})
\]
and, integrating over $\theta$, we obtain
\[
\operatorname{Tr}G(1,z)-\operatorname{Tr}G(0,z) = \int_0^1\frac{\partial}{\partial\theta}\operatorname{Tr}G(\theta,z)\,d\theta = -k\,\frac{\lambda\frac{d}{dz}(zs(z)+1)}{\lambda zs(z)+\lambda+1}+O_\prec(N^{-1/2}). \tag{C.27}
\]
We now invoke the following relation between the Stieltjes transforms of the Marchenko–Pastur law and the Wigner semicircle law. Let
\[
s_{sc}(z) = \frac{-z+\sqrt{z^2-4}}{2}
\]
be the Stieltjes transform of the Wigner semicircle law and
\[
\phi(z) = \frac{1}{\sqrt{d_0}}\big(z-(1+d_0)\big).
\]
Then
\[
\sqrt{d_0}\,(zs(z)+1) = s_{sc}(\phi(z)). \tag{C.28}
\]
We thus have
\[
\frac{1}{2\pi i}\oint_\Gamma f(z)\,\frac{\lambda\frac{d}{dz}(zs(z)+1)}{\lambda zs(z)+\lambda+1}\,dz = \frac{1}{2\pi i}\oint_\Gamma \tilde f(\phi(z))\,\frac{\lambda s'_{sc}(\phi(z))\phi'(z)}{\lambda s_{sc}(\phi(z))+\sqrt{d_0}}\,dz = \frac{1}{2\pi i}\oint_{\tilde\Gamma}\tilde f(\phi)\,\frac{\lambda s'_{sc}(\phi)}{\lambda s_{sc}(\phi)+\sqrt{d_0}}\,d\phi \tag{C.29}
\]
where we let $\tilde f(z) := f(\sqrt{d_0}\,z+1+d_0)$ and $\tilde\Gamma = \phi(\Gamma)$. (Note that $\tilde\Gamma$ contains the interval $[-2,2]$.)

So far, we have proved that
\[
\sum_{i=1}^M f(\mu_i(1)) - \sum_{i=1}^M f(\mu_i(0)) = \frac{k}{2\pi i}\oint_{\tilde\Gamma}\tilde f(\phi)\,\frac{\lambda s'_{sc}(\phi)}{\lambda s_{sc}(\phi)+\sqrt{d_0}}\,d\phi + O_\prec(N^{-1/2}). \tag{C.30}
\]
Since the difference in (C.30) is the sum of a deterministic term and a random term stochastically dominated by $N^{-1/2}$, the CLT holds for the LSS of the non-null model $Y(1)$. Moreover, the variance is the same as that of the null model, which is
\[
V_Y(f) = 2\sum_{\ell=1}^\infty\ell\,\tau_\ell(\tilde f)^2+(w_4-3)\tau_1(\tilde f)^2. \tag{C.31}
\]
(See, e.g., [10].)

The change of the mean is the first term on the right-hand side of (C.30), which can be computed by following the proof of Lemma 4.4 in [9]. We obtain
\[
m_Y(f) = \frac{\tilde f(2)+\tilde f(-2)}{4}-\frac12\tau_0(\tilde f)+(w_4-3)\tau_2(\tilde f)+k\sum_{\ell=1}^\infty\Big(\frac{\lambda}{\sqrt{d_0}}\Big)^\ell\tau_\ell(\tilde f). \tag{C.32}
\]
This proves the first part of Theorem 5.5 for the additive model.
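The relation (C.28) can be verified numerically away from the spectrum. In the sketch below (illustrative parameters only), the Marchenko–Pastur Stieltjes transform is estimated from sampled eigenvalues, and the semicircle transform is evaluated in closed form on the principal branch (which has positive imaginary part for $\operatorname{Im}z>0$).

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 1000, 2000
d0 = M / N
z = complex(3.5, 0.3)                      # outside the MP bulk for d0 = 0.5

X = rng.standard_normal((M, N)) / np.sqrt(N)
mu = np.linalg.eigvalsh(X @ X.T)
s = np.mean(1.0 / (mu - z))                # empirical MP Stieltjes transform

phi = (z - (1 + d0)) / np.sqrt(d0)
s_sc = (-phi + np.sqrt(phi**2 - 4)) / 2    # semicircle transform, Im > 0 branch

# Both sides of (C.28) should agree up to an O(N^{-1/2}) sampling error.
print("sqrt(d0)*(z*s + 1):", np.sqrt(d0) * (z * s + 1))
print("s_sc(phi(z))      :", s_sc)
```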
For the multiplicative model, we follow the same strategy as in the additive model. Let
\[
Y(\theta) = X+\theta\gamma\,UU^TX \tag{C.33}
\]
for $\theta\in[0,1]$. Note that $Y(0)=X$ and $Y(1)=Y$. We denote by $\mu_1(\theta)\ge\mu_2(\theta)\ge\cdots\ge\mu_M(\theta)$ the eigenvalues of $Y(\theta)Y(\theta)^T$, and also let
\[
G(\theta,z) = (Y(\theta)Y(\theta)^T-zI)^{-1}, \qquad \underline{G}(\theta,z) = (Y(\theta)^TY(\theta)-zI)^{-1} \tag{C.34}
\]
for $z\in\mathbb{C}$. We have the relations
\[
\frac{\partial G_{ab}(\theta)}{\partial Y_{ij}(\theta)} = -G_{ai}(\theta)(Y(\theta)^TG(\theta))_{jb}-(G(\theta)Y(\theta))_{aj}G_{ib}(\theta), \qquad \frac{\partial Y_{ij}(\theta)}{\partial\theta} = \gamma\sum_{b=1}^M u_iu_b^TX_{bj}. \tag{C.35}
\]
Following (C.15)–(C.16), we get
\[
\begin{aligned}
\frac{\partial}{\partial\theta}\operatorname{Tr}G(\theta,z) &= -\gamma\sum_{a=1}^M\sum_{i=1}^M\sum_{j=1}^N\sum_{b=1}^M u_iu_b^TX_{bj}\big[G_{ai}(\theta)(Y(\theta)^TG(\theta))_{ja}+(G(\theta)Y(\theta))_{aj}G_{ia}(\theta)\big]\\
&= -2\gamma\sum_{a=1}^M\sum_{i=1}^M\sum_{j=1}^N\sum_{b=1}^M u_iu_b^TX_{bj}(Y(\theta)^TG(\theta))_{ja}G_{ai}(\theta)\\
&= -2\gamma\,\frac{\partial}{\partial z}\sum_{i=1}^M\sum_{j=1}^N\sum_{b=1}^M u_iu_b^TX_{bj}(G(\theta)Y(\theta))_{ij}\\
&= -2\gamma\,\frac{\partial}{\partial z}\sum_{\ell=1}^k\langle u^{(\ell)},G(\theta)Y(\theta)X^Tu^{(\ell)}\rangle = -2\gamma\,\frac{\partial}{\partial z}\sum_{\ell=1}^k\langle u^{(\ell)},G(\theta)Y(\theta)Y(0)^Tu^{(\ell)}\rangle. \tag{C.36}
\end{aligned}
\]
Moreover, since
\[
Y(0) = X = (I+\theta\gamma UU^T)^{-1}Y(\theta) = \Big(I-\frac{\theta\gamma}{1+\theta\gamma}UU^T\Big)Y(\theta), \tag{C.37}
\]
we have
\[
\langle u^{(\ell)},G(\theta)Y(\theta)Y(0)^Tu^{(\ell)}\rangle = \langle u^{(\ell)},G(\theta)Y(\theta)Y(\theta)^T(I+\theta\gamma UU^T)^{-1}u^{(\ell)}\rangle = \langle u^{(\ell)},(I+zG(\theta))(I+\theta\gamma UU^T)^{-1}u^{(\ell)}\rangle = \frac{1}{1+\theta\gamma}+\frac{z}{1+\theta\gamma}\langle u^{(\ell)},G(\theta)u^{(\ell)}\rangle. \tag{C.38}
\]
To estimate $\langle u^{(\ell)},G(\theta)u^{(\ell)}\rangle$, we use the following anisotropic local law from [37].

Lemma C.2 (Anisotropic local law). Let $\Gamma^\varepsilon$ be the $\varepsilon$-neighborhood of $\Gamma$ as in Lemma C.1. Then, for any unit vectors $x,y\in\mathbb{C}^M$ independent of $X$, the following estimate holds uniformly on $z\in\Gamma^\varepsilon$:
\[
\Big|\Big\langle x,\Big(G(\theta,z)+\big(zI+z\underline{s}(z)(I+\theta\gamma UU^T)^2\big)^{-1}\Big)y\Big\rangle\Big| \prec N^{-\frac12}. \tag{C.39}
\]
Proof. The proof of Lemma C.2 is the same as that of Lemma C.1.

Now, as in the additive case, we drop the $\ell$-dependency. From Lemma C.2, we find that
\[
\langle u,G(\theta)u\rangle = -\Big\langle u,\big(zI+z\underline{s}(z)(I+\theta\gamma UU^T)^2\big)^{-1}u\Big\rangle+O(N^{-1/2}) = -\frac{1}{z\big(1+(1+\theta\gamma)^2\underline{s}(z)\big)}+O(N^{-1/2}), \tag{C.40}
\]
and plugging it into (C.38), we obtain
\[
\langle u,G(\theta)Y(\theta)Y(0)^Tu\rangle = \frac{1}{1+\theta\gamma}-\frac{1}{(1+\theta\gamma)\big(1+(1+\theta\gamma)^2\underline{s}(z)\big)}+O(N^{-1/2}). \tag{C.41}
\]
We thus get
\[
\frac{\partial}{\partial\theta}\operatorname{Tr}G(\theta,z) = -2k\gamma\,\frac{(1+\theta\gamma)\underline{s}'(z)}{\big(1+(1+\theta\gamma)^2\underline{s}(z)\big)^2}+O(N^{-1/2}), \tag{C.42}
\]
and integrating it yields
\[
\operatorname{Tr}G(1,z)-\operatorname{Tr}G(0,z) = -k\,\frac{\lambda\underline{s}'(z)}{(1+\underline{s}(z))\big(1+(1+\lambda)\underline{s}(z)\big)}+O(N^{-1/2}) = -k\,\frac{\lambda\frac{d}{dz}(zs(z)+1)}{\lambda zs(z)+\lambda+1}+O(N^{-1/2}). \tag{C.43}
\]
Since (C.43) coincides with (C.27), the rest of the proof is exactly the same as in the additive case. This finishes the proof of the first part of Theorem 5.5.

C.2   Proof of CLTs for entrywise transformed matrices

Proof of Theorem 5.3. We adapt the proof of Theorem 7 in [22] with the following changes. Let $S$ be the variance matrix of the transformed matrix $\widetilde M$. We then find that
\[
S_{ij} = \mathbb{E}[\widetilde M_{ij}^2]-(\mathbb{E}[\widetilde M_{ij}])^2 = \frac1N+\lambda(G_H-F_g)(u_iu_j^T)^2+O(N^{1-8\phi})
\]
and
\[
S_{ii} = \mathbb{E}[\widetilde M_{ii}^2]-(\mathbb{E}[\widetilde M_{ii}])^2 = \frac{w_2}{N}+\lambda(G_{g,d}-F_{g,d})(u_iu_i^T)^2+O(N^{1-8\phi}).
\]
Normalizing and centering each entry of the matrix $\widetilde M$, we arrive at another Wigner matrix $\widetilde W$, where
\[
\widetilde W_{ij} = \frac{1}{\sqrt{NS_{ij}}}(\widetilde M_{ij}-\mathbb{E}\widetilde M_{ij}), \qquad \widetilde W_{ii} = \sqrt{\frac{w_2}{NS_{ii}}}(\widetilde M_{ii}-\mathbb{E}\widetilde M_{ii}).
\]
Interpolating $\widetilde W$ and $\widetilde M-\mathbb{E}[\widetilde M]$ by $\widetilde W(\theta) = (1-\theta)\widetilde W+\theta(\widetilde M-\mathbb{E}[\widetilde M])$, we see that $\widetilde W(\theta)$ is a general Wigner-type matrix with the corresponding quadratic vector equation
\[
-\frac{1}{m_i(\theta,z)} = z+\sum_{j=1}^N\mathbb{E}[\widetilde W_{ij}(\theta)^2]\,m_j(\theta,z),
\]
where $m_i(\theta,z)\delta_{ij}$ is the limiting distribution of the $(i,j)$-entry of the resolvent
\[
R^{\widetilde W}(\theta,z) = (\widetilde W(\theta)-zI)^{-1}
\]
for $0\le\theta\le1$. Recall that $s_{sc}(z)$ is the Stieltjes transform of the Wigner semicircle law. We can also directly check that $m_i(\theta,z) = s_{sc}(z)+C_1(u_iu_i^T)+C_2N^{-1} = s_{sc}(z)+O(N^{-2\phi})$. Moreover, the anisotropic local law for general Wigner-type matrices in [2] implies that, uniformly on $z\in\Gamma^\varepsilon_{1/2}$,
\[
u^{(m)T}R^{\widetilde W}(\theta,z)u^{(\ell)} = s_{sc}(z)\delta_{m\ell}+O(N^{-1/2}).
\]
Following the proofs of Lemmas B.2 and B.3 in [22], we check the following:

• Uniformly on $z\in\Gamma^\varepsilon_{1/2}$,
\[
\operatorname{Tr}R^{\widetilde W}(1,z)-\operatorname{Tr}R^{\widetilde W}(0,z) = k\lambda(G_H-F_g)s'_{sc}(z)s_{sc}(z)+O(N^{1-4\phi}). \tag{C.44}
\]
• Uniformly on $z\in\Gamma^\varepsilon\setminus\Gamma^\varepsilon_{1/2}$,
\[
|\operatorname{Tr}R^{\widetilde W}(1,z)-\operatorname{Tr}R^{\widetilde W}(0,z)| = O(N^{1/3}). \tag{C.45}
\]
Compared with the bound shown in [22], we give the following remark:

• The error bound in (C.44) is sharper. This sharper bound can be obtained by using the fact $\sum_a u_a(\ell)^2 = 1$ instead of $\big|\sum_a u_a(\ell)^2\big| \le N\|u(\ell)\|_\infty^2$.

Our next step is to consider $\widetilde M = \widetilde W(1)+\mathbb{E}[\widetilde M]$. Since
\[
\widetilde M = \widetilde W(1)+\sqrt{\lambda F_g}\,UU^T+\operatorname{diag}(d_1,\cdots,d_N)+E,
\]
where $d_i = \mathbb{E}[\widetilde M_{ii}]-\sqrt{\lambda F_g}(UU^T)_{ii}$, we find that
\[
\operatorname{Tr}(\widetilde M-zI)^{-1}-\operatorname{Tr}R^{\widetilde W}(0,z) = k\lambda(G_H-F_g)s'_{sc}(z)s_{sc}(z)-\frac{k\sqrt{\lambda F_g}\,s'_{sc}(z)}{1+\sqrt{\lambda F_g}\,s_{sc}(z)}-k\sqrt\lambda\big(\sqrt{F_{g,d}}-\sqrt{F_g}\big)s'_{sc}(z)+O(N^{-1/2})
\]
uniformly on $z\in\Gamma^\varepsilon_{1/2}$. Thus, we obtain the desired CLT by applying Cauchy's integral formula as in the proof of Theorem 5.2.

Proof of Theorem 5.6. Since the proof of the CLT for the transformed spiked Wigner matrix follows the proof in [22], we only describe the procedure briefly. On the other hand, there is no technical reference for spiked rectangular matrices; as mentioned before, we only consider the additive case here.

We consider the optimal entrywise transformation defined by the function
\[
h(w) := -\frac{g'(w)}{g(w)}. \tag{C.46}
\]
If $\lambda = 0$, it is immediate to see that for all $i,j$,
\[
\mathbb{E}[h(\sqrt NY_{ij})] = \int_{-\infty}^\infty h(w)g(w)\,dw = -\int_{-\infty}^\infty g'(w)\,dw = 0.
\]
Further, with $\lambda = 0$, as shown in Proposition 4.2 of [50],
\[
F_g := \mathbb{E}[h(\sqrt NY_{ij})^2] = \int_{-\infty}^\infty h(w)^2g(w)\,dw = \int_{-\infty}^\infty\frac{g'(w)^2}{g(w)}\,dw \ge 1, \tag{C.47}
\]
where the equality holds if and only if $\sqrt NX_{ij}$ is a standard Gaussian (hence $h(w) = w$).

We define the transformed matrix $\widetilde Y$ entrywise by
\[
\widetilde Y_{ij} = \frac{1}{\sqrt{F_gN}}\,h(\sqrt NY_{ij}). \tag{C.48}
\]
Note that the entries of $\widetilde Y$ are independent. Since $g$ is smooth, $h$ is also smooth, and all moments of $\sqrt N\widetilde Y_{ij}$ are $O(1)$. Thus, applying a high-order Markov inequality, it is immediate to find that $\widetilde Y_{ij} = O(N^{-\frac12})$.

C.2.1   Decomposition of the transformed matrix

We first estimate the mean and the variance of each entry by comparison with the pre-transformed entries. For all $i,j$, we find that
\[
\mathbb{E}[\widetilde Y_{ij}] = \frac{1}{\sqrt{F_gN}}\int_{-\infty}^\infty h(w)\,g\big(w-\sqrt{N\lambda}\,u_iv_j^T\big)\,dw = -\frac{1}{\sqrt{F_gN}}\int_{-\infty}^\infty\frac{g'(w)}{g(w)}\Big(g\big(w-\sqrt{N\lambda}\,u_iv_j^T\big)-g(w)\Big)\,dw. \tag{C.49}
\]
Consider the Taylor expansion
\[
g\big(w-\sqrt{N\lambda}\,u_iv_j^T\big)-g(w) = \sum_{\ell=1}^4\frac{g^{(\ell)}(w)}{\ell!}\big(-\sqrt{N\lambda}\,u_iv_j^T\big)^\ell+\frac{g^{(5)}\big(w-\theta\sqrt{N\lambda}\,u_iv_j^T\big)}{5!}\big(-\sqrt{N\lambda}\,u_iv_j^T\big)^5 \tag{C.50}
\]
for some $\theta\in(0,1)$. Note that the second and fourth terms in the summation are even functions. Since $g'/g$ is an odd function, we find that
\[
\mathbb{E}[\widetilde Y_{ij}] = \frac{1}{\sqrt{F_g}}\sqrt\lambda\,u_iv_j^T\int_{-\infty}^\infty\frac{g'(w)^2}{g(w)}\,dw+C_3N\big(\sqrt\lambda u_iv_j^T\big)^3+O(N^2(u_iv_j^T)^5) = \sqrt{\lambda F_g}\,u_iv_j^T+C_3N\big(\sqrt\lambda u_iv_j^T\big)^3+O(N^2(u_iv_j^T)^5) \tag{C.51}
\]
for some ($N$-independent) constant $C_3$. Similarly, since $(g'/g)^2$ is even,
\[
\begin{aligned}
\mathbb{E}[\widetilde Y_{ij}^2] &= \frac{1}{F_gN}\int_{-\infty}^\infty\Big(\frac{g'(w)}{g(w)}\Big)^2 g\big(w-\sqrt{N\lambda}\,u_iv_j^T\big)\,dw\\
&= \frac1N+\frac{1}{F_gN}\int_{-\infty}^\infty\Big(\frac{g'(w)}{g(w)}\Big)^2\Big(g\big(w-\sqrt{N\lambda}\,u_iv_j^T\big)-g(w)\Big)\,dw\\
&= \frac1N+\frac{1}{2F_g}\big(\sqrt\lambda u_iv_j^T\big)^2\int_{-\infty}^\infty\frac{g'(w)^2g''(w)}{g(w)^2}\,dw+O(N(u_iv_j^T)^4)\\
&= \frac1N+\lambda G_H(u_iv_j^T)^2+O(N(u_iv_j^T)^4), \tag{C.52}
\end{aligned}
\]
where
\[
G_H = \frac{1}{2F_g}\int_{-\infty}^\infty\frac{g'(w)^2g''(w)}{g(w)^2}\,dw.
\]
The evaluation of the mean and the variance shows that the transformed matrix $\widetilde Y$ is not a spiked rectangular matrix when $\lambda>0$, since the variances of the entries are not identical. Our strategy is to approximate $\widetilde Y$ by a spiked generalized rectangular Gram matrix, for which the variances of the entries are approximately $1/N$ in the high-dimensional regime. Let $S$ be the variance matrix of $\widetilde Y$, defined as
\[
S_{ij} = \mathbb{E}[\widetilde Y_{ij}^2]-(\mathbb{E}[\widetilde Y_{ij}])^2. \tag{C.53}
\]
From (C.51) and (C.52),
\[
S_{ij} = \frac1N+(G_H-F_g)\big(\sqrt\lambda u_iv_j^T\big)^2+O(N\|U\|_\infty^4\|V\|_\infty^4), \tag{C.54}
\]
which shows that $\widetilde Y$ is indeed approximately a spiked generalized Gram matrix.
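The quantities in (C.46)–(C.51) are easy to illustrate for a concrete non-Gaussian density. The following sketch (not part of the proof; the noise choice and sample size are arbitrary) uses Laplace noise with unit variance, for which $h(w) = \sqrt2\,\operatorname{sgn}(w)$ and $F_g = 2$, and checks the mean amplification $\mathbb{E}[h(w+\varepsilon)] \approx F_g\varepsilon$ that underlies (C.51): the effective SNR is boosted from $\lambda$ to $\lambda F_g$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Laplace noise with unit variance: g(w) = exp(-sqrt(2)|w|)/sqrt(2),
# hence h(w) = -g'(w)/g(w) = sqrt(2)*sign(w).
h = lambda w: np.sqrt(2) * np.sign(w)

w = rng.laplace(scale=1 / np.sqrt(2), size=2_000_000)  # samples from g

# E[h] = 0 and F_g = E[h^2] = int g'^2/g; here h^2 = 2 identically, so F_g = 2 > 1
# (the Gaussian is the only density with F_g = 1).
Fg = np.mean(h(w) ** 2)
print("E[h(w)] ~", h(w).mean(), "   F_g =", Fg)

# Mean amplification behind (C.51): for a small entrywise signal eps,
# E[h(w + eps)] ~ F_g * eps.
eps = 0.05
print("E[h(w+eps)] ~", np.mean(h(w + eps)), "   F_g*eps =", Fg * eps)
```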
C.2.2   CLT for a random Gram matrix

We use the local law for general rectangular Gram matrices in [4]. Consider another $M\times N$ rectangular matrix $A = (A_{ij})$ defined by
\[
A_{ij} = \frac{1}{\sqrt{NS_{ij}}}\big(\widetilde Y_{ij}-\mathbb{E}[\widetilde Y_{ij}]\big). \tag{C.55}
\]
Note that $\mathbb{E}[A_{ij}] = 0$ and $\mathbb{E}[A_{ij}^2] = \frac1N$, so the matrix $A$ is a usual rectangular matrix. We set
\[
G^A(z) = (AA^T-zI)^{-1} \qquad (z\in\mathbb{C}^+). \tag{C.56}
\]
Next, we introduce an interpolation for $A$. For $0\le\theta\le1$, we define a matrix $A(\theta)$ by
\[
A_{ij}(\theta) = (1-\theta)A_{ij}+\theta\big(\widetilde Y_{ij}-\mathbb{E}[\widetilde Y_{ij}]\big) = \big(1-\theta+\theta\sqrt{NS_{ij}}\big)A_{ij} = \Big(1+\frac{\theta N\lambda(G_H-F_g)(u_iv_j^T)^2}{2}+O(N^2(u_iv_j^T)^4)\Big)A_{ij}. \tag{C.57}
\]
Note that $A(0) = A$ and $A(1) = \widetilde Y-\mathbb{E}[\widetilde Y]$. For $0\le\theta\le1$, $A(\theta)$ is a random Gram matrix considered in [4], satisfying the conditions (A)–(D) therein. Moreover, if we let
\[
G^A(\theta,z) = (A(\theta)A(\theta)^T-zI)^{-1} \qquad (z\in\mathbb{C}^+) \tag{C.58}
\]
and $S_{ij}(\theta) = \mathbb{E}[A_{ij}(\theta)^2]$, then Theorem 1.7 of [4] asserts that the limiting distribution of $G^A_{ij}(z)$ is $s_i(z)\delta_{ij}$, where $s_i(\theta,z)$ is the unique solution of the system of quadratic vector equations
\[
-\frac{1}{s_i(\theta,z)} = z+\sum_{j=1}^N S_{ij}(\theta)\,z\underline{s}_j(\theta,z) \tag{C.59}
\]
and
\[
-\frac{1}{\underline{s}_j(\theta,z)} = z+\sum_{i=1}^M S_{ij}(\theta)\,zs_i(\theta,z). \tag{C.60}
\]
Remark C.3. Recall that $s(z)$ is the Stieltjes transform of the Marchenko–Pastur measure. We can then find that $s_i(\theta,z) = s(z)+C_1(u_iu_i^T)+C_2N^{-1} = s(z)+O(N^{-1/2})$ and $\underline{s}_j(\theta,z) = \underline{s}(z)+C_1(v_jv_j^T)+C_2N^{-1} = \underline{s}(z)+O(N^{-1/2})$; see also Lemma 3.9 of [4].

For the resolvent $G^A(\theta,z)$, we use the following lemma for random Gram matrices:

Lemma C.4 (Anisotropic local law for random Gram matrices). Let $\Gamma^\varepsilon$ be the $\varepsilon$-neighborhood of $\Gamma$ as in Lemma C.1. Then, for any deterministic $x = (x_1,\ldots,x_M), y = (y_1,\ldots,y_M)\in\mathbb{C}^M$ with $\|x\| = \|y\| = 1$, the following estimate holds uniformly on $z\in\Gamma^\varepsilon\cap\{z\in\mathbb{C}^+:\operatorname{Im}z>N^{-\frac12}\}$:
\[
\Big|\sum_{i=1}^M\sum_{j=1}^M x_iG^A_{ij}(\theta,z)y_j-\sum_{i=1}^M s_i(\theta,z)x_iy_i\Big| = O(N^{-\frac12}), \tag{C.61}
\]
and, for any deterministic $x = (x_1,\ldots,x_N), y = (y_1,\ldots,y_N)\in\mathbb{C}^N$ with $\|x\| = \|y\| = 1$,
\[
\Big|\sum_{i=1}^N\sum_{j=1}^N x_i\underline{G}^A_{ij}(\theta,z)y_j-\sum_{i=1}^N \underline{s}_i(\theta,z)x_iy_i\Big| = O(N^{-\frac12}). \tag{C.62}
\]
Proof of Lemma C.4. Let $\Psi(z) = \sqrt{\frac{1}{M\operatorname{Im}z}}$ be the control parameter for the random Gram matrix model. We then note that the bound for the entrywise local law is $N^{-1/2}$, since $\Psi(z)\prec N^{-1/2}$ on $\Gamma^\varepsilon\cap\{z\in\mathbb{C}^+:\operatorname{Im}z>N^{-\frac12}\}$. With the entrywise local law in [4], the proof of the anisotropic law exactly follows the maximal expansion argument used in [2, 17] and Lemma B.12. We consider the following decomposition of (C.61):
\[
\sum_{i\neq j}x_iG^A_{ij}(\theta,z)x_j+\sum_{i=1}^M\big(G^A_{ii}-s_i(\theta,z)\big)x_ix_i. \tag{C.63}
\]
From now on, we drop the $A$-, $\theta$- and $z$-dependencies for brevity and use the linearization matrix $H^{A(\theta)}(z)\equiv H$ and its inverse $R$. Then, as usual, it suffices to prove that
\[
Z \equiv \sum_{a\neq b}x_aR_{ab}x_b \prec N^{-1/2}.
\]
To prove this high-probability bound, we bound the $2p$-th moments $\mathbb{E}[|Z|^{2p}]\prec N^{-p}$ by deriving the maximally expanded form via the resolvent identity in Lemma B.10.

Now we check the representation of the maximally expanded diagonal resolvent entries, e.g., $R^{(B\setminus b)}_{bb}$ for $b\in B$. Together with Remark C.3, we then conclude that the standard argument in [2] is valid for our model. By applying Schur's complement formula and (C.59), for $b\in B$,
\[
\begin{aligned}
\frac{1}{R^{(B\setminus b)}_{bb}} &= -z-\sum_{\alpha,\beta}^{(B)}H_{b\alpha}R^{(B)}_{\alpha\beta}H_{\beta b} = \frac{1}{s_b(\theta,z)}+\sum_\beta S_{b\beta}\big(z\underline{s}_\beta(\theta,z)\big)-\sum_{\alpha,\beta}^{(B)}H_{b\alpha}R^{(B)}_{\alpha\beta}H_{\beta b}\\
&= \frac{1}{s_b(\theta,z)}-\sum_\beta^{(B)}\Big(H_{b\beta}R^{(B)}_{\beta\beta}H_{\beta b}-S_{b\beta}\big(z\underline{s}_\beta(\theta,z)\big)\Big)-\sum_{\alpha\neq\beta}^{(B)}H_{b\alpha}R^{(B)}_{\alpha\beta}H_{\beta b}. \tag{C.64}
\end{aligned}
\]
Then (C.64) and the analogous representation of $R^{(B\setminus\beta)}_{\beta\beta}$ for $\beta\in B$ replace (6.2) in [2].

With the linearization $H$ and its inverse $R$, a useful by-product of the above argument is
\[
\langle x, G^A(\theta)A(\theta)y\rangle = \sum_a\sum_\alpha x_aR_{a\alpha}y_\alpha \prec N^{-1/2}. \tag{C.65}
\]
Note that our model satisfies the closeness condition (A3) of Assumption 2.2 in [3] (see also Remark 2.4 therein). On $\Gamma\setminus\Gamma^\varepsilon_{1/2}$, we use the following result on the rigidity of the eigenvalues.

Lemma C.5 (Rigidity of eigenvalues for the random Gram matrix). Denote by $\mu^A_1(\theta)\ge\mu^A_2(\theta)\ge\cdots\ge\mu^A_M(\theta)$ the eigenvalues of $A(\theta)A(\theta)^T$. Let $\gamma_i$ be the classical location of the eigenvalues with respect to the Marchenko–Pastur measure, defined by
\[
\int_{\gamma_i}^\infty\rho_{MP,d_0}(dx) = \frac1M\Big(i-\frac12\Big) \tag{C.66}
\]
for $i = 1,2,\ldots,M$. Then
\[
|\mu^A_i(\theta)-\gamma_i| = O(M^{-2/3}). \tag{C.67}
\]
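Before turning to the proof, the classical locations (C.66) and the rigidity bound (C.67) can be illustrated numerically for $A(0)$, which is a plain sample-covariance-type matrix. In the sketch below (illustrative only; the sizes are arbitrary), the $\gamma_i$ are obtained by numerically inverting the Marchenko–Pastur distribution function.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 500, 1000
d0 = M / N
dm, dp = (1 - np.sqrt(d0)) ** 2, (1 + np.sqrt(d0)) ** 2

# Marchenko-Pastur density for eigenvalues of X X^T (variance-1/N entries, M <= N).
x = np.linspace(dm, dp, 100_001)
rho = np.sqrt(np.maximum((dp - x) * (x - dm), 0)) / (2 * np.pi * d0 * x)

# Classical locations gamma_i from (C.66): mass to the right of gamma_i = (i - 1/2)/M.
cdf = np.cumsum(rho) * (x[1] - x[0])      # ~ int_{d-}^{x} rho
cdf /= cdf[-1]                            # normalize away quadrature error
q = 1 - (np.arange(1, M + 1) - 0.5) / M   # left-tail mass of the i-th largest location
gamma = np.interp(q, cdf, x)              # gamma_1 >= gamma_2 >= ... >= gamma_M

X = rng.standard_normal((M, N)) / np.sqrt(N)
mu = np.sort(np.linalg.eigvalsh(X @ X.T))[::-1]

print("max |mu_i - gamma_i|:", np.max(np.abs(mu - gamma)))  # small, O(M^{-2/3}) at the edge
```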
Proof. Note that the rigidity of the eigenvalues with an error of at most $O(M^{-2/3})$ holds for random Gram matrices at the classical locations with respect to the probability measure $\rho$ obtained from the Stieltjes transform $s_\rho(z) := \frac1M\sum_is_i(z)$; see Lemma 4 in [26]. Moreover, since $|s_i(\theta,z)-s(z)|, |\underline{s}_j(\theta,z)-\underline{s}(z)| = O(M^{-2\phi})$ for all $i$ and $j$, we also have the desired rigidity near the classical locations of the Marchenko–Pastur law $\rho_{MP,d_0}$.

Remark C.6. In fact, rigorous proofs of the rigidity and of the anisotropic law are not given in [4, 3]. However, as in the proof of the anisotropic local law for general Wigner-type matrices in [2], the above lemmas may be proved by using the local laws in [4] and the standard methods in [2] (Remark 2.10 in [4] and Remark 2.7 in [3]).

On $\Gamma^\varepsilon_{1/2}$, as a simple corollary of Lemma C.4, we obtain
\[
\big|\langle x, G^A(\theta,z)y\rangle-s(z)\langle x,y\rangle\big| = O(N^{-\frac12}) \tag{C.68}
\]
and
\[
\big|\langle x, \underline{G}^A(\theta,z)y\rangle-\underline{s}(z)\langle x,y\rangle\big| = O(N^{-\frac12}). \tag{C.69}
\]
We have the following lemma for the difference between $\operatorname{Tr}G^A(0,z)$ and $\operatorname{Tr}G^A(1,z)$ on $\Gamma^\varepsilon_{1/2}$.

Lemma C.7. Let $G^A(\theta,z)$ be defined as in (C.57) and (C.58). Then the following holds uniformly for $z\in\Gamma^\varepsilon_{1/2}$:
\[
\operatorname{Tr}G^A(1,z)-\operatorname{Tr}G^A(0,z) = -\lambda(G_H-F_g)k\,\frac{\partial}{\partial z}(zs(z)+1)+O(N\|U\|_\infty^2\|V\|_\infty^2). \tag{C.70}
\]
We will prove Lemma C.7 later.

From Lemma C.5, we find that
\[
|\operatorname{Tr}G^A(1,z)-\operatorname{Tr}G^A(0,z)| = \Big|\sum_{i=1}^M\Big(\frac{1}{\mu^A_i(1)-z}-\frac{1}{\mu^A_i(0)-z}\Big)\Big| = \Big|\sum_{i=1}^M\frac{\mu^A_i(0)-\mu^A_i(1)}{(\mu^A_i(1)-z)(\mu^A_i(0)-z)}\Big| \le \sum_{i=1}^M\frac{|\mu^A_i(0)-\gamma_i|+|\gamma_i-\mu^A_i(1)|}{|\mu^A_i(1)-z||\mu^A_i(0)-z|} = O(N^{1/3}) \tag{C.71}
\]
uniformly for $z\in\Gamma$. Thus, from (C.70) and (C.71),
\[
\begin{aligned}
&\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^A(1,z)\,dz-\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^A(0,z)\,dz\\
&\quad= \frac{1}{2\pi i}\int_{\Gamma^\varepsilon_{1/2}}f(z)\big(\operatorname{Tr}G^A(1,z)-\operatorname{Tr}G^A(0,z)\big)\,dz+\frac{1}{2\pi i}\int_{\Gamma\setminus\Gamma^\varepsilon_{1/2}}f(z)\big(\operatorname{Tr}G^A(1,z)-\operatorname{Tr}G^A(0,z)\big)\,dz\\
&\quad= -\frac{\lambda(G_H-F_g)k}{2\pi i}\int_{\Gamma^\varepsilon_{1/2}}f(z)\,\frac{\partial}{\partial z}(zs(z)+1)\,dz+O(N\|U\|_\infty^2\|V\|_\infty^2)+O(N^{-1/6})\\
&\quad= -\frac{\lambda(G_H-F_g)k}{2\pi i}\oint_\Gamma f(z)\,\frac{\partial}{\partial z}(zs(z)+1)\,dz+o(1). \tag{C.72}
\end{aligned}
\]
Furthermore, using the relation (C.28), we have
\[
\frac{1}{2\pi i}\oint_\Gamma f(z)\,\frac{\partial}{\partial z}(zs(z)+1)\,dz = \frac{1}{2\pi i}\oint_\Gamma f(z)\,\frac{1}{\sqrt{d_0}}\,s'_{sc}(\phi(z))\phi'(z)\,dz = \frac{1}{2\sqrt{d_0}\,\pi i}\oint_{\tilde\Gamma}\tilde f(\phi)s'_{sc}(\phi)\,d\phi = \frac{1}{\sqrt{d_0}}\,\tau_1(\tilde f).
\]

C.2.3   CLT for a random Gram matrix with a spike and small perturbation

Recall that $A(1) = \widetilde Y-\mathbb{E}[\widetilde Y]$. The next step in the approximation is to consider $\widetilde Y = A(1)+\mathbb{E}[\widetilde Y]$. Since $\mathbb{E}[\widetilde Y]$ is not a matrix of rank $k$, we instead consider
\[
B(\theta) = A(1)+\theta\sqrt{\lambda F_g}\,UV^T, \qquad G^B(\theta,z) = (B(\theta)B(\theta)^T-zI)^{-1}. \tag{C.73}
\]
To prove this part of the CLT, we adapt the strategy of the proof of Theorem 5.5, together with Lemmas C.4 and C.5. We then find that, uniformly for $z\in\Gamma^\varepsilon_{1/2}$,
\[
\operatorname{Tr}G^B(1,z)-\operatorname{Tr}G^B(0,z) = -k\,\frac{\frac{d}{dz}\lambda F_g(zs(z)+1)}{\lambda F_gzs(z)+\lambda F_g+1}+O_\prec(N^{-\phi}), \tag{C.74}
\]
since $\|U^TU-I_k\|_F, \|V^TV-I_k\|_F\prec N^{-\phi}$. Using the rigidity (Lemma C.5) and the eigenvalue interlacing property, we have
\[
\operatorname{Tr}G^B(1,z)-\operatorname{Tr}G^B(0,z) = O(1) \qquad\text{on }\Gamma\setminus\Gamma^\varepsilon_{1/2},
\]
and so
\[
\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^B(1,z)\,dz-\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^B(0,z)\,dz = -\frac{k}{2\pi i}\oint_\Gamma f(z)\,\frac{\lambda F_g\frac{d}{dz}(zs+1)}{\lambda F_g(zs+1)+1}\,dz+o(1). \tag{C.75}
\]
The remaining part is to control the effect of the small perturbation $(\mathbb{E}[\widetilde Y]-\sqrt{\lambda F_g}\,UV^T)_{ij} = CN(u_iv_j^T)^3+O(N^2(u_iv_j^T)^5)$. First, we let
\[
B'_{ij} = B(1)_{ij}+CN(u_iv_j^T)^3, \qquad G^{B'}(z) = (B'(B')^T-zI)^{-1}. \tag{C.76}
\]
For $1\le\ell_1,\ell_2,\ell_3\le k$, we consider vectors $u^3$ and $v^3$ such that
\[
(u^3(\ell_1,\ell_2,\ell_3))_i := \sqrt N\,u_i(\ell_1)u_i(\ell_2)u_i(\ell_3) \qquad\text{and}\qquad (v^3(\ell_1,\ell_2,\ell_3))_j := \sqrt N\,v_j(\ell_1)v_j(\ell_2)v_j(\ell_3).
\]
We then observe that $B'$ contains $k^3$ additional small spikes:
\[
B' = B(1)+C\sum_{\ell_1,\ell_2,\ell_3}u^3(\ell_1,\ell_2,\ell_3)v^3(\ell_1,\ell_2,\ell_3)^T,
\]
where $\|u^3(\ell_1,\ell_2,\ell_3)\|_\infty, \|v^3(\ell_1,\ell_2,\ell_3)\|_\infty\prec N^{1/2-3\phi}$.

From this point of view, we may consider $B'$ as another spiked Gram matrix model with the two types of spikes $u(\ell)v(\ell)^T$ and $u^3(\ell_1,\ell_2,\ell_3)(v^3(\ell_1,\ell_2,\ell_3))^T$. As before, for $0\le\theta\le1$, let
\[
B'(\theta) = A(1)+\theta\sqrt{\lambda F_g}\sum_\ell u(\ell)v(\ell)^T+\theta C\sum_{\ell_1,\ell_2,\ell_3}u^3(\ell_1,\ell_2,\ell_3)(v^3(\ell_1,\ell_2,\ell_3))^T
\]
and
\[
G^{B'}(\theta,z) = (B'(\theta)B'(\theta)^T-zI)^{-1}.
\]
Following the proof of Theorem 5.5, we have
\[
\frac{\partial}{\partial\theta}\operatorname{Tr}G^{B'}(\theta,z) = -2\sqrt{\lambda F_g}\,\frac{\partial}{\partial z}\sum_\ell\langle u(\ell),G^{B'}(\theta,z)B'(\theta)v(\ell)\rangle-2C\,\frac{\partial}{\partial z}\sum_{\ell_1,\ell_2,\ell_3}\langle u^3(\ell_1,\ell_2,\ell_3),G^{B'}(\theta,z)B'(\theta)v^3(\ell_1,\ell_2,\ell_3)\rangle,
\]
and it can be observed that the first term on the right-hand side is the leading-order term, since $\|u^3(\ell_1,\ell_2,\ell_3)\|_\infty, \|v^3(\ell_1,\ell_2,\ell_3)\|_\infty\prec N^{1/2-3\phi}<N^{-\phi}$. Moreover, from the definition of $B'(\theta)$, the leading-order part of $\langle u(\ell),G^{B'}(\theta,z)B'(\theta)v(\ell)\rangle$ is $\langle u(\ell),G^{B'}(\theta,z)B'(0)v(\ell)\rangle+\theta\sqrt{\lambda F_g}\langle u(\ell),G^{B'}(\theta,z)u(\ell)\rangle$, since $\langle v(\ell_1),v(\ell_2)\rangle = \delta_{\ell_1\ell_2}+O(N^{-\phi})$, $\langle v(\ell),v^3(\ell_1,\ell_2,\ell_3)\rangle = O(N^{1/2-2\phi})$ and $\langle v^3(\ell_1,\ell_2,\ell_3),v^3(\ell_4,\ell_5,\ell_6)\rangle = O(N^{1-4\phi})$. Carrying out the remaining procedures in the proof of Theorem 5.5 and collecting the leading-order terms, we eventually obtain
\[
\operatorname{Tr}G^{B'}(1,z)-\operatorname{Tr}G^{B'}(0,z) = -k\,\frac{\frac{d}{dz}\lambda F_g(zs(z)+1)}{\lambda F_gzs(z)+\lambda F_g+1}+O_\prec(N^{1/2-2\phi}) \tag{C.77}
\]
uniformly for $z\in\Gamma^\varepsilon_{1/2}$; in the last step we apply Lemma C.4 and (C.65) for $B'(0) = A(1)$. Further, on $\Gamma\setminus\Gamma^\varepsilon_{1/2}$, from the rigidity and the interlacing property of the eigenvalues,
\[
\operatorname{Tr}G^{B'}(1,z)-\operatorname{Tr}G^{B'}(0,z) = O(1). \tag{C.78}
\]
Thus, we conclude that
\[
\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^{B'}(z)\,dz-\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^{B'}(0,z)\,dz = -\frac{k}{2\pi i}\oint_\Gamma f(z)\,\frac{\lambda F_g\frac{d}{dz}(zs+1)}{\lambda F_g(zs+1)+1}\,dz+o(1).
\]
Furthermore, we set $E_{ij} = (\widetilde Y-B')_{ij} = O(N^2(u_iv_j^T)^5)$. Then
\[
\|E\| \le \|E\|_F = O(N^2\|U\|_\infty^4\|V\|_\infty^4) = o(N^{-1}) \tag{C.79}
\]
provided $\phi>3/8$. This implies that
\[
\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^{\widetilde Y}(z)\,dz-\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^{B'}(1,z)\,dz = o(1).
\]
Remark C.8. Under the assumption $\phi>3/8$, it suffices to consider $\mathbb{E}[\widetilde Y_{ij}]$ up to an $O(N^2(u_iv_j^T)^5)$ error. However, (C.77) and (C.78) are valid for any finite approximation of $\mathbb{E}[\widetilde Y]$ as in (C.51), even for any $\phi>1/4$. This means that the condition $\phi>3/8$ can be improved by considering a higher-order expansion of $\mathbb{E}[\widetilde Y]$. For example, if we consider
\[
\mathbb{E}[\widetilde Y_{ij}] = \sqrt{\lambda F_g}\,u_iv_j^T+C_1N(u_iv_j^T)^3+C_2N^2(u_iv_j^T)^5+O(N^3(u_iv_j^T)^7),
\]
then it can be checked that the contributions of the second and third terms are negligible, and the error $E_{ij} = O(N^3(u_iv_j^T)^7)$ is also negligible if $\phi>1/3$, since
\[
\|E\| \le \|E\|_F = O(N^3\|U\|_\infty^6\|V\|_\infty^6) \prec N^{3-12\phi} = o(N^{-1}).
\]

C.2.4   Conclusion of the proof of the pre-transformed CLT

We are now ready to prove the pre-transformed CLT. Denote by $\widetilde\mu_1\ge\widetilde\mu_2\ge\cdots\ge\widetilde\mu_M$ the eigenvalues of $\widetilde Y\widetilde Y^T$. Recall that we denote by $\mu^A_1(0)\ge\mu^A_2(0)\ge\cdots\ge\mu^A_M(0)$ the eigenvalues of $A(0)A(0)^T$. From Cauchy's integral formula, we have
\[
\begin{aligned}
\sum_{i=1}^M f(\widetilde\mu_i)-M\int_{d_-}^{d_+}f(x)\,\rho_{MP,d_0}(dx) &= \Big(\sum_{i=1}^M f(\mu^A_i(0))-M\int_{d_-}^{d_+}f(x)\,\rho_{MP,d_0}(dx)\Big)+\Big(\sum_{i=1}^M f(\widetilde\mu_i)-\sum_{i=1}^M f(\mu^A_i(0))\Big)\\
&= \Big(\sum_{i=1}^M f(\mu^A_i(0))-M\int_{d_-}^{d_+}f(x)\,\rho_{MP,d_0}(dx)\Big)\\
&\quad-\Big(\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^{\widetilde Y}(z)\,dz-\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^A(0,z)\,dz\Big). \tag{C.80}
\end{aligned}
\]
Since $AA^T$ is a usual sample covariance matrix, the first term on the right-hand side converges to a Gaussian random variable.
Further, as computed in (C.52),
\[
\mathbb{E}[\widetilde Y_{ij}^4] =: \frac{\widetilde w_4}{N^2}+\frac{1}{(NF_g)^2}\int_{-\infty}^\infty\Big(\frac{g'(w)}{g(w)}\Big)^4\Big(g\big(w-\sqrt{N\lambda}\,u_iv_j^T\big)-g(w)\Big)\,dw,
\]
where the first term is the leading term of $\mathbb{E}[\widetilde Y_{ij}^4]$, and hence also of $\mathbb{E}[A_{ij}^4]$. This means that the difference between $\widetilde w_4$ and the normalized fourth moment $N^2\mathbb{E}[A_{ij}^4]$ is negligible, in the sense that it makes no contribution to the limiting behavior of the resolvent, which can be checked by standard Green function comparison theorems (refer to [26]).

Thus, the mean and the variance of the limiting Gaussian distribution are given by
\[
m_A(f) = \frac{\tilde f(2)+\tilde f(-2)}{4}-\frac12\tau_0(\tilde f)+(\widetilde w_4-3)\tau_2(\tilde f) \tag{C.81}
\]
and
\[
V_A(f) = 2\sum_{\ell=1}^\infty\ell\,\tau_\ell(\tilde f)^2+(\widetilde w_4-3)\tau_1(\tilde f)^2, \tag{C.82}
\]
respectively.

For the second term on the right-hand side of (C.80), by (C.75), we obtain
\[
\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^{\widetilde Y}(z)\,dz-\frac{1}{2\pi i}\oint_\Gamma f(z)\operatorname{Tr}G^A(0,z)\,dz = -\frac{k}{2\pi i}\oint_\Gamma f(z)\,\frac{\lambda F_g\frac{d}{dz}(zs(z)+1)}{\lambda F_g(zs(z)+1)+1}\,dz+o(1) \tag{C.83}
\]
with high probability. From (C.80), we thus find that the CLT for the LSS holds, i.e.,
\[
\Big(\sum_{i=1}^M f(\widetilde\mu_i)-M\int_{d_-}^{d_+}f(x)\,\rho_{MP,d_0}(dx)\Big) \to \mathcal{N}\big(m_{\widetilde Y}(f),V_{\widetilde Y}(f)\big), \tag{C.84}
\]
and the variance satisfies $V_{\widetilde Y}(f) = V_A(f)$, since the second term in (C.80) converges to a deterministic value as $N\to\infty$, which corresponds to the change of the mean. In particular,
\[
m_{\widetilde Y}(f)-m_A(f) = \frac{(G_H-F_g)\lambda k}{2\pi i}\oint_\Gamma f(z)\,\frac{\partial}{\partial z}(zs(z)+1)\,dz+\frac{k}{2\pi i}\oint_\Gamma f(z)\,\frac{\lambda F_g\frac{d}{dz}(zs(z)+1)}{\lambda F_g(zs(z)+1)+1}\,dz. \tag{C.85}
\]
Following the computation in the proof of Lemma 4.4 in [9] with the relation (C.28), we find that the right-hand side of (C.85) is given by
\[
\frac{k}{2\pi i}\oint_\Gamma f(z)(zs(z)+1)'\Big(\lambda(G_H-F_g)+\frac{\lambda F_g}{\lambda F_g(zs(z)+1)+1}\Big)\,dz = \frac{\lambda k}{\sqrt{d_0}}(G_H-F_g)\tau_1(\tilde f)+k\sum_{\ell=1}^\infty\Big(\frac{\lambda F_g}{\sqrt{d_0}}\Big)^\ell\tau_\ell(\tilde f). \tag{C.86}
\]
(See also Remark 1.7 of [9].) Thus,
\[
m_{\widetilde Y}(f) = \frac{\tilde f(2)+\tilde f(-2)}{4}-\frac12\tau_0(\tilde f)+\frac{\lambda k}{\sqrt{d_0}}(G_H-F_g)\tau_1(\tilde f)+(\widetilde w_4-3)\tau_2(\tilde f)+k\sum_{\ell=1}^\infty\Big(\frac{\lambda F_g}{\sqrt{d_0}}\Big)^\ell\tau_\ell(\tilde f) \tag{C.87}
\]
and
\[
V_{\widetilde Y}(f) = 2\sum_{\ell=1}^\infty\ell\,\tau_\ell(\tilde f)^2+(\widetilde w_4-3)\tau_1(\tilde f)^2. \tag{C.88}
\]
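Since the mean (C.87) and the variance (C.88) are expressed through the Chebyshev coefficients $\tau_\ell(\tilde f)$, they are straightforward to evaluate numerically. The sketch below (illustrative only; the rate $\lambda F_g/\sqrt{d_0}$ is passed in as a free parameter, and the values used are arbitrary) computes $\tau_\ell(\tilde f)$ by a midpoint rule for the integral in (C.138) and assembles the truncated series appearing in (C.87).

```python
import numpy as np

def tau(f, ell, n=4096):
    """tau_ell(f) = (1/pi) * int_0^pi f(2 cos t) cos(ell t) dt  (cf. (C.138))."""
    t = (np.arange(n) + 0.5) * np.pi / n          # midpoint rule on [0, pi]
    return np.mean(f(2 * np.cos(t)) * np.cos(ell * t))

# Sanity check: x^2 = 2*T_2(x/2) + 2, so tau_2 = 1 while tau_1 = tau_3 = tau_4 = 0.
f = lambda x: x**2
print([round(tau(f, l), 6) for l in range(1, 5)])  # ~ [0, 1, 0, 0]

def mean_shift(f, k, rate, L=60):
    """Truncated spike contribution in (C.87): k * sum_l rate^l * tau_l(f)."""
    return k * sum(rate**l * tau(f, l) for l in range(1, L + 1))

# For f(x) = x^2 this equals k * rate^2; e.g. 2 * 0.3^2 = 0.18.
print(mean_shift(f, k=2, rate=0.3))
```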
C.3   Proof of Lemma C.7

Notational remarks. In the rest of this section, we use $C$ to denote a constant that is independent of $N$. Even if the constant differs from one place to another, we may use the same notation $C$, as long as it does not depend on $N$, for convenience of presentation. We recall the linearization $H^{A(\theta)}(z)$ and its inverse $R^A(\theta,z) = H^{A(\theta)}(z)^{-1}$. For simplicity, we drop the subscript $A$ and the index $z$ of the linearization entries.

Proof of Lemma C.7. To prove the lemma, we consider
\[
\begin{aligned}
\frac{\partial}{\partial\theta}\operatorname{Tr}G^A(\theta,z) &= \sum_b\sum_a\sum_\alpha\frac{\partial A_{a\alpha}(\theta)}{\partial\theta}\frac{\partial G_{bb}(\theta)}{\partial A_{a\alpha}(\theta)} = \sum_b\sum_a\sum_\alpha\frac{\partial H_{a\alpha}(\theta)}{\partial\theta}\frac{\partial R_{bb}(\theta)}{\partial H_{a\alpha}(\theta)}\\
&= -\sum_b\sum_a\sum_\alpha\frac{\partial H_{a\alpha}(\theta)}{\partial\theta}\big[R_{ba}(\theta)R_{\alpha b}(\theta)+R_{b\alpha}(\theta)R_{ab}(\theta)\big]\\
&= -2\sum_a\sum_\alpha\frac{\partial H_{a\alpha}(\theta)}{\partial\theta}(R(\theta)^2)_{a\alpha} = -2\sum_a\sum_\alpha\frac{\partial H_{a\alpha}(\theta)}{\partial\theta}\,\frac{\partial}{\partial z}R_{a\alpha}(\theta), \tag{C.89}
\end{aligned}
\]
where we again used $\frac{\partial}{\partial z}G^A(\theta,z) = G^A(\theta,z)^2$. We expand the right-hand side by using the definition of $A(\theta)$,
\[
H_{a\alpha}(\theta) = A_{a\alpha}(\theta) = \big(1-\theta+\theta\sqrt{NS_{a\alpha}}\big)A_{a\alpha}(0) = \big(1-\theta+\theta\sqrt{NS_{a\alpha}}\big)H_{a\alpha}(0), \tag{C.90}
\]
and so
\[
\begin{aligned}
\sum_a\sum_\alpha\frac{\partial H_{a\alpha}(\theta)}{\partial\theta}R_{a\alpha}(\theta) &= \sum_a\sum_\alpha\big(-1+\sqrt{NS_{a\alpha}}\big)H_{a\alpha}(0)R_{a\alpha}(\theta) = \sum_a\sum_\alpha\frac{-1+\sqrt{NS_{a\alpha}}}{1-\theta+\theta\sqrt{NS_{a\alpha}}}H_{a\alpha}(\theta)R_{a\alpha}(\theta)\\
&= \frac{N\lambda(G_H-F_g)}{2}\sum_a\sum_\alpha(u_av_\alpha^T)^2H_{a\alpha}(\theta)R_{a\alpha}(\theta)+O(N\|U\|_\infty^2\|V\|_\infty^2). \tag{C.91}
\end{aligned}
\]
From now on, we further drop the $\theta$-dependency for brevity. Then
\[
\frac{\partial}{\partial\theta}\operatorname{Tr}G^A(\theta,z) = -N\lambda(G_H-F_g)\,\frac{\partial}{\partial z}\sum_a\sum_\alpha(u_av_\alpha^T)^2H_{a\alpha}R_{a\alpha}+O(N\|U\|_\infty^2\|V\|_\infty^2).
\]
Here, we used the properties $H_{a\alpha} = A_{a\alpha}(\theta) = O(N^{-\frac12})$, $R_{ab} = G^A_{ab}(\theta) = O(N^{-\frac12})$ for $b\neq a$, $R_{aa} = G^A_{aa}(\theta) = O(1)$, and $\sum_a u_a(\ell_1)u_a(\ell_2) = \delta_{\ell_1\ell_2} = \sum_\alpha v_\alpha(\ell_1)v_\alpha(\ell_2)$, which imply
\[
\Big|N^2\sum_a\sum_\alpha(u_av_\alpha^T)^4H_{a\alpha}R_{a\alpha}\Big| \le N^2\|U\|_\infty^2\|V\|_\infty^2\sum_a\sum_\alpha(u_av_\alpha^T)^2|H_{a\alpha}R_{a\alpha}| = O(N\|U\|_\infty^2\|V\|_\infty^2). \tag{C.92}
\]
Together with Remark C.3, from the elementary identity for $R$ and $H$, we have
\[
\sum_a\sum_\alpha u_a(\ell)^2H_{a\alpha}R_{a\alpha} = \sum_a u_a(\ell)^2\Big(\sum_\alpha H_{a\alpha}R_{a\alpha}\Big) = \sum_a u_a(\ell)^2(1+zR_{aa}) = 1+zs(z)+O(N^{-\frac12}) \tag{C.93}
\]
and
\[
\sum_a\sum_\alpha v_\alpha(\ell)^2H_{a\alpha}R_{a\alpha} = \sum_\alpha v_\alpha(\ell)^2\Big(\sum_a H_{a\alpha}R_{a\alpha}\Big) = \sum_\alpha v_\alpha(\ell)^2(1+R_{\alpha\alpha}) = d_0(1+zs(z))+O(N^{-\frac12}). \tag{C.94}
\]
Plugging these into (C.91), we get
\[
\begin{aligned}
\frac{4}{\lambda(G_H-F_g)}\times\text{(C.91)} &= N\sum_\ell\sum_a\sum_\alpha\Big[\frac1N u_a(\ell)^2H_{a\alpha}R_{a\alpha}+\frac1M v_\alpha(\ell)^2H_{a\alpha}R_{a\alpha}\\
&\qquad+\Big(u_a(\ell)^2-\frac1M\Big)v_\alpha(\ell)^2H_{a\alpha}R_{a\alpha}+u_a(\ell)^2\Big(v_\alpha(\ell)^2-\frac1N\Big)H_{a\alpha}R_{a\alpha}\Big]\\
&\qquad+2N\sum_{\ell_1\neq\ell_2}\sum_a\sum_\alpha u_a(\ell_1)u_a(\ell_2)v_\alpha(\ell_1)v_\alpha(\ell_2)H_{a\alpha}R_{a\alpha}+O(N\|U\|_\infty^2\|V\|_\infty^2)\\
&= N\sum_\ell\sum_a\sum_\alpha\Big[\Big(u_a(\ell)^2-\frac1M\Big)v_\alpha(\ell)^2H_{a\alpha}R_{a\alpha}+u_a(\ell)^2\Big(v_\alpha(\ell)^2-\frac1N\Big)H_{a\alpha}R_{a\alpha}\Big]\\
&\qquad+2N\sum_{\ell_1\neq\ell_2}\sum_a\sum_\alpha u_a(\ell_1)u_a(\ell_2)v_\alpha(\ell_1)v_\alpha(\ell_2)H_{a\alpha}R_{a\alpha}+2k(zs(z)+1)+O(N\|U\|_\infty^2\|V\|_\infty^2). \tag{C.95}
\end{aligned}
\]
It remains to estimate the first three terms in (C.95). Set
\[
X_1 \equiv X_1(\theta,z,\ell) := \sum_a\sum_\alpha\Big(u_a(\ell)^2-\frac1M\Big)v_\alpha(\ell)^2H_{a\alpha}R_{a\alpha}, \tag{C.96}
\]
\[
X_2 \equiv X_2(\theta,z,\ell) := \sum_a\sum_\alpha u_a(\ell)^2\Big(v_\alpha(\ell)^2-\frac1N\Big)H_{a\alpha}R_{a\alpha}, \tag{C.97}
\]
and
\[
X_3 \equiv X_3(\theta,z,\ell_1,\ell_2) := \sum_a\sum_\alpha u_a(\ell_1)u_a(\ell_2)v_\alpha(\ell_1)v_\alpha(\ell_2)H_{a\alpha}R_{a\alpha} \qquad(\ell_1\neq\ell_2). \tag{C.98}
\]
We notice that $|X_1|,|X_2|,|X_3| = O(N^{-1})$ on $\Gamma^\varepsilon_{1/2}$ by naive power counting as in (C.91), after applying Hölder's inequality once. To obtain a better bound, we use a method based on a recursive moment estimate, introduced in [39]. We need the following lemma:

Lemma C.9. Let $X_1$, $X_2$ and $X_3$ be as in (C.96), (C.97) and (C.98). Define an event $\Omega_\varepsilon$ by
\[
\Omega_\varepsilon = \bigcap_{a,b,\alpha,\beta}\{|H_{a\alpha}|,|R_{a\alpha}|\le N^{-\frac12+\varepsilon}\}\cap\{|R_{ab}-s(z)\delta_{ab}|\le N^{-\frac12+\varepsilon}\}\cap\{|R_{\alpha\beta}-z\underline{s}(z)\delta_{\alpha\beta}|\le N^{-\frac12+\varepsilon}\}.
\]
Then, for any fixed (large) $D$ and (small) $\varepsilon$, which may depend on $D$,
\[
\begin{aligned}
\mathbb{E}[|X|^{2D}|\Omega_\varepsilon] &\le CN^{-\frac12+\varepsilon}\|u\|_\infty^2\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon]+CN^{-1+4\varepsilon}\|u\|_\infty^4\,\mathbb{E}[|X|^{2D-2}|\Omega_\varepsilon]\\
&\quad+CN^{-2+10\varepsilon}\|u\|_\infty^6\,\mathbb{E}[|X|^{2D-3}|\Omega_\varepsilon]+CN^{-3+14\varepsilon}\|u\|_\infty^8\,\mathbb{E}[|X|^{2D-4}|\Omega_\varepsilon], \tag{C.99}
\end{aligned}
\]
where $X$ is any of $X_1$, $X_2$ and $X_3$.

Since the rank $k$ of the signal is fixed, it suffices to prove the lemma for fixed $\ell$, $\ell_1$ and $\ell_2$. We will prove Lemma C.9 for $X_1$ at the end of this section (the calculations for $X_2$ and $X_3$ are almost the same). With Lemma C.9, we are ready to obtain an improved bound for $X$. First, note that the contribution from the exceptional event $\Omega_\varepsilon^c$ is negligible, i.e., $\mathbb{P}(\Omega_\varepsilon^c)<N^{-D^2}$, which can be checked by applying a high-order Markov inequality with the moment condition on $\widetilde Y$ (see Assumption 3.1). We decompose $\mathbb{E}[|X|^{2D}]$ as
\[
\mathbb{E}[|X|^{2D}] = \mathbb{E}[|X|^{2D}\cdot\mathbf{1}(\Omega_\varepsilon)]+\mathbb{E}[|X|^{2D}\cdot\mathbf{1}(\Omega_\varepsilon^c)] = \mathbb{E}[|X|^{2D}|\Omega_\varepsilon]\cdot\mathbb{P}(\Omega_\varepsilon)+\mathbb{E}[|X|^{2D}\cdot\mathbf{1}(\Omega_\varepsilon^c)]. \tag{C.100}
\]
For the second term on the right-hand side of (C.100),
\[
\mathbb{E}[|X|^{2D}\cdot\mathbf{1}(\Omega_\varepsilon^c)] \le \big(\mathbb{E}[|X|^{4D}]\big)^{\frac12}\big(\mathbb{P}(\Omega_\varepsilon^c)\big)^{\frac12} \le N^{-\frac{D^2}{2}}\big(\mathbb{E}[|X|^{4D}]\big)^{\frac12}, \tag{C.101}
\]
and by using the trivial bound for the resolvent $|R_{ab}(z)|\le\|G^A(z)\|\le\frac{1}{\operatorname{Im}z}$,
\[
\mathbb{E}[|X|^{4D}] \le \mathbb{E}\Big[\Big(\sum_a\sum_\alpha|H_{a\alpha}R_{a\alpha}|\Big)^{4D}\Big] \le \frac{(M^2N)^{4D}}{(\operatorname{Im}z)^{4D}}\max_{a,b,\alpha}\mathbb{E}|H_{a\alpha}H_{b\alpha}|^{4D} \le CN^{14D}. \tag{C.102}
\]
To bound the right-hand side of (C.99), we use Young's inequality: for any $a,b>0$ and $p,q>0$ with $\frac1p+\frac1q=1$,
\[
ab \le \frac{a^p}{p}+\frac{b^q}{q}.
\]
We then find that the first term has the upper bound
\[
N^{-\frac12+\varepsilon}\|u\|_\infty^2|X|^{2D-1} = N^{\frac{(2D-1)\varepsilon}{2D}}N^{-\frac12+\varepsilon}\|u\|_\infty^2\cdot N^{-\frac{(2D-1)\varepsilon}{2D}}|X|^{2D-1} \le \frac{1}{2D}N^{(2D-1)\varepsilon}\big(N^{-\frac12+\varepsilon}\|u\|_\infty^2\big)^{2D}+\frac{2D-1}{2D}N^{-\varepsilon}|X|^{2D}. \tag{C.103}
\]
Applying Young's inequality to the other terms in (C.99), we get
\[
\begin{aligned}
\mathbb{E}[|X|^{2D}|\Omega_\varepsilon] &\le CN^{(2D-1)\varepsilon}\big(N^{-\frac12+\varepsilon}\|u\|_\infty^2\big)^{2D}+CN^{(D-1)\varepsilon}\big(N^{-1+4\varepsilon}\|u\|_\infty^4\big)^{D}\\
&\quad+CN^{(\frac{2D}3-1)\varepsilon}\big(N^{-2+10\varepsilon}\|u\|_\infty^6\big)^{\frac{2D}3}+CN^{(\frac D2-1)\varepsilon}\big(N^{-3+14\varepsilon}\|u\|_\infty^8\big)^{\frac D2}+CN^{-\varepsilon}\,\mathbb{E}[|X|^{2D}|\Omega_\varepsilon]. \tag{C.104}
\end{aligned}
\]
Absorbing the last term on the right-hand side into the left-hand side and plugging the estimates (C.101) and (C.102) into (C.100), we now get
\[
\begin{aligned}
\mathbb{E}[|X|^{2D}] &\le CN^{(2D-1)\varepsilon}\big(N^{-\frac12+\varepsilon}\|u\|_\infty^2\big)^{2D}+CN^{(D-1)\varepsilon}\big(N^{-1+4\varepsilon}\|u\|_\infty^4\big)^{D}\\
&\quad+CN^{(\frac{2D}3-1)\varepsilon}\big(N^{-2+10\varepsilon}\|u\|_\infty^6\big)^{\frac{2D}3}+CN^{(\frac D2-1)\varepsilon}\big(N^{-3+14\varepsilon}\|u\|_\infty^8\big)^{\frac D2}+CN^{-\frac{D^2}2+7D}. \tag{C.105}
\end{aligned}
\]
From the $(2D)$-th order Markov inequality, for any fixed $\varepsilon'>0$ independent of $D$,
\[
\mathbb{P}\big(|X|\ge N^{\varepsilon'}N^{-\frac12}\|u\|_\infty^2\big) \le N^{-2D\varepsilon'}\,\frac{\mathbb{E}[|X|^{2D}]}{(N^{-\frac12}\|u\|_\infty^2)^{2D}} \le N^{-2D\varepsilon'}N^{8D\varepsilon}. \tag{C.106}
\]
By choosing $\varepsilon = 1/D$, for sufficiently large $D$ we find that
\[
|X| = O(N^{-\frac12}\|u\|_\infty^2). \tag{C.107}
\]
We now return to (C.89) and use (C.95) with the bound (C.107):
\[
\sum_{j=1}^M\sum_{\alpha=1}^N\frac{\partial A_{j\alpha}(\theta)}{\partial\theta}\big(G^A(\theta)A(\theta)\big)_{j\alpha} = \frac{(G_H-F_g)\lambda k}{2}(1+zs(z))+O(N\|u\|_\infty^2\|v\|_\infty^2). \tag{C.108}
\]
To handle the $z$-derivative of the right-hand side, we use Cauchy's integral formula with a rectangular contour, contained in $\Gamma^\varepsilon_{1/2}$, whose perimeter is larger than $\varepsilon$. Then we get from (C.89) that
\[
\frac{\partial}{\partial\theta}\operatorname{Tr}G^A(\theta,z) = -\lambda(G_H-F_g)\cdot k\,\frac{\partial}{\partial z}(1+zs(z))+O(N\|u\|_\infty^2\|v\|_\infty^2). \tag{C.109}
\]
After integrating over $\theta$ from $0$ to $1$, we conclude that (C.70) holds for any fixed $z\in\Gamma^\varepsilon_{1/2}$.

Finally, we prove Lemma C.9.

Proof of Lemma C.9. As mentioned above, we consider $X = X_1$ and drop the $\ell$-dependency, i.e.,
\[
\mathbb{E}[|X|^{2D}] = \mathbb{E}\Big[\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2H_{a\alpha}R_{a\alpha}X^{D-1}\overline X^{D}\Big].
\]
We use the following inequality generalizing Stein's lemma (see Proposition 5.2 of [11]). Let $\Phi$ be a $C^2$ function, and fix a (small) $\varepsilon>0$, which may depend on $D$. Recall that $\Omega_\varepsilon^c$ is the exceptional event on which $|H_{a\alpha}|$ or $|R_{a\alpha}|$ is exceptionally large for some $a,\alpha$, with $\Omega_\varepsilon$ defined by
\[
\bigcap_{a,b,\alpha,\beta}\{|H_{a\alpha}|,|R_{a\alpha}|\le N^{-\frac12+\varepsilon}\}\cap\{|R_{ab}-s(z)\delta_{ab}|\le N^{-\frac12+\varepsilon}\}\cap\{|R_{\alpha\beta}-z\underline{s}(z)\delta_{\alpha\beta}|\le N^{-\frac12+\varepsilon}\}.
\]
Then
\[
\mathbb{E}[H_{a\alpha}\Phi(H_{a\alpha})|\Omega_\varepsilon] = \big(\mathbb{E}[H_{a\alpha}^2|\Omega_\varepsilon]-\mathbb{E}[H_{a\alpha}|\Omega_\varepsilon]^2\big)\,\mathbb{E}[\Phi'(H_{a\alpha})|\Omega_\varepsilon]+\varepsilon_1, \tag{C.110}
\]
where the error term $\varepsilon_1$ admits the bound
\[
|\varepsilon_1| \le C_1\,\mathbb{E}\Big[|H_{a\alpha}|^3\sup_{|t|\le1}\Phi''(tH_{a\alpha})\Big|\Omega_\varepsilon\Big] \tag{C.111}
\]
for some constant $C_1$. Note that by applying the decomposition (C.100) to $\mathbb{E}[H_{a\alpha}|\Omega_\varepsilon]$ and $\mathbb{E}[H_{a\alpha}^2|\Omega_\varepsilon]$, we see that
\[
\mathbb{E}[H_{a\alpha}|\Omega_\varepsilon] = \mathbb{E}[H_{a\alpha}|\Omega_\varepsilon]-\mathbb{E}[H_{a\alpha}] = O(N^{-D_0}) \tag{C.112}
\]
and
\[
\mathbb{E}[H_{a\alpha}^2|\Omega_\varepsilon] = \mathbb{E}[H_{a\alpha}^2]+O(N^{-D_0}) = \frac1N+O(\|u\|_\infty^2\|v\|_\infty^2)+O(N^{-D_0}) \tag{C.113}
\]
for $D_0 = \frac{D^2+1}{2}>1$. The estimate (C.110) follows from the proof of Proposition 5.2 of [11] with $p=1$, where we use the inequality (5.38) therein only up to the second-to-last line.

In (C.110), we choose
\[
\Phi(H_{a\alpha}) = R_{a\alpha}X^{D-1}\overline X^D \tag{C.114}
\]
so that
\[
\mathbb{E}[|X|^{2D}|\Omega_\varepsilon] = \sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}\Phi(H_{a\alpha})|\Omega_\varepsilon]. \tag{C.115}
\]
Applying (C.112) and (C.113) to (C.110),
\[
\begin{aligned}
\mathbb{E}[H_{a\alpha}\Phi(H_{a\alpha})|\Omega_\varepsilon] &= \mathbb{E}[H_{a\alpha}^2]\,\mathbb{E}[\Phi'(H_{a\alpha})|\Omega_\varepsilon]+\varepsilon_1\\
&= \mathbb{E}[H_{a\alpha}^2]\Big(-\mathbb{E}\big[R_{aa}R_{\alpha\alpha}X^{D-1}\overline X^D\big|\Omega_\varepsilon\big]-\mathbb{E}\big[R_{a\alpha}^2X^{D-1}\overline X^D\big|\Omega_\varepsilon\big]\\
&\qquad+(D-1)\,\mathbb{E}\Big[R_{a\alpha}\frac{\partial X}{\partial H_{a\alpha}}X^{D-2}\overline X^D\Big|\Omega_\varepsilon\Big]+D\,\mathbb{E}\Big[R_{a\alpha}\frac{\partial\overline X}{\partial H_{a\alpha}}X^{D-1}\overline X^{D-1}\Big|\Omega_\varepsilon\Big]\Big)+\varepsilon_1 \tag{C.116}
\end{aligned}
\]
for sufficiently large $D$. We plug this into (C.115) and estimate each term.
The term originating from the first term in (C.116) can be separated as
\[
\begin{aligned}
\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2]\,\mathbb{E}\big[R_{aa}R_{\alpha\alpha}X^{D-1}\overline X^D\big|\Omega_\varepsilon\big] &= \sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2]\,\mathbb{E}\big[(R_{aa}-s)R_{\alpha\alpha}X^{D-1}\overline X^D\big|\Omega_\varepsilon\big]\\
&\quad+s\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2]\,\mathbb{E}\big[R_{\alpha\alpha}X^{D-1}\overline X^D\big|\Omega_\varepsilon\big]. \tag{C.117}
\end{aligned}
\]
The first term satisfies
\[
\Big|\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2]\,\mathbb{E}\big[(R_{aa}-s)R_{\alpha\alpha}X^{D-1}\overline X^D\big|\Omega_\varepsilon\big]\Big| \le CM\|u\|_\infty^2N^{-1}N^{-\frac12+\varepsilon}\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon]\sum_\alpha v_\alpha^2 = CN^{-\frac12+\varepsilon}\|u\|_\infty^2\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon] \tag{C.118}
\]
for some constant $C$, since $\sum_\alpha v_\alpha^2 = 1$. Using (C.113) and $\sum_a\big(u_a^2-\frac1M\big) = 0$, we also have
\[
\Big|s\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2|\Omega_\varepsilon]\,\mathbb{E}\big[R_{\alpha\alpha}X^{D-1}\overline X^D\big|\Omega_\varepsilon\big]\Big| \le C\|u\|_\infty^2\|v\|_\infty^2|s|\sum_a\sum_\alpha\Big(u_a^2+\frac1M\Big)v_\alpha^2\,\mathbb{E}\big[|R_{\alpha\alpha}X^{D-1}\overline X^D|\,\big|\Omega_\varepsilon\big] \le C\|u\|_\infty^2\|v\|_\infty^2\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon] \tag{C.119}
\]
for some constant $C$ and large $D>1$. For the second term in (C.116), we also have
\[
\Big|\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2|\Omega_\varepsilon]\,\mathbb{E}\big[R_{a\alpha}^2X^{D-1}\overline X^D\big|\Omega_\varepsilon\big]\Big| \le CN^{-1}\|u\|_\infty^2\Big|\sum_a\sum_\alpha v_\alpha^2\,\mathbb{E}\big[R_{a\alpha}^2X^{D-1}\overline X^D\big|\Omega_\varepsilon\big]\Big| \le CN^{-1+2\varepsilon}\|u\|_\infty^2\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon]\sum_\alpha v_\alpha^2 \le CN^{-1+2\varepsilon}\|u\|_\infty^2\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon]. \tag{C.120}
\]
To estimate the third and fourth terms in (C.116), we notice that on $\Omega_\varepsilon$,
\[
\Big|\frac{\partial X}{\partial H_{a\alpha}}\Big| = \Big|-\sum_b\sum_\beta\Big(u_b^2-\frac1M\Big)v_\beta^2H_{b\beta}\big[R_{ba}R_{\alpha\beta}+R_{b\alpha}R_{a\beta}\big]+\Big(u_a^2-\frac1M\Big)v_\alpha^2R_{a\alpha}\Big| \le CN^{-\frac12+3\varepsilon}\|u\|_\infty^2\sum_\alpha v_\alpha^2+CN^{-\frac12+\varepsilon}\|u\|_\infty^2\|v\|_\infty^2 \le CN^{-\frac12+3\varepsilon}\|u\|_\infty^2 \tag{C.121}
\]
for some constant $C$. Similarly, we can observe that
\[
\Big|\frac{\partial^2X}{\partial H_{a\alpha}^2}\Big| \le CN^{-\frac12+3\varepsilon}\|u\|_\infty^2. \tag{C.122}
\]
Thus, we also obtain
\[
\Big|\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2|\Omega_\varepsilon]\,\mathbb{E}\Big[R_{a\alpha}\frac{\partial X}{\partial H_{a\alpha}}X^{D-2}\overline X^D\Big|\Omega_\varepsilon\Big]\Big| \le CN^{-1+4\varepsilon}\|u\|_\infty^4\,\mathbb{E}[|X|^{2D-2}|\Omega_\varepsilon] \tag{C.123}
\]
and
\[
\Big|\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2|\Omega_\varepsilon]\,\mathbb{E}\Big[R_{a\alpha}\frac{\partial\overline X}{\partial H_{a\alpha}}X^{D-1}\overline X^{D-1}\Big|\Omega_\varepsilon\Big]\Big| \le CN^{-1+4\varepsilon}\|u\|_\infty^4\,\mathbb{E}[|X|^{2D-2}|\Omega_\varepsilon]. \tag{C.124}
\]
Hence, from (C.116), (C.120), (C.123), and (C.124),
\[
\Big|\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}[H_{a\alpha}^2|\Omega_\varepsilon]\,\mathbb{E}[\Phi'(H_{a\alpha})|\Omega_\varepsilon]\Big| \le CN^{-\frac12+\varepsilon}\|u\|_\infty^2\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon]+CN^{-1+4\varepsilon}\|u\|_\infty^4\,\mathbb{E}[|X|^{2D-2}|\Omega_\varepsilon]+\varepsilon_1. \tag{C.125}
\]
It remains to estimate $|\varepsilon_1|$ in (C.111). Proceeding as before,
\[
\sum_a\sum_\alpha\Big(u_a^2-\frac1M\Big)v_\alpha^2\,\mathbb{E}\big[|H_{a\alpha}|^3\Phi''(H_{a\alpha})\big|\Omega_\varepsilon\big] \le CN^{-1+4\varepsilon}\|u\|_\infty^2\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon]+CN^{-2+7\varepsilon}\|u\|_\infty^4\,\mathbb{E}[|X|^{2D-2}|\Omega_\varepsilon]+CN^{-2+10\varepsilon}\|u\|_\infty^6\,\mathbb{E}[|X|^{2D-3}|\Omega_\varepsilon]. \tag{C.126}
\]
Our last goal is to bound the error term $\varepsilon_1$. To handle $\Phi''(tH_{a\alpha})$, we compare $\Phi''(H_{a\alpha})$ and $\Phi''(tH_{a\alpha})$ for $|t|<1$. Let $G^{A,t}$ be the resolvent of $A$ with $A_{a\alpha}$ replaced by $tA_{a\alpha}$, and let $X^t$ be defined as $X$ in (C.96) with the same replacement for $A_{a\alpha}$ and with $G^A$ replaced by $G^{A,t}$. Correspondingly, we also consider the replacement $R^t$ of the linearization $R$, obtained by substituting $tH_{a\alpha}$ for $H_{a\alpha}$ (and likewise for $H_{\alpha a}$). Then
\[
R^t_{AB}-R_{AB} = \big(R^t(H-H^t)R\big)_{AB} = (1-t)R^t_{Aa}H_{a\alpha}R_{\alpha B}+(1-t)R^t_{A\alpha}H_{\alpha a}R_{aB} \tag{C.127}
\]
and
\[
\begin{aligned}
X^t-X &= \sum_b\sum_\beta\Big(u_b^2-\frac1M\Big)v_\beta^2\big(H^t_{b\beta}R^t_{b\beta}-H_{b\beta}R_{b\beta}\big)\\
&= \sum_b\sum_\beta\Big(u_b^2-\frac1M\Big)v_\beta^2H_{b\beta}\big(R^t_{b\beta}-R_{b\beta}\big)+(t-1)\Big(u_a^2-\frac1M\Big)v_\alpha^2H_{a\alpha}R^t_{a\alpha}\\
&= (1-t)\sum_b\sum_\beta\Big(u_b^2-\frac1M\Big)v_\beta^2H_{b\beta}R^t_{ba}H_{a\alpha}R_{\alpha\beta}+(1-t)\sum_b\sum_\beta\Big(u_b^2-\frac1M\Big)v_\beta^2H_{b\beta}R^t_{b\alpha}H_{\alpha a}R_{a\beta}+(t-1)\Big(u_a^2-\frac1M\Big)v_\alpha^2H_{a\alpha}R^t_{a\alpha}. \tag{C.128}
\end{aligned}
\]
Thus, on $\Omega_\varepsilon$,
\[
|X^t-X| \le CN^{-1+4\varepsilon}\|u\|_\infty^2. \tag{C.129}
\]
Using the estimates (C.127) and (C.129), on $\Omega_\varepsilon$ we obtain
\[
|\Phi''(H_{a\alpha})-\Phi''(tH_{a\alpha})| \le C|\Phi''(H_{a\alpha})|+N^{-\frac52+11\varepsilon}\|u\|_\infty^6|X|^{2D-4} \tag{C.130}
\]
uniformly in $t\in(-1,1)$.

Combining (C.115) and (C.125) with (C.126), (C.130), and (C.111), we finally get
\[
\begin{aligned}
\mathbb{E}[|X|^{2D}|\Omega_\varepsilon] &\le CN^{-\frac12+\varepsilon}\|u\|_\infty^2\,\mathbb{E}[|X|^{2D-1}|\Omega_\varepsilon]+CN^{-1+4\varepsilon}\|u\|_\infty^4\,\mathbb{E}[|X|^{2D-2}|\Omega_\varepsilon]\\
&\quad+CN^{-2+10\varepsilon}\|u\|_\infty^6\,\mathbb{E}[|X|^{2D-3}|\Omega_\varepsilon]+CN^{-3+14\varepsilon}\|u\|_\infty^8\,\mathbb{E}[|X|^{2D-4}|\Omega_\varepsilon]. \tag{C.131}
\end{aligned}
\]
This proves the desired lemma for $X = X_1$.
For the cases $X = X_2$ and $X = X_3$, the proofs are almost the same, with the following changes:

• For $X = X_2$, we exchange the roles of $U$ and $V$. In other words, we use $\sum_a u_a(\ell)^2 = 1$ and $\sum_\alpha\big(v_\alpha(\ell)^2-\frac1N\big) = 0$. The recursive bound for $\mathbb{E}[|X_2|^{2D}|\Omega_\varepsilon]$ is then obtained by putting $v$ in place of $u$ in the upper bound in (C.99).

• In the same way, we use $\sum_a u_a(\ell_1)u_a(\ell_2) = \delta_{\ell_1\ell_2}$ and $\sum_\alpha|v_\alpha(\ell_1)||v_\alpha(\ell_2)|\le1$ instead of $\sum_a\big(u_a^2-\frac1M\big) = 0$ and $\sum_a u_a(\ell)^2 = 1$, respectively. We then obtain exactly the same recursive bound (C.99) for $X_3$.

C.4   Computation of the test statistic

In this section, we prove the second part of Theorem 5.5 and also provide the details of the computation of the test statistic in Theorem 4.2. By performing the same calculations as in this section, one can obtain the optimal functions for the other models, so we omit the details (refer to [22, 33, 34]). Recall that
\[
m_Y(f)|_{H_1}-m_Y(f)|_{H_0} = \sum_{s=1}^k\sum_{\ell=1}^\infty\Big(\frac{\omega_s}{\sqrt{d_0}}\Big)^\ell\tau_\ell(\tilde f) \tag{C.132}
\]
and
\[
V_Y(f) = 2\sum_{\ell=2}^\infty\ell\,\tau_\ell(\tilde f)^2+(w_4-1)\tau_1(\tilde f)^2. \tag{C.133}
\]
Assuming $w_2>0$ and $w_4>1$, from the Cauchy inequality and the identity $\log(1-\lambda) = -\sum_{\ell=1}^\infty\lambda^\ell/\ell$,
\[
\bigg|\frac{m_Y(f)|_{H_1}-m_Y(f)|_{H_0}}{\sqrt{V_Y(f)}}\bigg|^2 \le \sum_{p,q=1}^k\bigg[\frac{\omega_p\omega_q}{d_0}\Big(\frac{1}{w_4-1}-\frac12\Big)-\frac12\log\Big(1-\frac{\omega_p\omega_q}{d_0}\Big)\bigg] = \bigg|\frac{m(\Omega)-m(0)}{\sqrt{V_0}}\bigg|^2, \tag{C.134}
\]
which proves the first part of the theorem. The equality in (C.134) holds if and only if
\[
\frac{\sqrt{d_0}(w_4-1)\tau_1(\tilde f)}{\sum_s\omega_s} = \frac{2\ell(\sqrt{d_0})^\ell\tau_\ell(\tilde f)}{\sum_s\omega_s^\ell} \qquad(\ell = 2,3,4,\ldots). \tag{C.135}
\]
We now find all functions $f$ satisfying (C.135). Letting $2C$ be the common value in (C.135),
\[
\tau_1(\tilde f) = \frac{2C}{\sqrt{d_0}(w_4-1)}\sum_s\omega_s, \qquad \tau_\ell(\tilde f) = \frac{C}{\ell(\sqrt{d_0})^\ell}\sum_s\omega_s^\ell \qquad(\ell = 2,3,4,\ldots). \tag{C.136}
\]
We can expand $\tilde f$ in terms of the Chebyshev polynomials as
\[
\tilde f(x) = \sum_{\ell=0}^\infty C_\ell T_\ell\Big(\frac x2\Big). \tag{C.137}
\]
The orthogonality relation for the Chebyshev polynomials implies that for $\ell\ge1$,
\[
\tau_\ell(\tilde f) = \frac{C_\ell}{\pi}\int_{-2}^2T_\ell\Big(\frac x2\Big)T_\ell\Big(\frac x2\Big)\frac{dx}{\sqrt{4-x^2}} = \frac{C_\ell}{\pi}\int_{-1}^1T_\ell(y)T_\ell(y)\frac{dy}{\sqrt{1-y^2}} = \frac{C_\ell}{2}. \tag{C.138}
\]
Thus, (C.136) holds if and only if
\[
\tilde f(x) = c_0+2C\sum_s\bigg[\frac{2\omega_s}{\sqrt{d_0}(w_4-1)}T_1\Big(\frac x2\Big)+\sum_{\ell=2}^\infty\frac1\ell\Big(\frac{\omega_s}{\sqrt{d_0}}\Big)^\ell T_\ell\Big(\frac x2\Big)\bigg] = c_0+2C\sum_s\bigg[\frac{\omega_s}{\sqrt{d_0}}\Big(\frac{2}{w_4-1}-1\Big)T_1\Big(\frac x2\Big)+\sum_{\ell=1}^\infty\frac1\ell\Big(\frac{\omega_s}{\sqrt{d_0}}\Big)^\ell T_\ell\Big(\frac x2\Big)\bigg] \tag{C.139}
\]
for some constant $c_0$. We notice the following identity for the Chebyshev polynomials:
\[
\sum_{\ell=1}^\infty\frac{t^\ell}{\ell}T_\ell(x) = \log\bigg(\frac{1}{\sqrt{1-2tx+t^2}}\bigg). \tag{C.140}
\]
(See, e.g., (18.12.9) of [45].) Since $T_1(x) = x$, we find that (C.139) is equivalent to
\[
\tilde f(x) = c_0+C\sum_s\bigg[\frac{\omega_s}{\sqrt{d_0}}\Big(\frac{2}{w_4-1}-1\Big)x-\log\Big(\frac{d_0-\omega_s\sqrt{d_0}\,x+\omega_s^2}{d_0}\Big)\bigg], \tag{C.141}
\]
or
\[
f(x) = c_0+C\sum_s\bigg[\frac{\omega_s}{d_0}\Big(\frac{2}{w_4-1}-1\Big)x-\frac{\omega_s(1+d_0)}{d_0}\Big(\frac{2}{w_4-1}-1\Big)\bigg]-C\sum_s\log\bigg[\frac{\omega_s}{d_0}\bigg(\Big(1+\frac{d_0}{\omega_s}\Big)(1+\omega_s)-x\bigg)\bigg]. \tag{C.142}
\]
This concludes the proof of Theorem 5.5, with an optimal function
\[
\varphi_\Omega(x) = \tilde\varphi_\Omega(\phi(x)) \tag{C.143}
\]
where
\[
\tilde\varphi_\Omega(x) = c_0+\sum_s\bigg[\frac{\omega_s}{\sqrt{d_0}}\Big(\frac{2}{w_4-1}-1\Big)x-\log\Big(\frac{d_0-\omega_s\sqrt{d_0}\,x+\omega_s^2}{d_0}\Big)\bigg]. \tag{C.144}
\]
Choosing
\[
c_0 = \sum_s\bigg[\frac{1+d_0}{d_0}\Big(\frac{2}{w_4-1}-1\Big)\omega_s+\log(\omega_s/d_0)\bigg],
\]
we get (4.2).
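Both the identity (C.140) and the optimal function (C.144) can be checked and evaluated directly. In the sketch below (illustrative only; we set $c_0 = 0$ and the overall constant $C = 1$, and the values of $\omega_s$, $d_0$, $w_4$ and the test points are arbitrary), we use $T_\ell(x) = \cos(\ell\arccos x)$ for $|x|\le1$.

```python
import numpy as np

# Check of the generating identity (C.140):
#   sum_l t^l/l * T_l(x) = log(1/sqrt(1 - 2tx + t^2)),  |t| < 1, |x| <= 1.
x, t = 0.37, 0.45
lhs = sum(t**l / l * np.cos(l * np.arccos(x)) for l in range(1, 200))
rhs = np.log(1 / np.sqrt(1 - 2 * t * x + t**2))
print("lhs:", lhs, "  rhs:", rhs)

# Sketch of the optimal function (C.144) on the rescaled spectrum (c0 = 0, C = 1).
def phi_tilde(x, omegas, d0, w4):
    out = np.zeros_like(x, dtype=float)
    for w in omegas:
        out += (w / np.sqrt(d0)) * (2 / (w4 - 1) - 1) * x \
               - np.log((d0 - w * np.sqrt(d0) * x + w**2) / d0)
    return out

xs = np.linspace(-1.9, 1.9, 5)
print(phi_tilde(xs, omegas=[0.3, 0.5], d0=0.5, w4=2.0))
```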
Further, we can see that
\[
\varphi_\Omega(x) = \sum_s\varphi_{\omega_s}(x). \tag{C.145}
\]
From this, we directly obtain that $L_\Omega = \sum_sL_{\omega_s}$,
\[
m_Y(\varphi_\omega)|_{H_0} = -\frac12\sum_s\log\Big(1-\frac{\omega_s^2}{d_0}\Big)+\frac{1}{2d_0}(w_4-3)\sum_s\omega_s^2, \tag{C.146}
\]
\[
m_Y(\varphi_\omega)|_{H_1} = m_Y(\varphi_\omega)|_{H_0}+\sum_{p,q}\bigg[-\log\Big(1-\frac{\omega_p\omega_q}{d_0}\Big)+\frac{\omega_p\omega_q}{d_0}\Big(\frac{2}{w_4-1}-1\Big)\bigg] \tag{C.147}
\]
and
\[
V_Y(\varphi_\omega)|_{H_1} = V_Y(\varphi_\omega)|_{H_0} = 2\sum_{p,q}\bigg[-\log\Big(1-\frac{\omega_p\omega_q}{d_0}\Big)+\frac{\omega_p\omega_q}{d_0}\Big(\frac{2}{w_4-1}-1\Big)\bigg]. \tag{C.148}
\]