diff --git "a/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/2301.02168v1.pdf.txt" "b/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/2301.02168v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/2301.02168v1.pdf.txt" @@ -0,0 +1,3731 @@ +On the Approximation Accuracy of +Gaussian Variational Inference +Anya Katsevich +akatsevi@mit.edu +Philippe Rigollet +rigollet@math.mit.edu +January 6, 2023 +Abstract +The main quantities of interest in Bayesian inference are arguably the first two moments of the +posterior distribution. In the past decades, variational inference (VI) has emerged as a tractable approach +to approximate these summary statistics, and a viable alternative to the more established paradigm of +Markov Chain Monte Carlo. However, little is known about the approximation accuracy of VI. In this +work, we bound the mean and covariance approximation error of Gaussian VI in terms of dimension +and sample size. Our results indicate that Gaussian VI outperforms significantly the classical Gaussian +approximation obtained from the ubiquitous Laplace method. Our error analysis relies on a Hermite +series expansion of the log posterior whose first terms are precisely cancelled out by the first order +optimality conditions associated to the Gaussian VI optimization problem. +1 +Introduction +A central challenge in Bayesian inference is to sample from, or compute summary statistics of, a posterior +distribution π on Rd. The classical approach to sampling is Markov Chain Monte Carlo (MCMC), in which +a Markov chain designed to converge to π is simulated for sufficiently long time. However, MCMC can +be expensive, and it is notoriously difficult to identify clear-cut stopping criteria for the algorithm [CC96]. +Besides, if one is only interested in summary statistics of π such as the mean and covariance, then generating +samples from π may not be the most efficient way to achieve this goal. An alternative, often computationally +cheaper, approach is variational inference (VI) [BKM17]. The idea of VI is to find, among all measures in +a certain parameterized family P, the closest measure to π. While various measures of proximity have been +proposed since the introduction of VI [DD21, DDP21], we employ here KL divergence, which is, by far, +the most common choice. Typically, statistics of interest, chiefly its first two moments, for measures in the +family P are either readily available or else easily computable. In this work, we consider the family of normal +distributions, which are directly parameterized by their mean and covariance. We define +ˆπ = N( ˆm, ˆS) ∈ argmin +p∈PGauss +KL( p ∥ π), +(1.1) +and take ˆm, ˆS as our estimates of the true mean mπ and covariance Sπ of π. PGauss denotes the family of +non-degenerate Gaussian distributions on Rd. +A key difference between MCMC and VI is that unbiased MCMC algorithms yield arbitrarily accurate +samples from π if they are run for long enough. On the other hand, the output of a perfect VI algorithm is +ˆπ, which is itself only an approximation to π. Therefore, a fundamental question in VI is to understand the +quality of the approximation ˆπ ≈ π, particularly in terms of the statistics of interest. In this work, we bound +the mean and covariance estimation errors ∥ ˆm − mπ∥ and ∥ ˆS − Sπ∥ for the Gaussian VI estimate (1.1). +Of course, we cannot expect an arbitrary, potentially multimodal π to be well-approximated by a Gaussian +distribution. 
In the setting of Bayesian inference, however, the Bernstein-von Mises theorem guarantees that +under certain regularity conditions, a posterior distribution converges to a Gaussian density in the limit of +large sample size [VdV00, Chapter 10]. To understand why this is the case, consider a generic posterior +1 +arXiv:2301.02168v1 [math.ST] 5 Jan 2023 + +π = πn with density of the form +πn(θ | x1:n) ∝ ν(θ) +n +� +i=1 +pθ(xi) +(1.2) +Here, ν is the prior, pθ is the model density, and x1:n = x1, . . . , xn are i.i.d observations. Provided ν and pθ +are everywhere positive, we can write πn as +πn(θ) ∝ e−nvn(θ), +vn(θ) := − 1 +n +n +� +i=1 +log pθ(xi) − 1 +n log ν(θ). +If n is large and vn has a strict global minimum at θ = m∗, then πn will place most of its mass in a +neighborhood of m∗. In other words, πn is effectively unimodal, and hence a Gaussian approximation is +reasonable in this case. This reasoning drives a second, so-called Laplace approximation to πn, which is a +Gaussian centered at m∗. Hence, the mode m∗ can also serve as an approximation to the true mean mπn. +However, as we discuss below, Gaussian VI yields a more accurate estimate of mπn. +Main contributions. +Our main result quantifies the mean and covariance estimation errors of Gaussian +VI for a target measure πn ∝ e−nvn, in terms of sample size n and dimension d. In line with the above +reasoning, the key assumption is that vn has a unique global minimizer. +It is useful at this point to think of vn as being a quantity of order 1, and for the purpose of readability, +we write simply vn = v in the rest of this introduction. It is easy to see that πn ∝ e−nv has variance of +order 1/n. To account for this vanishing variance, we rescale the approximation errors appropriately in the +statement of the following theorem. +Theorem. Let πn ∝ exp(−nv) have mean and covariance mπn, Sπn respectively. Assume that d3 ≤ n and +that v ∈ C4(Rd) has a unique strict minimum at m∗. If ∥∇3v∥ and ∥∇4v∥ grow at most polynomially away +from m∗, and if v grows at least logarithmically away from m∗, then the mean and covariance ˆmn, ˆSn of the +variational Gaussian approximation (1.1) to π satisfy +√n∥ ˆmn − mπn∥ ≲ +�d3 +n +�3/2 +n∥ ˆSn − Sπn∥ ≲ d3 +n , +(1.3) +Here, ≲ means the inequalities hold up to an absolute (d, n-independent) constant, as well as a factor +depending on second and third order derivatives of v in a neighborhood of the mode m∗. This v-dependent +factor is made explicit in Section 2. +The theorem shows that both the mean and covariance VI estimates, and especially the mean estimate +ˆmn, are remarkably accurate approximations to the true mean and covariance. As such, it is a compelling +endorsement of Gaussian VI for estimating the posterior mean and covariance in the finite sample regime. +Although the condition n ≥ d3 is restrictive when d is very large, we believe that it is unavoidable without +further assumptions and note that it also appears in existing bounds for the Laplace method [Spo22]. +As mentioned above, the Laplace method is a competing Gaussian approximation to πn that is widespread +in practice for its computational simplicity. We use it as a benchmark to put the above error bounds into +context. The Laplace approximation to πn ∝ e−nv is given by +πn ≈ N(m∗, (n∇2v(m∗))−1), +where m∗ is the global minimizer of v. This approximation simply replaces v by its second order Taylor +expansion around m∗. The recent works [Spo22] and [KGB22] derive error bounds for the Laplace approxima- +tion. 
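In practice, the Laplace approximation above is computed by locating the mode m∗ with a few Newton iterations and then inverting the scaled Hessian. The sketch below is a minimal illustration under the assumption that v is smooth, the Hessian is positive definite along the path, and the optimizer is started near m∗; the callables `grad_v` and `hess_v` are placeholder assumptions, not objects from the paper.

```python
import numpy as np

def laplace_approximation(grad_v, hess_v, d, n, x0=None, tol=1e-10, max_iter=100):
    """Minimal sketch of the Laplace approximation N(m*, (n * hess_v(m*))^{-1}).

    Finds the minimizer m* of v by Newton's method, then takes the
    covariance to be the inverse of the Hessian of V = n*v at m*.
    """
    m = np.zeros(d) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess_v(m), grad_v(m))   # Newton step
        m -= step
        if np.linalg.norm(step) < tol:
            break
    S = np.linalg.inv(n * hess_v(m))                   # covariance (n * Hessian)^{-1}
    return m, S
```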
Spokoiny [Spo22] shows that √n∥m∗ − mπn∥ ≲ (d3/n)1/2 assuming v is strongly convex, and [KGB22] +similarly shows that √n∥m∗ − mπn∥ ≲ 1/√n with implicit dependence on d, under weaker assumptions. +For the covariance approximation, an explicit error bound is stated only in [KGB22]; the authors show that +n∥(n∇2v(m∗))−1 − Sπn∥ ≲ 1/√n. Meanwhile, Spokoiny states lemmas in the appendix from which one can +derive a d-dependent covariance error bound. +In a companion paper [Kat23], we extend the techniques developed in the present work to obtain the +following tighter n dependence of the Laplace covariance error: +n∥(n∇2v(m∗))−1 − Sπn∥ ≲ 1/n. +(1.4) +2 + +This n dependence can also be obtained using the approach in [Spo22]. +Let us summarize the n-dependence of these bounds, incorporating the 1/√n and 1/n scaling of the +mean and covariance errors. The Gaussian VI mean approximation error is n−1/2 × n−3/2, which is a factor +of n−1 more accurate than the Laplace mean error of n−1/2 × n−1/2. The covariance approximation error +is the same for both methods (using the tighter covariance bound (1.4)): n−1 × n−1. VI’s improved mean +approximation accuracy is confirmed in our simulations of a simple Bayesian logistic regression example in +d = 2; see Figure 1, and Section 2.3 for more details. +Figure 1: Gaussian VI yields a more accurate mean estimate than does Laplace, while the two covariance +estimates are on the same order. +Here, πn is the likelihood of logistic regression given n observations +in dimension d = 2. +For the left-hand plot, the slopes of the best-fit lines are −1.04 for the Laplace +approximation and −2.02 for Gaussian VI. For covariance: the slopes of the best-fit lines are -2.09 for +Laplace, -2.12 for VI. +We note that the Laplace approximation error bounds in the companion work [Kat23] are also tighter in +their dimension dependence. +First-order optimality conditions and Hermite series expansions. +The improvement of Gaussian +VI over the Laplace method to estimate the mean a posteriori rests on a remarkable interaction between +first-order optimality conditions and a Hermite series expansion of the potential v. +Hereafter, we replace θ by x and let V = nv, π ∝ e−V . The focal point of this work are the first order +optimality equations for the minimization (1.1): +∇m,SKL( N(m, S) ∥ π) +�� +(m,S)=( ˆm, ˆS) = 0. +This is also equivalent to setting the Bures-Wasserstein gradient of KL( p ∥ π) to zero at p = N( ˆm, ˆS) as +in [LCB+22]. Explicitly, we obtain that (m, S) = ( ˆm, ˆS) is a solution to +E [∇V (m + S1/2Z)] = 0, +E [∇2V (m + S1/2Z)] = S−1, +(EV ) +where Z ∼ N(0, Id) and S1/2 is the positive definite symmetric square root of S; see [LCB+22] for this +calculation. In some sense, the fact that N( ˆm, ˆS) minimizes the KL divergence to π does not explain why +ˆm is such an accurate estimate of mπ. Rather, the true reason has to do with properties of solutions to the +fixed point equations (EV ). +To see why, consider the function ¯V (x) = V ( ˆm + ˆS1/2x). If π ∝ e−V is close to the density of N( ˆm, ˆS), +then ¯π ∝ e− ¯V should be close to the density of N(0, Id). In other words, we should have that ¯V (x) ≈ +const. + ∥x∥2/2. This is ensured by the first order optimality equations (EV ). Indeed, note that (EV ) can +be written in terms of ¯V as +E [∇ ¯V (Z)] = 0, +E [∇2 ¯V (Z)] = Id. +(1.5) +3 + +Mean approx. error m - mπ +( +10-2 +10-3 +m = m* (Laplace) +m = mn (Gaussian VI) +10- +10-6 +102 +103 +nCovariance approx. 
error IlS - Shm +- S = (n-v(m×))-1 (Laplace) +- S = Sn (Gaussian VI) +10-3 +10- +10-5 +102 +103 +nAs we explain in Section 3.4, the equations (1.5) set the first and second order coefficients in the Hermite +series expansion of ¯V to 0 and Id, respectively. As a result, ¯V (x) − ∥x∥2/2 = const. + r3(x), where r3 is a +Hermite series containing only third and higher order Hermite polynomials. The accuracy of the Gaussian +VI mean and covariance estimates stems from the fact that the Hermite remainder r3 is of order r3 ∼ 1/√n, +and the fact that r3 is orthogonal to linear and quadratic functions with respect to the Gaussian measure. +See Section 3.4 for a high-level summary of this Hermite series based error analysis. +Related Work. +The literature on VI can be roughly divided into statistical and algorithmic works. Works +on the statistical side have focused on the contraction of variational posteriors around a ground truth +parameter in the large n (sample size) regime. +(We use “variational posterior” as an abbreviation for +variational approximation to the posterior.) For example, [WB19] prove an analogue of the Bernstein-von +Mises theorem for the variational posterior, [ZG20] study the contraction rate of the variational posterior +around the ground truth in a nonparametric setting, and [AR20] study the contraction rate of variational +approximations to tempered posteriors, in high dimensions. +A key difference between these works and ours is that here, we determine how well the statistics of the +variational posterior match those of the posterior itself, rather than those of a limiting (n → ∞) distribution. +We are only aware of one other work studying the problem of quantifying posterior approximation accuracy. +In [HY19], the authors consider a Bayesian model with “local” latent variables (one per data point) and +global latent variables, and they study the mean field variational approximation, given by the product +measure closest to the true posterior in terms of KL divergence. They show that the the mean ˆm of their +approximation satisfies √n∥ ˆm − mπ∥ ≲ 1/n1/4. +Since the algorithmic side of VI is not our focus here, we simply refer the reader to the work [LCB+22] and +references therein. This work complements our analysis in that it provides rigorous convergence guarantees +for an algorithm that solves the optimization problem (1.1). +Organization of the paper. +The rest of the paper is organized as follows. In Section 2, we first redefine +( ˆm, ˆS) as a certain “canonical” solution to the first order optimality conditions (EV ). We then state our +assumptions and main result on the Gaussian VI mean and covariance approximation errors, and present +a numerical result confirming the n scaling of our bound. In Section 3, we give an overview of the proof, +and in Section 4 we flesh out the details. Section 5 outlines the proof of the existence and uniqueness of +the aforementioned “canonical” solution ( ˆm, ˆS) to (EV ). In the Appendix, we derive a multivariate Her- +mite series remainder formula and then prove a number of supplementary results omitted from the main text. +Notation. For two k-tensors T, Q ∈ (Rd)⊗k, we define +⟨T, Q⟩ = +d +� +i1,...,ik=1 +Ti1...ikQi1...ik, +and let ∥T∥F = ⟨T, T⟩1/2 be the Frobenius norm of T. We will more often make use of the operator norm +of T, denoted simply by ∥ · ∥: +∥T∥ = +sup +∥u1∥≤1,...,∥uk∥≤1 +⟨u1 ⊗ · · · ⊗ uk, T⟩, +(1.6) +where the supremum is over vectors u1, . . . , uk ∈ Rd. 
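For k ≥ 3 the operator norm (1.6) has no closed form. When a numerical value is wanted (purely as an illustration of the definition; nothing in the analysis requires computing it), a standard heuristic is alternating maximization over the unit vectors u1, . . . , uk with random restarts, as in the following sketch.

```python
import numpy as np

def tensor_op_norm(T, restarts=20, iters=50, seed=0):
    """Heuristic lower bound on the operator norm (1.6) of a k-tensor T.

    Alternating maximization: with all but u_j fixed, the maximizing u_j is
    the normalized contraction of T against the remaining unit vectors.
    """
    rng = np.random.default_rng(seed)
    k, d = T.ndim, T.shape[0]

    def contract_except(us, j):
        M = T
        for i in reversed(range(k)):       # contract highest axes first so that
            if i != j:                     # lower axis indices stay valid
                M = np.tensordot(M, us[i], axes=(i, 0))
        return M                           # 1-D array along the free axis j

    best = 0.0
    for _ in range(restarts):
        us = [v / np.linalg.norm(v) for v in rng.standard_normal((k, d))]
        for _ in range(iters):
            for j in range(k):
                g = contract_except(us, j)
                us[j] = g / (np.linalg.norm(g) + 1e-30)
        best = max(best, abs(float(np.dot(us[-1], contract_except(us, k - 1)))))
    return best
```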
For positive scalars a, b, we write a ≲ b to denote +that a ≤ Cb for an absolute constant C (the only exception to this notation is (1.3) above, in which ≲ also +incorporated a v dependent factor). We let +mπ = E π[X], +Sπ = Covπ(X) = E π[(X − mπ)(X − mπ)T ]. +Finally, for a function V with a unique global minimizer m∗, we let HV denote ∇2V (m∗). +2 +Statement of Main Result +Throughout the rest of the paper, we write π ∝ e−nv. Note that v may depend on n in a mild fashion as is +often the case for Bayesian posteriors. We also define V = nv. +4 + +In light of the centrality of the fixed point equations (EV ), we begin the section by redefining ( ˆm, ˆS) as +solutions to (EV ) rather than minimizers of the KL divergence objective (1.1). These definitions diverge +only in the case that V is not strongly convex. Indeed, if V is strongly convex then KL( · ∥ π) is strongly +geodesically convex in the submanifold of normal distributions; see, e.g., [LCB+22]. Therefore, in this case, +there is a unique minimizer ˆπ of the KL divergence, corresponding to a unique solution ( ˆm, ˆS) ∈ Rd × Sd +++ +to (EV ). In general, however, if (m, S) solve (EV ) this does not guarantee that m is a good estimator of +mπ. To see this, consider the equations in the following form, recalling that v = V/n: +E [∇v(m + S1/2Z)] = 0, +S E [∇2v(m + S1/2Z)] = 1 +nId. +(2.1) +Let x ̸= m∗ be a critical point of v, that is, ∇v(x) = 0, and consider the pair (m, S) = (x, 0). For this (m, S) +we have +E [∇v(m + S1/2Z)] = ∇v(x) = 0, +S E [∇2v(m + S1/2Z)] = 0 ≈ 1 +nId. +Thus (x, 0) is an approximate solution to (2.1), and by continuity, we expect that there is an exact solution +nearby. In other words, to each critical point x of v is associated a solution (m, S) ≈ (x, 0) of (2.1). The +solution (m, S) of (2.1) which we are interested in, then, is the one near (m∗, 0). Lemma 1 below formalizes +this intuition; we show there is a unique solution (m, S) to (EV ) in the set +RV = +� +(m, S) ∈ Rd×Sd +++ : S ⪯ 2H−1, +∥ +√ +H +√ +S∥2 + ∥ +√ +H(m − m∗)∥2 ≤ 8 +� +, +(2.2) +where H = ∇2V (m∗) = n∇2v(m∗). +Note that due to the scaling of H with n, the set RV is a small +neighborhood of (m∗, 0). We call this unique solution (m, S) in RV the “canonical” solution of (EV ). +We expect the Gaussian distribution corresponding to this canonical solution to be the minimizer of (1.1), +although we have not proved this. Regardless of whether it is true, we will redefine ( ˆm, ˆS) to denote the +canonical solution. Indeed, whether or not N( ˆm, ˆS) actually minimizes the KL divergence or is only a local +minimizer is immaterial for the purpose of estimating mπ. +In the rest of this section, we state our assumptions on v, a lemma guaranteeing a canonical solution +ˆm, ˆS to (EV ), and our main results bounding the mean and covariance errors of the Laplace and Gaussian +VI approximations. +2.1 +Assumptions +Our main theorem rests on rather mild assumptions on the regularity of the potential v. +Assumption V0. The function v is at least C3 and has a unique global minimizer x = m∗. +Let α2 be a lower bound on λmin(∇2v(m∗)) and β2 be an upper bound on λmax(∇2v(m∗)). +Assumption V1. There exists r > 0 such that N := nr ≥ d3 and +√r +α2√α2 +sup +∥y∥≤1 +���∇3v +� +m∗ + +� +r/α2 y +���� ≤ 1 +2. +(2.3) +Note that the left-hand side of (2.3) is monotonically increasing with r. Indeed, changing variables to +z = +� +r/α2 y, we see that the supremum is taken over the domain {∥z∥ ≤ +� +r/α2}, which grows with r. 
+Furthermore, the left-hand side equals zero when r = 0. Therefore, this assumption states that we can +increase r from 0 up to a large multiple of d3/n, while keeping the left-hand side below 1/2. +Remark 2.1. Define β3 = 1 +2α2√α2/√r, so that r = 1 +4α3 +2/β2 +3. By Assumption V1, we have +sup +∥y∥≤1 +∥∇3v(m∗ + +� +r/α2 y)∥ ≤ β3. +5 + +Hence, we can also think of β3 as an upper bound on ∥∇3v∥. For future reference, we also define +C2,3 := 1/r = 4β2 +3 +α3 +2 +. +(2.4) +Assumption V2 (Polynomial growth of ∥∇kv∥, k = 3, 4). For some 0 < q ≲ 1 we have +√r +α2√α2 +���∇3v +� +m∗ + +� +r/α2 y +���� ≤ 1 + ∥y∥q, +∀y ∈ Rd. +(2.5) +Here, r is from Assumption V1. If v is C4, we additionally assume that +r +α2 +2 +���∇4v +� +m∗ + +� +r/α2 y +���� ≲ 1 + ∥y∥q, +∀y ∈ Rd +with the same q and r. +Note that Assumption V1 guarantees that (2.5) is satisfied inside the unit ball {∥y∥ ≤ 1}. Therefore, +(2.5) simply states that we can extend the constant bound 1/2 to a polynomial bound outside the unit ball. +Also, note that if (2.5) is satisfied for some q only up to a constant factor (i.e. ≲) in the region {∥y∥ ≥ 1} +then we can always increase q to ensure the inequality is satisfied exactly. +Assumption V3 (Growth of v away from the minimum). Let q be as in Assumption V2. Then +v (m∗ + x) ≥ d + 12q + 36 +n +log( +� +nβ2∥x∥), +∀∥x∥ ≥ +� +r/β2. +(2.6) +See Section 3 below for further explanation of the intuition behind and consequences of the above as- +sumptions. +2.2 +Main result +We are now ready to state our main results. First, we characterize the Gaussian VI parameters ( ˆmπ, ˆSπ): +Lemma 1. Let Assumptions V0, V1, V2 be satisfied and assume √nr/d ≥ 40 +√ +2( +√ +3 + +� +(2q)!), where r, q +are from Assumptions V1, V2, respectively. Define H = ∇2V (m∗) = n∇2v(m∗). Then there exists a unique +(m, S) = ( ˆmπ, ˆSπ) in the set +RV = +� +(m, S) ∈ Rd × Sd +++ : S ⪯ 2H−1, ∥ +√ +H +√ +S∥2 + ∥ +√ +H(m − m∗)∥2 ≤ 8 +� +which solves (EV ). Moreover, ˆSπ satisfies +2/3 +nβ2 +Id ⪯ ˆSπ ⪯ +2 +nα2 +Id. +(2.7) +We now state our bounds on the mean and covariance errors. For simplicity, we restrict ourselves to the +case v ∈ C4. See Theorem 1-W for results in the case v ∈ C3 \ C4. +Theorem 1 (Accuracy of Gaussian VI). Let Assumption V3 and the assumptions from Lemma 1 be satisfied, +and ˆmπ, ˆSπ be as in this lemma. Recall the definition of C2,3 from (2.4). If v ∈ C4, then +∥ ˆmπ − mπ∥ ≲ +1 +√nα2 +�C2,3d3 +n +�3/2 +∥ ˆSπ − Sπ∥ ≲ +1 +nα2 +C2,3d3 +n +. +(2.8) +In Section 3.3, we prove that Lemma 1 and Theorem 1 are a consequence of analogous statements for a +certain affine invariant density. See that subsection, and Section 3 more generally, for proof overviews. +6 + +2.3 +An example: Logistic Regression +As noted in the introduction, our results show that Gaussian VI yields very accurate mean and covariance +approximations; in fact, the mean estimate is a full factor of 1/n more accurate than the mean estimate given +by the Laplace approximation. Neither our bounds nor those on the Laplace error in [Spo22] and [KGB22] +are proven to be tight, but we will now confirm numerically that the bounds give the correct asymptotic +scalings with n for a logistic regression example. We also show how to check the assumptions for this example. +In logistic regression, we observe n covariates xi ∈ Rd and corresponding labels yi ∈ {0, 1}. The labels +are generated randomly from the covariates and a parameter z ∈ Rd via +p(yi | xi, z) = s(xT +i z)yi(1 − s(xT +i z))1−yi, +where s(a) = (1 + e−a)−1 is the sigmoid. 
In other words, yi ∼ Bern(s(xT +i z)). We take the ground truth z +to be z = e1 = (1, 0, . . . , 0), and we generate the xi, i = 1, . . . , n i.i.d. from N(0, λ2Id), so in particular the +covariates themselves do not depend on z. We take a flat prior, so that the posterior distribution of z is +simply the likelihood, π(z) = πn(z | x1:n) ∝ e−nv(z), where +v(z) = − 1 +n +n +� +i=1 +log p(yi | xi, z) += − 1 +n +n +� +i=1 +� +yi log s(xT +i z) + (1 − yi) log(1 − s(xT +i z)) +� +. +(2.9) +Numerical Simulation +For the numerical simulation displayed in Figure 1, we take d = 2 and n = 100, 200, . . . , 1000. For each n, we +draw ten sets of covariates xi, i = 1, . . . , n from N(0, λ2Id) with λ = +√ +5, yielding ten posterior distributions +πn(· | x1:n). We then compute the Laplace and VI mean and covariance approximation errors for each n +and each of the ten posteriors at a given n. The solid lines in Figure 1 depict the average approximation +errors over the ten distributions at each n. The shaded regions depict the spread of the middle six out of +ten approximation errors. See Appendix D for details about the simulation. +In the left panel of Figure 1 depicting the mean error, the slopes of the best fit lines are −1.04 and −2.02 +for Laplace and Gaussian VI, respectively. For the covariance error in the righthand panel, the slopes of the +best fit lines are −2.09 and −2.12 for Laplace and Gaussian VI. This confirms that our bounds, the mean +bound of [KGB22] and the bound (1.4) (also implied by results in [Spo22]) are tight in their n dependence. +Verification of Assumptions +It is well known that the likelihood (2.9) is convex, and has a finite global minimizer z = m∗ (the MLE) +provided the data xi, i = 1, . . . , n are not linearly separable. Assumption V0 is satisfied in this case. For +simplicity, we verify the remaining assumptions in the case that n is large enough that we can approximate +v by the population log likelihood v∞, whose global minimizer is m∗ = e1, the ground truth vector. Using +this approximation, we show in Appendix D that +α2 ≳ λ2s′(λ), +β2 ≤ λ2 +4 , +and +∥∇3v∞(z)∥ ≤ β3 := 2λ3, +∀z ∈ Rd. +(2.10) +To verify Assumption V1, we need to find r such that +sup +∥z−m∗∥≤√ +r/α2 +∥∇3v(z)∥ ≤ α3/2 +2 +2√r . +Using the uniform bound (2.10) on ∥∇3v∥, it suffices to take r = +α3 +2 +4β2 +3 , in which case +C2,3 = 1 +r = 4β2 +3 +α3 +2 +≲ +1 +s′(λ)3 ≲ (1 + cosh(λ))3, +(2.11) +7 + +using that s′(λ) = s(λ)(1 − s(λ)) = 1 +2(1 + cosh(λ))−1. Thus Assumption V1 is satisfied as long as n ≥ d3/r, +which is true provided n is larger than a constant multiple of (1 + cosh(λ))3. Next, we can use (2.10) and a +similar bound on ∥∇4v∥ (which is also bounded uniformly over Rd) to show that Assumption V2 is satisfied +with q = 0. It remains to check Assumption V3, which we do in Appendix D using the convexity of v. +Indeed, convexity immediately implies at least linear growth away from any point. We conclude that the +conditions of Theorem 1 are met. +3 +Proof Overview: Affine Invariant Rescaling and Hermite Ex- +pansion +In this section, we overview the proof of Theorem 1. We start in Section 3.1 by explaining the affine invariance +inherent to this problem. This motivates us to rescale V = nv to obtain a new affine invariant function W. +In Section 3.2, we state Assumptions W0-W3 on W, which include the definition of a scale-free parameter +N intrinsic to W. We then state our main results for W: Lemma 1-W and Theorem 1-W. 
In Section 3.3, +we deduce Lemma 1 and Theorem 1 for V from the lemma and theorem for W. We outline the proof of +Theorem 1-W in Section 3.4. The proof of Lemma 1-W is of a different flavor, and is postponed to Section 5. +3.1 +Affine Invariance +To prove Theorem 1, we will bound the quantities +∥ ˆS−1/2 +π +( ˆmπ − mπ)∥, +∥ ˆS−1/2 +π +Sπ ˆS−1/2 +π +− Id∥. +(3.1) +As shown in Section 3.3, combining the bounds on (3.1) with bounds on ∥ ˆSπ∥ will give the desired estimates +in Theorems 1. The reason for considering (3.1) rather than directly bounding the quantities in the theorems +is explained in Section 3.4. +In the following lemma, we show that the quantities (3.1) are affine invariant. We discuss the implications +of this fact at the end of the subsection. First, define +Definition 3.1. Let f be a C2 function with unique global minimizer m∗f, and let Hf = ∇2f(m∗f). Then +Rf = +� +(m, S) ∈ Rd×Sd +++ : S ⪯ 2H−1 +f , +∥ +√ +Hf +√ +S∥2 + ∥ +√ +Hf(m − m∗f)∥2 ≤ 8 +� +. +(3.2) +Lemma 3.1. Let V1, V2 ∈ C2(Rd), where V2(x) = V1(Ax + b) for some b ∈ Rd and invertible A ∈ Rd×d. Let +πi ∝ e−Vi, i = 1, 2. Then the pair ( ˆm1, ˆS1) is a unique solution to (EV1) in the set RV1 if and only if the +pair ( ˆm2, ˆS2) given by +ˆm2 = A−1( ˆm1 − b), +ˆS2 = A−1 ˆS1A−T +(3.3) +is a unique solution to (EV2) in the set RV2. Furthermore, +∥ ˆS−1/2 +2 +( ˆm2 − mπ2)∥ = ∥ ˆS−1/2 +1 +( ˆm1 − mπ1)∥, +∥ ˆS−1/2 +2 +Sπ2 ˆS−1/2 +2 +− Id∥ = ∥ ˆS−1/2 +1 +Sπ1 ˆS−1/2 +1 +− Id∥. +(3.4) +See Lemma C.1 of Appendix C for the proof of the first statement. The proof of (3.4) follows from (3.3), +the fact that +mπ2 = A−1(mπ1 − b), +Sπ2 = A−1Sπ1A−T , +(3.5) +and the following lemma +Lemma 3.2. Let C, D ∈ Sd +++ be symmetric positive definite matrices and x ∈ Rd. Then ∥C−1/2x∥ = +√ +xT C−1x and +∥C−1/2DC−1/2 − Id∥ = sup +u̸=0 +uT Du +uT Cu − 1. +This is a simple linear algebra result, so we omit the proof. +8 + +Discussion. +Lemma 3.1 shows that our bounds on the quantities (3.1) should themselves be affine invariant, i.e. the same +bounds should hold if we replace V = nv by any function in the set {V (A·+b) : A ∈ Rd×d invertible, b ∈ Rd}. +This motivates us to identify an affine-invariant large parameter N. It is clear that n itself cannot be the +correct parameter N because n is not well-defined: nv = (n/c)(cv) for any c > 0. Another natural candidate +for N, which removes this degree of freedom, is N = λmin(∇2V (m∗)). However, λmin(∇2V (m∗)) is not +affine-invariant. Indeed, replacing V (x) by V (cx), for example, changes λmin by a factor of c2. To obtain an +affine invariant bound, we will define N in Assumption W1 below as a parameter intrinsic to the function +W = W[V ] given by +W(x) = V (H−1/2 +V +x + m∗V ). +(3.6) +It is straightforward to show that for any other V2(x) = V (Ax + b), we have +W[V2](x) = V2(H−1/2 +V2 +x + m∗V2) = V (H−1/2 +V +x + m∗V ) = W[V ](x). +In other words, any function V2 in the set {V (A · +b) : A ∈ Rd×d invertible, b ∈ Rd} maps to the same, +affine invariant W. This function is the “correct” object of study, and any bounds we obtain must follow +from properties intrinsic to W. +3.2 +Assumptions and Results for W +In this section, we state assumptions on W, one of which identifies an appropriate affine invariant parameter +N intrinsic to W. +This parameter is such that as N increases, the measure ρ ∝ e−W is more closely +approximated by a Gaussian. 
We then state results on the existence and uniqueness of solutions ˆmρ, ˆSρ to +the first order optimality equations (EW ), and obtain bounds in terms of d and N on the quality of the VI +approximation to the mean and covariance of ρ. +Assumption W0. Let W be at least C3, with unique global minimizer x = 0, and ∇2W(0) = Id. Moreover, +assume without loss of generality that W(0) = 0. +Next, we identify N as a parameter quantifying the size of ∥∇3W∥ in a certain neighborhood of zero: +Assumption W1. There exists N ≥ d3 such that +√ +N sup +∥x∥≤1 +∥∇3W( +√ +Nx)∥ ≤ 1 +2. +(3.7) +This definition ensures that N scales proportionally to n. Indeed, suppose W1 is the affine invariant +function corresponding to the equivalence class containing n1v, and let W1 satisfy (3.7) with N = N1. +Then the affine invariant W2 corresponding to the equivalence class containing n2v is given by W2(x) = +n2 +n1 W1( +√n1 +√n2 x). From here it is straightforward to see that W2 satisfies (3.7) with N = N2, where N2/N1 = +n2/n1. To further understand the intuition behind this assumption, consider the following lemma. +Lemma 3.3. Let W satisfy Assumptions W0 and W1 and let C ≤ +� +N/d. Then +����W(x) − ∥x∥2 +2 +���� ≤ C3 +12 +d +√ +d +√ +N +, +∀∥x∥ ≤ C +√ +d, +W(x) ≥ ∥x∥2 +4 +, +∀∥x∥ ≤ +√ +N. +(3.8) +The lemma shows that N quantifies how close W is to a quadratic, and therefore how close ρ ∝ e−W is +to being Gaussian. +Proof. Taylor expanding W(x) to second order for ∥x∥ ≤ C +√ +d and using (3.7), we have +|W(x) − ∥x∥2/2| ≤ 1 +3! +sup +∥x∥≤C +√ +d +∥x∥3∥∇3W(x)∥ ≤ C3 +12 +d +√ +d +√ +N +. +(3.9) +9 + +The second inequality in (3.8) follows from the fact that ∇2W(x) ⪰ 1 +2Id for all ∥x∥ ≤ +√ +N, as we now show. +Taylor expanding ∇2W(x) to zeroth order, we get that +∥∇2W(x) − ∇2W(0)∥ ≤ +sup +∥x∥≤ +√ +N +∥x∥∥∇3W(x)∥ ≤ 1 +2. +Since ∇2W(0) = Id it follows that ∇2W(x) ⪰ 1 +2Id. +Assumption W2 (Polynomial growth of ∥∇kW∥, k = 3, 4). There exists 0 < q ≲ 1 such that +√ +N +���∇3W +�√ +Nx +���� ≤ 1 + ∥x∥q +∀x ∈ Rd. +(3.10) +If W is C4, then the following bound also holds with the same q: +N +���∇4W +�√ +Nx +���� ≲ 1 + ∥x∥q, +∀x ∈ Rd. +(3.11) +The N 1 in (3.11) is also chosen to respect the proportional scaling of N with n: if the affine invariant +W1 corresponding to n1v satisfies (3.11) with N = N1, then the affine invariant W2 corresponding to n2v +satisfies (3.11) with the same q and N = N2, where N2/N1 = n2/n1. +Note that Assumption W1 guarantees that (3.10) is satisfied inside the unit ball; therefore, (3.10) simply +states that we can extend the constant bound 1/2 to a polynomial bound outside of this ball. Also, note +that if the inequality is satisfied for some q only up to a constant factor (i.e. ≲) in the region {∥x∥ ≥ 1}, +then we can always increase q to ensure the inequality is satisfied exactly. +Assumption W2 implies that expectations of the form E [∥∇kW(Y )∥p], k = 3, 4, decay with N. Indeed, +we have +Lemma 3.4. Let p ≥ 0 and Y ∈ Rd be a random variable such that E [∥Y ∥pq] < ∞, where q is from +Assumption W2. Let k = 3 or 4, corresponding to the cases W ∈ C3 or W ∈ C4, respectively. Then +E [∥∇kW(Y )∥p] ≲ N −p( k +2 −1) � +1 + E +� +∥Y/ +√ +d∥pq�� +. +Proof. By Assumption W2, +E [∥∇kW(Y )∥p] ≲ N −p( k +2 −1)E +�� +1 + ∥Y/ +√ +N∥q�p� +≤ N −p( k +2 −1)E +�� +1 + ∥Y/ +√ +d∥q�p� +≲ N −p( k +2 −1) � +1 + E +� +∥Y/ +√ +d∥pq�� +. +(3.12) +In the second line we used that d ≤ N. +If E [∥Y/ +√ +d∥pq] is d-independent, as for Gaussian random variables, then the above bound reduces to +E [∥∇kW(Y )∥p] ≲ N −p( k +2 −1). 
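To make the rescaled object W concrete: given a potential v and a sample size n, the map (3.6) can be formed numerically as in the sketch below, after which W(0) = 0, ∇W(0) = 0 and ∇²W(0) = Id hold by construction, as required by Assumption W0. The optimizer call and the callables `v`, `grad_v`, `hess_v` are illustrative assumptions (and the Hessian at the minimizer is assumed positive definite); this is not a construction used in the proofs.

```python
import numpy as np
from scipy.optimize import minimize

def rescale_to_W(v, grad_v, hess_v, n, x0):
    """Minimal sketch of the affine-invariant rescaling (3.6).

    Given V = n*v with minimizer m*, returns W(x) = V(H^{-1/2} x + m*) - V(m*),
    where H = grad^2 V(m*), so that W(0) = 0, grad W(0) = 0, grad^2 W(0) = I_d.
    """
    res = minimize(v, x0, jac=grad_v, method="BFGS")
    m_star = res.x
    H = n * hess_v(m_star)                          # Hessian of V at the minimizer
    eigval, eigvec = np.linalg.eigh(H)              # symmetric inverse square root of H
    H_inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
    V = lambda x: n * v(x)
    W = lambda x: V(H_inv_sqrt @ x + m_star) - V(m_star)
    return W, m_star, H_inv_sqrt
```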
+Assumption W3 (Separation from Zero; Growth at Infinity). We have +W(x) ≥ (d + 12q + 36) log ∥x∥, +∀∥x∥ ≥ +√ +N, +where q is from Assumption W2. +Remark 3.1. For consistency with the previous assumptions, let us also reformulate this one in terms of +W( +√ +Nx): +W( +√ +Nx) ≥ (d + 12q + 36) log +√ +N + (d + 12q + 36) log ∥x∥, +∀∥x∥ ≥ 1. +(3.13) +Recall that inside the unit ball, W( +√ +Nx) is no less than N∥x∥2/4, by Lemma 3.3. Therefore, the value of +W( +√ +Nx) increases up to at least N/4 as x approaches unit norm. We can interpret (3.13) as saying that +outside the unit ball, we must maintain constant separation of order d log N from zero, and W(x) must grow +at least logarithmically in ∥x∥ as ∥x∥ → ∞. +10 + +We now state the existence and uniqueness of solutions to (EW ) in the region RW . +Lemma 1-W. Take Assumptions W0, W1, and W2 to be true, and assume +√ +N/d ≥ 40 +√ +2( +√ +3 + +� +(2q)!), +where q is from Assumption W2. Then there exists a unique (m, S) = ( ˆmρ, ˆSρ) ∈ RW , +RW = {(m, S) ∈ Rd × Sd +++ : S ⪯ 2Id, ∥S∥ + ∥m∥2 ≤ 8}, +(3.14) +solving (EW ). The matrix ˆSρ furthermore satisfies +2 +3Id ⪯ ˆSρ ⪯ 2Id. +(3.15) +See Section 5 for the proof. Note that RW as defined here is the same as in Definition 3.1, since m∗W = 0 +and HW = ∇2W(0) = Id. We will make frequent use of the following inequality, which summarizes the +bounds on ˆmρ, ˆSρ guaranteed by the lemma: +∥ ˆmρ∥ ≤ 2 +√ +2, +2 +3Id ⪯ ˆSρ ⪯ 2Id. +(3.16) +Theorem 1-W. Take Assumptions W0, W1, W2, and W3 to be true, and let ( ˆmρ, ˆSρ) be as in the above +lemma. Then +∥ ˆS−1/2 +ρ +( ˆmρ − mρ)∥ ≲ +� +� +� +d3 +N +if W ∈ C3, +� +d3 +N +�3/2 +, +if W ∈ C4. +, +∥ ˆS−1/2 +ρ +Sρ ˆS−1/2 +ρ +− Id∥ ≲ d3 +N . +3.3 +From V to W and back +In the following sections, we prove Lemma 1-W and Theorem 1-W. In Lemma C.2 in the appendix, we show +that Assumptions V0-V3 imply Assumptions W0-W3 with N = nr and the same q. From these results, +Lemma 1 and Theorem 1 easily follow. +Proof of Lemma 1. Let ρ ∝ e−W , where W is defined as in (3.6). By Lemma C.2, the assumptions on V +imply the assumptions on W. Hence, we can apply Lemma 1-W to conclude there is a unique ( ˆmρ, ˆSρ) ∈ RW +solving (EW ), with 2 +3Id ⪯ ˆSρ ⪯ 2Id. Since W is an affine transformation of V , it follows by Lemma 3.1 that +there exists a unique ( ˆmπ, ˆSπ) ∈ RV solving (EV ), with ˆSπ = H−1/2 +V +ˆSρH−1/2 +V +. The inequality (2.7) for π +can be deduced from the corresponding inequality (3.15) for ˆSρ. +Proof of Theorem 1. First note that +∥ ˆmπ − mπ∥ ≤ ∥ ˆS1/2 +π +∥∥ ˆS−1/2 +π +( ˆmπ − mπ)∥ ≲ +1 +√nα2 +∥ ˆS−1/2 +π +( ˆmπ − mπ)∥, +and +∥ ˆSπ − Sπ∥ = ∥ ��S1/2 +π +( ˆS−1/2 +π +Sπ ˆS−1/2 +π +− Id) ˆS1/2 +π +∥ +≲ +1 +nα2 +∥ ˆS−1/2 +π +Sπ ˆS−1/2 +π +− Id∥, +(3.17) +using the bound on ˆSπ from Lemma 1. Next note that by Lemma 3.1 (affine invariance) we have +∥ ˆS−1/2 +π +( ˆmπ − mπ)∥ = ∥ ˆS−1/2 +ρ +( ˆmρ − mρ)∥, +∥ ˆS−1/2 +π +Sπ ˆS−1/2 +π +− Id∥ = ∥ ˆS−1/2 +ρ +Sρ ˆS−1/2 +ρ +− Id∥. +(3.18) +Apply Theorem 1-W to conclude, recalling that N = nr and C2,3 = 1/r so that d3/N = d3/nr = C2,3d3/n. +11 + +3.4 +Overview of Theorem 1-W proof +For brevity let m = ˆmρ, S = ˆSρ, and σ = S1/2. We continue to denote the mean and covariance of ρ by mρ +and Sρ, respectively. Let ¯W(x) = W(m + σx) and note that the optimality equations (EW ) can be written +as +E [∇ ¯W(Z)] = 0, +E [∇2 ¯W(Z)] = Id. +(3.19) +The proof of Theorem 1-W is based on several key observations. +1) The optimality conditions (3.19) imply that the Hermite series expansion of ¯W is given by ¯W(x) = +const. 
+ 1 +2∥x∥2 + r3(x), where +r3(x) = +� +k≥3 +1 +k!⟨ck( ¯W), Hk(x)⟩. +(3.20) +2) The assumptions on W imply that r3 ∼ N −1/2. +3) We can represent the quantities of interest from Theorem 1-W as expectations with respect to ¯X ∼ +¯ρ ∝ e− ¯ +W : +∥σ−1(mρ − m)∥ = sup +∥u∥=1 +E [f1,u( ¯X)], +∥σ−1Sρσ−1 − Id∥ ≤ sup +∥u∥=1 +E [f2,u( ¯X)] + ∥σ−1(mρ − m)∥2, +(3.21) +where +f1,u(x) = uT x, +f2,u(x) = (uT x)2 − 1. +4) We have +E [f( ¯X)] = E [f(Z)e−r3(Z)] +E [e−r3(Z)] += E [f(Z)(1 − r3(Z) + r3(Z)2/2 + . . . )] +E [e−r3(Z)] +(3.22) +5) We have E [f(Z)] = 0 and E [f(Z)r3(Z)] = 0 for f = f1,u, f2,u, because the remainder r3 is orthogonal +to linear and quadratic f with respect to the Gaussian measure. +Therefore, the leading order term in E [f( ¯X)] is 1 +2E [f(Z)r3(Z)2] ∼ N −1 for both f = f1,u and f = f2,u, +and hence by (3.21) the quantities of interest are no larger than N −1. This is the essence of the proof when +W ∈ C3. +Now that we have given this overview, let us go into a few more details about the above points, and +consider the case W ∈ C4. +1) We can write W(x) = const. + 1 +2∥x∥2 + r3(x), where r3 is the third order Hermite series +remainder. The Hermite series expansion of ¯W is defined as +¯W(x) = +∞ +� +k=0 +1 +k!⟨ck( ¯W), Hk(x)⟩, +ck( ¯W) := E [ ¯W(Z)Hk(Z)]. +(3.23) +Here, the ck and Hk(x) are tensors in (Rd)⊗k. Specifically, Hk(x) is the tensor of all order k Hermite +polynomials, enumerated as H(α) +k +, α ∈ [d]k with some entries repeating. For k = 0, 1, 2, the Hermite tensors +are given by +H0(x) = 1, +H1(x) = x, +H2(x) = xxT − Id. +See Appendix A.1 and B.1 for further details on Hermite series. Distinct Hermite polynomials are orthogonal +to each other with respect to the Gaussian weight. In particular, if f is an order k polynomial and ℓ > k +then +E [f(Z)H(α) +ℓ +(Z)] = 0, +∀α ∈ [d]ℓ. +12 + +In general, the Hk are given by +Hk(x)e−∥x∥2/2 = (−1)k∇ke−∥x∥2/2. +(3.24) +This representation of the Hermite polynomials leads to the following, “Gaussian integration by parts” +identity for a k-times differentiable function f: +E [f(Z)Hk(Z)] = E [∇kf(Z)]. +(3.25) +This is a generalization of Stein’s identity, E [Zif(Z)] = E [∂xif(Z)]. +Since ¯W is at least three times +differentiable, we can use Gaussian integration by parts to write c1, c2 as +c1( ¯W) := E [ ¯W(Z)H1(Z)] = E [∇ ¯W(Z)] = 0, +c2( ¯W) := E [ ¯W(Z)H2(Z)] = E [∇2 ¯W(Z)] = Id, +(3.26) +where the last equality in each line comes from the optimality conditions (3.19). Therefore the Hermite +series expansion of ¯W takes the form +¯W(x) = E [ ¯W(Z)] + ⟨0, H1(x)⟩ + 1 +2⟨Id, H2(x)⟩ + r3(x) += E [ ¯W(Z)] + 0 + 1 +2(∥x∥2 − d) + r3(x) += const. + 1 +2∥x∥2 + r3(x), +(3.27) +where r3 is the third order remainder. +2) The assumptions imply r3 ∼ 1/ +√ +N. Indeed, since W is C3 and k ≥ 3, we can apply “partial” +Gaussian integration by parts to express ck as +ck = E [Hk(Z) ¯W(Z)] = E [Hk−3(Z) ⊗ ∇3 ¯W(Z)]. +But by Assumptions W1 and W2 we have that ∥∇3W∥ ∼ 1/ +√ +N, and hence ∥∇3 ¯W∥ ≤ ∥σ∥3∥∇W∥ ∼ 1/ +√ +N, +since σ ⪯ +√ +2Id by Lemma 1-W. Therefore each ck ∼ 1/ +√ +N for k ≥ 3, so r3 ∼ 1/ +√ +N as well. +Now suppose W ∈ C4, and write r3 as r3(x) = +1 +3!⟨c3, H3(x)⟩ + r4(x). We know ⟨c3, H3(x)⟩ ∼ 1/ +√ +N, +and by an analogous argument as for r3, we can show that each of the coefficients ck, k ≥ 4 has order 1/N. +Hence r4 ∼ 1/N, so that r3 = O(N −1/2) + O(N −1) and r2 +3 = O(N −1) + O(N −3/2) + O(N −2). 
We can then +show that the order N −1 term in r2 +3 is orthogonal to f1,u with respect to the Gaussian weight, and hence +E [f1,u(Z)r3(Z)2] is order N −3/2. This is why the mean error is smaller when W ∈ C4. +We will prove 3) in the next section, and 4) follows directly from the representation (3.27). 5) follows +from the definition of r3 as a sum of third and higher order Hermite polynomials. This discussion explains +how the N −1 and N −3/2 scalings arise in Theorem 1-W. Obtaining the correct scaling with dimension d +requires a bit more work. The scaling with d of the overall error bound depends, among other things, on the +scaling with d of expectations of the form E [rk(Z)p], k = 3, 4 (see Lemma 4.1 below for further discussion +of the bound’s d-dependence). +We show that E [rk(Z)p] ∼ E [ +� +∥Z∥k�p] ∼ dpk/2 using the following explicit formula for rk. This result +is known in one dimension; see Section 4.15 in [Leb72]. However, we could not find the multidimensional +version in the literature, so we have proved it here. +Proposition 3.1. Assume ¯W ∈ Ck for k = 3 or k = 4. Let ¯W(x) = �∞ +j=0 +1 +j!⟨cj( ¯W), Hj(x)⟩ be the Hermite +series expansion of ¯W, and define +rk(x) = ¯W(x) − +k−1 +� +j=0 +1 +j!⟨cj( ¯W), Hj(x)⟩. +(3.28) +Then +rk(x) = +� 1 +0 +(1 − t)k−1 +(k − 1)! E +�� +∇k ¯W ((1 − t)Z + tx) , Hk(x) − Z ⊗ Hk−1(x) +�� +dt. +(3.29) +13 + +Note that (3.29) is analogous to the integral form of the remainder of a Taylor series. We state and prove +this proposition in greater generality in Appendix B.1 below. Carefully applying Cauchy-Schwarz to the +inner product in this formula (using the operator norm rather than the Frobenius norm, which would incur +additional dimension dependence), allows us to bound E [|rk(Z)|p] by a product of expectations. One ex- +pectation involves ∥∇k ¯W∥p ∼ N p(1−k/2), and the other expectation, stemming from the Hk and Z ⊗ Hk−1 +on the right-hand side of the inner product, involves a (pk)th degree polynomial in ∥Z∥. This explains the +dpk/2 scaling of E [|rk(Z)|p]. +4 +Proof of Theorem 1-W +Let ¯X ∼ ¯ρ ∝ e− ¯ +W , where ¯W(x) = W(m + σx), and σ = ˆS1/2 +ρ +, m = ˆmρ are from Lemma 1-W. Also, let +r3(x) = +� +k≥3 +1 +k!⟨ck( ¯W), Hk(x)⟩ +be the remainder of the Hermite expansion of ¯W. +Lemma 4.1 (Preliminary Bound). If v ∈ C3, then we have +∥σ−1(m − mρ)∥ +≲ +� +E r3(Z)4 + +� +E r3(Z)6 + +� +E r3( ¯X)6 sup +∥u∥=1 +� +E (uT ¯X)2 +(4.1) +and +∥σ−1Sρσ−1 − Id∥ ≲∥σ−1(m − mρ)∥2 + +� +E r3(Z)4 + +� +E r3(Z)6 ++ +� +E r3( ¯X)6 sup +∥u∥=1 +� +E ((uT ¯X)2 − 1)2 +(4.2) +If v ∈ C4, then +∥σ−1(m − mρ)∥ ≲ +� +E r3(Z)6 + +� +E r3( ¯X)6 sup +∥u∥=1 +� +E (uT ¯X)2 ++ sup +∥u∥=1 +��� +u ⊗ c3 ⊗ c4, E [Z ⊗ H3(Z) ⊗ H4(Z)] +��� + +� +E r4(Z)4. +(4.3) +Remark 4.1. From the discussion in the previous section, we know c3, r3 ∼ N −1/2 and c4, r4 ∼ N −1. +Therefore, we can easily read off the N-dependence of the overall error bound from (4.1) and (4.3). The +d-dependence of the terms of the form +� +E [rk(Z)p] can be computed from our explicit formula for rk, as +discussed above. Furthermore, simple Laplace-type integral bounds in Section 4.3 show that E [f( ¯X)] ≲ +E [f(Z)], so the d-dependence of the ¯X expectations is the same as that of the Z expectations. Finally, the +d-dependence of ⟨u ⊗ c3 ⊗ c4, E [Z ⊗ H3 ⊗ H4]⟩ can be estimated using the structure of the Hermite tensors; +in particular, we show at most O(d4) of the d8 entries of E [Z ⊗ H3 ⊗ H4] are nonzero. +Proof. First, we prove point 3) from the above proof overview. Recall that f1,u(x) = uT x and f2,u(x) = +(uT x)2 − 1. 
Note that we can write ¯X = σ−1(X − m), where X ∼ ρ ∝ e−W . Therefore, E ¯X = σ−1(mρ − m) +and hence +∥σ−1(mρ − m)∥ = ∥E ¯X∥ = sup +∥u∥=1 +E [uT ¯X] = sup +∥u∥=1 +E [f1,u( ¯X)]. +Next, note that Cov( ¯X) = σ−1Sρσ−1, and hence +∥σ−1Sρσ−1 − Id∥ = ∥Cov( ¯X) − Id∥ ≤ ∥E [ ¯X ¯XT − Id]∥ + ∥E ¯XE ¯XT ∥ +≤ sup +∥u∥=1 +E [uT ( ¯X ¯XT − Id)u] + ∥E ¯X∥2 += sup +∥u∥=1 +E [(uT ¯X)2 − 1] + ∥E ¯X∥2 += sup +∥u∥=1 +E [f2,u( ¯X)] + ∥σ−1(mρ − m)∥2. +(4.4) +14 + +Now, recalling that ¯W(x) = const. + ∥x∥2/2 + r3(x), note that +E [f( ¯X)] = E [f(Z)e−r3(Z)] +E [e−r3(Z)] +. +(4.5) +Write +e−r3(Z) = 1 − r3(Z) + 1 +2r3(Z)2 − 1 +3!r3(Z)3eξ(Z), +where ξ(Z) lies on the interval between 0 and −r3(Z). The key insight is that for f = f1,u and f = f2,u (at +most second order polynomials), f is orthogonal to both 1 and r3, since r3 is a series of Hermite polynomials +of order greater than 2. Therefore, +E +� +f(Z)e−r3(Z)� += E +� +f(Z) +� +1 − r3(Z) + 1 +2r3(Z)2 − 1 +3!r3(Z)3eξ(Z) +�� += E +� +f(Z) +�1 +2r3(Z)2 − 1 +3!r3(Z)3eξ(Z) +�� +. +(4.6) +Combining (4.6) with (4.5), we get +E[f(Z)] = 1 +2 +E +� +f(Z)r3(Z)2� +E +� +e−r3(Z)� +− 1 +3! +E [f(Z)r3(Z)3eξ(Z)] +E +� +e−r3(Z)� +=: I1 + I2. +(4.7) +Using Jensen’s inequality and that E [r3(Z)] = 0, we have E [e−r3(Z)] ≥ 1. Hence, +|I1| ≲ +��E +� +f(Z)r3(Z)2��� . +(4.8) +To bound I2, note that eξ ≤ 1 + e−r3, since ξ ≤ 0 if r3 ≥ 0 and ξ ≤ −r3 if r3 ≤ 0. Hence, +|I2| ≲ +E +� +|f(Z)| |r3(Z)|3� +E +� +e��r3(Z)� ++ +E +� +|f(Z)| |r3(Z)|3 e−r3(Z)� +E +� +e−r3(Z)� +≤ E +� +|f(Z)| |r3(Z)|3� ++ +E +� +|f(Z)| |r3(Z)|3 e−r3(Z)� +E +� +e−r3(Z)� +, +(4.9) +again using that E [e−r3(Z)] ≥ 1. Furthermore, using the conversion between Z and ¯X expectations (4.5), +observe that +E +� +|f(Z)| |r3(Z)|3 e−r3(Z)� +E +� +e−r3(Z)� += E +���f( ¯X) +�� ��r3( ¯X) +��3� +. +Incorporating this into the above bound on |I2| we get +|I2| ≤ E +� +|f(Z)| |r3(Z)|3� ++ E +���f( ¯X) +�� ��r3( ¯X) +��3� +. +(4.10) +Applying Cauchy-Schwarz to (4.10) we get +|I2| ≤ +� +E [r3(Z)6] +� +E f(Z)2 + +� +E r3( ¯X)6 +� +E f( ¯X)2. +Adding this inequality to (4.8), we get +��E [f( ¯X)] +�� ≲ +��E +� +f(Z)r3(Z)2��� + +� +E [r3(Z)6] +� +E f(Z)2 ++ +� +E +� +r3( ¯X)6�� +E +� +f( ¯X)2� +. +(4.11) +Taking f(x) = uT x and f(x) = (uT x)2−1 and applying Cauchy-Schwarz to the first term in (4.11) gives (4.1) +and (4.2), respectively. If v ∈ C4 and f(x) = uT x, we can refine the bound (4.11), specifically the first term +E [(uT Z)r3(Z)2]. Write +r3(x) = 1 +3! ⟨c3, H3(x)⟩ + r4(x). +15 + +Then +r3(x)2 = +1 +(3!)2 +� +c⊗2 +3 , H3(x)⊗2� ++ 2 +3!r4(x) ⟨c3, H3(x)⟩ + r4(x)2. +(4.12) +To get the first summand on the right we use the fact that ⟨T, S⟩2 = ⟨T ⊗2, S⊗2⟩. Substituting x = Z +in (4.12), multiplying by the scalar uT Z, and taking the expectation of the result gives +E +�� +uT Z +� +r3(Z)2� += +1 +(3!)2 +� +c⊗2 +3 , E +�� +uT Z +� +H3(Z)⊗2�� ++ 2 +3!E +� +(uT Z)r4(Z) ⟨c3, H3(Z)⟩ +� ++ E [(uT Z)r4(Z)2] += 2 +3!E +� +(uT Z)r4(Z) ⟨c3, H3(Z)⟩ +� ++ E +�� +uT Z +� +r4(Z)2� +. +(4.13) +For the term on the right-hand side of the first line of (4.13), note that we have chosen to move the scalar +uT Z onto the second tensor H⊗2 +3 +in the tensor dot product, and we take the Z expectation only after doing +so. This term drops out in the second line because each entry of (uT Z)H3(Z)⊗2 is a polynomial containing +only odd powers of Z. To see why, see the primer on Hermite polynomials in Section A.1. +Next, let g(x) = (uT x) ⟨c3, H3(x)⟩, so that +E +� +(uT Z)r4(Z) ⟨c3, H3(Z)⟩ +� += E [g(Z)r4(Z)]. 
+Since E [g(Z)2] < ∞ and r4 is the tail of a convergent Hermite series, we have +E [g(Z)r4(Z)] = E +� ∞ +� +k=4 +g(Z) 1 +k! ⟨ck, Hk(Z)⟩ +� += +∞ +� +k=4 +1 +k!E [g(Z) ⟨ck, Hk(Z)⟩]. +Furthermore, g is a fourth order polynomial, and is therefore orthogonal to all Hermite polynomials of order +greater than four. As a result, the above sum simplifies to +E [g(Z)r4(Z)] = 1 +4!E [g(Z) ⟨c4, H4(Z)⟩] += 1 +4!E [(uT Z) ⟨c3, H3(Z)⟩ ⟨c4, H4(Z)⟩] += 1 +4! ⟨u ⊗ c3 ⊗ c4, E [Z ⊗ H3(Z) ⊗ H4(Z)]⟩ . +(4.14) +Combining these calculations and applying Cauchy-Schwarz to the term E [(uT Z)r4(Z)2] gives the prelimi- +nary bound (4.3). +4.1 +Combining the bounds +In the following sections, we bound each of the terms appearing in (4.1), (4.2), (4.3). For convenience, we +compile these bounds below, letting τ = d3/N. +Lemma 4.2 gives +|⟨u ⊗ c3 ⊗ c4, E [Z ⊗ H3 ⊗ H4]⟩| ≲ d +7 +2 +N +3 +2 ≤ τ +3 +2 +(4.15) +Corollary 4.1 applied with Y = Z gives +� +E [r3(Z)4] ≲ d3 +N = τ +� +E [r3(Z)6] ≲ +�d3 +N +� 3 +2 += τ +3 +2 +� +E [r4(Z)4] ≲ d4 +N 2 ≤ τ 2. +(4.16) +16 + +Corollary 4.1 applied with Y = ¯X, together with Corollary 4.2, give +� +E [r3( ¯X)6] ≲ e +√ +d3/N +�d3 +N +� 3 +2 += e +√ττ +3 +2 . +Finally, Corollary 4.3 gives +sup +∥u∥=1 +� +E [(uT ¯X)2] ≲ e +√ +d3/N = e +√τ, +sup +∥u∥=1 +� +E [((uT ¯X)2 − 1)2] ≲ e +√ +d3/N = e +√τ. +(4.17) +Substituting all of these bounds into (4.1), (4.2), and (4.3) finishes the proof of Theorem 1-W. +4.2 +Hermite-related Bounds +In this section we bound ⟨u ⊗ c3 ⊗ c4, E [Z ⊗ H3(Z) ⊗ H4(Z)]⟩ as well as E [rk(Z)p] for k = 3, 4, p = 4, 6 and +E [r3( ¯X)6]. We take all of the assumptions to be true, either in the W ∈ C3 case or W ∈ C4. +Lemma 4.2. If v ∈ C4 then +|⟨u ⊗ c3 ⊗ c4, E [Z ⊗ H3(Z) ⊗ H4(Z)]⟩| ≲ d7/2N −3/2. +(4.18) +Proof. We use Lemma B.3 in Appendix B.1, which shows that +⟨u ⊗ c3 ⊗ c4, E [Z ⊗ H3(Z) ⊗ H4(Z)]⟩ = ⟨u ⊗ c3, c4⟩. +(4.19) +Writing c3 = �d +i,j,k=1 cijk +3 ei ⊗ ej ⊗ ek and noting that |cijk +3 | ≤ ∥c3∥, we get +|⟨u ⊗ c3, c4⟩| ≤ +d +� +i,j,k=1 +|cijk +3 | |⟨u ⊗ ei ⊗ ej ⊗ ek, c4⟩| +≤ d3∥c3∥∥c4∥ +(4.20) +As explained in Section 3.4, since v ∈ C4 we have ck = ck( ¯W) = E [∇k ¯W(Z)], k = 3, 4. Hence +∥c3∥ ≤ E ∥∇3 ¯W(Z)∥ ≤ ∥σ∥3E ∥∇3W(m + σZ)∥ ≲ N −1/2. +(4.21) +To get the last inequality, we used (3.16) to bound ∥m∥, ∥σ∥ by a constant, and we applied Lemma 3.4 with +Y = m + σZ, p = 1, k = 3. Note that E [∥(m + σZ)/ +√ +d∥s] ≲ 1 for any s ≥ 0, so the bound in the lemma +reduces to N −1/2. The lemma applies since E [∥m + σZ∥s] ≲ +√ +d +s for all s �� 0. Analogously, +∥c4∥ ≲ N −1. +(4.22) +Substituting the bounds (4.21), (4.22) into (4.20) and using the equality (4.19) gives the bound in the +statement of the lemma. +We now compute bounds on expectations of the form E [|rk(Z)|p], k = 3, 4, and on E [r3( ¯X)6]. Using the +exact formula (3.29) for rk, we obtain the following bound: +Corollary 4.1 (Corollary (B.1) in Appendix B.2). Let k = 3 if W ∈ C3 and k = 4 if W ∈ C4. Let Y ∈ Rd +be a random variable such that E [∥Y ∥s] < ∞ for all 0 ≤ s ≤ 2pk + 2pq, where q is from Assumption W2. +Then +E [|rk(Y )|p] ≲ +� +dk +N k−2 +� p +2 �� +E ∥Y/ +√ +d∥2kp + +� +E ∥Y/ +√ +d∥2(k−1)p + 1 +� +× +� +1 + +� +E +� +∥Y/ +√ +d∥2pq�� +(4.23) +17 + +Taking Y = Z, the expectations in (4.23) are all bounded by constants, so we immediately obtain +E [|rk(Z)|p] ≲ +� +dk +N k−2 +� p +2 +. +The corollary also applies to Y = ¯X, k = 3, p = 6. This is because, as we show in Corollary 4.2 in the +next section, E [∥X/ +√ +d∥s] ≲ exp(2 +� +d3/N) < ∞ for all s ≤ 36 + 12q = 2pk + 2pq. 
Since ¯X = σ−1(X − m), +using (3.16) we conclude that also E [∥ ¯X/ +√ +d∥s] ≲ exp(2 +� +d3/N) < ∞ for all s ≤ 36 + 12q. Hence (4.23) +gives +E [r3( ¯X)6] ≲ exp +� +2 +� +d3/N +� �d3 +N +�3 +. +4.3 +Bounds on X Moments +In this section we bound expectations of the form E [(aT X)p], ∥a∥ ≲ 1, and E [∥X∥p], both of which take +the form +E [f(X)] = +� +Rd f(x)e−W (x)dx +� +Rd e−W (x)dx +, +0 ≤ f(x) ≲ ∥x∥p. +(4.24) +To evaluate this integral, we break up the numerator into inner, middle, and outer regions +I = +� +∥x∥ ≤ 2 +√ +2 +√ +d +� +, +M = +� +2 +√ +2 +√ +d ≤ ∥x∥ ≤ +√ +N +� +, +O = +� +∥x∥ ≥ +√ +N +� +. +We then bound E [f(X)] as +E [f(X)] = E +� +f(X)1I(X) +� ++ E +� +f(X)1M(X) +� ++ E +� +f(X)1O(X) +� +≲ +1 +� +I e−W (x)dx +�� +I +f(x)e−W (x)dx + +� +M +∥x∥pe−W (x)dx + +� +O +∥x∥pe−W (x)dx +� +. +(4.25) +The inner region I is chosen so that (1) for x ∈ I, we can approximate e−W (x) by e−∥x∥2/2 and (2) the +standard Gaussian density places O(1) mass on I. This will allow us to show that +� +I f(x)e−W (x)dx +� +I e−W (x)dx +≲ E [f(Z)]. +The middle region M is chosen so that (1) e−W (x) is bounded by another, greater variance Gaussian +density, namely e−∥x∥2/4, and (2) this density places exponentially little mass on M. +The bound on +� +M ∥x∥pe−W (x)dx/ +� +I e−W (x)dx therefore involves a ratio of Gaussian normalization constants that grows +exponentially in d, but this growth is neutralized by the exponentially decaying Gaussian tail probability. +Finally, in O we use Assumption W3 to bound the integral +� +O ∥x∥pe−W (x) by a number decaying expo- +nentially in n times the tail integral of a function ∥x∥−r. The following four short lemmas carry out this +program. We let τ = d3/N in the statements and proofs below. +Lemma 4.3. We have +� +∥x∥≤2 +√ +2 +√ +d +e−W (x)dx ≳ e−2√τ√ +2π +d, +where τ = d3/N. +Proof. By Lemma 3.3 with C = 2 +√ +2, we have +e−W (x) ≥ e−∥x∥2/2e− 4 +√ +2 +3 +d +√ +d/ +√ +N ≥ e−∥x∥2/2e−2√τ, +∥x∥ ≤ 2 +√ +2 +√ +d. +Therefore, +� +∥x∥≤2 +√ +2 +√ +d +e−W (x)dx ≥ e−2√τ +� +∥x∥≤2 +√ +2 +√ +d +e− 1 +2 ∥x∥2dx += e−2√τ√ +2π +dP(∥Z∥ ≤ 2 +√ +2 +√ +d) ≳ e−2√τ√ +2π +d, +(4.26) +as desired. +18 + +Lemma 4.4. Let f ≥ 0. Then E [f(X)1I(X)] ≲ e2√τE [f(Z)]. In particular, E [∥X∥p1I(X)] ≲ e2√τdp/2. +Proof. Using Lemma 4.3 and Lemma 3.3, +E [f(X){X ∈ I}] ≤ +� +I f(x)e−W (x)dx +� +I e−W (x)dx +≲ e2√τ +√ +2π +d +� +I +f(x)e−W (x)dx +≲ e2√τ +√ +2π +d +� +I +f(x)e− 1 +2 ∥x∥2dx ≤ e2√τE [f(Z)], +(4.27) +as desired. +Lemma 4.5. We have E [∥X∥p1M(X)] ≲ e2√τ for all p ≥ 0. +Proof. Using Lemma 4.3 and Lemma 3.3, +E [∥X∥p1M(X)] ≤ +� +M ∥x∥pe−W (x)dx +� +I e−W (x)dx +≲ e2√τ +√ +2π +d +� +M +∥x∥pe−∥x∥2/4dx +≤ e2√τ +√ +2π +d +� +∥x∥≥2 +√ +2 +√ +d +∥x∥pe−∥x∥2/4dx +(4.28) +We now change variables as x = +√ +2y, so that ∥x∥pdx is bounded above by +√ +2 +d+p∥y∥pdy. Hence +E [∥X∥p1M(X)] ≲ +√ +2 +d+p� e2√τ +√ +2π +d +� +∥y∥≥2 +√ +d +∥y∥pe− 1 +2 ∥y∥2dz +� +≲ e2√τ√ +2 +d+pE +� +∥Z∥p{∥Z∥ ≥ 2 +√ +d} +� +≲ e2√τ �√ +2 +d+pdp/2e−d/4� +≲ e2√τ. +(4.29) +Lemma 4.6. For all p ≤ 12q + 36 we have E [∥X∥p1O(X)] ≲ e2√τ. +Proof. Using Lemma 4.3 and Assumption W3, we get +E [∥X∥p1O(X)] ≤ +� +∥x∥≥ +√ +N ∥x∥pe−W (x)dx +� +I e−W (x)dx +≲ e2√τ +� +∥x∥≥ +√ +N +∥x∥p−d−12q−36dx +≲ e2√τ +� ∞ +√ +N +rp−12q−36−1dr +≲ e2√τ√ +N +p−12q−36 ≲ e2√τ. +(4.30) +In the third line, we left out the surface area of the (d − 1)-sphere, which is an at most O(1) factor. +The above three lemmas immediately imply +Corollary 4.2. For all p ≤ 12q + 36 we have E [∥X∥p] ≲ dp/2e2√τ. +We also have +19 + +Corollary 4.3. 
Let ¯X = σ−1(X − m), where ∥σ−1∥, ∥m∥ ≲ 1. +If ∥u∥ = 1 then E [(uT ¯X)2] ≲ e2√τ, +E [((uT ¯X)2 − 1)2] ≲ e2√τ. +Proof. We have E [((uT ¯X)2 − 1)2] ≲ E [(uT ¯X)4] + 1, so it suffices to show E [(uT ¯X)k] ≲ e2√τ for k = 2, 4. +Since ¯X = σ−1(X − m), we have +E [(uT ¯X)k] ≲ E [(uT σ−1X)k] + ∥σ−1m∥k. +By the assumptions on σ and m, the term ∥σ−1m∥k is bounded by a constant, so it remains to show +E [(aT X)k] ≲ e2√τ, where a = σ−1u. Using Lemmas 4.4, 4.5, and 4.6 and noting that ∥a∥ ≲ 1, we get +E [(aT X)k] ≲ E [(aT X)k1I(X)] + E [∥X∥k1M(X)] + E [∥X∥k1O(X)] +≲ e2√τ(E [(aT Z)k] + 1) ≲ e2√τ, +(4.31) +as desired. +5 +Proof of Lemma 1-W +In this section, we use m ∈ Rd, σ ∈ Rd×d to denote generic arguments. Consider the equations (EW ), which +we rewrite in the following form: +E [∇W(m + σZ)] = 0, +(5.1) +E [∇2W(m + σZ)] = (σσT )−1. +(5.2) +Note that these equations are well-defined for all (σ, m) ∈ Rd×d×Rd, although we can only expect uniqueness +of solutions in a subset of Sd +++ × Rd; indeed, (5.1) and (5.2) only depend on σ through S = σσT , which has +multiple solutions σ. We now restate Lemma 1-W using the following notation: +Br(0, 0) = {(σ, m) ∈ Rd×d × Rd : ∥σ∥2 + ∥m∥2 ≤ r2}, +Br = {σ ∈ Rd×d : ∥σ∥ ≤ r}, +Sc1,c2 = {σ ∈ Sd ++ : c1Id ⪯ σ ⪯ c2Id}. +(5.3) +In particular, note that S0,r ⊂ Br. +Lemma 5.1. Let W satisfy Assumptions W0, W1, W2, and W3, and assume +√ +N/d ≥ 40 +√ +2( +√ +3+ +� +(2q)!). +Let r = 2 +√ +2, c1 = +� +2/3, and c2 = +√ +2. There exists a unique pair (σ, m) ∈ Br(0, 0) ∩ S0,r/2 × Rd satisfying +(5.1) and (5.2), and this pair is such that σ ∈ Sc1,c2. +Let us sketch the proof of the lemma. Let f : Rd×d × Rd → Rd be given by f(σ, m) = E [∇W(m + +σZ)]. Note that f(0, 0) = 0, so by the Implicit Function Theorem, there exists a map m(σ) defined in a +neighborhood of σ = 0 such that f(σ, m(σ)) = 0. In Lemma 5.3, we make this statement quantitative, +showing that for r = 2 +√ +2 we have the following result: for every σ ∈ Br/2 there is a unique m = m(σ) such +that (σ, m) ∈ Br(0, 0) and f(σ, m) = 0. Since S0,r/2 ⊂ Br/2, we have in particular that any solution (σ, m) +to (5.1) in the region Br(0, 0) ∩ S0,r/2 × Rd is of the form (σ, m(σ)). Thus it remains to prove there exists +a unique solution σ ∈ S0,r/2 to the equation E [∇2W(m(σ) + σZ)] = (σσT )−1. To do so, we rewrite this +equation as F(σ) = σ, where +F(σ) = E [∇2W(m(σ) + σZ)]−1/2. +We show in Lemma 5.4 that F is well-defined on S0,r/2, a contraction, and satisfies F(S0,r/2) ⊂ Sc1,c2 ⊂ +S0,r/2. Thus by the Contraction Mapping Theorem, there is a unique σ ∈ S0,r/2 satisfying F(σ) = σ. But +since F maps S0,r/2 to Sc1,c2, the fixed point σ necessarily lies in Sc1,c2. This finishes the proof. +Using a quantitative statement of the Inverse Function Theorem given in [Lan93], the following lemma +determines the size of the neighborhood in which the map m(σ) is defined. +20 + +Lemma 5.2. Let f = (f1, . . . , fd) : Rd×d ×Rd → Rd be C3, where Rd×d is the set of d×d matrices, endowed +with the standard matrix operator norm. Suppose f(0, 0) = 0, ∇σf(0, 0) = 0, ∇mf(σ, m) is symmetric for +all m, σ, and ∇mf(0, 0) = Id. Let r > 0 be such that +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, m∗)∥op ≤ 1 +4. +(5.4) +Then for each σ ∈ Rd×d such that ∥σ∥ ≤ r/2 there exists a unique m = m(σ) ∈ Rd such that f(σ, m(σ)) = 0 +and (σ, m(σ)) ∈ Br(0, 0). Furthermore, the map σ �→ m(σ) is C2, with +1 +2Id ⪯ ∇mf(σ, m) +�� +m=m(σ) ⪯ 3 +2Id, +∥∇σm(σ)∥op ≤ 1. +(5.5) +See Appendix E for careful definitions of the norms appearing above, as well as the proof of the lemma. 
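The argument above is constructive, and it also suggests a simple heuristic numerical scheme for the optimality equations (5.1)-(5.2): for the current σ, drive E[∇W(m + σZ)] to zero in m, then update σ ← F(σ) = E[∇²W(m(σ) + σZ)]^{−1/2}. The sketch below replaces the Gaussian expectations by Monte Carlo averages over a fixed sample of Z; it is an illustration only, under the assumption that the averaged Hessians stay positive definite, and it is not used anywhere in the proofs.

```python
import numpy as np

def solve_optimality_equations(grad_W, hess_W, d, n_mc=2000, outer=50, inner=20,
                               lr=0.5, seed=0):
    """Minimal sketch of the fixed-point scheme behind Lemma 5.1.

    Outer loop: sigma <- F(sigma) = E[hess_W(m(sigma) + sigma Z)]^{-1/2}.
    Inner loop: damped Newton-type steps on E[grad_W(m + sigma Z)] = 0.
    Gaussian expectations are replaced by Monte Carlo averages over fixed Z.
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_mc, d))     # common random numbers across iterations
    m = np.zeros(d)
    sigma = np.eye(d)
    for _ in range(outer):
        for _ in range(inner):             # approximately solve E[grad_W(m + sigma Z)] = 0 in m
            X = m + Z @ sigma.T
            g = np.mean([grad_W(x) for x in X], axis=0)
            H = np.mean([hess_W(x) for x in X], axis=0)
            m -= lr * np.linalg.solve(H, g)
        X = m + Z @ sigma.T
        Hbar = np.mean([hess_W(x) for x in X], axis=0)
        w, U = np.linalg.eigh(Hbar)        # symmetric inverse square root: F(sigma)
        sigma = U @ np.diag(w ** -0.5) @ U.T
    return m, sigma @ sigma.T              # approximations of (m_hat, S_hat)
```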
+Lemma 5.3. Let f : Rd×d × Rd → Rd be given by f(σ, m) = E [∇W(σZ + m)]. Then all the conditions of +Lemma 5.2 are satisfied; in particular, (5.4) is satisfied with r = 2 +√ +2. Thus the conclusions of Lemma 5.2 +hold with this choice of r. +Lemma 5.4. Let r = 2 +√ +2 and σ ∈ S0,r/2 �→ m(σ) ∈ Rd be the restriction of the map furnished by +Lemmas 5.2 and 5.3 to symmetric nonnegative matrices. Then the function F given by +F(σ) = E [∇2W(m(σ) + σZ)]−1/2 +is well-defined and a strict contraction on S0,r/2. Moreover, +F(S0,r/2) ⊆ Sc1,c2 ⊆ S0,r/2, +where c1 = +� +2/3, c2 = +√ +2 = r/2. +This lemma concludes the proof of Lemma 5.1 since by the Contraction Mapping Theorem there is +a unique fixed point σ ∈ S0,r/2 of F, and F(σ) = σ is simply a reformulation of the second optimality +equation (5.2). We know σ must lie in Sc1,c2 since F maps S0,r/2 to this set. See Appendix E for the proofs +of the above lemmas. +Acknowledgments +A. Katsevich is supported by NSF grant DMS-2202963. P. Rigollet is supported by NSF grants IIS-1838071, +DMS-2022448, and CCF-2106377. +A +Hermite Series Remainder +A.1 +Brief Primer +Here is a brief primer on Hermite polynomials, polynomials, and series expansions. We let Hk : R → R, +k = 0, 1, 2, . . . be the kth order probabilist’s Hermite polynomial. We have H0(x) = 1, H1(x) = x, H2(x) = +x2 − 1, H3(x) = x3 − 3x. For all k ≥ 1, we can generate Hk+1 from the recurrence relation +Hk+1(x) = xHk(x) − kHk−1(x), +k ≥ 1. +(A.1) +In particular, Hk(x) is an order k polynomial given by a sum of monomials of the same parity as k. The Hk +are orthogonal with respect to the Gaussian measure; namely, we have E [Hk(Z)Hj(Z)] = k!δjk. We also +note for future reference that +E [ZHk(Z)Hk+1(Z)] = E [(Hk+1(Z) + kHk−1(Z)) Hk+1(Z)] = (k + 1)!, +(A.2) +using the recurrence relation (A.1). +21 + +The Hermite polynomials are given by products of Hermite polynomials, and are indexed by γ ∈ +{0, 1, 2, . . . }d. Let γ = (γ1, . . . , γd), with γj ∈ {0, 1, 2, . . . }. Then +Hγ(x1, . . . , xd) = +d +� +j=1 +Hγj(xj), +which has order |γ| := �d +j=1 γj. Note that if |γ| = k then Hγ(x) is given by a sum of monomials of the +same parity as k. Indeed, each Hγj(xj) is a linear combination of xγj−2p +j +, p ≤ ⌊γj/2⌋. Thus Hγ(x) is a +linear combination of monomials of the form �d +j=1 xγj−2pj +j +, which has total order k − 2 � +j pj. Using the +independence of the entries of Z = (Z1, . . . , Zd), we have +E [Hγ(Z)Hγ′(Z)] = γ! +d +� +j=1 +δγj,γ′ +j, +where γ! := �d +j=1 γj!. The Hγ can also be defined explicitly as follows: +e−∥x∥2/2Hγ(x) = (−1)|γ|∂γ � +e−∥x∥2/2� +, +(A.3) +where ∂γf(x) = ∂γ1 +x1 . . . ∂γd +xdf(x). This leads to the useful Gaussian integration by parts identity, +E [f(Z)Hγ(Z)] = E [∂γf(Z)], +if f ∈ C|γ|(Rd). +The Hermite polynomials Hγ, γ ∈ {0, 1, . . . }d, form a complete orthogonal basis of the Hilbert space +of functions f : Rd → R with inner product ⟨f, g⟩ = E [f(Z)g(Z)]. In particular, if f : Rd → R satisfies +E [f(Z)2] < ∞, then f has the following Hermite expansion: +f(x) = +� +γ∈{0,1,... }d +1 +γ!cγ(f)Hγ(x), +cγ(f) := E [f(Z)Hγ(Z)]. +(A.4) +Let +rk(x) = f(x) − +� +|γ|≤k−1 +1 +γ!cγ(f)Hγ(x) +(A.5) +be the remainder of the Hermite series expansion of f after taking out the order ≤ k − 1 polynomials. We +can write rk as an integral of f against a kernel. Namely, define +K(x, y) = +� +|γ|≤k−1 +1 +γ!Hγ(x)Hγ(y). +(A.6) +Note that +E [f(Z)K(x, Z)] = +� +|γ|≤k−1 +1 +γ!cγ(f)Hγ(x) +is the truncated Hermite series expansion of f. 
Therefore, the remainder rk can be written as +rk(x) = f(x) − E [f(Z)K(x, Z)] = E [(f(x) − f(Z))K(x, Z)], +using that E [K(x, Z)] = 1. +B +Exact Expression for the Remainder +Lemma B.1. Let k ≥ 1 and rk, K, be as in (A.5), (A.6), respectively. Assume that f ∈ C1, and that +∥∇f(x)∥ ≲ ec∥x∥2 for some 0 ≤ c < 1/2. Then +rk(x) = E [(f(x) − f(Z))K(x, Z)] += +� 1 +0 +d +� +i=1 +� +|γ|=k−1 +1 +γ!E [∂if((1 − t)Z + tx) (Hγ+ei(x)Hγ(Z) − Hγ+ei(Z)Hγ(x))]dt. +(B.1) +22 + +The proof relies on the following identity: +Lemma B.2. For each i = 1, . . . , d, it holds that +K(x, y) = +1 +xi − yi +� +|γ|=k−1 +1 +γ! (Hγ+ei(x)Hγ(y) − Hγ+ei(y)Hγ(x)) . +(B.2) +The proof of this identity is given at the end of the section. +Proof of Lemma B.1. Write +f(x) − f(Z) = +� 1 +0 +(x − Z)T ∇f((1 − t)Z + tx)dt += +d +� +i=1 +� 1 +0 +(xi − Zi)∂if((1 − t)Z + tx)dt, +(B.3) +so that, using (B.2), we have +E [(f(x) − f(Z))K(x, Z)] += +d +� +i=1 +� +|γ|=k−1 +1 +γ!E +�� 1 +0 +∂if((1 − t)Z + tx) (Hγ+ei(x)Hγ(Z) − Hγ+ei(Z)Hγ(x)) dt +� +. +(B.4) +By assumption, +sup +t∈[0,1] +|∂if((1 − t)Z + tx)| (|Hγ(Z)| + |Hγ+ei(Z)|) +≲ exp +� +c∥Z∥2 + 2c∥Z∥∥x∥ +� +(|Hγ(Z)| + |Hγ+ei(Z)|) +(B.5) +for some 0 ≤ c < 1/2. The right-hand side is integrable with respect to the Gaussian measure, and therefore +we can interchange the expectation and the integral in (B.4). Therefore, +E [(f(x) − f(Z))K(x, Z)] += +� 1 +0 +d +� +i=1 +� +|γ|=k−1 +1 +γ!E [∂if((1 − t)Z + tx) (Hγ+ei(x)Hγ(Z) − Hγ+ei(Z)Hγ(x))]dt. +(B.6) +Proof of Lemma B.2. Without loss of generality, assume i = 1. To simplify the proof, we will also assume +d = 2. The reader can check that the proof goes through in the same way for general d. By the recursion +relation (A.1) for 1d Hermite polynomials, we get that +Hγ1+1,γ2(x) = x1Hγ1,γ2(x) − γ1Hγ1−1,γ2(x), +where x = (x1, x2). Multiply this equation by Hγ(y) (where y = (y1, y2)) and swap x and y, to get the two +equations +Hγ1+1,γ2(x)Hγ1,γ2(y) = x1Hγ1,γ2(x)Hγ1,γ2(y) − γ1Hγ1−1,γ2(x)Hγ1,γ2(y), +Hγ1+1,γ2(y)Hγ1,γ2(x) = y1Hγ1,γ2(x)Hγ1,γ2(y) − γ1Hγ1−1,γ2(y)Hγ1,γ2(x). +(B.7) +Let +Sγ1,γ2 = Hγ1+1,γ2(x)Hγ1,γ2(y) − Hγ1+1,γ2(y)Hγ1,γ2(x). +Subtracting the second equation of (B.7) from the first one, and using the Sγ1,γ2 notation, gives +Sγ1,γ2 = (x1 − y1)Hγ1,γ2(x)Hγ1,γ2(y) + γ1Sγ1−1,γ2 +(B.8) +23 + +and hence +Sγ1,γ2 +γ1!γ2! = (x1 − y1)Hγ1,γ2(x)Hγ1,γ2(y) +γ1!γ2 ++ +Sγ1−1,γ2 +(γ1 − 1)!γ2!. +Iterating this recursive relationship γ1 − 1 times, we get +Sγ1,γ2 +γ1!γ2! = (x1 − y1) +γ1−1 +� +j=0 +Hγ1−j,γ2(x)Hγ1−j,γ2(y) +(γ1 − j)!γ2! ++ S0,γ2 +0!γ2! . +(B.9) +Now, we have +S0,γ2 = H1,γ2(x)H0,γ2(y) − H1,γ2(y)H0,γ2(x) += H1(x1)Hγ2(x2)Hγ2(y2) − H1(y1)Hγ2(y2)Hγ2(x2) += (x1 − y1)Hγ2(x2)Hγ2(y2) += (x1 − y1)H0,γ2(x)H0,γ2(y). +(B.10) +Therefore, +S0,γ2 +0!γ2! = (x1 − y1)Hγ1−j,γ2(x)Hγ1−j,γ2(y) +(γ1 − j)!γ2! +, +j = γ1 +so (B.9) can be written as +Sγ1,γ2 +γ1!γ2! = (x1 − y1) +γ1 +� +j=0 +Hγ1−j,γ2(x)Hγ1−j,γ2(y) +(γ1 − j)!γ2! +and hence +1 +x1 − y1 +� +γ1+γ2=k−1 +Sγ1,γ2 +γ1!γ2! = +� +γ1+γ2=k−1 +γ1 +� +j=0 +Hγ1−j,γ2(x)Hγ1−j,γ2(y) +(γ1 − j)!γ2! += +� +γ1+γ2≤k−1 +Hγ1,γ2(x)Hγ1,γ2(y) +γ1!γ2! += K(x, y), +(B.11) +using the observation that +{(γ1 − j, γ2) : γ1 + γ2 = k − 1, 0 ≤ j ≤ γ1} += {(˜γ1, γ2) : ˜γ1 + γ2 ≤ k − 1}. +(B.12) +Substituting back in the definition of Sγ1,γ2 gives the desired result. +B.1 +Hermite Series Remainder in Tensor Form +Using (B.1), it is difficult to obtain an upper bound on |rk(x)|, since we need to sum over all γ of order k −1. +In this section, we obtain a more compact representation of rk in terms of a scalar product of k-tensors. 
+We then take advantage of a very useful representation of the tensor of order-k Hermite polynomials, as an +expectation of a vector outer product. This allows us to bound the scalar product in the rk formula in terms +of an operator norm rather than a Frobenius norm (the latter would incur larger d dependence). +First, let us put all the unique kth order Hermite polynomials into a tensor of dk entries, some of which +are repeating, enumerated by multi-indices α = (α1, . . . , αk) ∈ [d]k. Here, [d] = {1, . . . , d}. We do so as +follows: given α ∈ [d]k, define γ(α) = (γ1(α), . . . , γd(α)) by +γj(α) = +k +� +ℓ=1 +1{αℓ = j}, +i.e. γj(α) counts how many times index j appears in α. For this reason, we use the term counting index to +denote indices of the form γ = (γ1, . . . , γd) ∈ {0, 1, 2, . . . }d, whereas we use the standard term “multi-index” +24 + +to refer to the α’s. Note that we automatically have |γ(α)| = k if α ∈ [d]k. Now, for x ∈ Rd, define H0(x) = 1 +and Hk(x), k ≥ 1 as the tensor +Hk(x) = {Hγ(α)(x)}α∈[d]k, +x ∈ Rd. +When enumerating the entries of Hk, we write H(α) +k +to denote Hγ(α). Note that for each γ with |γ| = k, +there are +�k +γ +� +α’s such that γ(α) = γ. +Example B.1. Consider the α = (i, j, j, k, k, k) entry of the tensor H6(x), where i, j, k ∈ [d] are all distinct. +We count that i occurs once, j occurs twice, and k occurs thrice. Thus +H(i,j,j,k,k,k) +6 +(x) = H1(xi)H2(xj)H3(xk) = xi(x2 +j − 1)(x3 +k − 3xk). +The first two tensors H1, H2 can be written down explicitly. For the entries of H1, we simply have H(i) +1 (x) = +H1(xi) = xi, i.e. H1(x) = x. For the entries of H2, we have H(i,i) +2 +(x) = H2(xi) = x2 +i − 1 and H(i,j) +2 +(x) = +H1(xi)H1(xj) = xixj, i ̸= j. Thus H2(x) = xxT − Id. +We now group the terms in the Hermite series expansion (A.4) based on the order |γ|. Consider all γ in +the sum such that |γ| = k. We claim that +� +|γ|=k +1 +γ!cγ(f)Hγ(x) = 1 +k! +� +α∈[d]k +cγ(α)(f)H(α) +k +(x). +(B.13) +Indeed, for a fixed γ such that |γ| = k, there are +�k +γ +� +α’s in [d]k for which γ(α) = γ, and the summands in +the right-hand sum corresponding to these α’s are all identical, equalling cγ(f)Hγ(x). Thus we obtain +�k +γ +� +copies of cγ(f)Hγ(x), and it remains to note that +�k +γ +� +/k! = 1/γ!. +Analogously to Hk(x), define the tensor ck ∈ (Rd)⊗k, whose α’th entry is +c(α) +k += cγ(α) = E [f(Z)Hγ(α)(Z)] = E [f(Z)H(α) +k +(Z)]. +We then see that the sum (B.13) can be written as +1 +k!⟨ck, Hk(x)⟩, and hence the series expansion of f can +be written as +f(x) = +∞ +� +k=0 +1 +k!⟨ck(f), Hk(x)⟩, +ck(f) := E [f(Z)Hk(Z)]. +(B.14) +The main result of this section is Lemma B.4 below, in which we express rk in terms of a tensor scalar +product. However, let us first prove the following lemma, which is needed to bound the term ⟨u ⊗ c3 ⊗ +c4, E [Z ⊗ H3 ⊗ H4]⟩ in the preliminary bound (4.3). +Lemma B.3. Let cp, cp+1 be symmetric tensors in (Rd)p and (Rd)p+1, respectively.Then +⟨u ⊗ cp ⊗ cp+1, E [Z ⊗ Hp(Z) ⊗ Hp+1(Z)]⟩ = (p + 1)!⟨u ⊗ cp, cp+1⟩. +(B.15) +Proof. Let T = E [Z ⊗ Hp ⊗ Hp+1]. First ,we characterize the non-zero entries of T using the counting index +notation. In counting index notation, a typical entry of T takes the form E [ZiHγ(Z)Hγ′(Z)], where i ∈ [d], +|γ| = p, and |γ′| = p + 1. Now, +E [ZiHγ(Z)Hγ′(Z)] = E [ZiHγi(Zi)Hγ′ +i(Zi)] +� +j̸=i +E [Hγj(Zj)Hγ′ +j(Zj)] += E [ZiHγi(Zi)Hγ′ +i(Zi)] +� +j̸=i +δγj,γj′ γj! +(B.16) +For this to be nonzero, we must have γj = γj′ for all j ̸= i. 
But since |γ| = p and |γ′| = p + 1, it follows that +we must have γ′ +i = γi + 1.Hence γ′ = γ + ei, where ei is the ith unit vector. To summarize, Ti,γ,γ′ is only +25 + +nonzero if γ′ = γ + ei. In this case, we have +E [ZiHγ(Z)Hγ+ei(Z)] = E [ZiHγi(Zi)Hγi+1(Zi)] +� +j̸=i +γj! += (γi + 1)! +� +j̸=i +γj! = (γ + ei)!. +(B.17) +To get the second line we used the following recurrence relation for 1-d Hermite polynomials: xHk(x) = +Hk+1(x) + kHk−1(x) for all k ≥ 1. Now, we take the inner product (B.15) using counting index notation, +recalling that each γ such that |γ| = p shows up in the tensor Hp exactly p!/γ! times: +� +u ⊗ cp ⊗ cp+1, E [Z ⊗ Hp(Z) ⊗ Hp+1(Z)] +� += +d +� +i=1 +� +|γ|=p +� +|γ′|=p+1 +p! +γ! +(p + 1)! +(γ′)! +uicγcγ′E [ZiHγ(Z)Hγ′(Z)] += +d +� +i=1 +� +|γ|=p +p! +γ! +(p + 1)! +(γ + ei)!uicγcγ+ei(γ + ei)! += (p + 1)! +d +� +i=1 +� +|γ|=p +p! +γ!uicγcγ+ei += (p + 1)! +d +� +i=1 +d +� +j1,...,jp=1 +uicγ(j1,...,jp)cγ(i,j1,...,jp) += (p + 1)! +d +� +i=1 +d +� +j1,...,jp=1 +uic(j1,...,jp) +p +c(i,j1,...,jp) +p+1 += ⟨u ⊗ cp, cp+1⟩ +(B.18) +Lemma B.4. Let f satisfy the assumptions of Lemma B.1, and additionally, assume f ∈ Ck. Then the +remainder rk, given as (B.1) in Lemma B.1, can also be written in the form +rk(x) = +� 1 +0 +(1 − t)k−1 +(k − 1)! E +�� +∇kf ((1 − t)Z + tx) , Hk(x) − Z ⊗ Hk−1(x) +�� +(B.19) +Proof. Recall that ∂γ := ∂γ1 +1 . . . ∂γd +d , and that +Hγ(z)e−∥z∥2/2 = (−1)|γ|∂γ(e−∥z∥2/2). +We then have for |γ| = k − 1, +E [∂if((1 − t)Z + tx)Hγ(Z)] = (1 − t)k−1E [∂γ+eif], +E [∂if((1 − t)Z + tx)Hγ+ei(Z)] = (1 − t)k−1E [∂γ+eifZi], +(B.20) +using the fact that f ∈ Ck. We omitted the argument (1 − t)Z + tx from the right-hand side for brevity. +To get the second equation, we moved only γ of the γ + ei derivatives from e−∥z∥2/2 onto ∂if, leaving +−∂i(e−∥z∥2/2) = zi. Substituting these two equations into (B.1), we get +E [(f(x) − f(Z))K(x, Z)] += +� 1 +0 +(1 − t)k−1 +d +� +i=1 +� +|γ|=k−1 +1 +γ!E [(∂γ+eif) (Hγ+ei(x) − ZiHγ(x))]dt += +1 +(k − 1)! +� 1 +0 +(1 − t)k−1 +d +� +i=1 +� +|γ|=k−1 +�k − 1 +γ +� +E [(∂γ+eif) (Hγ+ei(x) − ZiHγ(x))]dt +(B.21) +26 + +Now, define the sets +A = {(i, γ + ei) : i = 1, . . . , d, γ ∈ {0, 1, . . . }d, |γ| = k − 1}, +B = {(i, ˜γ) : ˜γ ∈ {0, 1, . . . }d, |˜γ| = k, ˜γi ≥ 1}. +(B.22) +It is straightforward to see that A = B. Therefore, +d +� +i=1 +� +|γ|=k−1 +�k − 1 +γ +� +E [∂γ+eif]Hγ+ei(x) += +� +|˜γ|=k +� +i: ˜γi≥1 +� k − 1 +˜γ − ei +� +E [∂˜γf]H˜γ(x) += +� +|˜γ|=k +� +i: ˜γi≥1 +�k +˜γ +� ˜γi +k E [∂˜γf]H˜γ(x) += +� +|˜γ|=k +�k +˜γ +� +E [∂˜γf]H˜γ(x) = ⟨E [∇kf], Hk(x)⟩ +(B.23) +Next, note that +� +|γ|=k−1 +�k − 1 +γ +� +∂γ∂ifHγ(x) = ⟨∇k−1∂if, Hk−1(x)⟩, +and therefore +d +� +i=1 +� +|γ|=k−1 +�k − 1 +γ +� +E [(∂γ+eif)Zi]Hγ(x) = E [⟨∇kf, Z ⊗ Hk−1(x)⟩]. +(B.24) +Substituting (B.23) and (B.24) into (B.21) gives +rk(x) = E [(f(x) − f(Z))K(x, Z)] += +� 1 +0 +(1 − t)k−1 +(k − 1)! E +�� +∇kf ((1 − t)Z + tx) , Hk(x) − Z ⊗ Hk−1(x) +�� +(B.25) +In the next section, we obtain a pointwise upper bound on |rk(x)| in the case f = ¯W. In order for this +bound to be tight in its dependence on d, we need a supplementary result on inner products with Hermite +tensors. To motivate this supplementary result, consider bounding the inner product in (B.19) by the product +of the Frobenius norms of the tensors on either side. As a rough heuristic, ∥∇kf∥F ∼ dk/2∥∇kf∥, where +recall that ∥∇kf∥ is the operator norm of ∇kf. Therefore, we would prefer to bound the inner product in +terms of ∥∇kf∥ to get a tighter dependence on d. 
Apriori, however, this seems impossible, since Hk(x) is not +given by an outer product of k vectors. But the following representation of the order k Hermite polynomials +will make this possible. +Hk(x) = E [(x + iZ)⊗k], +(B.26) +where Z ∼ N(0, Id). Using (B.26), we can bound scalar products of the form ⟨∇kf, Hk(x)⟩ and ⟨∇kf, Z ⊗ +Hk−1(x)⟩ in terms of the operator norm of ∇kf. More generally, we have the following lemma. +Lemma B.5. Let T ∈ (Rd)⊗k be a k-tensor, and v ∈ Rd. Then for all 0 ≤ ℓ ≤ k, we have +|⟨T, v⊗ℓ ⊗ Hk−ℓ(x)⟩| ≲ ∥T∥∥v∥ℓ(∥x∥k−ℓ + d +k−ℓ +2 ). +Proof. Using (B.26), we have +⟨T, v⊗ℓ ⊗ Hk−ℓ(x)⟩ = E [⟨T, v⊗ℓ ⊗ (x + iZ)⊗(k−ℓ)⟩] +27 + +and hence +|⟨T, v⊗ℓ ⊗ Hk−ℓ(x)⟩| ≤ E |⟨T, v⊗ℓ ⊗ (x + iZ)⊗k−ℓ⟩| +≤ ∥T∥∥v∥ℓE +� +∥x + iZ∥k−ℓ� +≲ ∥T∥∥v∥ℓ(∥x∥k−ℓ + +√ +d +k−ℓ). +(B.27) +B.2 +Hermite-Related Proofs from Section 4.2 +In this section, we return to the setting in the main text. +We let W satisfy all the assumptions from +Section 3.2, m ∈ Rd, σ ∈ Rd×d be such that ∥m∥, ∥σ∥ ≲ 1, and ¯W(x) = W(m + σx). Also, let +rk(x) = ¯W(x) − +k−1 +� +j=0 +1 +j! +� +cj( ¯W), Hj(x) +� +, +(B.28) +where cj( ¯W) = E [ ¯W(Z)Hj(Z)] as usual. +Combining Lemmas B.4 and B.5 allows us to upper bound quantities of the form E [|rk(Y )|p] in terms of +the operator norm of ∇kW. +Corollary B.1. Let rk be as in (B.28), the remainder of the Hermite series expansion of ¯W, where k = 3 +if W ∈ C3 and k = 4 if W ∈ C4. Let Y ∈ Rd be a random variable such that E [∥Y ∥s] < ∞ for all +0 ≤ s ≤ 2pk + 2pq, where q is from Assumption W2. Then +E [|rk(Y )|p] ≲ +� +dk +N k−2 +� p +2 �� +E ∥Y/ +√ +d∥2kp + +� +E ∥Y/ +√ +d∥2(k−1)p + 1 +� +× +� +1 + +� +E +� +∥Y/ +√ +d∥2pq�� +(B.29) +Proof. Let ∇k ¯W be shorthand for ∇k ¯W((1 − t)Z + tY ). Using (B.19) for f = ¯W, we have +|rk(Y )| ≲ +� 1 +0 +E Z +���� +∇k ¯W, Hk(Y ) − Z ⊗ Hk−1(Y ) +���� +dt. +(B.30) +Raising this inequality to the pth power and applying Jensen’s inequality twice, we have +|rk(Y )|p ≲ +� 1 +0 +E Z +���� +∇k ¯W, Hk(Y ) − Z ⊗ Hk−1(Y ) +���p� +dt. +(B.31) +We now take the Y -expectation of both sides, and we are free to assume Y is independent of Z. Note +that the integrand on the right-hand side can be bounded by a∥Y ∥p(q+k) + b for some a and b, since +∥∇k ¯W((1 − t)Z + tY )∥ ≲ (1 + ∥Z∥ + ∥Y ∥)q by Assumption W2, and since the tensors Hk(Y ), Hk−1(Y ) are +made up of at most order k polynomials of Y . Since E [∥Y ∥pq+pk] < ∞ by assumption, we can bring the +Y -expectation inside the integral. Hence +E [|rk(Y )|p] ≲ +� 1 +0 +E +���� +∇k ¯W, Hk(Y ) − Z ⊗ Hk−1(Y ) +���p� +dt, +(B.32) +where the expectation is over both Z and Y . Next, using Lemma B.5 we have +����⟨∇k ¯W, Hk(Y )−Z ⊗ Hk−1(Y )⟩ +���� +p +≲ ∥∇k ¯W∥p � +∥Y ∥kp + d +kp +2 + ∥Z∥p∥Y ∥(k−1)p + ∥Z∥pd +(k−1)p +2 +� +. +(B.33) +28 + +Substituting this into (B.32) we have +E [|rk(Y )|p] ≲ +� 1 +0 +E +���∇k ¯W +��p � +∥Y ∥kp + d +kp +2 + ∥Z∥p∥Y ∥(k−1)p + ∥Z∥pd +(k−1)p +2 +�� +dt +≲ +� 1 +0 +E +� +∥∇k ¯W∥2p� 1 +2 +�� +E ∥Y ∥2kp + d +p +2 +� +E ∥Y ∥2(k−1)p + d +kp +2 +� +dt +≤ d +kp +2 +�� +E ∥Y/ +√ +d∥2kp + +� +E ∥Y/ +√ +d∥2(k−1)p + 1 +� +× +� 1 +0 +E +���∇k ¯W +��2p� 1 +2 dt. +(B.34) +We used Cauchy-Schwarz and the independence of Y and Z to get the second line. Finally, recall that +∇k ¯W = ∇k ¯W((1 − t)Z + tY ) and note that since ∥σ∥ ≲ 1, we have +∥∇k ¯W((1 − t)Z + tY )∥ ≲ ∥∇kW(m + (1 − t)σZ + tσY )∥. +We now apply Lemma 3.4 with Y = m + (1 − t)σZ + tσY . 
Note that +E +����m + (1 − t)σZ + tσY +��/ +√ +d +�2pq� +≲ 1 + E +� +∥Y/ +√ +d∥2pq� +and hence +E +���∇k ¯W +��2p� 1 +2 ≲ E +���∇kW(m + (1 − t)σZ + tσY ) +��2p� 1 +2 +≲ +� +1 + +� +E +� +∥Y/ +√ +d∥2pq�� +N p(1−k/2) +(B.35) +for all t ∈ [0, 1]. Combining this inequality with (B.34) and noting that dkp/2N p(1−k/2) = (dk/N k−2)p/2 +gives (B.29). +C +Proofs Related to Affine Invariance +Recall the equations +E [∇V (m + S1/2Z)] = 0, +E [∇2V (m + S1/2Z)] = S−1 +(EV ) +and the definition of RV for a measure π ∝ e−V : +RV = +� +(m, S) ∈ Rd×Sd +++ : S ⪯ 2H−1, +∥ +√ +H +√ +S∥2 + ∥ +√ +H(m − m∗)∥2 ≤ 8 +� +, +(C.1) +where m∗ = argminx∈Rd V (x) and H = ∇2V (m∗). +Lemma C.1. Let V2(x) = V1(Ax + b) for some A ∈ Rd×d invertible and b ∈ Rd. Then the pair (m1, S1) is +a unique solution to (EV1) in the set RV1 if and only if the pair (m2, S2) given by +m2 = A−1(m1 − b), +S2 = A−1S1A−T +(C.2) +is a unique solution to (EV2) in the set RV2. +Proof. It suffices to prove the following two statements. (1) If (m1, S1) ∈ RV1 solves (EV1) then (m2, S2) +given by (C.2) lies in RV2 and solves (EV2). (2) If (m2, S2) ∈ RV2 solves (EV2) then (m1, S1) given by +m1 = Am2 + b, S1 = AS2AT lies in RV1 and solves (EV1). +29 + +We prove the first statement, and the second follows by a symmetric argument. So let (m1, S1) ∈ RV1 +solve (EV1). We first show (m2, S2) given by (C.2) solves (EV2). We have +∇V2(x) = AT ∇V1(Ax + b), +∇2V2(x) = AT ∇2V1(Ax + b)A. +(C.3) +Note also that if σ = A−1S1/2 +1 +then σσT = A−1S1A−T . We therefore have +E +� +∇V2 +� +A−1(m1 − b) + +� +A−1S1A−T �1/2 Z +� � += E +� +∇V2 +� +A−1(m1 − b) + A−1S1/2 +1 +Z +�� += AT E +� +∇V1 +� +m1 + S1/2 +1 +Z +�� += 0. +(C.4) +Similarly, +E +� +∇2V2 +� +A−1(m1 − b) + +� +A−1S1A−T �1/2 Z +� � += E +� +∇2V2 +� +A−1(m1 − b) + A−1S1/2 +1 +Z +�� += AT E +� +∇2V1 +� +m1 + S1/2 +1 +Z +�� +A += AT S−1 +1 A = S−1 +2 . +(C.5) +To conclude, we show (m2, S2) ∈ RV2. Let m∗i be the global minimizer of Vi and Hi = ∇2V (m∗i), i = 1, 2. +Then m∗2 = A−1(m∗1 − b) and H2 = AT H1A. Since S1 ⪯ 2H−1 +1 , it follows that +S2 = A−1S1A−T ⪯ 2A−1H−1 +1 A−T = 2H−1 +2 . +Furthermore, direct substitution shows that +∥ +√ +H2(m2 − m∗2)∥2 = (m2 − m∗2)T H2(m2 − m∗2) += (m1 − m∗1)T H1(m1 − m∗1) = ∥ +√ +H1(m1 − m∗1)∥2. +(C.6) +Finally, note that +∥ +� +H2 +� +S2∥2 = ∥ +� +S2H2 +� +S2∥ += sup +u̸=0 +uT √S2H2 +√S2u +∥u∥2 += sup +u̸=0 +uT H2u +∥√S2 +−1u∥2 = sup +u̸=0 +uT H2u +uT S−1 +2 u += sup +u̸=0 +uT H1u +uT S−1 +1 u = ∥ +� +H1 +� +S1∥2. +(C.7) +Therefore, +∥ +� +H2 +� +S2∥2 + ∥ +√ +H2(m2 − m∗2)∥2 = ∥ +� +H1 +� +S1∥2 + ∥ +√ +H1(m1 − m∗1)∥2 ≤ 8. +Recall that +W(x) = nv +� +m∗ + +√ +nH +−1x +� +, +H = ∇2v(m∗), +30 + +and that N = nr, where r is from Assumption V1. The following preliminary calculation will be useful for +showing Assumptions V1, V2 imply Assumptions W1, W2, respectively. Given x ∈ Rd, let +y = √α2 +√ +H +−1x. +We have +√ +N∥∇3W( +√ +Nx)∥ ≤ +√ +N +√nα2 +3 +���∇3(nv) +� +m∗ + +√ +nH +−1√ +Nx +���� += +√r +α2√α2 +���∇3v +� +m∗ + √r +√ +H +−1x +���� += +√r +α2√α2 +����∇3v +� +m∗ + +� r +α2 +y +����� . +(C.8) +Analogously, +N∥∇4W( +√ +Nx)∥ ≤ +N +√nα2 +4 +���∇4(nv) +� +m∗ + +√ +nH +−1√ +Nx +���� += r +α2 +2 +���∇4v +� +m∗ + √r +√ +H +−1x +���� += r +α2 +2 +����∇4v +� +m∗ + +� r +α2 +y +����� . +(C.9) +Lemma C.2. Assumptions V1, V2, and V3 imply Assumptions W1, W2, and W3 with N = nr, where r is +from Assumption V1. +Proof. Let y = √α2 +√ +H +−1x. Note that ∥y∥ ≤ ∥x∥ and in particular, if ∥x∥ ≤ 1 then ∥y∥ ≤ 1. 
To show +that V1 implies W1, note that by the above calculation we have +√ +N sup +∥x∥≤1 +∥∇3W( +√ +Nx)∥ ≤ +√r +α2√α2 +sup +∥y∥≤1 +����∇3v +� +m∗ + +� r +α2 +y +����� ≤ 1 +2, +(C.10) +as desired. To show that W2 implies V2, fix x ∈ Rd and note that +√ +N∥∇3W( +√ +Nx)∥ ≤ +√r +α2√α2 +����∇3v +� +m∗ + +� r +α2 +y +����� +≤ 1 + ∥y∥q ≤ 1 + ∥x∥q, +(C.11) +as desired. The calculation for the fourth derivative is analogous. +To show that Assumption V3 implies W3, fix ∥x∥ ≥ +√ +N and let y = +√ +nH +−1x, so that W(x) = +nv(y + m∗). Note that ∥y∥ ≥ ∥x∥/√nβ2 ≥ +� +N/(nβ2) = +� +r/β2. Hence we can apply Assumption V3 to +conclude that +W(x) = nv(m∗ + y) ≥ (d + 12q + 36) log(∥ +� +nβ2y∥) +≥ (d + 12q + 36) log(∥ +√ +nHy∥) = (d + 12q + 36) log ∥x∥. +(C.12) +as desired. +D +Logistic Regression Example +Details of Numerical Simulation +For the numerical simulation displayed in Figure 1, we take d = 2 and n = 100, 200, . . . , 1000. For each +n, we draw ten sets of covariates xi, i = 1, . . . , n from N(0, λ2Id) with λ = +√ +5, yielding ten posterior +31 + +distributions πn(· | x1:n). +For each πn we compute the ground truth mean and covariance by directly +evaluating the integrals, using a regularly spaced grid (this is feasible in two dimensions). The mode m∗ of +πn is found by a standard optimization procedure, and the Gaussian VI estimates ˆm, ˆS are computed using +the procedure described in [LCB+22]. We used the authors’ implementation of this algorithm, found at +https://github.com/marc-h-lambert/W-VI. We then compute the Laplace and VI mean and covariance +approximation errors for each n and each of the ten posteriors at a given n. The solid lines in Figure 1 depict +the average approximation errors over the ten distributions at each n. The shaded regions depict the spread +of the middle eight out of ten approximation errors. +Verifying the Assumptions +As discussed in Section 2.3, we make the approximation +v(z) ≈ v∞(z) = −E [Y log s(XT z) + (1 − Y ) log(1 − s(XT z)]. +Here, X ∼ N(0, λ2Id) and Y | X ∼ Bernoulli(s(X1)), since X1 = eT +1 X. +Recall that s is the sigmoid, +s(a) = (1 + ea)−1. Below, the parameters α2, β2, etc. are all computed for the function v∞. +Note that z = e1 is the global minimizer of v∞. We have ∇2v∞(z) = E [s′(XT z)XXT ] and in particular, +∇2v∞(e1) = E [s′(X1)XXT ]. Also, +s′(a) = s(a)(1 − σ(a)) = +1 +2(1 + cosh(a)) ∈ (0, 1/4]. +To lower bound λmin(∇2v∞(e1)), note that for ∥u∥ = 1 we have +uT ∇2v∞(e1)u = E [s′(X1)(XT u)2] += u2 +1E [s′(X1)X2 +1] + λ2 +d +� +j=2 +u2 +jE [s′(X1)] +≥ s′(λ) +� +�u2 +1E [X2 +1{|X1| ≤ λ}] + λ2 +d +� +j=2 +u2 +jP(|X1| ≤ λ) +� +� +≳ λ2s′(λ), +(D.1) +and hence α2 ≳ λ2s′(λ). Using that s′ ≤ 1/4, we also have the upper bound +λmax(∇2v∞(e1)) ≤ λ2 +4 = β2. +(D.2) +Next, we need to upper bound ∥∇3v∞∥. We have +∇3v∞(z) = E [s′′(XT z)X⊗3], +s′′(a) = s(a)(1 − s(a))(1 − 2s(a)), +so that +∥∇3v∞(z)∥ = +sup +∥u1∥=∥u2∥=∥u3∥=1 +E +� +s′′(XT z) +3 +� +k=1 +(uT +k X) +� +. +One can show that s′′(a) ∈ [−1, 1] for all a ∈ R. Hence +E +� +s′′(XT z) +3 +� +k=1 +(uT +k X) +� +≤ E +� 3 +� +k=1 +|uT +k X| +� +≤ +3 +� +k=1 +E +� +|uT +k X|3�1/3 ≤ 2λ3. +(D.3) +Here, we used that uT +k X +d= N(0, λ2), whose third absolute moment is bounded by 2λ3. We therefore get the +bound +∥∇3v∞(z)∥ ≤ β3 := 2λ3. +(D.4) +32 + +Note that this constant bound holds for all z ∈ Rd. Next, we need to find r such that +sup +∥z−m∗∥≤√ +r/α2 +∥∇3v∞(z)∥ ≤ α3/2 +2 +2√r . +Using the uniform bound (D.4) on ∥∇3v∞∥, it suffices to take r such that +β3 = α3/2 +2 +2√r =⇒ r = α3 +2 +4β2 +3 +≳ s′(λ)3. 
+(D.5) +Finally, we verify Assumption V3. To do so, recall that v∞ is convex. Therefore, if y lies on the line +segment between 0 and z, with ∥y∥ = +� +r/β2 < ∥z∥, then +v∞(m∗ + z) − v∞(m∗) ≥ +∥z∥ +� +r/β2 +(v∞(m∗ + y) − v∞(m∗)) +≥ +� +β2/r +inf +∥y∥=√ +r/β2 +[v∞(m∗ + y) − v∞(m∗)] ∥z∥. +(D.6) +It is clear that if λ is a constant then the parameters in this inequality, as well as the infimum, are lower +bounded by absolute constants. Therefore, since ∥z∥ ≥ log ∥z∥, Assumption V3 is satisfied. +E +Proofs from Section 5 +The proofs in this section rely on tensor-matrix and tensor-vector scalar products. Let us review the rules +of such scalar products, and how to bound the operator norms of these quantities. Let v ∈ Rd, A ∈ Rd×d, +and T ∈ Rd×d×d. We define the vector ⟨T, A⟩ ∈ Rd and the matrix ⟨T, v⟩ ∈ Rd×d by +⟨T, A⟩i = +d +� +j,k=1 +TijkAjk, +i = 1, . . . , d, +⟨T, v⟩ij = +d +� +k=1 +Tijkvk, +i, j = 1, . . . , d. +(E.1) +We will always sum over the last two or last one indices of the tensor. Note that the norm of the matrix +⟨T, v⟩ is given by ∥⟨T, v⟩∥ = sup∥u∥=∥w∥=1 uT ⟨T, v⟩w, and we have +uT ⟨T, v⟩w = +d +� +i,j=1 +uiwj +d +� +k=1 +Tijkvk = ⟨T, u ⊗ w ⊗ v⟩ ≤ ∥T∥∥v∥. +Therefore, ∥⟨T, v⟩∥ ≤ ∥T∥∥v∥. +We also review the notion of operator norm for derivatives of a function, and note the distinction between +this kind of operator norm and the standard tensor operator norm. Specifically, consider a C2 function +f = (f1, . . . , fd) : Rd×d ×Rd → Rd, where Rd×d is endowed with the standard matrix norm. Then ∇σf(σ, m) +is a linear functional from Rd×d to Rd, and we let ⟨∇σf(σ, m), A⟩ ∈ Rd denote the application of ∇σf(σ, m) +to A. Note that we can represent ∇σf by the d × d × d tensor (∇σjkfi)d +i,j,k=1, so that ⟨∇σf(σ, m), A⟩ +coincides with the definition given above of tensor-matrix scalar products. However, ∥∇σf∥op is not the +standard tensor operator norm. Rather, +∥∇σf∥op = +sup +A∈Rd×d,∥A∥=1 +∥⟨∇σf, A⟩∥ = +sup +A∈Rd×d,∥A∥=1, +u∈Rd,∥u∥=1 +⟨∇σf, A ⊗ u⟩. +We continue to write ∥∇σf∥ to denote the standard tensor operator norm, i.e. +∥∇σf∥ = +sup +u,v,w∈Rd, +∥u∥=∥v∥=∥w∥=1 +⟨∇σf, u ⊗ v ⊗ w⟩. +33 + +Note also that ∇mf ∈ Rd×d is a matrix, and that +max +� +∥∇σf(σ, m)∥op , ∥∇mf(σ, m)∥op +� +≤ ∥∇f(σ, m)∥op ≤ ∥∇σf(σ, m)∥op + ∥∇mf(σ, m)∥op. +(E.2) +Finally, recall the notation +Br(0, 0) = {(σ, m) ∈ Rd×d × Rd : ∥σ∥2 + ∥m∥2 ≤ r2}, +Br = {σ ∈ Rd×d : ∥σ∥ ≤ r}, +Sc1,c2 = {σ ∈ Sd ++ : c1Id ⪯ σ ⪯ c2Id}. +(E.3) +Lemma E.1. Let f = (f1, . . . , fd) : Rd×d×Rd → Rd be C3, where Rd×d is the set of d×d matrices, endowed +with the standard matrix operator norm. Suppose f(0, 0) = 0, ∇σf(0, 0) = 0, ∇mf(σ, m) is symmetric for +all m, and ∇mf(0, 0) = Id. Let r > 0 be such that +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op ≤ 1 +4. +(E.4) +Then for each σ ∈ Rd×d such that ∥σ∥ ≤ r/2 there exists a unique m = m(σ) ∈ Rd such that f(σ, m(σ)) = 0 +and (σ, m(σ)) ∈ Br(0, 0). Furthermore, the map σ �→ m(σ) is C2, with +1 +2Id ⪯ ∇mf(σ, m) +�� +m=m(σ) ⪯ 3 +2Id, +∥∇σm(σ)∥op ≤ 1. +(E.5) +The proof uses the following lemma. +Lemma E.2 (Lemma 1.3 in Chapter XIV of [Lan93]). Let U be open in a Banach space E, and let f : U → E +be of class C1. Assume that f(0) = 0 and f ′(0) = I. Let r > 0 be such that ¯Br(0) ⊂ U. If +|f ′(z) − f ′(x)| ≤ s, +∀z, x ∈ ¯Br(0) +for some s ∈ (0, 1), then f maps ¯Br(0) bijectively onto ¯B(1−s)r(0). +Proof of Lemma E.1. Let φ : Rd×d × Rd → Rd×d × Rd be given by φ(σ, m) = (σ, f(σ, m)), so that φ(0, 0) = +(0, 0), and +∇φ(σ, m) = +� +Id×d +0 +∇σf(σ, m) +∇mf(σ, m) +� +. 
+(E.6) +For each (σ, m), (σ′, m′) ∈ Br(0, 0), we have +∥∇φ(σ, m) − ∇φ(σ′, m′)∥op = ∥∇f(σ, m) − ∇f(σ′, m′)∥op +≤ 2 +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op ≤ 1 +2. +(E.7) +Note also that ∇φ(0, 0) is the identity. Thus by Lemma E.2, we have that φ is a bijection from Br(0, 0) +to Br/2(φ(0, 0)) = Br/2(0, 0). Now, fix any σ ∈ Rd×d such that ∥σ∥ ≤ r/2. Then (σ, 0) ∈ Br/2(0, 0), and +hence there exists a unique (σ′, m) ∈ Br(0, 0) such that (σ, 0) = φ(σ′, m) = (σ′, f(σ′, m)). Thus σ = σ′ and +f(σ, m) = 0. In other words, for each σ such that ∥σ∥ ≤ r/2 there exists a unique m = m(σ) such that +(σ, m(σ)) ∈ Br(0, 0) and such that 0 = f(σ, m). +The map σ �→ m(σ) is C2 by standard Implicit Function Theorem arguments. To show that the first +inequality of (E.5) holds, note that we have +∥∇mf(σ, m(σ)) − ∇mf(0, 0)∥op ≤ ∥∇f(σ, m(σ)) − ∇f(0, 0)∥op ≤ 1/4 ≤ 1/2 +34 + +by (E.4) since we know that (σ, m(σ)) ∈ Br(0, 0). Thus, +Id = ∇2W(0) = ∇mf(0, 0) +=⇒ 1 +2Id ⪯ ∇mf(σ, m(σ)) ⪯ 3 +2Id. +(E.8) +To show the second inequality of (E.5), we first need the supplementary bound +∥∇σf(σ, m(σ))∥op = ∥∇σf(σ, m(σ)) − ∇σf(0, 0)∥op ≤ 1/2 +(E.9) +which holds by the same reasoning as above. Now, +∂σjkm = −∇mf(σ, m)−1∂σjkf(σ, m) ∈ Rd +by standard Implicit Function Theorem arguments, where ∇mf(σ, m) is a matrix, ∂σjkf(σ, m) is a vector, +and ∇σm, ∇σf are linear maps from Rd×d to Rd. Hence by the first inequality in (E.5) combined with (E.9) +we have +∥∇σm(σ)∥op = sup +∥A∥=1 +∥⟨∇σm(σ), A⟩∥ += sup +∥A∥=1 +∥∇mf(σ, m)−1 +d +� +j,k=1 +∂σjkf(σ, m)Ajk∥ += sup +∥A∥=1 +∥∇mf(σ, m)−1⟨∇σf, A⟩∥ +≤ ∥∇mf(σ, m)−1∥∥∇σf∥op ≤ 2 × 1 +2 = 1. +(E.10) +Lemma E.3. Let f : Rd×d × Rd → Rd be given by f(σ, m) = E [∇W(σZ + m)]. Then all the conditions of +Lemma E.1 are satisfied; in particular, (E.4) is satisfied with r = 2 +√ +2. Thus the conclusions of Lemma E.1 +hold with this choice of r. +Proof. Note that f is C2 thanks to the fact that W is C3 and ∇W grows polynomially by Assumption W2. +We then immediately have f(0, 0) = ∇W(0) = 0, ∇mf(σ, m) = E [∇2W(m + σZ)] is symmetric for all m, σ, +and ∇mf(0, 0) = ∇2W(0) = Id. To show ∇σf(0, 0) = 0, we compute the i, j, k term of this tensor: +∂σjkfi = ∂σjkE [∂iW(m + σZ)] = E [∂2 +ijW(m + σZ)Zk], +so that ∂σjkfi(0, 0) = E [∂2 +i,jW(0)Zk] = 0. It remains to show that for r = 2 +√ +2 we have +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op ≤ 1 +4. +(E.11) +First, note that +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op ≤ r +sup +(σ,m)∈Br(0,0) +∥∇2f(σ, m)∥op, +where ∇2f(σ, m) is a bilinear form on (Rd×d × Rd)2, and we have +∥∇2f(σ, m)∥op ≤ ∥∇2 +σf(σ, m)∥op + 2∥∇σ∇mf(σ, m)∥op + ∥∇2 +mf(σ, m)∥op. +For f(σ, m) = E [∇W(σZ + m)], these second order derivatives are given by +∂2 +mi,mjf(σ, m) = E [∂2 +i,j∇W(m + σZ)], +∂mi∂σjkf(σ, m) = E [∂2 +i,j∇W(m + σZ)Zk], +∂2 +σjk,σℓpf(σ, m) = E [∂2 +j,ℓ∇W(m + σZ)ZkZp], +(E.12) +35 + +each a vector in Rd. From the first line, we get that ∥∇2 +mf(σ, m)∥op ≤ E ∥∇3W(m + σZ)∥, where ∥∇3W∥ +is the standard tensor norm. From the second line, we get +∥∇m∇σf(σ, m)∥op += +sup +∥A∥=1,∥x∥=1 +����E +� +d +� +i,j,k=1 +∂2 +i,j∇W(m + σZ)ZkxiAjk +����� += +sup +∥A∥=1,∥x∥=1 +����E +� +d +� +i,j=1 +∂2 +i,j∇W(m + σZ)xi(AZ)j +����� += +sup +∥A∥=1,∥x∥=1 +����E +�� +∇3W(m + σZ), x ⊗ AZ +������ +≤ +sup +∥A∥=1,∥x∥=1 +E +� +∥x∥∥AZ∥∥∇3W(m + σZ)∥ +� +≤ +√ +d +� +E [∥∇3W(m + σZ)∥2]. +(E.13) +A similar computation gives +∥∇2 +σf(σ, m)∥op ≤ +sup +∥A∥=1,∥B∥=1 +E [∥AZ∥∥BZ∥∥∇3W(m + σZ)∥] +≤ 2d +� +E [∥∇3W(m + σZ)∥2] ≲ d/ +√ +N. 
+(E.14) +Thus overall we have +∥∇2f(σ, m)∥op ≤ (2d + 2 +√ +d + 1) +� +E [∥∇3W(m + σZ)∥2] +≤ 5d +sup +(σ,m)∈Br(0,0) +� +E [∥∇3W(m + σZ)∥2] +(E.15) +and hence +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op +≤ 5rd +sup +(σ,m)∈Br(0,0) +� +E [∥∇3W(m + σZ)∥2] +≤ 10 +√ +2d +√ +N +( +√ +3 + +� +(2q)!), +(E.16) +where in the last line we applied Lemma E.5 and substituted r = 2 +√ +2. To conclude, recall that ( +√ +3 + +� +(2q)!)/ +√ +N ≤ 1/(40 +√ +2d) by the assumption in the statement of Lemma 5.1. +Lemma E.4. Let r = 2 +√ +2 and σ ∈ S0,r/2 �→ m(σ) ∈ Rd be the restriction to symmetric nonnegative +matrices of the map furnished by Lemmas 5.2 and 5.3 . Then the function F given by +F(σ) = E [∇2W(m(σ) + σZ)]−1/2 +is well-defined and a strict contraction on S0,r/2. Moreover, +F(S0,r/2) ⊆ Sc1,c2 ⊆ S0,r/2, +where c1 = +� +2/3, c2 = +√ +2 = r/2. +Proof. First, let G(σ) = E [∇2W(m(σ) + σZ)] and f(σ, m) = E [∇W(m + σZ)] as in Lemma E.3. Note that +∇mf(σ, m) = E [∇2W(σZ + m)], so that G(σ) = ∇mf(σ, m)|m=m(σ) and hence by (E.5) of Lemma E.1 we +have +1 +2Id ⪯ G(σ) ⪯ 3 +2Id, +∀σ ∈ S0,r/2. +(E.17) +36 + +But then G(σ) has a unique invertible symmetric positive definite square root, and we define F(σ) = G(σ)−1/2 +to be the inverse of this square root. Moreover, using (E.17), it follows that +c1Id ⪯ F(σ) ⪯ c2Id, +∀σ ∈ S0,r/2, +where c1 = +� +2/3 and c2 = +√ +2 = r/2. In other words, F(S0,r/2) ⊆ Sc1,c2 ⊆ S0,r/2. It remains to show F is +a contraction on S0,r/2. Let σ1, σ2 ∈ S0,r/2. We will first bound ∥G(σ1) − G(σ2)∥. We have +∥G(σ1) − G(σ2)∥ ≤ ∥σ1 − σ2∥ +sup +σ∈S0,r/2 +∥∇σG(σ)∥op, +(E.18) +and +∥∇σG(σ)∥op = sup +∥A∥=1 +��� +∇σG(σ), A +��� += sup +∥A∥=1 +����E +� +∇3W, +� +A, ∇σ (m(σ) + σZ) +������. +(E.19) +Here, the quantities inside of the ∥ · ∥ on the right are matrices. Indeed, ⟨∇σG, A⟩ denotes the application +of ∇σG to A. Since G sends matrices to matrices, ∇σG is a linear functional which also sends matrices to +matrices. In the third line, ∇σ(m(σ) + σZ) should be interpreted as a linear functional from Rd×d to Rd, so +⟨A, ∇σ(m(σ) + σZ)⟩ is a vector in Rd, and the inner product of this vector with the d × d × d tensor ∇3W +is a matrix. Using that ∥⟨T, x⟩∥ ≤ ∥T∥∥x∥, as explained at the beginning of this section, we have +���� +� +∇3W, +� +A, ∇σ (m(σ) + σZ) +������ ≤ +��∇3W +�� ∥⟨A, ∇σ (m(σ) + σZ)⟩∥ +≤ ∥∇3W∥∥∇σ(m(σ) + σZ)∥op += ∥∇3W∥∥∇σm(σ) + Z ⊗ Id∥op ≤ ∥∇3W∥(1 + ∥Z∥). +(E.20) +To get the last bound, we used that ∥∇σm(σ)∥op ≤ 1, shown in Lemma E.3. We also use the fact that +∥Z ⊗ Id∥op = sup∥A∥=1 ∥ ⟨A, Z ⊗ Id⟩ ∥ = sup∥A∥=1 ∥AZ∥ = ∥Z∥. (Recall that since Z ⊗ Id is part of ∇σm, +we are considering Z ⊗ Id as an operator on matrices rather than as a d × d × d tensor, and this is why we +take the supremum over matrices A.) +Substituting (E.20) back into (E.18), we get +∥G(σ1) − G(σ2)∥ ≤ ∥σ1 − σ2∥ +sup +σ∈S0,r/2 +E +���∇3W (m(σ) + σZ) +�� (1 + ∥Z∥) +� +≤ ∥σ1 − σ2∥ +√ +2(1 + +√ +d) +√ +3 + +� +(2q)! +√ +N +≤ ∥σ1 − σ2∥1 + +√ +d +40d +. +(E.21) +The second inequality is by Cauchy-Schwarz and Lemma E.5 below. The third inequality uses that ( +√ +3 + +� +(2q)!)/ +√ +N ≤ 1/(40 +√ +2d), by the assumption in the statement of Lemma 5.1. Now, note that thanks to +Lemma E.3, both λmin(G(σ1)) and λmin(G(σ2)) are bounded below by 1/2. Using Lemma E.6, we therefore +have +∥F(σ1) − F(σ2)∥ ≤ +√ +2∥G(σ1) − G(σ2)∥ +≤ 1 + +√ +d +20 +√ +2d ∥σ1 − σ2∥. +(E.22) +Hence F is a strict contraction. +Lemma E.5. Assume 4 +√ +2 ≤ +� +N/d. Then +sup +(σ,m)∈B2 +√ +2(0,0) +E [∥∇3W(m + σZ)∥2] ≤ (3 + (2q)!)/N. +37 + +Proof. 
Fix ∥m∥, ∥σ∥ ≤ 2 +√ +2, so that 2∥m∥ ≤ +√ +N and 2∥σ∥ +√ +d ≤ +√ +N. By Assumption W2, we have +NE [∥∇3W(m + σZ)∥2] ≤ 2 + 2E +� +∥(m + σZ)/ +√ +N∥2q� +≤ 2 + 22q +��∥m∥ +√ +N +�2q ++ +� ∥σ∥ +√ +N +�2q +E [∥Z∥2q] +� +≤ 2 + +�2∥m∥ +√ +N +�2q ++ +� +2∥σ∥ +√ +d +√ +N +�2q +(2q)! +≤ 3 + (2q)!. +(E.23) +Lemma E.6. Let A0 and A1 be psd, and A1/2 +0 +, A1/2 +1 +their unique psd square roots. Assume without loss of +generality that λmin(A0) ≤ λmin(A1). Then +∥A−1/2 +1 +− A−1/2 +0 +∥ ≤ +∥A1 − A0∥ +2λmin(A0)3/2 . +Proof. First note that +A−1/2 +1 +− A−1/2 +0 += A−1/2 +1 +(A1/2 +0 +− A1/2 +1 +)A−1/2 +0 +and hence +∥A−1/2 +1 +− A−1/2 +0 +∥ ≤ ∥A−1/2 +1 +∥∥A−1/2 +1 +∥∥A1/2 +1 +− A1/2 +0 +∥ ≤ ∥A1/2 +1 +− A1/2 +0 +∥ +λmin(A0) +. +Now, define At = A0 + t(A1 − A0) and let Bt = A1/2 +t +, where Bt is the unique psd square root of At. We +then have ∥A1/2 +1 +− A1/2 +0 +∥ ≤ supt∈[0,1] ∥ ˙Bt∥. We will now express ˙Bt in terms of ˙At and Bt. Differentiating +B2 +t = At, we get +Bt ˙Bt + ˙BtBt = ˙At = A1 − A0. +(E.24) +Now, one can check that the solution ˙Bt to this equation is given by +˙Bt = +� ∞ +0 +e−sBt(A1 − A0)e−sBtds +and hence +∥ ˙Bt∥ ≤ ∥A1 − A0∥ +� ∞ +0 +∥e−sBt∥2dt = ∥A1 − A0∥ +2λmin(Bt) = ∥A1 − A0∥ +2 +� +λmin(At) +. +Now note that λmin(At) ≥ λmin(A0), since At is just a convex combination of A0 and A1. Hence ∥ ˙Bt∥ ≤ +∥A1 − A0∥/2 +� +λmin(A0) for all t ∈ [0, 1]. Combining all of the above estimates gives +∥A−1/2 +1 +− A−1/2 +0 +∥ ≤ +∥A1 − A0∥ +2λmin(A0)3/2 . +References +[AR20] +P. Alquier and J. Ridgway. Concentration of tempered posteriors and of their variational approx- +imations. The Annals of Statistics, 48(3):1475–1497, 2020. +[BKM17] +D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: A review for statisticians. +Journal of the American Statistical Association, 112(518):859–877, 2017. +[CC96] +M. K. Cowles and B. P. Carlin. Markov chain Monte Carlo convergence diagnostics: A compar- +ative review. Journal of the American Statistical Association, 91(434):883–904, 1996. +38 + +[DD21] +K. Daudel and R. Douc. Mixture weights optimisation for alpha-divergence variational inference. +In Advances in Neural Information Processing Systems, volume 34, pages 4397–4408, 2021. +[DDP21] +K. Daudel, R. Douc, and F. Portier. +Infinite-dimensional gradient-based descent for alpha- +divergence minimisation. The Annals of Statistics, 49(4):2250–2270, 2021. +[HY19] +W. Han and Y. Yang. +Statistical inference in mean-field variational Bayes. +arXiv preprint +arXiv:1911.01525, 2019. +[Kat23] +A. Katsevich. The dimension dependence of the Laplace approximation. In preparation, 2023. +[KGB22] +M. J. Kasprzak, R. Giordano, and T. Broderick. How good is your Gaussian approximation of +the posterior? Finite-sample computable error bounds for a variety of useful divergences. arXiv +preprint arXiv:2209.14992, 2022. +[Lan93] +S. Lang. Real and Functional Analysis. Graduate Texts in Mathematics. Springer New York, NY, +3 edition, 1993. +[LCB+22] M. Lambert, S. Chewi, F. Bach, S. Bonnabel, and P. Rigollet. Variational inference via Wasser- +stein gradient flows. arXiv preprint arXiv:2205.15902, 2022. +[Leb72] +N. Lebedev. Special Functions and Their Applications,. Dover Publications, 1972. +[Spo22] +V. Spokoiny. Dimension free non-asymptotic bounds on the accuracy of high dimensional Laplace +approximation. arXiv preprint arXiv:2204.11038, 2022. +[VdV00] +A. W. Van der Vaart. Asymptotic statistics, volume 3. Cambridge university press, 2000. +[WB19] +Y. Wang and D. M. Blei. 
Frequentist consistency of variational Bayes. Journal of the American Statistical Association, 114(527):1147–1161, 2019.
[ZG20] F. Zhang and C. Gao. Convergence rates of variational posterior distributions. The Annals of Statistics, 48(4):2180–2207, 2020.