Multivariate Regression via Enhanced Response Envelope: Envelope Regularization and Double Descent

Oh-Ran Kwon and Hui Zou
School of Statistics, University of Minnesota

Abstract

The envelope model provides substantial efficiency gains over standard multivariate linear regression by identifying the part of the response that is material to the model and by excluding the immaterial part. In this paper, we propose the enhanced response envelope by incorporating a novel envelope regularization term in its formulation. It is shown that the enhanced response envelope can yield better prediction risk than the original envelope estimator. The enhanced response envelope naturally handles high-dimensional data, for which the original response envelope is not serviceable without necessary remedies. In an asymptotic high-dimensional regime where the ratio of the number of predictors to the number of samples converges to a non-zero constant, we characterize the risk function and reveal an interesting double descent phenomenon for the first time for the envelope model.
A simulation study confirms our main theoretical findings. Simulations and real data applications demonstrate that the enhanced response envelope has significantly improved prediction performance over the original envelope method.

Keywords: Double descent, Envelope model, High-dimension asymptotics, Prediction, Regularization

arXiv:2301.04625v1 [stat.ME] 11 Jan 2023

1 Introduction

The envelope model, first introduced by Cook et al. (2010), is a modern approach to estimating an unknown regression coefficient matrix β ∈ R^{r×p} in the multivariate linear regression of a response vector y ∈ R^r on predictors x ∈ R^p. It was shown by Cook et al. (2010) that the envelope estimator of β yields substantial efficiency gains relative to the standard maximum likelihood estimator of β.
The gains arise from identifying the part of the response vector that is material to the regression and excluding the immaterial part in the estimation. The original envelope model was later extended by Cook et al. (2013) to an envelope model based on excluding the parts of the predictors that are immaterial to the regression. Cook et al. (2013) then established the connection between the latter envelope model and partial least squares, providing a statistical understanding of partial least squares algorithms. The success of the envelope models and their theories motivated several authors to propose new envelope models by applying or extending the core idea of envelope modeling to various statistical models. The two most common classes are the response envelope models and the predictor envelope models.
The response envelope models (respectively, predictor envelope models) achieve estimation and prediction gains by eliminating the variability arising from the immaterial part of the responses (predictors) that is invariant to changes in the predictors (responses). Papers on response envelope models include the original envelope model (Cook et al., 2010), the partial envelope model (Su and Cook, 2011), the scaled response envelope model (Cook and Su, 2013), the reduced-rank envelope model (Cook et al., 2015), the sparse envelope model (Su et al., 2016), the Bayesian envelope model (Khare et al., 2017), the tensor response envelope model (Li and Zhang, 2017), the envelope model for matrix variate regression (Ding and Cook, 2018), and the spatial envelope model for spatially correlated data (Rekabdarkolaee et al., 2020). Papers on predictor envelope models include the envelope model for predictor reduction (Cook et al.,
2013), the envelope model for generalized linear models and Cox's proportional hazards model (Cook and Zhang, 2015a), the scaled predictor envelope model (Cook and Su, 2016), the envelope quantile regression model (Ding et al., 2020), the envelope model for censored quantile regression (Zhao et al., 2022), tensor envelope partial least squares regression (Zhang and Li, 2017), and envelope-based sparse partial least squares regression (Zhu and Su, 2020). For a comprehensive review of envelope models, readers are referred to Cook (2018).

High-dimensional data have become common in many fields, so it is only natural to consider the performance of the envelope model in high dimensions. The likelihood-based method for estimating β under either the response or the predictor envelope model is not serviceable for high-dimensional data because it requires the inversion of the sample covariance matrix of the predictors.
Hence, one has to find effective ways to mitigate this issue. For the predictor envelope model, its connection to partial least squares provides one solution. Partial least squares (De Jong, 1993) can be used to estimate β in the predictor envelope model (Cook et al., 2013). The partial least squares algorithm is an iterative moment-based algorithm involving the sample covariance of the predictors and the sample covariance between the response vector and the predictors, so it does not require inversion of the sample covariance matrix of the predictors. In addition, the algorithm provides a root-n consistent estimator of β in the predictor envelope model when the number of predictors is fixed (Chun and Keleş, 2010; Cook et al., 2013), and it can yield accurate prediction in the asymptotic high-dimensional regime when the response is univariate (Cook and Forzani, 2019).
Motivated by this, Zhu and Su (2020) introduced envelope-based sparse partial least squares and showed the consistency of the estimator for the sparse predictor envelope model. Zhang and Li (2017) proposed a tensor envelope partial least squares algorithm, which provides a consistent estimator for the tensor predictor envelope model. Another way to apply predictor envelope models to high-dimensional data is to select principal components of the predictors and then use likelihood-based estimation on those components. This simple remedy was adopted by Rimal et al. (2019) to compare the prediction performance of the likelihood-based predictor envelope method, principal component regression, and partial least squares regression on high-dimensional data. Their extensive numerical study showed that this simple remedy produced better prediction performance than principal component regression and partial least squares regression. The impact of high dimensions is more severe for the response envelope.
There is far less work on making the response envelope model serviceable for high-dimensional data. The Bayesian approach to the response envelope model (Khare et al., 2017) can handle high-dimensional data. The sparse envelope model (Su et al., 2016), which performs variable selection on the responses, can handle data whose sample size is smaller than the number of responses, but it still requires the number of predictors to be smaller than the sample size.

In this paper, we propose the enhanced response envelope for high-dimensional data by incorporating a novel envelope regularization term in its formulation. The envelope regularization term respects the fundamental idea of the original envelope model by accounting for the presence of the material and immaterial parts of the response in the model. The enhancements are twofold.
First, our enhanced response envelope estimator can handle both low- and high-dimensional data, while the original envelope estimator can only handle low-dimensional data, where the number of predictors p is smaller than the sample size n. From the connection between the original envelope estimator and the enhanced response envelope estimator in low dimensions, we extend the definition of the original envelope estimator to high-dimensional data by considering the limiting case of the enhanced response envelope estimator with a vanishing regularization parameter; see the discussion in Section 2.3. Second, we prove that the enhanced response envelope can reduce the prediction risk relative to the original envelope for all values of n and p. Moreover, we study the asymptotics of the prediction risk for the original envelope estimator and the enhanced response envelope estimator when both n, p → ∞ and their ratio converges to a nonzero constant, p/n → γ ∈ (0, ∞).
This kind of asymptotic regime has been considered in high-dimensional machine learning theory (El Karoui, 2018; Dobriban and Wager, 2018; Liang and Rakhlin, 2020; Hastie et al., 2022) for analyzing the behavior of the prediction risk of certain predictive models. We derive an interesting asymptotic prediction risk curve for the envelope estimator: the risk increases as γ increases, and then decreases after γ > 1. This is known as the double descent phenomenon in the machine learning literature. Although the double descent phenomenon has been observed for neural networks and ridgeless regression (Belkin et al.,
2019; Hastie et al., 2022), this is the first time that such a phenomenon has been shown for envelope models.

The rest of the paper is organized as follows. We review the original envelope model and the corresponding envelope estimator in Section 2.1. In Section 2.2, we introduce a new regularization term called the envelope regularization, based on which we propose the enhanced response envelope in Section 2.3. The enhanced response envelope estimator naturally provides a definition of the envelope estimator when p > n.
Section 2.4 describes how to implement the new method in practice. In Section 3.1, we prove that the enhanced response envelope can yield better prediction risk than the original envelope for any (n, p) pair. Considering n, p → ∞ and p/n → γ ∈ (0, ∞), we derive the limiting prediction risk of the original envelope and the enhanced response envelope in Section 3.2. This result, together with our simulation study in Section 4, verifies the double descent phenomenon. Real data analyses are presented in Section 5. Proofs of theorems are provided in Appendix A.

2 Enhanced response envelope
2.1 Review of the envelope model

Envelope model. Let us begin with the classical multivariate linear regression model of a response vector y ∈ R^r given a predictor vector x ∈ R^p:

y = βx + ε, ε ∼ N(0, Σ), (1)

where ε is the error vector with positive definite covariance Σ, independent of x; β ∈ R^{r×p} is an unknown matrix of regression coefficients; and x ∼ P_x, where P_x is a distribution on R^p such that E(x) = 0 and Cov(x) = Σ_x. We omit an intercept by assuming E(y) = 0 for ease of exposition.

The envelope model allows for the possibility that part of the response vector is unaffected by changes in the predictor vector. Specifically, let E ⊆ R^r be a subspace such that for all x_1 and x_2,

(i) Q_E y | (x = x_1) ∼ Q_E y | (x = x_2) and (ii) P_E y ⊥⊥ Q_E y | x, (2)

where P_E is the projection onto E and Q_E = I − P_E. Condition (i) states that the marginal distribution of Q_E y is invariant to changes in x. Condition (ii) says that Q_E y does not affect P_E y once x is given.
Together, the conditions imply that P_E y carries the relevant dependence of y on x (the material part), while Q_E y carries the irrelevant information (the immaterial part). Let B = span(β). The conditions in (2) hold if and only if

B ⊆ E and Σ = P_E Σ P_E + Q_E Σ Q_E. (3)

The definition of an envelope introduced by Cook et al. (2007, 2010) formalizes the smallest subspace satisfying the conditions in (2) using the equivalence of (2) and (3). The envelope is defined as the intersection of all subspaces E satisfying (3) and is denoted by E_{Σ,B}, the Σ-envelope of B. The envelope model arises by parameterizing the multivariate linear model in terms of the envelope E_{Σ,B}. The parameterization is as follows.
Let u = dim(E_{Σ,B}), let Γ ∈ R^{r×u} be any semi-orthogonal basis matrix for E_{Σ,B}, and let Γ_0 ∈ R^{r×(r−u)} be any semi-orthogonal basis matrix for the orthogonal complement of E_{Σ,B}. Then the multivariate linear model can be written as

y = Γηx + ε, ε ∼ N(0, Γ Ω Γ^T + Γ_0 Ω_0 Γ_0^T), (4)

where β = Γη with η ∈ R^{u×p}, and Ω ∈ R^{u×u} and Ω_0 ∈ R^{(r−u)×(r−u)} are symmetric positive definite matrices. Model (4) is called the envelope model.

Envelope estimator. The parameters in the envelope model are estimated by maximizing the likelihood function from model (4). Assume that p + r < n and that the dimension u of the envelope is given. Let S_X = n^{-1} X^T X, S_Y = n^{-1} Y^T Y, S_{Y,X} = n^{-1} Y^T X, and S_{Y|X} = S_Y − S_{Y,X} S_X^{-1} S_{X,Y}, where Y ∈ R^{n×r} has rows y_i^T and X ∈ R^{n×p} has rows x_i^T. The envelope estimator of β is determined via

Ê_{Σ,B} = span{arg min_{G ∈ Gr(r,u)} (log |G^T S_{Y|X} G| + log |G^T S_Y^{-1} G|)}, (5)

where Gr(r, u) = {G ∈ R^{r×u} : G is a semi-orthogonal matrix}.
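To make the parameterization concrete, the following is a minimal NumPy sketch that simulates data from the envelope model (4). All dimensions and parameter values here are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r, u = 200, 5, 4, 2

# Draw a random orthogonal matrix and split it into a semi-orthogonal
# basis Gamma (r x u) for the envelope and its completion Gamma_0.
Q, _ = np.linalg.qr(rng.standard_normal((r, r)))
Gamma, Gamma_0 = Q[:, :u], Q[:, u:]

eta = rng.standard_normal((u, p))   # coordinates of beta: beta = Gamma @ eta
Omega = np.diag([2.0, 1.0])         # u x u material covariance
Omega_0 = np.diag([5.0, 5.0])       # (r-u) x (r-u) immaterial covariance
# Error covariance with the envelope structure of (4)
Sigma = Gamma @ Omega @ Gamma.T + Gamma_0 @ Omega_0 @ Gamma_0.T

X = rng.standard_normal((n, p))
eps = rng.multivariate_normal(np.zeros(r), Sigma, size=n)
Y = X @ (Gamma @ eta).T + eps       # y_i = beta x_i + eps_i with beta = Gamma eta
```

By construction, span(β) lies inside span(Γ) and Σ decomposes as in (3), so this data-generating process satisfies the envelope conditions exactly.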
Define Γ̂ as any semi-orthogonal basis matrix for Ê_{Σ,B} and let Γ̂_0 be any semi-orthogonal basis matrix for the orthogonal complement of Ê_{Σ,B}. The estimator of β is given by

β̂ = Γ̂ Γ̂^T S_{Y,X} S_X^{-1}, (6)

and Σ is estimated by Σ̂ = Γ̂ Ω̂ Γ̂^T + Γ̂_0 Ω̂_0 Γ̂_0^T, where

Ω̂ = Γ̂^T S_{Y|X} Γ̂, Ω̂_0 = Γ̂_0^T S_Y Γ̂_0. (7)

2.2 Envelope regularization

In this section, we introduce the envelope regularization term, which respects the fundamental idea of the envelope model by accounting for the presence of the material and immaterial parts of the response, P_{E_{Σ,B}} y and Q_{E_{Σ,B}} y, in the regression. We define the envelope regularization term as

ρ(η, Ω) = tr(η^T Ω^{-1} η). (8)

The envelope model distinguishes between P_{E_{Σ,B}} y and Q_{E_{Σ,B}} y in the estimation process. The log-likelihood function of the envelope model decomposes into two log-likelihood functions. One is the log-likelihood function for the multivariate regression of Γ^T y on x, namely Γ^T y = ηx + Γ^T ε, where Γ^T ε ∼ N(0, Ω).
The other is the log-likelihood function for the zero-mean model of Γ_0^T y, namely Γ_0^T y = Γ_0^T ε, where Γ_0^T ε ∼ N(0, Ω_0). The envelope regularization term (8) is a function of η and Ω, the parameters in the likelihood for the material part of the envelope model. The term (8) can be seen as imposing Frobenius norm regularization on the coefficient matrix after standardizing the material part of the regression to have uncorrelated errors: Ω^{-1/2} Γ^T y = Ω^{-1/2} η x + Ω^{-1/2} Γ^T ε, where Ω^{-1/2} Γ^T ε ∼ N(0, I).

We emphasize that the envelope regularization is different from ridge regularization. While the ridge penalty ∥β∥_F^2 is a quadratic function of β, the envelope regularization is not, because the components of Ω are not fixed values. The envelope regularization is a function of both η and Ω, and thus is optimized over η and Ω simultaneously, as shown in the next subsection.
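The penalty (8) itself is one line of linear algebra. A small sketch (the function name is ours):

```python
import numpy as np

def envelope_penalty(eta, Omega):
    """Envelope regularization rho(eta, Omega) = tr(eta^T Omega^{-1} eta) in (8).

    eta: (u, p) coordinate matrix, Omega: (u, u) symmetric positive definite.
    Uses a linear solve instead of forming Omega^{-1} explicitly.
    """
    return float(np.trace(eta.T @ np.linalg.solve(Omega, eta)))
```

When Ω = I the penalty reduces to the squared Frobenius norm ∥η∥_F^2; in general it equals ∥Ω^{-1/2} η∥_F^2, so its value changes with Ω, which is exactly why it must be optimized jointly over η and Ω rather than treated as a fixed ridge penalty.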
2.3 The proposed estimator

We only assume that r ≤ n; p is allowed to be bigger than n. The log-likelihood function under the envelope model (4) is

L_u(η, E_{Σ,B}, Ω, Ω_0) = −(nr/2) log(2π) − (n/2) log |Γ Ω Γ^T + Γ_0 Ω_0 Γ_0^T| − (1/2) Σ_{i=1}^n (y_i − Γηx_i)^T (Γ Ω Γ^T + Γ_0 Ω_0 Γ_0^T)^{-1} (y_i − Γηx_i).

By incorporating the envelope regularization term ρ given in the last subsection, we propose the following enhanced response envelope estimator via penalized maximum likelihood:

arg max {L_u(η, E_{Σ,B}, Ω, Ω_0) − (n/2) λ · ρ(η, Ω)}, (9)

where λ > 0 serves as a regularization parameter.

Let S_X = n^{-1} X^T X, S_Y = n^{-1} Y^T Y, S_{Y,X} = n^{-1} Y^T X, S_X^λ = S_X + λI, and S_{Y|X}^λ = S_Y − S_{Y,X} (S_X^λ)^{-1} S_{X,Y}. After some basic calculations, (9) can be expressed as

Ê_{Σ,B}(λ) = span{arg min_{G ∈ Gr(r,u)} (log |G^T S_{Y|X}^λ G| + log |G^T S_Y^{-1} G|)}, (10)

where Gr(r, u) = {G ∈ R^{r×u} : G is a semi-orthogonal matrix}. Let Γ̂_λ be any semi-orthogonal basis matrix for Ê_{Σ,B}(λ) and Γ̂_{0,λ} be any semi-orthogonal basis matrix for the orthogonal complement of Ê_{Σ,B}(λ).
The enhanced envelope estimator of β is given by

$$\hat\beta(\lambda) = \hat\Gamma_\lambda\hat\Gamma_\lambda^T S_{Y,X}(S_X^\lambda)^{-1} \quad (11)$$

and Σ is estimated by $\hat\Sigma(\lambda) = \hat\Gamma_\lambda\hat\Omega(\lambda)\hat\Gamma_\lambda^T + \hat\Gamma_{0,\lambda}\hat\Omega_0(\lambda)\hat\Gamma_{0,\lambda}^T$, where

$$\hat\Omega(\lambda) = \hat\Gamma_\lambda^T S_{Y|X}^\lambda\hat\Gamma_\lambda, \quad \hat\Omega_0(\lambda) = \hat\Gamma_{0,\lambda}^T S_Y\hat\Gamma_{0,\lambda}. \quad (12)$$

The enhanced response envelope estimator can naturally handle the case where p ≥ n − r, while the original envelope estimator (5) cannot. Motivated by the definition of ridgeless regression (Hastie et al., 2022), we can consider taking the limit of the enhanced response envelope estimator as λ → 0+:

$$\hat{\mathcal{E}}_{\Sigma,B} = \mathrm{span}\Big\{\arg\min_{G\in\mathrm{Gr}(r,u)}\lim_{\lambda\to 0^+}\big(\log|G^T S_{Y|X}^\lambda G| + \log|G^T S_Y^{-1} G|\big)\Big\}, \quad \hat\beta = \lim_{\lambda\to 0^+}\hat\beta(\lambda). \quad (13)$$

We take (13) as the definition of the envelope estimator. Obviously, when p < n − r, this extended definition recovers the original envelope estimator (5). This definition enables the use of the envelope estimator when p ≥ n − r, without altering the definition of the original envelope estimator (5) when p < n − r. In practice, we implement (13) by computing the enhanced response envelope estimator (10) with a very small value of λ such as 10⁻⁸.
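The closed form in (11) is easy to sketch numerically once a basis is in hand. Below is a minimal illustration (not the paper's R implementation): `Gamma_hat` is a placeholder semi-orthogonal basis, whereas in practice it would come from solving the Grassmannian objective in (10); all dimensions and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r, u = 50, 10, 3, 2
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, r))
lam = 0.5

# Sample moment matrices as defined in the text.
S_X = X.T @ X / n          # p x p
S_YX = Y.T @ X / n         # r x p

# Placeholder semi-orthogonal basis for the estimated envelope
# (in practice, the minimizer of the objective in (10)).
Gamma_hat = np.linalg.qr(rng.standard_normal((r, u)))[0]

# Equation (11): beta_hat(lambda) = Gamma Gamma^T S_{Y,X} (S_X + lambda I)^{-1}.
beta_hat = Gamma_hat @ Gamma_hat.T @ S_YX @ np.linalg.inv(S_X + lam * np.eye(p))

# Taking lambda very small mimics the ridgeless-style definition (13).
beta_small_lam = Gamma_hat @ Gamma_hat.T @ S_YX @ np.linalg.inv(S_X + 1e-8 * np.eye(p))
```

Note that, by construction, the columns of $\hat\beta(\lambda)$ lie in the span of $\hat\Gamma_\lambda$, which is the projection step that excludes the immaterial part of the response.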
As the enhanced response envelope estimator (9) has flexibility in λ, the enhanced response envelope estimator with an appropriate choice of λ can yield better prediction risk than the envelope estimator, which is discussed in Section 3. We discuss the Grassmannian manifold optimization required in (10) in the next subsection.

2.4 Implementation

Suppose that the dimension u is specified and λ is given. Our proposed estimator $\hat{\mathcal{E}}_{\Sigma,B}(\lambda)$ for $\mathcal{E}_\Sigma(B)$ requires optimization over the Grassmannian Gr(r, u). Such a computational problem exists for the original envelope model as well. So far, the best-known algorithm for solving envelope models is the algorithm introduced by Cook et al. (2016). Thus, we employ their algorithm to compute $\hat{\mathcal{E}}_{\Sigma,B}(\lambda)$ in (10).
Note that we standardize X so that each column has a mean of 0 and a standard deviation of 1 before fitting any model. In practice, the tuning parameter λ and the dimension u of the envelope are unknown. We use cross-validation to choose (u, λ). For the original envelope, u can be selected using AIC, BIC, LRT or cross-validation; BIC and LRT may be preferred, as shown by simulations in Su and Cook (2013). Because the enhanced response envelope model has an additional tuning parameter λ, we propose to use cross-validation to find the best tuning parameter combination of u and λ. We have implemented the enhanced response envelope method in R, and the code is available upon request.

3 Theory

In this section, we show that the enhanced response envelope can reduce the prediction risk over the envelope for any (n, p) pair.
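The (u, λ) selection described above is a standard grid search. As a language-agnostic sketch (the paper's implementation is in R; the `fit`/`predict` callables here are hypothetical placeholders for an enhanced-response-envelope fitter), k-fold cross-validation over the joint grid looks like:

```python
import numpy as np
from itertools import product

def cv_select(X, Y, fit, predict, u_grid, lam_grid, k=10, seed=0):
    """Generic k-fold CV over the (u, lambda) grid.

    fit(X, Y, u, lam) -> model and predict(model, X) -> Y_hat are
    placeholders for an envelope fitter; they are not defined here.
    """
    n = X.shape[0]
    folds = np.array_split(np.random.default_rng(seed).permutation(n), k)
    best, best_err = None, np.inf
    for u, lam in product(u_grid, lam_grid):
        err = 0.0
        for held in folds:
            train = np.setdiff1d(np.arange(n), held)
            model = fit(X[train], Y[train], u, lam)
            err += np.mean((predict(model, X[held]) - Y[held]) ** 2)
        if err < best_err:
            best, best_err = (u, lam), err
    return best

# A lambda grid equally spaced on the log10 scale, as used in Section 4;
# the endpoints -4 and 2 are illustrative assumptions.
lam_grid = np.logspace(-4, 2, 100)
```

The grid endpoints and fold count are tuning choices, not values prescribed by the paper.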
We then consider the asymptotic setting where n, p → ∞ with p/n → γ ∈ (0, ∞). This asymptotic regime has been considered in the literature (El Karoui, 2018; Dobriban and Wager, 2018; Liang and Rakhlin, 2020; Hastie et al., 2022) for analyzing the behavior of the prediction risk of certain predictive models. In our discussion, we consider the case where $\mathcal{E}_\Sigma(B)$ is known, which has been assumed in the existing envelope papers to understand the core mechanism of envelope methodologies (Cook et al., 2013; Cook and Zhang, 2015a,b).
3.1 Reduction in prediction risk

Consider a test point $x_{new} \sim P_x$. For an estimator $\hat\beta$, we define the prediction risk as $R(\hat\beta|X) = E[\|\hat\beta x_{new} - \beta x_{new}\|^2 \mid X]$. Note that this definition admits the bias-variance decomposition $R(\hat\beta|X) = \|\mathrm{bias}(\mathrm{vec}(\hat\beta)|X)\|^2 + \mathrm{tr}\{\mathrm{Var}(\mathrm{vec}(\hat\beta)|X)\}$. Let Γ be a semi-orthogonal basis matrix for $\mathcal{E}_{\Sigma,B}$. Following the discussion in Section 2.3, we take (13) as the definition of the envelope estimator $\hat\beta_\Gamma$. The prediction risk of $\hat\beta_\Gamma$ is

$$R(\hat\beta_\Gamma|X) = \underbrace{\mathrm{vec}^T(\beta)[\Pi_X\Sigma_x\Pi_X \otimes I_r]\mathrm{vec}(\beta)}_{\text{bias}^2} + \underbrace{\frac{\mathrm{tr}(\Omega)}{n}\mathrm{tr}(S_X^+\Sigma_x)}_{\text{var}},$$

where $\Pi_X = I_p - S_X^+ S_X$. The prediction risk of the enhanced response envelope estimator $\hat\beta_\Gamma(\lambda)$ is

$$R(\hat\beta_\Gamma(\lambda)|X) = E[\|\hat\beta_\Gamma(\lambda)x_{new} - \beta x_{new}\|^2 \mid X] = \underbrace{\lambda^2\,\mathrm{vec}^T(\beta)[(S_X+\lambda I)^{-1}\Sigma_x(S_X+\lambda I)^{-1} \otimes I_r]\mathrm{vec}(\beta)}_{\text{bias}^2} + \underbrace{\frac{\mathrm{tr}(\Omega)}{n}\mathrm{tr}(\Sigma_x S_X(S_X+\lambda I)^{-2})}_{\text{var}}. \quad (14)$$
Theorem 1 shows that using the envelope regularization always improves the prediction risk of the envelope model.

Theorem 1. There always exists a λ > 0 such that $R(\hat\beta_\Gamma(\lambda)|X) < R(\hat\beta_\Gamma|X)$.

3.2 Limiting prediction risk and double descent phenomenon

The asymptotics of the envelope model are well established in the case where n diverges while p is fixed (Cook et al., 2010), but not in a high-dimensional asymptotic setup. In this section, we examine the limiting risk of both the enhanced response envelope estimator and the envelope estimator in the high-dimensional asymptotic regime where n, p → ∞ with p/n → γ ∈ (0, ∞). The number of response variables r is fixed.
This kind of asymptotic regime has been considered in high-dimensional machine learning theory (El Karoui, 2018; Dobriban and Wager, 2018; Liang and Rakhlin, 2020; Hastie et al., 2022) for analyzing the behavior of the prediction risk of certain predictive models. Let $x = \Sigma_x^{1/2}x^*$, where $E(x^*) = 0$ and $\mathrm{Cov}(x^*) = I_p$. Then the envelope model (4) of y on x can be expressed as the envelope model of y on $x^*$: $y = \Gamma\eta x + \varepsilon = \Gamma\eta^* x^* + \varepsilon$, where $\eta^* = \eta\Sigma_x^{1/2}$ and $\varepsilon \sim N(0, \Gamma\Omega\Gamma^T + \Gamma_0\Omega_0\Gamma_0^T)$. We take advantage of this invariance property of the envelope model in the analysis. Considering the envelope on $(y, x^*)$ amounts to assuming the covariance of the predictor is $I_p$.
Figure 1: The limiting prediction risks of the enhanced response envelope with $\lambda^* = \mathrm{tr}(\Omega)\gamma/c^2$ (gray solid line) and the envelope (black solid line), illustrating Theorem 2 when tr(Ω) = 10 and $\mathrm{tr}(\beta^T\beta) = 10$.

Theorem 2. Assume that x has a bounded 4th moment and that $\mathrm{tr}(\eta^T\eta) = c^2$ for all n, p. Then as n, p → ∞ such that p/n → γ ∈ (0, ∞), almost surely,

$$R(\hat\beta_\Gamma|X) \to \begin{cases} \mathrm{tr}(\Omega)\dfrac{\gamma}{1-\gamma} & \text{for } \gamma < 1, \\[4pt] c^2\Big(1 - \dfrac{1}{\gamma}\Big) + \mathrm{tr}(\Omega)\dfrac{1}{\gamma-1} & \text{for } \gamma > 1, \end{cases}$$

and $R(\hat\beta_\Gamma(\lambda^*)|X) \to \mathrm{tr}(\Omega)\,\gamma\, m(-\lambda^*)$, where $\lambda^* = \mathrm{tr}(\Omega)\gamma/c^2$ and

$$m(z) = \frac{1-\gamma-z-\sqrt{(1-\gamma-z)^2 - 4\gamma z}}{2\gamma z}.$$

Figure 1 visualizes the limiting prediction risk curves in Theorem 2. It plots the limiting risks of the envelope (black solid line) and the enhanced response envelope with $\lambda^* = \mathrm{tr}(\Omega)\gamma/c^2$ (dark-gray solid line), when tr(Ω) = 10 and $\mathrm{tr}(\eta^T\eta) = 10$. We have four remarks on Theorem 2.
First, the limiting risk of the envelope increases before γ = 1 and then decreases after γ = 1. The double descent phenomenon has been observed in popular methods such as neural networks, kernel machines and ridgeless regression (Belkin et al., 2019; Hastie et al., 2022), but this is the first time that such a result is established in the envelope literature. Second, the enhanced response envelope estimator always has a better asymptotic prediction risk than the envelope estimator (for any c², tr(Ω), and γ). Third, in Theorem 1, we show the existence of a λ that gives the enhanced response envelope a smaller prediction risk than the envelope estimator; in this asymptotic regime, we specify such a λ value: $\lambda^* = \mathrm{tr}(\Omega)\gamma/c^2$.
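The limiting risk formulas in Theorem 2 can be evaluated directly. The sketch below (the function names are ours, and γ = 1 is excluded since the envelope risk formula is stated for γ ≠ 1) reproduces the curves in Figure 1 for tr(Ω) = 10 and c² = 10 and shows the enhanced estimator's advantage:

```python
import numpy as np

def envelope_limit_risk(gamma, tr_Omega, c2):
    """Limiting risk of the envelope estimator from Theorem 2 (gamma != 1)."""
    if gamma < 1:
        return tr_Omega * gamma / (1 - gamma)
    return c2 * (1 - 1 / gamma) + tr_Omega / (gamma - 1)

def m(z, gamma):
    """m(z) = (1 - gamma - z - sqrt((1 - gamma - z)^2 - 4*gamma*z)) / (2*gamma*z)."""
    return (1 - gamma - z - np.sqrt((1 - gamma - z) ** 2 - 4 * gamma * z)) / (2 * gamma * z)

def enhanced_limit_risk(gamma, tr_Omega, c2):
    """Limiting risk of the enhanced estimator at lambda* = tr(Omega)*gamma/c^2."""
    lam_star = tr_Omega * gamma / c2
    return tr_Omega * gamma * m(-lam_star, gamma)

# Figure 1 setting: tr(Omega) = 10, c^2 = tr(eta^T eta) = 10.
for g in (0.5, 2.0, 4.0):
    print(g, envelope_limit_risk(g, 10, 10), enhanced_limit_risk(g, 10, 10))
```

For instance, at γ = 0.5 the envelope's limiting risk is 10 while the enhanced estimator's is about 4.14, and the envelope risk blows up as γ → 1 while the enhanced risk stays bounded.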
Lastly, the gap between the two limiting prediction risks, $\lim_{n,p\to\infty} R(\hat\beta_\Gamma|X)$ and $\lim_{n,p\to\infty} R(\hat\beta_\Gamma(\lambda^*)|X)$, increases as γ increases from 0 to 1. This is easy to see, as $\frac{1}{1-\gamma} > m(-\lambda^*)$ for 0 < γ < 1.

4 Simulation

In this section, we use simulations to compare the performance of the enhanced response envelope estimator and the envelope estimator in terms of the prediction risk, $E[\|\hat\beta x_{new} - \beta x_{new}\|^2 \mid X] = \mathrm{tr}[(\hat\beta - \beta)\mathrm{Cov}(x_{new})(\hat\beta - \beta)^T]$. In addition, we use simulations to give a numerical illustration of the double descent phenomenon to confirm the asymptotic theory. We consider a setting where $y_i \in \mathbb{R}^3$ is generated from the model $y_i = \beta x_i + \varepsilon_i$, $\varepsilon_i \sim N(0, \Sigma)$, i = 1, ..., n, and $x_i \in \mathbb{R}^p$ is generated independently from $x_i \sim N(0, \Sigma_x(\rho))$, where the (i, j)th element of $\Sigma_x(\rho) \in \mathbb{R}^{p\times p}$ is $\rho^{|i-j|}$.
The covariance matrix Σ is set using three orthonormal vectors and has eigenvalues 10, 8 and 2. The columns of Γ are the second and third eigenvectors of Σ. Each component of $\tilde\eta \in \mathbb{R}^{2\times p}$ is generated from the standard normal distribution. We then set $\eta = \sqrt{10}\cdot\tilde\eta/\|\tilde\eta\|_F$. In this setting, $\mathrm{tr}(\eta^T\eta) = 10$, tr(Ω) = 10, and tr(Ω₀) = 10. We assume that $\dim(\mathcal{E}_{\Sigma,B}) = 2$ is known.

Prediction risk comparison. In this simulation, we try different combinations of n, p and ρ where n ∈ {50, 90, 200, 500}, p/n ∈ {0.1, 0.8, 1.2} and ρ ∈ {0, 0.8}.
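The data-generating process above can be sketched as follows (a minimal illustration of the simulation design, not the authors' R script; the specific n, p, ρ values and the random orthonormal basis are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, r, u, rho = 200, 20, 3, 2, 0.8

# AR(1)-type predictor covariance: (i, j)th element is rho^|i-j|.
idx = np.arange(p)
Sigma_x = rho ** np.abs(idx[:, None] - idx[None, :])
X = rng.multivariate_normal(np.zeros(p), Sigma_x, size=n)

# Random orthonormal basis of R^3; Sigma has eigenvalues 10, 8, 2 and
# Gamma spans the second and third eigenvectors, as in the text.
Q = np.linalg.qr(rng.standard_normal((r, r)))[0]
Sigma = Q @ np.diag([10.0, 8.0, 2.0]) @ Q.T
Gamma = Q[:, 1:3]

# eta scaled so that tr(eta^T eta) = 10.
eta_tilde = rng.standard_normal((u, p))
eta = np.sqrt(10.0) * eta_tilde / np.linalg.norm(eta_tilde, "fro")
beta = Gamma @ eta

# y_i = beta x_i + eps_i with eps_i ~ N(0, Sigma).
E = rng.multivariate_normal(np.zeros(r), Sigma, size=n)
Y = X @ beta.T + E
```

With this construction, $\Omega = \Gamma^T\Sigma\Gamma = \mathrm{diag}(8, 2)$, so tr(Ω) = 10 and tr(Ω₀) = 10, matching the stated setting.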
We compare the prediction risk of the enhanced response envelope estimator to three different estimators: the envelope estimator, multivariate linear regression, and multivariate ridge regression. For the enhanced response envelope and the multivariate ridge regression, we perform ten-fold cross-validation on simulated data to select λ among 100 candidate λ-values equally spaced on the log₁₀ scale. We compute the envelope estimator for data with n ≤ p − r by taking a very small value of λ = 10⁻⁸ in the enhanced response envelope estimator. We fit the multivariate regression model to n < p data by taking a tiny value of λ = 10⁻⁸ in the multivariate ridge regression. We then calculate the prediction risk. This process is repeated 100 times. In Table 1, we report the prediction risk averaged over 100 replications.
First, we see that the prediction risks from the enhanced response envelope are consistently smaller than those from the envelope, as indicated by Theorem 1. Second, the enhanced response envelope consistently gives smaller prediction risks than the multivariate ridge regression. When u = r, the enhanced response envelope model reduces to the multivariate ridge regression. Therefore, the prediction risk of the enhanced envelope model can be smaller than that of multivariate ridge regression as long as tr(Ω₀) > 0.

Double descent confirmation. This simulation is designed to support Theorem 2 and to illustrate the double descent phenomenon in the envelope model. We set n ∈ {200, 2000} and ρ = 0. p/n varies from 0.1 to 8.
We compute the envelope and the enhanced response envelope, setting $\lambda^* = \mathrm{tr}(\Omega)p/(nc^2) = p/n$, on the simulated data. We then calculate the prediction risk for each estimator. Again, we fit n ≤ p − r data with the envelope estimator by taking a very small value of λ = 10⁻⁸ in the enhanced response envelope estimator. Figure 2 displays the prediction risks for n = 2000 with various p values. The gray rectangle points denote the prediction risk of the enhanced response envelope estimator, and the black triangle points denote the prediction risk of the envelope estimator. We see a fascinating double descent prediction risk curve for the envelope model, as discussed in Theorem 2. Also, the enhanced response envelope gives a smaller prediction risk across the entire range of p/n.
Figure 3 plots the prediction risk curves for n = 200. We see that Figure 3 exhibits the same messages for the much smaller sample size. Although Theorem 2 is established when considering $\mathcal{E}_{\Sigma,B}$ to be known, we did not use this information in the actual estimation in the simulation study, yet the core message of Theorem 2 is confirmed by the simulation.

Table 1: Prediction risk averaged over 100 replications.

n | p | Enhanced envelope | Envelope | Multivariate linear reg | Multivariate ridge reg

Example 1: p/n = 0.1, ρ = 0
50 | 5 | 1.31 (0.11) | 1.40 (0.12) | 2.39 (0.17) | 2.04 (0.12)
90 | 9 | 1.24 (0.08) | 1.41 (0.10) | 2.33 (0.13) | 1.92 (0.09)
200 | 20 | 1.16 (0.04) | 1.26 (0.05) | 2.31 (0.05) | 1.93 (0.04)
500 | 50 | 1.06 (0.03) | 1.18 (0.03) | 2.28 (0.04) | 1.85 (0.04)

Example 2: p/n = 0.8, ρ = 0
50 | 40 | 6.73 (0.18) | 60.89 (5.80) | 104.45 (7.09) | 7.16 (0.11)
90 | 72 | 6.44 (0.14) | 55.10 (2.93) | 94.81 (3.24) | 7.05 (0.08)
200 | 160 | 5.86 (0.10) | 42.50 (0.99) | 81.33 (1.63) | 6.91 (0.06)
500 | 400 | 5.67 (0.04) | 40.61 (0.85) | 79.17 (1.11) | 6.89 (0.03)

Example 3: p/n = 1.2, ρ = 0
50 | 60 | 8.02 (0.23) | 33.70 (1.33) | 93.79 (3.83) | 8.08 (0.11)
90 | 108 | 7.58 (0.13) | 41.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='01 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='36) 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='38 (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='60) 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='98 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='07) 200 240 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='02 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='07) 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='91 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='18) 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='94 (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='83) 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='82 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='04) 500 600 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='78 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='04) 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='43 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='91) 103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='33 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='55) 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='75 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) Example 4: p/n = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='1, ρ = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='8 50 5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='76 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='11) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='98 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='19) 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='39 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='17) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='84 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='07) 90 9 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='02 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='05) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='40 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='08) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='33 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='13) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='45 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='06) 200 20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='90 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='30 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='04) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='31 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='05) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='31 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) 500 50 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='78 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='02) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='19 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='28 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='04) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='22 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='02) Example 5: p/n = 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='8, ρ = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='8 50 40 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='16 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='17) 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='50 (6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='34) 104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='45 (7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='09) 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='76 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='12) 90 72 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='78 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='15) 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='14 (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='85) 94.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='81 (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='24) 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='63 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='10) 200 160 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='32 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='05) 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='40 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='99) 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='33 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='63) 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='28 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='05) 500 400 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='09 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) 40.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='69 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='87) 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='17 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='11) 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='05 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) Example 6: p/n = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='2, ρ = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='8 50 60 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='24 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='23) 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='43 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='12) 104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='17 (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='37) 5.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='80 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='14) 90 108 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='41 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='12) 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='84 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='68) 103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='01 (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='16 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='09) 200 240 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='05 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='07) 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='46 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='30) 109.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='34 (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='17) 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='98 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='06) 500 600 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='86 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='21 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='98) 112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='62 (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='71) 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='82 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='03) Table 1: Prediction risk, averaged over 100 replications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' The standard error is given in paren- theses.' 
For n ≤ p − r data, we compute the envelope by taking a very small value of λ = 10⁻⁸ in the enhanced response envelope; see the definition of the envelope estimator (13) in Section 2.3. For n < p data, we fit the multivariate regression model by taking a tiny value of λ = 10⁻⁸ in the multivariate ridge regression.

Figure 2: Prediction risk of the envelope and the enhanced response envelope with λ∗ = tr(Ω)p/(nc2), when n = 2000 and p varies. For n ≤ p − r data, we fit the envelope by taking a very small value of λ = 10⁻⁸ in the enhanced response envelope estimator; see the definition of the envelope estimator (13) in Section 2.3.
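The λ = 10⁻⁸ device in the table and figure notes can be checked numerically for the ridge part: as the penalty vanishes, multivariate ridge regression converges to the minimum-norm least-squares fit when n < p, which is the sense in which the unpenalized multivariate regression is computed above. The sketch below is only an illustration on simulated data, not the authors' code; the envelope step is omitted, and `ridge_coef` and the chosen dimensions are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 40, 60, 5                     # n < p, as in the high-dimensional examples
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, r))

def ridge_coef(X, Y, lam):
    """Multivariate ridge coefficients, one shared penalty for all r responses."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)  # (p, r)

B_tiny = ridge_coef(X, Y, 1e-8)         # ridge with a tiny lambda
B_minnorm = np.linalg.pinv(X) @ Y       # minimum-norm least squares

# The maximum discrepancy is O(lambda), i.e. numerically negligible.
print(np.max(np.abs(B_tiny - B_minnorm)))
```

Because XᵀY lies in the row space of X, the tiny ridge penalty only perturbs the fitted coefficients by order λ, so λ = 10⁻⁸ is effectively the unregularized fit.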
5 Real data

In this section, we use two real datasets to illustrate the enhanced response envelope estimator. In the next subsection, we use air pollution data in which the number of samples is bigger than the number of predictors (n > p). In Subsection 5.2, we analyze near-infrared spectroscopy data in which the number of predictors is much bigger than the number of samples (p ≫ n). We compare the prediction performance of the enhanced response envelope estimator to the envelope estimator, multivariate regression, and multivariate ridge regression.

5.1 Air pollution data

The air pollution data are available and obtained directly from Table 1.5 of Johnson et al. (2002).
The response vector y ∈ R⁵ consists of atmospheric concentrations of CO, NO, NO2, O3, and HC, recorded at noon in the Los Angeles area on 42 different days. The two predictors are wind speed and solar radiation. These data were analyzed in Cook (2018) to illustrate the effectiveness of the original envelope model compared to the standard multivariate regression model.

Figure 3: Prediction risk of the envelope and the enhanced response envelope with λ∗ = tr(Ω)p/(nc2), when n = 200 and p varies. For n ≤ p − r data, we fit the envelope by taking a very small value of λ = 10⁻⁸ in the enhanced response envelope estimator; see the definition of the envelope estimator (13) in Section 2.3.
They showed that the asymptotic standard errors of the estimated components of β from the envelope model are significantly reduced compared to those from the standard multivariate regression model. We use the data to predict atmospheric concentrations from wind speed and solar radiation and compare the prediction performance of the enhanced response envelope estimator to the envelope estimator, the standard multivariate regression, and multivariate ridge regression.

To compare prediction performance, we borrow the nested cross-validation idea (Wang and Zou, 2021; Bates et al., 2021), in which an inner cross-validation is performed to tune a model and an outer cross-validation is performed to estimate the prediction error of the tuned model. We adopt the leave-one-out cross-validation (LOOCV) procedure for the outer loop because the LOOCV error is an unbiased estimator of the generalization error of the tuned model and has been shown to perform well compared to other methods for estimating generalization errors (Wang and Zou, 2021).

        Enhanced envelope   Envelope   Multivariate linear reg   Multivariate ridge reg
Error        8.859           8.951            9.192                     9.124

Table 2: Air pollution data: prediction error of the enhanced response envelope method, the original envelope method, the multivariate linear regression, and the multivariate ridge regression.

We take the ith observation out from the data and set the remaining n − 1 observations as the training set to fit and tune models. We standardize X of the training set so that each column has mean 0 and standard deviation 1. We perform ten-fold cross-validation to select (u, λ) for the enhanced response envelope from a fine grid of u ∈ {0, …, 5} and 20 candidate λ-values equally spaced on a logarithm base 10 scale. For the envelope, we perform ten-fold cross-validation to choose u from {0, …, 5}. For the multivariate ridge model, ten-fold cross-validation is performed to select λ from 20 λ-values equally spaced on a logarithm base 10 scale. The ith observation we take out at the beginning is set as the test set. We standardize xi of the test set using the mean and standard deviation of the training data. We then calculate the squared prediction error, ∥yi − ˆβ(−i)xi∥₂²/r, where ˆβ(−i) is the estimated regression coefficient derived from the training set.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' We repeat this process for i = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' , n and report �n i=1 ∥yi − ˆβ(−i)xi∥2 2/(nr) in Table 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' We see that the enhanced response envelope estimator gives the smallest prediction error among all competitors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='2 Near-infrared spectroscopy data of fresh cattle manure Near-infrared spectroscopy data of cattle manure were collected by Gog´e et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content=' The data are available in the Data INRAE Repository at https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1tE3T4oBgHgl3EQfngpH/content/2301.04625v1.pdf'} +page_content='15454/JIGO8R.' 
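The leave-one-out evaluation loop described above can be sketched as follows. This is our own minimal illustration on simulated data: the function names (`ridge_fit`, `loo_prediction_error`) are ours, a plain multivariate ridge fit with a fixed λ stands in for the cross-validation-tuned estimators, and the inner ten-fold tuning step is omitted for brevity.

```python
import numpy as np

def ridge_fit(X, Y, lam):
    """Multivariate ridge estimate of the (r, p) coefficient matrix B in y = B x + e."""
    p = X.shape[1]
    return Y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(p))

def loo_prediction_error(X, Y, lam=0.1):
    """(1/(n r)) * sum_i ||y_i - yhat_i^{(-i)}||^2, standardizing the predictors and
    centering the responses with the training fold's statistics, as in Section 5."""
    n, r = Y.shape
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        Xtr, Ytr = X[mask], Y[mask]
        mu_x, sd_x = Xtr.mean(axis=0), Xtr.std(axis=0)
        mu_y = Ytr.mean(axis=0)
        B = ridge_fit((Xtr - mu_x) / sd_x, Ytr - mu_y, lam)
        pred = mu_y + B @ ((X[i] - mu_x) / sd_x)   # predict the held-out observation
        total += np.sum((Y[i] - pred) ** 2)
    return total / (n * r)

# Simulated data in place of the air pollution measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
B_true = rng.normal(size=(3, 5))
Y = X @ B_true.T + 0.1 * rng.normal(size=(40, 3))
err = loo_prediction_error(X, Y)
```

The reported error has the same form as the quantity in Table 2, averaged over both observations and response coordinates.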
These data contain 73 cattle manure samples that were analyzed by near-infrared spectroscopy using a NIRFlex device. Near-infrared spectra were recorded every 2 nm from 1100 to 2498 nm on fresh homogenized samples. In addition, the cattle manure samples were analyzed for three chemical properties: the amount of dry matter, magnesium oxide, and potassium oxide. We use the data of the 62 cattle manure samples that have no missing values. We standardize each chemical property to have a sample mean of 0 and a standard deviation of 1. In our analysis, we consider the multivariate linear model, where xi ∈ R700 is the vector of near-infrared spectroscopy measurements and yi ∈ R3 is the vector of the three chemical measurements, to predict the three chemical properties from the absorbance spectra.

Enhanced envelope   Envelope   Multivariate linear reg   Multivariate ridge reg
Error   0.437   0.460   0.692   0.492

Table 3: Near-infrared spectroscopy data: prediction error from the enhanced response envelope method, the envelope method, the multivariate linear regression, and the multivariate ridge regression. We compute the envelope estimator by taking a very small value of λ = 10−8 in the enhanced response envelope estimator; see the definition of the envelope estimator (13) in Section 2.3. We fit the multivariate regression model by taking a very small value of λ = 10−8 in the multivariate ridge regression.
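The device of recovering an unpenalized fit as the λ → 0 limit of a regularized one, used in the note to Table 3, is easy to check numerically for the multivariate ridge estimator. The sketch below is a generic illustration on our own simulated data, not code from the paper: with λ = 10−8 the ridge coefficients agree with the exact least-squares solution to within floating-point noise.

```python
import numpy as np

def multivariate_ridge(X, Y, lam):
    """Ridge estimate of the (r, p) coefficient matrix in y = beta x + error."""
    p = X.shape[1]
    return Y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(p))

rng = np.random.default_rng(1)
n, p, r = 50, 4, 3
X = rng.normal(size=(n, p))
Y = X @ rng.normal(size=(p, r)) + 0.05 * rng.normal(size=(n, r))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0].T   # exact multivariate least squares
B_almost = multivariate_ridge(X, Y, 1e-8)        # ridge with a very small lambda
gap = np.max(np.abs(B_ols - B_almost))           # should be negligible
```

Here n is much larger than p, so the least-squares problem is well posed and the tiny penalty changes the answer only at machine-precision scale.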
In Table 3, we report the prediction error, which is calculated using the same procedure described in the previous subsection, except that u is chosen from {0, . . . , 3}. Again, we see that the enhanced response envelope estimator has the smallest prediction error among all competitors.

6 Discussion

In this paper, we have developed a novel envelope regularization function which is used to define the enhanced envelope estimator. We have shown that the enhanced envelope estimator is indeed better than the un-regularized envelope estimator in prediction. The asymptotic analysis of the risk function of the envelope reveals, for the first time in the envelope literature, an interesting double descent phenomenon.
The numeric examples in this work also suggest that the enhanced response envelope estimator is a promising new tool for multivariate regression.

Although this paper is focused on the case where the number of responses (r) is less than the number of samples and the number of predictors, it is interesting to consider the case when r → ∞ in ultrahigh-dimensional problems. Su et al. (2016) studied the response envelope for r → ∞ but p fixed. When both p, r > n and diverge, there are additional technical issues to be addressed. For example, we may need another penalty term to handle the issues caused by the large r in the model. This direction of research will be investigated in a separate paper.

References

Bai, Z., Miao, B., and Pan, G. (2007), "On asymptotics of eigenvectors of large sample covariance matrix," The Annals of Probability, 35, 1532–1572.

Bai, Z.-D. and Yin, Y.-Q. (2008), "Limit of the smallest eigenvalue of a large dimensional sample covariance matrix," in Advances in Statistics, World Scientific, pp. 108–127.

Bates, S., Hastie, T., and Tibshirani, R. (2021), "Cross-validation: what does it estimate and how well does it do it?" arXiv preprint arXiv:2104.00673.

Belkin, M., Hsu, D., Ma, S., and Mandal, S. (2019), "Reconciling modern machine-learning practice and the classical bias–variance trade-off," Proceedings of the National Academy of Sciences, 116, 15849–15854.

Chun, H. and Keleş, S. (2010), "Sparse partial least squares regression for simultaneous dimension reduction and variable selection," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72, 3–25.

Cook, R. (2018), An Introduction to Envelopes: Dimension Reduction for Efficient Estimation in Multivariate Statistics, Wiley Series in Probability and Statistics, Wiley.

Cook, R. D. and Forzani, L. (2019), "Partial least squares prediction in high-dimensional regression," The Annals of Statistics, 47, 884–908.

Cook, R. D., Forzani, L., and Su, Z. (2016), "A note on fast envelope estimation," Journal of Multivariate Analysis, 150, 42–54.

Cook, R. D., Forzani, L., and Zhang, X. (2015), "Envelopes and reduced-rank regression," Biometrika, 102, 439–456.

Cook, R. D., Helland, I., and Su, Z. (2013), "Envelopes and partial least squares regression," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75, 851–877.

Cook, R. D., Li, B., and Chiaromonte, F. (2007), "Dimension reduction in regression without matrix inversion," Biometrika, 94, 569–584.

— (2010), "Envelope models for parsimonious and efficient multivariate linear regression," Statistica Sinica, 927–960.

Cook, R. D. and Su, Z. (2013), "Scaled envelopes: scale-invariant and efficient estimation in multivariate linear regression," Biometrika, 100, 939–954.

— (2016), "Scaled predictor envelopes and partial least-squares regression," Technometrics, 58, 155–165.

Cook, R. D. and Zhang, X. (2015a), "Foundations for envelope models and methods," Journal of the American Statistical Association, 110, 599–611.

— (2015b), "Simultaneous envelopes for multivariate linear regression," Technometrics, 57, 11–25.

De Jong, S. (1993), "SIMPLS: an alternative approach to partial least squares regression," Chemometrics and Intelligent Laboratory Systems, 18, 251–263.

Ding, S. and Cook, R. D. (2018), "Matrix variate regressions and envelope models," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80, 387–408.

Ding, S., Su, Z., Zhu, G., and Wang, L. (2020), "Envelope quantile regression," Statistica Sinica.

Dobriban, E. and Wager, S. (2018), "High-dimensional asymptotics of prediction: ridge regression and classification," The Annals of Statistics, 46, 247–279.
El Karoui, N. (2018), "On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators," Probability Theory and Related Fields, 170, 95–175.

Gogé, F., Thuriès, L., Fouad, Y., Damay, N., Davrieux, F., Moussard, G., Le Roux, C., Trupin-Maudemain, S., Valé, M., and Morvan, T. (2021), "Dataset of chemical and near-infrared spectroscopy measurements of fresh and dried poultry and cattle manure," Data in Brief, 34, 106647.

Hastie, T., Montanari, A., Rosset, S., and Tibshirani, R. J. (2022), "Surprises in high-dimensional ridgeless least squares interpolation," The Annals of Statistics, 50, 949–986.

Johnson, R. A., Wichern, D. W., et al. (2002), Applied Multivariate Statistical Analysis, vol. 5, Prentice Hall, Upper Saddle River, NJ.

Khare, K., Pal, S., and Su, Z. (2017), "A Bayesian approach for envelope models," The Annals of Statistics, 196–222.

Li, L. and Zhang, X. (2017), "Parsimonious tensor response regression," Journal of the American Statistical Association, 112, 1131–1146.

Liang, T. and Rakhlin, A. (2020), "Just interpolate: kernel 'ridgeless' regression can generalize," The Annals of Statistics, 48, 1329–1347.

Rekabdarkolaee, H. M., Wang, Q., Naji, Z., and Fuente, M. (2020), "New parsimonious multivariate spatial model," Statistica Sinica, 30, 1583–1604.

Rimal, R., Almøy, T., and Sæbø, S. (2019), "Comparison of multi-response prediction methods," Chemometrics and Intelligent Laboratory Systems, 190, 10–21.
Su, Z. and Cook, R. D. (2011), “Partial envelopes for efficient estimation in multivariate linear regression,” Biometrika, 98, 133–146.

— (2013), “Estimation of multivariate means with heteroscedastic errors using envelope models,” Statistica Sinica, 213–230.

Su, Z., Zhu, G., Chen, X., and Yang, Y. (2016), “Sparse envelope model: efficient estimation and response variable selection in multivariate linear regression,” Biometrika, 103, 579–593.

Wang, B. and Zou, H. (2021), “Honest leave-one-out cross-validation for estimating post-tuning generalization error,” Stat, 10, e413.

Zhang, X. and Li, L. (2017), “Tensor envelope partial least-squares regression,” Technometrics, 59, 426–436.

Zhao, Y., Van Keilegom, I., and Ding, S. (2022), “Envelopes for censored quantile regression,” Scandinavian Journal of Statistics.

Zhu, G. and Su, Z. (2020), “Envelope-based sparse partial least squares,” The Annals of Statistics, 48, 161–182.

A Proofs of Theorems

A.1 Proof of Theorem 1

Note that
\[
R(\hat{\beta}_{\Gamma}(\lambda)\mid X) = \lambda^2\,\mathrm{tr}\big(\beta(S_X+\lambda I)^{-1}\Sigma_x(S_X+\lambda I)^{-1}\beta^T\big) + \frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}\big(\Sigma_x S_X(S_X+\lambda I)^{-2}\big).
\]
Therefore, we have
\[
\frac{\partial}{\partial\lambda}R(\hat{\beta}_{\Gamma}(\lambda)\mid X) = 2\lambda\,\mathrm{tr}\big(\beta S_X(S_X+\lambda I)^{-2}\Sigma_x(S_X+\lambda I)^{-1}\beta^T\big) - \frac{2\,\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}\big(\Sigma_x S_X(S_X+\lambda I)^{-3}\big)
\le \sum_{i=1}^{p}\Big(2\lambda\,\sigma_i(\beta^T\beta) - \frac{2\,\mathrm{tr}(\Omega)}{n}\Big)\sigma_i\big(\Sigma_x S_X(S_X+\lambda I)^{-3}\big),
\]
where \sigma_i(M) denotes the i-th largest eigenvalue of M. The inequality above follows from Von Neumann’s trace inequality.

Since \partial R(\hat{\beta}_{\Gamma}(\lambda)\mid X)/\partial\lambda < 0 whenever \lambda < \mathrm{tr}(\Omega)/(n\,\sigma_1(\beta^T\beta)), the risk R(\hat{\beta}_{\Gamma}(\lambda)\mid X) is monotonically decreasing for 0 \le \lambda \le \mathrm{tr}(\Omega)/(n\,\sigma_1(\beta^T\beta)). Therefore, we have
\[
R(\hat{\beta}_{\Gamma}(\lambda)\mid X) < \frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}(\Sigma_x S_X^{+}), \quad \text{when } 0 < \lambda < \mathrm{tr}(\Omega)/(n\,\sigma_1(\beta^T\beta)).
\]
Since \frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}(\Sigma_x S_X^{+}) \le R(\hat{\beta}_{\Gamma}\mid X), the theorem is proved.
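As a quick numerical sanity check of this finite-sample argument (not part of the proof), one can evaluate the risk formula above on a randomly generated instance with \Sigma_x = I and verify that any \lambda below the threshold \mathrm{tr}(\Omega)/(n\,\sigma_1(\beta^T\beta)) gives strictly smaller risk than \lambda = 0. All dimensions and matrices below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 50, 10, 3
X = rng.standard_normal((n, p))
S = X.T @ X / n                      # sample covariance S_X (invertible since p < n)
Sigma_x = np.eye(p)                  # predictor covariance (illustrative choice)
beta = rng.standard_normal((r, p))   # coefficient matrix (illustrative choice)
tr_Omega = 2.0                       # tr(Omega) (illustrative choice)

def risk(lam):
    # finite-sample risk R(beta_hat(lam) | X): bias term + variance term
    A = np.linalg.inv(S + lam * np.eye(p))
    bias = lam**2 * np.trace(beta @ A @ Sigma_x @ A @ beta.T)
    var = tr_Omega / n * np.trace(Sigma_x @ S @ A @ A)
    return bias + var

# Theorem 1's threshold: tr(Omega) / (n * sigma_1(beta^T beta))
sigma1 = np.linalg.eigvalsh(beta.T @ beta).max()
lam_max = tr_Omega / (n * sigma1)

# risk at lambda = 0 (S is invertible here, so S^+ = S^{-1})
risk0 = tr_Omega / n * np.trace(Sigma_x @ np.linalg.inv(S))
improved = all(risk(t * lam_max) < risk0 for t in (0.25, 0.5, 0.9))
```

With \Sigma_x = I the bound in the proof guarantees a strictly negative derivative on the whole interval, so `improved` should hold for every fraction of the threshold tested.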
A.2 Proof of Theorem 2

Our analysis of the limiting prediction risk follows that of Hastie et al. (2022). As \Sigma_x = I,
\[
R(\hat{\beta}_{\Gamma}\mid X) = \mathrm{vec}^T(\beta)\,[\Pi_X \otimes I_r]\,\mathrm{vec}(\beta) + \frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}(S_X^{+}),
\]
\[
R(\hat{\beta}_{\Gamma}(\lambda)\mid X) = \lambda^2\,\mathrm{tr}\big(\beta(S_X+\lambda I)^{-2}\beta^T\big) + \frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}\big(S_X(S_X+\lambda I)^{-2}\big),
\]
where \Pi_X = I_p - S_X^{+}S_X.

A.2.1 Proof for the envelope estimator when γ < 1

Let us consider the case where p/n → γ ∈ (0, 1). From Theorem 1 of Bai and Yin (2008), \sigma_{\min}(S_X) \ge (1-\sqrt{\gamma})^2/2 and \sigma_{\max}(S_X) \le 2(1+\sqrt{\gamma})^2 almost surely for all sufficiently large n. Therefore, in this case, S_X is invertible and the bias term of R(\hat{\beta}_{\Gamma}\mid X) is 0, almost surely.
The variance term of R(\hat{\beta}_{\Gamma}\mid X) is
\[
\frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}(S_X^{+}) = \frac{p\,\mathrm{tr}(\Omega)}{n}\int \frac{1}{s}\,dF_{S_X}(s),
\]
where F_{S_X}(s) is the spectral measure of S_X. By the Marchenko–Pastur theorem, which gives F_{S_X} \to F_\gamma, and the Portmanteau theorem,
\[
\int_{(1-\sqrt{\gamma})^2/2}^{2(1+\sqrt{\gamma})^2} \frac{1}{s}\,dF_{S_X}(s) \to \int_{(1-\sqrt{\gamma})^2/2}^{2(1+\sqrt{\gamma})^2} \frac{1}{s}\,dF_\gamma(s) = \int \frac{1}{s}\,dF_\gamma(s).
\]
The equality holds because the support of F_\gamma is [(1-\sqrt{\gamma})^2, (1+\sqrt{\gamma})^2]. We can also remove the upper and lower limits of integration on the left-hand side by Theorem 1 of Bai and Yin (2008). Thus, combining the above results, we arrive at
\[
R(\hat{\beta}_{\Gamma}\mid X) \to \gamma\,\mathrm{tr}(\Omega)\int \frac{1}{s}\,dF_\gamma(s).
\]
The Stieltjes transform of F_\gamma is given by
\[
m(z) = \int \frac{1}{s-z}\,dF_\gamma(s) = \frac{(1-\gamma-z) - \sqrt{(1-\gamma-z)^2 - 4\gamma z}}{2\gamma z},
\]
for any real z < 0. Taking the limit z \to 0^- completes the proof.
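Taking z \to 0^- in the closed form gives \int s^{-1}\,dF_\gamma(s) = 1/(1-\gamma), so the limiting risk is \gamma\,\mathrm{tr}(\Omega)/(1-\gamma). The following small check (illustrative only, for \gamma = 1/2 and a Gaussian design) evaluates the closed form near zero and compares it with the empirical quantity \mathrm{tr}(S_X^{-1})/p:

```python
import numpy as np

gamma = 0.5  # aspect ratio p/n < 1 (illustrative choice)

def m(z):
    # closed-form Stieltjes transform of the Marchenko-Pastur law F_gamma, real z < 0
    return ((1 - gamma - z) - np.sqrt((1 - gamma - z)**2 - 4 * gamma * z)) / (2 * gamma * z)

# lim_{z -> 0-} m(z) = integral of 1/s dF_gamma(s) = 1/(1 - gamma)
limit_mp = m(-1e-8)

# empirical counterpart: tr(S_X^{-1})/p for Gaussian data with p/n = gamma
rng = np.random.default_rng(1)
n, p = 2000, 1000
X = rng.standard_normal((n, p))
S = X.T @ X / n
empirical = np.trace(np.linalg.inv(S)) / p
```

Both quantities should be close to 1/(1-\gamma) = 2, the first up to the size of the evaluation point z and the second up to finite-sample fluctuations.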
A.2.2 Proof for the envelope estimator when γ > 1

The variance term of R(\hat{\beta}_{\Gamma}\mid X) is
\[
\frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}(S_X^{+}) = \frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}\big((XX^T/n)^{+}\big) = \frac{\mathrm{tr}(\Omega)}{p}\,\mathrm{tr}\big((XX^T/p)^{+}\big).
\]
Since n/p \to \tau = 1/\gamma < 1, by the same arguments as in the proof above, we conclude that
\[
\frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}(S_X^{+}) \to \mathrm{tr}(\Omega)\,\frac{1}{\gamma-1}.
\]
Let b_1^T, \ldots, b_r^T denote the rows of \beta. The bias term is
\[
\mathrm{vec}^T(\beta)[\Pi_X \otimes I_r]\mathrm{vec}(\beta) = \sum_{i=1}^{r} b_i^T \Pi_X b_i = \sum_{i=1}^{r} \lim_{z\to 0^+} z\,b_i^T(S_X + zI)^{-1}b_i.
\]
From Theorem 1 of Bai et al. (2007), we have that
\[
z\,b_i^T(S_X + zI)^{-1}b_i \to z\,\|b_i\|^2 \int \frac{1}{s+z}\,dF_\gamma(s) = z\,\|b_i\|^2\,m(-z) \quad a.s.,
\]
for any i = 1, \ldots, r. We further have that
\[
\sum_{i=1}^{r} z\,b_i^T(S_X + zI)^{-1}b_i \to z\,c^2\,m(-z) \quad a.s.
\]
By the Arzelà–Ascoli theorem and the Moore–Osgood theorem, we may exchange the limits and arrive at
\[
\lim_{z\to 0^+} \sum_{i=1}^{r} z\,b_i^T(S_X + zI)^{-1}b_i \to c^2 \lim_{z\to 0^+} z\,m(-z) = c^2(1 - 1/\gamma) \quad a.s.
\]
Combining the variance and the bias terms, we complete the proof.

A.2.3 Proof for the enhanced envelope estimator

We use similar techniques as for the envelope estimator for both the variance and bias terms.
The variance term of R(\hat{\beta}_{\Gamma}(\lambda)) becomes
\[
\frac{\mathrm{tr}(\Omega)}{n}\,\mathrm{tr}\big(S_X(S_X+\lambda I)^{-2}\big) \to \gamma\,\mathrm{tr}(\Omega)\int \frac{s}{(s+\lambda)^2}\,dF_\gamma(s).
\]
Let g_{n,\lambda}(\eta) = \lambda\,\mathrm{tr}\big(\beta(S_X + \lambda(1+\eta)I)^{-1}\beta^T\big), \eta \in [-1/2, 1/2]. The bias term of R(\hat{\beta}_{\Gamma}(\lambda)) is
\[
\lambda^2\,\mathrm{tr}\big(\beta(S_X+\lambda I)^{-2}\beta^T\big) = -\frac{\partial}{\partial\eta}g_{n,\lambda}(0).
\]
Because
\[
g_{n,\lambda}(\eta) \to \lambda c^2 m(-\lambda(1+\eta)) = \lambda c^2 \int \frac{1}{s+\lambda(1+\eta)}\,dF_\gamma(s),
\]
and the derivative and the limit are exchangeable, we have that
\[
\lambda^2\,\mathrm{tr}\big(\beta(S_X+\lambda I)^{-2}\beta^T\big) \to \lambda^2 c^2 \int \frac{1}{(s+\lambda)^2}\,dF_\gamma(s).
\]
We can conclude that
\[
R(\hat{\beta}_{\Gamma}(\lambda)) \to \int \frac{\lambda^2 c^2 + s\cdot\gamma\,\mathrm{tr}(\Omega)}{(s+\lambda)^2}\,dF_\gamma(s).
\]
The right-hand side is minimized at \lambda^* = \gamma\,\mathrm{tr}(\Omega)/c^2. In that case, the right-hand side becomes \gamma\,\mathrm{tr}(\Omega)\,m(-\lambda^*).
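As a numerical illustration (with the arbitrary choices \gamma = 2, \mathrm{tr}(\Omega) = 1, c^2 = 1), one can integrate the limiting risk against the Marchenko–Pastur law, which for \gamma > 1 has a density on [(1-\sqrt{\gamma})^2, (1+\sqrt{\gamma})^2] plus an atom of mass 1 - 1/\gamma at zero, and check that a grid minimum lies near \lambda^* = \gamma\,\mathrm{tr}(\Omega)/c^2 with minimal value close to \gamma\,\mathrm{tr}(\Omega)\,m(-\lambda^*):

```python
import numpy as np

gamma, tr_Omega, c2 = 2.0, 1.0, 1.0          # illustrative parameter choices
a, b = (1 - np.sqrt(gamma))**2, (1 + np.sqrt(gamma))**2
atom = 1 - 1 / gamma                          # point mass of F_gamma at 0 when gamma > 1

s = np.linspace(a, b, 200001)
dens = np.sqrt((b - s) * (s - a)) / (2 * np.pi * gamma * s)  # MP density on [a, b]

def trapz(y, x):
    # simple trapezoidal rule (avoids numpy version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def risk(lam):
    # limiting risk: continuous part + atom at s = 0, where the integrand equals c^2
    integrand = (lam**2 * c2 + s * gamma * tr_Omega) / (s + lam)**2
    return trapz(integrand * dens, s) + atom * c2

def m(z):
    # closed-form Stieltjes transform of F_gamma, real z < 0
    return ((1 - gamma - z) - np.sqrt((1 - gamma - z)**2 - 4 * gamma * z)) / (2 * gamma * z)

lam_star = gamma * tr_Omega / c2              # claimed optimal lambda = 2 here
lams = np.linspace(0.5, 4.0, 71)              # grid with step 0.05, containing lam_star
risks = np.array([risk(l) for l in lams])
lam_grid_min = lams[risks.argmin()]
min_risk = risks.min()
target = gamma * tr_Omega * m(-lam_star)
```

The grid minimizer should sit at (or immediately next to) \lambda^*, and the minimal grid risk should match \gamma\,\mathrm{tr}(\Omega)\,m(-\lambda^*) up to quadrature error.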