Inference for Large Panel Data with Many Covariates∗

Markus Pelger†    Jiacheng Zou‡

December 31, 2022

Abstract

This paper proposes a new method for covariate selection in large dimensional panels. We develop the inferential theory for large dimensional panel data with many covariates by combining post-selection inference with a new multiple testing method specifically designed for panel data. Our novel data-driven hypotheses are conditional on sparse covariate selections and valid for any regularized estimator. Based on our panel localization procedure, we control family-wise error rates for the covariate discovery and can test unordered and nested families of hypotheses for large cross-sections. As an easy-to-use and practically relevant procedure, we propose Panel-PoSI, which combines the data-driven adjustment for panel multiple testing with valid post-selection p-values of a generalized LASSO that allows priors to be incorporated. In an empirical study, we select a small number of asset pricing factors that explain a large cross-section of investment strategies. Our method dominates the benchmarks out-of-sample due to its better control of false rejections and detections.

Keywords: panel data, high-dimensional data, LASSO, number of covariates, post-selection inference, multiple testing, adaptive hypothesis, step-down procedures, factor model

JEL classification: C33, C38, C52, C55, G12

∗We thank conference and seminar participants at Stanford, the California Econometric conference and the NBER-NSF SBIES conference for helpful comments. Jiacheng Zou gratefully acknowledges the generous support by the MS&E Departmental Fellowship and the Charles & Katherine Lin Fellowship.
†Stanford University, Department of Management Science & Engineering, Email: mpelger@stanford.edu.
‡Stanford University, Department of Management Science & Engineering, Email: jiachengzou@stanford.edu.

arXiv:2301.00292v1 [econ.EM] 31 Dec 2022

1 Introduction

Our goal is the selection of a parsimonious sparse model from a large set of candidate covariates that explains a large dimensional panel. This problem is common in many social science applications, where a large number of potential covariates are available to explain the time-series of a large cross-section of units or individuals. An example is empirical asset pricing, where the literature has produced a "factor zoo" of potential risk factors to explain the large cross-section of stock returns. This problem requires a large panel, as a successful asset pricing model should explain the many available investment strategies, resulting in a large panel of test assets. At the same time, there is no consensus about which are the appropriate factors, which leads to a statistical selection problem from a large set of candidate risk factors. So far, the literature has only provided solutions for one of the two subproblems, while keeping the dimensionality of the other problem small. Our paper closes this gap.

The inferential theory on a large panel with many covariates is a challenging problem. As a first step, we have to select a sparse set of covariates from a large pool of candidates with a regularized estimator. The challenge is to provide valid p-values from this estimation that account for the post-selection inference. Furthermore, researchers might want to impose economic priors on which variables should be more likely to be selected. The second challenge is that the panel cross-section results in a large number of p-values. Hence, some of them are inadvertently very small, which if left unaddressed leads to "p-hacking". The multiple testing adjustment conditional on the selected subset of covariates from the first step is a novel problem and requires redesigning which hypotheses should be tested jointly. A naive counting of all tests is overly conservative, and the test design and simultaneity counts need to be conditional on the covariate selection.

This paper proposes a new method for covariate selection in large dimensional panels, tackling all of the above challenges. We develop the inferential theory for large dimensional panel data with many covariates by combining post-selection inference with a new multiple testing method specifically designed for panel data. Our novel data-driven hypotheses are conditional on sparse covariate selections and valid for any regularized estimator. Based on our panel localization procedure, we control family-wise error rates for the covariate discovery and can test unordered and nested families of hypotheses for large cross-sections. As an easy-to-use and practically relevant procedure, we propose Panel-PoSI, which combines the data-driven adjustment for panel multiple testing with valid post-selection p-values of a generalized LASSO that allows priors to be incorporated.

Our paper proposes the novel conceptual idea of a data-driven hypothesis family for panels. This allows us to put forward a unifying framework of valid post-selection inference and multiple testing. Leveraging our data-driven hypothesis family, we adjust for multiple testing with a localized simultaneity count, which increases the power while maintaining false discovery rate control. An essential step for a formal statistical test is to formulate the hypothesis. This turns out to be non-trivial for a large panel with a first stage selection step for the covariates. It is a fundamental insight of our paper that the hypothesis of our test has to be conditional on the selected set of active covariates of the first stage. Once we have defined the appropriate hypothesis, we can deal with the multiple testing adjustment, which by construction is also conditional on the selection step.

Our method is a disciplined approach based on formal statistical theory to construct and interpret a parsimonious model. It goes beyond the selection of a sparse set of covariates as it also provides the inferential theory. This is important as it allows us to rank the covariates based on their statistical significance, and it can also be applied for relatively short time horizons, where cross-validation for tuning a regularization parameter might not be reliable. We answer the question of which covariates are needed to explain the full panel jointly, and can also accommodate "weak" covariates or factors that only affect a small subset of the cross-sectional units.

Our data-driven hypothesis perspective exploits the geometric structure implied by the first stage selection step. Given valid post-selection p-values of a regularized sparse estimator from time-series regressions, we collect them across the large cross-section into a "matrix" of p-values. Only active coefficients that are selected in the first stage contribute p-value entries, whereas covariates that were non-active lead to "holes" in this matrix. We leverage the non-trivial shape of this matrix to form our adaptive hypotheses. This allows us to make valid multiple testing adjusted inference statements, for which we design a panel modified Bonferroni-type procedure that can control the family-wise error rate (FWER) in the discovery of the covariates. As one loosens the FWER requirements, the inferential threshold admits more and more explanatory variables, which suggests that the number of covariates we expect to admit and the FWER control level form a "false-discovery control frontier". We provide a method that allows us to traverse the inferential results and determine the least number of covariates that have to be included given a user-specified FWER level.
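As a stylized sketch of this counting logic (the dimensions, the 10% selection mask, and the uniform placeholder p-values below are all hypothetical, and the code is a simplification rather than the paper's full procedure): post-selection p-values form an N × J matrix with NaN "holes" at non-active (unit, covariate) pairs, and a Bonferroni-type threshold divides the FWER budget by the number of actually tested entries instead of the full N·J grid.

```python
import numpy as np

rng = np.random.default_rng(1)
N, J = 100, 50                        # cross-sectional units x candidate covariates

# First-stage selection: which (unit, covariate) pairs are active.
selected = rng.random((N, J)) < 0.1

# Post-selection p-values exist only at selected pairs; "holes" elsewhere.
pvals = np.full((N, J), np.nan)
pvals[selected] = rng.random(selected.sum())   # placeholder p-values

alpha = 0.05                          # user-specified FWER level
m_local = int(selected.sum())         # localized simultaneity count: tested hypotheses only
thr_local = alpha / m_local           # panel-localized Bonferroni threshold
thr_naive = alpha / (N * J)           # naive count over the full grid

# A covariate is discovered if any of its units survives the local threshold.
col_min = np.where(selected, pvals, 1.0).min(axis=0)
discovered = np.flatnonzero(col_min < thr_local)
print(m_local, thr_local, len(discovered))
```

With the mask above, roughly 10% of the grid is active, so the localized threshold α/m is about ten times looser than the naive α/(N·J), which illustrates where the power gain of the localized count comes from while the union bound over the tested entries still controls the FWER.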
In other words, we provide a statistical significance test for the number of factors in a panel.

We propose the novel procedure Panel-PoSI, which combines the data-driven adjustment for panel multiple testing with valid post-selection p-values of a generalized LASSO. While our multiple testing procedure is valid for any sparsity constrained model, Panel-PoSI is an easy-to-use and practically relevant special case. We propose the Weighted-LASSO for the first stage selection regression and provide valid p-values through post-selection inference (PoSI), which yields a truncated-Gaussian distribution for an adjusted LASSO estimator. This geometric perspective is less common in the LASSO literature, but it has the advantage of avoiding infeasible quantities, in particular the second moment of the large set of potential covariates. The Weighted-LASSO generalizes the LASSO by allowing weights to be put on prior belief sets. For example, a researcher might have economic knowledge that she wants to include in her statistical selection method, and impose an infinite prior weight to include specific covariates in the sparse selection model. Our Weighted-LASSO makes several contributions. First, the expression for the truncated conditional distribution with weights becomes much more complex than for the special case of the conventional LASSO. Second, we provide a simple, easy-to-use and asymptotically valid conditional distribution in the case of an estimated noise variance.

We demonstrate in simulations and empirically that our inferential theory allows us to select better models. We compare different estimation approaches to select covariates and show that our approach better trades off false discoveries and correct selections, and hence results in better out-of-sample performance.
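The PoSI p-values described above are tail probabilities of a Gaussian distribution truncated to the selection event. The exact truncation interval implied by the Weighted-LASSO KKT conditions is derived in the paper; the sketch below only shows the generic truncated-Gaussian computation such p-values reduce to, with a made-up interval [a, b] as a placeholder and a zero-mean null for illustration.

```python
from math import erf, sqrt

def Phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def truncated_gaussian_pvalue(z: float, a: float, b: float, sigma: float = 1.0) -> float:
    """One-sided p-value of an observation z from N(0, sigma^2) truncated to [a, b].

    The interval [a, b] would come from the selection event (e.g. the LASSO
    KKT conditions); here it is simply an input.
    """
    num = Phi(b / sigma) - Phi(z / sigma)
    den = Phi(b / sigma) - Phi(a / sigma)
    return num / den

# The p-value equals 1 at the lower truncation bound, 0 at the upper bound,
# and decreases monotonically in between.
print(truncated_gaussian_pvalue(1.0, 0.5, 3.0))
```

Under the truncated null this statistic is uniformly distributed, which is what makes the p-values valid inputs for the downstream multiple testing counts.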
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Our empirical analysis studies the fundamental problem in asset pricing of selecting a parsimonious factor model from a large set of candidate factors that can jointly explain the asset prices of a large cross-section of investment strategies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' We consider a standard data set of 114 candidate asset pricing factors to explain 243 double sorted anomaly portfolios.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' We show that Panel PoSI selects 3 factors which form the best model to explain out-of-sample the expected returns and the variations of the test assets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The selected factors are economically meaningful and we can rank them based on their relative importance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' A prior on the Fama-French factors does not improve the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Our findings contributes to the discussion about the number of asset pricing factors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The rest of the paper is organized as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Section 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1 relates our work to the literature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Section 2 introduces the model and the Weighted-LASSO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Section 3 discusses the appropriate hypotheses to be considered for inference on the entire panel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Section 4 proposes a joint unordered test for the panel using multiple testing adjustment so that we can maintain FWER control, and shows how to traverse this procedure to acquire the least factor count associated with each FWER target.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' In section 5 we consider the case of nested hypotheses, where the covariates observe a fixed ordering, which is of independent interest, and we propose a step-down procedure for this setting that maintains false discovery control.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Section 6 provides the results of our simulation and Section 7 discusses our empirical studies on a large asset pricing panel data set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Section 8 concludes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The proofs and more technical details are available in the Online Appendix.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1 Related Literature The problem of multiple testing is an active area of research with a long history.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The statistical inference community has studied the problem of controlling the classical FWER since Bonferroni (1935), and controlling for false-discover rate (FDR) going back to Benjamini and Hochberg (1995) and Benjamini and Yekutieli (2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Bonferroni (1935) allows for arbitrary correlation in the test statistics because its validity comes from a simple union bound argument, and is in fact the optimal test when statistics are “close to independent” under true sparse non-nulls.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' FDR control on the other hand requires a discussion about the estimated covariance in the test statistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Recent developments include a stream of papers led by Barber and Cand´es (2015) and Cand´es, Fan, Janson, and Lv (2018), which constructs a generative model to produce fake data and control for FDR.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Fithian and Lei (2022) is a more recent work that iteratively adjusts the threshold for each hypothesis in the family to seek finite sample exact FDR control and dominates Benjamini and Hochberg (1995) and Benjamini and Yekutieli (2001) in terms of power.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Another notion on temporal false discovery control has been revived more recently by Johari, Koomen, Pekelis, and Walsh (2021), who consider the industry practice of constantly checking p-values and provide an early stopping in line with Siegmund (1985) that adjusts for bias from sequentially picking favorable 3 evidence, whereas we consider a static panel that is not an on-going experiment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' There are cases where the covariates warrant a natural order such that the hypothesis family possesses a special testing logic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' A hierarchical structure in covariates arises when the inclusion of the next covariate only make sense if the previous covariates is included.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' An example is the use of principal component (PC) factors, where PCs are included sequentially from the dominating one to the least dominating one.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' We distinguish this from putting weights and assigning importance on features because this variant of family of hypotheses warrants a new definition of FWER.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' We propose a step-down procedure that can be considered as a panel extension of G’Sell, Wager, Chouldechova, and Tibshirani (2016), relying on an approximation of the R´enyi representation of p-values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The step-down control for nested FWER is based on Simes (1986), which along with Bonferroni (1935) can be seen as comparing sorted p-values against linear growth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Our framework contributes to estimating the number of principal component factors in a panel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' There are have been many studies that provide consistent estimators for the number of PCs based on the divergence in eigenvalues of the covariance matrix, which include Onatski (2010), Ahn and Horenstein (2013) and Pelger (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Another direction uses sequential testing procedures that presume correct nested family of hypotheses, which include Kapetanios (2010) and Choi, Taylor, and Tibshirani (2017).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' In contrast, we characterize the least amount of factors (which can also be based on principal components), which should be expected when a FWER rate is provided.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The nested version of our procedure is close in nature to a panel version of “when-to-stop” problem of a multiple testing procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The problem of post-LASSO statistical testing for small dimensional cross-sections is studied in a stream of papers including Meinshausen and B¨uhlmann (2006), Zhang and Zhang (2014), van de Geer, B¨uhlmann, Ritov, and Dezeure (2014) and Javanmard and Montanari (2018), which consider inference statements by debiasing the LASSO estimator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' An alternative stream of post-selection or post-machine learning inference literature includes Chernozhukov, Hansen, and Spindler (2015), Kuchibhotla, Brown, Buja, George, and Zhao (2018) and Zrnic and Jordan (2020), who provide non-parametric post-selection or post-regularization valid confidence intervals and p-values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' These papers do not make conditional statements and presume that the researcher sets the hypotheses before seeing the data, which we will refer to as data agnostic hypothesis family.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' We follow a dif- ferent train of thought that treats LASSO, among a family of conic maximum likelihood estimator, as a polyhedral constraint on the support of the response variable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' This geometric perspective that provides inferential theory post-LASSO is pioneered by the work of Lee, Sun, Sun, and Taylor (2016) and followed up by Fithian, Sun, and Taylor (2017) and Tian and Taylor (2018), assum- ing Gaussian linear model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Markovic, Xia, and Taylor (2018) extends the results to LASSO with cross-validation, Tian, Loftus, and Taylor (2018) discusses a square-root LASSO variant that takes unknown covariance into consideration and Tian and Taylor (2017) considers the asymptotic results when removing the Gaussian assumption.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' This body literature is often referred to as PoSI, and traverses the Karush-Kuhn-Tucker (KKT) condition of a LASSO optimization problem to show that the LASSO fit can be expressed as a polyhedral constraint on the support of the response 4 variable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' We extend this work by allowing to put weights onto prior belief sets, and by bringing it to the panel setting with multiple testing adjustment.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' 2 Sparse linear models We consider a large dimensional panel data set Y ∈ RT×N which we want explain with a large number of potential covariates X ∈ RT×J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The panel data and explanatory variables are both observed over T time periods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1 The size of the cross-section N and the dimension of the covariate candidate set J are both large in our problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' We assume a linear relationship between Y and X: Yt,n = J � j=1 Xt,jβ(n) j + ϵt,n for n = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=', N, which reads in matrix notation as Y = Xβ + ϵ (1) We refer to the coefficients β as loading matrix, where the nth column β(n) corresponds to the nth unit and β(n) j denotes the loading of the nth unit on the jth covariate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The remainder term ϵ is unexplained noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' We assume that a sparse linear model can explain jointly the full panel.' 
Formally, a sparse linear model with s active covariates is

Y = X_S \beta_S + \epsilon   (2)

where s = |S| is the cardinality of the set of active covariates S = {j : β_j^(n) ≠ 0 for some n ∈ {1, ..., N}}, that is, the set of covariates with non-zero loadings. X_S is the subset of covariates that belong to S. Our goal is to estimate this low dimensional model, which can explain the full panel, from a large number of candidate covariates, and to provide a valid inferential theory. Note that our sparse model formulation allows for two important properties. First, different units can be explained by different covariates with different loadings. This means that β^(n) ≠ β^(m) for n ≠ m. For example, a subset of the cross-sectional units might be modeled by different covariates than the remaining part of the panel.
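To make the setup concrete, the following simulation sketch (all dimensions, loadings, and noise levels are hypothetical, chosen purely for illustration) generates a panel in which different units load on different covariates, so that the active set S collects every covariate needed by at least one unit:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, J = 500, 5, 8          # hypothetical sample sizes for illustration
X = rng.standard_normal((T, J))

# Unit-specific sparse loadings: column n holds beta^(n); most entries are zero.
beta = np.zeros((J, N))
beta[0, :] = 1.5             # covariate 1 matters for every unit
beta[2, [1, 3]] = -2.0       # covariate 3 matters for two units only
beta[5, 4] = 0.8             # covariate 6 matters for a single unit

Y = X @ beta + 0.5 * rng.standard_normal((T, N))   # panel model Y = X beta + eps

# The active set S collects covariates required by at least one unit.
S = np.flatnonzero(np.any(beta != 0, axis=1))
print(S)                     # -> [0 2 5], so s = |S| = 3
```

Note that even the covariate loaded on by a single unit belongs to S, which is exactly the "weak covariate" situation discussed next.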
Second, we can accommodate “weak” covariates. A covariate is included in S if at least one cross-sectional unit requires it as an explanatory variable. In other words, a sparse model can include covariates in X_S that explain only a very small subset of the panel Y. The first step is to estimate the sparse models over the time-series for each unit separately, due to the heterogeneity in the loadings. In a second step, we provide the valid inferential theory for the loadings on the full panel. The time-series estimation requires an appropriate regularization to select a small subset of covariates that contains all the relevant covariates for each unit.

¹Our setting and multiple testing results can be readily extended to the case of an unbalanced panel, although we focus on the balanced panel case for now to highlight the core multiple testing insight of our method. We will discuss this further once we introduce our main procedure in Section 4.
We allow for a prior belief weight ω ∈ R̄_+^J, so that different X can have different relative penalizations, and a global scalar penalty parameter λ ∈ R_+. For the nth unit, we denote its estimate of β^(n) as β̂^(n) and the active set M^(n) = {j : β̂_j^(n) ≠ 0} as the set of j's with non-zero loadings β̂_j^(n). A general regularized linear estimator solves the following optimization problem

\hat\beta^{(n)}(\lambda, \omega) = \arg\min_{\beta} \frac{1}{2T} \| Y^{(n)} - X\beta \|_2^2 + \lambda \cdot f(\beta, \omega)   (3)

for a penalty function f and appropriate weights. In this paper, we consider the weighted LASSO estimator with the regularization function

f(\beta, \omega) = \sum_{j=1}^{J} f_j(\beta_j, \omega_j), \qquad f_j(\beta_j, \omega_j) = \begin{cases} |\beta_j| / \omega_j & \omega_j < \infty \\ 0 & \text{otherwise} \end{cases}   (4)

and weights ω_j > 0 for all j ∈ {1, ..., J} with ‖ω^{-1}‖_1 = J.
We assume that the penalty λ is selected such that the selected set is low dimensional, i.e. ‖β̂^(n)‖_0 = |M^(n)| is small. Importantly, we do not need to assume that the selected set contains all active covariates. Our goal is to provide a valid inferential theory conditional on the selected set. Our estimator generalizes the conventional LASSO with the l1 regularization function of Tibshirani (1996) by allowing for different relative weighting in the penalty. Importantly, we also allow for an infinite weight, which can be interpreted as a prior on a set of covariates. This allows researchers to take advantage of prior information and, for example, ensure that a specific set of covariates will always be included. The weighted LASSO will be particularly relevant in our empirical study, where we can answer the question which risk factors should be added to a given set of economically motivated risk factors.
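To illustrate how the weighted penalty in (3)-(4) can be computed, here is a minimal coordinate-descent sketch (illustrative code, not the authors' implementation; the function names and default iteration count are our own). Setting ω_j = ∞ removes the penalty on covariate j entirely, encoding the prior that it should always be eligible for inclusion:

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: the proximal map of gamma * |.|
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def weighted_lasso(X, y, lam, omega, n_iter=200):
    """Coordinate descent for (1/(2T)) ||y - X b||_2^2 + lam * sum_j |b_j| / omega_j.
    omega_j = np.inf means covariate j is never penalized (prior inclusion)."""
    T, J = X.shape
    beta = np.zeros(J)
    col_sq = (X ** 2).sum(axis=0) / T              # (1/T) x_j' x_j
    for _ in range(n_iter):
        for j in range(J):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual excluding j
            rho = X[:, j] @ r / T
            pen = 0.0 if np.isinf(omega[j]) else lam / omega[j]
            beta[j] = soft_threshold(rho, pen) / col_sq[j]
    return beta
```

The active set M^(n) is then {j : β̂_j ≠ 0}, and λ is tuned so that this set stays small.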
Our weighted LASSO formulation can also be interpreted as a Bayesian estimator with the canonical Laplacian prior. Conventional regression theory will not provide correct inferential statements on the weighted-LASSO estimates. We face two challenges. First, regularized estimation results in a bias, which needs to be corrected. Second, and more challenging, post-selection inference changes the distribution of the estimators. When we observe an active β̂_j^(n) from (3), it would be incorrect to simply calculate its p-value from a conventional t-distribution. This invalidity stems from the fact that, conditional on observing a LASSO output, β_j^(n) must be large enough in magnitude for its β̂_j^(n) to be active. In other words, the probability distribution of the estimators is truncated.
The correct inference has to be conditional on the covariates being selected by the LASSO estimator. Hence, valid p-values have to be the tail probability conditional on being in the selection set. The key to quantifying such styles of inference is to recognize that a sparsity constrained estimator is typically the result of solving Karush-Kuhn-Tucker (KKT) conditions, which can in turn be geometrically characterized as polyhedral constraints on the support of response variables. This is first established in Lee, Sun, Sun, and Taylor (2016), who provide the stylized results that Post-Selection Inference (PoSI) of debiased non-weighted LASSO estimators can be calculated as polyhedral truncation on Y. This line of research is also referred to as Selective Inference in other literature such as Taylor and Tibshirani (2015). We extend this line of literature to allow for the Weighted-LASSO.
We derive these results with assumptions common in the PoSI LASSO literature, detailed in Appendix A, and referred to as conventional regularity conditions for ease of exposition.

THEOREM 1. Truncated Gaussian Distribution of the Weighted-LASSO
Under conventional regularity conditions, the debiased estimate β̄_i for the ith Weighted-LASSO active covariate is conditionally distributed as

\bar\beta_i \mid \text{Weighted-LASSO} \sim \mathcal{TN}_{\{\eta^\top Y : AY \le b(\omega)\}}   (5)

where TN_A is a truncated Gaussian with truncation set A, and the weights ω only appear in b(ω).

Theorem 1 has two elements. First, it debiases the LASSO estimate by a shifting argument. While we use a geometric argument to remove the bias, the bias adjustment takes the usual form in the LASSO literature, as for example in Belloni and Chernozhukov (2013). The debiased LASSO estimator simply equals a standard OLS estimation on the subset M^(n) selected by the Weighted-LASSO. Second, the distribution of the linear coefficients is not a usual Gaussian distribution, but is truncated due to studying post-selection coefficients.
This geometric perspective is less common in the LASSO literature, but provides several advantages. One advantage of the geometric approach is that it avoids the use of infeasible quantities, in particular the second moment of the large set of potential covariates. Furthermore, the distribution result is not asymptotic in T, but also valid in finite samples. We can obtain these results because we make the stronger assumption that the data is normally distributed. Appendix A provides the detailed information on constructing β̄ and the definitions of η, A, and b(ω), along with lemmas that lead up to this result. It also discusses extensions and the effect of estimating the variance of the noise. The empirical analysis is based on the explicit form of Theorem 1 formulated in Theorem A.3.
Our Weighted-LASSO results make several contributions. First, the expression for the truncated conditional distribution with weights becomes much more complex than for the special case of the conventional LASSO. Second, we provide a simple, easy-to-use and asymptotically valid conditional distribution in the case of an estimated noise variance. Last but not least, we show the formal connection with alternative debiased LASSO estimators by showing that debiasing can be interpreted as one step in a Newton-Raphson method of solving a constrained optimization. Theorem 1 allows us to obtain valid p-values for Weighted-LASSO coefficients. We obtain these p-values from the simulated cumulative distribution function of the truncated Gaussian distribution. Crucially, all results for multiple testing adjustment in panels that we study in the following sections neither require us to use a weighted LASSO estimator nor to use the p-values implied by Theorem 1.
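As an illustration of how such truncated-Gaussian tail probabilities can be computed from the quantities (η, A, b) of Theorem 1, here is a minimal sketch of the polyhedral computation in the spirit of Lee, Sun, Sun, and Taylor (2016), assuming y ~ N(µ, σ²I) with known σ (the function name and interface are our own, not the paper's):

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function (no external dependencies).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def posi_pvalue(y, eta, A, b, sigma):
    """One-sided post-selection p-value for H0: eta' mu = 0, conditional on
    the selection event {A y <= b}, for y ~ N(mu, sigma^2 I)."""
    c = eta / (eta @ eta)            # direction that carries eta' y
    z = y - c * (eta @ y)            # component of y independent of eta' y
    Ac, Az = A @ c, A @ z
    neg, pos = Ac < 0, Ac > 0
    v_lo = np.max((b[neg] - Az[neg]) / Ac[neg], initial=-np.inf)  # lower cut
    v_hi = np.min((b[pos] - Az[pos]) / Ac[pos], initial=np.inf)   # upper cut
    s = sigma * np.sqrt(eta @ eta)   # standard deviation of eta' y
    num = norm_cdf(v_hi / s) - norm_cdf((eta @ y) / s)
    den = norm_cdf(v_hi / s) - norm_cdf(v_lo / s)
    return num / den                 # tail probability within [v_lo, v_hi]
```

For instance, with η = e_1 and the selection event {y_1 ≥ 0}, this reduces to the tail probability of a standard normal truncated to [0, ∞).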
We only require a set of valid p-values for sparsity constrained models. These can be obtained with any suitable regularized estimator and post-selection inference. The key element is the selection of a low dimensional subset with p-values conditional on this selection. We propose the weighted LASSO conditional inference results as an example of the type of sparsity constrained models we are interested in, and demonstrate a machinery with which we can obtain valid p-values for sparsity constrained models. In our empirical studies, we use the Weighted-LASSO as our sparsity constrained model since we want to specify strong prior beliefs on a few covariates and it is common practice to use the LASSO in the context of our empirical studies. Nonetheless, the testing methods in the next sections accommodate any sparse estimator, and can be detached from inference for the Weighted-LASSO.
3 Data-Driven Hypotheses

Our goal is to provide formal statistical tests that allow us to establish a joint model across a large cross-section with potentially weak covariates. This requires us to provide a form of statistical significance test with multiple testing adjustment that properly accounts for covariates that only explain a small subset of the cross-sectional units. This is important as in many problems in economics and finance there is substantial cross-sectional variation in the explanatory power of covariates, and a model that simply minimizes an average error metric might neglect weaker covariates. An essential step for a formal statistical test is to formulate the hypothesis. This turns out to be non-trivial for a large panel with a first stage selection step for the covariates. It is a fundamental insight of our paper that the hypothesis of our test has to be conditional on the selected set of active covariates of the first stage.
Once we have defined the appropriate hypothesis, we can deal with the multiple testing adjustment, which by construction is also conditional on the selection step. The hypothesis formulation and test construction only require valid p-values from a first stage selection estimator. The results of the next two sections do not depend on a specific model for obtaining these p-values and the active set. The results are valid for any model, including non-linear ones. The input to the analysis is an N × J matrix, which specifies which covariates are active for each unit and the corresponding p-values. The Weighted-LASSO is only one possible model, but it can be replaced by any regularized model. We have introduced the sparse linear model as it is the workhorse model for many problems in economics and finance, and therefore of practical relevance.
We illustrate the concept of a data-driven hypothesis with a simple example, which we will use throughout this section. For simplicity we assume that we have J = 4 covariates and want to explain N = 6 cross-sectional units. In the first stage, we have estimated a Weighted-LASSO and have obtained the post-selection valid p-values for each of the N units. We collect the fitted sparse estimator β̄^(n) for the nth unit in the matrix β̄. Note that this matrix has “holes” due to the sparsity of each β̄^(n). Figure 1(a) illustrates β̄ for this example. Similarly, we collect the corresponding p-values in the matrix P. For the nth unit, we only have p-values for those covariates that are active in the nth linear sparse model.
Thus, Figure 1(b) also has white boxes showing the same pattern of unavailable p-values due to the conditioning on the output of the linear sparse model. These holes can appear at different positions for each unit, which makes this problem non-trivial.

Figure 1: Illustrative example of data-driven selection. (a) Matrix β̄. (b) Matrix P of p-values. This figure illustrates in a simple example the data-driven selection of a linear sparse model. In a first stage, we have estimated a regularized sparse linear model for each of the N = 6 units with J = 4 covariates. Each row represents the selected covariates with their estimated coefficients and p-values. The columns represent the J = 4 different covariates. The grey shaded boxes represent the active set, while white boxes indicate the inactive covariates. The numbers are purely for demonstrative purposes.
This non-trivial shape of either subplot (a) or (b) is completely data-driven and a consequence of linear sparse model selection. We show that the hypothesis should be formed around these non-trivial shapes as well, which is why we name it the data-driven hypothesis family. We want to test which covariates are jointly insignificant in the full panel. A data-agnostic approach would simply test if all covariates are jointly insignificant, independent of the data-driven selection step in the first stage. A data-agnostic hypothesis is unconditional as it does not depend on any model output. However, as we will show, this perspective is problematic for the high-dimensional panel setting with many covariates as it ignores the dimension reduction from the selection step. Therefore, an unconditional multiple testing adjustment accounts for “too many” tests, which severely reduces the power.
We propose to form the hypothesis conditional on the first stage selection step. The data-driven hypothesis only tests the significance of the covariates that were included in the selection, and hence can drastically reduce the number of hypotheses. However, given the non-trivial shape of the active set, the multiple testing adjustment for the data-driven hypothesis is more challenging. Before formally defining the families of hypotheses, we illustrate them in our running example.
The data-agnostic hypothesis H_A for explaining the full panel takes the following form:

H_A = {H_{A0,1}, H_{A0,2}, H_{A0,3}, H_{A0,4}}
    = {β_1^{(1)} = β_1^{(2)} = β_1^{(3)} = β_1^{(4)} = β_1^{(5)} = β_1^{(6)} = 0,
       β_2^{(1)} = β_2^{(2)} = β_2^{(3)} = β_2^{(4)} = β_2^{(5)} = β_2^{(6)} = 0,
       β_3^{(1)} = β_3^{(2)} = β_3^{(3)} = β_3^{(4)} = β_3^{(5)} = β_3^{(6)} = 0,
       β_4^{(1)} = β_4^{(2)} = β_4^{(3)} = β_4^{(4)} = β_4^{(5)} = β_4^{(6)} = 0}.   (6)

The data-driven hypothesis H_D only includes the active set and hence equals

H_D = {β_1^{(2)} = 0,
       β_2^{(1)} = β_2^{(3)} = β_2^{(5)} = β_2^{(6)} = 0,
       β_3^{(1)} = β_3^{(2)} = β_3^{(3)} = β_3^{(4)} = β_3^{(5)} = β_3^{(6)} = 0,
       β_4^{(2)} = β_4^{(4)} = β_4^{(5)} = 0}.   (7)

Obviously, H_A has a larger cardinality: |H_A| = 24 > |H_D| = 14. This holds in general, unless the first stage selects all covariates for each unit, in which case the two hypotheses coincide. Formally, the data-agnostic family of hypotheses is defined as follows:

DEFINITION 1 (Data-agnostic family). The data-agnostic family of hypotheses is H_A = {H_{A0,i} | i ∈ [d]}, where

H_{A0,i} = ∩_{j∈[N]} H_{A0,i}^{(j)}   and   H_{A0,i}^{(j)} : β_i^{(j)} = 0.   (8)

It is evident that H_A does not require any model output or exploratory analysis, so it is indeed data-agnostic. As soon as we use a sparsity-constrained model that has censoring capabilities, we no longer observe (Y, X) from its data generating process.
Consequently, unless our hypotheses depend on how we built the model, or equivalently on how the data was censored, the data-agnostic hypotheses forgo power without any benefit in false discovery control. Therefore, we formulate the hypothesis on the ith covariate H_{0,i}^{(j)} only if i ∈ M^{(j)}, that is, only if it is in the active set. Conditional on observing the model output, there is no inference statement to be made about H_{0,i}^{(j)} if i ∉ M^{(j)}, because its estimator is censored by the model. We denote by K_i the set of units for which the ith covariate is active. We define the cross-sectional hypothesis for the ith covariate as

H_{0,i} = ∩_{j∈K_i} H_{0,i}^{(j)} | M,   ∀i : K_i ≠ ∅.   (9)

By combining all covariates {i : K_i ≠ ∅} that show up at least once in one of the active sets of our sparse linear estimators, we arrive at a data-driven hypothesis associated with our panel. This is defined as follows:

DEFINITION 2 (Data-driven family). The data-driven family of hypotheses conditional on M is

H_D = {H_{0,i} | i : K_i ≠ ∅}.   (10)

This demonstrates the non-trivial nature of writing down a hypothesis in a high-dimensional panel: we can only collect K_i, the set of units for which the ith covariate is active, after seeing the sparse selection estimation result.

4 Multiple Testing Adjustment for Data-Driven Hypothesis

4.1 Simultaneity Counts through Panel Localization

We show how to adjust for multiple testing of data-driven hypotheses. Given the p-values p_i^{(j)} for i ∈ M and j ∈ K_i, we form the data-driven hypothesis H_D. Our goal is to reject members of H_D while controlling the Type I error, and the common way to measure such error is the family-wise error rate. This is the same underlying logic that is used to define confidence intervals and determine significance of covariates in a conventional setup. The crucial difference is that we need to account for multiple testing given the large number of cross-sectional units.
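As a concrete sketch of how the data-driven family is assembled, the following snippet collects the sets K_i from the first-stage active sets of the running example. The variable names and data layout are our own illustration, not notation from the paper:

```python
# Active sets of the running example: unit j -> covariates selected in its
# first-stage sparse regression (N = 6 units, J = 4 covariates).
M = {1: {2, 3}, 2: {1, 3, 4}, 3: {2, 3}, 4: {3, 4}, 5: {2, 3, 4}, 6: {2, 3}}

def active_units(M, J):
    """K_i: the set of units whose first-stage regression selected covariate i."""
    return {i: {j for j, Mj in M.items() if i in Mj} for i in range(1, J + 1)}

K = active_units(M, J=4)
# H_D contains one univariate null beta_i^{(j)} = 0 per pair (i, j with j in K_i),
# so its cardinality is the total number of such pairs.
print(K[1])                               # {2}
print(sum(len(Ki) for Ki in K.values()))  # 14, i.e. |H_D|
```

The count of 14 matches the cardinality |H_D| of the data-driven hypothesis in (7).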
The family-wise error rate (FWER) is defined as follows:

DEFINITION 3 (Family-wise error rate). Let V denote the number of rejections of H_{0,i}^{(j)} | M^{(j)} when the null hypothesis is true. The family-wise error rate (FWER) is P(V ≥ 1).

Similar to the conventional definition, we simply count the false rejections V and define FWER as the probability of making at least one false rejection. Importantly, Definition 3 accounts for the fact that we might repeatedly test on Σ_{j∈[N]} |M_j| hypotheses rather than a single hypothesis test of the form H_{0,i}^{(j)} : β_i^{(j)} = 0 | M^{(j)}. Our contribution to FWER control in the panel setting is thus to take into consideration both the multiplicities in units and covariates when we deal with the "matrix" of p-values P. To achieve this goal, we propose a new simultaneity count for the ith covariate, calculated as

N_i = Σ_{j∈K_i} |M_j|.   (11)

Figure 2 illustrates the simultaneity counting for our running example with N = 6 units and J = 4 covariates.
The blue boxes represent the active set for a specific covariate. The yellow boxes indicate the "co-active" covariates, which have to be accounted for in a multiple testing adjustment. In the case of the first covariate j = 1, only the second unit n = 2 has selected this covariate. This second unit has also selected covariates j = 3 and j = 4, which are jointly tested with the first covariate. Hence, they are "co-active", and the simultaneity count equals N_1 = 3. Intuitively, N_j represents all relevant comparisons for the jth covariate because it counts how many covariates are active with the jth covariate in the regressions. Hence, N_j quantifies the number of "multiple tests" for each covariate.

Figure 2: Simultaneity counts N_i in the illustrative example. (a) N_1 = 3, (b) N_2 = 9, (c) N_3 = 14, (d) N_4 = 8. This figure shows the simultaneity counts N_i in the illustrative example. The subplots represent the simultaneity counts for the J = 4 covariates. The blue boxes indicate the active set K_j of the jth covariate, while yellow boxes indicate the "co-active" covariates of the jth covariate. The simultaneity counts are the sum of yellow and blue boxes.

In subplot 2(a), we see that K_1 = {2} for the 1st covariate, indicated by the blue box, because it is only active in the second unit's regression. The multiple testing adjustment needs to consider all yellow boxes, and N_1 = 3 is thus the total count of 1 blue and 2 yellow boxes. Similarly, for the second covariate, K_2 = {1, 3, 5, 6}, so we shade boxes yellow for the 2nd, 3rd and 5th units and obtain N_2 = 9.
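The counting just illustrated can be verified in a few lines. The dictionary below restates the running example's active sets in our own encoding:

```python
# Unit j -> covariates in its first-stage active set M^{(j)}.
M = {1: {2, 3}, 2: {1, 3, 4}, 3: {2, 3}, 4: {3, 4}, 5: {2, 3, 4}, 6: {2, 3}}
# K_i: units where covariate i is active.
K = {i: {j for j, Mj in M.items() if i in Mj} for i in range(1, 5)}

# N_i = sum over j in K_i of |M^{(j)}|: blue plus yellow boxes in Figure 2.
N = {i: sum(len(M[j]) for j in K[i]) for i in K}
print(N)  # {1: 3, 2: 9, 3: 14, 4: 8}
```

The output reproduces the four panel counts of Figure 2.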
We can already see that our design of the simultaneity count takes all relevant pairwise comparisons into consideration, but avoids counting the white boxes, which would cause overcounting and result in over-conservatism. Our multiplicity counting is a generalization of the classical Bonferroni adjustment for multiple testing. A conventional Bonferroni method for the data-agnostic hypothesis H_A has a simultaneity count of |H_A| = N · J = 24 for testing each covariate. A direct application of a vanilla Bonferroni method to the panel of all selected units and the data-driven hypothesis H_D would use a simultaneity count of |H_D| = 14 for testing each covariate. Our proposed multiplicity counting is a refinement that leverages the structure of the problem and takes the heterogeneity of the active sets for each covariate into account. Our count is only N_1 = 3, N_2 = 9 and N_4 = 8 for the covariates j = 1, 2 and 4. Only for covariate j = 3 is the simultaneity count the same as a vanilla Bonferroni count applied to H_D, i.e. N_3 = 14.

In addition to the simultaneity count of each covariate, we need an additional "global" metric for our testing procedure. We define a panel cohesion coefficient ρ as a scalar that measures how sparse or de-centralized the proposed hypothesis family is:

ρ = ( Σ_j |K_j| / N_j )^{-1}.   (12)

Figure 3: Illustration of the cohesion coefficient. (a) ρ = J^{-1} = 0.25, (b) ρ = 0.44, (c) ρ = 1. This figure illustrates the cohesion coefficient ρ in three separate examples. It shows the smallest, largest and in-between cases of ρ. The columns represent the J = 4 different covariates. The blue boxes indicate the active sets for each panel.

The panel cohesion coefficient ρ is conditional on the data-driven selection of the overall panel. It is straightforward to compute once we observe the sparse selection of the panel. This coefficient takes values between J^{-1} and 1,² where larger values of ρ imply that the active set is more dependent in the cross-section. This can be interpreted as the panel Y having a stronger dependency due to the covariates X. Intuitively, in the extreme case when ρ = J^{-1}, the panel can be separated into J smaller problems, each containing a subset of response units explained by only one covariate. Thus the panel would be very incohesive, and could be studied with J independent tests. In the other extreme, if ρ approaches 1, the first-stage models include all active covariates for all units. We consider this a very cohesive panel.
If ρ is between these bounds, the panel is cohesive in a non-trivial way, such that some units can be explained by some covariates and there is no clear separation of the panel into independent subproblems. Figure 3 illustrates the panel cohesion coefficient in three examples. The subplots show three active sets that are different from our running example. The left subplot 3(a) shows the extreme case of ρ = J^{-1}, where the panel is the least cohesive. The right subplot 3(c) illustrates the other extreme of ρ = 1, where the panel is the most cohesive. The middle subplot 3(b) is the complex case of a medium cohesion coefficient.

²We prove this bound in the Appendix, without leveraging sparsity of the first-stage models but rather as an algebraic result with intuitive interpretations.
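A small numerical check of equation (12) and its bounds, using two stylized panels in the spirit of the Figure 3 extremes. The active sets below are our own illustrative choices, not the paper's:

```python
def cohesion(M, J):
    """Panel cohesion coefficient rho = (sum_i |K_i| / N_i)^(-1), eq. (12)."""
    K = {i: [j for j, Mj in M.items() if i in Mj] for i in range(1, J + 1)}
    N = {i: sum(len(M[j]) for j in K[i]) for i in K if K[i]}
    return 1.0 / sum(len(K[i]) / N[i] for i in N)

# Fully separable panel: each unit loads on exactly one covariate -> rho = 1/J.
separable = {1: {1}, 2: {2}, 3: {3}, 4: {4}, 5: {1}, 6: {2}}
# Fully cohesive panel: every unit selects all covariates -> rho = 1.
cohesive = {j: {1, 2, 3, 4} for j in range(1, 7)}

print(cohesion(separable, 4), cohesion(cohesive, 4))  # 0.25 1.0
```

The two evaluations hit the lower bound J^{-1} and the upper bound 1 exactly, matching the separable and cohesive extremes described above.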
Our novel simultaneity count and cohesiveness measure are the basis for modifying a Bonferroni test for FWER-controlled inference. Theorem 2 formally states the FWER control. The proof is in the Online Appendix.

THEOREM 2 (FWER control). The following rejection rule has FWER ≤ γ on H_D:

min_{n∈K_j} { p_j^{(n)} } ≤ ρ γ / N_j  ⇒  Reject H_{0,j},   (13)

where p_j^{(n)} are valid p-values for each univariate unit n, and ρ is the panel cohesion coefficient.

This completes the joint testing procedure. First, we calculate p-values after running a sparse linear estimator time-series regression. Second, we use the sparse linear estimator output to write down a hypothesis. Third, we provide a FWER-control inference procedure by combining the p-values across the cross-section and testing the hypothesis.
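A sketch of the rejection rule (13), with a naive Bonferroni threshold computed alongside for comparison. The p-values below are invented for illustration (only active (covariate, unit) pairs carry entries), and the function name is ours:

```python
M = {1: {2, 3}, 2: {1, 3, 4}, 3: {2, 3}, 4: {3, 4}, 5: {2, 3, 4}, 6: {2, 3}}
# p[(i, j)]: valid p-value for unit j's coefficient on covariate i (made up).
p = {(1, 2): 0.005,
     (2, 1): 0.002, (2, 3): 0.30, (2, 5): 0.40, (2, 6): 0.50,
     (3, 1): 1e-4, (3, 2): 0.20, (3, 3): 0.25, (3, 4): 0.30,
     (3, 5): 0.35, (3, 6): 0.40,
     (4, 2): 6e-4, (4, 4): 0.30, (4, 5): 0.45}

def reject(p, M, J, N_units, gamma):
    K = {i: [j for j, Mj in M.items() if i in Mj] for i in range(1, J + 1)}
    N = {i: sum(len(M[j]) for j in K[i]) for i in K if K[i]}
    rho = 1.0 / sum(len(K[i]) / N[i] for i in N)
    pmin = {i: min(p[(i, j)] for j in K[i]) for i in N}
    panel = {i for i in N if pmin[i] <= rho * gamma / N[i]}     # Theorem 2
    bonf = {i for i in N if pmin[i] <= gamma / (J * N_units)}   # naive Bonferroni
    return panel, bonf

print(reject(p, M, J=4, N_units=6, gamma=0.01))  # ({3, 4}, {3})
```

With these illustrative p-values, the weak covariate j = 4 (active in only three units) survives the panel-localized threshold ργ/N_4 but not the naive γ/(JN) cut, which previews the power gain for weak covariates in a cohesive panel.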
The difference between a naive Bonferroni and our FWER control is particularly pronounced for weak covariates that affect only a subset of the cross-sectional units. Given a FWER control level of γ, the rejection threshold for a naive Bonferroni test is γ/(JN) for every covariate. The rejection threshold for our FWER control is always higher, and differs in particular when N_j is small and ρ is large. This is the case for weak covariates in a cohesive panel. As is common in statistical inference, we focus on Type I error control. Type II error rates require the specification of alternatives. While we do not provide formal theoretical results for the power of our inference approach, we show comprehensively in the simulation and empirical parts that our approach has substantially higher power than conventional approaches. We point out that the validity of our procedure holds for unbalanced panels as well.
This is because even when there are different numbers of observations for the nth and mth units, i.e. T_n ≠ T_m for n ≠ m, they can still be estimated separately in the first stage of the regularized regression. The hypothesis testing and selection of a parsimonious model only require the matrix P of valid p-values, which can be based on different samples.

4.2 Least Number of Covariates: Traversing the Threshold

The typical logic of statistical inference is to determine which covariates we should admit from X_M, given a significance level γ. We use K to denote the number of selected covariates. When γ is specified as a lower quantity, we expect K to decrease as well; that is, the rejection becomes harsher.
As the number of admitted covariates of our procedure is monotone in γ, we want to ask the following converse question: how low do we need to set γ such that we reject K covariates? Concretely, we are interested in finding

γ*(K) = sup { γ | K = Σ_{j=1}^{J} 1[ min_{n∈K_j} { p_j^{(n)} } ≤ ρ γ / N_j ] }.   (14)

Let p_j = min_{n∈K_j} { p_j^{(n)} } be the 1st order statistic for j = 1, ..., J. Then (14) is simply the K-th order statistic of N_j p_j / ρ:

γ*(K) = min { N_i p_i / ρ | ∃ j_1, j_2, ..., j_K ∈ {1, ..., J} : N_i p_i ≥ N_{j_k} p_{j_k} }.   (15)

Since this minimization scan is monotone, we can determine how many covariates at least should be admitted, given a control level, which is similar to the "SimpleStop" procedure described in Choi, Taylor, and Tibshirani (2017). The following corollary formalizes this inversion method that finds the least number of covariates to admit:

COROLLARY 1 (Least number of covariates). Given the FWER level γ, there exists a unique number K*(γ) such that

K*(γ) = { arg max_{0≤K≤J} { K : γ*(K) ≤ γ }   if ∃ K : γ*(K) ≤ γ,
        { d                                    otherwise.   (16)

The statement simply says that the simplest linear model should have at least K*(γ) covariates for a given γ. Note that it is possible that, for example, γ*(5) and γ*(6) are both equal to 0.05, while γ*(7) > 0.05. In this case the minimum number of covariates is K*(0.05) = 6, because it does not hurt FWER-wise to include 6 covariates in the model. Hence, we are making a slightly different statement than that there would be exactly K*(γ) covariates in the true linear model. The number of covariates is obviously conditional on the set of candidate covariates X, and we can only make statements for this given set. In our empirical study we consider candidate asset pricing factors X to explain the investment strategies Y. More generally, the linear model that we consider is often referred to as a factor model. Therefore, we will also refer to the selected covariates as factors, and use these two expressions as synonyms moving forward. This directly links our procedure to the literature on estimating the number of factors to explain a panel.
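The inversion in (14)-(16) reduces to reading off order statistics of the scores N_j p_j / ρ. A minimal sketch under assumed illustrative scores (chosen to match the magnitudes that appear in Table 1 below); using `bisect` is our implementation choice:

```python
import bisect

# Scores N_j * p_j / rho for the J = 4 covariates (illustrative values).
scores = sorted([0.024, 0.028, 0.0005, 0.001])

def gamma_star(K):
    """gamma*(K): the K-th smallest score, as in eq. (15)."""
    return scores[K - 1]

def K_star(gamma):
    """Largest K with gamma*(K) <= gamma: the least number of covariates."""
    return bisect.bisect_right(scores, gamma)

print(gamma_star(2))  # 0.001
print(K_star(0.01))   # 2
print(K_star(0.05))   # 4
```

Note this sketch implements only the first branch of (16); the monotonicity of the sorted scores is what makes the binary search valid.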
A common approach in this literature is to use statistics based on the eigenvalues of either Y or X to make statements about the underlying factor structure. Our approach is different, as it provides significance levels for the selected factors and FWER control for the number of factors. Table 1 illustrates the estimation of the number of factors and their ranking with our running example introduced in Figure 1. We calculate the simultaneity counts N_j as given in (11) and demonstrated in Figure 2, and p_j as the smallest p-value associated with the jth covariate. Then, the rejection rule in Theorem 2 is based on whether a pre-specified level γ satisfies p_j < ργ/N_j, which is equivalent to N_j · p_j/ρ < γ. Thus, the natural ranking of the covariates is to sort all covariates in ascending order of their N_j · p_j/ρ values, as shown in Table 1.

Table 1: Sorted p-values for the running example

                          Simultaneity count for H_D     Conventional Bonferroni for H_A
Factor (j)   p_j          ρ⁻¹·N_j      ρ⁻¹·N_j·p_j       J·N       J·N·p_j
3            < 0.001      22.1         < 0.001           24        0.002
4            < 0.001      11.1         0.001             24        0.003
1            0.005        4.7          0.024             24        0.120
2            0.002        14.3         0.028             24        0.051

This table constructs "significance" levels for the running example introduced in Figure 1. We compare the simultaneity count for the data-driven hypotheses H_D and a conventional Bonferroni count for the data-agnostic hypotheses H_A. The products ρ⁻¹·N_j·p_j, respectively J·N·p_j, can be interpreted as the significance levels for the corresponding approach. Given a FWER control level γ, all factors with ρ⁻¹·N_j·p_j (respectively J·N·p_j) below this threshold are selected.

It is then trivial to determine K*(γ) for any choice of γ. For example, for γ = 1%, we would select factors 3 and 4, but not 1 and 2. On the other hand, for γ > 2%, we would include all four factors. Hence, the ranking of N_j·p_j/ρ directly maps into K*(γ).
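The ranking and cutoff rule above is easy to operationalize. The following minimal sketch sorts covariates by the adjusted values N_j·p_j/ρ and reads off K*(γ); the arrays p and N and the constant rho are hypothetical placeholders rather than the paper's estimates.

```python
# Sketch of the ranking rule: sort covariates by N_j * p_j / rho and keep
# those below the FWER level gamma. All numbers are hypothetical placeholders.
import numpy as np

def select_factors(p, N, rho, gamma):
    """Rank covariates by their adjusted values N_j * p_j / rho and
    return (importance ranking, selected covariate indices)."""
    adjusted = N * p / rho
    order = np.argsort(adjusted)                       # most significant first
    selected = [j for j in order if adjusted[j] < gamma]
    return order, selected

p = np.array([0.005, 0.002, 0.0005, 0.0005])  # smallest p-value per factor
N = np.array([3, 9, 14, 7])                   # simultaneity counts
rho = 0.63                                    # constant from Theorem 2

order, sel = select_factors(p, N, rho, gamma=0.01)
K_star = len(sel)   # number of selected factors at FWER level gamma
```

Because the rule only compares each adjusted value to γ, enlarging γ can only grow the selected set, which is why a single sorted list determines K*(γ) for every level at once.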
The list of N_j·p_j/ρ encompasses more information than just the number of factors. Naturally, it provides an importance ranking of the factors. Furthermore, the number N_j reveals if significant factors are "weak". In our case, factor 1 has N_1 = 3, which indicates that it affects only a small number of hypotheses. Its p-value p_1 is sufficiently small to still imply significance in terms of FWER control.

For comparison, Table 1 also includes the corresponding analysis for the data-agnostic hypothesis and a conventional Bonferroni correction. The Bonferroni analysis uses the same p-values but a different multiple testing adjustment. In our case, the p-values would be multiplied by J·N = 24, as this corresponds to the total number of hypothesis tests. This will obviously make the inference substantially more conservative. Indeed, even for a FWER control of γ = 4%, we would only select factors 3 and 4. We would need to raise the FWER control to γ = 12% to include factor 1. Hence, weak factors, like factor 1, are more likely to be discarded by the data-agnostic hypothesis with conventional multiple testing adjustment. We want to emphasize that a data-agnostic hypothesis with conventional Bonferroni correction does provide correct FWER control, but it is overly conservative. First, by construction, the data-agnostic Bonferroni approach tests a larger number of hypotheses, which means that the corresponding "significance levels" will always be lower or equal to our data-driven simultaneity count. Second, the data-agnostic Bonferroni approach does not differentiate the "strength" of the factors, while our approach provides a selection-based heterogeneous adjustment of the p-values.
This is essential for detecting weak factors.

Having introduced all building blocks of our novel method to detect covariates, we put the entire procedure together as "Panel-PoSI":

PROCEDURE 1. Panel-PoSI
The Panel-PoSI procedure consists of the following steps:
1. For each unit n = 1, ..., N, we fit a linear sparse model β̂(n)_{X,Y}(λ, ω) given (X, Y, λ, ω). We suggest cross-validation to select the LASSO penalty λ. We construct the sparse estimators β̄(n) and the corresponding p-values for the active covariates for each unit, and collect them in the "matrix" of p-values P.
2. We collect the panel-level sparse model selection event M and construct the data-driven hypothesis H_D.
3. Given the FWER control level γ and based on the simultaneity counts N_j, we make the inference decision for the sparse model. We can rank covariates in terms of their significance and select a parsimonious model that explains the full panel.

As we now have all results in place, we can summarize the advantages of our procedure. First, we want to clarify that our goals and results are different from just some form of optimal shrinkage selection. Selecting a shrinkage parameter with some form of cross-validation in a regularized estimator like LASSO does not provide the same insights and model that we do. A shrinkage estimator can either be applied to each unit separately, as we do in our first step, or to the full panel in a LASSO panel regression.
The separate covariate selection for each cross-sectional unit does not answer the question which covariates are needed to explain the full panel jointly. A shrinkage selection on the full panel with some form of panel LASSO can neglect weaker factors, as those receive a low weight in the cross-validation objective function. Second, tuning parameter selection with cross-validation requires a sufficiently large amount of data. Our approach is attractive as we can do the complete analysis on the same data. That means, an initial LASSO is used to first reduce the number of covariates, but this set is then further trimmed down using inferential theory. Hence, we can construct a parsimonious model even for data with a relatively short time horizon, but large cross-sectional dimension. Third, the statements that we can make are much richer than a simple variable selection. We can formally assess the relative importance of factors in terms of their significance. The model selection is directly linked to a form of significance level, which allows us to assess the relevance of including more factors. Last but not least, we can also make statements about the strength of factors. In summary, Panel-PoSI is a disciplined approach based on formal statistical theory to construct and interpret a parsimonious model.

5 Ordered Multiple Testing on Nested Hypothesis Family

So far, our hypothesis family H_D has no hierarchy and consequently, we have not imposed a sequential structure on the admission order of the covariates of X. However, there are cases where the covariates or factors warrant a natural order such that the family possesses a special testing logic. A hierarchical structure in covariates arises when the inclusion of the next covariate only makes sense if the previous covariate is included.
One example would be if the next covariate refines a property of the previous covariate. Another case is the use of principal component (PC) factors. The conventional logic is to include PCs sequentially from the dominating one to the least dominating one. This is similar to the motivation of Choi, Taylor, and Tibshirani (2017), but different from them, we treat the PCs as exogenous without taking the estimation of the PCs explicitly into account. In this section, we will use exogenous PCs as hierarchical covariates, as this is the main example in our empirical study. However, all the results hold for any set of exogenous hierarchical covariates. Without loss of generality, we presume that the jth column of X is the jth nested factor. A k-order nested model N(k) is of the following form:

N(k) model:   Y = X_{[k]} β_{[k]}    (17)

where [k] = {1, ..., k} is the set that includes indices up to k. For example, a hierarchical three-factor model corresponds to X_{{1,2,3}}. When formulating our hypothesis family, we must represent the sequential testing structure. This is reflected in our definition of nested families of hypotheses:

DEFINITION 4. Data-driven nested family
The data-driven nested family of hypotheses conditional on M is

H_N = {H_{N,k} : k = 0, 1, ..., J},   H_{N,k} = ⋂_{j ∈ K_k} H^(j)_{N,k} | M,   H^(j)_{N,k} : |{i′ : β^(j)_{i′} ≠ 0}| ≤ k.    (18)

H_{N,0} completes the case when no rejection on any factor is made. Whenever H_{N,k} is true, then H_{N,k′} is also true for k < k′ ≤ J.
Moreover, in the cases where K_k = ∅ but K_{k′} ≠ ∅ with k < k′, the notation ensures that the hypothesis H_{N,k} is included in H_N simply because K_{k′} is present. In other words, if a less dominating hypothesis H_{N,k′} is suggested by the data (that is, its active set is non-empty, K_{k′} ≠ ∅), H_N automatically includes all H_{N,k} for k ≤ k′. The FWER control property needs to be adapted to the nested nature of this family. Choi, Taylor, and Tibshirani (2017) argue that the proper measurement is to control the over-estimation of the ordered factor count at level γ, as follows:

DEFINITION 5. FWER for nested family
For a test that rejects H_{N,k} for k = 1, 2, ..., k̂ of H_N, the FWER control at the level γ satisfies P(k̂ ≥ s) ≤ γ, where s is the true factor count.

Given the hierarchical belief about the model, we need to add the following additional assumption:

ASSUMPTION 1.
Tail p-values
Under H^(j)_{N,k}, we have p^(j)(i′) iid∼ Unif[0, 1] for i′ > k.

Assumption 1 only needs to hold for the tail hierarchical covariates. In the case of PCs, it only applies to the lower-order tail PC factors that should not be included for a given null hypothesis. For example, if the true model is H_{N,s}, we only need p^(j)(i) iid∼ Unif[0, 1] for i > s, which is a usual type of assumption in this literature, such as in G'Sell, Wager, Chouldechova, and Tibshirani (2016). Moreover, because the nested nature guarantees that the higher-order PCs are more likely to be null, a step-down procedure is expected to increase the power relative to a step-up procedure.

Figure 4: Example of hierarchical simultaneity counts N^order_k for H_N
(a) N^order_4 = 3   (b) N^order_3 = 5   (c) N^order_2 = 8   (d) N^order_1 = 12
This figure shows the simultaneity counts N^order_k in an illustrative example. The subplots represent the simultaneity counts for the J = 4 covariates and N = 6 units. The dark blue columns present the active factors, while the light blue columns capture factors of higher order. The sub-plots from left to right represent our calculation order, from the highest-order factor to the 1st factor.

As our focus is to control for false discoveries, we also need to adjust our simultaneity counts to the sequential testing. Concretely, we first take a union to obtain the active unit set K^order_k and then calculate conservative simultaneity counts N^order_k:

K^order_k = ⋃_{i ∈ {k, k+1, ..., J}} K_i,   N^order_k = Σ_{j ∈ K^order_k} |M_j|.    (19)

It is possible for some |M_k| to be 0 (that is, the kth PC could be inactive for all units), but its N^order_k would be 0 if and only if the higher-order PCs all have |M_{k′}| = 0 for k′ > k. Figure 4 illustrates the process of our step-down simultaneity count.
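The union-then-count construction in (19) can be sketched in a few lines; the unit active sets M_j below are hypothetical and only mimic the structure of the illustration (J = 4 covariates, N = 6 units).

```python
# Compute the ordered simultaneity counts N_order_k of (19) by sweeping
# from covariate J down to 1. The unit active sets M_j are hypothetical.
def ordered_counts(M, J):
    """M[j]: active covariate set of unit j (1-indexed units).
    Returns {k: N_order_k} for k = 1, ..., J."""
    # K_k: units whose sparse model includes covariate k
    K = {k: {j for j in M if k in M[j]} for k in range(1, J + 1)}
    N_order, units = {}, set()
    for k in range(J, 0, -1):
        units |= K[k]                 # K_order_k = union of K_i over i >= k
        N_order[k] = sum(len(M[j]) for j in units)
    return N_order

M = {1: {1, 2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}, 5: {1, 2}, 6: {1}}
counts = ordered_counts(M, J=4)   # {4: 4, 3: 6, 2: 10, 1: 12}
```

Sweeping from k = J down to k = 1 adds each unit's |M_j| exactly once, at the highest-order covariate for which that unit is active, so the counts form the staircase N^order_J ≤ ... ≤ N^order_1.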
From the left, we start with factor k = 4 and move step-wise down to factor k = 1 on the right. The dark blue columns present the active factors, while the light blue columns capture factors of higher order. In the left-most sub-figure, we only need to account for the 4th PC, implying N^order_4 = 3, whereas in the mid-left sub-figure, the 3rd PC has N^order_3 = 2 + 3 = 5. Eventually, in the right-most sub-figure, we have swept through the entire panel and the 1st PC has a simultaneity count of N^order_1 = 12.

Now we can introduce a step-down procedure adapted to the nested structure of H_N:

PROCEDURE 2. Step-down rejection of the nested ordered family H_N
The step-down rejection procedure consists of the following steps:
1. For each k ∈ {1, ..., J} calculate the ordered simultaneity count N^order_k.
2. For each k ∈ {1, ..., J} calculate the approximated Rényi representation Z^order_k and its transformed reversed order statistic q^order_k:

Z^order_k = Σ_{i=k}^{J} [ Σ_{j ∈ K_i} −ln p^(j)(i) ] / ( N^order_1 − N^order_{i+1} · 1{i ≠ J} ),   q^order_k = exp(−Z^order_k)    (20)

3. Reject hypotheses 1, 2, ..., k̂, where k̂ = max{k : q^order_k ≤ γ N^order_k / (J N)}.

This procedure will have FWER control at level γ, as stated in the following theorem:

THEOREM 3.
FWER control for ordered hypothesis
Under Assumption 1, Procedure 2 has FWER control of γ for the ordered hypothesis H_N.

The proof is deferred to the Online Appendix. This design extends Procedure 2 from G'Sell, Wager, Chouldechova, and Tibshirani (2016) and the "Rank Estimation" of Choi, Taylor, and Tibshirani (2017), both of which focus on a single sequence of p-values rather than the panel setting. In Step 2, we use Assumption 1 to transform the p-values into −ln p^(j)(i), which are i.i.d. standard exponential random variables. Since the family H_N has J members, we need to modify our simultaneity count and, in a sense, condense the panel into a sequence of statistics associated with the ordered covariates. We build a staircase sequence of conservative simultaneity counts N^order_k in Step 1 to accumulate the number of p-values we use up to the kth ordered covariate, starting from the end. By the Rényi representation of Rényi (1953), the Z^order_k of Step 2 approximate exponential order statistics and the q^order_k approximate uniform order statistics. The nature of these approximations is to create a more conservative rejection, the technical details of which are examined in the proof in our Online Appendix. Finally, we run the order statistics through a step-down procedure proposed by Simes (1986) so that we find k̂, the largest number of ordered covariates rejected by the data with FWER control. Also note that even if the global null, i.e. H_{N,0}, is true, and every linear sparse model active set is empty, that is N^order_1 = 0, the procedure in Step 3 is still valid because we do not reject H_{N,1}.
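The final rejection in Step 3 of Procedure 2 reduces to a single threshold comparison per ordered hypothesis. In the sketch below, the order statistics q^order_k are hypothetical inputs, since in the procedure they are produced from the panel p-values via the transform in (20).

```python
# Step 3 of Procedure 2: step-down rejection with FWER control. The
# q_order values are hypothetical inputs; in the procedure they come
# from the Renyi-representation transform in (20).
def stepdown_k_hat(q_order, N_order, gamma, J, n_units):
    """Return k_hat = max{k : q_order_k <= gamma * N_order_k / (J * N)},
    or 0 when no ordered hypothesis is rejected."""
    rejected = [k for k in range(1, J + 1)
                if q_order[k] <= gamma * N_order[k] / (J * n_units)]
    return max(rejected, default=0)

N_order = {1: 12, 2: 10, 3: 6, 4: 4}            # ordered simultaneity counts
q_order = {1: 0.001, 2: 0.01, 3: 0.02, 4: 0.5}  # hypothetical order statistics
k_hat = stepdown_k_hat(q_order, N_order, gamma=0.05, J=4, n_units=6)
# hypotheses H_N,1, ..., H_N,k_hat are rejected
```

Taking the maximum over all k that pass their threshold implements the step-down logic: hypotheses 1 through k̂ are rejected even if an intermediate q^order_k misses its own cutoff, and an empty set of passing k returns k̂ = 0, i.e. no rejection.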
6 Simulation

We demonstrate in simulations that our inferential theory allows us to select better models. We compare different estimation approaches to select covariates and show that our approach better trades off false discoveries and correct selections and hence results in a better out-of-sample performance.

Table 2 summarizes the benchmark models. Our framework contributes along three dimensions: the selection step for the sparse model, the construction of the hypothesis, and the multiple testing adjustment. We consider variations along these three dimensions, which yields in total six estimation methods. By varying the different elements of the estimators, we can understand the benefit of each component.
Table 2: Summary of estimation methods

Name                   | Abbreviation | Selection          | Hypothesis      | Multiple Testing   | Rejection rule
Naive OLS              | N-OLS        | OLS without LASSO  | Agnostic H_A    | No adjustment      | p_OLS < γ
Bonferroni OLS         | B-OLS        | OLS without LASSO  | Agnostic H_A    | Bonferroni         | p_OLS < γ/J_N
Naive LASSO            | N-LASSO      | LASSO without PoSI | Agnostic H_A    | No adjustment      | p_LASSO < γ
Bonferroni Naive LASSO | B-LASSO      | LASSO without PoSI | Agnostic H_A    | Bonferroni         | p_LASSO < γ/J_N
Bonferroni PoSI        | B-PoSI       | LASSO with PoSI    | Agnostic H_A    | Bonferroni         | p_PoSI < γ/J_N
Panel PoSI             | P-PoSI       | LASSO with PoSI    | Data-driven H_D | Simultaneity count | p_PoSI < ργ/N_i

This table compares the different methods to estimate a set of covariates from a large dimensional panel. For each method, we list the name and abbreviation. The selection refers to the regression approach for each univariate time-series. The hypothesis is either agnostic or data-driven given the selected subset of covariates. The multiple testing adjustment includes no adjustment, a conventional Bonferroni adjustment and our novel simultaneity count for a data-driven hypothesis. The rejection rules combine the valid p-values and the multiple testing adjustment. p_OLS is the p-value for a conventional t-statistic of an OLS estimator. p_LASSO is the p-value without removing the lasso bias or adjusting for post-selection inference, that is, it is simply the OLS p-value using the selected subset of regressors. p_PoSI is the debiased post-selection adjusted p-value based on Theorem 1.
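As an illustration, the rejection rules in the last column of Table 2 translate into simple thresholds on a vector of valid p-values. The function and argument names below are our own sketch, not the paper's code:

```python
import numpy as np

def reject(pvals, gamma, rule, J=None, rho=None, N_i=None):
    """Apply a rejection rule from Table 2 to a vector of valid p-values.
    'none':       p < gamma               (no multiple testing adjustment)
    'bonferroni': p < gamma / J           (J tested covariates)
    'panel':      p < rho * gamma / N_i   (simultaneity-count adjustment;
                                           N_i may vary per covariate)
    """
    if rule == "none":
        thresh = gamma
    elif rule == "bonferroni":
        thresh = gamma / J
    elif rule == "panel":
        thresh = rho * gamma / np.asarray(N_i)
    else:
        raise ValueError(rule)
    return pvals < thresh

pvals = np.array([0.0001, 0.02, 0.2])
selected = reject(pvals, gamma=0.05, rule="bonferroni", J=100)
# only the first covariate survives the 0.05 / 100 threshold
```

Note how the Bonferroni threshold γ/J is much stricter than the unadjusted γ, which is the mechanism behind the under-selection of the Bonferroni benchmarks below.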
Our baseline model is Panel PoSI, which uses the post-selection inference LASSO and a simultaneity count for a data-driven hypothesis. The first component that we modify is the selection of the sparse model. A simple OLS regression without shrinkage does not produce a sparse model. This gives us the methods Naive OLS and Bonferroni OLS. A conventional LASSO results in a sparse selection, but the p-values are adjusted neither for post-selection inference nor for the bias. The corresponding models are the Naive LASSO and the Bonferroni Naive LASSO. The second component is the hypothesis, which is agnostic for all methods besides Panel PoSI. For the comparison models, we either consider no multiple testing adjustment or the conventional Bonferroni adjustment.
Under the Bonferroni multiple testing adjustment we obtain the Bonferroni OLS, the Bonferroni Naive LASSO and the Bonferroni PoSI. The outcome of all estimations is a set of adjusted p-values for the covariates, which we use to select our model for a given target threshold. For a given value of γ, we include a covariate if its adjusted p-value is below the critical value summarized in the last column of Table 2.

We simulate a simple and transparent model. Our panel follows the linear model

    Y_{t,n} = Σ_{j=1}^{J} X_{t,j} β_j^{(n)} + ε_{t,n}   for t = 1, ..., T and n = 1, ..., N.
Figure 5: Design of loadings β. This figure demonstrates the setting of our simulations with 10 factors, where loadings are shaded based on whether they are active. In this staircase setting, the first factor affects all units, the 2nd factor affects 90%, and so on, and lastly the 10th factor affects 10% of all units.

The covariates and errors are sampled independently as normally distributed random variables:

    X_{t,j} ~iid N(0, 1),   ε_t ~iid N(0, Σ).

The noise is either generated as independent noise with covariance matrix Σ = σ²I or as cross-sectionally dependent noise with non-zero off-diagonal elements Σ_{ij} = κ and diagonal elements Σ_{ii} = σ². Note that our theorems for PoSI assume homogeneous noise, while dependent noise violates our assumptions. Hence, the dependent noise allows us to test how robust our method is to misspecification. We set σ² = 2 and κ = 1, but the results are robust to these choices.
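A minimal sketch of drawing the cross-sectionally dependent noise from the equicorrelated covariance Σ, using the stated values σ² = 2 and κ = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, sigma2, kappa = 120, 300, 2.0, 1.0

# Equicorrelated covariance: Sigma_ii = sigma^2, Sigma_ij = kappa for i != j
Sigma = np.full((N, N), kappa)
np.fill_diagonal(Sigma, sigma2)

# T x N noise panel with cross-sectional dependence
eps = rng.multivariate_normal(np.zeros(N), Sigma, size=T)
```

With σ² > κ the matrix Σ is positive definite (its eigenvalues are σ² − κ and σ² + (N − 1)κ), so the draw is well defined.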
We construct the active set based on the staircase structure depicted in Figure 5. Of the J covariates in X, we have K = 10 active independent factors. Figure 5 demonstrates the setting for the 10 factors, where loadings are shaded based on whether they are active. The first factor affects all units, the 2nd factor affects 90%, and so on, and lastly the 10th factor affects 10% of all units. This setting is relevant, and also challenging from a multiple testing perspective. It results in a large cohesion coefficient ρ, which makes the correct FWER control even more important. The loadings are sampled from a uniform distribution if they are in the active set:

    β_j^{(n)} ~iid Unif(−1/2, 1/2) for j in the active set,   β_j^{(n)} = 0 for j outside the active set.

We simulate a panel of dimension N = 120, J = 100 and T = 300 with K = 10 active factors.
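The staircase simulation design can be sketched as follows. Assigning each factor to the first share of units is our illustrative choice, consistent with the described percentages; it is not necessarily the authors' exact assignment:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, T, K = 120, 100, 300, 10

# Staircase active set: factor k affects N*(K-k)/K units,
# i.e. 100%, 90%, ..., 10% of the cross-section
beta = np.zeros((J, N))
for k in range(K):
    n_active = N * (K - k) // K
    beta[k, :n_active] = rng.uniform(-0.5, 0.5, size=n_active)

X = rng.standard_normal((T, J))                   # X_{t,j} ~ N(0, 1)
eps = np.sqrt(2.0) * rng.standard_normal((T, N))  # independent noise, sigma^2 = 2
Y = X @ beta + eps                                # T x N panel
```

Only the first K = 10 rows of β are non-zero; the remaining J − K = 90 covariates are inactive, which is what the selection methods are supposed to detect.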
The first half of the time-series observations is used for the in-sample estimation and selection, while the second half serves for the out-of-sample analysis. All results are averages of 100 simulations. We use the covariates selected on the in-sample data for regressions out-of-sample. Our focus is on the inferential theory, and not on the bias correction for shrinkage. Hence, we first use the inferential theory on the in-sample data to select our set of covariates. Second, we use the selected subset of covariates in an OLS regression on the in-sample data to obtain the loadings. Last but not least, we apply the estimated loadings of the selected subset to the out-of-sample data to obtain the model fit. Note that this procedure helps the Naive LASSO, which in contrast to PoSI LASSO does not have a bias correction.
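The three evaluation steps can be sketched per response unit n. The out-of-sample R² below uses the common 1 − SSE/SST convention, which may differ in detail from the paper's normalization of explained by total variation:

```python
import numpy as np

def oos_r2(X, y, selected, split):
    """Refit OLS in-sample on the selected covariates, then evaluate the
    in-sample loadings on the out-of-sample data."""
    X_in, y_in = X[:split, selected], y[:split]
    X_out, y_out = X[split:, selected], y[split:]
    b_hat, *_ = np.linalg.lstsq(X_in, y_in, rcond=None)  # in-sample loadings
    sse = np.sum((y_out - X_out @ b_hat) ** 2)
    return 1.0 - sse / np.sum(y_out ** 2)

# Toy usage with a known active set {0, 2} (hypothetical numbers)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X[:, 0] + 2 * X[:, 2] + 0.1 * rng.standard_normal(200)
r2 = oos_r2(X, y, selected=np.array([0, 2]), split=100)  # close to 1
```

Because the loadings are re-estimated by OLS after selection, this evaluation does not penalize the Naive LASSO for its shrinkage bias, as noted above.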
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The out-of-sample explained variation is measured by R2, which is the sum of explained variation normalized by the total variation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The rejection FWER is set to γ = 5% or γ = 1%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The LASSO shrinkage penalty λ is selected by 5-fold cross-validation on the in-sample data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' 22 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='" .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' ".' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' " " " "Table 3: Simulation Comparison between Selection Methods Independent noise Method # Selections # False Selections # Correct Selections OOS R2 FWER γ = 5% Panel PoSI 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='8 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='8 7.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='9 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0% Bonferroni PoSI 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='7 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0% Bonferroni Naive LASSO 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0% Naive LASSO 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='4% Bonferroni OLS 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='7% Naive OLS 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 144.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2% FWER γ = 1% Panel PoSI 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='5 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='6% Bonferroni PoSI 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='7 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2% Bonferroni Naive LASSO 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0% Naive LASSO 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='3% Bonferroni OLS 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='5% Naive OLS 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='4 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='5 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='9 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='3% Cross-sectionally dependent noise Method # Selections # False Selections # Correct Selections OOS R2 FWER γ = 5% Panel PoSI 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='9 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0% Bonferroni PoSI 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='4 7.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2% Bonferroni Naive LASSO 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0% Naive LASSO 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='5% Bonferroni OLS 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='9 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='9 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='3% Naive OLS 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='7 73.' 
…                         7     10.0    83.8%

FWER γ = 1%              Selected  False  Correct  OOS R²
Panel PoSI                  7.9     0.6     7.3    10.3%
Bonferroni PoSI             2.4     0.0     2.4     3.9%
Bonferroni Naive LASSO      0.0     0.0     0.0     0.0%
Naive LASSO                 0.0     0.0     0.0     0.0%
Bonferroni OLS              0.3     0.0     0.3     0.4%
Naive OLS                  31.0    21.2     9.8     6.8%

This table compares the selection results for different methods in a simulation.
For each method we report the number of selected covariates, the number of falsely selected covariates and the number of correctly selected covariates. We also report the out-of-sample R² of the models estimated with the selected covariates on the out-of-sample data. All results are averages of 100 simulations. The rejection FWER is set to γ = 5% or γ = 1%. We simulate a panel of dimension N = 120, J = 100, T = 300. The first half of the time-series observations is used for the in-sample estimation and selection, while the second half serves for the out-of-sample analysis. The panel is generated by 10 independent factors. The active set of the factors follows the staircase structure of Figure 5. The first factor affects all units, the second 90%, and lastly the 10th factor affects 10%. The unknown error variance is estimated as a homogeneous sample variance. The noise is either generated as independent noise with covariance matrix Σ = σ²I or as cross-sectionally dependent noise with Σij = κ and Σii = σ² for σ² = 2 and κ = 1.

Table 3 compares the selection results for the different methods. For each method we report the number of selected covariates, the number of falsely selected covariates and the number of correctly selected covariates. We also report the out-of-sample R². The upper panel shows the results for independent noise, while the lower panel collects the results for cross-sectionally dependent noise. Panel PoSI clearly dominates all models.
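The simulation design described above can be sketched in a few lines. The standard normal factors and loadings are our own assumptions for illustration; the text only specifies the dimensions, the staircase active set, and the two noise covariance structures.

```python
import numpy as np

def simulate_panel(N=120, J=100, T=300, K=10, sigma2=2.0, kappa=1.0,
                   dependent_noise=False, seed=0):
    """Simulate a T x N panel from K independent factors with a staircase
    active set: factor 1 affects all units, factor k the first N*(K-k+1)/K."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((T, J))       # J candidate covariates; first K are true factors
    beta = np.zeros((J, N))
    for k in range(K):                    # staircase: 100%, 90%, ..., 10% of units
        n_active = int(N * (K - k) / K)
        beta[k, :n_active] = rng.standard_normal(n_active)
    if dependent_noise:                   # Sigma_ij = kappa off-diagonal, Sigma_ii = sigma2
        Sigma = np.full((N, N), kappa)
        np.fill_diagonal(Sigma, sigma2)
        eps = rng.multivariate_normal(np.zeros(N), Sigma, size=T)
    else:                                 # independent noise, Sigma = sigma2 * I
        eps = np.sqrt(sigma2) * rng.standard_normal((T, N))
    return X, beta, X @ beta + eps
```

With the default arguments this reproduces the paper's dimensions, and the first half of the T rows can be used for in-sample selection as in the text.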
It provides the best trade-off between correct and false selection, which results in the best out-of-sample performance. In the case of γ = 5% and independent noise, Panel PoSI selects 10.8 factors in a model generated by 10 factors. Of these, 7.9 factors are correct. A simple Bonferroni correction is overly conservative. The Bonferroni PoSI selects only 4.7 correct factors. While this overly conservative selection protects against false discovery, it omits over half of the relevant factors, which lowers the out-of-sample performance. Using post-selection inference is important, as a naive LASSO provides wrong p-values, which makes the overly conservative selection even worse. The other extreme is to have neither shrinkage nor a multiple testing adjustment. As expected, the naive OLS has an extreme number of false selections with a correspondingly terrible out-of-sample performance. Tightening the FWER control to 1% lowers the number of false rejections, but also the number of correct selections. It reveals again that Panel PoSI provides the best inferential theory among the benchmark models. Panel PoSI selects 7.5 correct covariates, while it controls the false rejections at 1.1. The overly conservative Bonferroni methods select even fewer correct covariates, which further deteriorates the out-of-sample performance. The gap in OOS R² between Panel PoSI and Bonferroni PoSI widens to 5.4%. All the other approaches cannot be used for a meaningful selection. Panel PoSI performs well even when some of the underlying assumptions are not satisfied. The lower panel of Table 3 shows the results for dependent noise. As the dependence in the noise is relatively strong, it can be interpreted as omitting a relevant factor from the set of candidate covariates X. Even though the PoSI theory is developed for homogeneous noise, Panel PoSI continues to perform very well.
In contrast, the comparison methods perform even worse, and the Bonferroni approaches select even fewer correct covariates.

7 Empirical Analysis

7.1 Data and Problem

Our empirical analysis studies a fundamental problem in asset pricing. We select a parsimonious factor model from a large set of candidate factors that can jointly explain the asset prices of a large cross-section of investment strategies. Our data is standard and obtained from the data libraries of Kenneth French and Hou, Xue, and Zhang (2018). We consider monthly excess returns from January 1967 to December 2021, which results in a time dimension of T = 660. Our test assets are the N = 243 double-sorted portfolios of Kenneth French's data library summarized in Table A.1 in the Appendix. The candidate factors are J = 114 univariate long-short factors based on the data of Hou, Xue, and Zhang (2018). We include all univariate portfolio sorts from their data library that are available for our time period, and construct top-minus-bottom decile factor portfolios. In addition, we include the five Fama-French factors of Fama and French (2015) from Kenneth French's data library. Our analysis projects out the excess return of the market factor. We are interested in which factors explain the component that is orthogonal to market movements. Hence, we regress out the market factor from the test assets and use the residuals as test assets. We also do not include a market factor in the set of long-short candidate factors. The original test assets have a market component, as they are long-only portfolios. Our results are essentially the same when we include the market component in the test assets, with the only difference that we would need to include the market factor as an additional factor in our parsimonious models. The market factor would always be selected by all models as significant, but this by itself is neither a novel nor an interesting result. We present in-sample and out-of-sample results. The in-sample analysis uses the first 330 observations (January 1967 to June 1994), while the out-of-sample results are based on the second 330 observations (July 1994 to December 2021). As in the simulation, we first use the inferential theory on the in-sample data to select our set of covariates. Second, we use the selected subset of covariates in an OLS regression on the in-sample data to obtain the loadings. Last but not least, we use the estimated loadings on the selected subset of factors for the out-of-sample model.
The LASSO penalty λ is selected via 5-fold cross-validation on the in-sample data to minimize the squared errors.³ Hence, LASSO represents a first-stage dimension-reduction tool, and we need the inferential theory to select our final sparse model. We allow our selection to impose a prior on two of the most widely used asset pricing models. More specifically, we estimate models without a prior, and with two specific priors that impose an infinite weight on the Fama-French 3 factors (FF3) and the Fama-French 5 factors (FF5). This prior, as part of PoSI LASSO, enforces that the FF3 and FF5 factors are included in the active set. Note that because we work with data orthogonal to the market return, we do not include the market factor in the prior, but only the size and value factors for FF3 and, in addition, the investment and profitability factors for FF5. We denote these weights by ωFF3 and ωFF5. This is an example where the researcher has economic knowledge that she wants to include in her statistical selection method. We evaluate the models with standard metrics. The root-mean-squared error (RMSE) is based on the squared residuals relative to the estimated factor models. Hence, in-sample the models are estimated to minimize the RMSE. The pricing error is the economic quantity of interest. It is the time-series mean of the residual component of the factor model, and corresponds to the mean return that is not explained by the risk premia and exposure to the factors. In summary, we obtain the residuals as ε̂_{t,n} = Y_{t,n} − X_S β̂_S for the selected factors, where the loadings are estimated on the in-sample data. The metrics are the RMSE and the mean absolute pricing error (MAPE):

RMSE = sqrt( 1/(NT) Σ_{i=1}^{N} Σ_{t=1}^{T} ε̂_{i,t}² ),    MAPE = 1/N Σ_{i=1}^{N} | 1/T Σ_{t=1}^{T} ε̂_{i,t} |.
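Both metrics can be computed directly from the residual panel; a minimal sketch:

```python
import numpy as np

def rmse_mape(Y, X_S, beta_S):
    """RMSE over all T x N residuals, and MAPE: the cross-sectional average
    of the absolute time-series mean residual (the pricing error)."""
    eps = Y - X_S @ beta_S                # residuals relative to the selected factors
    rmse = np.sqrt(np.mean(eps ** 2))     # pools over both i and t
    mape = np.mean(np.abs(eps.mean(axis=0)))  # time-series mean first, then |.| and average
    return rmse, mape
```

Note the asymmetry: residuals with zero time-series mean contribute to the RMSE but not to the MAPE, which is why a model can fit returns well yet still leave pricing errors.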
³ We select λ from the grid exp(a) · log J/√T with a = −8, ..., 8. This grid choice satisfies the assumptions in Chatterjee (2014) and hence Assumption A.4.

In addition to Panel PoSI without and with the FF3 and FF5 priors, we consider the benchmark methods of Table 2. We compare Panel PoSI (P-PoSI), Panel PoSI with infinite priors on FF3 and FF5 (P-PoSI ωFF3 respectively ωFF5), Bonferroni Naive LASSO (B-LASSO), Naive LASSO (N-LASSO), Bonferroni OLS (B-OLS) and Naive OLS (N-OLS). Our main analysis sets the FWER control to the usual γ = 5%.
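The grid of footnote 3 together with the 5-fold cross-validation can be sketched as follows. The coordinate-descent solver and the (1/2T) loss scaling are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO for (1/2T)||y - Xb||^2 + lam * ||b||_1."""
    T, J = X.shape
    b = np.zeros(J)
    col_norm = (X ** 2).sum(axis=0) / T
    r = y - X @ b                         # current residual
    for _ in range(n_iter):
        for j in range(J):
            if col_norm[j] == 0:
                continue
            rho = X[:, j] @ r / T + col_norm[j] * b[j]
            new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm[j]
            r += X[:, j] * (b[j] - new)   # keep residual in sync
            b[j] = new
    return b

def cv_select_lambda(X, y, n_folds=5):
    """5-fold CV over the paper's grid exp(a) * log(J)/sqrt(T), a = -8, ..., 8."""
    T, J = X.shape
    grid = [np.exp(a) * np.log(J) / np.sqrt(T) for a in range(-8, 9)]
    folds = np.array_split(np.arange(T), n_folds)
    best_lam, best_err = None, np.inf
    for lam in grid:
        err = 0.0
        for holdout in folds:
            train = np.setdiff1d(np.arange(T), holdout)
            b = lasso_cd(X[train], y[train], lam)
            err += ((y[holdout] - X[holdout] @ b) ** 2).sum()
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```

The selected λ then determines the first-stage active set, to which the post-selection inference is applied.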
7.2 Asset Pricing Results

Panel PoSI selects parsimonious factor models with the best out-of-sample performance among the benchmarks. For the FWER rate of γ = 5%, the number of factors differs substantially among the different methods. Panel PoSI selects 3 factors. Imposing infinite priors on FF3 or FF5 results in 4 and 5 factors for P-PoSI ωFF3 and ωFF5, respectively. In contrast, the alternative approaches select too many factors. Bonferroni Naive LASSO includes 10, Naive LASSO 70, Bonferroni OLS 107 and Naive OLS 114. These over-parametrized models lead to overfitting of the in-sample data. Figure 6 shows the in-sample and out-of-sample RMSE for each set of double-sorts. The composition of the double sorts is summarized in Table A.1 in the Appendix. The in-sample performance in the left subfigure shows the expected result that more factors mechanically decrease the RMSE. The important findings are in the right subfigure with the out-of-sample RMSE. The uniformly best performing model is Panel PoSI without any priors. In fact, imposing a prior on the Fama-French factors increases the out-of-sample RMSE. The conventional LASSO and OLS estimates have substantially higher RMSE, which can be more than twice as large. The Panel PoSI models also explain the average returns best. In Figure 7, we compare the mean absolute pricing errors among the benchmarks for each set of double sorts.
Importantly, the pricing errors are not used in the objective function of the estimation, and hence the fact that the models with the smallest RMSE also explain expected returns is an economic finding supporting arbitrage pricing theory. Our Panel PoSI has the smallest out-of-sample pricing errors, which can be up to six times smaller compared to the OLS estimates. Including the Fama-French factors as a prior does not improve the models, except for the profitability and investment double sort, which uses the same information as two of the Fama-French factors. The Panel PoSI models select economically meaningful factors. Table 4 reports the ranking of factors based on their FWER bound without a prior and with infinite prior weights on the Fama-French 3 and 5 factors. The rows are ordered by ascending ρ⁻¹ Nj pj, which corresponds to the FWER bound. It allows us to infer the number of factors for different levels of the FWER control.
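As an illustration of how such a ranking could be computed, the sketch below sorts candidate factors by the bound ρ⁻¹ Nj pj and keeps those at or below a target level γ. This thresholding is a simplified reading of the selection rule (the exact step-down procedure is given in the paper), and the factor names, counts Nj and p-values pj are hypothetical inputs:

```python
import numpy as np

def rank_by_fwer_bound(names, N_j, p_j, rho, gamma=0.05):
    """Sort covariates by rho^{-1} * N_j * p_j (the FWER bound) and keep
    those whose bound is at most gamma (simplified one-shot rule)."""
    bounds = np.asarray(N_j, dtype=float) * np.asarray(p_j, dtype=float) / rho
    order = np.argsort(bounds)
    ranked = [(names[i], float(bounds[i])) for i in order]
    selected = [name for name, b in ranked if b <= gamma]
    return ranked, selected

# Hypothetical inputs: per-factor relevant-asset counts N_j and minimum p-values p_j
ranked, selected = rank_by_fwer_bound(
    ["size", "value", "volume", "reversal"],
    N_j=[200, 150, 120, 80],
    p_j=[1e-5, 4e-5, 3e-5, 2e-4],
    rho=0.16,
)
```

With these toy numbers the size factor ranks first, three factors survive γ = 5%, and the reversal factor would only enter at a looser γ = 10%, mirroring the pattern described in the text.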
Setting γ = 5% leads to 3, 4 and 5 factors, respectively, while γ = 1% results in 2, 4 and 5 factors, respectively. In addition to their significance, we can infer the relative importance of factors. The baseline PoSI with γ = 5% selects a size, a dollar trading volume and a value factor. The size and value factors are among the most widely used asset pricing factors. Their selection is in line with their economic importance and confirms the Fama-French 3-factor model. The dollar trading volume factor is less conventional, but is correlated with many assets in our cross-sections. The size factor is the most important as measured by the FWER bound, that is, the product of the number of relevant assets and its minimum p-value is the smallest. The short-term reversal factor is less important and would require a FWER control of 10% to be included. Imposing a prior affects the p-values of PoSI and the simultaneity count.

Figure 6: RMSE across cross-sections. (a) In-sample. (b) Out-of-sample. This figure shows the in-sample and out-of-sample root-mean-squared errors (RMSE) for each cross-section of test assets for different factor models. The test assets are the N = 243 double-sorted portfolios, and we show the RMSE for each set of double-sorts. The rejection FWER is set to γ = 5%. The candidate factors are the 114 univariate factor portfolios. The time dimension is T = 660. We use the first half for the in-sample estimation and selection, while the second half serves for the out-of-sample analysis. We compare Panel PoSI (P-PoSI), Panel PoSI with infinite priors on FF3 and FF5 (P-PoSI ωFF3 respectively ωFF5), Bonferroni LASSO (B-LASSO), Naive LASSO (N-LASSO), Bonferroni OLS (B-OLS) and Naive OLS (N-OLS).
For example, the cohesiveness coefficient increases from ρ = 0.16 for no priors to ρ = 0.18 in the case of the two priors. Hence, the FWER bounds of all factors can change when we impose a prior. The FF3 prior increases the significance of the short-term reversal factor, which is widely used in asset pricing. Interestingly, even for a FF5 prior, the profitability and investment factors remain insignificant.

7.3 Number of Factors

Our method contributes to the discussion about the number of asset pricing factors. Many popular asset pricing models suggest between three and six factors. Our approach allows a disciplined estimate for the number of factors based on inferential theory.

[Figure: heatmaps of RMSE values (in-sample and out-of-sample) by cross-section of double-sorted test assets (rows such as OP×ME, INV×ME, EP×ME, DP×ME, CFP×BEME, OP×BEME, INV×BE) under the methods N-OLS, B-OLS, N-LASSO, B-LASSO, P-PoSI, P-PoSI ωFF3 and P-PoSI ωFF5.]

Figure 7: MAPE across cross-sections. (a) In-sample. (b) Out-of-sample. This figure shows the mean absolute pricing errors (MAPE) for each cross-section of test assets for different factor models. The test assets are the N = 243 double-sorted portfolios, and we show the average |α| for each set of double sorts. The rejection FWER is set to γ = 5%. The candidate factors are the 114 univariate factor portfolios. The time dimension is T = 660. We use the first half for the in-sample estimation and selection, while the second half serves for the out-of-sample analysis. We compare Panel PoSI (P-PoSI), Panel PoSI with infinite priors on FF3 and FF5 (P-PoSI ωFF3 respectively ωFF5), Bonferroni LASSO (B-LASSO), Naive LASSO (N-LASSO), Bonferroni OLS (B-OLS) and Naive OLS (N-OLS).
The level of sparsity of a linear model also depends on the rotation of the covariates. Therefore, we also study the principal components (PCs) of the covariates X as candidate factors. In this case, we use the step-down procedure, which we refer to as "Ordered PoSI" or O-PoSI for short. Figure 8 shows the number of factors for different FWER rates γ. The factor count K∗(γ) is obtained by traversing γ equal to 0.01, 0.02, 0.05 and 0.1. Panel PoSI without priors selects 2 factors for γ = 0.01 and 3 for γ = 0.05.
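The traversal of K∗(γ) over FWER levels can be sketched as follows. This is a minimal illustration under our own assumptions (the bounds are illustrative numbers, not the paper's data): a step-down rule over an ordered family of hypotheses, such as the PCs, keeps adding factors while the per-factor FWER bound stays below γ and stops at the first failure.

```python
def step_down_count(ordered_bounds, gamma):
    """Factor count K*(gamma) from a step-down rule: walk the ordered
    per-factor FWER bounds and stop at the first one exceeding gamma."""
    k = 0
    for bound in ordered_bounds:
        if bound > gamma:
            break
        k += 1
    return k


bounds = [0.004, 0.008, 0.03, 0.2]  # illustrative ordered bounds only
counts = {g: step_down_count(bounds, g) for g in (0.01, 0.02, 0.05, 0.10)}
# counts -> {0.01: 2, 0.02: 2, 0.05: 3, 0.1: 3}
```

Traversing the grid of γ values in this way yields a step function for the factor count, which is how a figure such as Figure 8 can report the number of selected factors at each FWER level.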
Once we impose an infinite weight on the Fama-French 3 factors, we select 4 factors for all FWER levels, while the prior on the Fama-French 5 factors results in a 5-factor model for all FWER levels. The Ordered PoSI with PCA-rotated factors selects 3 factors for all FWER levels. In summary, our results confirm that, depending on the desired significance, the number of asset pricing factors for a good model seems to be between 2 and 4. Note that our analysis is orthogonal to the market factor, which would also be added to the final model. Thus,

[Figure: heatmap of MAPE values by cross-section of double-sorted test assets (rows such as OP×ME, INV×ME, EP×ME, DP×ME) under the compared methods.]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='11 CFP BEME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='12 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='17 OP BEME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='06 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='06 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 INV BE 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 ME N-OLS B-OLS N-LASSO B-LASSO P-POSI P-POSI P-POSI WFF3 WFF5Op 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='19 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='17 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='02 INV ME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 Prior60 ME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='18 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='19 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='17 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 Prior12 ME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 Priorl ME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='18 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='16 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 OP ME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 INV ME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 EP ME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='12 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='07 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='07 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='08 DP ME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='12 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='12 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='07 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='07 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='07 CFP BEME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='23 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='05 OP BEME 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='13 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='05 INV BE 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='12 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='09 ME N-OLS B-OLS N-LASSO B-LASSO P-POSI P-POSI P-POSI WFF3 wFF5Table 4: Selected factors with Panel PoSI Factor Nj pj ρ−1Njpj Order No prior Size (SMB) 1824 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 <0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0001 1 Dollar Trading Volume (dtv 12) 2099 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0001 2 Value (HML) 1191 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0280 3 Short-Term Reversal (srev) 1050 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0974 4 Forecast Revisions (rev 1) 242 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00018 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2782 5 Investment (CMA) 998 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00112 >0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='9999 6 Profitability (RMW) 797 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00123 >0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='9999 7 FF3 prior (ωFF3) Size (SMB) 2802 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0001 1 Value (HML) 2802 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0001 2 Dollar Trading Volume (dtv 12) 779 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0017 3 Short-Term Reversal (srev) 1106 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0049 4 Profitability (RMW) 819 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00006 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='2527 5 Investment (CMA) 874 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00087 >0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='9999 6 FF5 prior (ωFF5) Size (SMB) 2911 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0001 1 Value (HML) 2911 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0001 2 Forecast Revisions (rev 1) 230 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0005 3 Short-Term Reversal (srev) 1140 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0052 4 Dollar Trading Volume (dtv 12) 661 <0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='0072 5 Profitability (RMW) 2911 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1937 6 Investment (CMA) 2911 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00001 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='1996 7 Gross profits-to-assets (gpa) 1151 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='00013 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='8382 8 This table reports ranking of factors based on their FWER bound for no prior, and infinite weight priors on the Fama-French 3 and 5 factors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The test assets are the N = 243 double-sorted portfolios and the candidate factors are J = 114 univariate long-short factors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' The rows are ordered based on sorted ascending ρ−1Njpj, which corresponds to the FWER bound.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' the final model would have between 3 and 5 factors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Table 5 further confirms our findings about the number of asset pricing factors.' 
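The ranking logic behind Table 4 is mechanical: each candidate factor j carries a post-selection p-value pj and a count Nj of test assets for which it is selected, and factors are sorted in ascending order of the FWER bound ρ−1 Nj pj. A minimal sketch of this ordering step, with purely illustrative inputs and a hypothetical value for the panel adjustment ρ:

```python
# Hypothetical sketch of the ranking behind Table 4: each candidate factor j
# has a post-selection p-value p_j and a count N_j of test assets that select
# it; the panel FWER bound is (1/rho) * N_j * p_j, capped at 1, and factors
# are ranked in ascending order of that bound.
def rank_factors(factors, rho):
    """factors: iterable of (name, N_j, p_j); rho: panel adjustment (assumed)."""
    bounds = [(name, min(n * p / rho, 1.0)) for name, n, p in factors]
    return sorted(bounds, key=lambda pair: pair[1])

# Purely illustrative numbers, not the paper's exact inputs:
candidates = [("CMA", 998, 0.00112), ("SMB", 1824, 1e-7), ("HML", 1191, 3e-6)]
ranking = rank_factors(candidates, rho=40.0)  # SMB ranks first, CMA last
```

The sorted bounds then play the role of the ρ−1 Nj pj column: a factor enters the model at FWER level γ whenever its bound is at most γ.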
We compare the number of factors selected at γ = 5% either from the univariate high-minus-low (HL) factors, from their PCA rotation, or from the combination of the high-minus-low factors and their PCs. Panel PoSI consistently selects 3 factors from the long-short factors and from their PCs. When the two sets are combined, PoSI selects 4 factors, which is plausible as the optimal sparse model can differ for this larger set of candidate factors. The Bonferroni PoSI is overly conservative and selects only 2 HL factors. The models based on the Naive LASSO or OLS select excessively many factors, independent of the rotation. Overall, the findings support that parsimonious asset pricing models can be described by three to four factors. Of course, any discussion about the number of asset pricing factors is always subject to the choice of test assets and candidate factors.
Figure 8: Number of selected factors for different FWER
(a) Univariate factors with priors (P-POSI)   (b) PCA rotated factors (O-POSI)

This figure shows the number of selected factors to explain the test assets of double-sorted portfolios for different FWER rates γ. The factor count is obtained by traversing K∗(γ) for γ ranging from 0.01 to 0.1. The left subfigure uses univariate high-minus-low factors as candidate factors. We consider the case of no prior, and the cases of an infinite weight on the Fama-French 3 factor model (ωFF3) and an infinite weight on the Fama-French 5 factor model (ωFF5). The right subfigure uses the PCA rotation as candidate factors with the step-down procedure Ordered PoSI (O-POSI).
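The factor counts traced out in Figure 8 follow from the same bounds: K∗(γ) is the number of factors whose FWER bound does not exceed the target level γ. A minimal sketch of this traversal, using illustrative bounds shaped like the ρ−1 Nj pj column of Table 4:

```python
# Hypothetical sketch of traversing K*(gamma): count the factors whose
# FWER bound falls at or below each target level gamma.
def factor_count(bounds, gamma):
    return sum(b <= gamma for b in bounds)

# Illustrative bounds only, shaped like Table 4's "no prior" bound column:
bounds = [0.0001, 0.0001, 0.028, 0.0974, 0.2782, 0.9999, 0.9999]
counts = {g: factor_count(bounds, g) for g in (0.01, 0.02, 0.05, 0.1)}
# counts -> {0.01: 2, 0.02: 2, 0.05: 3, 0.1: 4}
```

Sweeping γ over a grid in this way produces the step functions plotted in Figure 8: the count is weakly increasing in γ, since loosening the FWER level can only admit more factors.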
Table 5: Number of selected factors for different methods

                          HL   PCs   HL + PCs
Panel PoSI                 3    3      4
Bonferroni PoSI            2    3      2
Bonferroni Naive LASSO    10   29     10
Naive LASSO               70   50     76
Bonferroni OLS           107   13    117
Naive OLS                114   50    164

This table reports the number of selected factors to explain the test assets of double-sorted portfolios for different methods and different sets of candidate factors. The rejection FWER is set to γ = 5%. The factor count is obtained by traversing K∗(γ). The number of factors is selected on the in-sample data. For the PCs, we use the step-down method for the nested hypothesis.

8 Conclusion

This paper proposes a new method for covariate selection in large dimensional panels. We develop the conditional inferential theory for large dimensional panel data with many covariates by combining post-selection inference with a new multiple testing method specifically designed for panel data.
Our novel data-driven hypotheses are conditional on sparse covariate selections and valid for any regularized estimator. Based on our panel localization procedure, we control the family-wise error rate for the covariate discovery and can test unordered and nested families of hypotheses for large cross-sections. We provide a method that allows us to traverse the inferential results and determine the smallest number of covariates that have to be included for a user-specified FWER level. As an easy-to-use and practically relevant procedure, we propose Panel-PoSI, which combines the data-driven adjustment for panel multiple testing with valid post-selection p-values of a generalized LASSO that allows the incorporation of weights for priors. In an empirical study, we select a small number of asset pricing factors that explain a large cross-section of investment strategies. Our method dominates the benchmarks out-of-sample due to its better control of false rejections and detections.

A Post-selection Inference with Weighted-LASSO

A.1 Weighted-LASSO: Linear Truncation Results

This appendix collects the assumptions and formal statements underlying Theorem 1. We present the results for the Weighted-LASSO, which includes the conventional LASSO as a special case.
In order to ensure uniqueness of the LASSO solution, we impose the following condition, which is standard in the LASSO literature:

Definition A.1. General position
The matrix X ∈ R^{T×J} has columns in general position if the affine span of any J0 + 1 points (σ1 X_{i1}, ..., σ_{J0+1} X_{i_{J0+1}}) in R^T, for arbitrary σ1, ..., σ_{J0+1} ∈ {±1}, does not contain any element of {±X_i : i ∉ {i1, ..., i_{J0+1}}}, where J0 < J⁴ and X_i denotes the ith column of X.

This position notion will help us to avoid ambiguity in the LASSO solution.
Note that this condition is a much weaker requirement than full rank of X, and states that if one constructs a J0-dimensional subspace, it must contain at most J0 + 1 entries of {±X1, ..., ±XJ}. Even though this appears to be a complicated and mechanical condition, by a union argument it turns out that, with probability 1, if the entries of X ∈ R^{T×J} are drawn from a continuous probability distribution on R^{T×J}, then X is in general position.⁵ Then, we will be able to discuss the LASSO solution for a general design with relative ease, thanks to Lemma 3 of Tibshirani (2013), which shows that if the columns of X lie in general position, the LASSO solution is unique regardless of the penalty scalar λ. This condition will later be used in establishing our Lemma A.2.
We can now state the formal assumptions:

Assumption A.1. Unique low dimensional model
(a) Low dimensional truth: The data satisfies Y = X_S β_S + ϵ where |S| = O(1);
(b) General position design: The covariates X have columns in general position as given by Definition A.1.

⁴The original condition needs to hold for J0 < min{T, J}, but in the scope of our study we consider T > J.
⁵See Donoho (2006) and §2.2 of Tibshirani (2013) for more discussion of uniqueness and general position.

We start our analysis with the simpler model of known error variance, and later extend it to the case of estimated unknown variance.

Assumption A.2. Gaussian residual with known variance
The residuals are distributed as ϵ ∼ N(0, Σ) where Σ is known.

Before formalizing the inferential theory, we need to clarify the quantity for which we want to make inference statements. As stated before, we only test the hypothesis on a covariate if its LASSO estimate turns out active. This is exactly how researchers in practice conduct explorations in high-dimensional datasets. In other words, we focus on β̂_M and quantities associated with it, where M denotes the active set of selected covariates. We study the inferential theory of the "debiased estimator", which is a shifted version of the LASSO fit as defined below.
We show that this debiased estimator is unbiased, consistent, and follows a truncated Gaussian distribution. It has profound connections to the debiased LASSO literature such as Javanmard and Montanari (2018), but different properties due to a subtly different descent direction. More concretely, given M, clearly Ŷ = X_M β̂_M is the fitted value since β̂_{−M} = 0, where −M is the complement of the set M. We let ϵ̂_M := Y − X_M β̂_M be the residual from the LASSO estimator. Considering only the partial LASSO loss ℓ(Y, X_M, λ, β), and given that we are currently at the LASSO estimator β̂, the Newton step is X⁺_M ϵ̂ following (Boyd and Vandenberghe, 2004, §9.5.2), where we denote X⁺_M = (X⊤_M X_M)⁻¹ X⊤_M as the pseudo-inverse of the active submatrix of X. The invertibility of X⊤_M X_M either is observed when we are in the fixed design regime or holds almost surely when we are dealing with continuous quantities, as a consequence of Assumption A.1(b), as argued in Tibshirani (2013) and Lee, Sun, Sun, and Taylor (2016). Now we can formally define the main object of our inferential theory:

Definition A.2. Debiased Estimator
The debiased Weighted-LASSO estimator β̄_M given M is

    β̄_M = β̂_M + X⁺_M ϵ̂_M.    (21)

It is now evident why some of the literature also refers to the debiased estimator as the one-step estimator: given that β̂_M solves the Karush-Kuhn-Tucker (KKT) condition and reaches the optimal sub-gradient for the full loss ℓ(Y, X, λ, β), our debiased estimator β̄_M is the result of moving one more Newton-Raphson step after β̂_M, but taking only X_M rather than the whole X into the likelihood loss function.
Hence, the update step is only a partial update from the LASSO solution point. Intuitively, β̄_M should still be close to solving the KKT conditions, and would solve them exactly if X_M happened to be the true covariates (i.e. X_M = X_S). If we were to take a Newton step with gradient and Hessian calculated on the entirety of the data X, or equivalently take a full update from the stationary point, we would recover the β̂ᵈ_M proposed in Javanmard and Montanari (2018). The material difference is that the full update would require the J×J precision matrix Ω = Γ⁻¹, where Γ = X⊤X if X is assumed fixed, or Γ = E[X⊤X] if X is assumed to be generated from a stationary process. Using ℓ(Y, X_M, λ, β) instead of ℓ(Y, X, λ, β), our debiased estimator does not need the full Hessian: it leverages LASSO's screening property and uses (X⊤_M X_M)⁻¹ X⊤_M (i.e. X⁺_M) as a much lower-dimensional alternative to ΩX⊤.

Without loss of generality, we assume that the covariates indexed i ≤ |M| are part of M, as we can always rearrange the columns of X to have the first |M| covariates active. Let η = (X⁺_M)⊤ e_i ∈ R^T, where e_i ∈ R^{|M|} is the vector with 1 in the ith coordinate and 0 otherwise. Hence, the η vector is the linear mapping from Y to the ith coordinate of an OLS estimator. In particular, the debiased estimator and the response satisfy the following relationship:

Lemma A.1. Debiased Estimator is OLS-post-LASSO
The debiased estimator is a linear mapping of Y. Specifically, given η = (X⁺_M)⊤ e_i:

    β̄_i = η⊤Y.    (22)

Moreover, β̄_M is the OLS estimate from regressing Y on X_M:

    β̄_M = arg min_β (1/2T) ∥Y − X_M β∥²₂.    (23)

The proof of Lemma A.1 is deferred to the Online Appendix. Although its proof is simple, this lemma reveals that our debiased estimator is the same as the least-squares-after-LASSO estimator proposed in Belloni and Chernozhukov (2013). Our strategy to obtain a rigorous statistical inferential theory with p-values is as follows. First, we perform an algebraic manipulation to transform β̂_M into β̄_M in the linear form of (22). Then, we follow the strategy in Lee, Sun, Sun, and Taylor (2016) and traverse the KKT subgradient optimality equations for general X by writing them equivalently as a truncation of the form {AY ≤ b}, as we do in Lemma A.2. Finally, we circle back to β̂_M via the linear mapping between β̄_M and Y and the distributional results induced by the fact that Y is truncated by {AY ≤ b}.
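Definition A.2 and Lemma A.1 are straightforward to verify numerically. The sketch below uses a plain coordinate-descent LASSO as a stand-in for the Weighted-LASSO with unit weights ω = 1; the solver and all variable names are our own illustration, not from the paper. It fits the LASSO, forms β̄_M = β̂_M + X⁺_M ϵ̂_M, and checks that it coincides with the OLS coefficients of Y regressed on the selected columns X_M.

```python
import numpy as np

def lasso_cd(X, Y, lam, n_iter=500):
    # Plain coordinate descent for (1/2)||Y - X b||_2^2 + lam * ||b||_1
    T, J = X.shape
    b = np.zeros(J)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(J):
            rho = X[:, j] @ (Y - X @ b + X[:, j] * b[j])   # partial-residual correlation
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return b

rng = np.random.default_rng(0)
T, J = 200, 10
X = rng.standard_normal((T, J))
beta = np.zeros(J); beta[:2] = [1.5, -1.0]      # sparse truth, |S| = 2
Y = X @ beta + 0.5 * rng.standard_normal(T)

beta_hat = lasso_cd(X, Y, lam=30.0)
M = np.flatnonzero(beta_hat != 0)               # active set M
XM = X[:, M]
XM_pinv = np.linalg.pinv(XM)                    # X_M^+ = (X_M' X_M)^{-1} X_M'
eps_M = Y - XM @ beta_hat[M]                    # LASSO residual
beta_bar = beta_hat[M] + XM_pinv @ eps_M        # debiased estimator, equation (21)
beta_ols = XM_pinv @ Y                          # OLS of Y on X_M
print(np.allclose(beta_bar, beta_ols))          # Lemma A.1: they coincide
```

The identity holds exactly whenever X_M has full column rank, since X⁺_M X_M = I makes the LASSO shrinkage cancel out of β̄_M.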
For our Weighted-LASSO, the KKT sub-gradient equations are

    X⊤(X β̂ − Y) + λ (s; v) ⊙ ω⁻¹ = 0,  where  s_i = sign(β̂_i)  if β̂_i ≠ 0, ω_i < ∞,
                                              v_i ∈ [−1, 1]    if β̂_i = 0, ω_i < ∞,    (24)

and (s; v) stacks the sub-gradient entries of the active and inactive coordinates. In other words, when ω is specified, the KKT conditions can be identified using the tuple {M, s}, where M is the active covariate set and s contains the signs of the LASSO fit. This is a consequence of how the LASSO KKT conditions separate the slacks into s for active variables and v for inactive variables. If we have infinite importance weights (J ≠ ∅), we simply need s_i < ∞ for i ∈ J because λ s_i / ω_i = 0 is guaranteed. We rigorously characterize the KKT sub-gradient conditions as a combination of sign and infinity-norm bound conditions in the following lemma, which parallels Lemma 4.1 of Lee, Sun, Sun, and Taylor (2016):

Lemma A.2. Selection in norm equivalency
Consider the following random variables:

    w(M, s, ω) = (X⊤_M X_M)⁻¹ (X⊤_M Y − λ s ⊙ ω⁻¹_M),
    u(M, s, ω) = ω_{−M} ⊙ ( X⊤_{−M} (X⁺_M)⊤ (s ⊙ ω⁻¹_M) + (1/λ) X⊤_{−M} (I − P_M) Y ),    (25)

where P_M = X_M X⁺_M ∈ R^{T×T} is the projection matrix. The Weighted-LASSO selection can be written equivalently as

    {M, s} = {sign(w(M, s, ω)) = s, ∥u(M, s, ω)∥_∞ < 1}.    (26)

Using this characterization, we are then able to provide the distributional results for the debiased estimators. Consider ξ = Ση(η⊤Ση)⁻¹ ∈ R^T, a covariance-scaled version of our η, and a mapping of Y using the residual projection matrix: z = (I − ξη⊤)Y. Note that z can be calculated once we observe (X, Y), so it can be conditioned on were we to do so. We will soon see that the truncation set depends on the variable z, but this does not cause any issues thanks to the following lemma, whose proof is deferred to the Online Appendix:

Lemma A.3. Ancillarity in truncation
The projected z and the debiased estimator β̄_i are independently distributed.
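To make the equivalence in Lemma A.2 concrete: with unit weights ω = 1 (plain LASSO), the conditions sign(w) = s and ∥u∥∞ < 1 can be stacked into the affine event {AY ≤ b} that appears in Theorem A.1. The sketch below (our own minimal coordinate-descent solver and variable names, illustration only) builds A and b from a fitted LASSO and verifies that the observed Y satisfies the event.

```python
import numpy as np

def lasso_cd(X, Y, lam, n_iter=500):
    # Coordinate descent for (1/2)||Y - X b||_2^2 + lam * ||b||_1
    b = np.zeros(X.shape[1]); ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            rho = X[:, j] @ (Y - X @ b + X[:, j] * b[j])
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / ss[j]
    return b

rng = np.random.default_rng(1)
T, J, lam = 150, 8, 25.0
X = rng.standard_normal((T, J))
Y = 2.0 * X[:, 0] + rng.standard_normal(T)

beta_hat = lasso_cd(X, Y, lam)
M = np.flatnonzero(beta_hat != 0)            # active set
notM = np.flatnonzero(beta_hat == 0)         # complement -M
s = np.sign(beta_hat[M])
XM, XnM = X[:, M], X[:, notM]
XM_pinv = np.linalg.pinv(XM)                 # X_M^+
P_M = XM @ XM_pinv                           # T x T projection onto span(X_M)
I_T = np.eye(T)

# Affine representation {A Y <= b} of the selection event (unit weights, omega = 1):
# rows 1-2 encode ||u||_inf < 1, row 3 encodes sign(w) = s.
A = np.vstack([XnM.T @ (I_T - P_M) / lam,
               -XnM.T @ (I_T - P_M) / lam,
               -np.diag(s) @ XM_pinv])
b = np.concatenate([1.0 - XnM.T @ XM_pinv.T @ s,
                    1.0 + XnM.T @ XM_pinv.T @ s,
                    -lam * np.diag(s) @ np.linalg.inv(XM.T @ XM) @ s])
print(np.all(A @ Y <= b + 1e-8))             # observed Y lies in the polyhedron
```

The stacked system has 2J − |M| rows, matching the dimensions stated in Theorem A.1.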
As a result of Lemma A.3, when describing the distribution of β̄_i, we can use z in its truncation conditions as long as we condition on z as well. To simplify notation, we collect all quantities we need to condition on into M̃ := ((M, s), z, ω, X). Now we can assemble the consequences of Lemmas A.1, A.2 and A.3 to arrive at truncated Gaussian statements for the debiased estimator similar to Lee, Sun, Sun, and Taylor (2016), but for the Weighted-LASSO:

Theorem A.1. Truncated Gaussian
Under Assumptions A.1 and A.2, for i ∈ M, β̄_i is conditionally distributed as

    β̄_i | M̃ ∼ TN(β_i, η⊤Ση; [V⁻(z), V⁺(z)]),    (27)

where TN is a truncated Gaussian with mean β_i, variance η⊤Ση and truncation set [V⁻(z), V⁺(z)], and β_i denotes the ith entry of the true β. The vector of signs is s = sign(β̂_M) ∈ R^{|M|}, and the truncation set depends on

    A = [ λ⁻¹ X⊤_{−M}(I − P_M) ;  −λ⁻¹ X⊤_{−M}(I − P_M) ;  −diag(s) X⁺_M ] ∈ R^{(2J−|M|)×T},
    b = [ ω⁻¹_{−M} − X⊤_{−M}(X⁺_M)⊤(s ⊙ ω⁻¹_M) ;  ω⁻¹_{−M} + X⊤_{−M}(X⁺_M)⊤(s ⊙ ω⁻¹_M) ;  −λ·diag(s)(X⊤_M X_M)⁻¹(s ⊙ ω⁻¹_M) ] ∈ R^{2J−|M|},
    V⁻(z) = max_{j:(Aξ)_j<0} (b_j − (Az)_j)/(Aξ)_j,    V⁺(z) = min_{j:(Aξ)_j>0} (b_j − (Az)_j)/(Aξ)_j.

Notice that Theorem A.1 is decoupled across M, which is to say we are able to deal with one-dimensional statistics. We arrive at this form because the construction of (V⁻, V⁺) over the extreme points of the linear inequality system (or vertices of the polyhedron) has decomposed the dimensionality of the truncation.
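The roles of ξ, z and (V⁻(z), V⁺(z)) can be illustrated generically: for any affine event {AY ≤ b} that contains the observed Y, the statistic η⊤Y must fall inside its truncation interval, and a post-selection p-value follows from the truncated-Gaussian CDF. A minimal sketch with synthetic A, b, η and Σ = σ²I (all values illustrative, not from the paper):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
T, m, sigma2 = 50, 12, 1.0
Sigma = sigma2 * np.eye(T)
Y = rng.standard_normal(T)                  # observed response
eta = rng.standard_normal(T)                # contrast, e.g. eta = (X_M^+)' e_i

# Any affine selection event {y : A y <= b} containing the observed Y
A = rng.standard_normal((m, T))
b = A @ Y + rng.uniform(0.1, 1.0, m)        # positive slack guarantees A Y <= b

xi = Sigma @ eta / (eta @ Sigma @ eta)      # xi = Sigma eta (eta' Sigma eta)^{-1}
z = Y - xi * (eta @ Y)                      # z = (I - xi eta') Y
Axi, Az = A @ xi, A @ z
neg, pos = Axi < 0, Axi > 0
V_minus = ((b - Az)[neg] / Axi[neg]).max() if neg.any() else -np.inf
V_plus = ((b - Az)[pos] / Axi[pos]).min() if pos.any() else np.inf

stat = eta @ Y
print(V_minus <= stat <= V_plus)            # eta'Y always lies in [V-(z), V+(z)]

# One-sided truncated-Gaussian p-value under the null beta_i = 0
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
sd = sqrt(eta @ Sigma @ eta)
den = Phi(V_plus / sd) - Phi(V_minus / sd)
p_right = (Phi(V_plus / sd) - Phi(stat / sd)) / den
print(0.0 <= p_right <= 1.0)
```

Decomposing Y = z + ξ(η⊤Y) turns each row of AY ≤ b into a one-dimensional bound on η⊤Y, which is exactly why the truncation collapses to the interval [V⁻(z), V⁺(z)].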
This decoupling is of significant practical value, in that it would otherwise be a non-trivial task to calculate a statistic of a multivariate (in our case |M|-dimensional) truncated Gaussian and then marginalize over |M| − 1 dimensions.

A.2 Weighted-LASSO Quasi-Linear Truncation with Estimated Variance

This section generalizes the distributional results to the practically relevant case where the noise variance is unknown and has to be estimated. This becomes a challenging problem for post-selection inference. We replace Assumption A.2 by the following assumption:

Assumption A.3. Gaussian residual with simple unknown variance
The residuals are distributed as ϵ_i iid∼ N(0, σ²) where σ² is unknown.

The simple structure of the unknown variance in Assumption A.3 is common in the post-selection inference literature, as for example in Lee, Sun, Sun, and Taylor (2016) and Tian, Loftus, and Taylor (2018). A feasible conditional distribution replaces σ² with an estimate. Under Assumption A.3, we can estimate the variance using LASSO residuals and then reiterate the previous truncation arguments. The most common standard variance estimator is

    σ̂²(Y) = ∥Y − X β̂∥²₂ / (T − |M|).    (28)

In classical regression analysis, the normally distributed estimated coefficient divided by an estimated standard deviation follows a t-statistic. Hence, we would expect that a truncated normal debiased estimator divided by a sample standard deviation might yield a truncated t-distribution. However, the arguments are substantially more involved.
Simply using σ̂(Y) of (28) in the expression η⊤Ση of Theorem A.1 changes the truncation. Specifically, because Y has truncated support, σ̂(Y)² is not χ²-distributed with support on all of R⁺, which makes the support of β̄/σ̂(Y) non-trivial. Therefore, in order to correctly assess the truncation of the studentized quantity, we have to disentangle how much truncation is implied in σ̂(Y)⁻¹ and β̄ simultaneously. Geometrically, as σ̂(Y) is a non-linear function of Y and β̄, the truncation on Y is in fact no longer of the simple linear form {AY ≤ b} as in Theorem A.1. Instead of a polyhedron induced by affine constraints, we have a "quasi-affine constraints" form {CY ≤ σ̂(Y)b}, because the LASSO KKT conditions preserve the estimated variance throughout the arguments. Thus, both sides of the inequality CY ≤ σ̂(Y)b involve Y, and on the right-hand side σ̂(Y) is non-linear in Y.
A significantly more complex set of arguments is needed to compute the exact truncation, which amounts to solving an |M|-system of non-linear, rather than linear, inequalities that constrain the support of Y for inference on each β̄ᵢ. Theorem A.2 shows the appropriate truncation based on those arguments:

Theorem A.2. Truncated t-distribution for estimated variance
Under Assumptions A.1 and A.3, and under the null hypothesis that βᵢ = 0, the studentized quantity β̄ᵢ/(‖η‖σ̂(Y)) follows

β̄ᵢ/(‖η‖σ̂(Y)) ∼ TT_{d;Ω},    (29)

where TT is a truncated t-distribution with d degrees of freedom and truncation set Ω. The truncation set

Ω = ⋂_{i∈M} { t : t√W νᵢ + ξᵢ√(d + t²) ≤ −θᵢ√W }

is an |M|-intersection of simple inequality-induced intervals based on the following quantities.
The active signs are denoted by s = sign(β̂_M) ∈ R^{|M|}. The scaled-LASSO-equivalent penalty is

λ̃² = λ² / ( σ̂²(Y)·(T − |M|) + ‖(X⁺_M)⊤ s ⊙ ω⁻¹_M‖₂² λ² ).

For i ∈ M,

θᵢ = λ̃ sᵢ √( (T − |M|) / (1 − λ̃² ‖(X⁺_M)⊤ s ⊙ ω⁻¹_M‖₂²) ) · e⊤ᵢ [ (X⊤_M X_M)⁻¹ s ⊙ ω⁻¹ ],

and

C = −diag(s) X⁺_M ∈ R^{|M|×T},   ν = Cη ∈ R^{|M|},   ξ = C(P_M − ηη⊤)Y ∈ R^{|M|},   d = tr(I − P_M),   W = σ̂²(Y)·d + (η⊤Y)².

The quantities θ and C describe the quasi-linear constraints, whereas ν and ξ transform them into the form of Ω. Note that the set Ω is obtained from solving a low-dimensional set of quadratic inequalities, which do not necessarily yield a single interval after intersection. We provide a proof of this result in the Online Appendix. Using Theorem A.2 in practice poses several challenges.
First, the computations are much more involved, especially as each βᵢ requires the calculation of Ω, which comprises |M| constraints, each of which involves solving a simple but still non-linear inequality. It is non-trivial to ensure numerical stability at every step of the calculations. Second, since Ω is not necessarily an interval, the truncation is harder to interpret, and the cumulative distribution function is harder to calculate through Monte-Carlo simulations when the truncation structure is non-trivial. Third, Tian, Loftus, and Taylor (2018) in fact recommend a regularized likelihood-minimizing variance estimator that deviates from the simple σ̂(Y), which in turn involves additional numerical integration and optimization steps. Last but not least, this result was initially proposed for studying the scaled LASSO, which is why the penalty term λ has to be transformed into λ̃. Our goal is to provide a set of tools that is useful for a wide range of applications, including the LASSO with squared l₂-norm loss rather than the un-squared norm loss.
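To illustrate the first challenge, the truncation set Ω of Theorem A.2 can be approximated on a grid by checking each of the |M| non-linear inequalities pointwise. In the sketch below, the constraint quantities ν, ξ, θ, W and d are made-up placeholder values, not outputs of the actual selection procedure:

```python
import numpy as np

def truncation_set_on_grid(nu, xi, theta, W, d, t_grid):
    """Indicator of Omega = intersection over i in M of
    {t : t*sqrt(W)*nu_i + xi_i*sqrt(d + t^2) <= -theta_i*sqrt(W)}, evaluated on a grid."""
    t = t_grid[:, None]                                   # shape (G, 1)
    lhs = t * np.sqrt(W) * nu + xi * np.sqrt(d + t ** 2)  # shape (G, |M|), one column per constraint
    rhs = -theta * np.sqrt(W)                             # shape (|M|,)
    return np.all(lhs <= rhs, axis=1)                     # all |M| constraints must hold

# Placeholder constraint quantities for |M| = 3 selected covariates
nu = np.array([-0.4, -0.6, -0.3])
xi = np.array([-0.2, -0.1, -0.3])
theta = np.array([-0.5, -0.7, -0.4])
W, d = 2.0, 100

t_grid = np.linspace(-6.0, 6.0, 2001)
inside = truncation_set_on_grid(nu, xi, theta, W, d, t_grid)
omega_points = t_grid[inside]   # grid points in Omega; need not form a single interval
```

Each constraint is a quadratic inequality in t, so Ω is an intersection of at most |M| interval unions; grid evaluation sidesteps the root-finding at the cost of resolution, which is exactly the accuracy-versus-stability trade-off described above.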
These implementation difficulties are discussed in more detail in the Online Appendix, which provides the accompanying proofs and the exact forms of the truncations. We provide a practical solution based on an asymptotic normality argument. We impose the standard assumption that we have a consistent estimator of the residual variance:

Assumption A.4. Consistent estimator σ̂(Y)
Given λ, the residual variance estimator is consistent: σ̂²(Y) →p σ² as T → ∞.

This general assumption covers many common settings, such as the results in Corollary 6.1 of van de Geer and Bühlmann (2011) or in Theorem 2 of Chatterjee (2014). For example, for diminishing c√(log(J)/T) → 0 as J and T grow, and under our Assumptions A.1 and A.3, we obtain consistency of σ̂(Y) of (28) by Chatterjee (2014).

Theorem A.3. Asymptotic truncated normal distribution
Suppose Assumptions A.1, A.3 and A.4 hold. Under the null hypothesis that βᵢ = 0, and for T → ∞, the studentized quantity β̄ᵢ/(‖η‖σ̂(Y)) follows

β̄ᵢ/(‖η‖σ̂(Y)) ∼ TN_Ω,    (30)

where TN is a truncated normal distribution with truncation Ω = [V⁻(z)/(‖η‖²σ̂(Y)), V⁺(z)/(‖η‖²σ̂(Y))], and V⁻(z) and V⁺(z) are the same as in Theorem A.1. The asymptotic distribution result has several advantages.
First, it is intuitive, since it parallels classical OLS inference with a t-statistic converging to Gaussianity. Second, it is computationally more tractable than the result of Appendix Theorem A.2. With this result, one can obtain asymptotically valid post-selection p-values.

B Appendix: Empirics

Table A.1: Compositions of DS portfolios

Sorted by   # portfolios | Sorted by   # portfolios | Sorted by   # portfolios | Sorted by     # portfolios
BEME, INV   25           | ME, CFP     6            | ME, INV     25           | ME, Prior1    25
BEME, OP    25           | ME, DP      6            | ME, OP      25           | ME, Prior12   25
ME, BE      25           | ME, EP      6            | OP, INV     25           | ME, Prior60   25

This table lists the composition of the double-sorted portfolios that we use as test assets in our empirical study. All double-sorted portfolios are from Kenneth French's data library.

References

Ahn, S. C., and A. R. Horenstein (2013): "Eigenvalue Ratio Test for the Number of Factors," Econometrica, 81(3), 1203–1227.

Barber, R. F., and E. J. Candès (2015): "Controlling the false discovery rate via knockoffs," The Annals of Statistics, 43(5), 2055–2085.

Belloni, A., and V. Chernozhukov (2013): "Least squares after model selection in high-dimensional sparse models," Bernoulli, 19(2), 521–547.
Benjamini, Y., and Y. Hochberg (1995): "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing," Journal of the Royal Statistical Society. Series B (Methodological), 57(1), 289–300.

Benjamini, Y., and D. Yekutieli (2001): "The control of the false discovery rate in multiple testing under dependency," The Annals of Statistics, 29(4), 1165–1188.

Bonferroni, C. E. (1935): "Il calcolo delle assicurazioni su gruppi di teste," in Studi in Onore del Professore Salvatore Ortu Carboni.

Boyd, S., and L. Vandenberghe (2004): Convex Optimization. Cambridge University Press.

Candès, E., Y. Fan, L. Janson, and J. Lv (2018): "Panning for gold: 'model-X' knockoffs for high dimensional controlled variable selection," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(3), 551–577.

Chatterjee, S. (2014): "Assumptionless consistency of the Lasso," Working paper.
Chernozhukov, V., C. Hansen, and M. Spindler (2015): "Valid Post-Selection and Post-Regularization Inference: An Elementary, General Approach," Annual Review of Economics, 7(1), 649–688.

Choi, Y., J. Taylor, and R. Tibshirani (2017): "Selecting the number of principal components: Estimation of the true rank of a noisy matrix," The Annals of Statistics, 45(6), 2590–2617.

Donoho, D. L. (2006): "For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution," Communications on Pure and Applied Mathematics, 59(6), 797–829.

Fama, E. F., and K. R. French (2015): "A five-factor asset pricing model," Journal of Financial Economics, 116(1), 1–22.

Fithian, W., and L. Lei (2022): "Conditional calibration for false discovery rate control under dependence," The Annals of Statistics, 50(6), 3091–3118.

Fithian, W., D. Sun, and J. Taylor (2017): "Optimal Inference After Model Selection," Working paper.

G'Sell, M. G., S. Wager, A. Chouldechova, and R. Tibshirani (2016): "Sequential selection procedures and false discovery rate control," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(2), 423–444.

Hou, K., C. Xue, and L. Zhang (2018): "Replicating Anomalies," The Review of Financial Studies, 33(5), 2019–2133.

Javanmard, A., and A. Montanari (2018): "Debiasing the lasso: Optimal sample size for Gaussian designs," The Annals of Statistics, 46(6A), 2593–2622.

Johari, R., P. Koomen, L. Pekelis, and D. Walsh (2021): "Always Valid Inference: Continuous Monitoring of A/B Tests," Operations Research.

Kapetanios, G. (2010): "A Testing Procedure for Determining the Number of Factors in Approximate Factor Models With Large Datasets," Journal of Business & Economic Statistics, 28(3), 397–409.

Kuchibhotla, A. K., L. D. Brown, A. Buja, E. I. George, and L. Zhao (2018): "Valid Post-selection Inference in Assumption-lean Linear Regression," Working paper.

Lee, J. D., D. L. Sun, Y. Sun, and J. E. Taylor (2016): "Exact post-selection inference, with application to the lasso," The Annals of Statistics, 44(3), 907–927.

Markovic, J., L. Xia, and J. Taylor (2018): "Unifying approach to selective inference with applications to cross-validation," Working paper.

Meinshausen, N., and P. Bühlmann (2006): "High-dimensional graphs and variable selection with the Lasso," The Annals of Statistics, 34(3), 1436–1462.

Onatski, A. (2010): "Determining the number of factors from empirical distribution of eigenvalues," The Review of Economics and Statistics, 92(4), 1004–1016.

Pelger, M. (2019): "Large-dimensional factor modeling based on high-frequency observations," Journal of Econometrics, 208(1), 23–42, Special Issue on Financial Engineering and Risk Management.

Rényi, A. (1953): "On the theory of order statistics," Acta Mathematica Academiae Scientiarum Hungarica, 4, 191–231.

Siegmund, D. (1985): Sequential Analysis. Springer-Verlag.
Simes, R. J. (1986): "An Improved Bonferroni Procedure for Multiple Tests of Significance," Biometrika, 73(3), 751–754.

Taylor, J., and R. J. Tibshirani (2015): "Statistical learning and selective inference," Proceedings of the National Academy of Sciences, 112(25), 7629–7634.

Tian, X., J. R. Loftus, and J. E. Taylor (2018): "Selective inference with unknown variance via the square-root lasso," Biometrika, 105(4), 755–768.

Tian, X., and J. Taylor (2017): "Asymptotics of Selective Inference," Scandinavian Journal of Statistics, 44(2), 480–499.

——— (2018): "Selective inference with a randomized response," The Annals of Statistics, 46(2), 679–710.

Tibshirani, R. (1996): "Regression Shrinkage and Selection via the Lasso," Journal of the Royal Statistical Society. Series B (Methodological), 58(1), 267–288.

Tibshirani, R. J. (2013): "The lasso problem and uniqueness," Electronic Journal of Statistics, 7, 1456–1490.

van de Geer, S., and P. Bühlmann (2011): Statistics for High Dimensional Data: Methods, Theory and Applications. Springer.

van de Geer, S., P. Bühlmann, Y. Ritov, and R. Dezeure (2014): "On asymptotically optimal confidence regions and tests for high-dimensional models," The Annals of Statistics, 42(3), 1166–1202.

Zhang, C.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=', and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Zhang (2014): “Confidence intervals for low dimensional parameters in high dimensional linear models,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1), 217–242.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Zrnic, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=', and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' Jordan (2020): “Post-Selection Inference via Algorithmic Stability,” Working paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'} +page_content=' 39' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tAyT4oBgHgl3EQfc_c0/content/2301.00292v1.pdf'}