diff --git "a/7tAyT4oBgHgl3EQfc_c0/content/tmp_files/2301.00292v1.pdf.txt" "b/7tAyT4oBgHgl3EQfc_c0/content/tmp_files/2301.00292v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/7tAyT4oBgHgl3EQfc_c0/content/tmp_files/2301.00292v1.pdf.txt" @@ -0,0 +1,2858 @@ +Inference for Large Panel Data with Many Covariates∗ +Markus Pelger† +Jiacheng Zou‡ +December 31, 2022 +Abstract +This paper proposes a new method for covariate selection in large dimensional panels. We de- +velop the inferential theory for large dimensional panel data with many covariates by combining +post-selection inference with a new multiple testing method specifically designed for panel data. +Our novel data-driven hypotheses are conditional on sparse covariate selections and valid for +any regularized estimator. Based on our panel localization procedure, we control for family-wise +error rates for the covariate discovery and can test unordered and nested families of hypothe- +ses for large cross-sections. As an easy-to-use and practically relevant procedure, we propose +Panel-PoSI, which combines the data-driven adjustment for panel multiple testing with valid +post-selection p-values of a generalized LASSO, that allows to incorporate priors. In an empir- +ical study, we select a small number of asset pricing factors that explain a large cross-section of +investment strategies. Our method dominates the benchmarks out-of-sample due to its better +control of false rejections and detections. +Keywords: panel data, high-dimensional data, LASSO, number of covariates, post-selection +inference, multiple testing, adaptive hypothesis, step-down procedures, factor model +JEL classification: C33, C38, C52, C55, G12 +∗We thank conference and seminar participants at Stanford, the California Econometric conference and the NBER-NSF +SBIES conference for helpful comments. Jiacheng Zou gratefully acknowledges the generous support by the MS&E Departmental +Fellowship, and Charles & Katherine Lin Fellowship. +†Stanford University, Department of Management Science & Engineering, Email: mpelger@stanford.edu. +‡Stanford University, Department of Management Science & Engineering, Email: jiachengzou@stanford.edu +arXiv:2301.00292v1 [econ.EM] 31 Dec 2022 + +1 +Introduction +Our goal is the selection of a parsimonious sparse model from a large set of candidate covariates +that explains a large dimensional panel. This problem is common in many social science applica- +tions, where a large number of potential covariates are available to explain the time-series of a large +cross-section of units or individuals. An example is empirical asset pricing, where the literature has +produced a “factor zoo” of potential risk factors to explain the large cross-section of stock returns. +This problem requires a large panel, as a successful asset pricing model should explain the many +available investment strategies, resulting in a large panel of test assets. At the same time, there is +no consensus about which are the appropriate factors, which leads to a statistical selection problem +from a large set of candidate risk factors. So far, the literature has only provided solutions for one +of the two subproblems, while keeping the dimensionality of the other problem small. Our paper +closes this gap. +The inferential theory on a large panel with many covariates is a challenging problem. As a first +step, we have to select a sparse set of covariates from a large pool of candidates with a regularized +estimator. 
The challenge is to provide valid p-values from this estimation that account for the +post-selection inference. Furthermore, researchers might want to impose economic priors on which +variables should be more likely to be selected. The second challenge is that the panel cross-section +results in a large number of p-values. Hence, some of them are inadvertently very small, which if +left unaddressed leads to “p-hacking”. The multiple testing adjustment conditional on the selected +subset of covariates from the first step is a novel problem, and requires to redesign what hypotheses +should be tested jointly. A naive counting of all tests is overly conservative, and the test design +and simultaneity counts need to be conditional on the covariate selection. +This paper proposes a new method for covariate selection in large dimensional panels, tackling +all of the above challenges. We develop the inferential theory for large dimensional panel data with +many covariates by combining post-selection inference with a new multiple testing method specifi- +cally designed for panel data. Our novel data-driven hypotheses are conditional on sparse covariate +selections and valid for any regularized estimator. Based on our panel localization procedure, we +control for family-wise error rates for the covariate discovery and can test unordered and nested +families of hypotheses for large cross-sections. As an easy-to-use and practically relevant procedure, +we propose Panel-PoSI, which combines the data-driven adjustment for panel multiple testing with +valid post-selection p-values of a generalized LASSO, that allows to incorporate priors. +Our paper proposes the novel conceptual idea of data-driven hypotheses family for panels. This +allows us to put forward a unifying framework of valid post-selection inference and multiple test- +ing. Leveraging our data-driven hypotheses family, we adjust for multiple testing with a localized +simultaneity count, which increases the power, while maintaining false discovery rate control. An +essential step for a formal statistical test is to formulate the hypothesis. This turns out to be +non-trivial for a large panel with a first stage selection step for the covariates. It is a fundamental +insight of our paper, that the hypothesis of our test has to be conditional on the selected set of +1 + +active covariates of the first stage. Once we have defined the appropriate hypothesis, we can deal +with the multiple testing adjustment, which by construction is also conditional on the selection +step. +Our method is a disciplined approach based on formal statistical theory to construct and in- +terpret a parsimonious model. It goes beyond the selection of a sparse set of covariates as it also +provides the inferential theory. +This is important as it allows to rank the covariates based on +their statistical significance and can also be applied for relatively short time horizons, where cross- +validation for tuning a regularization parameter might not be reliable. We answer the question +which covariates are needed to explain the full panel jointly, and can also accommodate “weak” +covariates or factors that only affect a small subset of the cross-sectional units. +Our data-driven hypothesis perspective exploits the geometric structure implied by the first +stage selection step. +Given valid post-selection p-values of a regularized sparse estimator from +time-series regressions, we collect them across the large cross-section into a “matrix” of p-values. 
+Only active coefficients, that are selected in the first stage, contribute p-value entries, whereas +covariates that were non-active lead to “holes” in this matrix. We leverage the non-trivial shape +of this matrix to form our adaptive hypotheses. This allows us to make valid multiple testing +adjusted inference statements, for which we design a panel modified Bonferroni-type procedure +that can control for the family-wiser error rate (FWER) in discovery of the covariates. As one +loosens the FWER requirements, the inferential thresholds admits more and more explanatory +variables, which suggests that the amount of covariates we expect to admit and the FWER control +level form an “false-discovery control frontier”. We provide a method that allows us to traverse the +inferential results and determine the least number of covariates that have to be included given a +user-specified FWER level. In other words, we provide a statistical significance test for the number +of factors in a panel. +We propose the novel procedure Panel-PoSI, which combines the data-driven adjustment for +panel multiple testing with valid post-selection p-values of a generalized LASSO. While our multiple +testing procedure is valid for any sparsity constrained model, Panel-PoSI is an easy-to-use and prac- +tically relevant special case. We propose Weighted-LASSO for the first stage selection regression and +provide valid p-values through post-selection inference (PoSI), which yields a truncated-Gaussian +distribution for an adjusted LASSO estimator. This geometric perspective is less common in the +LASSO literature, but has the advantage that it avoids the use of infeasible quantities, in particu- +lar the second moment of the large set of potential covariates. The Weighted-LASSO generalizes +LASSO by allowing to put weights onto prior belief sets. For example, a researcher might have +economic knowledge that she wants to include in her statistical selection method, and impose an in- +finite prior weight to include specific covariates in the sparse selection model. Our Weighted-LASSO +makes several contributions. First, the expression for the truncated conditional distribution with +weights become much more complex than for the special case of the conventional LASSO. Second, +we provide a simple, easy-to-use and asymptotically valid conditional distribution in the case of an +estimated noise variance. +2 + +We demonstrate in simulations and empirically that our inferential theory allows us to select +better models. We compare different estimation approaches to select covariates and show that our +approach better trades off false discovery and correct selections and hence results in a better out- +of-sample performance. Our empirical analysis studies the fundamental problem in asset pricing of +selecting a parsimonious factor model from a large set of candidate factors that can jointly explain +the asset prices of a large cross-section of investment strategies. We consider a standard data set +of 114 candidate asset pricing factors to explain 243 double sorted anomaly portfolios. We show +that Panel PoSI selects 3 factors which form the best model to explain out-of-sample the expected +returns and the variations of the test assets. The selected factors are economically meaningful and +we can rank them based on their relative importance. A prior on the Fama-French factors does not +improve the model. Our findings contributes to the discussion about the number of asset pricing +factors. 
+The rest of the paper is organized as follows. Section 1.1 relates our work to the literature. +Section 2 introduces the model and the Weighted-LASSO. Section 3 discusses the appropriate +hypotheses to be considered for inference on the entire panel. Section 4 proposes a joint unordered +test for the panel using multiple testing adjustment so that we can maintain FWER control, and +shows how to traverse this procedure to acquire the least factor count associated with each FWER +target. In section 5 we consider the case of nested hypotheses, where the covariates observe a fixed +ordering, which is of independent interest, and we propose a step-down procedure for this setting +that maintains false discovery control. Section 6 provides the results of our simulation and Section +7 discusses our empirical studies on a large asset pricing panel data set. Section 8 concludes. The +proofs and more technical details are available in the Online Appendix. +1.1 +Related Literature +The problem of multiple testing is an active area of research with a long history. The statistical +inference community has studied the problem of controlling the classical FWER since Bonferroni +(1935), and controlling for false-discover rate (FDR) going back to Benjamini and Hochberg (1995) +and Benjamini and Yekutieli (2001). Bonferroni (1935) allows for arbitrary correlation in the test +statistics because its validity comes from a simple union bound argument, and is in fact the optimal +test when statistics are “close to independent” under true sparse non-nulls. FDR control on the +other hand requires a discussion about the estimated covariance in the test statistics. +Recent +developments include a stream of papers led by Barber and Cand´es (2015) and Cand´es, Fan, +Janson, and Lv (2018), which constructs a generative model to produce fake data and control +for FDR. Fithian and Lei (2022) is a more recent work that iteratively adjusts the threshold for +each hypothesis in the family to seek finite sample exact FDR control and dominates Benjamini +and Hochberg (1995) and Benjamini and Yekutieli (2001) in terms of power. Another notion on +temporal false discovery control has been revived more recently by Johari, Koomen, Pekelis, and +Walsh (2021), who consider the industry practice of constantly checking p-values and provide an +early stopping in line with Siegmund (1985) that adjusts for bias from sequentially picking favorable +3 + +evidence, whereas we consider a static panel that is not an on-going experiment. +There are cases where the covariates warrant a natural order such that the hypothesis family +possesses a special testing logic. A hierarchical structure in covariates arises when the inclusion of +the next covariate only make sense if the previous covariates is included. An example is the use of +principal component (PC) factors, where PCs are included sequentially from the dominating one +to the least dominating one. We distinguish this from putting weights and assigning importance +on features because this variant of family of hypotheses warrants a new definition of FWER. We +propose a step-down procedure that can be considered as a panel extension of G’Sell, Wager, +Chouldechova, and Tibshirani (2016), relying on an approximation of the R´enyi representation of +p-values. The step-down control for nested FWER is based on Simes (1986), which along with +Bonferroni (1935) can be seen as comparing sorted p-values against linear growth. 
Our framework +contributes to estimating the number of principal component factors in a panel. There are have been +many studies that provide consistent estimators for the number of PCs based on the divergence in +eigenvalues of the covariance matrix, which include Onatski (2010), Ahn and Horenstein (2013) and +Pelger (2019). Another direction uses sequential testing procedures that presume correct nested +family of hypotheses, which include Kapetanios (2010) and Choi, Taylor, and Tibshirani (2017). +In contrast, we characterize the least amount of factors (which can also be based on principal +components), which should be expected when a FWER rate is provided. The nested version of +our procedure is close in nature to a panel version of “when-to-stop” problem of a multiple testing +procedure. +The problem of post-LASSO statistical testing for small dimensional cross-sections is studied in +a stream of papers including Meinshausen and B¨uhlmann (2006), Zhang and Zhang (2014), van de +Geer, B¨uhlmann, Ritov, and Dezeure (2014) and Javanmard and Montanari (2018), which consider +inference statements by debiasing the LASSO estimator. An alternative stream of post-selection +or post-machine learning inference literature includes Chernozhukov, Hansen, and Spindler (2015), +Kuchibhotla, Brown, Buja, George, and Zhao (2018) and Zrnic and Jordan (2020), who provide +non-parametric post-selection or post-regularization valid confidence intervals and p-values. These +papers do not make conditional statements and presume that the researcher sets the hypotheses +before seeing the data, which we will refer to as data agnostic hypothesis family. We follow a dif- +ferent train of thought that treats LASSO, among a family of conic maximum likelihood estimator, +as a polyhedral constraint on the support of the response variable. This geometric perspective +that provides inferential theory post-LASSO is pioneered by the work of Lee, Sun, Sun, and Taylor +(2016) and followed up by Fithian, Sun, and Taylor (2017) and Tian and Taylor (2018), assum- +ing Gaussian linear model. Markovic, Xia, and Taylor (2018) extends the results to LASSO with +cross-validation, Tian, Loftus, and Taylor (2018) discusses a square-root LASSO variant that takes +unknown covariance into consideration and Tian and Taylor (2017) considers the asymptotic results +when removing the Gaussian assumption. This body literature is often referred to as PoSI, and +traverses the Karush-Kuhn-Tucker (KKT) condition of a LASSO optimization problem to show +that the LASSO fit can be expressed as a polyhedral constraint on the support of the response +4 + +variable. We extend this work by allowing to put weights onto prior belief sets, and by bringing it +to the panel setting with multiple testing adjustment. +2 +Sparse linear models +We consider a large dimensional panel data set Y ∈ RT×N which we want explain with a large +number of potential covariates X ∈ RT×J. The panel data and explanatory variables are both +observed over T time periods.1 The size of the cross-section N and the dimension of the covariate +candidate set J are both large in our problem. We assume a linear relationship between Y and X: +Yt,n = +J +� +j=1 +Xt,jβ(n) +j ++ ϵt,n +for n = 1, ..., N, +which reads in matrix notation as +Y = Xβ + ϵ +(1) +We refer to the coefficients β as loading matrix, where the nth column β(n) corresponds to the nth +unit and β(n) +j +denotes the loading of the nth unit on the jth covariate. The remainder term ϵ is +unexplained noise. 
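As a point of reference for the notation in (1), the following minimal Python sketch generates a panel with exactly these shapes; the sizes are chosen to match the simulation design of Section 6, and the random draws are purely illustrative rather than the paper's data.

import numpy as np

rng = np.random.default_rng(0)
T, N, J = 300, 120, 100                      # time periods, units, candidate covariates
X = rng.standard_normal((T, J))              # covariates, T x J
beta = rng.standard_normal((J, N))           # loading matrix, column n is beta^(n)
eps = rng.standard_normal((T, N))            # unexplained noise
Y = X @ beta + eps                           # panel of eq. (1), T x N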
+We assume that a sparse linear model can explain jointly the full panel. Formally, a sparse +linear model with s active covariates is +Y = XSβS + ϵ +(2) +where s = |S| is the cardinality of the set of active covariates S = {j : ∃β(j) +n +̸= 0, n ∈ {1, ..., N}, +that is, the set of covariates with non-zero loadings. XS is the subset of covariates that belong to +S. Our goal is to estimate this low dimensional model, that can explain the full panel, from a large +number of candidate covariates, and provide a valid inferential theory. +Note that our sparse model formulation allows for two important properties. First, different +units can be explained by different covariates with different loadings. This means that β(n) ̸= β(m) +for n ̸= m. +For example, a subset of the cross-sectional units might be modeled by different +covariates than the remaining part of the panel. Second, we can accommodate “weak” covariates. A +covariate is included in S if it is required by at least one cross-sectional unit requires as explanatory +variable. In other words, a sparse model can include covariates in XS that explain only a very +small subset of the panel Y . +The first step is to estimate the sparse models over the time-series for each unit separately due +to the heterogeneity in the loadings. In a second step, we provide the valid inferential theory for +the loadings on the full panel. The time-series estimation requires an appropriate regularization to +1Our setting and multiple testing results can be readily extended to the case of unbalanced panel, although we +focus on the balanced panel case for now to highlight the core multiple testing insight of our method. We will further +discuss on this once we introduce our main procedure in Section 4 +5 + +select a small subset of covariates that contains all the relevant covariates for each unit. We allow +for a prior belief weight ω ∈ ¯RJ ++, so that different X can have different relative penalizations, and +a global λ ∈ R+ scalar penalty parameter. For the nth unit, we denote its β(n) estimate as ˆβ(n) +and the active set M(n) = {j : ˆβ(n) +j +̸= 0} as the set of j’s with non-zero loadings ˆβ(n) +j +. A general +regularized linear estimator solves the following optimization problem +ˆβ(j)(λ, ω) = arg min +β +1 +2T ∥Y (j) − Xβ∥2 +2 + λ · f(β, ω) +(3) +for a penalty function f and appropriate weights. In this paper, we consider the weighted LASSO +estimator with the regularization function +f(β, ω) = +J +� +j=1 +fj(βj, ωj) +where fj(βj, ωj) = +� +� +� +|βj| +ωj +ωj < ∞ +0 +o.w. +(4) +and weights ωj > 0 for all j ∈ {1, ..., J} and ∥ω−1∥1 = J. +We assume that the penalty λ is +selected such that the set ∥ˆβ(j)∥0 = |M(j)| is low dimensional. Importantly, we do not need to +assume that the selected set contains all active covariates. Our goal it is provide a valid inferential +theory conditional on the selected set. Our estimator generalizes the conventional LASSO with +the l1 regularization function of Tibshirani (1996) by allowing for different relative weighting in +the penalty. Importantly, we also allow for an infinite weight, which can be interpreted as a prior +on a set of covariates. +This allows researchers to take advantage of prior information and for +example ensure that a specific set of covariates will always be included. The weighted LASSO +will be particularly relevant in our empirical study, where we can answer the question which risk +factors should be added to a given set of economically motivated risk factors. 
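To illustrate how the weighted penalty in (3)-(4) can be computed in practice, the sketch below rescales the columns of X by their weights and fits an ordinary LASSO; for finite weights this rescaling is equivalent to the weighted penalty. The infinite-weight case (covariates that are forced into the model) is not covered by this trick and would require leaving those columns unpenalized. This is a minimal sketch under these assumptions, not the estimator code used in the paper.

import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(X, y, lam, omega):
    # Solves  min_b  1/(2T) ||y - X b||_2^2 + lam * sum_j |b_j| / omega_j
    # for finite weights omega_j > 0, by fitting a plain LASSO on the rescaled
    # design X_tilde[:, j] = omega_j * X[:, j] and mapping the solution back.
    X_tilde = X * omega                        # broadcast over columns
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X_tilde, y)
    beta_hat = omega * fit.coef_               # b_j = omega_j * b_tilde_j
    active = np.flatnonzero(beta_hat)          # selected set M^(n)
    return beta_hat, active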
Our weighted LASSO +formulation can also be interpreted as a Bayesian estimator with the canonical Laplacian prior. +Conventional regression theory will not provide correct inferential statements on the weighted- +LASSO estimates. We face two challenges. First, regularized estimation results in a bias, which +needs to be corrected. Second and more challenging, post-selection inference changes the distribu- +tion of the estimators. When we observe an active ˆβ(n) +j +from (3), it would be incorrect to simply +calculate its p-value from a conventional t-distribution. This invalidity stems from the fact that +conditional on observing a LASSO output, β(n) +j +must be large enough in magnitude for its ˆβ(n) +j +to +be active. In other words, the probability distribution of the estimators is truncated. +The correct inference has to be conditional on the covariates being selected by the LASSO +estimator. Hence, valid p-values have to be the tail probability conditional on being in the selection +set. The key to quantify such styles of inference is to recognize that a sparsity constrained estimator +is typically the result of solving Karush-Kuhn-Tucker (KKT) conditions, which can in turn be +geometrically characterized as polyhedral constraints on the support of response variables. This +is first established in Lee, Sun, Sun, and Taylor (2016), who provide the stylized results that +Post-Selection Inference (PoSI) of debiased non-weighted LASSO estimators can be calculated as +6 + +polyhedral truncation on Y . This line of research is also referred to as Selective Inference in other +literature such as Taylor and Tibshirani (2015). We extend this line of literature to allow for the +Weighted-LASSO. We derive these results with assumptions common in the PoSI LASSO literature, +detailed in Appendix A, and referred to as conventional regularity conditions for ease of exhibition. +THEOREM 1. Truncated Gaussian Distribution of Weighted-LASSO +Under conventional regularity conditions, the debiased estimate ¯βi for the i-th Weighted-LASSO +active covariate is conditionally distributed as +¯βi|Weighted-LASSO ∼ T N {η⊤Y :AY ≤b(ω)} +(5) +where T N A is truncated-Gaussian with truncation A, and the weights ω only appear in b(ω) +Theorem 1 has two elements. First, it debiases the LASSO estimate by a shifting argument. +While we use a geometric argument to remove the bias, the bias adjustment takes the usual form +in the LASSO literature as for example in Belloni and Chernozhukov (2013). The debiased LASSO +estimator simply equals a standard OLS estimation on the subset Mn selected by the Weighted- +Lasso. Second, the distribution of the linear coefficients is not a usual Gaussian distribution, but it +is truncated due to studying post-selection coefficients. This geometric perspective is less common +in the LASSO literature, but provides several advantages. One advantage of the geometric approach +is that it avoids the use of infeasible quantities, in particular the second moment of the large set of +potential covariates. Furthermore, the distribution result is not asymptotic in T, but also valid in +finite samples. We can obtain these results because we make the stronger assumption that the data +is normally distributed. Appendix A provides the detailed information on constructing ¯β and the +definitions of η, A, b(ω) along with lemmas that lead up to this result. It also discusses extensions +and the effect of estimating the variance of the noise. 
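To illustrate how a p-value is obtained from the truncated-Gaussian law of Theorem 1, the sketch below evaluates the conditional tail probability of the statistic given that it falls in a truncation interval [v_lo, v_hi]; the interval summarizes the polyhedral selection event {AY ≤ b(ω)} for a given contrast, and the construction of η, A and b(ω) is the part deferred to Appendix A. The function name and the reduction of the polyhedron to a single interval are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.stats import truncnorm

def posi_pvalue(eta_y, sigma, v_lo, v_hi, null_value=0.0):
    # Two-sided p-value for H0: eta' mu = null_value when, conditional on the
    # selection event, eta'Y is N(null_value, sigma^2) truncated to [v_lo, v_hi].
    a = (v_lo - null_value) / sigma
    b = (v_hi - null_value) / sigma
    z = (eta_y - null_value) / sigma
    upper = truncnorm.sf(z, a, b)              # P(Z > z | a <= Z <= b)
    return 2.0 * min(upper, 1.0 - upper)

In practice the tail probabilities can underflow when the observed statistic sits deep inside the truncated region, so a numerically careful implementation would work on the log scale.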
The empirical analysis is based on the explicit +form of Theorem 1 formulated in Theorem A.3. +Our Weighted-LASSO results make several contributions. First, the expression for the trun- +cated conditional distribution with weights become much more complex than for the special case +of the conventional LASSO. Second, we provide a simple, easy-to-use and asymptotically valid +conditional distribution in the case of an estimated noise variance. Last but not least, we show the +formal connection with alternative debiased LASSO estimators by showing that debiasing can be +interpreted as one step in a Newton-Ralphson method of solving a constrained optimization. +Theorem 1 allows us to obtain valid p-values for Weighted-LASSO coefficients. We obtain these +p values from the simulated cumulative distribution function of the truncated Gaussian distribution. +Crucially, all results for multiple testing adjustment in panels that we study in the following sections +neither require us to use a weighted Lasso estimator nor to use the p-values implied by Theorem +1. We only require to have a set of valid p-values for sparsity constrained models. These can be +obtained with any suitable regularized estimator and post-selection inference. The key element is +the selection of a low dimensional subset with p-values conditional on this selection. We propose +the weighted LASSO conditional inference results as an example of the type of sparsity constraint +7 + +models we are interested in, and demonstrate a machinery with which we can obtain valid p-values +for sparsity constrained models. In our empirical studies, we use Weighted-LASSO as our sparsity +constrained model since we want to specify strong prior beliefs on a few covariates and it is common +practice to use LASSO in the context of our empirical studies. Nonetheless, the testing methods +in the next sections accommodate any sparse estimator, and can be detached from inference for +Weighted-LASSO. +3 +Data-Driven Hypotheses +Our goal is to provide formal statistical tests that allow us to establish a joint model across a +large cross-section with potentially weak covariates. This requires us to provide a form of statistical +significance test with multiple testing adjustment that properly accounts for covariates that only ex- +plain a small subset of the cross-sectional units. This is important as in many problems in economic +and finance there is substantial cross-sectional variation in the explanatory power of covariates, and +a model that simply minimizes an average error metric might neglect weaker covariates. +An essential step for a formal statistical test is to formulate the hypothesis. This turns out to be +non-trivial for a large panel with a first stage selection step for the covariates. It is a fundamental +insight of our paper, that the hypothesis of our test has to be conditional on the selected set of +active covariates of the first stage. Once we have defined the appropriate hypothesis, we can deal +with the multiple testing adjustment, which by construction is also conditional on the selection +step. +The hypothesis formulation and test construction only requires valid p-values from a first stage +selection estimator. The results of the next two sections do not depend on a specific model for +obtaining these p-values and the active set. The results are valid for any model including non- +linear ones. The input to the analysis is a N ×J matrix, which specifies which covariates are active +for each unit and the corresponding p-values. 
The Weighted-LASSO is only one possible model, +but it can be replaced by any regularized model. We have introduced the sparse linear model as +it is the horse race model for many problems in economics and finance, and therefore of practical +relevance. +We illustrate the concept of a data-driven hypothesis with a simple example, which we will +use throughout this section. For simplicity we assume that we have J = 4 covariates and want +to explain N = 6 cross-sectional units. In the first stage, we have estimated a Weighted-LASSO +and have obtained the post-selection valid p-values for each of the N units. We collect the fitted +sparse estimator ¯β(n) for the nth unit in the matrix ¯β. Note, that this matrix has “holes” due to +the sparsity for each ¯β(n). Figure 1(a) illustrates ¯β for this example. +Similarly, we collect the corresponding p-values in the matrix P . For the nth unit, we only +have p-values for those covariates that are active in the nth linear sparse model. Thus, Figure +1(b) also has white boxes showing the same pattern of unavailable p-values due to the conditioning +on the output of the linear sparse model. These holes can appear at different positions for each +8 + +Figure 1: Illustrative example of data-driven selection +(a) Matrix ¯β +(b) Matrix P of p-values +This figure illustrates in a simple example the data-driven selection of a linear sparse model. In a first stage, we have +estimated a regularized sparse linear model for each of the N = 6 units with J = 4 covariates. Each row represents +the selected covariates with their estimated coefficients and p-values. The columns represent the J = 4 different +covariates. The grey shaded boxes represent the active set, while white boxes indicate the inactive covariates. The +numbers are purely for demonstrative purposes. +unit, which makes this problem non-trivial. This non-trivial shape of either subplot (a) or (b) +is completely data-driven and a consequence of linear sparse model selection. We show that the +hypothesis should be formed around these non trivial shapes as well, which is why we name it the +data-driven hypothesis family. +We want to test which covariates are jointly insignificant in the full panel. A data-agnostic +approach would simply test if all covariates are jointly insignificant, independent of the data-driven +selection step in the first stage. A data-agnostic hypothesis is unconditional as it does not depend +on any model output. +However, as we will show, this perspective is problematic for the high- +dimensional panel setting with many covariates as it ignores the dimension reduction from the +selection step. Therefore, an unconditional multiple testing adjustment accounts for “too many” +tests, which severely reduces the power. +We propose to form the hypothesis conditional on the first stage selection step. The data-driven +hypothesis only tests the significance of the covariates that were included in the selection, and hence +can drastically reduce the number of hypothesis. However, given the non-trivial shape of the active +set, the multiple testing adjustment for the data-driven hypothesis is more challenging. +Before formally defining the families of hypothesis, we illustrate them in our running example. 
+9 + +1 +2 +3 +4 +1 +-5.43 +2.15 +2 +-1.10 +4.78 +-0.08 +m +0.19 +4.59 +4 +4.44 +2.10 +5 +1.44 +4.53 +2.10 +6 +-0.46 +4.701 +2 +3 +4 +1 +0.127 +0.587 +2 +0.005 +0.001 +0.871 +3 +0.526 +0.001 +4 +:0.001 +≤0.001 +5 +:0.0010.001 +0.001 +6 +0.102 +0.010The data-agnostic hypothesis HA for explaining the full panel takes the following form: +HA = {HA0,1, HA0,2, HA0,3, HA0,4} += {β(1) +1 +=β(2) +1 += β(3) +1 += β(4) +1 += β(5) +1 += β(6) +1 += 0, +β(1) +2 +=β(2) +2 += β(3) +2 += β(4) +2 += β(5) +2 += β(6) +2 += 0, +β(1) +3 +=β(2) +3 += β(3) +3 += β(4) +3 += β(5) +3 += β(6) +3 += 0, +β(1) +4 +=β(2) +4 += β(3) +4 += β(4) +4 += β(5) +4 += β(6) +4 += 0} +(6) +The data-driven hypothesis HD only includes the active set and hence equals +HD = {β(2) +1 +=0, +β(1) +2 +=β(3) +2 += β(5) +2 += β(6) +2 += 0, +β(1) +3 +=β(2) +3 += β(3) +3 += β(4) +3 += β(5) +3 += β(6) +3 += 0, +β(2) +4 +=β(4) +4 += β(5) +4 += 0} +(7) +Obviously, HA has a larger cardinality of |HA| = 24 > |HD| = 14. This holds in general, unless the +first stage selects all covariates for each unit, in which case the two hypotheses coincide. +Formally, the data-agnostic family of hypothesis is defined as follows: +DEFINITION 1. Data-agnostic family +The data-agnostic family of hypotheses is +HA = {HA0,i|i ∈ [d]} +where HA0,i = +� +j∈[N] +H(j) +A0,i and H(j) +A0,i : β(j) +i += 0. +(8) +It is evident that HA does not need any model output or exploratory analysis, so it is indeed +data-agnostic. +As soon as we use a sparsity constrained model that has censoring capabilities, we no longer +observe (Y , X) from its data generating process. Consequently, unless our hypotheses depend on +how we built the model, or equivalently on how the data was censored, the data-agnostic hypotheses +forgo power without any benefit in false discovery control. Therefore, we formulate the hypothesis +on the ith covariate H(j) +0,i only if i ∈ M(j), that is, it is in the active set. Conditional on observing +the model output, there is no inference statement to be made about H(j) +0,i if i /∈ M(j), because its +estimator is censored by the model. +We denote as Ki the set of units for which the ith covariate is active. We define the cross- +sectional hypothesis for the ith covariate as: +H0,i = +� +j∈Ki +H(j) +0,i +����M, +∀i : Ki ̸= ∅ +(9) +By combining all covariates {i : Ki ̸= ∅} that show up at least once in one of the active sets of our +sparse linear estimators, we arrive at a data-driven hypothesis associated with our panel. This is +defined as follows: +10 + +DEFINITION 2 (Data-driven family). The data-driven family of hypotheses conditional on M +is +HD = {H0,i|i : Ki ̸= ∅} +(10) +This demonstrates the non-trivial nature of writing down a hypothesis in high-dimensional +panel: we can only collect Ki - the set of units for which the ith covariate is active - after seeing +the sparse selection estimation result. +4 +Multiple Testing Adjustment for Data-Driven Hypothesis +4.1 +Simultaneity Counts through Panel Localization +We show how to adjust for multiple testing of data-driven hypotheses. Given the p-values p(j) +i +for i ∈ M and j ∈ Ki, we form the data-driven hypothesis HD. Our goal is to reject members of +HD while controlling the Type I error, and the common way to measure such error is the family- +wise error rate. This is the same underlying logic that is used to define confidence intervals and +determine significance of covariates in a conventional setup. 
The crucial difference is that we need +to account for multiple testing given the large number of cross-sectional units. The family-wise +error rate (FWER) is defined as follows: +DEFINITION 3. Family-wise error rate +Let V denote the number of rejections of H(j) +0,i |M(j) when the null hypothesis is true. The family- +wise error rate (FWER) is P(V ≥ 1). +Similar to the conventional definition, we simply count the false rejections V and define FWER +as the probability of making at least one false rejection. +Importantly, Definition 3 accounts for the fact that we might repeatedly test on � +j∈[N] |Mj| +rather than a single hypothesis test of the form H(j) +0,i : β(j) +i += 0|M(j). Our contribution to FWER +control in the panel setting is thus to take into consideration both the multiplicities in units and +covariates when we deal with the “matrix” of p-values P . To achieve this goal, we propose a new +simultaneity account for the ith covariate, calculated as +Ni = +� +j∈Ki +|Mj| +(11) +Figure 2 illustrates the simultaneity counting for our running example with N = 6 units and +J = 4 covariates. The blue boxes represent the active set for a specific covariate. The yellow boxes +indicate the “co-active” covariates, which have to be accounted for in a multiple testing adjustment. +In the case of the first covariate j = 1, only the second unit n = 2 has selected this covariate. This +second unit has also selected covariate j = 3 and j = 4, which are jointly tested with the first +covariates. Hence, they are “co-active”, and the simultaneity count equals N1 = 3. Intuitively, +Nj represents all relevant comparisons for the jth covariate because it counts how many covariates +11 + +Figure 2: Simultaneity counts Ni in the illustrative example +(a) N1 = 3 +(b) N2 = 9 +(c) N3 = 14 +(d) N4 = 8 +This figure shows the simultaneity counts Ni in the illustrative example. The subplots represent the simultaneity +counts for the J = 4 covariates. The blue boxes indicate the active set Kj of the j covariates, while yellow boxes +indicate the “co-active” covariates of the jth covariate. The simultaneity counts are the sum of yellow and blue +boxes. +are active with the jth covariate in the regressions. Hence, Nj quantifies the number of “multiple +tests” for each covariate. +In subplot 2(a), we see that K1 = {2} for the 1st covariate, indicated by the blue box, because +it is only active in the second unit’s regression. The multiple testing adjustment needs to consider +all yellow boxes, and N1 = 3 is thus the total count of 1 blue and 2 yellow boxes. Similarly, for +the second covariate, K2 = {1, 3, 5, 6}, so we shade boxes yellow for the 2nd, 3rd and 5th units +and obtain N2 = 9. We can already see that our design of simultaneity count takes all relevant +pairwise comparisons into considerations, but avoids counting the white boxes - which would cause +overcounting and result in over-conservatism. +Our multiplicity counting is a generalization of the classical Bonferroni adjustment for multiple +testing. A conventional Bonferroni method for the data-agnostic hypothesis HA has a simultaneity +count of |HA| = N · J = 24 for testing each covariate. A direct application of a vanilla Bonfer- +roni method to the panel of all selected units and the data-driven hypothesis HD, would use a +simultaneity count of |HD| = 14 for testing each covariate. 
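The counts in Figure 2 can be reproduced directly from the active pattern of Figure 1; in the short sketch below, the 0/1 array encodes the grey boxes of Figure 1 (rows are units, columns are covariates), and the last line recovers the counts reported in Figure 2.

import numpy as np

# Grey boxes of Figure 1: rows = units 1..6, columns = covariates 1..4
active = np.array([[0, 1, 1, 0],     # unit 1
                   [1, 0, 1, 1],     # unit 2
                   [0, 1, 1, 0],     # unit 3
                   [0, 0, 1, 1],     # unit 4
                   [0, 1, 1, 1],     # unit 5
                   [0, 1, 1, 0]],    # unit 6
                  dtype=bool)
M_size = active.sum(axis=1)                        # |M^(n)| per unit: 2, 3, 2, 2, 3, 2
N_count = np.array([M_size[active[:, j]].sum()     # eq. (11): N_j = sum over K_j of |M^(n)|
                    for j in range(active.shape[1])])
print(N_count)                                     # [ 3  9 14  8], matching Figure 2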
Our proposed multiplicity counting is +a refinement that leverages the structure of the problem, and takes the heterogeneity of the active +sets for each covariate into account. Our count has only N1 = 3, N2 = 9 and N4 = 8 for the +covariates j = 1, 2 and 4. Only for covariate j = 3 is the simultaneity count the same as a vanilla +Bonferroni count applied to HD, i.e. N3 = 14. +In addition to the simultaneity count of each covariate, we need an additional “global” metric +for our testing procedure. We define a panel cohesion coefficient ρ as a scalar that measures how +12 + +1 +2 +3 +4 +2 +4 +5 +61 +2 +3 +4 +1 +2 +4 +5 +61 +2 +3 +4 +1 +2 +4 +5 +62 +3 +4 +1 +2 +3 +4 +5 +6Figure 3: Illustration of the cohesion coefficient +(a) ρ = J−1 = 0.25 +(b) ρ = 0.44 +(c) ρ = 1 +This figure illustrate the cohesion coefficient ρ in three separate examples. It shows the smallest, largest and in- +between cases of ρ. The columns represent the J = 4 different covariates.The blue boxes indicate the active sets for +each panel. +sparse or de-centralized the proposed hypotheses family is: +ρ = +� +�� +j +|Kj| +Nj +� +� +−1 +(12) +The panel cohesion coefficient ρ is conditional on the data-driven selection of the overall panel. It is +straightforward to compute once we observe the sparse selection of the panel. This coefficient takes +values between J−1 and 1,2 where larger values of ρ imply that the active set is more dependent in +the cross-section. This can be interpreted as that the panel Y has a stronger dependency due to +the covariates X. Intuitively, in the extreme case when ρ = J−1, the panel can be separated into +J smaller problems, each containing a subset of response units explained by only one covariate. +Thus the panel would be very incohesive, and could be studied with J independent tests. In the +other extreme, if ρ approaches 1, the first-stage models include all active covariates for all units. +We consider this as a very cohesive panel. If ρ is between theses bounds, the panel is cohesive in +a non-trivial way such that some units can be explained by some covariates and there is no clear +separation of the panel into independent subproblems. +Figure 3 illustrates the panel cohesion coefficient in three examples. The subplots show three +active sets that are different from our running example. The left subplot 3(a) shows the extreme +case of ρ = J−1, where the panel is the least cohesive. The right subplot 3(c) illustrates the other +extreme for ρ = 1, where the panel is the most cohesive. The middle subplot 3(b) is the complex +case of a medium cohesion coefficient. +2We prove this bound in the Appendix, without leveraging sparsity of first-stage models but rather as an algebraic +result with intuitive interpretations. +13 + +1 +2 +3 +4 +1 +2 +3 +4 +5 +61 +2 +3 +4 +1 +2 +4 +5 +61 +2 +3 +4 +1 +2 +4 +5 +6Our novel simultaneity count and cohesiveness measure are the basis for modifying a Bonferroni +test for FWER-controlled inference. Theorem 2 formally states the FWER control. The proof is +in the Online Appendix. +THEOREM 2. FWER control +The following rejection rule has FWER≤ γ on HD: +min +n∈Kj +� +p(n)(j) +� +≤ ρ γ +Nj +⇒ Reject H0,j +(13) +where p(n)(j) are valid p-values for each univariate unit n, and ρ is the panel cohesion coefficient. +This completes the joint testing procedure. First, we calculate p-values after running a sparse +linear estimator time-series regression. 
Second, we use the sparse linear estimator output to write +down a hypothesis and, third, we provide a FWER control inference procedure by combining the +p-values across the cross-section and test the hypothesis. +The difference between a naive Bonferroni and our FWER control is particularly pronounced +for weak covariates that affect only a subset of the cross-sectional units. Given a FWER control +level of γ, the rejection threshold for a naive Bonferroni test is +γ +JN for every covariate. The rejection +threshold for our FWER control is always higher, and differs in particular when Nj is small and ρ +is large. This is the case for weak covariates in a cohesive panel. +As it is common in statistical inference, we focus on Type I error control. Type II error rates +require the specification of alternatives. While we do not provide formal theoretical results for the +power of our inference approach, we show comprehensively in the simulation and empirical part, +that our approach has substantially higher power than conventional approaches. +We point out that the validity of our procedure holds for unbalanced panels as well. +This +is because even when there are different number of observations for the nth and mth units, i.e. +Tn ̸= Tm for n ̸= m, they can still be estimated separately in the first stage of the regularized +regression. The hypothesis testing and selection of a parsimonious model only requires the matrix +P of valid p-values, which can be based on different samples. +4.2 +Least Number of Covariates: Traversing the Threshold +The typical logic of statistical inference is to determine which covariates we should admit from +XM, given a significance level γ. We use K to denote the number of selected covariates. When +γ is specified as a lower quantity, we expect K to decrease as well, that is, the rejection becomes +harsher. +As the number of admitted covariates of our procedure is monotone in γ, we want to ask +the following converse question: How low do we need to set γ such that we reject K covariates? +Concretely, we are interested in finding: +14 + +γ∗(K) = sup +� +� +�γ|K = +J +� +j=1 +1 +� +min +n∈Kj +� +p(n) +j +� +≤ ρ γ +Nj +�� +� +� . +(14) +Let pj = minn∈Kj{p(n) +j +} be the 1st order statistic for j = 1, ..., J. Then (14) is simply the K-th +order statistics of Njpj/ρ: +γ∗(K) = min{Nipi/ρ|∃j1, j2, ..., jK ∈ {1, ..., J} : Nipi ≥ Njkpjk}. +(15) +Since this minimization scan is monotone, we can determine how many covariates at least +should be admitted, given a control level, which is similar to the “SimpleStop” procedure described +in Choi, Taylor, and Tibshirani (2017). The following corollary formalizes this inversion method +that finds the least number of covariates to admit: +COROLLARY 1. Least number of covariates +Given the FWER level γ, there exists a unique number K∗(γ) such that +K∗(γ) = +� +� +� +arg max0≤K≤J γ∗(K) ≤ γ +∃K : γ∗(K) ≤ γ +d +o.w. +(16) +The statement simply states that the simplest linear model should have at least K∗(γ) covariates +for a given γ. Note that it is possible that, for example, γ∗(5) and γ∗(6) are both equal to 0.05, +while γ∗(7) > 0.05. In this case the minimum number of covariates is K∗(0.05) = 6 because it does +not hurt FWER-wise to include 6 covariates in the model. Hence, we are making a slightly different +statement than that there would be exactly K∗(γ) covariates in the true linear model. 
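Computationally, the rejection rule of Theorem 2 and the inversion of Corollary 1 reduce to a few lines once the active pattern and the unit-level p-values are collected; a sketch, building on the mask and p-value arrays introduced above (our own notation, not the paper's code):

import numpy as np

def panel_posi_selection(active, P, gamma):
    # active : (N, J) boolean active pattern; P : (N, J) p-values, NaN when inactive
    N, J = active.shape
    M_size = active.sum(axis=1)                                         # |M^(n)|
    N_count = np.array([M_size[active[:, j]].sum() for j in range(J)])  # eq. (11)
    K_size = active.sum(axis=0)                                         # |K_j|
    nz = N_count > 0
    rho = 1.0 / np.sum(K_size[nz] / N_count[nz])                        # eq. (12)
    p_min = np.min(np.where(active, P, np.inf), axis=0)                 # min over n in K_j
    adj = np.full(J, np.inf)
    adj[nz] = N_count[nz] * p_min[nz] / rho                             # N_j * p_j / rho
    selected = np.flatnonzero(adj <= gamma)       # Theorem 2: reject H_{0,j}
    K_star = int(selected.size)                   # Corollary 1: least number of covariates
    gamma_star = np.sort(adj[np.isfinite(adj)])   # gamma*(K) is the K-th smallest (eq. 15)
    return selected, K_star, gamma_star, rho

Sorting the adjusted values N_j p_j / rho also delivers the importance ranking of the covariates used in Table 1.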
The number +of covariates is obviously conditional on the set of candidate covariates X, and we can only make +statements for this given set. +In our empirical study we consider candidate asset pricing factors X to explain the investment +strategies Y . More generally, the linear model that we consider is often referred to as a factor model. +Therefore, we will also refer to the selected covariates as factors, and use these two expressions as +synonyms moving forward. This directly links our procedure to the literature on estimating the +number of factors to explain a panel. A common approach in this literature is to use statistics based +on the eigenvalues of either Y or X to make statements about the underlying factor structure. Our +approach is different, as it provides significance levels for the selected factors and FWER control +for the number of factors. +Table 1 illustrates the estimation of the number of factors and their ranking with our running +example introduced in Figure 1. We calculate the simultaneity counts Ni’s as given in (11) and +demonstrated in Figure 2, and pi as the smallest p-values associated with the ith covariate. Then, +the rejection rule in Theorem 2 is based on whether a pre-specified level γ satisfies pi < ργ +Ni , which +is equivalent to Ni · piρ < γ. +Thus, the natural ranking of the covariates is to sort all covariates in descending order of the +15 + +Table 1: Sorted p-values for the running example +Factor (j) +pj +Simultaneity count for HD +Conventional Bonferroni for HA +ρ−1 · Nj +ρ−1 · Nj · pj +J · N +J · N · pj +3 +< 0.001 +22.1 +< 0.001 +24 +0.002 +4 +< 0.001 +11.1 +0.001 +24 +0.003 +1 +0.005 +4.7 +0.024 +24 +0.120 +2 +0.002 +14.3 +0.028 +24 +0.051 +This table constructs “significance” levels for the running example introduce in Figure 1. We compare the simul- +taneity count for the data-driven hypotheses HD and a onventional Bonferroni count for data-agnostic hypotheses +HA. The products Nj · pj, respectively J · N · pj, can be interpreted as the significance levels for the corresponding +approach. Given a FWER control γ all factors with ρ−1 · Nj · pj (respectively J · N · pj) below this threshold are +selected. +Ni · pi/ρ values as shown in Table 1. It is then trivial to determine K∗(γ) for any choice of γ. For +example, for γ = 1%, we would select factors 3 and 4, but not 1 and 2. On the other hand, for +γ > 2%, we would include all four factors. Hence, the ranking of Nipi/ρ directly maps into K∗(γ). +The list of Nipi/ρ encompasses more information than just the number of factors. Naturally, it +provides an importance ranking of the factors. Furthermore, the number Ni reveals if significant +factors are “weak”. In our case, factor 1 has N1 = 3, which indicates that it affects only a small +number of hypothesis. Its p-value p1 is sufficiently small to still imply significance in terms of +FWER control. +For comparison, Table 1 also includes the corresponding analysis for the data-agnostic hypoth- +esis and a conventional Bonferroni correction. The Bonferroni analysis uses the same p-values but +a different multiple testing adjustment. In our case, the p values would be multiplied by J ·N = 24 +as this corresponds to the total number of hypothesis tests. This will obviously make the inference +substantially more conservative. Indeed, even for a FWER control of γ = 4%, we would only select +factors 3 and 4. We would need to raise the FWER control to γ = 12% to include factor 1. 
Hence, +weak factors, like factor 1, are more likely to be discarded by the data-agnostic hypothesis with +conventional multiple testing adjustment. +We want to emphasize that a data-agnostic hypotheses with conventional Bonferroni correction +does provide correct FWER control, but it is overly conservative. By construction, the data-agnostic +Bonferroni approach will test a larger number of hypothesis, which means that the corresponding +“significance levels” will always be lower or equal to our data-driven simultaneity count. Second, +the data-agnostic Bonferroni approach does not differentiate the “strength” of the factors, while +our approach provides a selection-based heterogeneous adjustment of the p-values. This is essential +for detecting weak factors. +Having introduced all building blocks of our novel method to detect covariates, we put the entire +procedure together as “Panel-PoSI”: +PROCEDURE 1. Panel-PoSI +The Panel-PoSI procedure consists of the following steps: +16 + +1. For each unit n = 1, ..., N unit, we fit a linear sparse model ˆβ(n) +X,Y (c, ω) given (X, Y , λ, ω). We +suggest cross-validation to select the LASSO penalty λ. We construct the sparse estimators +¯β(n) and the corresponding p-values for the active covariates for each unit, and collect them +in the “matrix” of p-values P . +2. We collect the panel-level sparse model selection event M and construct the data-driven hy- +pothesis HD. +3. Given the FWER control level γ and based on the the simultaneity counts Nj, we make +inference decision for the sparse model. We can rank covariates in terms of their significance +and select a parsimonious model that explains the full panel. +As we have now all results in place, we can summarize the advantages of our procedure. First, +we want to clarify that our goals and results are different from just some form of optimal shrink- +age selection. Selecting a shrinkage parameter with some form of cross-validation in a regularized +estimator like LASSO does not provide the same insights and model that we do. +A shrinkage +estimator can either be applied to each unit separately, as we do it in our first step, or to the +full panel in a LASSO panel regression. The separate covariate selection for each cross-sectional +unit does not answer the question which covariates are needed to explain the full panel jointly. A +shrinkage selection on the full panel for some form of panel LASSO can neglect weaker factors, +as those receive a low weight in the cross-validation objective function. Second, tuning parameter +selection with cross-validation requires a sufficiently large amount of data. Our approach is attrac- +tive as we can do the complete analysis on the same data. That means, an initial LASSO is used +to first reduce the number of covariates, but this set is then further trimmed down using inferential +theory. Hence, we can construct a parsimonious model even for data with a relatively short time +horizon, but large cross-sectional dimension. Third, the statements that we can make are much +richer than a simple variable selection. We can formally assess the relative importance of factors +in terms of their significance. The model selection is directly linked to a form of significance level, +which allows us to assess the relevance of including more factors. Last but not least, we can also +make statements about the strength of factors. In summary, Panel-PoSI is a disciplined approach +based on formal statistical theory to construct and interpret a parsimonious model. 
+5 +Ordered Multiple Testing on Nested Hypothesis Family +So far, our hypothesis family HD has no hierarchy and consequently, we have not imposed a +sequential structures on the admission order of covariates of X. However, there are cases where +the covariates or factors warrant a natural order such that the family possesses a special testing +logic. A hierarchical structure in covariates arises when the inclusion of the next covariate only +make sense if the previous covariates is included. One example would be if the next covariates +refines a property of the previous covariate. Another case is the use of principal component (PC) +factors. +The conventional logic is to include PCs sequentially from the dominating one to the +17 + +least dominating one. This is similar to the motivation for Choi, Taylor, and Tibshirani (2017), +but different from them, we treat the PCs as exogenous without taking the estimation of PCs +explicitly into account. In this section, we will use exogenous PCs as hierarchical covariates, as this +is the main example in our empirical study. However, all the results hold for any set of exogenous +hierarchical covariates. +Without loss of generality, we presume X has the jth column as the jth nested factor. A +k-order nested model N(k) is of the following form +N(k) model : Y = X[k]β[k] +(17) +where [k] = {1, ..., k} is the set that includes indices up to k. For example, a hierarchical three +factor model corresponds to X{1,2,3}. When formulating our hypothesis family, we must represent +the sequential testing structure. This is reflected in our definition of nested families of hypotheses: +DEFINITION 4. Data-driven nested family +The data-driven nested family of hypotheses conditional on M is +HN = {HN,k : k = 0, 1, ..., J}, +HN,k = +� +j∈Kk +H(j) +N,k +����M, +H(j) +N,k : {i′ : β(j) +i′ +̸= 0} ≤ k. +(18) +HN,0 completes the case when no rejection on any factor is made. Whenever HN,k is true, then +HN,k′ is also true for k < k′ ≤ J. Moreover, in the cases where Kk = ∅ but Kk′ ̸= ∅ with k < k′, +the notation ensures that the hypothesis HN,k is included in HN simply because Kk′ is present. +In other words, if a less dominating hypothesis HN,k′ is suggested by data (that is, its active set is +non-empty Kk′ ̸= ∅), HN would automatically include all HN,k for k ≤ k′. +The FWER control property needs to be adapted to the nested nature of this family. Choi, +Taylor, and Tibshirani (2017) argue that the proper measurement is to control for ordered factor +count over-estimation with level γ, as follows: +DEFINITION 5. FWER for nested family +For a test that rejects HN,k for k = 1, 2, ..., ˆk of HN, the FWER control at the level γ satisfies +P(ˆk ≥ s) ≤ γ, where s is the true factor count. +Given the hierarchical belief about the model, we need to add the following additional assump- +tion: +ASSUMPTION 1. Tail p-values +Under H(j) +N,k, there is p(j)(i′) iid +∼ Unif [0, 1] if i′ > k. +Assumption 1 only needs to hold for the tail hierarchical covariates. In the case of PCs, it only +applies to the lower order tail PC factors that should not be included for a given null hypothesis. 
+For example, if the true model is HN,s, we only need p(j)(i) iid +∼ Unif[0, 1] for i > s, which is a +usual type of assumption in this literature such as in G’Sell, Wager, Chouldechova, and Tibshirani +18 + +Figure 4: Example of hierarchical simultaneity counts Norder +k +for HN +(a) N order +4 += 3 +(b) N order +3 += 5 +(c) N order +2 += 8 +(d) N order +1 += 12 +This figure shows the simultaneity counts N order +i +in an illustrative example. The subplots represent the simultaneity +counts for the J = 4 covariates and N = 6 units. The dark blue columns present the active factors, while the light +blue columns capture factors of higher-order. The sub-plots from left-to-right represent our calculation order from +the highest-order factor to the 1st factor. +(2016). Moreover, because the nested nature guarantees that the higher-order PCs are more likely +to be null, a step-down procedure is expected to increase the power relative to a step-up procedure. +As our focus is to control for false discoveries, we also need to adjust our simultaneity counts +to the sequential testing. Concretely, we consider first taking a union to obtain the active unit set +Korder +k +and then calculate conservative simultaneity counts Norder +k +: +Korder +k += +� +i∈{k,k+1,...,J} +Ki, +Norder +k += +� +j∈Korder +k +|Mj|. +(19) +It is possible for some |Mk| to be 0 (that is, the kth PC could be inactive for all units), but its +Norder +k +would be 0 if and only if higher-order PCs all have |Mk′| = 0 for k′ > k. +Figure 4 illustrates the process of our step-down simultaneity count. From the left, we start +with factor k = 4 and move step-wise down to factor k = 1 on the right. The dark blue columns +present the active factors, while the light blue columns capture factors of higher-order. In the +left-most sub-figure, we only need to account for the 4th PC, implying Norder +4 += 3, whereas in the +mid-left sub-figure, the 3rd PC has Norder +3 += 2 + 3 = 5. Eventually, in the right-most sub-figure, +we have swept through the entire panel and the 1st PC has a simultaneity count of Norder +1 += 12. +Now we can introduce a step-down procedure adapted to the nested structure of HN: +PROCEDURE 2. Step-down rejection of nested ordered family HN +The step-down rejection procedure consists of the following steps: +1. For each k ∈ {1, ..., J} calculate the ordered simultaneity count Norder +k +. +19 + +1 +2 +3 +4 +1 +2 +3 +4 +5 +61 +2 +4 +1 +2 +4 +5 +61 +2 +4 +1 +2 +4 +5 +61 +2 +4 +1 +2 +3 +4 +5 +62. For each k ∈ {1, ..., J} calculate the approximated R´enyi representation Zorder +k +and its trans- +formed reversed order statistics qorder +k +: +Zorder +k += +J +� +i=k +� +j∈Ki +ln(p(j)(k)) +Norder +1 +− Norder +i+1 1{i ̸= J}, +qorder +k += exp(−Zorder +k +) +(20) +3. Reject hypothesis 1, 2, ..., ˆk, where ˆk = max{k : qorder +k +≤ γNorder +k +JN +}. +This procedure will have FWER control at level γ as stated in the following theorem: +THEOREM 3. FWER control for ordered hypothesis +Under Assumption 1, Procedure 2 has FWER control of γ for the ordered hypothesis HN. +The proof is deferred to the Online Appendix. This design extends Procedure 2 from G’Sell, +Wager, Chouldechova, and Tibshirani (2016) and “Rank Estimation” from Choi, Taylor, and Tib- +shirani (2017), both of which focus on a single sequence of p-values rather than the panel setting. +In Step 2, we use Assumption 1 to transform p-values into ln(p(j)(k)), which are i.i.d. standard +exponential random variables. 
6 Simulation

We demonstrate in simulations that our inferential theory allows us to select better models. We compare different estimation approaches to select covariates and show that our approach better trades off false discovery and correct selections and hence results in a better out-of-sample performance.

Table 2 summarizes the benchmark models. Our framework contributes along three dimensions: the selection step for the sparse model, the construction of the hypothesis and the multiple testing adjustment. We consider variations of these three dimensions, which yields in total six estimation methods. By varying the different elements of the estimators, we can understand the benefit of each component.

Table 2: Summary of estimation methods

Name                     Abbreviation   Selection            Hypothesis        Multiple Testing     Rejection rule
Naive OLS                N-OLS          OLS without LASSO    Agnostic H_A      No adjustment        p_OLS < γ
Bonferroni OLS           B-OLS          OLS without LASSO    Agnostic H_A      Bonferroni           p_OLS < γ/(JN)
Naive LASSO              N-LASSO        LASSO without PoSI   Agnostic H_A      No adjustment        p_LASSO < γ
Bonferroni Naive LASSO   B-LASSO        LASSO without PoSI   Agnostic H_A      Bonferroni           p_LASSO < γ/(JN)
Bonferroni PoSI          B-PoSI         LASSO with PoSI      Agnostic H_A      Bonferroni           p_PoSI < γ/(JN)
Panel PoSI               P-PoSI         LASSO with PoSI      Data-driven H_D   Simultaneity count   p_PoSI < ργ/N_i

This table compares the different methods to estimate a set of covariates from a large dimensional panel. For each method, we list the name and abbreviation. The selection refers to the regression approach for each univariate time-series. The hypothesis is either agnostic or data-driven given the selected subset of covariates. The multiple testing adjustment includes no adjustment, a conventional Bonferroni adjustment and our novel simultaneity count for a data-driven hypothesis. The rejection rules combine the valid p-values and the multiple testing adjustment. p_OLS is the p-value of a conventional t-statistic of an OLS estimator. p_LASSO is the p-value without removing the LASSO bias or adjusting for post-selection inference, that is, it is simply the OLS p-value using the selected subset of regressors. p_PoSI is the debiased, post-selection adjusted p-value based on Theorem 1.

Our baseline model is Panel PoSI, which uses post-selection inference LASSO and a simultaneity count for a data-driven hypothesis. The first component that we modify is the selection of the sparse model. A simple OLS regression without shrinkage does not produce a sparse model. This gives us the methods Naive OLS and Bonferroni OLS. A conventional LASSO results in a sparse selection, but the p-values are not adjusted for the post-selection inference and the bias adjustment. The corresponding models are the Naive LASSO and the Bonferroni Naive LASSO. The second component is the hypothesis, which is agnostic for all methods besides Panel PoSI. For the comparison models, we either consider no multiple testing adjustment or the conventional Bonferroni adjustment. Under the multiple testing adjustment we obtain the Bonferroni OLS, the Bonferroni Naive LASSO and the Bonferroni PoSI. The outcome of all estimations are adjusted p-values for the covariates, which we use to select our model for a given target threshold. For a given value of γ, we include a covariate if its adjusted p-value is below the critical value summarized in the last column of Table 2; a sketch of these rejection rules follows below.
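The following sketch shows how the rejection rules in the last column of Table 2 operate on a panel of p-values. It is illustrative only: the p-values and active sets are assumed to come from the corresponding first-stage estimator, the cohesion coefficient ρ is taken as given, and the simultaneity count N_i of a covariate is computed here as the total number of p-values localized with it (the sum of |M_n| over the units n that selected it), which is our reading of the definition in the main text.

```python
import numpy as np

def covariate_rejections(p_matrix, active, rho, gamma, rule):
    """Apply the rejection rules summarized in Table 2 (illustrative sketch).

    p_matrix : (N, J) array of valid p-values for the chosen estimator
               (OLS, naive LASSO, or PoSI p-values).
    active   : (N, J) boolean array; active[n, j] is True if covariate j is in
               unit n's selected active set M_n (all True for OLS without LASSO).
    rho      : cohesion coefficient from the main text (taken as given here).
    gamma    : target FWER level.
    rule     : "naive", "bonferroni", or "panel_posi".
    Returns the indices of covariates rejected (i.e., kept) for the panel.
    """
    N, J = p_matrix.shape
    m_sizes = active.sum(axis=1)                      # |M_n| for each unit n
    kept = []
    for j in range(J):
        units = np.where(active[:, j])[0]             # units whose active set contains j
        if units.size == 0:
            continue
        if rule == "panel_posi":
            # simultaneity count of covariate j; assumed here to be the number of
            # p-values localized with j, i.e. the sum of |M_n| over units that selected j
            N_j = m_sizes[units].sum()
            if p_matrix[units, j].min() < rho * gamma / N_j:
                kept.append(j)
        else:
            threshold = gamma if rule == "naive" else gamma / (J * N)
            if p_matrix[units, j].min() < threshold:
                kept.append(j)
    return kept
```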
We simulate a simple and transparent model. Our panel follows the linear model

$Y_{t,n} = \sum_{j=1}^{J} X_{t,j}\,\beta^{(n)}_j + \epsilon_{t,n} \qquad \text{for } t = 1, \dots, T, \; n = 1, \dots, N \text{ and } j = 1, \dots, J.$

The covariates and errors are sampled independently as normally distributed random variables:

$X_{t,j} \overset{iid}{\sim} N(0, 1), \qquad \epsilon_t \overset{iid}{\sim} N(0, \Sigma).$

The noise is either generated as independent noise with covariance matrix $\Sigma = \sigma^2 I$ or as cross-sectionally dependent noise with non-zero off-diagonal elements $\Sigma_{ij} = \kappa$ and diagonal elements $\Sigma_{ii} = \sigma^2$. Note that our theorems for PoSI assume homogeneous noise, while dependent noise violates our assumptions. Hence, the dependent noise allows us to test how robust our method is to misspecification. We set $\sigma^2 = 2$ and $\kappa = 1$, but the results are robust to these choices.

Figure 5: Design of loadings β
This figure demonstrates the setting of our simulations with 10 factors, where loadings are shaded based on whether they are active. In this staircase setting, the first factor affects all units, the 2nd factor affects 90%, and so on, and lastly the 10th factor affects 10% of all units.

We construct the active set based on the staircase structure depicted in Figure 5. Of the J covariates in X, we have K = 10 active independent factors. The first factor affects all units, the 2nd factor affects 90%, and so on, and lastly the 10th factor affects 10% of all units. This setting is relevant, and also challenging from a multiple testing perspective. It results in a large cohesion coefficient ρ, which makes the correct FWER control even more important. The loadings are sampled from a uniform distribution if they are in the active set:

$\beta^{(n)}_j \overset{iid}{\sim} \text{Unif}\!\left[-\tfrac{1}{2}, \tfrac{1}{2}\right] \text{ for } j \text{ in the active set}, \qquad \beta^{(n)}_j = 0 \text{ for } j \text{ outside the active set}.$

We simulate a panel of dimension N = 120, J = 100 and T = 300 with K = 10 active factors. The first half of the time-series observations is used for the in-sample estimation and selection, while the second half serves for the out-of-sample analysis. All results are averages of 100 simulations. We use the covariates selected on the in-sample data for regressions out-of-sample. Our focus is on the inferential theory, and not on the bias correction for shrinkage. Hence, we first use the inferential theory on the in-sample data to select our set of covariates. Second, we use the selected subset of covariates in an OLS regression on the in-sample data to obtain the loadings. Last but not least, we apply the estimated loadings of the selected subset to the out-of-sample data to obtain the model fit. Note that this procedure helps a Naive LASSO, which in contrast to PoSI LASSO does not have a bias correction. The out-of-sample explained variation is measured by R², which is the sum of explained variation normalized by the total variation. The rejection FWER is set to γ = 5% or γ = 1%. The LASSO shrinkage penalty λ is selected by 5-fold cross-validation on the in-sample data. A sketch of this data-generating process follows below.
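A minimal sketch of this data-generating process is given below. The random seed, the choice of which units are active for each factor, and the use of NumPy are illustrative assumptions; setting κ = 0 recovers the independent-noise design.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, T, K = 120, 100, 300, 10          # panel dimensions and number of active factors
sigma2, kappa = 2.0, 1.0

# staircase active sets: factor 1 loads on all units, factor 2 on 90%, ..., factor 10 on 10%
beta = np.zeros((J, N))
for k in range(K):
    n_active = int(N * (1 - k / 10))    # 100%, 90%, ..., 10% of the units
    beta[k, :n_active] = rng.uniform(-0.5, 0.5, size=n_active)

# covariates and (possibly cross-sectionally dependent) noise
X = rng.standard_normal((T, J))
Sigma = np.full((N, N), kappa)          # off-diagonal elements kappa ...
np.fill_diagonal(Sigma, sigma2)         # ... diagonal elements sigma^2
eps = rng.multivariate_normal(np.zeros(N), Sigma, size=T)

Y = X @ beta + eps                      # (T, N) panel
Y_in, Y_out = Y[: T // 2], Y[T // 2 :]  # in-sample selection, out-of-sample evaluation
X_in, X_out = X[: T // 2], X[T // 2 :]
```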
Table 3: Simulation Comparison between Selection Methods

Independent noise
Method                    # Selections   # False Selections   # Correct Selections   OOS R²
FWER γ = 5%
Panel PoSI                      10.8              2.8                  7.9             10.0%
Bonferroni PoSI                  4.7              0.0                  4.7              8.0%
Bonferroni Naive LASSO           0.0              0.0                  0.0              0.0%
Naive LASSO                      0.2              0.0                  0.2              0.4%
Bonferroni OLS                   1.0              0.0                  1.0              1.7%
Naive OLS                       99.2             89.2                 10.0           -144.2%
FWER γ = 1%
Panel PoSI                       8.6              1.1                  7.5             10.6%
Bonferroni PoSI                  2.7              0.0                  2.7              5.2%
Bonferroni Naive LASSO           0.0              0.0                  0.0              0.0%
Naive LASSO                      0.1              0.0                  0.1              0.3%
Bonferroni OLS                   0.2              0.0                  0.2              0.5%
Naive OLS                       46.4             36.5                  9.9            -19.3%

Cross-sectionally dependent noise
Method                    # Selections   # False Selections   # Correct Selections   OOS R²
FWER γ = 5%
Panel PoSI                      10.1              2.2                  7.9              8.0%
Bonferroni PoSI                  4.4              0.0                  4.4              7.2%
Bonferroni Naive LASSO           0.0              0.0                  0.0              0.0%
Naive LASSO                      0.4              0.0                  0.4              0.5%
Bonferroni OLS                   0.9              0.0                  0.9              1.3%
Naive OLS                       83.7             73.7                 10.0            -83.8%
FWER γ = 1%
Panel PoSI                       7.9              0.6                  7.3             10.3%
Bonferroni PoSI                  2.4              0.0                  2.4              3.9%
Bonferroni Naive LASSO           0.0              0.0                  0.0              0.0%
Naive LASSO                      0.0              0.0                  0.0              0.0%
Bonferroni OLS                   0.3              0.0                  0.3              0.4%
Naive OLS                       31.0             21.2                  9.8             -6.8%

This table compares the selection results for different methods in a simulation. For each method we report the number of selected covariates, the number of falsely selected covariates and the number of correctly selected covariates. We also report the out-of-sample R² of the models that are estimated with the selected covariates and evaluated on the out-of-sample data. All results are averages of 100 simulations. The rejection FWER is set to γ = 5% or γ = 1%. We simulate a panel of dimension N = 120, J = 100, T = 300. The first half of the time-series observations is used for the in-sample estimation and selection, while the second half serves for the out-of-sample analysis. The panel is generated by 10 independent factors. The active set of the factors follows the staircase structure of Figure 5. The first factor affects all units, the second 90%, and lastly the 10th factor affects 10%. The unknown error variance is estimated as a homogeneous sample variance. The noise is either generated as independent noise with covariance matrix Σ = σ²I or as cross-sectionally dependent noise with Σ_ij = κ and Σ_ii = σ² for σ² = 2 and κ = 1.

Table 3 compares the selection results for the different methods. For each method we report the number of selected covariates, the number of falsely selected covariates and the number of correctly selected covariates. We also report the out-of-sample R². The upper panel shows the results for independent noise, while the lower panel collects the results for cross-sectionally dependent noise. Panel PoSI clearly dominates all models.
It provides the best trade-off between correct and false selection, which results in the best out-of-sample performance. In the case of γ = 5% and independent noise, Panel PoSI selects 10.8 factors in a model generated by 10 factors. Of these factors, 7.9 are correct. A simple Bonferroni correction is overly conservative. The Bonferroni PoSI selects only 4.7 correct factors. While this overly conservative selection protects against false discovery, it omits over half of the relevant factors, which lowers the out-of-sample performance. Using post-selection inference is important, as a naive LASSO provides wrong p-values, which makes the overly conservative selection even worse. The other extreme is to have neither shrinkage nor a multiple testing adjustment. As expected, the naive OLS has an extreme number of false selections with a correspondingly terrible out-of-sample performance.

As expected, tightening the FWER control to 1% lowers the number of false rejections, but also the number of correct selections. It reveals again that Panel PoSI provides the best inferential theory among the benchmark models. Panel PoSI selects 7.5 correct covariates, while it controls the false rejections at 1.1. The overly conservative Bonferroni methods select even fewer correct covariates, which further deteriorates the out-of-sample performance. The gap in OOS R² between Panel PoSI and Bonferroni PoSI widens to 5.4%. All the other approaches cannot be used for a meaningful selection.

Panel PoSI performs well even when some of the underlying assumptions are not satisfied. The lower panel of Table 3 shows the results for dependent noise. As the dependence in the noise is relatively strong, it can be interpreted as omitting a relevant factor from the set of candidate covariates X. Even though the PoSI theory is developed for homogeneous noise, Panel PoSI continues to perform very well. In contrast, the comparison methods perform even worse, and the Bonferroni approaches select even fewer correct covariates.

7 Empirical Analysis

7.1 Data and Problem

Our empirical analysis studies a fundamental problem in asset pricing. We select a parsimonious factor model from a large set of candidate factors that can jointly explain the asset prices of a large cross-section of investment strategies. Our data is standard and obtained from the data libraries of Kenneth French and Hou, Xue, and Zhang (2018).

We consider monthly excess returns from January 1967 to December 2021, which results in a time dimension of T = 660. Our test assets are the N = 243 double-sorted portfolios of Kenneth French's data library summarized in Table A.1 in the Appendix. The candidate factors are J = 114 univariate long-short factors based on the data of Hou, Xue, and Zhang (2018). We include all univariate portfolio sorts from their data library that are available for our time period, and construct top-minus-bottom decile factor portfolios. In addition, we include the five Fama-French factors of Fama and French (2015) from Kenneth French's data library.

Our analysis projects out the excess return of the market factor. We are interested in the question which factors explain the component that is orthogonal to market movements. Hence, we regress out the market factor from the test assets and use the residuals as test assets. We also do not include a market factor in the set of long-short candidate factors. The original test assets have a market component as they are long-only portfolios.
Our results are essentially the same when we include the market component in the test assets, with the only difference that we would need to include the market factor as an additional factor in our parsimonious models. The market factor would always be selected by all models as significant, but this by itself is neither a novel nor an interesting result.

We present in-sample and out-of-sample results. The in-sample analysis uses the first 330 observations (January 1967 to June 1994), while the out-of-sample results are based on the second 330 observations (July 1994 to December 2021). As in the simulation, we first use the inferential theory on the in-sample data to select our set of covariates. Second, we use the selected subset of covariates in an OLS regression on the in-sample data to obtain the loadings. Last but not least, we use the estimated loadings on the selected subset of factors for the out-of-sample model. The LASSO penalty λ is selected via 5-fold cross-validation on the in-sample data to minimize the squared errors (see footnote 3). Hence, LASSO represents a first-stage dimension-reduction tool, and we need the inferential theory to select our final sparse model.

We allow our selection to impose a prior on two of the most widely used asset pricing models. More specifically, we estimate models without a prior, and with two specific priors that impose an infinite weight on the Fama-French 3 factors (FF3) and the Fama-French 5 factors (FF5). This prior as part of PoSI LASSO enforces that the FF3 and FF5 factors are included in the active set. Note that because we work with data orthogonal to the market return, we do not include the market factor in the prior, but only the size and value factors for FF3 and, in addition, the investment and profitability factors for FF5. We denote these weights by ω_FF3 and ω_FF5. This is an example where the researcher has economic knowledge that she wants to include in her statistical selection method.

We evaluate the models with standard metrics. The root-mean-squared error (RMSE) is based on the squared residuals relative to the estimated factor models. Hence, in-sample the models are estimated to minimize the RMSE. The pricing error is the economic quantity of interest. It is the time-series mean of the residual component of the factor model, and corresponds to the mean return that is not explained by the risk premia and exposure to the factors. In summary, we obtain the residuals as $\hat{\epsilon}_{t,n} = Y_{t,n} - X_{t,S}\hat{\beta}^{(n)}_S$ for the selected factors S, where the loadings are estimated on the in-sample data. The metrics are the RMSE and the mean absolute pricing error (MAPE):

$\text{RMSE} = \sqrt{\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\hat{\epsilon}_{t,i}^{\,2}}, \qquad \text{MAPE} = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{1}{T}\sum_{t=1}^{T}\hat{\epsilon}_{t,i}\right|.$

Footnote 3: We select λ from the grid $\exp(a)\cdot\log J/\sqrt{T}$ with $a = -8, \dots, 8$. This grid choice satisfies the assumptions in Chatterjee (2014) and hence Assumption A.4.

In addition to Panel PoSI without and with the FF3 and FF5 priors, we consider the benchmark methods of Table 2. We compare Panel PoSI (P-PoSI), Panel PoSI with infinite priors on FF3 and FF5 (P-PoSI ω_FF3 respectively ω_FF5), Bonferroni Naive LASSO (B-LASSO), Naive LASSO (N-LASSO), Bonferroni OLS (B-OLS) and Naive OLS (N-OLS). Our main analysis sets the FWER control to the usual γ = 5%. A sketch of the out-of-sample evaluation follows below.
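A minimal sketch of this evaluation is given below, assuming the market factor has already been regressed out of the test assets. The function name, array shapes, and the use of a plain least-squares fit on the selected factors are illustrative assumptions.

```python
import numpy as np

def evaluate_factor_model(R_in, R_out, F_in, F_out, selected):
    """Out-of-sample RMSE and MAPE for a selected factor model (illustrative sketch).

    R_in, R_out : (T_in, N) and (T_out, N) test-asset excess returns,
                  already orthogonalized to the market factor.
    F_in, F_out : (T_in, J) and (T_out, J) candidate factor returns.
    selected    : indices of the factors chosen by the selection step.
    """
    X_in, X_out = F_in[:, selected], F_out[:, selected]
    # loadings from an in-sample OLS regression on the selected factors, unit by unit
    beta, *_ = np.linalg.lstsq(X_in, R_in, rcond=None)        # (|S|, N)
    resid = R_out - X_out @ beta                              # out-of-sample residuals
    rmse = np.sqrt(np.mean(resid ** 2))
    mape = np.mean(np.abs(resid.mean(axis=0)))                # mean absolute pricing error
    return rmse, mape
```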
7.2 Asset Pricing Results

Panel PoSI selects parsimonious factor models with the best out-of-sample performance among the benchmarks. For the FWER rate of γ = 5%, the number of factors differs substantially among the different methods. Panel PoSI selects 3 factors. Imposing infinite priors on FF3 or FF5 results in 4 and 5 factors for P-PoSI ω_FF3 and ω_FF5, respectively. In contrast, the alternative approaches select too many factors. Bonferroni Naive LASSO includes 10, Naive LASSO 70, Bonferroni OLS 107 and Naive OLS 114. These over-parametrized models lead to overfitting of the in-sample data.

Figure 6 shows the in-sample and out-of-sample RMSE for each set of double-sorts. The composition of the double sorts is summarized in Table A.1 in the Appendix. The in-sample performance in the left subfigure has the expected result that more factors mechanically decrease the RMSE. The important findings are in the right subfigure with the out-of-sample RMSE. The uniformly best performing model is Panel PoSI without any priors. In fact, imposing a prior on the Fama-French factors increases the out-of-sample RMSE. The conventional LASSO and OLS estimates have substantially higher RMSE, which can be more than twice as large.

The Panel PoSI models also explain the average returns the best. In Figure 7, we compare the mean absolute pricing errors among the benchmarks for each set of double sorts. Importantly, the pricing errors are not used in the objective function of the estimation, and hence the fact that the models with the smallest RMSE explain expected returns is an economic finding supporting arbitrage pricing theory. Our Panel PoSI has the smallest out-of-sample pricing errors, which can be up to six times smaller compared to the OLS estimates. Including the Fama-French factors as a prior does not improve the models, except for the profitability and investment double sort, which uses the same information as two of the Fama-French factors.

The Panel PoSI models select economically meaningful factors. Table 4 reports the ranking of factors based on their FWER bound without a prior and with infinite prior weights on the Fama-French 3 and 5 factors. The rows are ordered based on ascending ρ⁻¹N_j p_j, which corresponds to the FWER bound. It allows us to infer the number of factors for different levels of FWER control values. Setting γ = 5% leads to 3, 4 and 5 factors, respectively, while γ = 1% results in 2, 4 and 5 factors, respectively.

In addition to their significance, we can infer the relative importance of factors. The baseline PoSI with γ = 5% selects a size, a dollar trading volume and a value factor. The size and value factors are among the most widely used asset pricing factors. Their selection is in line with their economic importance and confirms the Fama-French 3 factor model. The dollar trading volume factor is less conventional, but is correlated with many assets in our cross-sections.

Figure 6: RMSE across cross-sections
(a) In-sample  (b) Out-of-sample
This figure shows the in-sample and out-of-sample root-mean-squared errors (RMSE) for each cross-section of test assets for different factor models. The test assets are the N = 243 double-sorted portfolios, and we show the RMSE for each set of double-sorts. The rejection FWER is set to γ = 5%. The candidate factors are the 114 univariate factor portfolios. The time dimension is T = 660. We use the first half for the in-sample estimation and selection, while the second half serves for the out-of-sample analysis. We compare Panel PoSI (P-PoSI), Panel PoSI with infinite priors on FF3 and FF5 (P-PoSI ω_FF3 respectively ω_FF5), Bonferroni LASSO (B-LASSO), Naive LASSO (N-LASSO), Bonferroni OLS (B-OLS) and Naive OLS (N-OLS).
The size factor is the most important as measured by the FWER bound, that is, the product of the number of relevant assets and its minimum p-value is the smallest. The short-term reversal factor is less important and would require a FWER control of 10% to be included.

Imposing a prior affects the p-values of PoSI and the simultaneity count. For example, the cohesiveness coefficient increases from ρ = 0.16 for no priors to ρ = 0.18 in the case of the two priors. Hence, the FWER bounds of all factors can change when we impose a prior. The FF3 prior increases the significance of the short-term reversal factor, which is widely used in asset pricing. Interestingly, even for a FF5 prior, the profitability and investment factors remain insignificant.

Figure 7: MAPE across cross-sections
(a) In-sample  (b) Out-of-sample
This figure shows the mean absolute pricing errors (MAPE) for each cross-section of test assets for different factor models. The test assets are the N = 243 double-sorted portfolios, and we show the average |α| for each set of double sorts. The rejection FWER is set to γ = 5%. The candidate factors are the 114 univariate factor portfolios. The time dimension is T = 660. We use the first half for the in-sample estimation and selection, while the second half serves for the out-of-sample analysis. We compare Panel PoSI (P-PoSI), Panel PoSI with infinite priors on FF3 and FF5 (P-PoSI ω_FF3 respectively ω_FF5), Bonferroni LASSO (B-LASSO), Naive LASSO (N-LASSO), Bonferroni OLS (B-OLS) and Naive OLS (N-OLS).

7.3 Number of Factors

Our method contributes to the discussion about the number of asset pricing factors. Many popular asset pricing models suggest between three and six factors. Our approach allows a disciplined estimate of the number of factors based on inferential theory. The level of sparsity of a linear model also depends on the rotation of the covariates.
Therefore, we also study the principal components (PCs) of the covariates X as candidate factors. In this case, we use the step-down procedure, which we refer to as "Ordered PoSI" or O-POSI for short.

Figure 8 shows the number of factors for different FWER rates γ. The factor count is obtained by traversing K∗(γ) for γ equal to 0.01, 0.02, 0.05 and 0.1. Panel PoSI without priors selects 2 factors for γ = 0.01 and 3 for γ = 0.05. Once we impose an infinite weight on the Fama-French 3 factors, we select 4 factors for all FWER levels, while the prior on the Fama-French 5 factors results in a 5 factor model for all FWER levels. The Ordered PoSI with PCA-rotated factors selects 3 factors for all FWER levels. In summary, our results confirm that, depending on the desired significance, the number of asset pricing factors for a good model seems to be between 2 and 4. Note that our analysis is orthogonal to the market factor, which would also be added to the final model. Thus, the final model would have between 3 and 5 factors.

Table 4: Selected factors with Panel PoSI

No prior
Factor                            N_j       p_j         ρ⁻¹N_j p_j   Order
Size (SMB)                        1824      <0.00001    <0.0001      1
Dollar Trading Volume (dtv 12)    2099      <0.00001    <0.0001      2
Value (HML)                       1191      <0.00001    0.0280       3
Short-Term Reversal (srev)        1050      0.00001     0.0974       4
Forecast Revisions (rev 1)        242       0.00018     0.2782       5
Investment (CMA)                  998       0.00112     >0.9999      6
Profitability (RMW)               797       0.00123     >0.9999      7

FF3 prior (ω_FF3)
Factor                            N_j       p_j         ρ⁻¹N_j p_j   Order
Size (SMB)                        2802      <0.00001    <0.0001      1
Value (HML)                       2802      <0.00001    <0.0001      2
Dollar Trading Volume (dtv 12)    779       <0.00001    0.0017       3
Short-Term Reversal (srev)        1106      <0.00001    0.0049       4
Profitability (RMW)               819       0.00006     0.2527       5
Investment (CMA)                  874       0.00087     >0.9999      6

FF5 prior (ω_FF5)
Factor                            N_j       p_j         ρ⁻¹N_j p_j   Order
Size (SMB)                        2911      <0.00001    <0.0001      1
Value (HML)                       2911      <0.00001    <0.0001      2
Forecast Revisions (rev 1)        230       <0.00001    0.0005       3
Short-Term Reversal (srev)        1140      <0.00001    0.0052       4
Dollar Trading Volume (dtv 12)    661       <0.00001    0.0072       5
Profitability (RMW)               2911      0.00001     0.1937       6
Investment (CMA)                  2911      0.00001     0.1996       7
Gross profits-to-assets (gpa)     1151      0.00013     0.8382       8

This table reports the ranking of factors based on their FWER bound for no prior, and for infinite weight priors on the Fama-French 3 and 5 factors. The test assets are the N = 243 double-sorted portfolios and the candidate factors are J = 114 univariate long-short factors. The rows are ordered based on ascending ρ⁻¹N_j p_j, which corresponds to the FWER bound.
Table 5 further confirms our findings about the number of asset pricing factors. We compare the number of factors for γ = 5% selected either from the univariate high-minus-low factors (HL), their PCA rotation, or the combination of the high-minus-low factors and their PCs. Panel PoSI consistently selects 3 factors from the long-short factors and from their PCs. When the two are combined, PoSI selects 4 factors, which is plausible as the optimal sparse model can be different for this larger set of candidate factors. The Bonferroni PoSI is overly conservative and selects only 2 HL factors. The models based on Naive LASSO or OLS select excessively many factors independent of the rotation. Overall, the findings support that parsimonious asset pricing models can be described by three to four factors. Of course, any discussion about the number of asset pricing factors is always subject to the choice of test assets and candidate factors.

Figure 8: Number of selected factors for different FWER
(a) Univariate factors with priors (P-POSI)  (b) PCA rotated factors (O-POSI)
This figure shows the number of selected factors to explain the test assets of double-sorted portfolios for different FWER rates γ. The factor count is obtained by traversing K∗(γ) for γ ranging from 0.01 to 0.1. The left subfigure uses univariate high-minus-low factors as candidate factors. We consider the case of no prior, and the cases of an infinite weight on the Fama-French 3 factor model (ω_FF3) and an infinite weight on the Fama-French 5 factor model (ω_FF5). The right subfigure uses the PCA rotation as candidate factors with the step-down procedure Ordered PoSI (O-POSI).

Table 5: Number of selected factors for different methods

Method                     HL     PCs    HL + PCs
Panel PoSI                   3      3        4
Bonferroni PoSI              2      3        2
Bonferroni Naive LASSO      10     29       10
Naive LASSO                 70     50       76
Bonferroni OLS             107     13      117
Naive OLS                  114     50      164

This table shows the number of selected factors to explain the test assets of double-sorted portfolios for different methods and different sets of candidate factors. The rejection FWER is set to γ = 5%. The factor count is obtained by traversing K∗(γ) for γ. The number of factors is selected on the in-sample data. For the PCs, we use the step-down method for the nested hypothesis.

8 Conclusion

This paper proposes a new method for covariate selection in large dimensional panels. We develop the conditional inferential theory for large dimensional panel data with many covariates by combining post-selection inference with a new multiple testing method specifically designed for panel data. Our novel data-driven hypotheses are conditional on sparse covariate selections and valid for any regularized estimator. Based on our panel localization procedure, we control for family-wise error rates for the covariate discovery and can test unordered and nested families of hypotheses for large cross-sections. We provide a method that allows us to traverse the inferential results and determine the least number of covariates that have to be included given a user-specified FWER level.
As an easy-to-use and practically relevant procedure, we propose Panel-PoSI, which combines the data-driven adjustment for panel multiple testing with valid post-selection p-values of a generalized LASSO that allows to incorporate weights for priors. In an empirical study, we select a small number of asset pricing factors that explain a large cross-section of investment strategies. Our method dominates the benchmarks out-of-sample due to its better control of false rejections and detections.

A Post-selection Inference with Weighted-LASSO

A.1 Weighted-LASSO: Linear Truncation Results

This appendix collects the assumptions and formal statements underlying Theorem 1. We present the results for the Weighted-LASSO, which includes the conventional LASSO as a special case. In order to ensure uniqueness of the LASSO solution, we impose the following condition, which is standard in the LASSO literature:

Definition A.1. General position
The matrix $X \in \mathbb{R}^{T\times J}$ has columns in general position if the affine span of any $J_0 + 1$ points $(\sigma_1 X_{i_1}, \dots, \sigma_{J_0+1} X_{i_{J_0+1}})$ in $\mathbb{R}^T$ for arbitrary $\sigma_1, \dots, \sigma_{J_0+1} \in \{\pm 1\}$ does not contain any element of $\{\pm X_i : i \notin \{i_1, \dots, i_{J_0+1}\}\}$, where $J_0 < J$ (see footnote 4) and $X_i$ denotes the ith column of X.

This general position notion helps us to avoid ambiguity in the LASSO solution. Note that this condition is a much weaker requirement than full rank of X, and states that if one constructs a $J_0$-dimensional subspace, it must contain at most $J_0 + 1$ entries of $\{\pm X_1, \dots, \pm X_J\}$. Even though this appears to be a complicated and mechanical condition, by a union argument it turns out that, with probability 1, if the entries of $X \in \mathbb{R}^{T\times J}$ are drawn from a continuous probability distribution on $\mathbb{R}^{T\times J}$, then X is in general position (see footnote 5). Then, we are able to discuss the LASSO solution for a general design with relative ease, thanks to Lemma 3 of Tibshirani (2013), which shows that if the columns of X lie in general position, the LASSO solution is unique regardless of the penalty scalar λ. This condition will later be used in establishing our Lemma A.2.

We can now state the formal assumptions:

Assumption A.1. Unique low dimensional model
(a) Low dimensional truth: The data satisfies $Y = X_S\beta_S + \epsilon$ where $|S| = O(1)$;
(b) General position design: The covariates X have columns in general position as given by Definition A.1.

Footnote 4: The original condition needs to hold for $J_0 < \min\{T, J\}$, but in the scope of our study we consider $T > J$.
Footnote 5: See Donoho (2006) and §2.2 of Tibshirani (2013) for more discussion of uniqueness and general position.

We start our analysis with the simpler model of known error variance, and later extend it to the case of estimated unknown variance.

Assumption A.2. Gaussian residual with known variance
The residuals are distributed as $\epsilon \sim N(0, \Sigma)$ where Σ is known.

Before formalizing the inferential theory, we need to clarify the quantity for which we want to make inference statements. As stated before, we only test the hypothesis on a covariate if its LASSO estimate turns out to be active. This is exactly how researchers in practice conduct explorations in high-dimensional datasets. In other words, we focus on $\hat{\beta}_M$ and quantities associated with it, where M denotes the active set of selected covariates.
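For computation, the Weighted-LASSO fit $\hat{\beta}$ and its active set M that the following results condition on can be obtained from standard LASSO software by rescaling the columns of X. The sketch below illustrates this reparametrization; the function name, the scikit-learn dependency, and the exact scaling convention of the penalty are implementation assumptions rather than part of the theory, and an infinite prior weight is approximated by a very large finite one.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(Y, X, lam, omega):
    """Weighted-LASSO via column rescaling (a sketch, not the paper's code).

    Solves  min_beta  1/(2T) ||Y - X beta||^2 + lam * sum_j |beta_j| / omega[j]
    by substituting beta_j = omega[j] * beta_tilde_j, which turns the problem into
    a standard LASSO in beta_tilde with design matrix X scaled column-wise by omega.
    A very large omega[j] approximates an infinite prior weight (covariate j unpenalized).
    """
    X_tilde = X * omega                     # scale column j by omega[j]
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X_tilde, Y)
    beta_hat = fit.coef_ * omega            # map back to the original parametrization
    active = np.flatnonzero(beta_hat)       # selected active set M for this regression
    return beta_hat, active
```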
We study the inferential theory of the "debiased estimator", which is a shifted version of the LASSO fit as defined below. We show that this debiased estimator is unbiased, consistent and follows a truncated Gaussian distribution. It has profound connections to the debiased LASSO literature, such as Javanmard and Montanari (2018), but has different properties due to a subtly different descent direction. More concretely, given M, clearly $\hat{Y} = X_M\hat{\beta}_M$ is the fitted value since $\hat{\beta}_{-M} = 0$, where $-M$ is the complement of the set M. We let $\hat{\epsilon}_M := Y - X_M\hat{\beta}_M$ be the residual from the LASSO estimator. By considering only the partial LASSO loss $\ell(Y, X_M, \lambda, \beta)$ and given that we are currently at the LASSO estimator $\hat{\beta}$, the Newton step is $X^+_M\hat{\epsilon}_M$ following (Boyd and Vandenberghe, 2004, §9.5.2), where we denote by $X^+_M = (X^\top_M X_M)^{-1}X^\top_M$ the pseudo-inverse of the active submatrix of X. The invertibility of $X^\top_M X_M$ either is observed when we are in the fixed design regime or holds almost surely when we are dealing with continuous quantities, as a consequence of Assumption A.1(b), as argued in Tibshirani (2013) and Lee, Sun, Sun, and Taylor (2016). Now we can formally define the main object of our inferential theory:

Definition A.2. Debiased Estimator
The debiased Weighted-LASSO estimator $\bar{\beta}_M$ given M is given by

$\bar{\beta}_M = \hat{\beta}_M + X^+_M\hat{\epsilon}_M$   (21)

It is now evident why some of the literature refers to the debiased estimator also as the one-step estimator: given that $\hat{\beta}_M$ solves the Karush-Kuhn-Tucker (KKT) condition and reaches the optimal sub-gradient for the full loss $\ell(Y, X, \lambda, \beta)$, our debiased estimator $\bar{\beta}_M$ is the result of moving one more Newton-Raphson step after $\hat{\beta}_M$, but taking only $X_M$ rather than X as a whole into the likelihood loss function. Hence, the update step is only a partial update from the LASSO solution point. Intuitively, $\bar{\beta}_M$ should still be close to solving the KKT conditions, and would exactly solve the KKT conditions if $X_M$ happened to be the true covariates (i.e. $X_M = X_S$). If we were to take a Newton step with gradient and Hessian calculated with the entirety of the data X, or equivalently take a full update from the stationary point, we would recover the $\hat{\beta}^d_M$ proposed in Javanmard and Montanari (2018). The material difference is that the full update would require the $J \times J$ precision matrix $\Omega = \Gamma^{-1}$, where $\Gamma = X^\top X$ if X is assumed fixed or $\Gamma = E[X^\top X]$ if X is assumed to be generated from a stationary process. Using $\ell(Y, X_M, \lambda, \beta)$ instead of $\ell(Y, X, \lambda, \beta)$, our debiased estimator does not need the full Hessian, which leverages LASSO's screening property and uses $(X^\top_M X_M)^{-1}X^\top_M$ (i.e. $X^+_M$) as a much lower-dimensional alternative to $\Omega X^\top$.

Without loss of generality, we assume that the covariates indexed $i \leq |M|$ are part of M, since we can always rearrange the columns of X to have the first |M| covariates as active. Let $\eta = (X^+_M)^\top e_i \in \mathbb{R}^T$, where $e_i \in \mathbb{R}^{|M|}$ is the vector with 1 at the ith coordinate and 0 otherwise. Hence, the η vector is the linear mapping from Y to the ith coordinate of an OLS estimator. In particular, the debiased estimator and the response satisfy the following relationship:

Lemma A.1. Debiased Estimator is OLS-post-LASSO
The debiased estimator is a linear mapping of Y. Specifically, given $\eta = (X^+_M)^\top e_i$:

$\bar{\beta}_i = \eta^\top Y$   (22)

Moreover, $\bar{\beta}_M$ is the OLS estimate from regressing Y on $X_M$:

$\bar{\beta}_M = \arg\min_{\beta} \frac{1}{2T}\|Y - X_M\beta\|^2_2.$   (23)

The proof of Lemma A.1 is deferred to the Online Appendix.
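As a quick numerical illustration of Definition A.2 and Lemma A.1, the following sketch computes the debiased estimator from a LASSO fit and checks that it coincides with the OLS coefficients of Y regressed on the active columns. The simulated numbers and the scikit-learn LASSO call are arbitrary choices for the illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
T, J = 200, 30
X = rng.standard_normal((T, J))
Y = X[:, :3] @ np.array([1.0, -0.5, 0.75]) + rng.standard_normal(T)

lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X, Y)
M = np.flatnonzero(lasso.coef_)                      # active set selected by the LASSO
X_M = X[:, M]
eps_hat = Y - X_M @ lasso.coef_[M]                   # LASSO residuals (inactive coefficients are zero)
beta_bar = lasso.coef_[M] + np.linalg.pinv(X_M) @ eps_hat   # debiased estimator, Definition A.2

beta_ols = np.linalg.lstsq(X_M, Y, rcond=None)[0]    # OLS of Y on the active columns
print(np.allclose(beta_bar, beta_ols))               # Lemma A.1: the two coincide
```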
Although its proof is simple, this +lemma reveals that our debiased estimator is the same as the least-square after LASSO estimator +proposed in Belloni and Chernozhukov (2013). +Our strategy to obtain a rigorous statistical inferential theory with p-values is as follows. First +we perform an algebraic manipulation to transform ˆβM into ¯βM in the linear form of (22). Then, we +follow the strategy in Lee, Sun, Sun, and Taylor (2016) to traverse the KKT subgradient optimal +equations for general X by writing it equivalently into a truncation in the form of {AY ≤ b}, as +we will do in Lemma A.2. Finally we will circle back to ˆβM by the linear mapping between ¯βM +and Y and the distributional results induced by the fact that Y is truncated by {AY ≤ b}. +For our Weighted-LASSO, the KKT sub-gradient equations are +X⊤(X ˆβ − Y ) + λ +� +s +v +� +⊙ ω−1 = 0 +where +� +� +� +si = sign(ˆβi) +if ˆβi ̸= 0, ωi < ∞ +vi ∈ [−1, 1] +if ˆβi = 0, ωi < ∞ +(24) +In other words, when ω is specified, the KKT conditions can be identified using the tuple of +{M, s}, where M is the active covariates set and s is the signs of LASSO fit. This is a consequence +of how LASSO KKT condition can separate the slacks into s for active variables and v for inactive +variables. If we have infinite importance weights (J ̸= ∅), we would simply need si < ∞ for i ∈ J +because λsi/ωi = 0 is guaranteed. We rigorously characterize the KKT sub-gradient conditions as a +combinations of signs and infinity norm bounds conditions by the following lemma, which parallels +Lemma 4.1 of Lee, Sun, Sun, and Taylor (2016): +Lemma A.2. Selection in norm equivalency +33 + +Consider the following random variables +w(M, s, ω) = (X⊤ +MXM)−1(X⊤ +MY − λs ⊙ ω−1 +M ) +u(M, s, ω) = ω−M ⊙ +� +X⊤ +−M(X+ +M)⊤s ⊙ ω−1 +M + 1 +λX⊤ +−M(I − PM)Y +� +(25) +where PM = XMX+ +M ∈ RT×|M| is the projection matrix. The Weighted-LASSO selection can be +written equivalently as +{M, s} = {sign(w(M, s, ω)) = s, ∥u(M, s, ω)∥∞ < 1} +(26) +Using this characterization, we are then able to provide the distributional results for the debiased +estimators. Consider ξ = Ση(η⊤Ση)−1 ∈ RT as a covariance-scaled version of our η, and a mapping +of Y using residual projection matrix: z = (I − ξη⊤)Y . Note that z can be calculated once we +observe (X, Y ), so it can be conditioned on were we to do so. We will soon see that the truncation +set will depend on the variable z, but this does not cause any issues thanks to the following lemma, +the proof of which is deferred to the Online Appendix: +Lemma A.3. Ancillarity in truncation +The projected z and the debiased estimator ¯βi are independently distributed. +As a result of Lemma A.3, when describing the distribution of ¯βi, we can use z in its truncation +conditions as long as we condition on z as well. To simplify notation, we can collect all quantities +we need to condition on into +˜ +M := ((M, s), z, ω, X). Now we can assemble the consequences of +Lemmas A.1, A.2, A.3 to arrive at the truncated Gaussian statements for the debiased estimator +similar to Lee, Sun, Sun, and Taylor (2016), but for weighted-LASSO: +Theorem A.1. Truncated Gaussian +Under Assumptions A.1 and A.2 for i ∈ M, ¯βi is conditionally distributed as: +¯βi| ˜ +M ∼ T N(βi, η⊤Ση; [V −(z), V +(z)]) +(27) +where T N is a truncated Gaussian with mean βi, variance η⊤Ση and truncation set [V −(z), V +(z)]. +βi denotes the ith entry of the true β. 
The vector of signs is s = sign(ˆβM) ∈ R|M| and the truncation +set depends on +A = +� +�� +λ−1X⊤ +−M(I − PM) +−λ−1X⊤ +−M(I − PM) +−diag(s)X+ +M +� +�� ∈ R(2J−|M|)×T , +b = +� +�� +ω−1 +−M − X⊤ +−M(X+ +M)⊤s ⊙ ω−1 +M +ω−1 +−M + X⊤ +−M(X+ +M)⊤s ⊙ ω−1 +M +−λ · diag(s)(X⊤ +MXM)−1s ⊙ ω−1 +M +� +�� ∈ R2J−|M| +V −(z) = +max +j:(Aξ)j<0 +bj − (Az)j +(Aξ)j +, +V +(z) = +min +j:(Aξ)j>0 +bj − (Az)j +(Aξ)j +. +Notice that Theorem A.1 is decoupled across M, which is to say we are able to deal with +1-dimensional statistics. We arrive at this form because the construction of (V −, V +) over the +34 + +extreme points of the linear inequality system (or vertices of the polyhedral) has decomposed the +dimensionality of the truncation. This decoupling is of significant practical value, in that it would +be otherwise a non-trivial task to calculate a statistic of multivariate (in our case |M|-dimensional) +truncated Gaussian and then marginalize over |M| − 1 dimensions. +A.2 +Weighted-LASSO Quasi-Linear Truncation with Estimated Variance +This section generalizes the distribution results to the practically relevant case when the noise +variance is unknown and has to be estimated. This becomes a challenging problem for post-selection +inference. We replace Assumption A.2 by the following assumption: +Assumption A.3. Gaussian residual with simple unknown variance +The residuals are distributed as ϵi +iid +∼ N(0, σ2) where σ2 is unknown. +The simple structure of unknown variance of Assumption A.2 is common in the post-selection +inference literature as for example in Lee, Sun, Sun, and Taylor (2016) and Tian, Loftus, and Taylor +(2018). A feasible conditional distribution replaces σ2 with an estimate. Under Assumption A.2, +we can estimate the variance using LASSO residuals and then reiterate the previous truncation +arguments. The most common standard variance estimator is +ˆσ2(Y ) = ∥Y − X ˆβ∥2 +2/(T − |M|). +(28) +In classical regression analysis, the normally distributed estimated coefficient divided by an +estimated standard deviation follows a t-statistic. Hence, we would expect that a truncated normal +debiased estimator divided by a sample standard deviation might yield a truncated t-distribution. +However, the arguments are substantially more involved. Simply using ˆσ(Y ) of (28) in the expres- +sion η⊤Ση of Theorem A.1 changes the truncation. Specifically, Y having truncated support means +ˆσ(Y )2 is not χ2-distributed supported on the entire R+, which makes the support of ¯β/ˆσ(Y ) non- +trivial. Therefore, in order to correctly assess the truncation of the studentized quantity, we have +to disentangle how much truncation is implied in ˆσ(Y )−1 and ¯β simultaneously. Geometrically, as +ˆσ(Y ) is a non-linear function of Y and ¯β, the truncation on Y is in fact no longer of the simple +linear form {AY ≤ b} such as in Theorem A.1. +Instead of a polyhedral induced by affine constraints, we have a “quasi-affine constraints” form +of {CY ≤ ˆσ(Y )b} because LASSO KKT conditions preserve the estimated variance throughout +the arguments. Thus, both sides of the inequality CY ≤ ˆσ(Y )b have Y , and in right-hand-side +the ˆσ(Y ) is non-linear in Y . A significantly more complex set of arguments are needed compute +the exact truncation, which is equivalent to solve for a |M|-system of non-linear inequalities rather +than linear inequalities that constrain the support of Y for inference on each ¯βi. Theorem A.2 +shows the appropriate truncation based on those arguments: +Theorem A.2. 
Truncated t-distribution for estimated variance +Under Assumptions A.1 and A.3, and the null hypothesis that βi = 0, the studentized quantity +35 + +¯βi/∥η∥ˆσ(Y ) follows +¯βi/∥η∥ˆσ(Y ) ∼ TTd;Ω, +(29) +where TT is a truncated t-distribution with d degrees of freedom and truncation set Ω. +The +truncation set Ω = � +i∈M{t : t +√ +Wνi + ξi +√ +d + t2 ≤ −θi +√ +W} is an |M|-intersection of simple +inequality-induced intervals based on the following quantities. +The active signs are denoted as +s = sign(ˆβM) ∈ R|M|. The scaled LASSO equivalent penalty is ˜λ2 = +λ2 +ˆσ2(Y )·(T−|M|)+∥(X+ +M)⊤s⊙ω−1 +M ∥2 +2λ2 . +θi = (˜λsi +� +T − |M| +1 − ˜λ2∥(X+ +M)⊤s ⊙ ω−1 +M ∥2 +2 +) · e⊤ +i +� +(X⊤ +MXM)−1s ⊙ ω−1� +for i ∈ M +C = −diag(s)X+ +M ∈ R|M|×T , +ν = Cη ∈ R|M|, +ξ = C(PM − ηη⊤)Y ∈ R|M|, +d = tr(I − PM), +W = ˆσ2(Y ) · d + (η⊤Y )2 +The quantities θ and C describe the quasi-linear constraints, whereas ν and ξ transform them +into the form of Ω. Note that the Ω set is obtained from solving a low-dimensional set of quadratic +inequalities that do not necessarily yield a single interval after intersection. We provide a proof of +this result in the Online Appendix. +Using Theorem A.2 in practice poses several challenges. First, the computations are much more +involved, especially as each βi requires calculation of Ω which includes |M| actual constraints, each +of which involves solving a simple but still non-linear inequality. It is non-trivial to ensure that the +numerical stability holds at every step of the calculations. Second, since Ω is not necessarily an +interval, it is harder to interpret the truncation and also calculate the cumulative density function +through Monte-Carlo simulations when there is a non-trivial truncation structure. Third, in fact, +the authors in Tian, Loftus, and Taylor (2018) recommend a regularized likelihood minimizing +variance estimator that deviates from the simple ˆσ(Y ), which would in turn involves more numerical +integration and optimization steps. Last but not least, this result was proposed initially for studying +scale-LASSO, which is why there has to be a penalty term transformation of λ to ˜λ. Our goal is to +provide a set of tools that can be useful for a wide range of applications including the LASSO with +l2 squared norm loss rather than un-squared norm loss. These implementation difficulties are also +discussed in more detail in the Online Appendix, which provides the accompanying proofs and the +exact forms of the truncations. +We provide a practical solution based on an asymptotic normal argument. +We impose the +standard assumption that we have a consistent estimator of the residual variance: +Assumption A.4. Consistent estimator ˆσ(Y ) +Given λ, the residual variance estimator is consistent ˆσ(Y ) +p→ σ2 as T → ∞. +This general assumption includes many common scenarios such as the results specified in Corol- +lary 6.1 of van de Geer and B¨uhlmann (2011), or in Theorem 2 of Chatterjee (2014). For example, +for diminishing c +� +log(J)/T → 0 as J, T grow and our Assumptions A.1 and A.3, we obtain con- +sistency of ˆσ(Y ) of (28) by Chatterjee (2014). +36 + +Theorem A.3. Asymptotic truncated normal distribution +Suppose Assumptions A.1, A.3 and A.4 hold. Under the null hypothesis that βi = 0 and for T → ∞ +the studentized quantity ¯βi/∥η∥ˆσ(Y ) follows +¯βi/∥η∥ˆσ(Y ) ∼ TNΩ, +(30) +where TN is a truncated normal distribution with truncation Ω = [V −(z)/∥η∥2ˆσ(Y ); V +(z)/∥η∥2ˆσ(Y )], +where V −(z) and V +(z) are the same as in Theorem A.1. 
+The asymptotic distribution result has several advantages. First, it is intuitive since it parallels +the classical OLS inference with a t-statistic converging to Gaussianity. Secondly, it is computa- +tionally more tractable than results of Appendix Theorem A.2. With this result, one could obtain +asymptotically valid post-selection p-values. +B +Appendix: Empirics +Table A.1: Compositions of DS portfolios +Sorted by +# portfolios +Sorted by +# portfolios +Sorted by +# portfolios +Sorted by +# portfolios +BEME, INV +25 +ME, CFP +6 +ME, INV +25 +ME, Prior1 +25 +BEME, OP +25 +ME, DP +6 +ME, OP +25 +ME, Prior12 +25 +ME, BE +25 +ME, EP +6 +OP, INV +25 +ME, Prior60 +25 +This table lists the composition of double sorted portfolios that we use as test assets in our empirical study. All the +double sorted portfolios are from Kenneth French’s data library. +References +Ahn, S. C., and A. R. Horenstein (2013): “Eigenvalue Ratio Test for the Number of Factors,” Econo- +metrica, 81(3), 1203–1227. +Barber, R. F., and E. J. Cand´es (2015): “Controlling the false discovery rate via knockoffs,” The Annals +of Statistics, 43(5), 2055 – 2085. +Belloni, A., and V. Chernozhukov (2013): “Least squares after model selection in high-dimensional +sparse models,” Bernoulli, 19(2), 521 – 547. +Benjamini, Y., and Y. Hochberg (1995): “Controlling the False Discovery Rate: A Practical and Pow- +erful Approach to Multiple Testing,” Journal of the Royal Statistical Society. Series B (Methodological), +57(1), 289–300. +Benjamini, Y., and D. Yekutieli (2001): “The control of the false discovery rate in multiple testing +under dependency,” The Annals of Statistics, 29(4), 1165 – 1188. +Bonferroni, C. E. (1935): “Il calcolo delle assicurazioni su gruppi di teste.,” In Studi in Onore del +Professore Salvatore Ortu Carbon. +Boyd, S., and L. Vandenberghe (2004): Convex optimization. Cambridge University Press. +37 + +Cand´es, E., Y. Fan, L. Janson, and J. Lv (2018): “Panning for gold: ‘model-X‘ knockoffs for high +dimensional controlled variable selection,” Journal of the Royal Statistical Society: Series B (Statistical +Methodology), 80(3), 551–577. +Chatterjee, S. (2014): “Assumptionless consistency of the Lasso,” Working paper. +Chernozhukov, V., C. Hansen, and M. Spindler (2015): “Valid Post-Selection and Post-Regularization +Inference: An Elementary, General Approach,” Annual Review of Economics, 7(1), 649–688. +Choi, Y., J. Taylor, and R. Tibshirani (2017): “Selecting the number of principal components: Esti- +mation of the true rank of a noisy matrix,” The Annals of Statistics, 45(6), 2590 – 2617. +Donoho, D. L. (2006): “For most large underdetermined systems of linear equations the minimal l1-norm +solution is also the sparsest solution,” Communications on Pure and Applied Mathematics, 59(6), 797–829. +Fama, E. F., and K. R. French (2015): “A five-factor asset pricing model,” Journal of Financial Eco- +nomics, 116(1), 1–22. +Fithian, W., and L. Lei (2022): “Conditional calibration for false discovery rate control under depen- +dence,” The Annals of Statistics, 50(6), 3091–3118. +Fithian, W., D. Sun, and J. Taylor (2017): “Optimal Inference After Model Selection,” Working paper. +G’Sell, M. G., S. Wager, A. Chouldechova, and R. Tibshirani (2016): “Sequential selection pro- +cedures and false discovery rate control,” Journal of the Royal Statistical Society: Series B (Statistical +Methodology), 78(2), 423–444. +Hou, K., C. Xue, and L. 
Zhang (2018): “Replicating Anomalies,” The Review of Financial Studies, +33(5), 2019–2133. +Javanmard, A., and A. Montanari (2018): “Debiasing the lasso: Optimal sample size for Gaussian +designs,” The Annals of Statistics, 46(6A), 2593 – 2622. +Johari, R., P. Koomen, L. Pekelis, and D. Walsh (2021): “Always Valid Inference: Continuous +Monitoring of A/B Tests,” Operations Research. +Kapetanios, G. (2010): “A Testing Procedure for Determining the Number of Factors in Approximate +Factor Models With Large Datasets,” Journal of Business & Economic Statistics, 28(3), 397–409. +Kuchibhotla, A. K., L. D. Brown, A. Buja, E. I. George, and L. Zhao (2018): “Valid Post-selection +Inference in Assumption-lean Linear Regression,” Working paper. +Lee, J. D., D. L. Sun, Y. Sun, and J. E. Taylor (2016): “Exact post-selection inference, with application +to the lasso,” The Annals of Statistics, 44(3), 907–927. +Markovic, J., L. Xia, and J. Taylor (2018): “Unifying approach to selective inference with applications +to cross-validation,” Working paper. +Meinshausen, N., and P. B¨uhlmann (2006): “High-dimensional graphs and variable selection with the +Lasso,” The Annals of Statistics, 34(3), 1436 – 1462. +Onatski, A. (2010): “Determining the number of factors from empirical distribution of eigenvalues,” The +Review of Economics and Statistics, 92(4), 1004–1016. +Pelger, M. (2019): “Large-dimensional factor modeling based on high-frequency observations,” Journal of +Econometrics, 208(1), 23–42, Special Issue on Financial Engineering and Risk Management. +R´enyi, A. (1953): “On the theory of order statistics,” Acta Mathematica Academiae Scientiarum Hungarica, +4, 191–231. +38 + +Siegmund, D. (1985): Sequential Analysis. Springer-Verlag. +Simes, R. J. (1986): “An Improved Bonferroni Procedure for Multiple Tests of Significance,” Biometrika, +73(3), 751–754. +Taylor, J., and R. J. Tibshirani (2015): “Statistical learning and selective inference,” Proceedings of +the National Academy of Sciences, 112(25), 7629–7634. +Tian, X., J. R. Loftus, and J. E. Taylor (2018): “Selective inference with unknown variance via the +square-root lasso,” Biometrika, 105(4), 755–768. +Tian, X., and J. Taylor (2017): “Asymptotics of Selective Inference,” Scandinavian Journal of Statistics, +44(2), 480–499. +(2018): “Selective inference with a randomized response,” The Annals of Statistics, 46(2), 679–710. +Tibshirani, R. (1996): “Regression Shrinkage and Selection via the Lasso,” Journal of the Royal Statistical +Society. Series B (Methodological), 58(1), 267–288. +Tibshirani, R. J. (2013): “The lasso problem and uniqueness,” Electronic Journal of Statistics, 7, 1456 – +1490. +van de Geer, S., and P. B¨uhlmann (2011): Statistics for high dimensional data methods, theory and +applications. Springer. +van de Geer, S., P. B¨uhlmann, Y. Ritov, and R. Dezeure (2014): “On asymptotically optimal +confidence regions and tests for high-dimensional models,” The Annals of Statistics, 42(3), 1166 – 1202. +Zhang, C.-H., and S. S. Zhang (2014): “Confidence intervals for low dimensional parameters in high +dimensional linear models,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), +76(1), 217–242. +Zrnic, T., and M. I. Jordan (2020): “Post-Selection Inference via Algorithmic Stability,” Working paper. +39 +