Simultaneous Inference for High-Dimensional Approximate Factor Model

This paper studies simultaneous inference for factor loadings in the approximate factor model. We propose a test statistic based on the maximum discrepancy measure. Taking advantage of the fact that the test statistic can be approximated by a sum of independent random variables, we develop a multiplier bootstrap procedure to calculate the critical value, and we establish the asymptotic size and power of the test. Finally, we apply our result to multiple testing problems by controlling the family-wise error rate (FWER). The conclusions are confirmed by simulations and real data analysis.


Introduction
The high-dimensional factor model is becoming increasingly important in scientific areas including finance and macroeconomics. For example, the World Bank data contain two hundred countries over forty years, and the number of stocks in a portfolio-allocation problem can be in the thousands, larger than or of the same order as the sample size. Due to its broad applications, much effort has been devoted to analyzing the factor model from different angles. Examples include estimation of factors and loadings in latent factor models [1,2], covariance matrix estimation [3][4][5][6], and simultaneous inference for factor loadings of dynamic factor models [7,8], among others.
This work focuses on simultaneous inference for the loading matrix with observed factors, an important issue in the analysis of approximate factor models. For example, in gene expression genomics it is commonly assumed that each gene is associated with only a few factors: the authors of [9] showed that several oncogenes are related to the Rb/E2F pathway rather than to any other pathway, and the authors of [10] also considered a sparse loading matrix for gene expression data. Therefore, it is necessary to test the sparsity of the factor loadings. In the literature, some inference procedures have been proposed for latent factor models. For example, in the low-dimensional setting, the authors of [11] considered testing the homogeneity assumption, i.e., that the loadings associated with a factor are identical across all variables. The same testing problem was considered by the authors of [12] in the high-dimensional setting. As for observed factors, to the best of our knowledge, very limited work has been conducted.
Inference for the factor loadings with observed factors is not trivial. The approaches designed for latent factors cannot be directly applied to observed factors. The major difficulty lies in the high dimensionality, which poses significant challenges in deriving the asymptotic null distribution of the test statistic. We propose a test statistic based on the maximum discrepancy measure. This type of statistic is attractive in high-dimensional statistical inference, with applications in model selection, simultaneous inference, and multiple testing; examples include the works in [13][14][15][16][17], among others.
We use the multiplier bootstrap procedure to obtain the critical value of our test statistic. Based on the fact that the test statistic can be approximated by a sum of independent random variables, we show that the proposed multiplier bootstrap method consistently approximates the null distribution of the test statistic, so the testing procedure achieves the prespecified significance level asymptotically. There are related works applying the multiplier bootstrap method to high-dimensional inference; see, e.g., [16,18,19], among others. However, their procedures require a sparsity assumption on the parameters and cannot be directly applied to the factor model. Compared with the works on latent factors, we require neither homogeneity constraints nor sparsity of the model, and our procedure adapts to the high-dimensional regime.
Another application of our procedure is the multiple testing problem. Combining the multiplier bootstrap method with the step-down procedure proposed by [17], we show that our procedure achieves strong control of the family-wise error rate (FWER). Our method is asymptotically non-conservative compared with the Bonferroni-Holm procedure, since the correlation among the test statistics is taken into account. We also point out that any procedure controlling the FWER also controls the false discovery rate [20] when some true discoveries exist.
The rest of the paper is organized as follows. In Section 2.1, we develop the multiplier bootstrap procedure for the simultaneous test of parameters associated with a single factor and demonstrate its asymptotic level and power. In Section 2.2, we give the corresponding result for the simultaneous test of parameters associated with multiple factors. Section 3 discusses the multiple testing problem by combining the multiplier bootstrap procedure with the step-down method proposed by [17]. Section 4 investigates the numerical performance of the proposed test by simulations. We also conduct a real data analysis of portfolio risk of S&P stocks via the Fama-French model in Section 5. The proofs of the main results are given in Appendix A.
Finally, we introduce some notation. For a set $S$, let $|S|$ denote its cardinality. Let $0_p \in \mathbb{R}^p$ be the vector of zeros. For a $p \times p$ matrix $A = (a_{ij})_{i,j=1}^p$, denote by $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ the minimum and maximum eigenvalues of $A$, respectively. The matrix element-wise maximum norm and the $L_2$ norm are defined as $\|A\|_\infty = \max_{1\le i,j\le p} |a_{ij}|$ and $\|A\| = \lambda_{\max}^{1/2}(A^\top A)$, respectively. For $a = (a_1, \ldots, a_p)^\top \in \mathbb{R}^p$ and $q > 0$, denote $\|a\|_q = (\sum_{i=1}^p |a_i|^q)^{1/q}$ and $\|a\|_\infty = \max_{1\le j\le p} |a_j|$. Let $v_i \in \mathbb{R}^K$ be the $i$th column of the $K \times K$ identity matrix. We write $a_t \lesssim b_t$ if $a_t$ is smaller than or equal to $b_t$ up to a universal positive constant. For $a, b \in \mathbb{R}$, we write $a \vee b = \max\{a, b\}$. For two sets $A$ and $B$, $A \triangle B$ denotes their symmetric difference, that is, $A \triangle B = (A\backslash B) \cup (B\backslash A)$.

Simultaneous Test for a Single Factor
We consider the factor model
$$y_{it} = b_i^\top f_t + u_{it}, \quad i = 1, \ldots, p, \; t = 1, \ldots, T, \qquad (1)$$
where $y_{it}$ is the observed response for the $i$th variable at time $t$, $b_i \in \mathbb{R}^K$ is the unknown vector of factor loadings, $f_t \in \mathbb{R}^K$ is the observed vector of common factors, and $u_{it}$ is the latent error. Here, $K$ is a fixed integer denoting the number of factors, $p$ is the number of variables, and $T$ denotes the sample size. Model (1) is commonly used in finance and macroeconomics; see, e.g., [3,4,21], among others.
Denote $B = (b_1, \ldots, b_p)^\top$, $y_t = (y_{1t}, \ldots, y_{pt})^\top$, and $u_t = (u_{1t}, \ldots, u_{pt})^\top$; then model (1) can be re-expressed as
$$y_t = B f_t + u_t, \quad t = 1, \ldots, T. \qquad (2)$$
We first focus on testing the coefficients $b_{ik} = b_i^\top v_k$ corresponding to a single factor, i.e., the $k$th factor. Specifically, given $k = 1, \ldots, K$, we consider the simultaneous testing problem
$$H_{0,G}: b_{ik} = b^{null}_{ik} \ \text{for all } i \in G \quad \text{versus} \quad H_{1,G}: b_{ik} \ne b^{null}_{ik} \ \text{for some } i \in G, \qquad (3)$$
where $G$ is a subset of $\{1, \ldots, p\}$ and the $b^{null}_{ik}$ are prespecified values. For example, if the $b^{null}_{ik}$ are 0, then the hypotheses test whether the variables with indices in $G$ are significantly associated with the $k$th factor. Throughout the paper, $|G|$ is allowed to grow as fast as $p$, which may grow exponentially with $T$ as in Assumption 3.
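Under model (1), data can be generated as follows. This is a minimal sketch in which the Gaussian design, the dimensions, and the variable names are our own illustrative choices, not the paper's simulation settings.

```python
import numpy as np

# Sketch of the data-generating process in model (1): y_t = B f_t + u_t.
# Dimensions (p variables, K observed factors, T time points) are illustrative.
rng = np.random.default_rng(0)
T, p, K = 200, 50, 3

B = rng.normal(size=(p, K))   # loading matrix; row i is b_i
F = rng.normal(size=(T, K))   # observed factors; row t is f_t
U = rng.normal(size=(T, p))   # idiosyncratic errors; row t is u_t
Y = F @ B.T + U               # T x p panel of responses; row t is y_t
```

Stacking observations row-wise means each row of `Y` is one cross-section $y_t^\top$, matching the matrix form used in the next subsection.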
The ordinary least squares (OLS) estimator $\widehat{B} = Y^\top F (F^\top F)^{-1}$ is applied to estimate $B$, where $Y = (y_1, \ldots, y_T)^\top$ and $F = (f_1, \ldots, f_T)^\top$. Therefore,
$$\widehat{B} - B = \frac{1}{T}\sum_{t=1}^T u_t f_t^\top \widehat{\Omega}_f, \qquad (4)$$
where $\widehat{\Omega}_f = (F^\top F/T)^{-1}$. We propose the following test statistic for $H_{0,G}$:
$$M_{T,k} = \max_{i \in G} \sqrt{T}\,\big|\widehat{b}_{ik} - b^{null}_{ik}\big|.$$
For each $i \in G$, the asymptotic normality of $\widehat{b}_{ik}$ follows directly from the central limit theorem. However, when $|G|$ diverges with $p$, it is very challenging to establish the existence of a limiting distribution of $M_{T,k}$. In order to approximate the distribution of $M_{T,k}$, we will use the multiplier bootstrap method. From (4), we know
$$\sqrt{T}\big(\widehat{b}_{ik} - b_{ik}\big) = \frac{1}{\sqrt{T}}\sum_{t=1}^T \widehat{\xi}_{it}, \quad \text{where} \quad \widehat{\xi}_{it} = v_k^\top \widehat{\Omega}_f f_t u_{it}.$$
In order to apply the multiplier bootstrap procedure, we need to approximate $\sum_{t=1}^T \widehat{\xi}_{it}/\sqrt{T}$ by a sum of independent random variables. As $\widehat{\Omega}_f$ is consistent for $\Omega_f = \{E(f_t f_t^\top)\}^{-1}$, we can replace the former with the latter in $\widehat{\xi}_{it}$ and define $\xi_{it} = v_k^\top \Omega_f f_t u_{it}$, which gives the linear expansion
$$\sqrt{T}\big(\widehat{b}_{ik} - b_{ik}\big) = \frac{1}{\sqrt{T}}\sum_{t=1}^T \xi_{it} + o_P(1). \qquad (5)$$
We then apply the multiplier bootstrap procedure to approximate the distribution of $\max_{i\in G}\big|\sum_{t=1}^T \xi_{it}/\sqrt{T}\big|$. To estimate $\sigma_{ij}$, we first calculate the residuals $\widehat{u}_{it} = y_{it} - \widehat{b}_i^\top f_t$. Denote $\widehat{u}_t = (\widehat{u}_{1t}, \ldots, \widehat{u}_{pt})^\top$; then the error covariance matrix is estimated by $\widehat{\Sigma}_u = (\widehat{\sigma}_{ij}) = T^{-1}\sum_{t=1}^T \widehat{u}_t \widehat{u}_t^\top$. Let $\{e_t\}_{t=1}^T$, a sequence of i.i.d. $N(0,1)$ variables independent of $\{y_t, f_t\}_{t=1}^T$, be the multiplier random variables. The multiplier bootstrap statistic is then defined as
$$W_{T,k} = \max_{i \in G}\Big|\frac{1}{\sqrt{T}}\sum_{t=1}^T e_t\, v_k^\top \widehat{\Omega}_f f_t \widehat{u}_{it}\Big|.$$
Conditioning on the data, $W_{T,k}$ is the maximum of a centered Gaussian vector whose conditional covariance sufficiently approximates the covariance between $\xi_{it}$ and $\xi_{jt}$. The bootstrap critical value can then be obtained via
$$c_{W_{T,k}}(\alpha) = \inf\big\{t \in \mathbb{R} : P\big(W_{T,k} \le t \mid \{y_t, f_t\}_{t=1}^T\big) \ge 1 - \alpha\big\},$$
computed by drawing the multipliers repeatedly. In our simulations and real data analysis, we repeat the bootstrap 500 times. We now present some technical assumptions.
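The single-factor procedure can be sketched end to end as follows: OLS loadings, the max statistic $M_{T,k}$, and the multiplier-bootstrap critical value. The variable names (`Bhat`, `xi_hat`, `crit`), the simulated Gaussian data, and the dimensions are illustrative assumptions; only the structure of the computation follows the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
T, p, K, k = 200, 50, 3, 0          # test the loadings on the k-th factor
B = rng.normal(size=(p, K))
F = rng.normal(size=(T, K))
Y = F @ B.T + rng.normal(size=(T, p))

Bhat = Y.T @ F @ np.linalg.inv(F.T @ F)     # OLS: Bhat = Y'F(F'F)^{-1}
b_null = B[:, k]                            # null values; true here, so H0 holds
M = np.sqrt(T) * np.max(np.abs(Bhat[:, k] - b_null))

# Estimated linear-expansion terms: xi_hat[t, i] = v_k' Omega_hat f_t * u_hat_it.
U_hat = Y - F @ Bhat.T                      # residuals u_hat_it
Omega_hat = np.linalg.inv(F.T @ F / T)      # Omega_hat = (F'F/T)^{-1}
xi_hat = (F @ Omega_hat[:, k])[:, None] * U_hat

# Multiplier bootstrap: W = max_i |(1/sqrt(T)) sum_t e_t xi_hat_it|, e_t ~ N(0,1).
n_boot, alpha = 500, 0.05
e = rng.normal(size=(n_boot, T))
W = np.max(np.abs(e @ xi_hat) / np.sqrt(T), axis=1)
crit = np.quantile(W, 1 - alpha)            # bootstrap critical value
reject = M > crit                           # rarely True when H0 is true
```

Because the null values equal the true loadings here, `reject` is `True` only with probability close to the nominal level.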
Assumption 2. There exist positive constants $r_1, r_2, b_1, b_2$ such that for any $s > 0$, $t \le T$, $i \le p$, and $j \le K$,
$$P(|u_{it}| > s) \le \exp\{-(s/b_1)^{r_1}\}, \qquad P(|f_{jt}| > s) \le \exp\{-(s/b_2)^{r_2}\}.$$
The i.i.d. condition in Assumption 1 is commonly imposed in the literature on high-dimensional inference; see, e.g., [16]. Assumption 1 (ii) requires bounded eigenvalues of the error covariance matrix. As noted in [22], this assumption is satisfied in two situations: (1) $\mathrm{cov}(U_1, \ldots, U_p)$, where $\{U_i, i \ge 1\}$ is a stationary ergodic process with spectral density $f$ satisfying $0 < c_1 < f < c_2$; and (2) $\mathrm{cov}(X_1, \ldots, X_p)$, where $X_i = U_i + V_i$, $i = 1, \ldots, p$, $\{U_i\}$ is a stationary process as above, and $\{V_i\}$ is a noise process independent of $\{U_i\}$. In Example 1 of [22], it is demonstrated that the ARMA(r, q) process satisfies Assumption 1 (ii). Furthermore, this assumption is commonly adopted in the literature; see, e.g., [4,15]. Assumption 2 allows the application of large deviation theory to $(1/T)\sum_{t=1}^T u_{it}u_{jt} - \sigma_{ij}$ and $(1/T)\sum_{t=1}^T u_{it}f_{jt}$. In this paper, we assume that $f_t$ and $u_t$ have exponential-type tails. Assumption 3 is needed for the Bernstein-type inequality [23] and is commonly assumed in the literature on Gaussian approximation theory. Assumption 4 is also mild, as it only bounds the eigenvalues of $\Omega_f$.

Theorem 1. Under Assumptions 1-4 and $H_{0,G}$, we have
$$\sup_{\alpha \in (0,1)} \Big| P\big(M_{T,k} > c_{W_{T,k}}(\alpha)\big) - \alpha \Big| = o(1).$$
Theorem 1 demonstrates that the multiplier bootstrap critical value $c_{W_{T,k}}(\alpha)$ well approximates the quantile of the test statistic. It is worth mentioning that our method does not require any sparsity assumption on either $\Sigma_u$ or $B$.
The proof of Theorem 1 relies on two results: (1) $M_{T,k}$ can be approximated by the maximum of a sum of independent random variables over $t \le T$; and (2) the covariances of $\xi_{it}$ and $\xi_{jt}$ are well approximated by their bootstrap versions. The latter is demonstrated in Lemma A7, which shows that there exist $\zeta_1 > 0$ and $\zeta_2 > 0$ such that the maximum discrepancy between the empirical and population covariance matrices converges to zero. Based on Theorem 1, for a given significance level $0 < \alpha < 1$, we define the test
$$\Phi_\alpha = 1\big\{M_{T,k} > c_{W_{T,k}}(\alpha)\big\}.$$
The hypothesis $H_{0,G}$ is rejected whenever $\Phi_\alpha = 1$.
The bootstrap is a commonly used resampling method; a full treatment can be found in [24]. There are many variants, for example, the wild bootstrap, the empirical bootstrap, and the multiplier bootstrap, among others. As discussed in [25], other exchangeable bootstrap methods are asymptotically equivalent to the multiplier bootstrap. As our test statistic can be approximated by the maximum of a sum of independent random vectors, we adopt the multiplier bootstrap method of [25] based on Gaussian approximation.
Alternatively, we propose the studentized statistic
$$M^*_{T,k} := \max_{i \in G} \frac{\sqrt{T}\,\big|\widehat{b}_{ik} - b^{null}_{ik}\big|}{\big\{\widehat{\sigma}_{ii}\, v_k^\top \widehat{\Omega}_f v_k\big\}^{1/2}}.$$
Similarly to Section 2.1, we define the corresponding multiplier bootstrap statistic $W^*_{T,k}$ by studentizing the bootstrap draws in the same way. Theorem 2 below justifies the validity of the bootstrap procedure for the studentized statistic.
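A sketch of the studentized statistic and its bootstrap analogue follows. The plug-in scale $\{\widehat{\sigma}_{ii}(\widehat{\Omega}_f)_{kk}\}^{1/2}$ is our reading of the asymptotic standard deviation of $\sqrt{T}\,\widehat{b}_{ik}$ and should be treated as an assumption; data and dimensions are again illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T, p, K, k = 200, 40, 3, 0
B = rng.normal(size=(p, K))
F = rng.normal(size=(T, K))
Y = F @ B.T + rng.normal(size=(T, p))

Bhat = Y.T @ F @ np.linalg.inv(F.T @ F)
U_hat = Y - F @ Bhat.T
Omega_hat = np.linalg.inv(F.T @ F / T)
sigma_hat = np.mean(U_hat ** 2, axis=0)          # sigma_hat_ii, i = 1..p
sd_hat = np.sqrt(sigma_hat * Omega_hat[k, k])    # plug-in standard deviation

# Studentized max statistic (null values set to the truth, so H0 holds).
M_star = np.max(np.sqrt(T) * np.abs(Bhat[:, k] - B[:, k]) / sd_hat)

# The bootstrap statistic is studentized the same way.
xi_hat = (F @ Omega_hat[:, k])[:, None] * U_hat
e = rng.normal(size=(500, T))
W_star = np.max(np.abs(e @ xi_hat) / np.sqrt(T) / sd_hat, axis=1)
crit_star = np.quantile(W_star, 0.95)
```

Studentization puts all coordinates on a common scale, which is what later allows an extreme-value approximation as well.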

Theorem 2.
Under the assumptions in Theorem 1 and $H_{0,G}$, we have
$$\sup_{\alpha \in (0,1)} \Big| P\big(M^*_{T,k} > c_{W^*_{T,k}}(\alpha)\big) - \alpha \Big| = o(1).$$
Based on this result, for a given significance level $0 < \alpha < 1$, we define the test
$$\Phi^*_\alpha = 1\big\{M^*_{T,k} > c_{W^*_{T,k}}(\alpha)\big\}.$$
The hypothesis $H_{0,G}$ is rejected whenever $\Phi^*_\alpha = 1$.
For the studentized statistic, we can derive its asymptotic distribution. By Lemma 6 in [15], for any $x \in \mathbb{R}$ and as $p \to \infty$, we have
$$P\Big( (M^*_{T,k})^2 - 2\log|G| + \log\log|G| \le x \Big) \to \exp\Big\{-\frac{1}{\sqrt{\pi}}\exp(-x/2)\Big\}.$$
However, this alternative testing procedure may not perform well in practice, because it requires $p$ to diverge, and the convergence to the extreme value limit is typically slow.
In contrast to the extreme value approach, our testing procedure explicitly accounts for the effect of $|G|$, in the sense that the bootstrap critical value $c_{W^*_{T,k}}(\alpha)$ depends on $G$. Therefore, our approach is more robust to changes in $|G|$.
Next, we turn to the asymptotic power analysis of the above procedure. Denote by $B_k$ the $k$th column of $B$. We focus on the case where $|G| \to \infty$ as $T \to \infty$. For $\varepsilon_0 > 0$, define the separation set of alternatives whose maximal deviation satisfies
$$\max_{i \in G}\big|b_{ik} - b^{null}_{ik}\big| \ge (\sqrt{2} + \varepsilon_0)\sqrt{\log|G|/T}.$$
As long as one entry of $b_{ik} - b^{null}_{ik}$ has magnitude larger than $(\sqrt{2} + \varepsilon_0)\sqrt{\log|G|/T}$, our bootstrap-assisted test rejects the null correctly with probability tending to one. In particular, the procedure is powerful even against sparse alternatives in which only a few loadings deviate from their null values. According to Section 3.2 of [26], the separation rate $(\sqrt{2} + \varepsilon_0)\sqrt{\log(|G|)/T}$ is minimax optimal under suitable assumptions.
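The separation rate can be illustrated numerically: the sketch below misspecifies a single null loading by a multiple of $\sqrt{\log|G|/T}$, after which the max statistic concentrates well above its null level. The factor 3 and all dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
T, p, K, k = 400, 200, 3, 0
B = rng.normal(size=(p, K))
F = rng.normal(size=(T, K))
Y = F @ B.T + rng.normal(size=(T, p))

# One null value off by 3*sqrt(log p / T); the rest are correct.
b_null = B[:, k].copy()
b_null[0] -= 3.0 * np.sqrt(np.log(p) / T)

Bhat = Y.T @ F @ np.linalg.inv(F.T @ F)
M = np.sqrt(T) * np.max(np.abs(Bhat[:, k] - b_null))
# M is roughly sqrt(T) * |deviation| = 3 * sqrt(log p) here, which typically
# exceeds null-level maxima of order sqrt(2 * log p).
```

A single deviating coordinate is enough to push the maximum past the bootstrap critical value, which is the sense in which max-type statistics detect sparse alternatives.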

Simultaneous Test for Multiple Factors
In this section, we test elements of the loading matrix corresponding to different factors. The testing problem can be stated as
$$H_{0,G^*}: b_{ik} = b^{null}_{ik} \ \text{for all } (i,k) \in G^* \quad \text{versus} \quad H_{1,G^*}: b_{ik} \ne b^{null}_{ik} \ \text{for some } (i,k) \in G^*,$$
where $G^* \subset \{1, \ldots, p\} \times \{1, \ldots, K\}$. We propose the studentized test statistic obtained by taking the maximum of the studentized deviations over $(i,k) \in G^*$. From the linear expansion in (5), the multiplier bootstrap statistic is defined analogously to Section 2.1, and the bootstrap critical value is obtained by drawing the multipliers repeatedly. Based on Theorem 4, for a given significance level $0 < \alpha < 1$, we define the test $\Phi_\alpha(G^*)$, and the hypothesis $H_{0,G^*}$ is rejected whenever $\Phi_\alpha(G^*) = 1$. Now we turn to the power analysis of the test $\Phi_\alpha(G^*)$. Similarly to Section 2.1, we focus on the case where $|G^*| \to \infty$ as $T \to \infty$ and define the corresponding separation set. We consider the following condition.
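The only change from the single-factor case is the index set: the maximum now runs over pairs $(i,k) \in G^*$. A sketch (non-studentized for brevity; the index pairs and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
T, p, K = 200, 30, 3
B = rng.normal(size=(p, K))
F = rng.normal(size=(T, K))
Y = F @ B.T + rng.normal(size=(T, p))

Bhat = Y.T @ F @ np.linalg.inv(F.T @ F)
G_star = [(0, 0), (0, 1), (5, 2), (7, 0)]   # illustrative (i, k) pairs
# Max deviation over the pairs in G*; null values set to the truth here.
M_multi = np.sqrt(T) * max(abs(Bhat[i, k] - B[i, k]) for i, k in G_star)
```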
The asymptotic power of the testing procedure is given as follows.

Theorem 5.
Under the assumptions in Theorem 4 and Assumption 6, for any $\varepsilon_0 > 0$, the power of the test over the corresponding separation set tends to one as $T \to \infty$.

Multiple Testing with Strong FWER Control
In this section, we study the multiple testing problem
$$H_{0,i}: b_{ij} \le b^{null}_{ij} \quad \text{versus} \quad H_{1,i}: b_{ij} > b^{null}_{ij}, \quad i \in G.$$
For simplicity, we set $G = \{1, 2, \ldots, p\}$ and let $j$ be fixed. We combine the bootstrap-assisted procedure with the step-down method proposed by [17]. Our method can be seen as a special case of the framework in Section 5 of [25]. Note that this framework also covers testing equalities ($H_{0,i}: b_{ij} = b^{null}_{ij}$), because each equality can be rewritten as a pair of inequalities.
We briefly illustrate the control of the FWER; full details and theory can be found in [25]. Let $\Omega$ be the space of all data-generating processes, and let $\omega$ be the true process. Each null hypothesis $H_{0,i}$ is equivalent to $\omega \in \Omega_i$ for some $\Omega_i \subset \Omega$. For any $\eta \subset G$, the relevant rejection probabilities are evaluated under $P_\omega$, the probability distribution induced by the data-generating process $\omega$.
For a subset $\eta \subset G$, let $c_\eta(\alpha)$ be the bootstrap estimate of the $(1-\alpha)$-quantile of $\max_{i \in \eta} t_{ij}$, where $t_{ij}$ denotes the test statistic for $H_{0,i}$. The step-down procedure in [17] works as follows. At the first step, set $\eta(1) = G$ and reject all $H_{0,i}$ satisfying $t_{ij} > c_{\eta(1)}(\alpha)$. If no $H_{0,i}$ is rejected, stop the procedure; otherwise, let $\eta(2)$ be the set of indices of the hypotheses not rejected at the first step. At step $\ell \ge 2$, let $\eta(\ell) \subset G$ be the subset of hypotheses not rejected at step $\ell - 1$, and reject all $H_{0,i}$, $i \in \eta(\ell)$, satisfying $t_{ij} > c_{\eta(\ell)}(\alpha)$. If no hypothesis is rejected at this step, stop. Proceed in this way until the algorithm stops.
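The step-down loop can be sketched as follows, assuming an array of observed statistics and a matrix of bootstrap draws of the individual statistics, so that each subset critical value is a quantile of the subset maximum. The function name `step_down` and the simulated inputs are ours.

```python
import numpy as np

def step_down(stats, boot, alpha=0.05):
    """Romano-Wolf step-down with bootstrap critical values.

    stats: (p,) observed statistics t_ij.
    boot:  (n_boot, p) bootstrap draws of the individual statistics.
    """
    p = len(stats)
    active = np.arange(p)                 # hypotheses not yet rejected
    rejected = np.zeros(p, dtype=bool)
    while active.size > 0:
        # Critical value for the current subset: quantile of the subset max.
        crit = np.quantile(boot[:, active].max(axis=1), 1 - alpha)
        new = active[stats[active] > crit]
        if new.size == 0:
            break                         # stop: nothing rejected at this step
        rejected[new] = True
        active = active[stats[active] <= crit]
    return rejected

rng = np.random.default_rng(4)
boot = np.abs(rng.normal(size=(500, 20)))
stats = np.abs(rng.normal(size=20))
stats[:3] += 5.0                          # three clearly false null hypotheses
rej = step_down(stats, boot)
```

Because the subset maxima shrink as hypotheses are rejected, later critical values are smaller, which is exactly why the step-down method dominates a single-step max test.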
Romano and Wolf [17] proved that this step-down procedure controls the FWER provided the critical values satisfy suitable monotonicity and coverage conditions. Therefore, we can show that the step-down method together with the multiplier bootstrap provides strong control of the FWER by verifying (9) and (10). The theoretical result is given in the proposition below; its proof is similar to that of Theorem 1 and is omitted here.

Proposition 1.
Under the assumptions in Theorem 1, the step-down procedure with the bootstrap critical value $c_\eta(\alpha)$ satisfies (8).
Our multiple testing method has two important features: (i) it can be applied to models with increasing dimension; (ii) it takes into account the correlation among the statistics and hence is asymptotically non-conservative.
In the simulation, we also consider the Benjamini-Hochberg procedure [20] to control the false discovery rate (FDR), which is summarized as follows. For each of $H_{0,1}, \ldots, H_{0,p}$, we calculate the p-values $P_1, \ldots, P_p$ based on the studentized test statistic. Let $P_{(1)} \le \cdots \le P_{(p)}$ be the ordered p-values, and denote by $H_{0,(i)}$ the null hypothesis corresponding to $P_{(i)}$. Let $k = \max\{i : P_{(i)} \le i\alpha/p\}$, and reject all $H_{0,(i)}$ for $i = 1, \ldots, k$.
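The step-up rule just described can be sketched directly; the function name and the toy p-values are illustrative.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Reject the k smallest p-values, k = max{i : P_(i) <= i*alpha/p}."""
    p = len(pvals)
    order = np.argsort(pvals)                       # indices of sorted p-values
    thresh = alpha * np.arange(1, p + 1) / p        # i*alpha/p, i = 1..p
    below = np.nonzero(pvals[order] <= thresh)[0]
    reject = np.zeros(p, dtype=bool)
    if below.size > 0:
        reject[order[: below[-1] + 1]] = True       # reject all up to the last crossing
    return reject

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.20, 0.74])
rej = benjamini_hochberg(pvals, alpha=0.05)
# With these p-values, only the first two hypotheses are rejected:
# 0.001 <= 0.05/6 and 0.008 <= 2*0.05/6, while 0.039 > 3*0.05/6.
```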

Simulation Study
This section examines the performance of the proposed testing procedure by a simulation study. We fix the number of factors $K = 3$ and the sample size $T \in \{200, 400\}$, and let the dimensionality $p$ increase from 50 to 600. Throughout the simulation, we consider testing the first column of $B$ and repeat the multiplier bootstrap 500 times.
Each row of B is generated independently from N(0, Here, we consider two models for the covariance structure Σ u . Under each model, {f t } T t=1 and {u t } T t=1 are generated independently from N(0, cov(f t )) and N(0, Σ u ), respectively.
We calculate the empirical sizes of the test for each column of $B$ under each model by considering hypothesis (3) with $G = \{1, 2, \ldots, p\}$ and $b^{null}_{ik}$ being the true value of $b_{ik}$. The results are summarized in Table 1, where "NST" and "ST" denote the non-studentized and studentized bootstrap-based tests, respectively, and "EX" denotes the test using the extreme value distribution. The estimated sizes of the three tests are reasonably close to the nominal level 0.05 for values of $p$ ranging from 50 to 600. For all $i \in G$, by varying $b_{ik} = b^{null}_{ik} + c\ell/40$ with $c = \pm 0.8$ and $\ell = 0, \ldots, 10$, we plot the empirical powers of $M_{T,k}$ and $M^*_{T,k}$ in Figure 1. For ease of presentation, we only consider $p \in \{10, 200, 600\}$; the results for other dimensionalities are similar in spirit and are not presented here. For all tests, the significance level is fixed at $\alpha = 0.05$. Figure 1 shows that the empirical rejection rate grows from the nominal level to one as $c\ell$ deviates away from zero. The difference between the NST and ST tests is slight. For small $p$, the EX test does not perform well because this approach requires diverging $p$. Furthermore, for a non-sparse error covariance matrix, our method performs better than the EX method. These numerical results confirm our theoretical analysis.
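The empirical-size computation can be sketched as a small Monte Carlo: the fraction of replications in which the bootstrap test rejects a true null. The dimensions and the 200 bootstrap draws below are scaled-down choices for speed, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)
T, p, K, k, alpha, reps = 100, 20, 3, 0, 0.05, 200
rejections = 0
for _ in range(reps):
    B = rng.normal(size=(p, K))
    F = rng.normal(size=(T, K))
    Y = F @ B.T + rng.normal(size=(T, p))
    Bhat = Y.T @ F @ np.linalg.inv(F.T @ F)
    U_hat = Y - F @ Bhat.T
    Omega_hat = np.linalg.inv(F.T @ F / T)
    xi_hat = (F @ Omega_hat[:, k])[:, None] * U_hat
    # Null values equal the truth, so every rejection is a type-I error.
    M = np.sqrt(T) * np.max(np.abs(Bhat[:, k] - B[:, k]))
    W = np.max(np.abs(rng.normal(size=(200, T)) @ xi_hat) / np.sqrt(T), axis=1)
    rejections += int(M > np.quantile(W, 1 - alpha))
size = rejections / reps   # empirical size; should be near alpha
```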
Next, we study the numerical performance of the step-down method in Section 3 and compare it with the Bonferroni-Holm procedure. Consider the following two-sided multiple testing problem: $H_{0,i}: b_{ij} = b^{null}_{ij}$ for all $i = 1, 2, \ldots, p$ with $j = 1$. For Models 1 and 2, the first $s_0$ entries of the true loading vector are $b^{null}_{ij} + 0.5$ and $b^{null}_{ij} + 0.35$, respectively, and the rest are equal to $b^{null}_{ij}$. We set $T \in \{200, 400\}$ and $p \in \{50, 200, 500, 600\}$.
We employ the step-down method based on the non-studentized and studentized test statistics, as well as the Bonferroni-Holm procedure (based on the studentized test statistic), to control the FWER. We denote these three procedures by NST-FWER, ST-FWER, and BH-FWER, respectively. For comparison, we also consider the Benjamini-Hochberg procedure to control the FDR, denoted by BH-FDR. Based on 500 replications, we calculate the empirical FWER, FDR, and average power, which are reported in Tables 2 and 3. From Tables 2 and 3, the proposed and Bonferroni-Holm procedures provide similar control of the FWER, and the Benjamini-Hochberg procedure controls the FDR. The empirical powers of the step-down method and the Benjamini-Hochberg procedure are higher than that of the Bonferroni-Holm procedure. It is also seen that controlling the FDR is more powerful than controlling the FWER.

Real Data Analysis
This section conducts hypothesis testing on financial data from 1 January 2017 to 14 March 2018. The dataset consists of daily returns of 491 stocks in the S&P 500 index. In addition, we collected the Fama-French three factors [21] over the same period. In summary, the panel matrix is a 300 by 491 matrix $Y$, together with a factor matrix $F$ of size 300 by 3, where 300 is the number of days and 491 is the number of stocks.
We first center and standardize the factor matrix $F$, and center $Y$ as well. We consider testing the sparsity of each column of $B$, repeating the multiplier bootstrap 500 times. A simultaneous test of the parameters corresponding to multiple factors is also considered. The results are summarized in Table 4. For the first column of $B$, it is not reasonable to assume $b_{i1} = 0$. However, we can claim that the last two columns of $B$ are sparse. Hence, a sufficiently large number of stocks are not influenced by the last two factors.

Appendix A. Technical Details
We prove the main results in this section. First, we introduce some notation. Throughout this section, we denote by $c, c', C, C', C_i$ constants that do not depend on $p$ or $T$ and may vary from place to place.
We begin by presenting some useful lemmas that will be used in the proofs of the main results.
Lemma A2. Under Assumptions 1-4, the following bounds hold.
Proof of Lemma A2. See the proofs of Lemma A.3 and Lemma B.1 in [4].
Lemma A3. If a random variable $X$ has an exponential-type tail, i.e., there exist $r > 0$ and $b > 0$ such that $P(|X| > s) \le \exp\{-(s/b)^r\}$ for all $s > 0$, then the corresponding moment bounds hold.
Proof of Lemma A3. The claim follows by integrating the tail bound.
Proof of Lemma A4. By Assumption 3, we have $(\log(pT))^7/T \le C_1 T^{-c_1}$ for some constants $c_1, C_1 > 0$. We then apply Corollary 2.1 of [25] to the sequence $\xi_{it}$. It remains to verify its Condition (E.2), that is, uniformly over $i$, with constants $c_0, C_0 > 0$ and some large enough constant $B$. By Lemmas A1 and A3, we have $E(f_{it}f_{jt}) = O(1)$ uniformly for $i, j \le K$, which implies $\|\Omega_f\|_\infty = O(1)$. Uniformly for $i \le p$, by Lemma A1, $\gamma_{it}$ and $\max_{1\le i\le p}|\gamma_{it}|$ have exponential-type tails. Then, by Lemma A3, $E(|\xi_{it}|^4) \le E(|\gamma_{it}|^4) = O(1)$ and $E(\max_{i\le p}|\xi_{it}|)^4 \le E(\max_{i\le p}|\gamma_{it}|)^4 = O(1)$. Thus, we can find a large enough $B$ such that the above condition is satisfied, and the proof is complete.
Lemma A5. Under the assumptions in Theorem 1, there exists a sequence of positive numbers $\alpha_T \to \infty$ such that $\alpha_T/p = o(1)$ and the probability bound in the display holds.
Proof of Lemma A5. By Lemma A2 (iii), we obtain the first bound; the second bound follows similarly. Choosing $\alpha_T = \log T$, by Assumption 3, the proof is complete.
Proof of Lemma A7. The arguments in the proof of Lemma A5 imply the bound on $\widehat{\Omega}_f$. By Lemma A2 (ii), uniformly for $i \le p$, we obtain the corresponding bound on the residual terms. Choosing the constants $\zeta_1$ and $\zeta_2$ accordingly, the proof is complete.
(ii) By Assumption 1 and Lemma A1, $u_{it}f_{jt}f_{mt}f_{nt}$ satisfies the exponential tail condition with tail parameter $r_1 r_2/(9r_1 + 3r_2)$. Therefore, applying Bernstein's inequality and the Bonferroni method to $u_{it}f_{jt}f_{mt}f_{nt}$, as in (A1), with the parameter $r_3^{-1} = 3r_1^{-1} + 9r_2^{-1}$, it follows from $1.5(r_1^{-1} + r_2^{-1}) > 1$ that $r_3 < 1$. Thus, when $s = C\sqrt{\log p/T}$ for large enough $C$, as $K$ is fixed, this term and the remaining terms on the right-hand side of the inequality, multiplied by $pK^3$, are of order $o(p^{-2})$. Part II is similar to (ii), which yields the stated bound, and the proof is complete.
denote the maximum discrepancy between the empirical and population covariance matrices. By the triangle inequality, we have