Article

Testing Cross-Sectional Correlation in Large Panel Data Models with Serial Correlation

Badi H. Baltagi, Chihwa Kao and Bin Peng
1 Department of Economics & Center for Policy Research, 426 Eggers Hall, Syracuse University, Syracuse, NY 13244-1020, USA
2 Department of Economics, 365 Fairfield Way, U-1063, University of Connecticut, Storrs, CT 06269-1063, USA
3 Department of Finance, 523 School of Economics, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Econometrics 2016, 4(4), 44; https://doi.org/10.3390/econometrics4040044
Submission received: 23 July 2016 / Revised: 12 October 2016 / Accepted: 19 October 2016 / Published: 4 November 2016
(This article belongs to the Special Issue Recent Developments in Panel Data Methods)

Abstract

This paper considers the problem of testing cross-sectional correlation in large panel data models with serially-correlated errors. It finds that existing tests for cross-sectional correlation encounter size distortions when there is serial correlation in the errors. To control the size, this paper proposes a modification of Pesaran's Cross-sectional Dependence (CD) test to account for serial correlation of an unknown form in the error term. We derive the limiting distribution of this test as $(N, T) \to \infty$. The test is distribution free and allows for unknown forms of serial correlation in the errors. Monte Carlo simulations show that the test has good size and power for large panels when serial correlation in the errors is present.

1. Introduction

This paper studies testing for cross-sectional correlation in panel data when serial correlation is also present in the disturbances. It does that for the case of strictly exogenous regressors.1 Cross-sectional correlation could be due to unknown common shocks, spatial effects or interactions within social networks. Ignoring cross-sectional correlation in panels can have serious consequences. As with serial correlation in time series, cross-sectional correlation leads to an efficiency loss for least squares and invalidates standard inference. In some cases, it results in inconsistent estimation; see Lee [1] and Andrews [2]. Testing for cross-sectional correlation in panel residuals is therefore important.
One could test for a specific form of correlation in the error, like spatial correlation; see Anselin and Bera [3] for cross-sectional data and Baltagi et al. [4] for panel data, to mention a few. Alternatively, one could test for correlation without imposing any structure on the form of correlation among the disturbances. The null hypothesis, in that case, is the diagonality of the covariance or correlation matrix of the N-dimensional disturbance vector $u_t = (u_{1t}, \dots, u_{Nt})'$, which is usually assumed to be independent over time, for $t = 1, \dots, T$. When N is fixed and T is large, traditional multivariate statistical techniques, including log-likelihood ratio and Lagrange multiplier tests, are applicable; see, for example, Breusch and Pagan [5], who propose a Lagrange Multiplier (LM) test based on the average of the squared pair-wise correlation coefficients of the least squares residuals.
However, as N becomes large because of the growing availability of comprehensive databases in macro and finance, this so-called "high dimensional" phenomenon brings challenges to classical statistical inference. As shown in the Random Matrix Theory (RMT) literature, the sample covariance and correlation matrices are ill-conditioned, since their eigenvectors are not consistent with their population counterparts; see Johnstone [6] and Jiang [7]. New approaches have been considered in the statistics literature for testing the diagonality of the sample covariance or correlation matrices; see Ledoit and Wolf [8], Schott [9] and Chen et al. [10], to mention a few.
The above tests for raw data cannot be used directly to test cross-sectional correlation in panel data regressions since the disturbances are not observable. Noise caused by substituting residuals for the actual disturbances may accumulate due to large dimensions, and this in turn may lead to biased inference. The bias for cross-sectional correlation tests in large panels depends on the model specification, the estimation method and the sample sizes N and T, among other things. For example, Pesaran et al. [11] consider an LM test and correct its bias in a large heterogeneous panel data model; Baltagi et al. [12] extend Schott's test [9] to a fixed effects panel data model and correct the bias caused by estimating the disturbances with fixed effects residuals in a homogeneous panel data model. Following Ledoit and Wolf [8], Baltagi et al. [13] propose a bias-adjusted test for the null of sphericity in the fixed effects homogeneous panel data model. However, this method does not test cross-sectional correlation directly. Rejection of the null could be due to cross-sectional correlation or heteroscedasticity or both. A general test for cross-sectional correlation was proposed by Pesaran [14]. His test statistic is based on the average of the pair-wise correlation coefficients and is denoted $CD_P$ (CD for Cross-sectional Dependence). The test is exactly centered at zero under the null and does not need bias correction. Pesaran [15] extends his test statistic to test the null of weak cross-sectional correlation and derives its asymptotic distribution using joint limits. This test is robust to many model specifications and has many applications. Recent surveys of cross-sectional correlation or dependence tests in large panels are provided by Moscone and Tosetti [16], Sarafidis and Wansbeek [17] and Chudik and Pesaran [18].
The asymptotics and bias-correction of existing tests for cross-sectional correlation in large panels are carried out under some, albeit restrictive, assumptions. For instance, the errors are normally distributed; $N/T \to c \in (0, \infty)$ as $(N, T) \to \infty$, and so on. One fundamental restriction is that the errors are independent over time. In fact, the presence of serial correlation in panel data applications is likely to be the rule rather than the exception, especially for macro applications and when T is large. Ignoring serial correlation does not affect the consistency of estimates, but it leads to incorrect inference. In RMT, when $u_1, u_2, \dots, u_T$ are independent across $t = 1, \dots, T$, and N is large, the Limiting Spectral Distribution (LSD) of the corresponding sample covariance matrix is the Marchenko-Pastur (M-P) law; see Bai and Silverstein [19]. Correlation among these disturbances may cause a deviation of the LSD from the M-P law. Indeed, Bai and Zhou [20] show that the LSD of the sample covariance matrix with correlations in columns is different from the M-P law. Gao et al. [21] show similar results for the sample correlation matrix. Therefore, cross-sectional correlation tests, which depend heavily on the assumption of independence over time, could lead to misleading inference if there is serial correlation in the disturbances.
To better understand the effects of potential serial correlation on the existing tests of cross-sectional correlation, let us assume that the $T \times 1$ independent random vectors $u_i = (u_{i1}, \dots, u_{iT})'$, for $i = 1, \dots, N$, are observable. The correlation coefficient $\rho_{ij}$ of any $u_i$ and $u_j$ ($i \ne j$) is defined by $u_i'u_j / (\|u_i\| \cdot \|u_j\|)$. Their means are zero vectors. If all of the elements of each $u_i$ are independent and identically spherically distributed, Muirhead [22] shows that $E(\rho_{ij}^2) = 1/T$. When N is fixed, the sum of all $N(N-1)/2$ distinct terms $\rho_{ij}^2$ will be small as $T \to \infty$. In Section 3, we show that if all of the elements of each $u_i$ follow a multiple Moving Average model of order one (MA(1)) with parameter θ, then $E(\rho_{ij}^2) = 1/T + \theta^2/(T + T\theta^2)$. As $N \to \infty$, the extra term $\theta^2/(T + T\theta^2)$ can accumulate and lead to extra bias in the existing LM-type tests in panels. Although $CD_P$ is centered at zero, it may still encounter size distortions because serial correlation is ignored.
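As a quick illustration of this inflation, the following minimal simulation sketch (ours, not the authors'; Gaussian innovations and the parameter values θ = 0.8, T = 100 are assumptions) compares the average squared pair-wise correlation of two independent series with the $1/T$ benchmark, under IID and MA(1) errors.

```python
# Illustrative sketch: the average squared correlation of two independent series
# is close to 1/T under IID errors, but noticeably larger under MA(1) errors.
import numpy as np

rng = np.random.default_rng(0)
T, theta, reps = 100, 0.8, 20000

def ma1_series(T, theta, rng):
    xi = rng.standard_normal(T + 1)      # one presample innovation
    return xi[1:] + theta * xi[:-1]      # u_t = xi_t + theta * xi_{t-1}

def mean_rho_sq(draw, reps, rng):
    out = np.empty(reps)
    for r in range(reps):
        ui, uj = draw(rng), draw(rng)
        out[r] = (ui @ uj / (np.linalg.norm(ui) * np.linalg.norm(uj))) ** 2
    return out.mean()

print("1/T benchmark:", 1 / T)
print("IID errors   :", mean_rho_sq(lambda g: g.standard_normal(T), reps, rng))
print("MA(1) errors :", mean_rho_sq(lambda g: ma1_series(T, theta, g), reps, rng))
```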
This paper proposes a modification of Pesaran's CD test of cross-sectional correlation when the error terms are serially correlated in large panel data models. First, using results from RMT, we study the first two moments of the test statistic and propose an unbiased and consistent estimate of its variance under the null with serial correlation of an unknown form. Second, we derive the limiting distribution of the test under the asymptotic framework with $(N, T) \to \infty$ simultaneously, in any order and without any distributional assumption. We also discuss its local power properties under a multi-factor alternative. Monte Carlo simulations are conducted to study the performance of our test statistic in finite samples. The results confirm our theoretical findings.
The plan for the paper is as follows. The next section introduces the model and notation, existing LM type tests and the Cross-sectional Dependence (CD) test. It then presents our assumptions and the proposed modified Pesaran’s CD test statistic. Section 3 derives the asymptotics of this test statistic. Section 4 reports the results of the Monte Carlo experiments. Section 5 provides some concluding remarks. All of the mathematical proofs are provided in the Appendix.
Throughout the paper, we adopt the following notation. For a square matrix B, $\mathrm{tr}(B)$ is the trace of B; $\|B\| = (\mathrm{tr}(B'B))^{1/2}$ denotes the Frobenius norm of a matrix or the Euclidean norm of a vector B; $\to_d$ denotes convergence in distribution; and $\to_p$ denotes convergence in probability. We use $(N, T) \to \infty$ to denote the joint convergence of N and T when N and T pass to infinity simultaneously. K is a generic positive constant not depending on N or T.

2. Model and Tests

Consider the following heterogeneous panel data model
$$y_{it} = \beta_i' x_{it} + u_{it}, \quad \text{for } i = 1, \dots, N; \; t = 1, \dots, T, \qquad (1)$$
where i and t index the cross-section dimension and the time dimension, respectively; $y_{it}$ is the dependent variable, and $x_{it}$ is a $k \times 1$ vector of exogenous regressors. The individual coefficients $\beta_i$ are defined on a compact set and allowed to vary across i. The null hypothesis of no cross-sectional correlation is
$$H_0: \mathrm{cov}(u_{it}, u_{jt}) = 0, \quad \text{for all } t, \; i \ne j, \qquad (2)$$
or equivalently as
$$H_0: \rho_{ij} = 0, \quad \text{for } i \ne j, \qquad (3)$$
where $\rho_{ij}$ is the pair-wise correlation coefficient of the disturbances, defined by
$$\rho_{ij} = \frac{\sum_{t=1}^{T} u_{it} u_{jt}}{\left(\sum_{t=1}^{T} u_{it}^2\right)^{1/2} \left(\sum_{t=1}^{T} u_{jt}^2\right)^{1/2}}.$$
Under the alternative, there exists at least one $\rho_{ij} \ne 0$, for some $i \ne j$. For the panel regression model (1), the disturbances are unobservable. In this case, the test statistic is based on the residual-based correlation coefficients $\hat\rho_{ij}$. Specifically,
$$\hat\rho_{ij} = \frac{\sum_{t=1}^{T} e_{it} e_{jt}}{\left(\sum_{t=1}^{T} e_{it}^2\right)^{1/2} \left(\sum_{t=1}^{T} e_{jt}^2\right)^{1/2}},$$
where $e_{it}$ is the Ordinary Least Squares (OLS) residual using the T observations for each $i = 1, \dots, N$. These OLS residuals are given by
$$e_{it} = y_{it} - x_{it}'\hat\beta_i, \qquad (4)$$
with $\hat\beta_i$ being the OLS estimate of $\beta_i$ from (1) for $i = 1, \dots, N$. Let $M_i = I_T - P_{X_i}$, where $P_{X_i} = X_i(X_i'X_i)^{-1}X_i'$, and $X_i$ is a $T \times k$ matrix of regressors whose t-th row is the $1 \times k$ vector $x_{it}'$. We also define $u_i = (u_{i1}, \dots, u_{iT})'$, $e_i = (e_{i1}, \dots, e_{iT})'$ and $v_i = e_i / \|e_i\|$, for $i = 1, \dots, N$. The OLS residuals can be rewritten in vector form as $e_i = M_i u_i$, and the residual-based pair-wise correlation coefficients can be rewritten as $\hat\rho_{ij} = v_i'v_j$, for any $1 \le i \ne j \le N$.
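For concreteness, a small sketch of this construction (our illustration; the function name and the array layout are assumptions, not the authors' code) is given below: run unit-by-unit OLS, scale each residual vector to unit length, and collect the matrix of inner products $\hat\rho_{ij} = v_i'v_j$.

```python
import numpy as np

def pairwise_residual_correlations(Y, X):
    """Y: T x N matrix of y_it; X: T x N x k array of regressors x_it."""
    T, N = Y.shape
    V = np.empty((T, N))
    for i in range(N):
        Xi = X[:, i, :]                               # T x k regressor matrix for unit i
        beta_i, *_ = np.linalg.lstsq(Xi, Y[:, i], rcond=None)
        e_i = Y[:, i] - Xi @ beta_i                   # OLS residuals e_i = M_i u_i
        V[:, i] = e_i / np.linalg.norm(e_i)           # v_i = e_i / ||e_i||
    return V.T @ V                                    # (i, j) entry is rho_hat_ij
```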

2.1. LM and CD Tests

For N fixed and $T \to \infty$, Breusch and Pagan [5] propose an LM test of the null of no cross-sectional correlation in (2) without imposing any structure on this correlation. It is given by
$$LM_{BP} = T \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \hat\rho_{ij}^2. \qquad (5)$$
$LM_{BP}$ is asymptotically distributed as a Chi-squared distribution with $N(N-1)/2$ degrees of freedom under the null. However, for a typical micro-panel dataset, N is larger than T, and the Breusch-Pagan LM test statistic is not valid under this "large N, small T" setup. In fact, Pesaran [14] proposes a scaled version of this LM test as follows
$$LM_{P} = \sqrt{\frac{1}{N(N-1)}} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left(T \hat\rho_{ij}^2 - 1\right). \qquad (6)$$
Pesaran [14] shows that $LM_P$ is distributed as $N(0,1)$ under the null as $T \to \infty$ first, followed by $N \to \infty$. However, $E(T\hat\rho_{ij}^2 - 1)$ is not correctly centered at zero with fixed T and large N. Hence, Pesaran et al. [11] propose a bias-adjusted version of this LM test, denoted by $LM_{PUY}$. They show that the exact mean and variance of $(T-k)\hat\rho_{ij}^2$ are given by
$$\mu_{Tij} = E\left[(T-k)\hat\rho_{ij}^2\right] = \frac{1}{T-k}\, \mathrm{tr}\left[E(M_i M_j)\right], \qquad (7)$$
and
$$\nu_{Tij}^2 = \mathrm{var}\left[(T-k)\hat\rho_{ij}^2\right] = \left\{\mathrm{tr}\left[E(M_i M_j)\right]\right\}^2 a_{1T} + 2\, \mathrm{tr}\left\{\left[E(M_i M_j)\right]^2\right\} a_{2T}, \qquad (8)$$
where $a_{1T} = a_{2T} - \frac{1}{(T-k)^2}$ and $a_{2T} = 3\left[\frac{(T-k-8)(T-k+2)+24}{(T-k+2)(T-k-2)(T-k-4)}\right]^2$. $LM_{PUY}$ is given by
$$LM_{PUY} = \sqrt{\frac{2}{N(N-1)}} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \frac{(T-k)\hat\rho_{ij}^2 - \mu_{Tij}}{\nu_{Tij}}. \qquad (9)$$
Pesaran et al. [11] show that $LM_{PUY}$ is asymptotically distributed as $N(0,1)$ under the null (2) and the normality assumption on the disturbances, as $T \to \infty$ followed by $N \to \infty$. Alternatively, Pesaran [14] proposes a test based on the average of the pair-wise correlation coefficients rather than their squares. The test statistic is given by
$$CD_{P} = \sqrt{\frac{2T}{N(N-1)}} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \hat\rho_{ij}. \qquad (10)$$
Pesaran [15] shows that this test is asymptotically distributed as $N(0,1)$ as $(N, T) \to \infty$. He also extends it to test the null of weak cross-sectional correlation.
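The following sketch (ours; it assumes rho_hat is the $N \times N$ matrix returned by pairwise_residual_correlations above) computes the statistics in (5), (6) and (10). The bias-adjusted $LM_{PUY}$ in (9) additionally needs the projection matrices $M_i$ and is omitted here.

```python
import numpy as np

def lm_and_cd_statistics(rho_hat, T):
    N = rho_hat.shape[0]
    iu = np.triu_indices(N, k=1)                                   # all pairs i < j
    r = rho_hat[iu]
    lm_bp = T * np.sum(r ** 2)                                     # Breusch-Pagan LM, Eq. (5)
    lm_p = np.sqrt(1.0 / (N * (N - 1))) * np.sum(T * r ** 2 - 1)   # scaled LM, Eq. (6)
    cd_p = np.sqrt(2.0 * T / (N * (N - 1))) * np.sum(r)            # Pesaran CD, Eq. (10)
    return lm_bp, lm_p, cd_p
```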

2.2. Assumptions and the Modified CD Test Statistic

So far, all of the methods surveyed above for testing cross-sectional correlation in panel data models assume that the disturbances are independent over time. Ignoring serial correlation usually results in efficiency loss and biased inference. In fact, we show in Section 3 that the existence of serial correlation leads to extra bias in the LM-type tests. The $CD_P$ test in (10) is still centered at zero under serial correlation, but its variance is affected by it. As a result, we also expect size distortions in $CD_P$. To correct for this, we consider a modification of this test statistic that accounts for an unknown form of serial correlation in the disturbances. First, we introduce the assumptions needed:
Assumption 1.
Define $\xi_i = (\xi_{i0}, \xi_{i1}, \dots, \xi_{iT})'$ and $\varepsilon_i = (\varepsilon_{i0}, \varepsilon_{i1}, \dots, \varepsilon_{iT})'$. We also assume that $\xi_i = \sigma_i \varepsilon_i$, for $i = 1, \dots, N$, where $\varepsilon_i$ is a random vector with mean vector zero and identity covariance matrix. Let $\varepsilon_{it}$ denote the t-th entry of $\varepsilon_i$, for any $i = 1, \dots, N$. $\varepsilon_{it}$ has a uniformly bounded fourth moment, and there exists a finite constant Δ such that $E(\varepsilon_{it}^4) = 3 + \Delta$. Following Bai and Zhou [20], the disturbances $u_t = (u_{1t}, u_{2t}, \dots, u_{Nt})'$ are generated by
$$u_t = \sum_{s=0}^{\infty} d_s \xi_{t-s}, \quad \text{for } t = 1, \dots, T, \qquad (11)$$
where $\xi_t = (\xi_{1t}, \xi_{2t}, \dots, \xi_{Nt})'$, for $t = 0, 1, \dots, T$, are IID random vectors across time, and $\{d_s\}_{s=0}^{\infty}$ is a sequence of numbers satisfying $\sum_{s=0}^{\infty} |d_s| < K < \infty$.
Assumption 1 allows the error term $u_{it}$ to be correlated over time. The condition $\sum_{s=0}^{\infty} |d_s| < K < \infty$ excludes long-memory-type strong dependence. We need bounded moment conditions to ensure large $(N, T)$ asymptotics for panel data models with serial correlation. The conditions in Assumption 1 are quite mild; they are satisfied by many parametric weak-dependence processes, such as stationary and invertible finite-order Auto-Regressive Moving Average (ARMA) models. Under Assumption 1, the covariance matrix of each $u_i$ is $\Sigma_i = \sigma_i^2 \Sigma$, where Σ is a $T \times T$ symmetric positive definite matrix. The random vector $u_i$ can be written as $u_i = \sigma_i \Gamma \varepsilon_i$, where $\Gamma\Gamma' = \Sigma$. The generic covariance matrix $\Sigma_i$ of each $u_i$ captures the serial correlation. Bai and Zhou [20] use this representation and show that $\frac{1}{T}\mathrm{tr}(\Sigma^{\kappa})$ is bounded for any fixed positive integer κ. More specifically, consider a multiple Moving Average model of order one (MA(1))
$$u_t = \xi_t + \theta \xi_{t-1}, \quad t = 1, \dots, T, \qquad (12)$$
where $|\theta| < 1$ and $u_t$, $\xi_t$, $u_i$ and $\xi_i$ are defined as in Assumption 1. For this case, $\Sigma_{MA} = (\delta_{lr})_{T \times T}$, where
$$\delta_{lr} = \begin{cases} 1 + \theta^2, & l = r; \\ \theta, & |l - r| = 1; \\ 0, & |l - r| > 1. \end{cases} \qquad (13)$$
One can also verify that for (11), we have the following generic representation,
$$\Sigma = (\varpi_{lr})_{T \times T}, \quad \text{where } \varpi_{lr} = \sum_{s=0}^{\infty} d_s d_{|l-r|+s}. \qquad (14)$$
We use this representation throughout the paper for convenience.
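As a small numerical check (illustrative, not from the paper), the MA(1) covariance in (13) is the special case $d_0 = 1$, $d_1 = \theta$ of the generic representation (14); the sketch below builds Σ both ways and confirms they agree.

```python
import numpy as np

def sigma_generic(d, T):
    # varpi_{lr} = sum_s d_s d_{|l-r|+s}, with d_s = 0 beyond the given lags
    d = np.asarray(d, dtype=float)
    acf = np.array([np.sum(d[:len(d) - h] * d[h:]) for h in range(T)])
    l, r = np.indices((T, T))
    return acf[np.abs(l - r)]

theta, T = 0.8, 6
sigma_ma = (1 + theta**2) * np.eye(T) + theta * (np.eye(T, k=1) + np.eye(T, k=-1))
print(np.allclose(sigma_ma, sigma_generic([1.0, theta], T)))    # True
```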
Assumption 2.
The regressors $x_{it}$ are strictly exogenous, such that
$$E(u_{it} \mid X_i) = 0, \quad \text{for all } i = 1, \dots, N \text{ and } t = 1, \dots, T, \qquad (15)$$
and $X_i'X_i$ is a positive definite matrix.
Assumption 3.
T > k and the OLS residuals, e i t , defined by (4), are not all zeros with probability approaching one.
Assumptions 2 and 3 are standard for model (1); see Pesaran [14] and Pesaran et al. [11]. We impose the assumption that the regressors are strictly exogenous. We do not impose any restrictions on the distribution of the errors or on the relative convergence speed of (N, T). This framework is quite general, while LM-type tests usually impose the normality assumption and restrictions on the relative speed of N and T, namely $N/T \to c \in (0, \infty)$.
Under these assumptions, the OLS estimates for model (1) are consistent, but inefficient. We focus on the term used in Pesaran's CD test [14]
$$T_n = \left(\frac{2}{N(N-1)}\right)^{1/2} \sum_{i=2}^{N} \sum_{j=1}^{i-1} \hat\rho_{ij}. \qquad (16)$$
In the next section, we derive the first two moments of this test statistic and then its limiting distribution under this general, unknown form of serial correlation over time.

3. Asymptotics

3.1. Asymptotic Distribution under the Null

In this section, we study the asymptotics of the test statistic $T_n$ defined in (16). To derive its limiting distribution, we first consider its first two moments.
Theorem 1.
Under Assumptions 1–3 and the null given in (2),
$$E(T_n) = 0 \qquad (17)$$
and
$$\gamma^2 = \mathrm{var}(T_n) = \frac{2}{N(N-1)} \sum_{i=2}^{N} \sum_{j=1}^{i-1} E(\hat\rho_{ij}^2) = \frac{2}{N(N-1)} \sum_{i=2}^{N} \sum_{j=1}^{i-1} \frac{\mathrm{tr}(M_j \Sigma M_j M_i \Sigma M_i)}{\mathrm{tr}(M_i \Sigma)\, \mathrm{tr}(M_j \Sigma)}, \qquad (18)$$
where $M_i = I_T - X_i(X_i'X_i)^{-1}X_i'$, and Σ is defined by (14).
Theorem 1 shows that the mean of the test statistic is zero. Its variance depends on Σ , which is a generic form containing serial correlation.
In fact, as shown in the proof of Theorem 1 (see Appendix B), $E(\hat\rho_{ij}^2) = \mathrm{tr}(M_j \Sigma M_j M_i \Sigma M_i) / [\mathrm{tr}(M_i \Sigma)\, \mathrm{tr}(M_j \Sigma)]$. In the special case where the error terms are independent over time, $\Sigma = I_T$, and $E(\hat\rho_{ij}^2)$ reduces to $\mathrm{tr}(M_j M_i)/(T-k)^2$, which yields the results given in Equation (7) for the $LM_{PUY}$ test statistic with no serial correlation. However, with serial correlation in the errors, an extra bias term is introduced in $LM_{PUY}$, since
$$\frac{\mathrm{tr}(M_j \Sigma M_j M_i \Sigma M_i)}{\mathrm{tr}(M_i \Sigma)\, \mathrm{tr}(M_j \Sigma)} - \frac{\mathrm{tr}(M_j M_i)}{(T-k)^2} \ne 0, \quad \text{if } \Sigma \ne I_T. \qquad (19)$$
More specifically, let us assume that $u_i$, $i = 1, \dots, N$, are observable; then $E(\rho_{ij}^2) = \mathrm{tr}(\Sigma^2)/\mathrm{tr}^2(\Sigma)$. For the MA(1) process defined by (12), $\mathrm{tr}(\Sigma^2)/\mathrm{tr}^2(\Sigma) = 1/T + \theta^2/(T + T\theta^2)$, while $\mathrm{tr}(\Sigma^2)/\mathrm{tr}^2(\Sigma) = 1/T$ for $\theta = 0$. The extra term $\theta^2/(T + T\theta^2)$ accumulates in the LM test statistic and leads to extra bias as $N \to \infty$. As discussed above, we expect $LM_{PUY}$ to have serious size distortions when serial correlation is present in the disturbances.
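A small numerical illustration of the gap in (19) is sketched below (ours; the regressor draws, seed and sample sizes are arbitrary assumptions): with MA(1) errors, the exact second moment of $\hat\rho_{ij}$ differs from the value $\mathrm{tr}(M_i M_j)/(T-k)^2$ used to center $LM_{PUY}$.

```python
import numpy as np

rng = np.random.default_rng(2)
T, k, theta = 50, 2, 0.8
Sigma = (1 + theta**2) * np.eye(T) + theta * (np.eye(T, k=1) + np.eye(T, k=-1))

def annihilator(X):
    # M = I_T - X (X'X)^{-1} X'
    return np.eye(len(X)) - X @ np.linalg.inv(X.T @ X) @ X.T

Mi = annihilator(rng.standard_normal((T, k)))
Mj = annihilator(rng.standard_normal((T, k)))
Bi, Bj = Mi @ Sigma @ Mi, Mj @ Sigma @ Mj

exact = np.trace(Bi @ Bj) / (np.trace(Bi) * np.trace(Bj))   # E(rho_hat_ij^2) under Sigma
no_serial = np.trace(Mi @ Mj) / (T - k) ** 2                # centering used when Sigma = I_T
print(exact, no_serial)                                     # the two values differ
```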
Unlike the LM-type tests, the test statistic $T_n$ is centered at zero; it does not need a bias adjustment. Note that if the $u_{it}$ are independent over time, our model reduces to that of Pesaran [14]. Let $\gamma_0^2$ be the variance of $T_n$ without serial correlation; it can be written as
$$\gamma_0^2 = \frac{2}{N(N-1)} \sum_{i=2}^{N} \sum_{j=1}^{i-1} \left[\frac{T - 2k}{(T-k)^2} + \frac{\mathrm{tr}(P_{X_i} P_{X_j})}{(T-k)^2}\right], \qquad (20)$$
where $P_{X_i} = X_i(X_i'X_i)^{-1}X_i'$ and $P_{X_j} = X_j(X_j'X_j)^{-1}X_j'$. The above result is the exact variance of $T_n$ without serial correlation; it is derived by Pesaran [15]. A modified version of $CD_P$ is also given by Pesaran [15] using this exact variance. From Theorem 1, $\gamma^2$ is different from $\gamma_0^2$ if $\Sigma \ne I_T$. Hence, we also expect $CD_P$ to have size distortions when serial correlation is present in the disturbances. Next, we consider the limiting distribution of the proposed test. The result is given in the following theorem.
Theorem 2.
Under Assumptions 1–3 and the null in (2), as $(N, T) \to \infty$, we have
$$\gamma^{-1} T_n \to_d N(0, 1). \qquad (21)$$
Theorem 2 shows that the appropriately standardized statistic $\gamma^{-1} T_n$ is asymptotically distributed as a standard normal. It is valid for N and T tending to infinity jointly, in any order. However, Σ is not observed in a panel data regression model, and an estimate of the variance $\gamma^2$ is needed for practical applications. Following Chen and Qin [23], an unbiased and consistent estimator of $\gamma^2$ under the null is obtained using the cross-validation approach given in the following theorem:
Theorem 3.
Let $\hat\gamma^2 = \frac{1}{N(N-1)} \sum_{i=2}^{N} \sum_{j=1}^{i-1} \left[v_i'(v_j - \bar v_{(i,j)})\right]\left[v_j'(v_i - \bar v_{(i,j)})\right]$, where $\bar v_{(i,j)} = \frac{1}{N-2} \sum_{1 \le \tau \le N,\, \tau \ne i,j} v_\tau$. Under Assumptions 1–3 and the null in (2), $E(\hat\gamma^2) = \gamma^2$. As $(N, T) \to \infty$,
$$\hat\gamma^2 \to_p \gamma^2.$$
Define
$$CD_R = \hat\gamma^{-1} T_n. \qquad (22)$$
As $(N, T) \to \infty$,
$$CD_R \to_d N(0, 1). \qquad (23)$$
Theorem 3 shows that $\hat\gamma^2$ is a good approximation of the variance, and we do not need to specify the structure of Σ. In other words, the test statistic allows the error terms of model (1) to be dependent over time. Furthermore, $CD_R$ is a modified version of $CD_P$, so the two are likely to perform very similarly across many model specifications (see Pesaran [14]).
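A sketch of the resulting statistic (ours, following (16) and the expressions in Theorem 3; illustrative only, not the authors' code) is given below: the numerator $T_n$ averages the pair-wise correlations, and the cross-validation estimator $\hat\gamma^2$ replaces the unknown variance.

```python
import numpy as np

def cd_r_statistic(V):
    """V: T x N matrix whose columns are the scaled residual vectors v_i."""
    T, N = V.shape
    rho_hat = V.T @ V
    iu = np.triu_indices(N, k=1)
    t_n = np.sqrt(2.0 / (N * (N - 1))) * np.sum(rho_hat[iu])      # Eq. (16)

    col_sum = V.sum(axis=1)
    gamma2 = 0.0
    for i in range(1, N):
        for j in range(i):
            # leave-(i,j)-out mean of the scaled residual vectors
            v_bar = (col_sum - V[:, i] - V[:, j]) / (N - 2)
            gamma2 += (V[:, i] @ (V[:, j] - v_bar)) * (V[:, j] @ (V[:, i] - v_bar))
    gamma2 /= N * (N - 1)                                         # Theorem 3
    return t_n / np.sqrt(gamma2)                                  # CD_R, Eq. (22)
```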

3.2. Local Power Properties

We now consider the power analysis of the test. Naturally, the power properties depend on the specification of the alternative. One general alternative specification that allows for global cross-sectional correlation in panels is the unobserved multi-factor model. Under this alternative, the new error terms are defined by
$$u_i^{*} = u_i + \sigma_i F \lambda_i = \sigma_i (\Gamma \varepsilon_i + F \lambda_i),$$
where $F = (f_1, f_2, \dots, f_T)'$ denotes the $T \times r$ common factor matrix and $\lambda_i$ is the $r \times 1$ factor loading vector. Under the null hypothesis, $\lambda_i = 0$, for all i. We now consider the following Pitman-type local alternative:2
$$H_a: \lambda_i = \frac{1}{T^{1/4} N^{1/2}} \delta_i, \quad \text{for some } i, \qquad (24)$$
where $\delta_i$ is a non-random and non-zero $r \times 1$ vector, which does not depend on N or T. To simplify the analysis, we add the following assumption:
Assumption 4.
(1) $f_t \sim IID(0, I_r)$; (2) $f_t$ is independent of $\varepsilon_{it}$ and $x_{it}$, for all i and t; (3) for each i, $T^{-1/2} \sum_{t=1}^{T} u_{it} f_t = O_p(1)$, $T^{-1/2} \sum_{t=1}^{T} x_{it} f_t' = O_p(1)$ and $T^{-1} \sum_{t=1}^{T} f_t f_t' = I_r + O_p(T^{-1/2})$; (4) $T^{-1/2} X_i' X_j = O_p(1)$ and $T^{-1/2} X_i' u_i = O_p(1)$, for all i and j.
The following theorem gives the power properties under the local alternative (24).
Theorem 4.
Under Assumptions 1–4 and the local alternative (24), as $(N, T) \to \infty$,
$$\gamma^{-1} T_n \to_d N(\psi, 1), \qquad (25)$$
where $\psi = \mathrm{plim}_{(N,T) \to \infty}\, \gamma^{-1} \left(\frac{2}{N(N-1)}\right)^{1/2} \sum_{i=2}^{N} \sum_{j=1}^{i-1} \frac{T^{1/2} N^{-1}\, \delta_i' \delta_j}{\mathrm{tr}^{1/2}(M_i \Sigma)\, \mathrm{tr}^{1/2}(M_j \Sigma)} \ne 0$.
From Theorem 4, the test has nontrivial power against local alternatives that contract to the null at the rate $T^{1/4} N^{1/2}$. Hence, Theorem 4 establishes the consistency of the proposed test at the rate $N\sqrt{T}$ under the alternative, as long as $\psi \ne 0$.

4. Monte Carlo Simulations

This section conducts Monte Carlo simulations to examine the empirical size and power of the proposed test $CD_R$ defined in (22) in heterogeneous panel data regression models. We also look at the performance of $LM_{PUY}$ and $CD_P$, defined by (9) and (10), respectively, for comparison purposes. We consider four scenarios: (1) the errors are independent over time, with no serial correlation; (2) the errors follow a Moving Average model of order one (MA(1)) over time; (3) the errors follow an Auto-Regressive model of order one (AR(1)) over time; (4) the errors follow an Auto-Regressive Moving Average model of order (1,1) (ARMA(1,1)) over time. Finally, we provide small sample evidence on the power of the modified $CD_R$ test against factor and first-order spatial auto-regressive alternatives, which are popular in economics for modeling cross-sectional correlation.

4.1. Experimental Design

Following Pesaran et al. [11], our experiments use the following data-generating process
$$y_{it} = \alpha_i + \beta_i x_{it} + u_{it}, \quad i = 1, \dots, N; \; t = 1, \dots, T, \qquad (26)$$
$$x_{it} = \eta x_{it-1} + \upsilon_{it}, \qquad (27)$$
where $\alpha_i \sim IID\,N(1, 1)$ and $\beta_i \sim IID\,N(1, 0.04)$. $x_{it}$ is a strictly exogenous regressor; we set $\eta = 0.6$ and $\upsilon_{it} \sim IID\,N\!\left(0, \phi_i^2/(1 - 0.6^2)\right)$ with $\phi_i \sim IID\,\chi^2(6)/6$, for $i = 1, \dots, N$. The error terms of (26) are generated using the following four data-generating processes
$$(1)\ \mathrm{IID}: \quad u_{it} = \xi_{it}; \qquad (28)$$
$$(2)\ \mathrm{MA}(1): \quad u_{it} = \xi_{it} + \theta \xi_{it-1}; \qquad (29)$$
$$(3)\ \mathrm{AR}(1): \quad u_{it} = \rho u_{it-1} + \xi_{it}; \qquad (30)$$
$$(4)\ \mathrm{ARMA}(1,1): \quad u_{it} = \rho u_{it-1} + \xi_{it} + \theta \xi_{it-1}, \qquad (31)$$
where $\xi_{it} = \sigma_i \varepsilon_{it}$, $\sigma_i^2 \sim IID\,\chi^2(2)/2$ and $\varepsilon_{it} \sim IID(0, 1)$. We further set $\theta = 0.8$ and $\rho = 0.6$. To check the robustness of the tests to non-normal distributions, the $\varepsilon_{it}$ are generated from a Normal(0,1) distribution and from a standardized Chi-squared distribution, $\chi^2(2)/2 - 1$.
To examine the empirical power of the tests, we consider two different cross-sectional correlation alternatives: factor and spatial models. The factor model is generated by
$$u_{it}^{*} = \lambda_i f_t + u_{it}, \quad \text{for } i = 1, \dots, N; \; t = 1, \dots, T, \qquad (32)$$
where $f_t \sim IID\,N(0, 1)$ and $\lambda_i \sim IID\,U[0.1, 0.3]$. In this case, $u_{it}^{*}$ replaces $u_{it}$ in (26) for the power studies, and $u_{it}$ is generated by the four scenarios defined by (28)–(31), respectively. For the spatial model, we consider a first-order spatial auto-regressive model (SAR(1)),
$$u_{it}^{*} = \delta \left(0.5\, u_{i-1,t}^{*} + 0.5\, u_{i+1,t}^{*}\right) + u_{it}, \qquad (33)$$
where $\delta = 0.4$ and $u_{it}$ is defined by (28)–(31), respectively.
The experiments are conducted for N = 10, 20, 30, 50, 100, 200 and T = 10, 20, 30, 50, 100. For each (N, T) pair, we run 2000 replications. To obtain the empirical size, we conduct the proposed test $CD_R$ and $CD_P$ at the two-sided 5% nominal significance level and $LM_{PUY}$ at the positive one-sided 5% nominal significance level.
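A condensed sketch of one Monte Carlo draw under the design in (26)–(31) is given below (AR(1) errors shown; the seed, burn-in length and helper names are our assumptions, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, eta, rho = 50, 100, 0.6, 0.6
burn = 50

alpha = rng.normal(1.0, 1.0, N)
beta = rng.normal(1.0, 0.2, N)                     # variance 0.04
phi2 = rng.chisquare(6, N) / 6.0
sigma2 = rng.chisquare(2, N) / 2.0

# regressor: x_it = 0.6 x_{i,t-1} + v_it, started from a burn-in period
v = rng.normal(0.0, np.sqrt(phi2 / (1 - eta**2)), (burn + T, N))
x = np.zeros((burn + T, N))
for t in range(1, burn + T):
    x[t] = eta * x[t - 1] + v[t]
x = x[burn:]

# AR(1) disturbances u_it = rho u_{i,t-1} + xi_it, with xi_it = sigma_i eps_it
xi = np.sqrt(sigma2) * rng.standard_normal((burn + T, N))
u = np.zeros((burn + T, N))
for t in range(1, burn + T):
    u[t] = rho * u[t - 1] + xi[t]
u = u[burn:]

y = alpha + beta * x + u                           # one draw from model (26)
```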

4.2. Simulation Results

Table 1 reports the empirical size of $CD_P$, $LM_{PUY}$ and $CD_R$ for normal and chi-squared distributed errors. The error terms are assumed to be independent over time. The results show that all of the tests have the correct size for the different (N, T) combinations under both the normal and chi-squared scenarios. These results are consistent with the theoretical findings. The only exceptions occur for small N or T equal to 10, especially for $LM_{PUY}$. Table 2 reports the empirical size of the three tests with MA(1) error terms defined by (29). The results show that $CD_R$ has the correct size for all (N, T), but $CD_P$ has size distortions for different (N, T) combinations because the disturbances are MA(1) over time. For example, under the normality scenario, the size of $CD_P$ is 9.35% for N = 10 and T = 20; it becomes 11.1% when T grows to 100. $LM_{PUY}$ suffers serious size distortions because of the extra bias caused by ignoring serial correlation. From Table 2, the empirical size of $LM_{PUY}$ is 100% as N or T becomes larger than 30. Table 3 and Table 4 report the empirical size of the tests with AR(1) and ARMA(1,1) errors under the two distributions: the normal and chi-squared scenarios. Note that $CD_R$ is oversized in Table 4 for the chi-squared case when T = 10. However, it has the correct size as T gets larger than 20. In contrast, $LM_{PUY}$ has serious size issues, rejecting 100% of the time, and $CD_P$ is oversized by as much as 25%. Overall, in comparison with $CD_P$ and $LM_{PUY}$, the proposed test $CD_R$ controls for size distortions when serial correlation in the disturbances is present and is not much affected when it is absent.
Table 5 summarizes the size-adjusted power of $CD_R$ with MA(1), AR(1) and ARMA(1,1) errors under the factor model alternative. The results show that $CD_R$ performs reasonably well under the two distribution scenarios, especially for N and T larger than 10. Table 6 confirms the power properties of $CD_R$ for MA(1), AR(1) and ARMA(1,1) errors under the SAR(1) alternative, especially for large N and T.

5. Conclusions

In this paper, we find that in the large heterogeneous panel data model, $LM_{PUY}$ exhibits serious size distortions when there is serial correlation in the disturbances. While $CD_P$ is centered at zero, it still encounters size distortions caused by ignoring serial correlation. We modify Pesaran's $CD_P$ test to account for serial correlation of an unknown form in the error term and call it $CD_R$. This paper has several novel aspects: first, an unbiased and consistent estimate of the variance under our assumptions and the null of no cross-sectional correlation is proposed without knowing the form of serial correlation over time. Second, the limiting distribution of the test is derived as $(N, T) \to \infty$ in any order. Third, the test is distribution free. Simulations show that the proposed test $CD_R$ successfully controls for size distortions with serial correlation in the error term. It also has reasonable power under factor model and spatial auto-regressive SAR(1) alternatives for different serial correlation specifications.

Author Contributions

All authors contributed equally to the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

This Appendix contains the proofs of the main results in the text. It has two parts: Appendix A collects some useful lemmas, which are used frequently in the proofs of the theorems; Appendix B gives the proofs of all of the theorems in the paper.
Let us introduce some notation before proceeding. For two matrices $B = (b_{ij})$ and $C = (c_{ij})$, we define $B \circ C = (b_{ij} c_{ij})$. $\sum^{*}$ denotes summation over mutually-different indices; e.g., $\sum^{*}_{i_1, i_2, j_1, j_2}$ means summation over $\{i_1, i_2, j_1, j_2 : i_1, i_2, j_1, j_2 \text{ are mutually different}\}$.

Appendix A. Some Useful Lemmas

Lemma A1.
Let F and G be non-stochastic $N \times N$ symmetric and positive definite matrices. Define $r = \frac{u_i' F u_i}{u_i' G u_i}$. Under Assumption 1, we have
(a) $E(r^k) = \dfrac{E(\varepsilon_i' F \varepsilon_i)^k}{E(\varepsilon_i' G \varepsilon_i)^k}$;
(b) $E(\varepsilon_i' F \varepsilon_i) = \mathrm{tr}(F)$;
(c) $E(\varepsilon_i' F \varepsilon_i)^2 = 2\,\mathrm{tr}(F^2) + \mathrm{tr}^2(F) + \Delta\, \mathrm{tr}(F \circ F)$;
(d) $\mathrm{tr}(F \circ F) \le \mathrm{tr}(F^2)$.
The proof of part (a) is given by Lieberman [24], and the proofs of (b)–(d) follow from Proposition 1 of Chen et al. [10]; hence, we omit them here.
Lemma A2.
Define $B_j = M_j \Sigma M_j$, for any j. Under Assumptions 1–3 and the null in (2), we have
(a) $E(\hat\rho_{ij}^2) = \dfrac{\mathrm{tr}(B_i B_j)}{\mathrm{tr}(B_i)\, \mathrm{tr}(B_j)}$;
(b) $E(\hat\rho_{ij}^4) \le (3 + \Delta)(2 + \Delta)\, \dfrac{\mathrm{tr}\!\left[(B_i B_j)^2\right] + \mathrm{tr}^2(B_i B_j)}{\mathrm{tr}^2(B_i)\, \mathrm{tr}^2(B_j)}$;
(c) For any $j_1 \ne j_2$,
$$E(\hat\rho_{ij_1}^2 \hat\rho_{ij_2}^2) \le \frac{\left\{(2+\Delta)\left[\mathrm{tr}\!\left((B_i B_{j_1})^2\right) + \mathrm{tr}^2(B_i B_{j_1})\right]\right\}^{1/2} \left\{(2+\Delta)\left[\mathrm{tr}\!\left((B_i B_{j_2})^2\right) + \mathrm{tr}^2(B_i B_{j_2})\right]\right\}^{1/2}}{\mathrm{tr}(B_{j_1})\, \mathrm{tr}(B_{j_2})\, \mathrm{tr}^2(B_i)}.$$
Proof. 
Recall that the pair-wise correlation coefficients is defined as
ρ ^ i j = v i v j = t = 1 T v i t v j t ,
where v i are the scaled residual vectors defined by v i = e i ( e i e i ) 1 / 2 . e i is the OLS residual vector from the individual-specific least squares regression, and it is given by
e i = M i u i = M i σ i Γ ε i , with M i = I T P X i = I T X i X i X i 1 X i ,
where M i is idempotent. Consider part (a),
E ρ ^ i j 2 = E v i v j 2 = E e i e j e i e i 1 / 2 e j e j 1 / 2 2 = E e i A j e i e i e i ,
where A j = e j e j e j e j . Then
E ρ ^ i j 2 = E E ρ ^ i j 2 | ε j = E E e i A j e i e i e i | ε j .
Since e i = M i σ i Γ ε i , and using parts (a) and (b) of Lemma A1, we have E e i A j e i e i e i | ε j = tr Γ M i A j M i Γ tr Γ M i Γ . Moreover,
E tr Γ M i A j M i Γ = E ε j Γ M j M i Γ Γ M i M j Γ ε j ε j Γ M j Γ ε j = tr Γ M j M i Γ Γ M i M j Γ tr Γ M j Γ = tr M j Σ M j M i Σ M i tr M j Σ .
Together with the above results, we have:
E ρ ^ i j 2 = tr M j Σ M j M i Σ M i tr M i Σ tr M j Σ = tr B i B j tr B i tr B j .
Consider part (b),
E ρ i j 4 = E E ρ i j 4 v j = E E e i A j e i e i e i 2 v j = E E ( ε i Γ M i A j M i Γ ε i ) 2 tr 2 Γ M i Γ v j = E 2 tr Γ M i A j M i Γ 2 + tr 2 Γ M i A j M i Γ + Δ tr Γ M i A j M i Γ Γ M i A j M i Γ tr 2 ( B i ) .
Using part (a) of Lemma A1, we have
E tr 2 Γ M i A j M i Γ = E ε j Γ M j M i Γ Γ M i M j Γ ε j ε j Γ M j Γ ε j 2 = E ε j Γ M j M i Γ Γ M i M j Γ ε j 2 E ε j Γ M j Γ ε j 2 .
Using part (c) of Lemma A1, we also have
E ε j Γ M j M i Γ Γ M i M j Γ ε j 2 = 2 tr ( Γ M j M i Γ Γ M i M j Γ ) 2 + tr 2 ( Γ M j M i Γ Γ M i M j Γ ) + Δ tr Γ M j M i Γ Γ M i M j Γ Γ M j M i Γ Γ M i M j Γ = 2 tr B i B j 2 + tr 2 B i B j + Δ tr Γ M j M i Γ Γ M i M j Γ Γ M j M i Γ Γ M i M j Γ 2 + Δ tr B i B j 2 + tr 2 B i B j .
With the fact that E ε j Γ M j Γ ε j = tr ( B j ) , we obtain
E tr 2 Γ M i A j M i Γ 2 + Δ tr B i B j 2 + tr 2 B i B j tr 2 ( B j ) .
Next, we consider E tr Γ M i A j M i Γ 2 .
E tr Γ M i A j M i Γ 2 = E ε j Γ M j M i Γ Γ M i M j Γ ε j ε j Γ M j Γ ε j 2 2 + Δ tr B i B j 2 + tr 2 B i B j tr 2 ( B j ) .
Hence,
E ρ ^ i j 4 3 + Δ 2 + Δ tr B i B j 2 + tr 2 B i B j tr 2 ( B i ) tr 2 ( B j ) .
Consider part (c); since
E ρ ^ i j 1 2 ρ ^ i j 2 2 = EE ρ ^ i j 1 2 ρ ^ i j 2 2 | v i = E E ρ ^ i j 1 2 | v i E ρ ^ i j 2 2 | v i = E v i B j 1 v i v i B j 2 v i tr B j 1 tr ( B j 2 ) .
Note that E v i B j 1 v i v i B j 2 v i E v i B j 1 v i 2 1 / 2 E v i B j 2 v i 2 1 / 2 by using the Cauchy–Schwarz inequality and
E v i B j 1 v i 2 = E ε i Γ M i M j 1 Γ Γ M j 1 M i Γ ε i ε i Γ M i Γ ε i 2 2 + Δ tr B i B j 1 2 + tr 2 B i B j 1 tr 2 ( B i ) .
Hence:
E ρ ^ i j 1 2 ρ ^ i j 2 2 2 + Δ tr B i B j 1 2 + tr 2 B i B j 1 1 / 2 2 + Δ tr B i B j 2 2 + tr 2 B i B j 2 1 / 2 tr B j 1 tr ( B j 2 ) tr 2 ( B i ) .
Lemma A3.
Under Assumptions 1–3 and the null in (2), for any fixed positive integer k, we have
(a) $\frac{1}{T}\, \mathrm{tr}(\Sigma^k) = O(1)$;
(b) $\frac{1}{T}\, \mathrm{tr}(B_i^k) = O(1)$;
(c) $\frac{1}{T}\, \mathrm{tr}(B_{i_1} B_{i_2} \cdots B_{i_k}) = O(1)$, for $i_1 \ne i_2 \ne \cdots \ne i_k$.
Proof. 
Part (a) follows directly from Bai and Zhou [20]; hence, we omit it here. Next, we consider part (b). Since $I_T - P_{X_i}$ is idempotent for any $i = 1, \dots, N$, we have $\mathrm{tr}(B_i^k) = \mathrm{tr}\{[(I_T - P_{X_i}) \Sigma (I_T - P_{X_i})]^k\} = \mathrm{tr}\{[(I_T - P_{X_i}) \Sigma]^k\}$. By using the inequality that for any positive definite matrices A and B (see Bushell and Trustrum [25])
$$\mathrm{tr}\left[(AB)^k\right] \le \mathrm{tr}\left(A^k B^k\right),$$
we have
$$\mathrm{tr}(B_i^k) = \mathrm{tr}\left\{\left[(I_T - P_{X_i}) \Sigma\right]^k\right\} \le \mathrm{tr}\left(\Sigma^k\right).$$
Using part (a), then
$$\frac{1}{T}\, \mathrm{tr}(B_i^k) \le \frac{1}{T}\, \mathrm{tr}(\Sigma^k) = O(1).$$
For part (c), each $B_{i_l}$, $l = 1, \dots, k$, is positive semi-definite, and $B_{i_l} \le \Sigma$, $l = 1, \dots, k$. Using the fact that for any matrices A, B with $A \le B$ and C positive definite, $\mathrm{tr}(AC) \le \mathrm{tr}(BC)$, we conclude that
$$\frac{1}{T}\, \mathrm{tr}(B_{i_1} B_{i_2} \cdots B_{i_k}) \le \frac{1}{T}\, \mathrm{tr}(\Sigma^k) = O(1).$$
Part (c) holds. ☐

Appendix B. Proof of the Theorems

Appendix B.1. Proof of Theorem 1

Proof. 
Since $E(e_i \mid X_i) = 0$ and $\varepsilon_i$, $i = 1, \dots, N$, are independent, it is easy to show that
$$E(\hat\rho_{ij}) = 0,$$
which further implies $E(T_n) = 0$. Next, we consider the variance of $T_n$.
$$\mathrm{var}\left(\sum_{i=2}^{N} \sum_{j=1}^{i-1} \hat\rho_{ij}\right) = E\left(\sum_{i=2}^{N} \sum_{j=1}^{i-1} \hat\rho_{ij}\right)^2 = E\left(\sum_{i_1=2}^{N} \sum_{j_1=1}^{i_1-1} \sum_{i_2=2}^{N} \sum_{j_2=1}^{i_2-1} \hat\rho_{i_1 j_1} \hat\rho_{i_2 j_2}\right).$$
To calculate the above term, we have three cases to discuss:
(1) $i_1, i_2, j_1, j_2$ are mutually different: $E(\hat\rho_{i_1 j_1} \hat\rho_{i_2 j_2}) = 0$.
(2) $i_1 = i_2$, $j_1 = j_2$: by using Lemma A2, we have $E(\hat\rho_{ij}^2) = \frac{\mathrm{tr}(B_i B_j)}{\mathrm{tr}(B_i)\, \mathrm{tr}(B_j)}$.
(3) $i_1 = i_2$, $i_1 \ne j_1 \ne j_2$: since $v_{i_1}$, $v_{j_1}$ and $v_{j_2}$ are independent, we have $E(\hat\rho_{i_1 j_1} \hat\rho_{i_1 j_2}) = E(v_{i_1}' v_{j_1}\, v_{i_1}' v_{j_2}) = 0$.
Hence, the above results give us the variance of $T_n$, which is
$$\gamma^2 = \mathrm{var}(T_n) = \frac{2}{N(N-1)} \sum_{i=2}^{N} \sum_{j=1}^{i-1} \frac{\mathrm{tr}(M_j \Sigma M_j M_i \Sigma M_i)}{\mathrm{tr}(M_i \Sigma)\, \mathrm{tr}(M_j \Sigma)} = \frac{2}{N(N-1)} \sum_{i=2}^{N} \sum_{j=1}^{i-1} \frac{\mathrm{tr}(B_i B_j)}{\mathrm{tr}(B_i)\, \mathrm{tr}(B_j)},$$
and Theorem 1 is proven. ☐

Appendix B.2. Proof of Theorem 2

Proof. 
To prove this theorem, we employ the martingale central limit theorem (Billingsley [26]). For that purpose, we define $\mathcal{F}_0 = \{\phi, \Omega\}$ and $\mathcal{F}_{Ni}$ as the σ-field generated by $\varepsilon_1, \varepsilon_2, \dots, \varepsilon_i$ for $1 \le i \le N$. Let $E_{Nr}(\cdot)$ denote the conditional expectation given the filtration $\mathcal{F}_{Nr}$ ($E_0(\cdot) = E(\cdot)$). Write $L_n = \sum_{i=1}^{N} D_{N,i}$ with $D_{N,1} = 0$. More specifically,
$$D_{N,i} = \left(\frac{2}{N(N-1)}\right)^{1/2} \sum_{j=1}^{i-1} v_i' v_j.$$
For every N, we can further show that
$$E\left(D_{N,i} \mid \mathcal{F}_{N,i-1}\right) = 0.$$
Hence, $\{D_{N,i}\}_{1 \le i \le N}$ is a martingale difference sequence with respect to $\{\mathcal{F}_{N,i}\}_{1 \le i \le N}$. Let $\delta_{Ni}^2 = E\left(D_{Ni}^2 \mid \mathcal{F}_{N,i-1}\right)$. By applying the martingale central limit theorem, it is sufficient to show that, as $(N, T) \to \infty$,
$$\frac{\sum_{i=1}^{N} \delta_{Ni}^2}{\mathrm{var}(T_n)} \to_p 1 \quad \text{and} \quad \frac{\sum_{i=1}^{N} E(D_{N,i}^4)}{\mathrm{var}^2(T_n)} \to 0.$$
Lemmas B1 and B2 establish these two conditions. Hence, we can apply the martingale central limit theorem, and as $(N, T) \to \infty$, we have
$$\gamma^{-1} T_n \to_d N(0, 1). \qquad \square$$
Lemma B1.
Under Assumptions 1–3 and the null (2), as $(N, T) \to \infty$,
$$\frac{\sum_{i=1}^{N} \delta_{Ni}^2}{\mathrm{var}(T_n)} \to_p 1,$$
where $\delta_{Ni}^2 = E\left(D_{Ni}^2 \mid \mathcal{F}_{N,i-1}\right)$.
Proof. 
To prove Lemma B1, we first show that E i = 1 N δ N i 2 = var ( T n ) . Then, we will show that as ( N , T ) , var i = 1 N δ N i 2 / var 2 ( T n ) 0 . It is easy to show that
E i = 1 N δ N i 2 = i = 1 N E E D N i 2 F N , i 1 = var T n .
Next, we only need to show that the second condition is satisfied. We first consider the magnitude of var( T n ). From Lemma A3, we know that
tr B j B i tr B i tr B j = O ( T 1 ) ,
which implies var 2 ( T n ) = O ( T 2 ) . Now, consider var ( i = 1 N δ N i 2 ) . Let Q j = j = 1 i 1 v j , then:
δ N i 2 = E D N i 2 F N , i 1 = 2 N ( N 1 ) E v i Q j Q j v i F N , i 1 = 2 N ( N 1 ) E ε i Γ M i Q j Q j M i Γ ε i ( ε i M i Γ Γ M i ε i ) F N , i 1 = 2 N ( N 1 ) Q j M i Γ Γ M i Q j tr B i .
Therefore, we need to show the magnitude of var i = 1 N Q j M i Γ Γ M i Q j . Rewrite Q j M i Γ Γ M i Q j = j 1 = 1 i 1 j 2 = 1 i 1 v j 1 B i v j 2 and:
E j 1 = 1 i 1 j 2 = 1 i 1 v j 1 M i Γ Γ M i v j 2 = E j = 1 i 1 v j B i v j = j = 1 i 1 E ε j Γ M j B i M j Γ ε j ( ε j Γ M j Γ ε j ) = j = 1 i 1 tr B j B i tr B j .
Next, we consider E j 1 = 1 i 1 j 2 = 1 i 1 v j 1 B i v j 2 2 .
E j 1 = 1 i 1 j 2 = 1 i 1 v j 1 B i v j 2 2 = E j 1 = 1 i 1 j 2 = 1 i 1 j 3 = 1 i 1 j 4 = 1 i 1 v j 1 B i v j 2 v j 3 B i v j 4 .
To calculate the magnitude order of the above term, we have three cases to discuss:
(1)
j 1 = j 2 = j 3 = j 4 = j .
E v j B i v j 2 = E ε j Γ M j B i M j Γ ε j 2 ( ε j Γ M j Γ ε j ) 2 = E ε j Γ M j B i M j Γ ε j 2 E ε j Γ M j Γ ε j 2 = tr 2 B j B i + 2 tr B j B i 2 + Δ tr B j B i B j B i tr 2 B j 3 + Δ tr 2 B j B i tr 2 B j .
(2)
j 1 = j 2 j 3 = j 4 .
E v j 1 B i v j 1 v j 3 B i v j 3 = E v j 1 B i v j 1 E v j 3 B i v j 3 = tr B j 1 B i tr B j 1 tr B j 3 B i tr B j 3 .
(3)
j 1 = j 3 j 2 = j 4 .
E v j 1 B i v j 2 v j 1 B i v j 2 = E E v j 1 B i v j 2 v j 2 B i v j 1 v j 2 = E tr Γ M j 1 B i M j 2 Γ ε j 2 ε j 2 Γ M j 2 B i M j 1 Γ tr M j 1 Σ ε j 2 Γ M j 2 Γ ε j 2 = tr B j 2 B i B j 1 B i tr B j 1 tr B j 2 .
Hence,
var ( Q j Γ M i Γ Q j ) = E ( Q j Γ M i Γ Q j ) 2 E Q j Γ M i Γ Q j 2 j 1 = 1 i 1 j 2 = 1 , j 2 j 1 i 1 tr B j 1 B i tr B j 2 B i tr B j 1 tr B j 2 + 3 + Δ j = 1 i 1 tr 2 B j B i tr 2 B j + 2 j 1 = 1 i 1 j 2 = 1 , j 2 j 1 i 1 tr B j 2 B i B j 1 B i tr B j 1 tr B j 2 j = 1 i 1 tr B j B i tr B j 2 = 2 j 1 = 1 i 1 j 2 = 1 , j 2 j 1 i 1 tr B j 2 B i B j 1 B i tr B j 1 tr B j 2 + 2 + Δ j = 1 i 1 tr 2 B j B i tr 2 B j .
It further leads to
var i = 1 N δ N i 2 4 N 2 ( N 1 ) 2 N i = 1 N var δ N i 2 8 N ( N 1 ) 2 i = 1 N j 1 = 1 i 1 j 2 = 1 , j 2 j 1 i 1 tr B j 2 B i B j 1 B i tr 2 B i tr B j 1 tr B j 2 + 4 ( 2 + Δ ) N ( N 1 ) 2 i = 1 N j = 1 i 1 tr 2 B j B i tr 2 B i tr 2 B j .
By using Lemma A3, we have
var i = 1 N δ N i 2 K O 1 T 3 + O ( 1 N T 2 ) .
As ( N , T ) , var i = 1 N δ N i 2 / var 2 ( T n ) 0 . Lemma B1 is proven. ☐
Lemma B2.
Under Assumptions 1–3 and the null (2), as $(N, T) \to \infty$,
$$\frac{\sum_{i=1}^{N} E(D_{N,i}^4)}{\mathrm{var}^2(T_n)} \to 0.$$
Proof. 
Rewrite
E D N , i 4 = E E D N , i 4 | F N , i 1 = E E v i Q j Q j v i 2 F N , i 1 = E tr 2 Γ M i Q j Q j M i Γ + 2 tr ( Γ M i Q j Q j M i Γ ) 2 + Δ tr Γ M i Q j Q j M i Γ Γ M i Q j Q j M i Γ tr 2 B i .
By using the results from Lemma B1, we have
E tr 2 Γ M i Q j Q j M i Γ = E Q j B i Q j 2 j 1 = 1 i 1 j 3 = 1 , j 3 j 1 i 1 tr B j 1 B i tr B j 3 B i tr B j 1 tr B j 3 + 3 + Δ j = 1 i 1 tr 2 B j B i tr 2 B j + 2 j 1 = 1 i 1 j 2 = 1 , j 2 j 1 i 1 tr B j 2 B i B j 1 B i tr B j 1 tr B j 2 .
Since
tr ( Γ M i Q j Q j M i Γ ) 2 tr 2 Γ M i Q j Q j M i Γ
and
tr Γ M i Q j Q j M i Γ Γ M i Q j Q j M i Γ tr 2 Γ M i Q j Q j M i Γ ,
thus
i = 1 N E D N , i 4 K N 2 ( N 1 ) 2 i = 1 N j 1 = 1 i 1 j 2 = 1 , j 2 j 1 i 1 tr B j 1 B i tr B j 2 B i tr 2 B i tr B j 1 tr B j 3 + K N 2 ( N 1 ) 2 i = 1 N j = 1 i 1 tr 2 B j B i tr 2 B i tr 2 B j + K N 2 ( N 1 ) 2 i = 1 N j 1 = 1 i 1 j 2 = 1 , j 2 j 1 i 1 tr B j 2 B i B j 1 B i tr 2 B i tr B j 1 tr B j 2 K 2 N T 2 = O 1 N T 2 .
Hence, i = 1 N E D N , i 4 var 2 T n 0 , as N , T . Lemma B2 is proven. ☐

Appendix B.3. Proof of Theorem 3

Proof. 
We want to show that
$$E(\hat\gamma^2) = \gamma^2 \quad \text{and} \quad \hat\gamma^2 - \gamma^2 = o_p(1).$$
Note that
$$\hat\gamma^2 = \frac{1}{2N(N-1)} \sum_{i,j}^{*} v_i'(v_j - \bar v_{(i,j)})\, v_j'(v_i - \bar v_{(i,j)}) = \frac{1}{2N(N-1)} \sum_{i,j}^{*} \left[(v_i' v_j)^2 - v_i' v_j\, v_j' \bar v_{(i,j)} - v_i' \bar v_{(i,j)}\, v_j' v_i + v_i' \bar v_{(i,j)}\, v_j' \bar v_{(i,j)}\right] = a_1 + a_2 + a_3 + a_4, \text{ say}.$$
It is easy to show that the first term satisfies $E(a_1) = \gamma^2$ and that $E(a_i) = 0$, $i = 2, 3, 4$. This proves the first part. By using Lemma A3 and Theorem 1, we have $\gamma^2 = O(T^{-1})$. Hence, to prove $\hat\gamma^2 - \gamma^2 = o_p(1)$, we only need to show that $\mathrm{var}(a_1) = o_p(T^{-2})$ and $a_i = o_p(\gamma^2)$, for $i = 2, 3, 4$. Let us consider $\mathrm{var}(a_1)$.
var ( a 1 ) = E ( a 1 2 ) γ 4 = 4 N 2 ( N 1 ) 2 E i = 1 N j = 1 i 1 ρ ^ i j 2 2 4 N 2 ( N 1 ) 2 i = 2 N j = 1 i 1 tr B j B i tr B i tr B j 2 = 4 N 2 ( N 1 ) 2 E i 1 = 2 N j 1 = 1 i 1 i 2 = 2 N j 2 = 1 i 2 1 ρ i 1 j 1 2 ρ i 2 j 2 2 4 N 2 ( N 1 ) 2 i = 2 N j = 1 i 1 tr B j B i tr B i tr B j 2 .
Now, we only consider the term E i 1 = 2 N j 1 = 1 i 1 i 2 = 2 N j 2 = 1 i 1 ρ i 1 j 1 2 ρ i 2 j 2 2 . There are three cases for this term, and Lemma A2 is used frequently:
(1)
i 1 , i 2 , j 1 and j 2 are mutually different.
E ρ i 1 j 1 2 ρ i 2 j 2 2 = tr B i 1 B j 1 tr B i 2 B j 2 tr B i 1 tr B i 1 tr B i 2 tr B i 2 = O p 1 T 2 .
(2)
i 1 = i 2 , j 1 = j 2 and i 1 j 1 .
E ρ i j 4 3 + Δ 2 + Δ tr B i B j 2 + tr 2 B i B j tr 2 ( B i ) tr 2 ( B j ) = O p 1 T 2 .
(3)
i 1 = i 2 , i 1 j 1 j 2 .
E ρ i j 1 2 ρ i j 2 2 2 + Δ tr B i B j 1 2 + tr 2 B i B j 1 1 / 2 2 + Δ tr B i B j 2 2 + tr 2 B i B j 2 1 / 2 tr B j 1 tr ( B j 2 ) tr 2 ( B i ) = O p 1 T 2 .
From the above results, we have
var ( a 1 ) = O p 1 N 2 T 2 .
Hence a 1 p γ 2 . Consider the second term a 2 , which is equal to 1 2 N ( N 1 ) N 2 i , j , τ N v i v j v j v τ . The first term of E i , j , τ N v i v j v j v τ 2 is
i , j 1 , j 2 , τ N E v i v j 1 v j 1 v τ v i v j 2 v j 2 v τ = i , j 1 , j 2 , τ N t r ( M j 2 M τ Σ M τ M i Σ M i M j 1 Σ M j 1 M j 2 Σ ) t r B τ t r B j 2 t r ( B j 1 ) t r B i = O N 4 T 3 ,
by using Lemmas A2 and A3. By using part (c) of Lemma A3, the second term of E i , j , τ N v i v j v j v τ 2 is
E i , j , τ N v i v j v j v τ 2 = O p ( N 3 T 2 ) .
Hence, a 2 = O p N 1 T 3 / 2 + O p N 3 / 2 T 1 , which further implies a 2 = o p ( γ 2 ) . Since a 2 = a 3 , a 3 = o p ( γ 2 ) . Consider a 4 ; it can be divided into two terms
1 2 N ( N 1 ) N 2 2 i , j , τ N v i v τ v j v τ and 1 2 N ( N 1 ) N 2 2 i , j , τ 1 , τ 2 N v i v τ 1 v j v τ 2 .
It is easy to show that the former term is O p N 1 a 2 , then it is o p ( γ 2 ) . We only need to consider the latter term E i , j , τ 1 , τ 2 N v i v τ 1 v j v τ 2 2 .
E i , j , τ 1 , τ 2 N v i v τ 1 v j v τ 2 2 = i , j , τ 1 , τ 2 N E v i v τ 1 2 v j v τ 2 2 = O N 4 T 2 ,
by using Lemma A2–A3. Hence, the latter term is O p ( N 2 T 1 ) . The above results together lead to a 4 = o p ( γ 2 ) . The first part of Theorem 3 holds; the second part of Theorem 3 is directly derived by using Theorem 2 and the first part of Theorem 3. ☐

Appendix B.4. Proof of Theorem 4

Proof. 
The OLS residuals under the local alternative are defined by M i u i = σ i M i Γ ε i + M i F λ i , thus
T n = 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 i 1 M i Γ ε i + M i F λ i M j Γ ε j + M j F λ j | | M i Γ ε i + M i F λ i | | | | M j Γ ε j + M j F λ j | | = 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 i 1 M i Γ ε i + M i F λ i M j Γ ε j + M j F λ j ε i Γ M i Γ ε i + 2 ε i Γ M i F λ i + λ i F M i F λ i 1 / 2 ε j Γ M j Γ ε j + 2 ε j Γ M j F λ j + λ j F M j F λ j 1 / 2 .
Consider the denominator. Note that E ε i Γ M i Γ ε i 2 = tr ( Σ M i ) 2 + 2 tr ( Σ M i ) 2 + Δ tr Σ M i Σ M i = O p T 2 , which lead to ε i Γ M i Γ ε i = O ( T ) . Consider the term ε i Γ M i F λ i . Since
ε i Γ M i F λ i = ε i Γ F λ i ε i Γ X i X i X i 1 X i F λ i .
From Assumption 4, we have X i F = O p T 1 / 2 , ε i Γ F = O p T 1 / 2 and ε i Γ X i = O p T 1 / 2 , which lead to | | ε i Γ F λ i | | = O p T 1 / 4 N 1 / 2 and | | ε i Γ X i X i X i 1 X i F λ i | | = O p ( T 1 / 4 N 1 / 2 ) . Hence, ε i Γ M i F λ i = o p ε i Γ M i Γ ε i . Similarly, by using Assumption 4, we also have λ i F M i F λ i = o p ε i Γ M i Γ ε i . From the above results, we further have
ε i Γ M i Γ ε i + 2 ε i Γ M i F λ i + λ i F M i F λ i = ( 1 + o p ( 1 ) ) ε i Γ M i Γ ε i .
It results in
T n = 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 i 1 ε i Γ M i M j Γ ε j + ε i Γ M i M j F λ j + λ i F M i M j Γ ε j + λ i F M i M j F λ j ( 1 + o p ( 1 ) ) ε i Γ M i Γ ε i 1 / 2 ( 1 + o p ( 1 ) ) ε j Γ M j Γ ε j 1 / 2 = T n 1 + T n 2 + T n 3 + T n 4 ,
where T n 1 = 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 i 1 ε i Γ M i M j Γ ε j D i j ; T n 2 = 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 i 1 ε i Γ M i M j F λ j D i j ; T n 3 = 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 i 1 λ i F M i M j Γ ε j D i j and T n 4 = 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 i 1 λ i F M i M j F λ j D i j with D i j = ( 1 + o ( 1 ) ) ε i Γ M i Γ ε i 1 / 2 ( 1 + o ( 1 ) ) ε j Γ M j Γ ε j 1 / 2 . From Theorem 2, γ 1 T n 1 d N ( 0 , 1 ) .
From Theorem 1, T n 1 = O p T 1 / 2 . Consider T n 2 . We observe that E ( T n 2 ) = 0 and
E T n 2 2 = 2 N ( N 1 ) E i = 2 N j = 1 i 1 ε i Γ M i M j F λ j D i j 2 = 2 N ( N 1 ) i = 2 N j = 1 i 1 E ε i Γ M i M j F λ j D i j 2 + i = 2 N j 1 = 1 i 1 j 2 j 1 i 1 E ε i Γ M i M j 1 F λ j 1 λ j 2 F M j 2 M i Γ ε i D i j 1 D i j 2 .
Consider the term ε i Γ M i M j F λ j .
ε i Γ M i M j F λ j = ε i Γ F λ j ε i Γ X i X i X i 1 X i F λ j ε i Γ X j ( X j X j ) 1 X j F λ j + ε i Γ X i ( X i X i ) 1 X i X j ( X j X j ) 1 X j F λ j .
Using Assumption 4 and under the local alternative, we first have | | ε i Γ F λ j | | = O p T 1 / 4 N 1 / 2 ; we then have | | ε i Γ X i ( X i X i ) 1 X i F λ j | | = O p T 1 / 4 N 1 / 2 since
ε i Γ X i ( X i X i ) 1 X i F = ε i Γ X i T X i X i T 1 X i F T = O p ( 1 ) ;
we last have | | ε i Γ X i ( X i X i ) 1 X i X j ( X j X j ) 1 X j F λ j | | = O p T 1 / 4 N 1 / 2 . Hence, | | ε i Γ M i M j F λ j | | = O p T 1 / 4 N 1 / 2 . Together with the fact that | | D i j | | = O p ( T ) , the first term of E ( T n 2 ) 2 is of order O p T 3 / 2 N 1 . Similar to the proof of above, | | ε i Γ M i M j 1 F λ j 1 λ j 2 F M j 2 M i Γ ε i | | = O p T 1 / 2 N 1 ; with the facts that | | D i j 1 | | = O p ( T ) and | | D i j 2 | | = O p ( T ) ; the second term of E ( T n 2 ) 2 is of order O p T 3 / 2 . Thus, T n 2 = O p T 3 / 4 = o p T n 1 . Similarly, T n 3 = o p T n 1 .
Consider T n 4 . Note that
λ i F M i M j F λ j = λ i F F λ j λ i F X i ( X i X i ) 1 X i F λ j λ i F X j ( X j X j ) 1 X j F λ j + λ i F X i ( X i X i ) 1 X i X j ( X j X j ) 1 X j F λ j .
From Assumption 4, we know that λ i F F λ j = λ i T ( I r + O p ( T 1 / 2 ) ) λ j p T λ i λ j . Since
F X i ( X i X i ) 1 X i F = F X i T X i X i T 1 X i F T = O p ( 1 ) ,
λ i F X i ( X i X i ) 1 X i F λ j = o p ( λ i F F λ j ) . Similarly, we can also show that the third and the fourth terms are of smaller order of the first term. Hence, λ i F M i M j F λ j = ( 1 + o p ( 1 ) ) λ i F F λ j .
Note that E ( λ i F F λ j ) = T λ i λ j 0 , and 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 N λ i F F λ j D i j = O p ( T 1 / 2 ) ; hence, γ 1 T n 4 = O p ( 1 ) . One can also show that D i j p tr 1 / 2 ( M i Σ ) tr 1 / 2 ( M j Σ ) . Let ψ = p l i m ( N , T ) γ 1 2 N ( N 1 ) 1 / 2 i = 2 N j = 1 i 1 T 1 / 2 N 1 δ i δ j tr 1 / 2 ( M i Σ ) tr 1 / 2 ( M j Σ ) ; from all of the above results, as ( N , T ) ,
γ 1 T n 1 ψ d N ( 0 , 1 ) .

References

  1. L. Lee. “Consistency and Efficiency of Least Squares Estimation for Mixed Regressive, Spatial Autoregressive Models.” Econom. Theory 18 (2002): 252–277. [Google Scholar] [CrossRef]
  2. D.W.K. Andrews. “Cross-Section Regression with Common Shocks.” Econometrica 73 (2005): 1551–1585. [Google Scholar] [CrossRef]
  3. L. Anselin, and A.K. Bera. “Spatial Dependence in Linear Regression Models with an Introduction to Spatial Econometrics.” In Handbook of Applied Economic Statistics. Edited by A. Ullah and D.E. Giles. New York, NY, USA: Marcel Dekker, 1998, pp. 237–289. [Google Scholar]
  4. B.H. Baltagi, S.H. Song, and W. Koh. “Testing Panel Data Regression Models with Spatial Error Correlation.” J. Econom. 117 (2003): 123–150. [Google Scholar] [CrossRef]
  5. T.S. Breusch, and A.R. Pagan. “The Lagrange Multiplier Test and Its Application to Model Specifications in Econometrics.” Rev. Econ. Stud. 47 (1980): 239–253. [Google Scholar] [CrossRef]
  6. I.M. Johnstone. “On the Distribution of the Largest Eigenvalue in Principal Components Analysis.” Ann. Stat. 29 (2001): 295–327. [Google Scholar] [CrossRef]
  7. T.F. Jiang. “The Limiting Distributions of Eigenvalues of Sample Correlation Matrices.” Sankhyā 66 (2004): 35–48. [Google Scholar]
  8. O. Ledoit, and M. Wolf. “Some Hypothesis Tests for the Covariance Matrix When the Dimension is Large Compared to the Sample Size.” Ann. Stat. 30 (2002): 1081–1102. [Google Scholar] [CrossRef]
  9. J.R. Schott. “Testing for Complete Independence in High Dimensions.” Biometrika 92 (2005): 951–956. [Google Scholar] [CrossRef]
  10. S.X. Chen, L.X. Zhang, and P.S. Zhong. “Tests for High Dimensional Covariance Matrices.” J. Am. Stat. Assoc. 105 (2010): 810–819. [Google Scholar] [CrossRef]
  11. M.H. Pesaran, A. Ullah, and T. Yamagata. “A Bias-Adjusted LM Test of Error Cross-Section Independence.” Econom. J. 11 (2008): 105–127. [Google Scholar] [CrossRef]
  12. B.H. Baltagi, Q. Feng, and C. Kao. “A Lagrange Multiplier Test for Cross-Sectional Dependence in a Fixed Effects Panel Data Model.” J. Econom. 170 (2012): 164–177. [Google Scholar] [CrossRef]
  13. B.H. Baltagi, Q. Feng, and C. Kao. “Testing for Sphericity in a Fixed Effects Panel Data Model.” Econom. J. 14 (2011): 25–47. [Google Scholar] [CrossRef]
  14. M.H. Pesaran. “General Diagnostic Test for Cross Section Dependence in Panels.” CESifo Working Paper Series No. 1229, IZA Discussion Paper No. 1240. Available online: http://ssrn.com/abstract=572504 (accessed on 2 May 2015).
  15. M.H. Pesaran. “Testing Weak Cross-Sectional Dependence in Large Panels.” Econom. Rev. 34 (2015): 1089–1117. [Google Scholar] [CrossRef]
  16. F. Moscone, and E. Tosetti. “A Review and Comparisons of Tests of Cross-Section Dependence in Panels.” J. Econ. Surv. 23 (2009): 528–561. [Google Scholar] [CrossRef]
  17. V. Sarafidis, and T. Wansbeek. “Cross-Sectional Dependence in Panel Data Analysis.” Econom. Rev. 31 (2012): 483–531. [Google Scholar] [CrossRef] [Green Version]
  18. A. Chudik, and M.H. Pesaran. “Large Panel Data Models with Cross-Sectional Dependence: A Survey.” In The Oxford Handbook on Panel Data. Edited by B.H. Baltagi. Oxford, UK: Oxford University Press, 2015, Chapter 1; pp. 3–45. [Google Scholar]
  19. Z.D. Bai, and J.W. Silverstein. “CLT for Linear Spectral Statistics of Large-Dimensional Sample Covariance Matrices.” Ann. Probab. 32 (2004): 553–605. [Google Scholar]
  20. Z.D. Bai, and W. Zhou. “Large Sample Covariance Matrices without Independence Structures in Columns.” Stat. Sin. 18 (2008): 425–442. [Google Scholar]
  21. J.T. Gao, X. Han, G.M. Pan, and Y.R. Yang. “High Dimensional Correlation Matrices: CLT and Its Applications.” J. R. Stat. Soc. Ser. B Stat. Methodol., 2016. [Google Scholar] [CrossRef]
  22. R.J. Muirhead. Aspects of Multivariate Statistical Theory. Hoboken, NJ, USA: John Wiley & Sons, 1982. [Google Scholar]
  23. S.X. Chen, and Y.L. Qin. “A Two-Sample Test for High Dimensional Data with Application to Gene-Set Testing.” Ann. Stat. 38 (2010): 808–835. [Google Scholar] [CrossRef]
  24. O. Lieberman. “A Laplace Approximation to the Moments of a Ratio of Quadratic Forms.” Biometrika 81 (1994): 681–690. [Google Scholar] [CrossRef]
  25. P.J. Bushell, and G.B. Trustrum. “Trace Inequality for Positive Definite Matrix Power Products.” Linear Algebra Appl. 132 (1990): 173–178. [Google Scholar] [CrossRef]
  26. P. Billingsley. Probability and Measure, 3rd ed. New York, NY, USA: Wiley, 1995. [Google Scholar]
  1. The inclusion of predetermined variables, which is the weakly-exogenous case, alters the results.
  2. We only consider the case in which the number of non-zero factor loading vectors is N or of order N, which means that the model has strong error cross-sectional correlation. For the weak error cross-sectional correlation case, we conjecture that the results are similar to Pesaran [15].
Table 1. Size of tests with IID errors over time.
Tests(N,T)NormalChi-Squared
1020305010010203050100
C D R 105.755.905.504.756.455.904.805.555.156.45
203.854.555.054.705.154.604.504.505.855.40
304.454.104.705.104.604.404.804.454.506.25
504.454.755.405.254.504.103.654.754.054.60
1004.654.854.205.655.304.354.804.704.354.95
2004.054.653.904.605.005.655.054.854.655.40
C D P 105.605.505.254.106.005.604.705.054.705.65
204.054.755.054.905.304.904.704.655.855.30
304.904.454.855.205.005.205.204.555.006.05
504.955.205.605.554.455.004.155.004.554.70
1005.655.154.505.955.455.155.655.054.505.05
2005.005.004.454.855.156.355.755.154.705.55
L M P U Y 106.756.056.106.005.606.606.857.657.956.60
206.205.456.757.005.507.056.406.407.155.60
306.206.255.406.355.957.655.956.355.857.00
506.554.955.255.605.407.006.857.205.405.85
1008.105.455.404.604.557.005.856.105.855.90
2008.605.756.505.905.358.007.206.306.406.70
Notes: This table reports the size of $CD_P$, $LM_{PUY}$ and $CD_R$ with $u_{it} = \xi_{it}$, where $\xi_{it} = \sigma_i \varepsilon_{it}$, $\sigma_i^2 \sim IID\,\chi^2(2)/2$ and $\varepsilon_{it} \sim IID(0,1)$, generated from the normal and Chi-squared distributions. The tests are conducted at the 5% nominal significance level.
Table 2. Size of tests with MA(1) errors.
Tests(N,T)NormalChi-Squared
1020305010010203050100
C D R 106.106.254.455.356.256.305.405.905.856.50
205.154.805.054.605.305.205.354.706.154.75
304.504.354.205.354.955.554.754.905.306.15
505.254.505.305.704.305.004.654.604.354.85
1004.755.354.505.455.605.804.155.454.354.90
2004.354.953.504.504.906.206.304.304.305.50
C D P 107.609.358.4010.0511.107.807.7510.3010.2510.95
206.608.309.959.1010.907.008.959.3010.7010.50
306.458.358.3010.5010.607.909.659.5010.8010.60
507.457.9510.7511.309.657.557.909.209.709.15
1006.509.359.0010.8511.557.858.3510.609.3010.20
2006.658.458.459.7010.959.909.509.359.6511.20
L M P U Y 1037.9554.4057.1059.5560.7039.1553.0056.5060.7561.55
2081.5596.0096.8098.2597.9083.2595.4597.0597.7098.20
3098.30100.00100.00100.00100.0098.45100.00100.00100.00100.00
50100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
100100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
200100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
Notes: This table reports the size of $CD_P$, $LM_{PUY}$ and $CD_R$ with $u_{it} = \xi_{it} + \theta\xi_{it-1}$, where $\xi_{it} = \sigma_i \varepsilon_{it}$, $\sigma_i^2 \sim IID\,\chi^2(2)/2$ and $\varepsilon_{it} \sim IID(0,1)$, generated from the normal and Chi-squared distributions. The tests are conducted at the 5% nominal significance level.
Table 3. Size of tests with AR(1) errors.
Tests(N,T)NormalChi-Squared
1020305010010203050100
C D R 106.106.254.906.156.756.054.806.106.005.65
204.755.654.654.705.004.855.604.505.554.80
304.154.854.004.554.655.504.255.755.106.65
504.154.505.205.454.405.255.354.604.404.35
1004.354.804.805.454.805.754.155.304.055.10
2004.854.604.054.555.057.805.354.954.204.55
C D P 106.809.6510.2014.5516.806.558.2512.2513.9016.30
205.759.5011.3513.2516.855.909.6011.5015.0515.45
305.659.8010.0013.3014.057.359.6512.0015.2017.15
505.908.4511.9514.8014.107.109.559.7012.4015.80
1006.0510.0010.4014.7016.557.258.7012.2513.8515.00
2006.659.0010.2513.3016.709.4010.310.8513.7016.10
L M P U Y 1037.9554.4057.1059.5560.7027.6066.3082.4590.8095.35
2055.5097.9099.85100.00100.0059.9598.4099.85100.00100.00
3098.3099.95100.00100.00100.0082.75100.00100.00100.00100.00
5097.80100.00100.00100.00100.0098.60100.00100.00100.00100.00
100100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
200100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
Notes: This table reports the size of $CD_P$, $LM_{PUY}$ and $CD_R$ with $u_{it} = \rho u_{it-1} + \xi_{it}$, where $\xi_{it} = \sigma_i \varepsilon_{it}$, $\sigma_i^2 \sim IID\,\chi^2(2)/2$ and $\varepsilon_{it} \sim IID(0,1)$, generated from the normal and Chi-squared distributions. The tests are conducted at the 5% nominal significance level.
Table 4. Size of tests with ARMA(1,1) errors.
Tests(N,T)NormalChi-Squared
1020305010010203050100
C D R 106.956.454.906.205.857.205.256.405.405.45
205.405.554.954.754.956.405.704.955.554.70
304.654.754.054.804.657.454.605.955.106.50
504.954.955.255.304.507.505.704.804.354.80
1005.055.154.605.104.9010.255.104.654.004.80
2005.754.654.454.855.2017.456.605.754.504.25
C D P 109.1015.9516.3522.5024.3010.9513.8019.2021.7025.15
208.3014.4017.8020.1525.0510.1014.8018.9022.8523.15
308.3015.4017.7021.5522.5510.9515.2519.2523.5524.25
508.7014.8518.8022.7023.4011.7515.4017.3019.1523.95
1009.3515.9017.5022.1524.2017.2014.4517.9522.0522.70
2009.5014.0518.3520.0024.9525.4517.0018.5521.3524.65
L M P U Y 1083.6598.4599.4599.7599.8083.6598.4099.7099.90100.00
2099.85100.00100.00100.00100.0099.85100.00100.00100.00100.00
30100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
50100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
100100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
200100.00100.00100.00100.00100.00100.00100.00100.00100.00100.00
Notes: This table reports the size of $CD_P$, $LM_{PUY}$ and $CD_R$ with $u_{it} = \rho u_{it-1} + \xi_{it} + \theta\xi_{it-1}$, where $\xi_{it} = \sigma_i \varepsilon_{it}$, $\sigma_i^2 \sim IID\,\chi^2(2)/2$ and $\varepsilon_{it} \sim IID(0,1)$, generated from the normal and Chi-squared distributions. The tests are conducted at the 5% nominal significance level.
Table 5. Size-adjusted power of $CD_R$: factor model.
DGP(N,T)NormalChi-Squared
1020305010010203050100
M A ( 1 ) 1014.5523.9530.3045.4063.0521.9530.7533.6546.0066.10
2035.7056.6568.9584.0595.9547.3063.2575.8086.0097.40
3059.6581.7091.7597.6599.9569.7587.5092.6098.0099.95
5083.6596.6099.30100.00100.0088.7598.0099.55100.00100.00
10096.7599.95100.00100.00100.0098.9099.90100.00100.00100.00
20099.70100.00100.00100.00100.0099.70100.00100.00100.00100.00
A R ( 1 ) 1018.9523.9532.4038.1056.7526.9535.0028.9037.1561.25
2045.6062.1069.9581.4594.2055.1067.4574.8585.6596.60
3068.8083.5092.3097.6099.7578.1590.8592.7097.4099.85
5088.5597.4599.40100.00100.0092.9098.5099.65100.00100.00
10098.80100.00100.00100.00100.0099.6099.95100.00100.00100.00
20099.90100.00100.00100.00100.0099.85100.00100.00100.00100.00
A R M A ( 1 , 1 ) 107.707.7010.0010.8014.809.6510.358.809.6019.60
2022.0518.8524.2527.8039.5024.8522.3523.4030.6046.20
3037.7537.4546.1548.9075.0041.7547.3544.1553.1571.25
5066.5066.7571.6083.1096.2066.2572.3582.4588.2098.00
10091.1596.6098.7599.90100.0090.4598.5599.4099.95100.00
20098.95100.00100.00100.00100.0098.4599.95100.00100.00100.00
Notes: This table computes the size-adjusted power of $CD_R$ under a factor model that allows for cross-sectional correlation in the errors: $u_{it}^* = \lambda_i f_t + u_{it}$. $u_{it}$ is generated by the MA(1), AR(1) and ARMA(1,1) processes defined by (29)–(31). $\xi_{it} = \sigma_i \varepsilon_{it}$, $\sigma_i^2 \sim IID\,\chi^2(2)/2$ and $\varepsilon_{it} \sim IID(0,1)$, generated from the normal and Chi-squared distributions.
Table 6. Size-adjusted power of $CD_R$: SAR(1) model.
DGP(N,T)NormalChi-Squared
1020305010010203050100
M A ( 1 ) 1038.8560.5572.2088.2597.3043.0567.1572.5588.4597.70
2037.4561.7076.0092.1599.0539.2561.2576.8089.5599.10
3039.6064.5578.6092.0099.6040.3065.6578.8091.9099.35
5040.0566.4579.1592.7099.7539.9566.5578.6594.6599.70
10033.6062.7080.5592.5599.6537.8564.6579.2094.4099.90
20040.6564.5080.6594.7099.837.7562.5081.2595.6599.80
A R ( 1 ) 1037.2053.9568.2079.2092.1042.8563.2061.1578.0094.80
2038.2556.5069.3082.9095.8538.5555.5068.6583.7097.20
3037.9056.9071.8084.6598.1038.7062.0066.2585.7096.90
5038.8059.8071.4086.6098.6039.7059.1571.2589.0099.00
10038.8557.8570.9086.6098.7535.2559.8572.5588.9598.60
20040.7555.9574.4087.7598.8033.8056.0070.8590.4099.10
A R M A ( 1 , 1 ) 1029.0043.4058.0570.2085.9032.7549.7551.3067.4088.20
2031.0543.5556.6572.1089.1028.3543.4554.8071.3591.35
3030.0045.7059.3571.3594.2028.1048.1054.0073.0591.90
5033.0545.3054.4071.7093.3027.3043.9058.0075.7594.45
10030.6045.1555.5075.4094.9521.8045.4557.8577.3594.75
20030.3042.0558.1575.7595.1521.0538.8055.7077.5095.80
Notes: This table computes the size-adjusted power of $CD_R$ under a SAR(1) model that allows for cross-sectional correlation in the errors: $u_{it}^* = \delta(0.5\, u_{i-1,t}^* + 0.5\, u_{i+1,t}^*) + u_{it}$ with $\delta = 0.4$. $u_{it}$ is generated by the MA(1), AR(1) and ARMA(1,1) processes defined by (29)–(31). $\xi_{it} = \sigma_i \varepsilon_{it}$, $\sigma_i^2 \sim IID\,\chi^2(2)/2$ and $\varepsilon_{it} \sim IID(0,1)$, generated from the normal and Chi-squared distributions.
