Article

Testing in a Random Effects Panel Data Model with Spatially Correlated Error Components and Spatially Lagged Dependent Variables

1 Department of Economics, University of Washington, Seattle, WA 98195, USA
2 Department of Economics, Portland State University, Portland, OR 97201, USA
* Author to whom correspondence should be addressed.
Econometrics 2015, 3(4), 761-796; https://doi.org/10.3390/econometrics3040761
Submission received: 6 August 2015 / Revised: 18 October 2015 / Accepted: 26 October 2015 / Published: 9 November 2015
(This article belongs to the Special Issue Spatial Econometrics)

Abstract

We propose a random effects panel data model with both spatially correlated error components and spatially lagged dependent variables. We focus on diagnostic testing procedures and derive Lagrange multiplier (LM) test statistics for a variety of hypotheses within this model. We first construct the joint LM test for both the individual random effects and the two spatial effects (spatial error correlation and spatial lag dependence). We then provide LM tests for the individual random effects and for the two spatial effects separately. In addition, in order to guard against local model misspecification, we derive locally adjusted (robust) LM tests based on the Bera and Yoon principle (Bera and Yoon, 1993). We conduct a small Monte Carlo simulation to show the good finite sample performance of these LM test statistics and revisit the cigarette demand example in Baltagi and Levin (1992) to illustrate our testing procedures.

1. Introduction

Spatial econometric models have been extensively used to study regional effects and interdependence between different spatial units. Most of the widely used spatial models are variants of the benchmark models developed in Cliff and Ord (1973, 1981) [1,2] and Anselin (1988a) [3]. By incorporating spatially correlated error components and/or spatially lagged dependent variables, these models better fit real-world data generating processes because they explicitly account for spatial interdependence. Hypothesis testing for spatial dependence has developed rapidly in the recent literature. For standard LM tests of spatial dependence in cross-section models, see Anselin (1988a, b) [3,4], Anselin and Bera (1998) [5], and Anselin (2001) [6]. For standard LM tests of spatial dependence in panel data models, Baltagi et al. (2003) [7] provide tests for random effects and/or spatial error correlation. Baltagi and Liu (2008) [8] provide tests for random effects and/or spatial lag dependence. Debarsy and Ertur (2010) [9] derive tests in the spatial panel data model with individual fixed effects based on Lee and Yu (2010) [10]. Qu and Lee (2012) [11] consider tests in spatial models with limited dependent variables. Baltagi et al. (2013) [12] extend the model in Kapoor et al. (2007) [13] by allowing for different spatial correlation parameters in the individual random effects and in the disturbances, and they derive the corresponding LM tests. Further, standardized versions of the LM tests are discussed in Yang (2010) [14] and Baltagi and Yang (2013a) [15] to remedy distributional misspecification in finite samples and sensitivity to the spatial layout. Born and Breitung (2011) [16] and Baltagi and Yang (2013b) [17] discuss versions of LM tests that are robust against unknown heteroskedasticity. Recently, Yang (2015) [18] provides a residual-based bootstrap procedure to obtain improved approximations to the finite sample critical values of LM test statistics in spatial econometric models.
However, to the best of our knowledge, there are no test statistics that treat the individual random effects, the spatial error correlation, and the spatial lag dependence simultaneously. We contribute to the literature by constructing various LM test statistics in such a general framework, the so-called spatial autoregressive model with autoregressive disturbances (SARAR). Our results make it straightforward for applied researchers to perform model diagnostic testing in the SARAR framework. In particular, we first derive the joint LM test for the individual random effects and the two spatial effects. We next derive LM tests for the individual random effects. Finally, we derive LM tests for the two spatial effects. In addition, we provide robust LM tests where needed in order to guard against local misspecification. We emphasize some key features of the robust LM test in the following.
Bera and Yoon (1993) [19] argue that the LM test with specific values of the nuisance parameters (the marginal LM test) might suffer from local misspecification in the nuisance parameters. They propose a robust LM test to guard against such local misspecification; see also Anselin et al. (1996) [20], Bera et al. (2001, 2009, 2010) [21,22,23], and He and Lin (2013) [24]. Here, we emphasize two advantages of the robust LM test. First, the asymptotic size of the marginal LM test will be distorted under local misspecification in the nuisance parameters, since it follows a non-central χ² distribution. The robust LM test, on the other hand, follows a central χ² distribution under such misspecification, so it provides valid asymptotic size as long as the misspecification is local. Second, while the LM test that leaves the nuisance parameters unspecified (the conditional LM test) does not suffer from such size distortion, it generally requires the maximum likelihood estimator (MLE), which can be computationally costly. In contrast, the robust LM test only requires the restricted estimator under the relevant joint null hypothesis, which is simply the ordinary least squares (OLS) estimator in most cases. Therefore, the robust LM test can provide results as good as the conditional LM test at a lower computational cost, provided that the deviation of the nuisance parameters is local. However, there is one potential loss in using the robust LM test. If the values of the nuisance parameters are correctly specified, the robust LM test is in general less powerful than the marginal LM test. Also, when the nuisance parameters deviate far from the pre-specified values, the robust LM test is generally invalid. In sum, the standard LM tests (marginal and conditional LM tests) and the robust LM tests complement each other, and they should be used together for inference purposes.
In this paper, we maintain the assumption of a random effects model, while an alternative specification is the fixed effects model with spatial dependence, as in Lee and Yu (2010) [10], Debarsy and Ertur (2010) [9], and He and Lin (2013) [24]. On the one hand, the random effects specification is a parsimonious way to allow for individual effects in different spatial units, and it is particularly useful for testing and model selection in microeconometric applications when the number of units is very large. On the other hand, the fixed effects specification, which allows for correlation between the individual effects and the covariates, is better suited for many macro studies in which the number of units is not very large (see Elhorst (2014) [25] for more discussion on the comparison of the random effects model and the fixed effects model).
The rest of the paper is organized as follows. The model specification is discussed in Section 2. The LM test statistics are presented in Section 3. In Section 4, we report Monte Carlo simulation results to show their satisfactory finite sample size and power performance. In Section 5, we provide an empirical example to illustrate our testing procedures. Section 6 concludes with suggestions for future research. All mathematical derivations are relegated to the Appendices.

2. The Model

Suppose that the data is generated according to the following spatial panel data model, for t = 1 , 2 , · · · , T ,
$$y_t = \lambda W y_t + X_t \beta + \epsilon_t, \qquad \epsilon_t = \rho M \epsilon_t + \mu + v_t. \tag{2.1}$$
In the above specification, y_t = (y_{t1}, y_{t2}, ..., y_{tN})′ is an N × 1 vector of the dependent variable for period t. It is spatially interdependent, as reflected by the spatial lag dependence coefficient λ. X_t is an N × (K + 1) matrix of non-stochastic regressors for period t, with the first column being ones, and β = (β_0, β_1, ..., β_K)′ is the corresponding (K + 1) × 1 slope parameter vector. ϵ_t = (ϵ_{t1}, ϵ_{t2}, ..., ϵ_{tN})′ is an N × 1 vector of regression error terms for period t. The error vector is also spatially correlated, as reflected by the spatial error correlation coefficient ρ. μ = (μ_1, μ_2, ..., μ_N)′ is an N × 1 vector representing the individual random effects. The random effects terms {μ_i}, i = 1, ..., N, are i.i.d. across i, with zero mean, variance σ_μ², and E|μ_i|^{4+c_1} < ∞ for some c_1 > 0. v_t = (v_{t1}, v_{t2}, ..., v_{tN})′ is an N × 1 vector of innovation terms. The innovation terms {v_{ti}}, t = 1, ..., T, i = 1, ..., N, are i.i.d. across i and t, with zero mean, variance σ_v², and E|v_{ti}|^{4+c_2} < ∞ for some c_2 > 0 (see Lee and Yu (2012) [26] for other regularity and identification conditions needed for asymptotic theory). W and M are non-stochastic spatial weights matrices of size N × N. Typically, they are specified by the first-order rook contiguity criterion and are row-standardized so that each row sums to one.
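For concreteness, the following minimal sketch (ours, not taken from the paper; the helper name rook_weights and the regular-grid layout are assumptions) shows one common way to build such a row-standardized first-order rook contiguity matrix. A 7 × 7 grid gives the N = 49 layout used later in the Monte Carlo experiment.

import numpy as np

def rook_weights(nrow, ncol):
    """Row-standardized first-order rook contiguity matrix for an nrow x ncol grid.
    Units are numbered row by row; two cells are neighbors if they share an edge."""
    N = nrow * ncol
    W = np.zeros((N, N))
    for i in range(nrow):
        for j in range(ncol):
            k = i * ncol + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nrow and 0 <= jj < ncol:
                    W[k, ii * ncol + jj] = 1.0
    return W / W.sum(axis=1, keepdims=True)   # each row sums to one

W = rook_weights(7, 7)   # N = 49 spatial units
assert np.allclose(W.sum(axis=1), 1.0)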
By stacking across t, the model can be written as
$$y = \lambda (I_T \otimes W) y + X \beta + \epsilon, \qquad \epsilon = \rho (I_T \otimes M)\, \epsilon + (\iota_T \otimes \mu) + v,$$
where y = (y_1′, y_2′, ..., y_T′)′, X = (X_1′, X_2′, ..., X_T′)′, ϵ = (ϵ_1′, ϵ_2′, ..., ϵ_T′)′, and v = (v_1′, v_2′, ..., v_T′)′. ι_T is a T × 1 vector of ones, I denotes the identity matrix with its dimension in the subscript, and ⊗ denotes the Kronecker product. Let A = I_T ⊗ (I_N − ρM) and B = I_T ⊗ (I_N − λW). The error component ϵ is expressed as ϵ = A⁻¹(ι_T ⊗ μ + v), with E[ϵ] = 0 and Var[ϵ] = Ω_ϵ = A⁻¹Ω(A⁻¹)′, where Ω = J_T ⊗ (σ_μ² I_N) + I_T ⊗ (σ_v² I_N). Using the results in Magnus (1982) [27], we get Ω⁻¹ = (Tσ_μ² + σ_v²)⁻¹ J̄_T ⊗ I_N + (σ_v²)⁻¹ E_T ⊗ I_N and |Ω| = (Tσ_μ² + σ_v²)^N (σ_v²)^{N(T−1)}, where J̄_T = J_T/T, J_T = ι_T ι_T′, and E_T = I_T − J̄_T. Notice that y = B⁻¹Xβ + B⁻¹ϵ, with E[y] = B⁻¹Xβ and Var[y] = Ω_y = B⁻¹A⁻¹Ω(A⁻¹)′(B⁻¹)′. We have Ω_y⁻¹ = B′A′Ω⁻¹AB and |Ω_y| = |B|⁻²|A|⁻²(Tσ_μ² + σ_v²)^N (σ_v²)^{N(T−1)}.
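As a quick numerical check (a sketch we add for illustration, not code from the paper; the small dimensions and parameter values below are arbitrary assumptions), the Magnus (1982) expressions for Ω⁻¹ and |Ω| can be verified directly:

import numpy as np

# Omega = J_T (x) (sigma_mu^2 I_N) + I_T (x) (sigma_v^2 I_N); check its inverse and determinant.
N, T = 5, 4
sigma_mu2, sigma_v2 = 0.5, 1.0

I_N, I_T = np.eye(N), np.eye(T)
J_T = np.ones((T, T))
J_T_bar = J_T / T
E_T = I_T - J_T_bar

Omega = np.kron(J_T, sigma_mu2 * I_N) + np.kron(I_T, sigma_v2 * I_N)
Omega_inv = (np.kron(J_T_bar, I_N) / (T * sigma_mu2 + sigma_v2)
             + np.kron(E_T, I_N) / sigma_v2)
assert np.allclose(Omega @ Omega_inv, np.eye(N * T))

log_det = N * np.log(T * sigma_mu2 + sigma_v2) + N * (T - 1) * np.log(sigma_v2)
assert np.isclose(np.linalg.slogdet(Omega)[1], log_det)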
Let δ = (β′, ρ, λ, σ_μ², σ_v²)′ and ϵ = By − Xβ. The log-likelihood function of the random vector y, treated as if it were normally distributed, is
$$L(\delta) = -\frac{NT}{2}\ln(2\pi) - \frac{N}{2}\ln\left(T\sigma_\mu^2+\sigma_v^2\right) - \frac{N(T-1)}{2}\ln\sigma_v^2 + \ln|A| + \ln|B| - \frac{1}{2}\,\epsilon' A' \Omega^{-1} A \epsilon$$
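The sketch below (our own illustration, with hypothetical argument names; y and X are assumed to be numpy arrays stacked with t as the slow-moving index, as above) evaluates this quasi log-likelihood for a candidate parameter vector:

import numpy as np

def log_likelihood(y, X, W, M, beta, rho, lam, sigma_mu2, sigma_v2, N, T):
    """Quasi log-likelihood L(delta) of the SARAR random effects panel model."""
    I_N, I_T = np.eye(N), np.eye(T)
    J_T_bar = np.ones((T, T)) / T
    E_T = I_T - J_T_bar
    A = np.kron(I_T, I_N - rho * M)
    B = np.kron(I_T, I_N - lam * W)
    Omega_inv = (np.kron(J_T_bar, I_N) / (T * sigma_mu2 + sigma_v2)
                 + np.kron(E_T, I_N) / sigma_v2)
    eps = B @ y - X @ beta
    # ln|A| and ln|B| reduce to T*ln|I_N - rho*M| and T*ln|I_N - lam*W|
    log_det_A = T * np.linalg.slogdet(I_N - rho * M)[1]
    log_det_B = T * np.linalg.slogdet(I_N - lam * W)[1]
    return (-N * T / 2 * np.log(2 * np.pi)
            - N / 2 * np.log(T * sigma_mu2 + sigma_v2)
            - N * (T - 1) / 2 * np.log(sigma_v2)
            + log_det_A + log_det_B
            - 0.5 * eps @ A.T @ Omega_inv @ A @ eps)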

3. LM and Robust LM Test Statistics

In this section, we provide explicit formulae for the LM test statistics. We first present the joint LM test for both the individual random effects and the spatial effects. We then provide LM test statistics for the individual random effects. Lastly, we present LM test statistics for the spatial effects, namely, the spatial error correlation and/or the spatial lag dependence. In addition, we provide formulae for robust LM tests when necessary.
Before presenting the LM test statistics, we introduce the following notation for easy reference. Let R_1 = M(I_N − ρM)⁻¹, R_2 = W(I_N − ρM)⁻¹, R_3 = W(I_N − λW)⁻¹, and R_4 = (I_N − ρM)′(I_N − ρM). Let ẑ_ρ = ϵ̂′Â′Ω̂⁻¹(I_T ⊗ M)ϵ̂, ẑ_λ = ϵ̂′Â′Ω̂⁻¹Â(I_T ⊗ W)y, and ẑ_{σ_μ²} = [ϵ̂′Â′(J̄_T ⊗ I_N)Âϵ̂]/σ̂_v² − N, where ϵ̂ = B̂y − Xβ̂, and Â, B̂, Ω̂, β̂, and σ̂_v² are the restricted MLEs of A, B, Ω, β, and σ_v² under the null hypothesis, respectively. Next, define ν̂ = ŷ′(I_T ⊗ W)′Â′Ω̂⁻¹Â(I_T ⊗ W)ŷ and τ̂ = T²(b_1b_3 − b_2²) + Tb_1ω̂, where ω̂ = ŷ′(I_T ⊗ W)′Â′[Ω̂⁻¹ − Ω̂⁻¹ÂX(X′Â′Ω̂⁻¹ÂX)⁻¹X′Â′Ω̂⁻¹]Â(I_T ⊗ W)ŷ, ŷ = B̂⁻¹Xβ̂, b_1 = tr(MM + M′M), b_2 = tr(MW + M′W), b_3 = tr(WW + W′W), and tr(·) is the trace operator. Finally, let
$$\hat\xi = \frac{NT\hat\vartheta_2 + N\hat\omega - 2T\hat\vartheta_3^2}{T b_1\,(NT\hat\vartheta_2 + N\hat\omega - 2T\hat\vartheta_3^2) - N(T\hat\vartheta_1)^2}, \qquad \hat\zeta = \frac{N\hat\theta_1 - 2\hat\theta_3^2}{(N\hat\theta_1 - 2\hat\theta_3^2)(T\hat\theta_4 + \hat\omega) - NT\hat\theta_2^2},$$
where ϑ̂_1 = tr[(M + M′)R̂_3], ϑ̂_2 = tr(R̂_3R̂_3 + R̂_3′R̂_3), ϑ̂_3 = tr(R̂_3), θ̂_1 = tr(R̂_1R̂_1 + R̂_1′R̂_1), θ̂_2 = tr[WR̂_1 + R̂_2′R̂_1(I_N − ρ̂M)], θ̂_3 = tr(R̂_1), θ̂_4 = tr(WW) + tr(R̂_2′R̂_2R̂_4), and R̂_1, R̂_2, R̂_3, R̂_4 are the restricted MLEs of R_1, R_2, R_3, R_4, respectively.

3.1. Jointly Testing for Random Effects and Spatial Effects

Now we are ready to present the test statistics. We first construct the joint LM test statistic for the individual random effects and the spatial effects, namely, the spatial error correlation and the spatial lag dependence. The joint hypothesis is H_0^a: σ_μ² = ρ = λ = 0 vs. H_1^a: at least one of σ_μ², ρ, and λ is not zero. Thus we are testing the classical pooled panel data model against the full specification in (2.1). The LM test in this case is given by
$$LM_a = \frac{T b_3 + \hat\omega_a}{\hat\tau_a}\,\hat z_{\rho,a}^2 + \frac{T b_1}{\hat\tau_a}\,\hat z_{\lambda,a}^2 - \frac{2 T b_2}{\hat\tau_a}\,\hat z_{\rho,a}\hat z_{\lambda,a} + \frac{T}{2N(T-1)}\,\hat z_{\sigma_\mu^2,a}^2,$$
where ω̂_a = [ŷ_a′(I_T ⊗ W)′(I_{NT} − X(X′X)⁻¹X′)(I_T ⊗ W)ŷ_a]/σ̂²_{v,a}, σ̂²_{v,a} = (ϵ̂_a′ϵ̂_a)/(NT), ϵ̂_a = y − ŷ_a, ŷ_a = Xβ̂_a, τ̂_a = T²(b_1b_3 − b_2²) + Tb_1ω̂_a, ẑ_{ρ,a} = [ϵ̂_a′(I_T ⊗ M)ϵ̂_a]/σ̂²_{v,a}, ẑ_{λ,a} = [ϵ̂_a′(I_T ⊗ W)y]/σ̂²_{v,a}, ẑ_{σ_μ²,a} = [ϵ̂_a′(J̄_T ⊗ I_N)ϵ̂_a]/σ̂²_{v,a} − N, and β̂_a is the OLS estimator of β. Under H_0^a, LM_a is asymptotically distributed as χ²_3, where χ²_d denotes the χ² distribution with d degrees of freedom.
The test statistic LM_a is useful in practice, and it is simple to compute since only the OLS estimator is required. Researchers should first use this joint LM test to determine whether there are individual random effects and/or spatial effects in the general specification (2.1). If the joint null hypothesis cannot be rejected, then it is reasonable to simply adopt the classical pooled panel data model. Otherwise, either the individual random effects, the spatial error correlation, or the spatial lag dependence needs to be taken into account. As in Baltagi et al. (2003) [7], we do not provide formal proofs for the asymptotic null distributions of the LM test statistics in this paper, but these distributions are likely to hold by applying the Central Limit Theorems (CLTs) in Kelejian and Prucha (2001, 2010) [28,29] under similar sets of low-level assumptions.
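To illustrate how little is needed to compute LM_a, the sketch below (ours, not the authors' code; the function name lm_a and the stacking convention — t as the slow index — are assumptions) builds the statistic from pooled OLS quantities, following the formulas above. Under the joint null, the resulting value would be compared with a χ² critical value with 3 degrees of freedom.

import numpy as np

def lm_a(y, X, W, M, N, T):
    """Joint LM_a for H0: sigma_mu^2 = rho = lambda = 0, from OLS quantities only."""
    I_N, I_T = np.eye(N), np.eye(T)
    J_T_bar = np.ones((T, T)) / T

    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # pooled OLS
    y_hat = X @ beta_hat
    eps = y - y_hat
    sigma_v2 = eps @ eps / (N * T)

    ITW, ITM = np.kron(I_T, W), np.kron(I_T, M)
    P_X = X @ np.linalg.solve(X.T @ X, X.T)           # projection onto the columns of X
    Wy_hat = ITW @ y_hat

    b1 = np.trace(M @ M + M.T @ M)
    b2 = np.trace(M @ W + M.T @ W)
    b3 = np.trace(W @ W + W.T @ W)
    omega = Wy_hat @ (np.eye(N * T) - P_X) @ Wy_hat / sigma_v2
    tau = T**2 * (b1 * b3 - b2**2) + T * b1 * omega

    z_rho = eps @ ITM @ eps / sigma_v2
    z_lam = eps @ ITW @ y / sigma_v2
    z_mu = eps @ np.kron(J_T_bar, I_N) @ eps / sigma_v2 - N

    return ((T * b3 + omega) / tau * z_rho**2
            + T * b1 / tau * z_lam**2
            - 2 * T * b2 / tau * z_rho * z_lam
            + T / (2 * N * (T - 1)) * z_mu**2)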

3.2. Testing for Random Effects

In this section, we focus on testing for the individual random effects in various spatial panel data models. We provide formulae for the standard LM tests as well as formulae for the robust LM tests when necessary. The first hypothesis we consider is H_0^b: σ_μ² = 0 (ρ = λ = 0) vs. H_1^b: σ_μ² > 0 (ρ = λ = 0). The null model is the classical pooled panel data model, and the alternative model is the random effects panel data model without spatial effects. The LM test, denoted LM_b, is available in Baltagi et al. (2003) [7]. Moreover, it can be shown that the robust LM test is the same as LM_b in this case. We thus omit these formulae for the sake of compactness.
The second hypothesis is H_0^c: σ_μ² = 0 (ρ ≠ 0, λ = 0) vs. H_1^c: σ_μ² > 0 (ρ ≠ 0, λ = 0). Under the null hypothesis, it is the pooled panel data model with spatial error correlation. Under the alternative hypothesis, it is the random effects panel data model with spatial error correlation. The LM test statistic in this case is given by
$$LM_c = \frac{T}{2N(T-1)}\,\hat z_{\sigma_\mu^2,c}^2,$$
where ẑ_{σ_μ²,c} is ẑ_{σ_μ²} evaluated at the restricted MLE under H_0^c. Under H_0^c, LM_c is asymptotically distributed as χ²_1. Further, it can be easily shown that the robust LM test in this case is the same as LM_c. Thus LM_c itself is robust against local deviations of λ from 0, and this will be confirmed by the simulation results.
The third hypothesis is H_0^d: σ_μ² = 0 (ρ = 0, λ ≠ 0) vs. H_1^d: σ_μ² > 0 (ρ = 0, λ ≠ 0). Under the null hypothesis, it is the pooled panel data model with spatial lag dependence. Under the alternative hypothesis, it is the random effects panel data model with spatial lag dependence. The LM test statistic, denoted LM_d, is available in Baltagi and Liu (2008) [8]. Moreover, it can be shown that the robust LM test in this case is the same as LM_d. We thus omit these formulae here.
The last hypothesis concerning testing for the individual random effects is H_0^e: σ_μ² = 0 (ρ ≠ 0, λ ≠ 0) vs. H_1^e: σ_μ² > 0 (ρ ≠ 0, λ ≠ 0). Under the null hypothesis, it is the pooled panel data model with both spatial error correlation and spatial lag dependence. Under the alternative hypothesis, it is the full model in (2.1). The LM test statistic is given by
$$LM_e = \frac{T}{2N(T-1)}\,\hat z_{\sigma_\mu^2,e}^2.$$
Under H 0 e , L M e is asymptotically distributed as χ 1 2 . L M e tests for the individual random effects in the most general spatial model, and it is particularly useful when the researcher does not have any prior knowledge about whether the spatial error correlation and/or spatial lag dependence exist or not. In practice, the above four LM tests for the individual random effects correspond to different prior information on the nuisance parameters, and they need to be analyzed together to lead to the most appropriate model.
Notice that the LM tests in Section 3.2 are all designed for two-sided alternatives, while the parameter involved in the hypotheses, σ_μ², is by definition nonnegative. While our LM tests have good power against the one-sided alternative (see the simulation results), we point out that the power of these tests can be further improved by following the ideas in Honda (1985, 1991) [30,31].

3.3. Testing for Spatial Effects

In this section, we focus on testing for the spatial effects. We provide formulae for the standard LM tests as well as formulae for the robust LM tests when necessary.

3.3.1. Joint Tests for Spatial Effects

In practice, researchers may be interested in jointly testing for the spatial error correlation and the spatial lag dependence. The first joint hypothesis is H_0^f: ρ = λ = 0 (σ_μ² = 0) vs. H_1^f: at least one of ρ and λ is not zero (σ_μ² = 0). Under the null hypothesis, it is the classical pooled panel data model. Under the alternative hypothesis, it is the pooled panel data model with at least one type of spatial effects. The LM test statistic in this case is given by
$$LM_f = \frac{T b_3 + \hat\omega_a}{\hat\tau_a}\,\hat z_{\rho,a}^2 + \frac{T b_1}{\hat\tau_a}\,\hat z_{\lambda,a}^2 - \frac{2 T b_2}{\hat\tau_a}\,\hat z_{\rho,a}\hat z_{\lambda,a},$$
where all quantities involved are defined in Section 3.1, since only the OLS estimator is needed in this case. The test LM_f is a useful extension of the result in Anselin et al. (1996) [20] to the panel data case. Interestingly, LM_f is the sum of the first three terms of LM_a, as the short sketch below makes explicit. Under H_0^f, LM_f is asymptotically distributed as χ²_2. It can be easily shown that the robust LM test in this case is the same as LM_f.
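A minimal sketch (ours; lm_f is a hypothetical helper that reuses the OLS-based quantities computed in the lm_a sketch of Section 3.1):

def lm_f(z_rho, z_lam, b1, b2, b3, omega, tau, T):
    """Joint LM_f for H0: rho = lambda = 0 (with sigma_mu^2 = 0):
    the first three terms of LM_a, compared with a chi-square(2) critical value."""
    return ((T * b3 + omega) / tau * z_rho**2
            + T * b1 / tau * z_lam**2
            - 2 * T * b2 / tau * z_rho * z_lam)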
The second joint hypothesis is H_0^g: ρ = λ = 0 (σ_μ² > 0) vs. H_1^g: at least one of ρ and λ is not zero (σ_μ² > 0). Under the null hypothesis, it is the random effects panel data model without any spatial effects. Under the alternative hypothesis, it is the random effects panel data model with at least one type of spatial effects. The LM test statistic in this case is given by
$$LM_g = \frac{T b_3 + \hat\omega_g}{\hat\tau_g}\,\hat z_{\rho,g}^2 + \frac{T b_1}{\hat\tau_g}\,\hat z_{\lambda,g}^2 - \frac{2 T b_2}{\hat\tau_g}\,\hat z_{\rho,g}\hat z_{\lambda,g}.$$
Under H_0^g, LM_g is asymptotically distributed as χ²_2. Notice that both LM_f and LM_g are useful for jointly testing for the spatial error correlation and the spatial lag dependence. However, LM_f assumes a pooled panel data model, while LM_g allows for the individual random effects. Since LM_f is the same as its robust version, it can guard against local deviations of σ_μ² from zero. LM_g, however, will work well even when σ_μ² deviates far from zero.

3.3.2. Testing for Spatial Error Correlation

In this section, we focus on testing for spatial error correlation. The first hypothesis we consider is H_0^h: ρ = 0 (σ_μ² = λ = 0) vs. H_1^h: ρ ≠ 0 (σ_μ² = λ = 0). Under the null hypothesis, it is the classical pooled panel data model. Under the alternative hypothesis, it is the pooled panel data model with spatial error correlation. The LM and robust LM (denoted LM*) test statistics are given by
$$LM_h = \frac{1}{T b_1}\,\hat z_{\rho,a}^2, \qquad LM_h^* = \frac{T b_3 + \hat\omega_a}{\hat\tau_a}\left(\hat z_{\rho,a} - \frac{T b_2}{T b_3 + \hat\omega_a}\,\hat z_{\lambda,a}\right)^2,$$
where all quantities involved are defined in Section 3.1, since only the OLS estimator is needed in this case. Notice that although the formula for LM_h is available in Baltagi et al. (2003) [7], we provide it here for comparison purposes, as it differs from the robust test LM_h*, which has not been considered previously in the literature. Under H_0^h, if the nuisance parameters λ and σ_μ² do not deviate from 0, both LM_h and LM_h* are asymptotically distributed as χ²_1. However, when ρ = 0 but either λ or σ_μ² deviates locally from 0, the distribution of LM_h becomes noncentral, and it tends to over-reject the null hypothesis. LM_h*, on the other hand, is still asymptotically distributed as χ²_1, and thus it does not suffer from the size distortion that LM_h does.
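The contrast between the two statistics is easy to see in code; the sketch below (ours, with a hypothetical helper name, reusing the OLS-based quantities from the lm_a sketch) computes both:

def lm_h_and_robust(z_rho, z_lam, b1, b2, b3, omega, tau, T):
    """Marginal LM_h and Bera-Yoon robust LM_h* for H0: rho = 0 (sigma_mu^2 = lambda = 0).
    The robust version subtracts the part of the rho-score explained by the lambda-score."""
    lm_h = z_rho**2 / (T * b1)
    lm_h_star = ((T * b3 + omega) / tau
                 * (z_rho - T * b2 / (T * b3 + omega) * z_lam)**2)
    return lm_h, lm_h_star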
The second hypothesis is H_0^i: ρ = 0 (σ_μ² = 0, λ ≠ 0) vs. H_1^i: ρ ≠ 0 (σ_μ² = 0, λ ≠ 0). Under the null hypothesis, it is the pooled panel data model with spatial lag dependence. Under the alternative hypothesis, it is the pooled panel data model with both spatial error correlation and spatial lag dependence. The LM test statistic is given by
$$LM_i = \hat\xi_i\,\hat z_{\rho,i}^2.$$
L M i is a useful extension of the results in Anselin et al., (1996) [20] to the panel data case. Under H 0 i , L M i is asymptotically distributed as χ 1 2 . Moreover, it can be easily shown that the robust LM test in this case is the same as L M i . Thus L M i itself is robust against local deviation of σ μ 2 from 0, and this will be confirmed by the simulation results.
The third hypothesis is H_0^j: ρ = 0 (σ_μ² > 0, λ = 0) vs. H_1^j: ρ ≠ 0 (σ_μ² > 0, λ = 0). Under the null hypothesis, it is the random effects panel data model without spatial effects. Under the alternative hypothesis, it is the random effects panel data model with spatial error correlation. The LM and robust LM test statistics in this case are given by
$$LM_j = \frac{1}{T b_1}\,\hat z_{\rho,j}^2, \qquad LM_j^* = \frac{T b_3 + \hat\omega_j}{\hat\tau_j}\left(\hat z_{\rho,j} - \frac{T b_2}{T b_3 + \hat\omega_j}\,\hat z_{\lambda,j}\right)^2.$$
When ρ = 0 and the nuisance parameter λ does not deviate from 0, both LM_j and LM_j* are asymptotically distributed as χ²_1. However, when ρ = 0 but λ deviates locally from 0, the distribution of LM_j becomes noncentral, and it tends to over-reject the null hypothesis. LM_j*, on the other hand, is still asymptotically distributed as χ²_1 in this case, and thus it does not suffer from the size distortion that LM_j does.
The last hypothesis for spatial error correlation is H_0^k: ρ = 0 (σ_μ² > 0, λ ≠ 0) vs. H_1^k: ρ ≠ 0 (σ_μ² > 0, λ ≠ 0). Under the null hypothesis, it is the random effects panel data model with spatial lag dependence. Under the alternative hypothesis, it is the random effects panel data model with both spatial error correlation and spatial lag dependence. The LM test statistic in this case is given by
$$LM_k = \hat\xi_k\,\hat z_{\rho,k}^2.$$
Under H 0 k , L M k is asymptotically distributed as χ 1 2 . In practice, to test for the spatial error correlation, the above test statistics correspond to different prior information on the nuisance parameters, and they need to be analyzed together to lead to the most appropriate model.

3.3.3. Testing for Spatial Lag Dependence

In this section, we focus on testing for the spatial lag dependence. The first hypothesis we consider is H_0^l: λ = 0 (σ_μ² = ρ = 0) vs. H_1^l: λ ≠ 0 (σ_μ² = ρ = 0). Under the null hypothesis, it is the classical pooled panel data model. Under the alternative hypothesis, it is the pooled panel data model with spatial lag dependence. The LM and robust LM test statistics in this case are given by
$$LM_l = \frac{1}{T b_3 + \hat\omega_a}\,\hat z_{\lambda,a}^2, \qquad LM_l^* = \frac{T b_1}{\hat\tau_a}\left(\hat z_{\lambda,a} - \frac{b_2}{b_1}\,\hat z_{\rho,a}\right)^2,$$
where all related quantities are defined in Section 3.1, since only the OLS estimator is needed in this case. The formulae for LM_l and LM_l* are useful extensions of the results in Anselin et al. (1996) [20] to the panel data case. When λ = 0 and the nuisance parameters σ_μ² and ρ do not deviate from 0, both LM_l and LM_l* are asymptotically distributed as χ²_1. However, when λ = 0 but either σ_μ² or ρ deviates locally from 0, the distribution of LM_l becomes noncentral, and it tends to over-reject the null hypothesis. LM_l*, on the other hand, is still asymptotically distributed as χ²_1 in this case, and thus it does not suffer from the size distortion that LM_l does.
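Analogously to the error-correlation case, a short sketch (ours, hypothetical helper name, reusing the OLS-based quantities from the lm_a sketch) for the lag-dependence pair is:

def lm_l_and_robust(z_rho, z_lam, b1, b2, b3, omega, tau, T):
    """Marginal LM_l and Bera-Yoon robust LM_l* for H0: lambda = 0 (sigma_mu^2 = rho = 0)."""
    lm_l = z_lam**2 / (T * b3 + omega)
    lm_l_star = T * b1 / tau * (z_lam - b2 / b1 * z_rho)**2
    return lm_l, lm_l_star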
The second hypothesis is H_0^m: λ = 0 (σ_μ² = 0, ρ ≠ 0) vs. H_1^m: λ ≠ 0 (σ_μ² = 0, ρ ≠ 0). Under the null hypothesis, it is the pooled panel data model with spatial error correlation. Under the alternative hypothesis, it is the pooled panel data model with both spatial error correlation and spatial lag dependence. The LM test statistic in this case is given by
$$LM_m = \hat\zeta_m\,\hat z_{\lambda,m}^2.$$
L M m is a useful extension of the result in Anselin et al., (1996) [20] to the panel data case, and it is asymptotically distributed as χ 1 2 under H 0 m . It can be easily shown that the robust LM test in this case is the same as L M m . Thus L M m itself is robust against local deviation of σ μ 2 from 0, and this will be confirmed by the simulation results.
The third hypothesis is H_0^n: λ = 0 (σ_μ² > 0, ρ = 0) vs. H_1^n: λ ≠ 0 (σ_μ² > 0, ρ = 0). Under the null hypothesis, it is the random effects panel data model without spatial effects. Under the alternative hypothesis, it is the random effects panel data model with spatial lag dependence. The LM and robust LM test statistics are given by
$$LM_n = \frac{1}{T b_3 + \hat\omega_n}\,\hat z_{\lambda,n}^2, \qquad LM_n^* = \frac{T b_1}{\hat\tau_n}\left(\hat z_{\lambda,n} - \frac{b_2}{b_1}\,\hat z_{\rho,n}\right)^2.$$
Notice that although the formula for LM_n is available in Baltagi and Liu (2008) [8], we provide it here for comparison purposes, since it differs from the robust test LM_n*, which has not been considered previously in the literature. When λ = 0 and the nuisance parameter ρ does not deviate from 0, both LM_n and LM_n* are asymptotically distributed as χ²_1. However, when λ = 0 but ρ deviates locally from 0, the distribution of LM_n becomes noncentral, and it tends to over-reject the null hypothesis. LM_n*, on the other hand, is still asymptotically distributed as χ²_1 in this case, and thus it does not suffer from the size distortion that LM_n does.
The last hypothesis is H_0^o: λ = 0 (σ_μ² > 0, ρ ≠ 0) vs. H_1^o: λ ≠ 0 (σ_μ² > 0, ρ ≠ 0). Under the null hypothesis, it is the random effects panel data model with spatial error correlation. Under the alternative hypothesis, it is the random effects panel data model with both spatial error correlation and spatial lag dependence. The LM test statistic in this case is given by
$$LM_o = \hat\zeta_o\,\hat z_{\lambda,o}^2.$$
Under H 0 o , L M o is asymptotically distributed as χ 1 2 . In practice, to test for the spatial lag dependence, the above test statistics correspond to different prior information on the nuisance parameters, and they need to be analyzed together to lead to the most appropriate model.

4. Monte Carlo Experiment

In this section, we conduct and present a small Monte Carlo experiment to show satisfactory performances of the above LM test statistics. In our Monte Carlo experiment, the data generating process is, for t = 1 , 2 , · · · , T ,
$$(I_N - \lambda W)\,y_t = \alpha\,\iota_N + X_t\beta + (I_N - \rho M)^{-1}(\mu + v_t),$$
where α = 5 and β = 0.5. x_{it} is a single regressor generated as x_{it} = 0.1t + 0.5x_{i,t−1} + z_{it}, where z_{it} is drawn from a uniform distribution on [−0.5, 0.5]. The initial value x_{i0} is set to 5 + 10z_{i0}. The random effects term μ_i is generated as μ_i ~ i.i.n.(0, σ_μ²), and the innovation term v_{it} is generated as v_{it} ~ i.i.n.(0, σ_v²). σ_μ² takes the values 0, 0.2, 0.5, and 0.8, while σ_v² is fixed at 1. The spatial weights matrices M and W are the row-standardized first-order rook and queen contiguity matrices, respectively. The spatial error correlation parameter ρ and the spatial lag parameter λ vary over [−0.8, 0.8] in increments of 0.2. Two sample size combinations (N, T) are considered, namely (49, 7) and (100, 10). Each experiment is replicated 1000 times, and the nominal size is set to 0.05.
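For readers who want to replicate the design, the sketch below (ours, not the authors' code; the seed, the function name, and the returned array layout are assumptions) generates one replication of this data generating process, taking the N × N weights matrices W and M as given (they can be built along the lines of the rook-contiguity sketch in Section 2).

import numpy as np

def simulate_panel(W, M, rho, lam, sigma_mu2, T=7, alpha=5.0, beta=0.5,
                   sigma_v2=1.0, seed=0):
    """One replication of the Monte Carlo DGP:
    (I_N - lam*W) y_t = alpha + beta*x_t + (I_N - rho*M)^{-1} (mu + v_t).
    Returns (T, N) arrays of y and x."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    # x_it = 0.1 t + 0.5 x_{i,t-1} + z_it, z_it ~ U[-0.5, 0.5], x_i0 = 5 + 10 z_i0
    x = np.empty((T + 1, N))
    x[0] = 5 + 10 * rng.uniform(-0.5, 0.5, N)
    for t in range(1, T + 1):
        x[t] = 0.1 * t + 0.5 * x[t - 1] + rng.uniform(-0.5, 0.5, N)
    mu = rng.normal(0.0, np.sqrt(sigma_mu2), N) if sigma_mu2 > 0 else np.zeros(N)
    I_N = np.eye(N)
    A_inv = np.linalg.inv(I_N - rho * M)
    B_inv = np.linalg.inv(I_N - lam * W)
    y = np.empty((T, N))
    for t in range(T):
        v_t = rng.normal(0.0, np.sqrt(sigma_v2), N)
        y[t] = B_inv @ (alpha + beta * x[t + 1] + A_inv @ (mu + v_t))
    return y, x[1:]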
The frequency of rejection (FoR) of the joint test LM_a is summarized in Table 1, where the upper and lower parts correspond to the two sample sizes. We only report the case σ_μ² = 0. For the other cases, that is, σ_μ² = 0.2, 0.5, 0.8, the FoRs are uniformly higher than those for σ_μ² = 0. This is because we are jointly testing σ_μ² = ρ = λ = 0. The empirical sizes of LM_a are 0.049 and 0.050 for the (49, 7) and the (100, 10) samples, respectively. They are almost the same as the nominal size, reflecting that the limiting χ²_3 distribution approximates the finite sample null distribution very well. As ρ or λ deviates from 0, the FoR increases very fast. For example, when the sample size is (49, 7), the FoR is 0.995 when ρ = λ = 0.2, and it reaches 1 when ρ = λ = 0.4. The power performance for the (100, 10) sample is better than that for the (49, 7) sample. The good size and power performance demonstrates that the joint test LM_a should be very useful for applied researchers in determining whether there are individual random effects and spatial effects in a preliminary diagnostic testing step.
Experiment results for L M c and L M e are summarized in Table 2. For L M c , we only report results when ρ = - 0.4 and 0.4 here to save space. The results when ρ takes other values are very similar, and they are available upon request. First, for the (49,7) sample, the empirical sizes of L M c are 0.043 and 0.036 when ρ = - 0.4 and 0.4 , respectively. As σ μ 2 increases from 0, the FoR of L M c increases very fast. Actually, when ρ = - 0.4 , FoR of L M c is 0.976 even when σ μ 2 is only 0.2 . Second, we discussed in Section 3.2 that the robust LM test corresponding to L M c is the same as L M c , which implies that L M c itself is robust against local deviation of λ from 0. This is confirmed by the column for σ μ 2 = 0 in Table 2. Given σ μ 2 = 0 , for the (49,7) sample, when λ varies in [ - 0.4 , 0.4 ] , the FoRs of L M c are in the range of [ 0.036 , 0.057 ] . Thus L M c itself does not suffer from size distortion under local misspecification of the nuisance parameter λ. As expected, the performance of L M c for the ( 100 , 10 ) sample is even better than that for the ( 49 , 7 ) sample. Next, for L M e , we choose to report the simulation results for a few combinations of ρ and λ as shown in the table. Results for other cases are very similar. For the (49,7) sample, the empirical sizes of L M e vary in [ 0.035 , 0.044 ] , and the FoR increases rapidly as σ μ 2 increases from 0. As expected, the performance of L M e for the ( 100 , 10 ) sample is even better than that for the ( 49 , 7 ) sample. Both L M c and L M e are useful in testing for the random effects. L M e is useful in testing for the random effects when the researcher does not have any knowledge about the spatial effects, while L M c is particularly useful when the researcher has information that there is only spatial error correlation.
Table 1. Frequency of rejection (FoR) of LM_a, σ_μ² = 0. Sample sizes: upper part (49, 7); lower part (100, 10).
Upper part: sample size (49, 7)
λ \ ρ    -0.8    -0.6    -0.4    -0.2     0.0     0.2     0.4     0.6     0.8
-0.8    1.000   1.000   1.000   1.000   1.000   1.000   0.999   0.996   1.000
-0.6    1.000   1.000   1.000   1.000   1.000   0.980   0.942   0.998   1.000
-0.4    1.000   1.000   1.000   1.000   0.946   0.692   0.861   1.000   1.000
-0.2    1.000   1.000   1.000   0.984   0.389   0.290   0.954   1.000   1.000
 0.0    1.000   1.000   1.000   0.681   0.049   0.623   1.000   1.000   1.000
 0.2    1.000   1.000   0.994   0.515   0.552   0.995   1.000   1.000   1.000
 0.4    1.000   1.000   0.998   0.957   0.997   1.000   1.000   1.000   1.000
 0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000

Lower part: sample size (100, 10)
λ \ ρ    -0.8    -0.6    -0.4    -0.2     0.0     0.2     0.4     0.6     0.8
-0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
-0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
-0.4    1.000   1.000   1.000   1.000   1.000   0.988   1.000   1.000   1.000
-0.2    1.000   1.000   1.000   1.000   0.864   0.758   1.000   1.000   1.000
 0.0    1.000   1.000   1.000   0.989   0.050   0.986   1.000   1.000   1.000
 0.2    1.000   1.000   1.000   0.911   0.926   1.000   1.000   1.000   1.000
 0.4    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
The simulation results for LM_f and LM_g are presented in Table 3 and Table 4, respectively. From Table 3, the empirical sizes of LM_f are 0.053 and 0.050 for the (49, 7) and (100, 10) samples, respectively. As either ρ or λ deviates from 0, the FoR of LM_f increases very fast, indicating its good power performance. For example, when ρ = λ = 0.2, the FoR of LM_f is 0.994 for the (49, 7) sample, while it is 1 for the (100, 10) sample. In Table 4, we only report the results for the case σ_μ² = 0.5 to save space; results for the other two cases are similar and available upon request. The empirical sizes of LM_g are 0.050 and 0.043 for the (49, 7) and the (100, 10) samples, respectively. Similar to LM_f, as either ρ or λ deviates from 0, the FoR of LM_g increases rapidly. Both LM_f and LM_g are useful in jointly detecting spatial error correlation and spatial lag dependence. LM_f is useful when we assume a pooled panel data model, while LM_g is useful when we assume a random effects panel data model.
Table 2. FoR of LM tests for random effects. Sample sizes: upper part (49, 7); lower part (100, 10).
Upper part: sample size (49, 7)
                              σ_μ² = 0     0.2     0.5     0.8
LM_c   ρ = -0.4   λ = -0.4      0.057    0.974   1.000   1.000
                  λ = -0.2      0.043    0.978   1.000   1.000
                  λ =  0.0      0.043    0.976   1.000   1.000
                  λ =  0.2      0.048    0.973   1.000   1.000
                  λ =  0.4      0.051    0.970   1.000   1.000
LM_c   ρ =  0.4   λ = -0.4      0.047    0.968   1.000   1.000
                  λ = -0.2      0.046    0.969   1.000   1.000
                  λ =  0.0      0.036    0.976   1.000   1.000
                  λ =  0.2      0.052    0.982   1.000   1.000
                  λ =  0.4      0.053    0.969   1.000   1.000
LM_e   ρ = -0.4   λ = -0.4      0.035    0.976   1.000   1.000
       ρ = -0.4   λ =  0.4      0.043    0.982   1.000   1.000
       ρ =  0.0   λ =  0.0      0.035    0.970   1.000   1.000
       ρ =  0.4   λ = -0.4      0.039    0.976   1.000   1.000
       ρ =  0.4   λ =  0.4      0.044    0.983   1.000   1.000

Lower part: sample size (100, 10)
                              σ_μ² = 0     0.2     0.5     0.8
LM_c   ρ = -0.4   λ = -0.4      0.052    1.000   1.000   1.000
                  λ = -0.2      0.047    1.000   1.000   1.000
                  λ =  0.0      0.049    1.000   1.000   1.000
                  λ =  0.2      0.045    1.000   1.000   1.000
                  λ =  0.4      0.057    1.000   1.000   1.000
LM_c   ρ =  0.4   λ = -0.4      0.053    1.000   1.000   1.000
                  λ = -0.2      0.054    1.000   1.000   1.000
                  λ =  0.0      0.038    1.000   1.000   1.000
                  λ =  0.2      0.047    1.000   1.000   1.000
                  λ =  0.4      0.049    1.000   1.000   1.000
LM_e   ρ = -0.4   λ = -0.4      0.041    1.000   1.000   1.000
       ρ = -0.4   λ =  0.4      0.045    1.000   1.000   1.000
       ρ =  0.0   λ =  0.0      0.041    1.000   1.000   1.000
       ρ =  0.4   λ = -0.4      0.047    1.000   1.000   1.000
       ρ =  0.4   λ =  0.4      0.042    1.000   1.000   1.000
Table 3. FoR of LM_f, σ_μ² = 0. Sample sizes: upper part (49, 7); lower part (100, 10).
Upper part: sample size (49, 7)
λ \ ρ    -0.8    -0.6    -0.4    -0.2     0.0     0.2     0.4     0.6     0.8
-0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
-0.6    1.000   1.000   1.000   1.000   0.999   0.986   0.965   1.000   1.000
-0.4    1.000   1.000   1.000   1.000   0.982   0.752   0.919   1.000   1.000
-0.2    1.000   1.000   1.000   0.993   0.487   0.374   0.969   1.000   1.000
 0.0    1.000   1.000   1.000   0.746   0.053   0.684   1.000   1.000   1.000
 0.2    1.000   1.000   0.995   0.573   0.576   0.994   1.000   1.000   1.000
 0.4    1.000   1.000   0.995   0.972   0.996   1.000   1.000   1.000   1.000
 0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000

Lower part: sample size (100, 10)
λ \ ρ    -0.8    -0.6    -0.4    -0.2     0.0     0.2     0.4     0.6     0.8
-0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
-0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
-0.4    1.000   1.000   1.000   1.000   1.000   0.993   1.000   1.000   1.000
-0.2    1.000   1.000   1.000   1.000   0.916   0.829   1.000   1.000   1.000
 0.0    1.000   1.000   1.000   0.992   0.050   0.990   1.000   1.000   1.000
 0.2    1.000   1.000   1.000   0.933   0.953   1.000   1.000   1.000   1.000
 0.4    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
Table 4. FoR of LM_g, σ_μ² = 0.5. Sample sizes: upper part (49, 7); lower part (100, 10).
Upper part: sample size (49, 7)
λ \ ρ    -0.8    -0.6    -0.4    -0.2     0.0     0.2     0.4     0.6     0.8
-0.8    1.000   1.000   1.000   1.000   1.000   1.000   0.997   0.999   1.000
-0.6    1.000   1.000   1.000   1.000   1.000   0.984   0.956   1.000   1.000
-0.4    1.000   1.000   1.000   0.999   0.963   0.738   0.907   1.000   1.000
-0.2    1.000   1.000   1.000   0.989   0.458   0.331   0.971   1.000   1.000
 0.0    1.000   1.000   1.000   0.743   0.050   0.687   1.000   1.000   1.000
 0.2    1.000   1.000   0.998   0.536   0.542   0.993   1.000   1.000   1.000
 0.4    1.000   1.000   0.996   0.962   0.998   1.000   1.000   1.000   1.000
 0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000

Lower part: sample size (100, 10)
λ \ ρ    -0.8    -0.6    -0.4    -0.2     0.0     0.2     0.4     0.6     0.8
-0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
-0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
-0.4    1.000   1.000   1.000   1.000   1.000   0.993   1.000   1.000   1.000
-0.2    1.000   1.000   1.000   1.000   0.898   0.815   1.000   1.000   1.000
 0.0    1.000   1.000   1.000   0.993   0.043   0.991   1.000   1.000   1.000
 0.2    1.000   1.000   1.000   0.928   0.944   1.000   1.000   1.000   1.000
 0.4    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.6    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
 0.8    1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000   1.000
Now we discuss the simulation results for the test statistics for spatial error correlation; their FoRs are summarized in Table 5 and Table 6 for the two sample sizes. We focus on the (49, 7) sample for discussion. For LM_h, the empirical size is 0.048. Moreover, LM_h is very powerful in detecting spatial error correlation given that σ_μ² = λ = 0. For example, the FoR of LM_h is 0.856 when ρ = -0.2. However, as discussed in Section 3.3.2, LM_h is not robust against local misspecification, and this is confirmed by the simulation results. Given ρ = 0, when σ_μ² and λ deviate locally from 0, the large FoRs of LM_h are undesirable. For example, when ρ = 0 but σ_μ² = λ = 0.2, the FoR of LM_h is 0.416. However, this size distortion is avoided by using LM_h*. When ρ = 0, σ_μ² takes values 0, 0.2, and λ ∈ [-0.4, 0.4], the FoRs of LM_h* are in the range [0.039, 0.081]. Although it is a little oversized in some cases, its size performance is much better than that of LM_h. On the other hand, the robust LM test is supposed to be less powerful than the corresponding LM test when the nuisance parameters are correctly specified. This is also confirmed by the simulation results; that is, when σ_μ² = λ = 0, LM_h* is less powerful than LM_h. For example, when ρ = -0.2, the FoR of LM_h is 0.856, while that of LM_h* is 0.626. Next, LM_i tests for spatial error correlation in a pooled panel data model with spatial lag dependence. When σ_μ² = 0, the empirical sizes of LM_i vary in [0.040, 0.060]. As ρ deviates from 0, the FoR increases as expected. Furthermore, as discussed in Section 3.3.2, the robust LM test in this case is the same as LM_i, implying that LM_i is robust against local deviations of σ_μ² from 0. This is confirmed by the simulation results. When σ_μ² = 0.2 and ρ = 0, the empirical sizes of LM_i vary in [0.058, 0.080]. Next, we discuss LM_j and LM_j*. LM_j tests for spatial error correlation in a random effects panel data model without spatial lag dependence. We only present the representative result for σ_μ² = 0.5 to save space; results in other cases are very similar. The empirical size of LM_j is 0.045, and the FoR increases as ρ deviates away from 0. For example, the FoR is 0.835 when ρ = -0.2. However, LM_j suffers from size distortion when the nuisance parameter λ deviates away from 0. For example, when λ = -0.4 and ρ = 0, the FoR of LM_j is 0.747, which is undesirably high. For the same case, the FoR of LM_j* is 0.064. On the other hand, the robust LM test is less powerful when the nuisance parameter is correctly specified. This is also confirmed by the simulation results. For example, when λ = 0 and ρ = -0.2, the FoR of LM_j* is 0.602, while that of LM_j is 0.835. Lastly, we discuss the performance of LM_k. LM_k tests for spatial error correlation in a random effects panel data model with spatial lag dependence. The empirical sizes of LM_k vary in [0.038, 0.058]. As expected, when ρ moves away from 0, the FoR increases rapidly, suggesting its good power performance. For all of the above tests, their performances are even better for the (100, 10) sample than for the (49, 7) sample, and a similar discussion applies. In sum, the test statistics LM_h, LM_h*, LM_i, LM_j, LM_j*, and LM_k are all useful for detecting the spatial error correlation, but they are suited to different assumptions about the nuisance parameters. In practice, researchers are advised to analyze them together to draw correct inferences about ρ.
Table 5. FoR of LM tests for the spatial error correlation. Sample size: (49, 7).
                            ρ = -0.8   -0.6   -0.4   -0.2    0.0    0.2    0.4    0.6    0.8
LM_h   σ_μ² = 0    λ = -0.4    1.000  1.000  1.000  1.000  0.738  0.023  0.671  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.986  0.305  0.193  0.975  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.856  0.048  0.754  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  0.996  0.340  0.452  0.993  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  0.753  0.210  0.965  1.000  1.000  1.000  1.000
LM_h   σ_μ² = 0.2  λ = -0.4    1.000  1.000  1.000  0.998  0.747  0.037  0.620  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.984  0.324  0.189  0.971  1.000  1.000
                   λ =  0.0    1.000  1.000  0.999  0.822  0.065  0.743  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  0.992  0.372  0.416  0.989  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  0.756  0.206  0.954  1.000  1.000  1.000  1.000
LM_h*  σ_μ² = 0    λ = -0.4    1.000  1.000  0.976  0.485  0.071  0.539  0.961  1.000  0.999
                   λ = -0.2    1.000  1.000  0.990  0.562  0.056  0.482  0.941  1.000  0.999
                   λ =  0.0    1.000  1.000  0.995  0.626  0.039  0.431  0.952  0.999  0.989
                   λ =  0.2    1.000  1.000  0.997  0.639  0.052  0.460  0.969  1.000  0.961
                   λ =  0.4    1.000  1.000  0.996  0.524  0.046  0.597  0.980  1.000  0.880
LM_h*  σ_μ² = 0.2  λ = -0.4    1.000  1.000  0.975  0.495  0.081  0.535  0.937  1.000  0.999
                   λ = -0.2    1.000  1.000  0.987  0.553  0.080  0.465  0.923  1.000  0.999
                   λ =  0.0    1.000  1.000  0.988  0.607  0.075  0.432  0.926  0.998  0.991
                   λ =  0.2    1.000  1.000  0.990  0.648  0.056  0.461  0.943  0.999  0.957
                   λ =  0.4    1.000  1.000  0.994  0.533  0.054  0.510  0.967  0.999  0.869
LM_i   σ_μ² = 0    λ = -0.4    1.000  1.000  0.999  0.665  0.050  0.479  0.957  0.998  1.000
                   λ = -0.2    1.000  1.000  0.999  0.641  0.046  0.469  0.940  0.999  1.000
                   λ =  0.0    1.000  1.000  0.997  0.646  0.040  0.448  0.951  1.000  1.000
                   λ =  0.2    1.000  1.000  0.997  0.646  0.060  0.457  0.961  0.999  1.000
                   λ =  0.4    1.000  1.000  0.996  0.639  0.046  0.497  0.966  1.000  1.000
LM_i   σ_μ² = 0.2  λ = -0.4    1.000  1.000  0.995  0.651  0.058  0.458  0.939  0.999  1.000
                   λ = -0.2    1.000  1.000  0.995  0.619  0.080  0.455  0.927  0.998  1.000
                   λ =  0.0    1.000  1.000  0.991  0.619  0.072  0.454  0.920  0.997  1.000
                   λ =  0.2    1.000  1.000  0.992  0.662  0.069  0.463  0.939  1.000  1.000
                   λ =  0.4    1.000  1.000  0.995  0.610  0.063  0.487  0.951  1.000  1.000
LM_j   σ_μ² = 0.5  λ = -0.4    1.000  1.000  1.000  1.000  0.747  0.036  0.657  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.987  0.301  0.192  0.975  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.835  0.045  0.771  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  0.991  0.353  0.427  0.993  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  0.758  0.194  0.957  1.000  1.000  1.000  1.000
LM_j*  σ_μ² = 0.5  λ = -0.4    1.000  1.000  0.979  0.480  0.064  0.522  0.945  1.000  0.998
                   λ = -0.2    1.000  1.000  0.990  0.551  0.060  0.460  0.934  1.000  0.998
                   λ =  0.0    1.000  1.000  0.993  0.602  0.040  0.412  0.943  0.999  0.985
                   λ =  0.2    1.000  1.000  0.993  0.617  0.035  0.429  0.954  1.000  0.944
                   λ =  0.4    1.000  1.000  0.991  0.508  0.042  0.550  0.972  0.999  0.852
LM_k   σ_μ² = 0.5  λ = -0.4    1.000  1.000  0.997  0.660  0.049  0.417  0.933  0.999  1.000
                   λ = -0.2    1.000  1.000  0.998  0.607  0.049  0.422  0.936  0.996  1.000
                   λ =  0.0    1.000  1.000  0.992  0.627  0.058  0.433  0.952  0.999  1.000
                   λ =  0.2    1.000  1.000  0.996  0.602  0.054  0.427  0.952  1.000  1.000
                   λ =  0.4    1.000  1.000  0.995  0.630  0.038  0.480  0.948  1.000  1.000
Table 6. FoR of LM tests for the spatial error correlation. Sample size: (100, 10).
                            ρ = -0.8   -0.6   -0.4   -0.2    0.0    0.2    0.4    0.6    0.8
LM_h   σ_μ² = 0    λ = -0.4    1.000  1.000  1.000  1.000  0.989  0.016  0.996  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  1.000  0.654  0.573  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.999  0.048  0.994  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.628  0.819  1.000  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  0.984  0.369  1.000  1.000  1.000  1.000  1.000
LM_h   σ_μ² = 0.2  λ = -0.4    1.000  1.000  1.000  1.000  0.983  0.030  0.993  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  1.000  0.654  0.548  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.999  0.086  0.993  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.633  0.781  1.000  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  0.976  0.344  1.000  1.000  1.000  1.000  1.000
LM_h*  σ_μ² = 0    λ = -0.4    1.000  1.000  1.000  0.852  0.076  0.938  1.000  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.934  0.061  0.899  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.952  0.051  0.899  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.950  0.036  0.930  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  1.000  0.901  0.067  0.977  1.000  1.000  1.000
LM_h*  σ_μ² = 0.2  λ = -0.4    1.000  1.000  1.000  0.826  0.103  0.914  1.000  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.917  0.086  0.870  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.948  0.079  0.864  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.921  0.052  0.890  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  1.000  0.896  0.070  0.961  1.000  1.000  1.000
LM_i   σ_μ² = 0    λ = -0.4    1.000  1.000  1.000  0.961  0.046  0.910  1.000  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.958  0.047  0.896  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.954  0.048  0.892  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.948  0.047  0.902  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  1.000  0.950  0.048  0.913  1.000  1.000  1.000
LM_i   σ_μ² = 0.2  λ = -0.4    1.000  1.000  1.000  0.955  0.064  0.876  1.000  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.948  0.064  0.865  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.933  0.066  0.864  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.934  0.061  0.870  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  1.000  0.942  0.056  0.885  1.000  1.000  1.000
LM_j   σ_μ² = 0.5  λ = -0.4    1.000  1.000  1.000  1.000  0.990  0.018  0.998  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  1.000  0.658  0.577  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.999  0.049  0.996  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.642  0.810  1.000  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  0.983  0.351  1.000  1.000  1.000  1.000  1.000
LM_j*  σ_μ² = 0.5  λ = -0.4    1.000  1.000  1.000  0.853  0.086  0.932  1.000  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.933  0.060  0.893  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.949  0.049  0.884  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.940  0.035  0.921  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  1.000  0.903  0.058  0.974  1.000  1.000  1.000
LM_k   σ_μ² = 0.5  λ = -0.4    1.000  1.000  1.000  0.958  0.055  0.899  1.000  1.000  1.000
                   λ = -0.2    1.000  1.000  1.000  0.949  0.048  0.888  1.000  1.000  1.000
                   λ =  0.0    1.000  1.000  1.000  0.947  0.050  0.884  1.000  1.000  1.000
                   λ =  0.2    1.000  1.000  1.000  0.949  0.046  0.897  1.000  1.000  1.000
                   λ =  0.4    1.000  1.000  1.000  0.953  0.046  0.911  1.000  1.000  1.000
Finally, the performances of the test statistics for spatial lag dependence are summarized in Table 7 and Table 8. We focus on the (49, 7) sample for discussion. For LM_l, the empirical size is 0.044, and it is very powerful in detecting the spatial lag dependence given that σ_μ² = ρ = 0. For example, the FoR of LM_l is 0.987 when λ = -0.4. However, as discussed in Section 3.3.3, LM_l is not robust against local misspecification, and this is confirmed by the simulation results. Given λ = 0, when σ_μ² and ρ deviate locally from 0, the large FoRs of LM_l are undesirable. For example, when λ = 0 but σ_μ² = ρ = 0.2, the FoR of LM_l is 0.526. This size distortion is to some extent avoided by LM_l*. When λ = 0, σ_μ² takes values 0, 0.2, and ρ ∈ [-0.2, 0.2], the FoRs of LM_l* are in the range [0.035, 0.088]. However, LM_l* should be used with caution in this case, since the range of ρ over which LM_l* provides valid size is narrow, and this becomes even worse for the (100, 10) sample. On the other hand, the robust LM test is supposed to be less powerful than the corresponding LM test when the nuisance parameters are correctly specified. When σ_μ² = ρ = 0, LM_l* is less powerful than LM_l. For example, when λ = -0.4, the FoR of LM_l is 0.987, while that of LM_l* is 0.809. Next, LM_m tests for the spatial lag dependence in a pooled panel data model with spatial error correlation. The empirical sizes of LM_m vary in [0.037, 0.055]. As λ deviates from 0, the FoR increases. Moreover, as discussed in Section 3.3.3, the robust LM test in this case is the same as LM_m, implying that LM_m itself is robust against local deviations of σ_μ² from 0. This is confirmed by the simulation results. When σ_μ² = 0.2 and λ = 0, the FoRs of LM_m vary in [0.060, 0.074]. Next, we discuss LM_n and LM_n*. LM_n tests for the spatial lag dependence in a random effects panel data model without spatial error correlation. We only present the representative result for σ_μ² = 0.5; results in other cases are very similar. The empirical size of LM_n is 0.035, and the FoR increases as λ deviates away from 0. For example, the FoR is 0.981 when λ = -0.4. However, LM_n suffers from size distortion when the nuisance parameter ρ deviates away from 0. For example, when λ = 0 and ρ = -0.2, the FoR of LM_n is 0.476, which is undesirably high. For the same case, the FoR of LM_n* is 0.063. However, like LM_l*, LM_n* should also be used with caution, since the range of ρ over which LM_n* provides valid size is narrow, and it becomes even worse for the (100, 10) sample. On the other hand, LM_n* is less powerful than LM_n when ρ = 0. For example, when ρ = 0 and λ = -0.4, the FoR of LM_n* is 0.778, while that of LM_n is 0.981. Lastly, we discuss the performance of LM_o. LM_o tests for the spatial lag dependence in a random effects panel data model with spatial error correlation. The empirical sizes of LM_o vary in [0.063, 0.079]. As expected, when λ moves away from 0, the FoR increases, suggesting its good power performance. For all of the above tests, their performances for the (100, 10) sample follow a similar pattern. In sum, the test statistics LM_l, LM_l*, LM_m, LM_n, LM_n*, and LM_o are all useful for detecting the spatial lag dependence, but they are suited to different assumptions about the nuisance parameters. In practice, researchers are advised to analyze them together to draw correct inferences about λ.
Table 7. FoR of LM tests for the spatial lag dependence. Sample size: (49, 7).
                            λ = -0.8   -0.6   -0.4   -0.2    0.0    0.2    0.4    0.6    0.8
LM_l   σ_μ² = 0    ρ = -0.4    1.000  1.000  1.000  1.000  0.965  0.221  0.405  1.000  1.000
                   ρ = -0.2    1.000  1.000  1.000  0.982  0.446  0.063  0.938  1.000  1.000
                   ρ =  0.0    1.000  1.000  0.987  0.591  0.044  0.693  0.998  1.000  1.000
                   ρ =  0.2    1.000  0.984  0.631  0.067  0.576  0.988  1.000  1.000  1.000
                   ρ =  0.4    0.979  0.510  0.061  0.593  0.992  1.000  1.000  1.000  1.000
LM_l   σ_μ² = 0.2  ρ = -0.4    1.000  1.000  1.000  1.000  0.964  0.285  0.305  0.999  1.000
                   ρ = -0.2    1.000  1.000  1.000  0.968  0.505  0.071  0.892  1.000  1.000
                   ρ =  0.0    1.000  1.000  0.985  0.620  0.051  0.637  0.998  1.000  1.000
                   ρ =  0.2    1.000  0.987  0.637  0.094  0.526  0.981  1.000  1.000  1.000
                   ρ =  0.4    0.963  0.502  0.081  0.599  0.981  1.000  1.000  1.000  1.000
LM_l*  σ_μ² = 0    ρ = -0.4    0.626  0.318  0.128  0.121  0.337  0.801  0.995  1.000  1.000
                   ρ = -0.2    0.968  0.855  0.547  0.163  0.054  0.497  0.983  1.000  1.000
                   ρ =  0.0    0.999  0.974  0.809  0.338  0.035  0.459  0.978  1.000  1.000
                   ρ =  0.2    0.999  0.991  0.858  0.305  0.066  0.649  0.993  1.000  1.000
                   ρ =  0.4    1.000  0.976  0.651  0.121  0.323  0.922  1.000  1.000  1.000
LM_l*  σ_μ² = 0.2  ρ = -0.4    0.582  0.318  0.157  0.153  0.353  0.753  0.992  1.000  1.000
                   ρ = -0.2    0.961  0.812  0.510  0.181  0.088  0.450  0.964  1.000  1.000
                   ρ =  0.0    0.998  0.963  0.773  0.366  0.063  0.421  0.963  1.000  1.000
                   ρ =  0.2    0.999  0.985  0.824  0.324  0.082  0.614  0.991  1.000  1.000
                   ρ =  0.4    1.000  0.952  0.641  0.145  0.309  0.904  1.000  1.000  1.000
LM_m   σ_μ² = 0    ρ = -0.4    1.000  1.000  0.958  0.490  0.050  0.484  0.981  1.000  1.000
                   ρ = -0.2    1.000  0.998  0.941  0.444  0.037  0.422  0.966  1.000  0.973
                   ρ =  0.0    1.000  0.998  0.900  0.406  0.055  0.371  0.909  0.997  0.757
                   ρ =  0.2    1.000  0.988  0.862  0.360  0.049  0.285  0.769  0.923  0.556
                   ρ =  0.4    0.998  0.971  0.756  0.263  0.046  0.233  0.575  0.766  0.366
LM_m   σ_μ² = 0.2  ρ = -0.4    1.000  1.000  0.978  0.543  0.067  0.493  0.990  1.000  1.000
                   ρ = -0.2    1.000  0.999  0.929  0.478  0.074  0.446  0.964  1.000  0.962
                   ρ =  0.0    1.000  0.998  0.924  0.478  0.060  0.367  0.934  0.998  0.701
                   ρ =  0.2    1.000  0.996  0.869  0.338  0.067  0.299  0.818  0.946  0.462
                   ρ =  0.4    0.999  0.977  0.750  0.294  0.068  0.225  0.591  0.772  0.531
LM_n   σ_μ² = 0.5  ρ = -0.4    1.000  1.000  1.000  1.000  0.970  0.269  0.291  0.999  1.000
                   ρ = -0.2    1.000  1.000  1.000  0.979  0.476  0.063  0.917  1.000  1.000
                   ρ =  0.0    1.000  1.000  0.981  0.591  0.035  0.676  1.000  1.000  1.000
                   ρ =  0.2    1.000  0.984  0.612  0.067  0.557  0.989  1.000  1.000  1.000
                   ρ =  0.4    0.965  0.461  0.080  0.629  0.986  1.000  1.000  1.000  1.000
LM_n*  σ_μ² = 0.5  ρ = -0.4    0.577  0.285  0.130  0.137  0.353  0.779  0.991  1.000  1.000
                   ρ = -0.2    0.966  0.823  0.515  0.153  0.063  0.453  0.972  1.000  1.000
                   ρ =  0.0    0.999  0.971  0.778  0.328  0.051  0.421  0.973  1.000  1.000
                   ρ =  0.2    0.999  0.983  0.830  0.287  0.072  0.642  0.996  1.000  1.000
                   ρ =  0.4    1.000  0.952  0.623  0.103  0.326  0.928  1.000  1.000  1.000
LM_o   σ_μ² = 0.5  ρ = -0.4    1.000  1.000  0.961  0.504  0.064  0.532  0.992  1.000  0.998
                   ρ = -0.2    1.000  0.999  0.933  0.477  0.070  0.484  0.982  1.000  0.964
                   ρ =  0.0    1.000  0.997  0.899  0.442  0.078  0.438  0.935  0.996  0.644
                   ρ =  0.2    1.000  0.999  0.918  0.475  0.079  0.347  0.818  0.892  0.371
                   ρ =  0.4    1.000  0.995  0.855  0.381  0.063  0.205  0.460  0.522  0.275
Table 8. FoR of LM tests for the spatial lag dependence. Sample size: (100, 10).
                            λ = -0.8   -0.6   -0.4   -0.2    0.0    0.2    0.4    0.6    0.8
LM_l   σ_μ² = 0    ρ = -0.4    1.000  1.000  1.000  1.000  1.000  0.608  0.699  1.000  1.000
                   ρ = -0.2    1.000  1.000  1.000  1.000  0.853  0.108  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  1.000  0.953  0.056  0.975  1.000  1.000  1.000
                   ρ =  0.2    1.000  1.000  0.939  0.068  0.922  1.000  1.000  1.000  1.000
                   ρ =  0.4    1.000  0.825  0.102  0.978  1.000  1.000  1.000  1.000  1.000
LM_l   σ_μ² = 0.2  ρ = -0.4    1.000  1.000  1.000  1.000  1.000  0.660  0.581  1.000  1.000
                   ρ = -0.2    1.000  1.000  1.000  1.000  0.861  0.105  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  1.000  0.929  0.082  0.959  1.000  1.000  1.000
                   ρ =  0.2    1.000  1.000  0.930  0.087  0.889  1.000  1.000  1.000  1.000
                   ρ =  0.4    1.000  0.787  0.148  0.961  1.000  1.000  1.000  1.000  1.000
LM_l*  σ_μ² = 0    ρ = -0.4    0.955  0.579  0.152  0.198  0.731  0.995  1.000  1.000  1.000
                   ρ = -0.2    1.000  0.997  0.908  0.316  0.101  0.893  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  0.993  0.723  0.058  0.844  1.000  1.000  1.000
                   ρ =  0.2    1.000  1.000  0.997  0.642  0.111  0.959  1.000  1.000  1.000
                   ρ =  0.4    1.000  1.000  0.946  0.128  0.704  1.000  1.000  1.000  1.000
LM_l*  σ_μ² = 0.2  ρ = -0.4    0.927  0.542  0.175  0.238  0.715  0.988  1.000  1.000  1.000
                   ρ = -0.2    1.000  0.989  0.860  0.327  0.137  0.849  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  0.994  0.684  0.093  0.798  1.000  1.000  1.000
                   ρ =  0.2    1.000  1.000  0.992  0.612  0.131  0.948  1.000  1.000  1.000
                   ρ =  0.4    1.000  1.000  0.921  0.153  0.679  1.000  1.000  1.000  1.000
LM_m   σ_μ² = 0    ρ = -0.4    1.000  1.000  1.000  0.897  0.052  0.904  1.000  1.000  1.000
                   ρ = -0.2    1.000  1.000  1.000  0.864  0.054  0.860  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  1.000  0.794  0.053  0.794  1.000  1.000  0.995
                   ρ =  0.2    1.000  1.000  0.997  0.712  0.055  0.679  0.994  1.000  0.883
                   ρ =  0.4    1.000  1.000  0.987  0.588  0.045  0.522  0.957  0.993  0.624
LM_m   σ_μ² = 0.2  ρ = -0.4    1.000  1.000  1.000  0.870  0.068  0.858  1.000  1.000  1.000
                   ρ = -0.2    1.000  1.000  1.000  0.810  0.075  0.800  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  1.000  0.795  0.075  0.732  0.999  1.000  0.989
                   ρ =  0.2    1.000  1.000  0.993  0.679  0.070  0.630  0.993  1.000  0.888
                   ρ =  0.4    1.000  1.000  0.989  0.592  0.073  0.491  0.951  0.993  1.000
LM_n   σ_μ² = 0.5  ρ = -0.4    1.000  1.000  1.000  1.000  1.000  0.648  0.634  1.000  1.000
                   ρ = -0.2    1.000  1.000  1.000  1.000  0.868  0.086  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  1.000  0.941  0.053  0.975  1.000  1.000  1.000
                   ρ =  0.2    1.000  1.000  0.930  0.056  0.930  1.000  1.000  1.000  1.000
                   ρ =  0.4    1.000  0.795  0.124  0.980  1.000  1.000  1.000  1.000  1.000
LM_n*  σ_μ² = 0.5  ρ = -0.4    0.941  0.531  0.132  0.227  0.726  0.995  1.000  1.000  1.000
                   ρ = -0.2    1.000  0.995  0.884  0.305  0.105  0.874  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  0.995  0.702  0.055  0.813  1.000  1.000  1.000
                   ρ =  0.2    1.000  1.000  0.999  0.624  0.100  0.961  1.000  1.000  1.000
                   ρ =  0.4    1.000  1.000  0.940  0.137  0.731  1.000  1.000  1.000  1.000
LM_o   σ_μ² = 0.5  ρ = -0.4    1.000  1.000  1.000  0.884  0.062  0.992  1.000  1.000  1.000
                   ρ = -0.2    1.000  1.000  1.000  0.858  0.076  0.880  1.000  1.000  1.000
                   ρ =  0.0    1.000  1.000  1.000  0.830  0.084  0.812  1.000  1.000  0.979
                   ρ =  0.2    1.000  1.000  1.000  0.759  0.076  0.696  0.994  1.000  0.761
                   ρ =  0.4    1.000  1.000  0.994  0.647  0.070  0.497  0.925  0.966  0.746

5. Empirical Illustration

In this section, we revisit the empirical example in Baltagi and Levin (1992) [32]. For the purpose of illustrating the usefulness of the test statistics in our framework, we estimate a static demand model for cigarettes as in Elhorst (2014) [25]. The data, obtained from the Wiley website, is a panel of 45 U.S. states and Washington D.C. over the period 1963–1992. We estimate the regression equation
$$\ln C_t = \lambda W \ln C_t + \beta_0 + \beta_1 \ln P_t + \beta_2 \ln Y_t + \epsilon_t, \qquad \epsilon_t = \rho M \epsilon_t + \mu + v_t,$$
for t = 1, ..., T. In the above specification, C_t is the vector of average cigarette consumption (in packs) per capita (14 years and older) for all the states in year t. P_t is the corresponding price (per pack) vector in year t, with β_1 capturing the price effect on the demand for cigarettes. Y_t is the vector of per capita disposable income in year t, with β_2 capturing the income effect on the demand for cigarettes. In the regression analysis, in addition to the standard price and income effects, we are particularly interested in whether and how the consumption of cigarettes in one state is related to that of its neighboring states. This is captured by the spatial lag dependence structure (or the corresponding parameter λ). Further, we use the error component structure to allow for individual random effects and spatial correlation in the error term. For the spatial weights matrices, we choose two specifications and try different combinations of them. One is the standard row-normalized first-order rook contiguity matrix. The other is the row-standardized border length weights matrix suggested in Debarsy et al. (2012) [33].
Since we are particularly interested in whether and how cigarette consumption is spatially correlated across states, we plot the average consumption of cigarettes (in packs) per capita over the 30 years for all the states in Figure 1. On the one hand, we see that cigarette consumption in some states is negatively correlated with that of their neighboring states. For example, Utah, which has a high percentage of Mormon population,6 has the lowest consumption of cigarettes, with an average of 67.9 packs per capita per year. In sharp contrast, the consumption of cigarettes in Nevada is nearly 2.6 times that of Utah, which could be attributed to the fact that Nevada is a major tourist state with many casinos. The cigarette consumption in Utah is thus negatively correlated with that of its neighboring states, and a similar pattern holds for Nevada, New Mexico, Kentucky, and New Hampshire. On the other hand, we see that cigarette consumption in many other states tends to be similar and thus positively correlated; examples include Iowa, Wisconsin, Pennsylvania, Alabama, and Georgia. The overall spatial dependence is not clear, and we use the formal regression analysis below to assess it.
Figure 1. Average Consumption of Cigarettes (in packs) per Capita per Year by States, 1963–1992.
Before the regression analysis, we first perform the diagnostic procedures; the results are summarized in Table 9. The first row of Table 9 shows that the joint null hypothesis H 0 a is strongly rejected, so the classical pooled panel data model is not appropriate for this data set and the OLS estimator is biased. The next four rows show that the random effects model is strongly favored over the pooled panel data model, regardless of the assumptions imposed on the spatial parameters. Next, when we test for the spatial error effect and the spatial lag effect jointly, the values of L M f and L M g show that the null hypothesis ρ = λ = 0 is rejected whether we assume σ μ 2 = 0 or allow for σ μ 2 > 0 . This points us to a random effects model with at least one type of spatial effect. In the next six rows, we focus on testing for spatial error correlation. Although these test statistics unanimously reject the null hypothesis ρ = 0 in favor of a model with spatial error correlation, we note that the appropriate statistics at this stage are L M j , L M j * , and L M k , given the presence of random effects while spatial lag dependence has not yet been established. Finally, we test for spatial lag dependence; the results in the last six rows suggest rejecting the null hypothesis of no spatial lag dependence, although one value of L M l * is small. Given that the results above indicate the presence of both random effects and spatial error correlation, the appropriate statistics are L M l * , L M n * , and L M o . From the estimation results in Table 10, the estimated ρ lies between 0.353 and 0.622, so it is not proper to treat such values of ρ as local deviations from 0; as a result, the robust LM tests do not apply. We are left with L M o , which points to the presence of spatial lag dependence. In the following, we first estimate the random effects model with spatial error correlation using the two types of spatial weights matrices, and then add the spatially lagged dependent variable as suggested by our test statistics.
The estimation results are summarized in Table 10. The first column provides the OLS benchmark estimates for comparison purposes. Although the OLS estimator is biased, all the parameter estimates have the expected signs: price has a negative effect on cigarette consumption per capita, and per capita disposable income has a positive effect, which is consistent with standard consumption theory. In Columns 2 and 4, we include the spatial error correlation for the two different spatial weights matrices. Compared to the OLS estimates, the price and income elasticities both become smaller in magnitude but keep the same signs. The estimated spatial error correlations, 0.353 and 0.364, are positive and do not differ much across the two spatial weights matrices. In Columns 3, 5, 6, and 7, we estimate the full model with different combinations of the weights matrices. The estimates of the spatial lag parameter are negative and statistically significant, which is consistent with our diagnostic testing results. We thus conclude that the full model is the more appropriate specification, and overall we find that negative correlation dominates the spatial dependence among cigarette consumptions.
Table 9. LM Testing Statistics.
W = Rook, M = Rook  W = Border, M = Border  W = Rook, M = Border  W = Border, M = Rook
Joint Test  L M a  12559 ***  12542 ***  12532 ***  12555 ***
Testing for Random Effects
L M b  12471 ***  12471 ***  12471 ***  12471 ***
L M c  12207 ***  12260 ***  12260 ***  12207 ***
L M d  12471 ***  12530 ***  12471 ***  12530 ***
L M e  1354.7 ***  1662.7 ***  1573.1 ***  1457.8 ***
Testing for Spatial Effects
Spatial Error and Lag
L M f  88.13 ***  70.78 ***  60.82 ***  83.76 ***
L M g  172.81 ***  188.36 ***  174.62 ***  143.68 ***
Spatial Error
L M h  76.35 ***  60.72 ***  60.72 ***  76.35 ***
L M h *  51.78 ***  42.07 ***  24.47 ***  55.05 ***
L M i  32.39 ***  23.68 ***  10.33 ***  38.02 ***
L M j  138.96 ***  156.08 ***  156.08 ***  138.96 ***
L M j *  126.82 ***  126.41 ***  128.63 ***  81.72 ***
L M k  94.01 ***  112.90 ***  99.03 ***  58.83 ***
Spatial Lag
L M l  36.35 ***  28.71 ***  36.35 ***  28.71 ***
L M l *  11.77 ***  10.06 ***  0.10  7.41 ***
L M m  1147.00 ***  1385.40 ***  1028.20 ***  580.53 ***
L M n  45.99 ***  61.95 ***  45.99 ***  61.95 ***
L M n *  33.85 ***  32.28 ***  18.53 ***  4.72 ***
L M o  133.96 ***  283.36 ***  108.62 ***  80.05 ***
* p < 0.1 ; ** p < 0.05 ; *** p < 0.01 .
Table 10. Estimation of the Cigarette Demand Function.
Model  OLS  MLE (M = Rook)  MLE (W = M = Rook)  MLE (M = Border)  MLE (W = M = Border)  MLE (W = Rook, M = Border)  MLE (W = Border, M = Rook)
β ^ 0  2.825 ***  2.918 ***  4.267 ***  2.949 ***  3.737 ***  4.634 ***  3.227 ***
  (0.098)  (0.086)  (0.241)  (0.087)  (0.196)  (0.261)  (0.169)
β ^ 1  -0.773 ***  -0.739 ***  -0.867 ***  -0.729 ***  -0.821 ***  -0.851 ***  -0.788 ***
  (0.026)  (0.021)  (0.026)  (0.021)  (0.027)  (0.025)  (0.030)
β ^ 2  0.586 ***  0.559 ***  0.645 ***  0.551 ***  0.616 ***  0.628 ***  0.595 ***
  (0.022)  (0.018)  (0.022)  (0.018)  (0.022)  (0.022)  (0.024)
λ ^  –  –  -0.329 ***  –  -0.204 ***  -0.388 ***  -0.088 **
  –  –  (0.044)  –  (0.039)  (0.044)  (0.038)
ρ ^  –  0.353 ***  0.586 ***  0.364 ***  0.510 ***  0.622 ***  0.419 ***
  –  (0.030)  (0.034)  (0.028)  (0.034)  (0.032)  (0.039)
σ ^ μ 2  –  0.152 ***  0.140 ***  0.154 ***  0.145 ***  0.140 ***  0.149 ***
  –  (0.015)  (0.014)  (0.016)  (0.015)  (0.014)  (0.015)
σ ^ v 2  –  0.075 ***  0.069 ***  0.073 ***  0.071 ***  0.067 ***  0.074 ***
  –  (0.001)  (0.002)  (0.001)  (0.002)  (0.002)  (0.001)
log-likelihood  450.94  1489.2  1514.7  1502.4  1514.6  1537.9  1491.7
* p < 0.1 ; ** p < 0.05 ; *** p < 0.01 .

6. Conclusions

In this paper, we propose a random effects panel data model with both spatially correlated error components and spatially lagged dependent variables, and we consider diagnostic testing within such a framework. We first derive the joint LM test for the individual random effects, the spatial error correlation, and the spatial lag dependence. In practice, applied researchers should first consider this joint test. If the joint null hypothesis cannot be rejected, it is reasonable to adopt the classical pooled panel data model; otherwise, the individual random effects, the spatial error correlation, or the spatial lag dependence must be taken into consideration. Next, we derive a wide range of LM tests for the individual random effects and for the two spatial effects separately. In addition, in order to guard against possible local model misspecification, we apply the Bera and Yoon (1993) [19] principle and construct robust LM tests in some cases. These test statistics complement each other and should be used together in diagnostic testing to search for the most appropriate model. A small Monte Carlo experiment shows that the size and power performances of these test statistics are satisfactory. We further use the cigarette demand data set in Baltagi and Levin (1992) [32] to illustrate our testing procedures.
Some future research directions can be considered. First, although the specification of spatially correlated error components used in this paper allows for spatial spillovers of both permanent and temporary shocks, it does not permit different intensities for these two types of shocks. It would be of interest to relax this assumption and further generalize our model (see Baltagi et al., 2013 [12]). Second, one could borrow the ideas in Baltagi and Yang (2013a, 2013b) [15,17] to modify our test statistics in order to remedy finite-sample distributional misspecification and sensitivity to the spatial layout, or to make them robust against unknown heteroskedasticity.

Acknowledgments

We thank the Editor-in-Chief, Kerry Patterson, and two anonymous referees for their constructive comments that have greatly improved the article. We also thank Anil Bera and participants at the IVth World Conference of Spatial Econometric Association for helpful discussions. All remaining errors are our own.

Author Contributions

Kuan-Pin Lin suggested the problem and Ming He solved the problem. Both authors contributed to the final paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

In the appendices, we provide detailed derivations of the score vector, information matrix and the LM test statistics in each case.

Appendix A

In this appendix, we provide formulae for the score vector and the information matrix of our general model specification. Let $R_1 = M (I_N - \rho M)^{-1}$, $R_2 = W (I_N - \rho M)^{-1}$, $R_3 = W (I_N - \lambda W)^{-1}$ and $R_4 = (I_N - \rho M)' (I_N - \rho M)$. The score vector is
\frac{\partial L(\delta)}{\partial \delta} = \begin{pmatrix} s_\beta \\ s_\rho \\ s_\lambda \\ s_{\sigma_\mu^2} \\ s_{\sigma_v^2} \end{pmatrix} = \begin{pmatrix} X' A' \Omega^{-1} A \epsilon \\ -T\,\mathrm{tr}(R_1) + \epsilon' A' \Omega^{-1} (I_T \otimes M) \epsilon \\ -T\,\mathrm{tr}(R_3) + \epsilon' A' \Omega^{-1} A (I_T \otimes W) y \\ -\dfrac{N T}{2 (T\sigma_\mu^2 + \sigma_v^2)} + \dfrac{T}{2 (T\sigma_\mu^2 + \sigma_v^2)^2}\, \epsilon' A' (\bar{J}_T \otimes I_N) A \epsilon \\ -\dfrac{N}{2 (T\sigma_\mu^2 + \sigma_v^2)} - \dfrac{N (T-1)}{2 \sigma_v^2} - \dfrac{1}{2}\, \epsilon' A' \dfrac{\partial \Omega^{-1}}{\partial \sigma_v^2} A \epsilon \end{pmatrix},
where $\epsilon = B y - X \beta$. Let $I$ be the information matrix, that is, $I = -E\left[\partial^2 L(\delta)/\partial\delta\,\partial\delta'\right]$. Then after some routine calculation, the elements of $I$ are given by
I_{\beta\beta} = X' A' \Omega^{-1} A X, \quad I_{\beta\rho} = 0, \quad I_{\beta\lambda} = X' A' \Omega^{-1} A (I_T \otimes W) B^{-1} X \beta, \quad I_{\beta\sigma_\mu^2} = 0, \quad I_{\beta\sigma_v^2} = 0,
I_{\rho\rho} = T\,\mathrm{tr}(R_1 R_1 + R_1' R_1), \quad I_{\rho\lambda} = T\,\mathrm{tr}(R_3 R_1) + T\,\mathrm{tr}\left[ R_3 (I_N - \rho M)^{-1} R_1 (I_N - \rho M) \right],
I_{\rho\sigma_\mu^2} = \frac{T}{T\sigma_\mu^2 + \sigma_v^2}\,\mathrm{tr}(R_1), \quad I_{\rho\sigma_v^2} = \left( \frac{1}{T\sigma_\mu^2 + \sigma_v^2} + \frac{T-1}{\sigma_v^2} \right) \mathrm{tr}(R_1),
I_{\lambda\lambda} = T\,\mathrm{tr}(R_3 R_3) + T\,\mathrm{tr}(R_4 R_3 R_4^{-1} R_3) + (B^{-1} X \beta)' (I_T \otimes W)' A' \Omega^{-1} A (I_T \otimes W) B^{-1} X \beta,
I_{\lambda\sigma_\mu^2} = \frac{T}{T\sigma_\mu^2 + \sigma_v^2}\,\mathrm{tr}(R_3), \quad I_{\lambda\sigma_v^2} = \left( \frac{1}{T\sigma_\mu^2 + \sigma_v^2} + \frac{T-1}{\sigma_v^2} \right) \mathrm{tr}(R_3),
I_{\sigma_\mu^2\sigma_\mu^2} = \frac{N T^2}{2 (T\sigma_\mu^2 + \sigma_v^2)^2}, \quad I_{\sigma_\mu^2\sigma_v^2} = \frac{N T}{2 (T\sigma_\mu^2 + \sigma_v^2)^2}, \quad I_{\sigma_v^2\sigma_v^2} = \frac{N}{2 (T\sigma_\mu^2 + \sigma_v^2)^2} + \frac{N (T-1)}{2 (\sigma_v^2)^2} .
Due to the large number of test statistics in this paper, it turns out to be convenient to introduce some general notations for reference and easy exposition. Let
\hat{z}_\rho = \hat{\epsilon}' \hat{A}' \hat{\Omega}^{-1} (I_T \otimes M) \hat{\epsilon}, \qquad \hat{z}_\lambda = \hat{\epsilon}' \hat{A}' \hat{\Omega}^{-1} \hat{A} (I_T \otimes W) y, \qquad \hat{z}_{\sigma_\mu^2} = \frac{\hat{\epsilon}' \hat{A}' (\bar{J}_T \otimes I_N) \hat{A} \hat{\epsilon}}{\hat{\sigma}_v^2} - N,
where ϵ ^ = B ^ y - X β ^ , and A ^ , B ^ , Ω ^ , β ^ , σ ^ v 2 are restricted MLEs of A , B , Ω , β , σ v 2 , respectively. Next, define
\hat{\nu} = \hat{y}' (I_T \otimes W)' \hat{A}' \hat{\Omega}^{-1} \hat{A} (I_T \otimes W) \hat{y}, \qquad \hat{\tau} = T^2 (b_1 b_3 - b_2^2) + T b_1 \hat{\omega}, \qquad \hat{\omega} = \hat{y}' (I_T \otimes W)' \hat{A}' \left[ \hat{\Omega}^{-1} - \hat{\Omega}^{-1} \hat{A} X (X' \hat{A}' \hat{\Omega}^{-1} \hat{A} X)^{-1} X' \hat{A}' \hat{\Omega}^{-1} \right] \hat{A} (I_T \otimes W) \hat{y},
where y ^ = B ^ - 1 X β ^ , b 1 = tr ( M M + M M ) , b 2 = tr ( M W + M W ) , b 3 = tr ( W W + W W ) . Finally, let
\hat{\xi} = \frac{N T \hat{\vartheta}_2 + N \hat{\omega} - 2 T \hat{\vartheta}_3^2}{T b_1 \left( N T \hat{\vartheta}_2 + N \hat{\omega} - 2 T \hat{\vartheta}_3^2 \right) - N (T \hat{\vartheta}_1)^2}, \qquad \hat{\zeta} = \frac{N \hat{\theta}_1 - 2 \hat{\theta}_3^2}{\left( N \hat{\theta}_1 - 2 \hat{\theta}_3^2 \right) (T \hat{\theta}_4 + \hat{\omega}) - N T \hat{\theta}_2^2},
where ϑ ^ 1 = tr [ ( M + M ) R ^ 3 ] , ϑ ^ 2 = tr ( R ^ 3 R ^ 3 + R ^ 3 R ^ 3 ) , ϑ ^ 3 = tr ( R ^ 3 ) , θ ^ 1 = tr R ^ 1 R ^ 1 + R ^ 1 R ^ 1 , θ ^ 2 = tr W R ^ 1 + R ^ 2 R ^ 1 ( I N - ρ ^ M ) , θ ^ 3 = tr ( R ^ 1 ) , θ ^ 4 = tr ( W W ) + tr R ^ 2 R ^ 2 R ^ 4 , and R ^ 1 , R ^ 2 , R ^ 3 , R ^ 4 are restricted MLEs of R 1 , R 2 , R 3 , R 4 , respectively.
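As a computational note, most of these quantities reduce to a few matrix products once the weights matrices and the restricted estimates are available. The sketch below (our own illustration; the function name is hypothetical) evaluates the purely weights-based traces b 1, b 2, b 3 and ϑ̂ 1, ϑ̂ 2, ϑ̂ 3, θ̂ 1, θ̂ 3 at candidate restricted estimates ρ̂ and λ̂; the transpose placement in ϑ̂ 2 and θ̂ 1 is assumed to follow the same pattern as b 1 = tr(MM + M'M). The remaining quantities (θ̂ 2, θ̂ 4, ω̂) also involve the regression data and are not computed here.

```python
import numpy as np

def trace_quantities(W, M, rho_hat=0.0, lam_hat=0.0):
    """Trace terms entering the LM statistics, evaluated at candidate
    restricted estimates rho_hat and lam_hat (illustrative sketch)."""
    I_N = np.eye(W.shape[0])
    b1 = np.trace(M @ M + M.T @ M)
    b2 = np.trace(M @ W + M.T @ W)
    b3 = np.trace(W @ W + W.T @ W)
    R1 = M @ np.linalg.inv(I_N - rho_hat * M)   # R1 = M (I_N - rho M)^{-1}
    R3 = W @ np.linalg.inv(I_N - lam_hat * W)   # R3 = W (I_N - lambda W)^{-1}
    vartheta1 = np.trace((M + M.T) @ R3)
    vartheta2 = np.trace(R3 @ R3 + R3.T @ R3)   # read by analogy with b1
    vartheta3 = np.trace(R3)
    theta1 = np.trace(R1 @ R1 + R1.T @ R1)      # read by analogy with b1
    theta3 = np.trace(R1)
    return b1, b2, b3, vartheta1, vartheta2, vartheta3, theta1, theta3
```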
For notational convenience in deriving many LM test statistics in Appendix B, we write the general information matrix as
I = I β β I β ρ I β λ I β σ μ 2 I β σ v 2 · I ρ ρ I ρ λ I ρ σ μ 2 I ρ σ v 2 · · I λ λ I λ σ μ 2 I λ σ v 2 · · · I σ μ 2 σ μ 2 I σ μ 2 σ v 2 · · · · I σ v 2 σ v 2 ,
and partition it into
I = I 11 I 12 I 21 I 22 , where I 11 = I β β .
Now
( I 22 - I 21 I 11 - 1 I 12 ) = I ρ ρ I ρ λ I ρ σ μ 2 I ρ σ v 2 · I λ λ I λ σ μ 2 I λ σ v 2 · · I σ μ 2 σ μ 2 I σ μ 2 σ v 2 · · · I σ v 2 σ v 2 - I ρ β I β β - 1 I λ β I β β - 1 I σ μ 2 β I β β - 1 I σ v 2 β I β β - 1 I β ρ I β λ I β σ μ 2 I β σ v 2 = J ρ ρ J ρ λ J ρ σ μ 2 J ρ σ v 2 · J λ λ J λ σ μ 2 J λ σ v 2 · · J σ μ 2 σ μ 2 J σ μ 2 σ v 2 · · · J σ v 2 σ v 2 J ,
where J s 1 s 2 = I s 1 s 2 - I s 1 β I β β - 1 I β s 2 for s 1 , s 2 = ρ , λ , σ μ 2 or σ v 2 . Further partition J into
J = J 11 J 12 J 21 J 22 , where J 11 = J ρ ρ J ρ λ · J λ λ .
Let K = J 11 - J 12 J 22 - 1 J 21 and L = J 22 - J 21 J 11 - 1 J 12 , thus the upper left 2 by 2 submatrix and lower right 2 by 2 submatrix of J - 1 are K - 1 , L - 1 , respectively.7 Moreover, we use I s 1 s 2 Δ to denote the term in I - 1 with the same location as I s 1 s 2 in I.

Appendix B

In this appendix, we provide detailed derivations for all of the LM test statistics.

B.1. Derivation of LMa

The restricted MLE under H 0 a is essentially the OLS estimator, i.e., β ^ a = ( X X ) - 1 X y , σ ^ v , a 2 = ( ϵ ^ a ϵ ^ a ) / ( N T ) , where ϵ ^ a = y - y ^ a , y ^ a = X β ^ a . The score vector under H 0 a and evaluated at the restricted MLE is
L ( δ ) δ | a = 0 z ^ ρ , a z ^ λ , a T 2 σ ^ v , a 2 z ^ σ μ 2 , a 0 ,
where z ^ ρ , a = [ ϵ ^ a ( I T M ) ϵ ^ a ] / σ ^ v , a 2 , z ^ λ , a = [ ϵ ^ a ( I T W ) y ] / σ ^ v , a 2 , z ^ σ μ 2 , a = [ ϵ ^ a ( J ¯ T I N ) ϵ ^ a ] / σ ^ v , a 2 - N . After straightforward calculation, the information matrix under H 0 a and evaluated at the restricted MLE is
I a = X X σ ^ v , a 2 0 X ( I T W ) y ^ a σ ^ v , a 2 0 0 · T b 1 T b 2 0 0 · · T b 3 + ν ^ a 0 0 · · · N T 2 2 ( σ ^ v , a 2 ) 2 N T 2 ( σ ^ v , a 2 ) 2 · · · · N T 2 ( σ ^ v , a 2 ) 2 ,
where ν ^ a = [ y ^ a ( I T ( W W ) ) y ^ a ] / σ ^ v , a 2 . In this case,
J a = T b 1 T b 2 0 0 · T b 3 + ω ^ a 0 0 · · N T 2 2 ( σ ^ v , a 2 ) 2 N T 2 ( σ ^ v , a 2 ) 2 · · · N T 2 ( σ ^ v , a 2 ) 2 , K a = T b 1 T b 2 · T b 3 + ω ^ a , L a = N T 2 2 ( σ ^ v , a 2 ) 2 N T 2 ( σ ^ v , a 2 ) 2 · N T 2 ( σ ^ v , a 2 ) 2 ,
where ω ^ a = [ y ^ a ( I T W ) ( I N T - X ( X X ) - 1 X ) ( I T W ) y ^ a ] / σ ^ v , a 2 . The first term of ( L a ) - 1 is easily calculated to be 2 ( σ ^ v , a 2 ) 2 / [ N T ( T - 1 ) ] . Also, we have
(K_a)^{-1} = \frac{1}{\hat{\tau}_a} \begin{pmatrix} T b_3 + \hat{\omega}_a & -T b_2 \\ -T b_2 & T b_1 \end{pmatrix}, \qquad \text{where } \hat{\tau}_a = T^2 (b_1 b_3 - b_2^2) + T b_1 \hat{\omega}_a .
Finally, the joint LM test statistic is given by
LM_a = \frac{T b_3 + \hat{\omega}_a}{\hat{\tau}_a}\, \hat{z}_{\rho,a}^2 + \frac{T b_1}{\hat{\tau}_a}\, \hat{z}_{\lambda,a}^2 - \frac{2 T b_2}{\hat{\tau}_a}\, \hat{z}_{\rho,a} \hat{z}_{\lambda,a} + \frac{T}{2 N (T-1)}\, \hat{z}_{\sigma_\mu^2,a}^2 .
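For concreteness, here is a minimal numpy sketch of this joint statistic (our own illustration, not the authors' code). It assumes the NT-vector y is stacked by time period, i.e., all N cross-sectional units for t = 1 come first, which matches the I_T ⊗ M ordering used above; the function name is hypothetical.

```python
import numpy as np

def lm_a(y, X, W, M, N, T):
    """Joint LM statistic for H0: sigma_mu^2 = rho = lambda = 0 (sketch of
    Appendix B.1). y has length N*T, stacked by time period."""
    NT = N * T
    beta = np.linalg.solve(X.T @ X, X.T @ y)       # restricted MLE = OLS
    e = y - X @ beta
    sig2 = (e @ e) / NT
    y_fit = X @ beta

    ITM = np.kron(np.eye(T), M)
    ITW = np.kron(np.eye(T), W)
    JbarIN = np.kron(np.ones((T, T)) / T, np.eye(N))

    # score pieces z_rho,a, z_lambda,a, z_sigma_mu^2,a
    z_rho = (e @ ITM @ e) / sig2
    z_lam = (e @ ITW @ y) / sig2
    z_sig = (e @ JbarIN @ e) / sig2 - N

    # trace terms and omega_hat_a
    b1 = np.trace(M @ M + M.T @ M)
    b2 = np.trace(M @ W + M.T @ W)
    b3 = np.trace(W @ W + W.T @ W)
    P = np.eye(NT) - X @ np.linalg.solve(X.T @ X, X.T)   # annihilator of X
    Wy_fit = ITW @ y_fit
    omega = (Wy_fit @ P @ Wy_fit) / sig2
    tau = T**2 * (b1 * b3 - b2**2) + T * b1 * omega

    lm_spatial = ((T * b3 + omega) * z_rho**2 + T * b1 * z_lam**2
                  - 2 * T * b2 * z_rho * z_lam) / tau
    lm_re = T / (2 * N * (T - 1)) * z_sig**2
    return lm_spatial + lm_re
```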

B.2. Derivation of LMc

The score vector under H 0 c and evaluated at the restricted MLE is
L ( δ ) δ | c = 0 0 T 2 σ ^ v , c 2 z ^ σ μ 2 , c 0 , where z ^ σ μ 2 , c = [ ϵ ^ c A ^ ( J ¯ T I N ) A ^ ϵ ^ c ] / σ ^ v , c 2 - N ,
σ ^ v , c 2 = ( ϵ ^ c A ^ c A ^ c ϵ ^ c ) / ( N T ) , ϵ ^ c = y - y ^ c , y ^ c = X β ^ c , and β ^ c , A ^ c are the restricted MLEs of β , A under H 0 c , respectively. After a little calculation, the information matrix under H 0 c and evaluated at the restricted MLE is
I c = X A ^ c A ^ c X σ ^ v , c 2 0 0 0 · T θ ^ 1 , c T θ ^ 3 , c σ ^ v , c 2 T θ ^ 3 , c σ ^ v , c 2 · · N T 2 2 ( σ ^ v , c 2 ) 2 N T 2 ( σ ^ v , c 2 ) 2 · · · N T 2 ( σ ^ v , c 2 ) 2 ,
where θ ^ 1 , c , θ ^ 3 , c are θ ^ 1 , θ ^ 3 evaluated under H 0 c , respectively. Making use of the block diagonal structure of I c and straightforward but tedious calculation gives that I σ μ 2 σ μ 2 , c Δ = 2 ( σ ^ v , c 2 ) 2 / ( N T ( T - 1 ) ) . Finally, the LM test statistic in this case is given by
LM_c = \frac{T}{2 N (T-1)}\, \hat{z}_{\sigma_\mu^2,c}^2 .

B.3. Derivation of LMe

The score vector under H 0 e and evaluated at the restricted MLE is
L ( δ ) δ | e = 0 0 0 T 2 σ ^ v , e 2 z ^ σ μ 2 , e 0 , where z ^ σ μ 2 , e = [ ϵ ^ e A ^ e ( J ¯ T I N ) A ^ e ϵ ^ e ] / σ ^ v , e 2 - N ,
σ ^ v , e 2 = ( ϵ ^ e A ^ e A ^ e ϵ ^ e ) / ( N T ) , ϵ ^ e = B ^ e y - X β ^ e , and β ^ e , A ^ e , B ^ e are restricted MLEs of β , A , B under H 0 e , respectively. After a little calculation, the information matrix under H 0 e and evaluated at the restricted MLE is
I e = X A ^ e A ^ e X σ ^ v , e 2 0 X A ^ e A ^ e ( I T W ) y ^ e σ ^ v , e 2 0 0 · I ^ ρ ρ , e I ^ ρ λ , e T tr ( R ^ 1 , e ) σ ^ v , e 2 T tr ( R ^ 1 , e ) σ ^ v , e 2 · · T tr ( R ^ 3 , e 2 ) + T tr ( R ^ 4 , e R ^ 3 , e R ^ 4 , e - 1 R ^ 3 , e ) + ν ^ e T tr ( R ^ 3 , e ) σ ^ v , e 2 T tr ( R ^ 3 , e ) σ ^ v , e 2 · · · N T 2 2 ( σ ^ v , e 2 ) 2 N T 2 ( σ ^ v , e 2 ) 2 · · · · N T 2 ( σ ^ v , e 2 ) 2 ,
where y ^ e = B ^ e - 1 X β ^ e , ν ^ e = y ^ e ( I T W ) A ^ e A ^ e ( I T W ) y ^ e / σ ^ v , e 2 , and I ^ ρ ρ , e , I ^ ρ λ , e , R ^ 1 , e , R ^ 3 , e , R ^ 4 , e are restricted MLEs of I ρ ρ , I ρ λ , R 1 , e , R 3 , e , R 4 , e under H 0 e , respectively. In this case, straightforward calculation gives
J e = I ^ ρ ρ , e I ^ ρ λ , e T tr ( R ^ 1 , e ) σ ^ v , e 2 T tr ( R ^ 1 , e ) σ ^ v , e 2 · T tr ( R ^ 3 , e 2 ) + T tr ( R ^ 4 , e R ^ 3 , e R ^ 4 , e - 1 R ^ 3 , e ) + ω ^ e T tr ( R ^ 3 , e ) σ ^ v , e 2 T tr ( R ^ 3 , e ) σ ^ v , e 2 · · N T 2 2 ( σ ^ v , e 2 ) 2 N T 2 ( σ ^ v , e 2 ) 2 · · · N T 2 ( σ ^ v , e 2 ) 2 ,
where ω ^ e = y ^ e ( I T W ) A ^ e I N T - A ^ e X ( X A ^ e A ^ e X ) - 1 X A ^ e A ^ e ( I T W ) y ^ e / σ ^ v , e 2 . After some straightforward but tedious calculation, we get that the term I σ μ 2 σ μ 2 , e Δ is 2 ( σ ^ v , e 2 ) 2 / [ N T ( T - 1 ) ] . Finally, the LM test statistic in this case is given by
LM_e = \frac{T}{2 N (T-1)}\, \hat{z}_{\sigma_\mu^2,e}^2 .

B.4. Derivation of LMf

The restricted MLE under H 0 f is essentially the OLS estimator. The score vector under H 0 f and evaluated at the restricted MLE is
L ( δ ) δ | f = 0 z ^ ρ , a z ^ λ , a 0 .
After a little calculation, the information matrix under H 0 f and evaluated at the restricted MLE is
I f = X X σ ^ v , a 2 0 X ( I T W ) y ^ a σ ^ v , a 2 0 · T b 1 T b 2 0 · · T b 3 + ν ^ a 0 · · · N T 2 ( σ ^ v , a 2 ) 2 .
In this case, it is very easy to calculate that
\begin{pmatrix} I_{\rho\rho,f}^{\Delta} & I_{\rho\lambda,f}^{\Delta} \\ \cdot & I_{\lambda\lambda,f}^{\Delta} \end{pmatrix} = \frac{1}{\hat{\tau}_a} \begin{pmatrix} T b_3 + \hat{\omega}_a & -T b_2 \\ -T b_2 & T b_1 \end{pmatrix}, \qquad \text{where } \hat{\tau}_a = T^2 (b_1 b_3 - b_2^2) + T b_1 \hat{\omega}_a .
Thus the LM test statistic in this case is given by
LM_f = \frac{T b_3 + \hat{\omega}_a}{\hat{\tau}_a}\, \hat{z}_{\rho,a}^2 + \frac{T b_1}{\hat{\tau}_a}\, \hat{z}_{\lambda,a}^2 - \frac{2 T b_2}{\hat{\tau}_a}\, \hat{z}_{\rho,a} \hat{z}_{\lambda,a} .

B.5. Derivation of LMg

The score vector under H 0 g and evaluated at the restricted MLE is
L ( δ ) δ | g = 0 z ^ ρ , g z ^ λ , g 0 0 ,
where z ^ ρ , g = ϵ ^ g Ω ^ g - 1 ( I T M ) ϵ ^ g , z ^ λ , g = ϵ ^ g Ω ^ g - 1 ( I T W ) y , ϵ ^ g = y - X β ^ g , and β ^ g , Ω ^ g - 1 are restricted MLEs of β , Ω - 1 under H 0 g , respectively. After a little calculation, the information matrix under H 0 g and evaluated at the restricted MLE is
I g = X Ω ^ g - 1 X 0 X Ω ^ g - 1 ( I T W ) y ^ g 0 0 · T b 1 T b 2 0 0 · · T b 3 + ν ^ g 0 0 · · · N T 2 2 ( T σ ^ μ , g 2 + σ ^ v , g 2 ) 2 N T 2 ( T σ ^ μ , g 2 + σ ^ v , g 2 ) 2 · · · · N 2 ( T σ ^ μ , g 2 + σ ^ v , g 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , g 2 ) 2 ,
where ν ^ g = y ^ g ( I T W ) Ω ^ g - 1 ( I T W ) y ^ g , y ^ g = X β ^ g , and σ ^ μ , g 2 , σ ^ v , g 2 are restricted MLEs of σ μ 2 , σ v 2 under H 0 g , respectively. In this case,
J g = T b 1 T b 2 0 0 · T b 3 + ω ^ g 0 0 · · N T 2 2 ( T σ ^ μ , g 2 + σ ^ v , g 2 ) 2 N T 2 ( T σ ^ μ , g 2 + σ ^ v , g 2 ) 2 · · · N 2 ( T σ ^ μ , g 2 + σ ^ v , g 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , g 2 ) 2 ,
where ω ^ g = y ^ g ( I T W ) Ω ^ g - 1 - Ω ^ g - 1 X ( X Ω ^ g - 1 X ) - 1 X Ω ^ g - 1 ( I T W ) y ^ g . It is easy to calculate that
(K_g)^{-1} = \frac{1}{\hat{\tau}_g} \begin{pmatrix} T b_3 + \hat{\omega}_g & -T b_2 \\ -T b_2 & T b_1 \end{pmatrix}, \qquad \text{where } \hat{\tau}_g = T^2 (b_1 b_3 - b_2^2) + T b_1 \hat{\omega}_g .
Finally, the LM test statistic in this case is given by
LM_g = \frac{T b_3 + \hat{\omega}_g}{\hat{\tau}_g}\, \hat{z}_{\rho,g}^2 + \frac{T b_1}{\hat{\tau}_g}\, \hat{z}_{\lambda,g}^2 - \frac{2 T b_2}{\hat{\tau}_g}\, \hat{z}_{\rho,g} \hat{z}_{\lambda,g} .

B.6. Derivation of LMh

The restricted MLE under H 0 h is essentially the OLS estimator. The score vector under H 0 h and evaluated at the restricted MLE is
L ( δ ) δ | h = 0 z ^ ρ , a 0 .
The information matrix under H 0 h and evaluated at the restricted MLE is simply
I h = X X σ ^ v , a 2 0 0 · T b 1 0 · · N T 2 ( σ ^ v , a 2 ) 2 ,
In this case, we simply have I ρ ρ , h Δ = 1 T b 1 , and thus the LM test statistic is given by
LM_h = \frac{1}{T b_1}\, \hat{z}_{\rho,a}^2 .
To derive $LM_h^*$, we make use of the Bera and Yoon (1993) [19] principle that $LM_{\psi\phi} = LM_\phi + LM_\psi^*$, where $LM_{\psi\phi}$ denotes the joint LM test for ψ and ϕ, $LM_\phi$ denotes the marginal LM test for ϕ, and $LM_\psi^*$ denotes the robust LM test for ψ. Thus it can be easily deduced that
LM_h^* = LM_f - LM_l = \frac{T b_3 + \hat{\omega}_a}{\hat{\tau}_a} \left( \hat{z}_{\rho,a} - \frac{T b_2}{T b_3 + \hat{\omega}_a}\, \hat{z}_{\lambda,a} \right)^2 .
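A minimal sketch of this robust statistic, reusing the OLS-based scores and trace terms from the LM_a sketch in Appendix B.1 (again our own illustration, with a hypothetical function name), is:

```python
def lm_h_star(z_rho, z_lam, b1, b2, b3, omega, T):
    """Bera-Yoon robust LM test for rho = 0 that guards against local
    departures of lambda from zero (sketch of the formula above)."""
    tau = T**2 * (b1 * b3 - b2**2) + T * b1 * omega
    c = T * b3 + omega
    return (c / tau) * (z_rho - (T * b2 / c) * z_lam) ** 2
```

As a consistency check, the value returned here should agree numerically with LM_f minus LM_l computed from the two statistics separately.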

B.7. Derivation of LMi

The score vector under H 0 i and evaluated at the restricted MLE is
L ( δ ) δ | i = 0 z ^ ρ , i 0 0 , where z ^ ρ , i = 1 σ ^ v , i 2 ϵ ^ i ( I T M ) ϵ ^ i ,
σ ^ v , i 2 = ϵ ^ i ϵ ^ i / ( N T ) , ϵ ^ i = B ^ i y - X β ^ i , and β ^ i , B ^ i are restricted MLEs of β , B under H 0 i , respectively. The information matrix under H 0 i and evaluated at the restricted MLE is
I i = X X σ ^ v , i 2 0 X ( I T W ) y ^ i σ ^ v , i 2 0 · T b 1 T ϑ ^ 1 , i 0 · · T ϑ ^ 2 , i + ν ^ i T ϑ ^ 3 , i σ ^ v , i 2 · · · N T 2 ( σ ^ v , i 2 ) 2 ,
where y ^ i = B ^ i - 1 X β ^ i , ν ^ i = y ^ i ( I T ( W W ) ) y ^ i / σ ^ v , i 2 , ϑ ^ 1 , i , ϑ ^ 2 , i , ϑ ^ 3 , i are ϑ ^ 1 , ϑ ^ 2 , ϑ ^ 3 evaluated at the restricted MLE under H 0 i , respectively. Straightforward but tedious calculation gives that
I_{\rho\rho,i}^{\Delta} = \hat{\xi}_i = \frac{N T \hat{\vartheta}_{2,i} + N \hat{\omega}_i - 2 T \hat{\vartheta}_{3,i}^2}{T b_1 \left( N T \hat{\vartheta}_{2,i} + N \hat{\omega}_i - 2 T \hat{\vartheta}_{3,i}^2 \right) - N (T \hat{\vartheta}_{1,i})^2},
where ω ^ i = y ^ i ( I T W ) [ I N T - X ( X X ) - 1 X ] ( I T W ) y ^ i / σ ^ v , i 2 . Finally, the LM test statistic in this case is given by
LM_i = \hat{\xi}_i\, \hat{z}_{\rho,i}^2 .

B.8. Derivation of LMj

The score vector under H 0 j and evaluated at the restricted MLE is
L ( δ ) δ | j = 0 z ^ ρ , j 0 0 , where z ^ ρ , j = ϵ ^ j Ω ^ j - 1 ( I T M ) ϵ ^ j ,
ϵ ^ j = y - X β ^ j , and β ^ j , Ω ^ j - 1 are restricted MLEs of β , Ω - 1 under H 0 j , respectively. The information matrix under H 0 j and evaluated at the restricted MLE is
I j = X Ω ^ j - 1 X 0 0 0 · T b 1 0 0 · · N T 2 2 ( T σ ^ μ , j 2 + σ ^ v , j 2 ) 2 N T 2 ( T σ ^ μ , j 2 + σ ^ v , j 2 ) 2 · · · N 2 ( T σ ^ μ , j 2 + σ ^ v , j 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , j 2 ) 2 .
Thus the LM test statistic in this case is trivially given by
LM_j = \frac{1}{T b_1}\, \hat{z}_{\rho,j}^2 .
To derive L M j * , we again make use of the Bera and Yoon (1993) [19] Principle and it can be easily deduced that
LM_j^* = LM_g - LM_n = \frac{T b_3 + \hat{\omega}_j}{\hat{\tau}_j} \left( \hat{z}_{\rho,j} - \frac{T b_2}{T b_3 + \hat{\omega}_j}\, \hat{z}_{\lambda,j} \right)^2 .

B.9. Derivation of LMk

The score vector under H 0 k and evaluated at the restricted MLE is
L ( δ ) δ | k = 0 z ^ ρ , k 0 0 0 , where z ^ ρ , k = ϵ ^ k Ω ^ k - 1 ( I T M ) ϵ ^ k ,
ϵ ^ k = B ^ k y - X β ^ k , β ^ k , B ^ k , Ω ^ k - 1 are restricted MLEs of β , B , Ω - 1 under H 0 k , respectively. The information matrix under H 0 k and evaluated at the restricted MLE is
I k = X Ω ^ k - 1 X 0 X Ω ^ k - 1 ( I T W ) y ^ k 0 0 · T b 1 T ϑ ^ 1 , k 0 0 · · T ϑ ^ 2 , k + ν ^ k T ϑ ^ 3 , k T σ ^ μ , k 2 + σ ^ v , k 2 T [ ( T - 1 ) σ ^ μ , k 2 + σ ^ v , k 2 ] ϑ ^ 3 , k ( T σ ^ μ , k 2 + σ ^ v , k 2 ) σ ^ v , k 2 · · · N T 2 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 N T 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 · · · · N 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , k 2 ) 2 ,
where y ^ k = B ^ k - 1 X β ^ k , ν ^ k = y ^ k ( I T W ) Ω ^ k - 1 ( I T W ) y ^ k , σ ^ μ , k 2 , σ ^ v , k 2 are restricted MLEs of σ μ 2 , σ v 2 under H 0 k , and ϑ ^ 1 , k , ϑ ^ 2 , k , ϑ ^ 3 , k are ϑ ^ 1 , ϑ ^ 2 , ϑ ^ 3 evaluated at the restricted MLE under H 0 k , respectively. Straightforward calculation gives
J k = T b 1 T ϑ ^ 1 , k 0 0 · T ϑ ^ 2 , k + ω ^ k T ϑ ^ 3 , k T σ ^ μ , k 2 + σ ^ v , k 2 T [ ( T - 1 ) σ ^ μ , k 2 + σ ^ v , k 2 ] ϑ ^ 3 , k ( T σ ^ μ , k 2 + σ ^ v , k 2 ) σ ^ v , k 2 · · N T 2 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 N T 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 · · · N 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , k 2 ) 2 ,
where ω ^ k = y ^ k ( I T W ) Ω ^ k - 1 - Ω ^ k - 1 X ( X Ω ^ k - 1 X ) - 1 X Ω ^ k - 1 ( I T W ) y ^ k . Next, we need to calculate the ( 1 , 1 ) th element of ( K k ) - 1 . Straightforward calculation yields
K k = T b 1 T ϑ ^ 1 , k · ϰ ^ k ,
where
ϰ ^ k = T ϑ ^ 2 , k + ω ^ k - T 2 ( T - 1 ) 2 ϑ ^ 3 , k 2 η ^ 3 , k ( σ ^ μ , k 2 ) 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 ( σ ^ v , k 2 ) 2 - 2 T 2 ( T - 1 ) ϑ ^ 3 , k 2 ( η ^ 2 , k + η ^ 3 , k ) σ ^ μ , k 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 σ ^ v , k 2 - T 2 ϑ ^ 3 , k 2 ( η ^ 1 , k + 2 η ^ 2 , k + η ^ 3 , k ) ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 ,
and
η ^ 1 , k η ^ 2 , k η ^ 2 , k η ^ 3 , k = N T 2 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 N T 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 N T 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 N 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , k 2 ) 2 - 1 = 2 ( σ ^ v , k 2 ) 2 N T 2 ( T - 1 ) + 2 ( T σ ^ μ , k 2 + σ ^ v , k 2 ) 2 N T 2 - 2 ( σ ^ v , k 2 ) 2 N T ( T - 1 ) - 2 ( σ ^ v , k 2 ) 2 N T ( T - 1 ) 2 ( σ ^ v , k 2 ) 2 N ( T - 1 ) .
Then the ( 1 , 1 ) th element of ( K k ) - 1 can be easily calculated as
\hat{\xi}_k = \frac{\hat{\varkappa}_k}{T b_1 \hat{\varkappa}_k - (T \hat{\vartheta}_{1,k})^2} = \frac{N T \hat{\vartheta}_{2,k} + N \hat{\omega}_k - 2 T \hat{\vartheta}_{3,k}^2}{T b_1 \left( N T \hat{\vartheta}_{2,k} + N \hat{\omega}_k - 2 T \hat{\vartheta}_{3,k}^2 \right) - N (T \hat{\vartheta}_{1,k})^2} .
Finally, the LM test statistic in this case is given by
LM_k = \hat{\xi}_k\, \hat{z}_{\rho,k}^2 .

B.10. Derivation of LMl

The restricted MLE under H 0 l is essentially the OLS estimator. The score vector under H 0 l and evaluated at the restricted MLE is
L ( δ ) δ | l = 0 z ^ λ , a 0 .
The information matrix under H 0 l and evaluated at the restricted MLE is
I l = X X σ ^ v , a 2 X ( I T W ) y ^ a σ ^ v , a 2 0 · T b 3 + ν ^ a 0 · · N T 2 ( σ ^ v , a 2 ) 2 , .
Straightforward calculation gives that I λ λ , l Δ = 1 / ( T b 3 + ω ^ a ) , thus the LM test statistic corresponding to H 0 l is given by
LM_l = \frac{1}{T b_3 + \hat{\omega}_a}\, \hat{z}_{\lambda,a}^2 .
To derive L M l * , we again make use of the Bera and Yoon (1993) [19] Principle and it can be easily deduced that
LM_l^* = LM_f - LM_h = \frac{T b_1}{\hat{\tau}_a} \left( \hat{z}_{\lambda,a} - \frac{b_2}{b_1}\, \hat{z}_{\rho,a} \right)^2 .

B.11. Derivation of LMm

The score vector under H 0 m and evaluated at the restricted MLE is
L ( δ ) δ | m = 0 0 z ^ λ , m 0 , where z ^ λ , m = 1 σ ^ v , m 2 ϵ ^ m A ^ m A ^ m ( I T W ) y ^ m ,
σ ^ v , m 2 = ϵ ^ m A ^ m A ^ m ϵ ^ m / ( N T ) , ϵ ^ m = y - y ^ m , y ^ m = X β ^ m , and β ^ m , A ^ m are restricted MLEs of β , A under H 0 m , respectively. The information matrix under H 0 m and evaluated at the restricted MLE is
I m = X A ^ m A ^ m X σ ^ v , m 2 0 X A ^ m A ^ m ( I T W ) y ^ m σ ^ v , m 2 0 · T θ ^ 1 , m T θ ^ 2 , m T θ ^ 3 , m σ ^ v , m 2 · · T θ ^ 4 , m + ν ^ m 0 · · · N T 2 ( σ ^ v , m 2 ) 2 , ,
where ν ^ m = y ^ m ( I T W ) A ^ m A ^ m ( I T W ) y ^ m / σ ^ v , m 2 , and θ ^ 1 , m , θ ^ 2 , m , θ ^ 3 , m , θ ^ 4 , m are θ ^ 1 , θ ^ 2 , θ ^ 3 , θ ^ 4 evaluated at the restricted MLE under H 0 m , respectively. Let
I m = I 11 , m I 12 , m I 21 , m I 22 , m , where I 11 , m = X A ^ m A ^ m X σ ^ v , m 2 .
After some calculation, we get
I 22 , m - I 21 , m ( I 11 , m ) - 1 I 12 , m = T θ ^ 1 , m T θ ^ 2 , m T θ ^ 3 , m σ ^ v , m 2 · T θ ^ 4 , m + ω ^ m 0 · · N T 2 ( σ ^ v , m 2 ) 2 ,
where ω ^ m = y ^ m ( I T W ) A ^ m [ I N T - A ^ m X ( X A ^ m A ^ m X ) - 1 X A ^ m ] A ^ m ( I T W ) y ^ m / σ ^ v , m 2 . Straightforward calculation gives that
I_{\lambda\lambda,m}^{\Delta} = \hat{\zeta}_m = \frac{N \hat{\theta}_{1,m} - 2 \hat{\theta}_{3,m}^2}{\left( N \hat{\theta}_{1,m} - 2 \hat{\theta}_{3,m}^2 \right) (T \hat{\theta}_{4,m} + \hat{\omega}_m) - N T \hat{\theta}_{2,m}^2} .
Finally, the LM test statistic in this case is given by
LM_m = \hat{\zeta}_m\, \hat{z}_{\lambda,m}^2 .

B.12. Derivation of LMn

The score vector under H 0 n and evaluated at the restricted MLE is
L ( δ ) δ | n = 0 z ^ λ , n 0 0 , where z ^ λ , n = ϵ ^ n Ω ^ n - 1 ( I T W ) y ,
ϵ ^ n = y - X β ^ n , and β ^ n , Ω ^ n - 1 are the restricted MLEs of β , Ω - 1 under H 0 n , respectively. The information matrix under H 0 n and evaluated at the restricted MLE is
I n = X Ω ^ n - 1 X X Ω ^ n - 1 ( I T W ) y ^ n 0 0 · T b 3 + ν ^ n 0 0 · · N T 2 2 ( T σ ^ μ , n 2 + σ ^ v , n 2 ) 2 N T 2 ( T σ ^ μ , n 2 + σ ^ v , n 2 ) 2 · · · N 2 ( T σ ^ μ , n 2 + σ ^ v , n 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , n 2 ) 2 ,
where y ^ n = X β ^ n , ν ^ n = y ^ n ( I T W ) Ω ^ n - 1 ( I T W ) y ^ n . The LM test statistic in this case can be easily calculated as
LM_n = \frac{1}{T b_3 + \hat{\omega}_n}\, \hat{z}_{\lambda,n}^2 ,
where
\hat{\omega}_n = \hat{y}_n' (I_T \otimes W)' \left[ \hat{\Omega}_n^{-1} - \hat{\Omega}_n^{-1} X (X' \hat{\Omega}_n^{-1} X)^{-1} X' \hat{\Omega}_n^{-1} \right] (I_T \otimes W) \hat{y}_n .
To derive L M n * , we again make use of the Bera and Yoon (1993) [19] Principle and it can be easily deduced that
LM_n^* = LM_g - LM_j = \frac{T b_1}{\hat{\tau}_n} \left( \hat{z}_{\lambda,n} - \frac{b_2}{b_1}\, \hat{z}_{\rho,n} \right)^2 .

B.13. Derivation of LMo

The score vector under H 0 o and evaluated at the restricted MLE is
L ( δ ) δ | o = 0 z ^ λ , o 0 0 0 , where z ^ λ , o = ϵ ^ o A ^ o Ω ^ o - 1 A ^ o ( I T W ) y ,
ϵ ^ o = y - X β ^ o , and β ^ o , A ^ o , Ω ^ o - 1 are restricted MLEs of β , A , Ω - 1 under H 0 o , respectively. The information matrix under H 0 o and evaluated at the restricted MLE is
I o = X A ^ o Ω ^ o - 1 A ^ o X 0 X A ^ o Ω ^ o - 1 A ^ o ( I T W ) y ^ o 0 0 · T θ ^ 1 , o T θ ^ 2 , o T θ ^ 3 , o T σ ^ μ , o 2 + σ ^ v , o 2 T [ ( T - 1 ) σ ^ μ , o 2 + σ ^ v , o 2 ] θ ^ 3 , o ( T σ ^ μ , o 2 + σ ^ v , o 2 ) σ ^ v , o 2 · · T θ ^ 4 , o + ν ^ o 0 0 · · · N T 2 2 ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 N T 2 ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 · · · · N 2 ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , o 2 ) 2 ,
where y ^ o = X β ^ o , ν ^ o = y ^ o ( I T W ) A ^ o Ω ^ o - 1 A ^ o ( I T W ) y ^ o , σ ^ μ , o 2 , σ ^ v , o 2 are restricted MLEs of σ μ 2 , σ v 2 under H 0 o , and θ ^ 1 , o , θ ^ 2 , o , θ ^ 3 , o , θ ^ 4 , o are θ ^ 1 , θ ^ 2 , θ ^ 3 , θ ^ 4 evaluated at the restricted MLE under H 0 o , respectively. A little calculation gives
J o = T θ ^ 1 , o T θ ^ 2 , o T θ ^ 3 , o T σ ^ μ , o 2 + σ ^ v , o 2 T [ ( T - 1 ) σ ^ μ , o 2 + σ ^ v , o 2 ] θ ^ 3 , o ( T σ ^ μ , o 2 + σ ^ v , o 2 ) σ ^ v , o 2 · T θ ^ 4 , o + ω ^ o 0 0 · · N T 2 2 ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 N T 2 ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 · · · N 2 ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 + N ( T - 1 ) 2 ( σ ^ v , o 2 ) 2 ,
where ω ^ o = y ^ o ( I T W ) A ^ o Ω ^ o - 1 - Ω ^ o - 1 A ^ o X ( X A ^ o Ω ^ o - 1 A ^ o X ) - 1 X A ^ o Ω ^ o - 1 A ^ o ( I T W ) y ^ o . Next, we need to calculate the ( 2 , 2 ) th element of ( K o ) - 1 . Straightforward calculation yields
K o = T θ ^ 1 , o - ( T θ ^ 3 , o ) 2 η ^ 1 , o ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 - 2 T 2 [ ( T - 1 ) σ ^ μ , o 2 + σ ^ v , o 2 ] θ ^ 3 , o 2 η ^ 2 , o ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 σ ^ v , o 2 - T 2 [ ( T - 1 ) σ ^ μ , o 2 + σ ^ v , o 2 ] θ ^ 3 , o 2 η ^ 3 , o ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 ( σ ^ v , o 2 ) 2 T θ ^ 2 , o · T θ ^ 4 , o + ω ^ o ,
where
η ^ 1 , o η ^ 2 , o η ^ 2 , o η ^ 3 , o = 2 ( σ ^ v , o 2 ) 2 N T 2 ( T - 1 ) + 2 ( T σ ^ μ , o 2 + σ ^ v , o 2 ) 2 N T 2 - 2 ( σ ^ v , o 2 ) 2 N T ( T - 1 ) - 2 ( σ ^ v , o 2 ) 2 N T ( T - 1 ) 2 ( σ ^ v , o 2 ) 2 N ( T - 1 ) .
Then straightforward calculation gives the ( 2 , 2 ) th element of ( K o ) - 1 as
\hat{\zeta}_o = \frac{N T \hat{\theta}_{1,o} - 2 T \hat{\theta}_{3,o}^2}{\left( N T \hat{\theta}_{1,o} - 2 T \hat{\theta}_{3,o}^2 \right) (T \hat{\theta}_{4,o} + \hat{\omega}_o) - N (T \hat{\theta}_{2,o})^2} .
Finally, the LM test statistic in this case is given by
LM_o = \hat{\zeta}_o\, \hat{z}_{\lambda,o}^2 .

References

  1. A. Cliff, and J. Ord. Spatial Autocorrelation. London, UK: Pion, 1973. [Google Scholar]
  2. A. Cliff, and J. Ord. Spatial Processes, Models and Applications. London, UK: Pion, 1981. [Google Scholar]
  3. L. Anselin. Spatial Econometrics: Methods and Models. Dordrecht, The Netherlands: Kluwer Academic Publishers, 1988. [Google Scholar]
  4. L. Anselin. “Lagrange Multiplier Test Diagnostics for Spatial Dependence and Spatial Heterogeneity.” Geogr. Anal. 20 (1988): 1–17. [Google Scholar] [CrossRef]
  5. L. Anselin, and A.K. Bera. “Spatial Dependence in Linear Regression Models with an Introduction to Spatial Econometrics.” In Handbook of Applied Economic Statistics. Edited by A. Ullah and D.E.A. Giles. New York, NY, USA: Marcel Dekker, 1998. [Google Scholar]
  6. L. Anselin. “Rao’s Score Tests in Spatial Econometrics.” J. Stat. Plan. Inference 97 (2001): 113–139. [Google Scholar] [CrossRef]
  7. B.H. Baltagi, S. Song, and W. Koh. “Testing Panel Data Regression Models with Spatial Error Correlation.” J. Econom. 117 (2003): 123–150. [Google Scholar] [CrossRef]
  8. B.H. Baltagi, and L. Liu. “Testing for Random Effects and Spatial Lag Dependence in Panel Data Models.” Stat. Probab. Lett. 78 (2008): 3304–3306. [Google Scholar] [CrossRef]
  9. N. Debarsy, and C. Ertur. “Testing for Spatial Autocorrelation in a Fixed Effects Panel Data Model.” Reg. Sci. Urban Econ. 40 (2010): 453–470. [Google Scholar] [CrossRef]
  10. L.F. Lee, and J. Yu. “Estimation of Spatial Autoregressive Panel Data Models with Fixed Effects.” J. Econom. 154 (2010): 165–185. [Google Scholar] [CrossRef]
  11. X. Qu, and L.F. Lee. “LM Tests for Spatial Correlation in Spatial Models with Limited Dependent Variables.” Reg. Sci. Urban Econ. 42 (2012): 430–445. [Google Scholar] [CrossRef]
  12. B.H. Baltagi, P. Egger, and M. Pfaffermayr. “A Generalized Spatial Panel Data Model with Random Effects.” Econom. Rev. 32 (2013): 650–685. [Google Scholar] [CrossRef]
  13. M. Kapoor, H.H. Kelejian, and I.R. Prucha. “Panel Data Models with Spatially Correlated Error Components.” J. Econom. 140 (2007): 97–130. [Google Scholar] [CrossRef]
  14. Z.L. Yang. “A Robust LM Test for Spatial Error Components.” Reg. Sci. Urban Econ. 40 (2010): 299–310. [Google Scholar] [CrossRef]
  15. B.H. Baltagi, and Z.L. Yang. “Standardized LM Tests for Spatial Error Dependence in Linear or Panel Regressions.” Econom. J. 16 (2013): 103–134. [Google Scholar] [CrossRef]
  16. B. Born, and J. Breitung. “Simple Regression-Based Tests for Spatial Dependence.” Econom. J. 14 (2011): 330–342. [Google Scholar] [CrossRef]
  17. B.H. Baltagi, and Z.L. Yang. “Heteroskedasticity and Non-Normality Robust LM Tests for Spatial Dependence.” Reg. Sci. Urban Econ. 43 (2013): 725–739. [Google Scholar] [CrossRef]
  18. Z.L. Yang. “LM Tests of Spatial Dependence Based on Bootstrap Critical Values.” J. Econom. 185 (2015): 33–59. [Google Scholar] [CrossRef]
  19. A.K. Bera, and M. Yoon. “Specification Testing with Locally Misspecified Alternatives.” Econom. Theory 9 (1993): 649–658. [Google Scholar] [CrossRef]
  20. L. Anselin, A.K. Bera, R. Florax, and M.J. Yoon. “Simple Diagnostic Tests for Spatial Dependence.” Reg. Sci. Urban Econ. 26 (1996): 77–104. [Google Scholar] [CrossRef]
  21. A.K. Bera, W. Sosa-Escudero, and M. Yoon. “Tests for the Error Component Model in the Presence of Local Misspecification.” J. Econom. 101 (2001): 1–23. [Google Scholar] [CrossRef]
  22. A.K. Bera, G. Montes-Rojas, and W. Sosa-Escudero. “Testing Under Local Misspecification and Artificial Regression.” Econ. Lett. 150 (2009): 66–68. [Google Scholar] [CrossRef]
  23. A.K. Bera, G. Montes-Rojas, and W. Sosa-Escudero. “General Specification Testing with Locally Misspecified Models.” Econom. Theory 26 (2010): 1838–1845. [Google Scholar] [CrossRef]
  24. M. He, and K.P. Lin. “Locally Adjusted LM Test for Spatial Dependence in Fixed Effects Panel Data Models.” Econ. Lett. 121 (2013): 59–63. [Google Scholar] [CrossRef]
  25. J.P. Elhorst. Spatial Econometrics from Cross-Sectional Data to Spatial Panels. New York, NY, USA: Springer, 2014. [Google Scholar]
  26. L.F. Lee, and J. Yu. “Spatial Panels: Random Components versus Fixed Effects.” Int. Econ. Rev. 53 (2012): 1369–1412. [Google Scholar] [CrossRef]
  27. J.R. Magnus. “Multivariate Error Components Analysis of Linear and Nonlinear Regression Models by Maximum Likelihood.” J. Econom. 19 (1982): 239–285. [Google Scholar] [CrossRef]
  28. H.H. Kelejian, and I.R. Prucha. “On the Asymptotic Distribution of the Moran’s I Test Statistic with Applications.” J. Econom. 104 (2001): 219–257. [Google Scholar] [CrossRef]
  29. H.H. Kelejian, and I.R. Prucha. “Specification and Estimation of Spatial Autoregressive Models with Autoregressive and Heteroskedastic Disturbances.” J. Econom. 157 (2010): 53–67. [Google Scholar] [CrossRef] [PubMed]
  30. Y. Honda. “Testing the Error Components Model with Non-Normal Disturbances.” Rev. Econ. Stud. 52 (1985): 681–690. [Google Scholar] [CrossRef]
  31. Y. Honda. “A Standardized Test for the Error Components Model with the Two-Way Layout.” Econ. Lett. 37 (1991): 125–128. [Google Scholar] [CrossRef]
  32. B.H. Baltagi, and D. Levin. “Cigarette Taxation: Raising Revenues and Reducing Consumption.” Struct. Chang. Econ. Dyn. 3 (1992): 321–335. [Google Scholar] [CrossRef]
  33. N. Debarsy, C. Ertur, and J. LeSage. “Interpreting Dynamic Space-Time Panel Data Models.” Stat. Methodol. 9 (2012): 158–171. [Google Scholar] [CrossRef] [Green Version]
  34. T. Holmes. “The State Border Data Set.” Available online: http://www.econ.umn.edu/∼holmes/data/BorderData.html (accessed on 11 September 2015).
  • 1.Notice that there are four cases for which we do not present the LM tests formulae, since these four cases are not particularly interesting. The null hypotheses for the four cases are: σ μ 2 = ρ = 0 ( λ = 0 ) , σ μ 2 = ρ = 0 ( λ 0 ) , σ μ 2 = λ = 0 ( ρ = 0 ) and σ μ 2 = λ = 0 ( ρ 0 ) . The LM tests formulae for these four cases are not presented in the paper, but they are available upon request from the authors.
  • 2.By ρ 0 , we mean that ρ is allowed to be nonzero, and the notation “≠” has similar meaning in the rest of this paper.
  • 3.In order to save space and avoid notational complication, we do not present the formulae of quantities involved in the LM test statistics case by case. The subscripts of these quantities indicate these quantities evaluated at the restricted MLE for each case. If the restricted MLE is just the OLS estimator, we use subscript “a” to indicate this, which is in line with the fact that only OLS estimation is needed in L M a .
  • 4.The data for Colorado, Oregon and Pennsylvania are not available.
  • 5.The data for border length is obtained from Holmes [34].
  • 6.According to the 2010 United States Census, the Mormons represent 62.1 % of Utah’s total population.
  • 7. For computing the inverse of a partitioned symmetric matrix, if $\Lambda = \begin{pmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{pmatrix}$, then $\Lambda^{-1} = \begin{pmatrix} (\Lambda_{11} - \Lambda_{12} \Lambda_{22}^{-1} \Lambda_{21})^{-1} & -\Lambda_{11}^{-1} \Lambda_{12} (\Lambda_{22} - \Lambda_{21} \Lambda_{11}^{-1} \Lambda_{12})^{-1} \\ -(\Lambda_{22} - \Lambda_{21} \Lambda_{11}^{-1} \Lambda_{12})^{-1} \Lambda_{21} \Lambda_{11}^{-1} & (\Lambda_{22} - \Lambda_{21} \Lambda_{11}^{-1} \Lambda_{12})^{-1} \end{pmatrix}$.
