Article

Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

Cardiff Business School, Cardiff University, Aberconway Building, Colum Drive, Cardiff CF10 3EU, UK
Econometrics 2015, 3(3), 494-524; https://doi.org/10.3390/econometrics3030494
Submission received: 27 February 2015 / Accepted: 16 June 2015 / Published: 10 July 2015

Abstract
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

1. Introduction

For a panel linear regression model with lags of the dependent variable as regressors (a dynamic panel model) and agent-specific fixed effects, the maximum likelihood estimators (MLE) of the common parameters, whose number does not change with sample size, are inconsistent when the number of time periods is small and fixed; see Nerlove [1] and Nickell [2]. This problem, known as the "incidental parameter problem", has been reviewed by Lancaster [3]. A plethora of studies have been undertaken to obtain consistent estimators for the common parameters in dynamic panel models. Among them there are two main approaches: one is to use the generalized method of moments (GMM), see the overview in Hsiao [4]; the other is based on modified profile or integrated likelihood, see e.g., the recent works by Bester and Hansen [5], Hahn and Kuersteiner [6], Arellano and Bonhomme [7], and Dhaene and Jochmans [8]. Researchers using these two approaches usually presume the moment conditions or the parametric models are correctly specified, and the issue of model selection has attracted relatively less attention. Correct model specification is very important, without which consistent parameter estimation cannot be achieved. In Andrews and Lu [9], the authors proposed model and moment selection criteria (MMSC) in the GMM context based on J-test statistics to address the issue. However, for dynamic panel models, GMM will suffer from the weak instrument problem when the coefficient of the lagged dependent variable is close to 1. Hence MMSC is unlikely to work in such situations.
Lee and Phillips [10] used the bias reducing prior from Arellano and Bonhomme [7] to develop an integrated likelihood information criterion to study lag order selection in dynamic panel models. The prior in Arellano and Bonhomme [7] is designed to obtain first-order (in the time dimension) unbiased estimators. Lancaster [11] suggested a way to reparameterize the fixed effects to achieve consistent estimation (not just first-order) in the panel. While Lee and Phillips [10] only considered stationary data in their application, we show that it is possible for Lancaster's method to handle non-stationary data. Different from Lee and Phillips [10], our paper focuses on the selection of exogenous regressors rather than lag order selection. For the purpose of model comparison, proper priors must be used for parameters not common to all the models to avoid Bartlett's paradox when Bayes factors are used (see e.g., [12]). Dhaene and Jochmans [8] found that the modified profile likelihood with Lancaster's correction term can be infinite over an infinite parameter support, which implies that a prior ensuring a proper posterior distribution should be used in a Bayesian context. We develop a data dependent proper prior to combine with Lancaster's reparameterization to calculate Bayes factors and find that model selection based on Bayes factors is inconsistent only in very extreme situations, such as when the number of time periods is 2 or when the true value of the lag coefficient is less than −1. On the other hand, model selection based on the Bayesian information criterion (BIC) with the parameters evaluated at the biased MLE can be inconsistent under more common circumstances. From an empirical point of view, researchers can often be confronted with a large number of possible regressors and hence many possible models. Model uncertainty leads to estimation risk, especially in small samples, since the estimates from a misspecified model could be far away from the true parameter values and hence misleading. From our simulations, we find that Bayesian model averaging (BMA) can reduce such risk and produce point estimators with lower root mean squared errors (RMSE).
The plan of the paper is as follows. Section 2 summarizes the model and the posterior results with the estimation strategies discussed. Section 3 gives our motivations to compare different model specifications and shows the conditions under which our estimator will be consistent when the model is misspecified. Section 4 presents the conditions under which Bayes factors and BIC can be consistent in model selection. In Section 5, we carry out simulation studies to verify our claims before Section 6 concludes.

2. The Model and the Estimation

Here we investigate the first-order autoregressive linear panel model with a fixed effect, $f_i$,

$$y_{i,t} = f_i + y_{i,t-1}\,\rho + x_{i,t}'\beta + u_{i,t}, \qquad i = 1, \ldots, N, \; t = 1, \ldots, T, \tag{1}$$

where ρ is a scalar and $x_{i,t}$ is a $k \times 1$ vector of explanatory variables. Denote $u_i = (u_{i,1}, u_{i,2}, \ldots, u_{i,T})'$, $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,T})'$ and $y_i = (y_{i,1}, y_{i,2}, \ldots, y_{i,T})'$. We can rewrite Equation (1) in the vector form below,

$$y_i = f_i \iota + y_{i,-1}\,\rho + X_i\beta + u_i, \tag{2}$$

where ι is a $T \times 1$ vector of ones. By repeated substitution, we can obtain

$$y_{i,-1} = f_i \zeta_1 + y_{i,0}\,\zeta_2 + C X_i \beta + C u_i, \tag{3}$$

where $y_{i,-1} = (y_{i,0}, y_{i,1}, \ldots, y_{i,T-1})'$,

$$\zeta_1 = \begin{pmatrix} 0 \\ 1 \\ 1+\rho \\ \vdots \\ 1+\rho+\rho^2+\cdots+\rho^{T-2} \end{pmatrix}, \quad \zeta_2 = \begin{pmatrix} 1 \\ \rho \\ \rho^2 \\ \vdots \\ \rho^{T-1} \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & & & 0 \\ \rho & 1 & 0 & & \vdots \\ \vdots & & \ddots & \ddots & \\ \rho^{T-2} & \rho^{T-3} & \cdots & 1 & 0 \end{pmatrix}.$$
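To make the data generating process concrete, here is a minimal simulation sketch of Equation (1). It is ours, not the paper's: the uniform draws for $f_i$ and $y_{i,0}$ mirror the settings used later in Section 5, while the i.i.d. normal regressors are a placeholder for the richer DGP of Appendix A.

```python
import numpy as np

def simulate_panel(N, T, rho, beta, sigma2=1.0, seed=0):
    """Simulate y from the AR(1) panel model in Equation (1).

    Illustrative assumptions: f_i, y_{i,0} ~ U[-4, 4] (as in Section 5)
    and x_{i,t} ~ i.i.d. N(0, I_k) (a stand-in for the Appendix A DGP).
    """
    rng = np.random.default_rng(seed)
    k = len(beta)
    f = rng.uniform(-4, 4, size=N)
    y0 = rng.uniform(-4, 4, size=N)
    X = rng.standard_normal((N, T, k))
    u = rng.normal(0.0, np.sqrt(sigma2), size=(N, T))
    y = np.empty((N, T))
    ylag = y0.copy()
    for t in range(T):
        y[:, t] = f + ylag * rho + X[:, t] @ beta + u[:, t]  # Equation (1)
        ylag = y[:, t]
    return y, y0, X
```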
The following are the assumptions we use throughout the paper.
Assumption 1. $u_i \mid X_i, f_i, y_{i,0}, \sigma^2, \rho, \beta \;\overset{i.i.d.}{\sim}\; (0, \sigma^2 I_T)$, where $I_T$ is an identity matrix with dimension $T \geq 2$.
Assumption 2. (a) $\{(X_i, f_i, y_{i,0})\}$ is a cross-sectionally independent sequence;
(b)
$E(|y_{i,0}^2|^{1+\delta}) < \infty$, $E(|f_i^2|^{1+\delta}) < \infty$ and $E(|x_{i,t,h}^2|^{1+\delta}) < \infty$, for some $\delta > 0$, all $i = 1, 2, \ldots, N$, $t = 1, 2, \ldots, T$ and $h = 1, 2, \ldots, k$, where $x_{i,t,h}$ denotes the $h$th element in $x_{i,t}$;
(c)
$k$ and $T$ are finite;
(d)
$E\!\left(\frac{\sum_{i=1}^N X_i' H X_i}{N}\right)$ is finite and uniformly positive definite (see [13], p. 22), where $H = I_T - \frac{\iota\iota'}{T}$;
(e)
for any finite value of ρ, the following expression is uniformly positive, i.e., given sufficiently large N,

$$\frac{1}{N}\left\{ \sum_{i=1}^N E\!\left[(y_{i,-1} - Cu_i)' H (y_{i,-1} - Cu_i)\right] - E\!\left[\sum_{i=1}^N (y_{i,-1} - Cu_i)' H X_i\right] \left[E\!\left(\sum_{i=1}^N X_i' H X_i\right)\right]^{-1} E\!\left[\sum_{i=1}^N X_i' H (y_{i,-1} - Cu_i)\right] \right\} > 0. \tag{4}$$
Assumption 1 implies that $X_i$, $f_i$ and $y_{i,0}$ are strictly exogenous. In comparison to the i.i.d. regularity conditions in Lancaster [11], Assumption 2 (a)–(d) allow the distributions of $X_i$, $f_i$ and $y_{i,0}$ to be heterogeneous across cross-sectional units, with slightly more rigorous conditions on their moments so that the asymptotic results in the paper can hold. Assumption 2 (e) is used to simplify the proofs of Proposition 4 and Lemma 10 in Appendix D. Its purpose is to prevent the (within-group) regression of $(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta)$ on the fixed effects and $X_i$ from having perfect fit asymptotically, i.e., R-squared tending to 1 as N increases, and to ensure that the true value of ρ is asymptotically the local mode of its marginal posterior (discussed later). When β = 0 (no exogenous regressors in the model), Assumption 2 (e) rules out $f_i = 0$. When $T \geq 3$, if Assumption 2 (e) is satisfied and $\beta \neq 0$, as shown in Appendix D (Equation (53) and its discussion), the following probability limit should also be strictly positive,

$$\operatorname*{plim}_{N\to\infty} \frac{1}{N}\left[\sum_{i=1}^N \beta' X_i' C' M_\zeta C X_i \beta - \sum_{i=1}^N \beta' X_i' C' M_\zeta X_i \left(\sum_{i=1}^N X_i' M_\zeta X_i\right)^{-1} \sum_{i=1}^N X_i' M_\zeta C X_i \beta\right] > 0, \tag{5}$$

where $M_\zeta = I_T - \zeta(\zeta'\zeta)^{-1}\zeta'$ and $\zeta = (\zeta_1, \zeta_2)$. In practice, one could calculate the expression after plim in Equation (5) to check Assumption 2 (e), with ρ and β replaced by their consistent estimates. If the value of the expression decreases towards 0 with N, there would be concern about Assumption 2 (e). We would think such cases should be very rare with real data.
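As a practical illustration of this check, the sketch below computes the sample analogue of the expression after plim in Equation (5); the function name and interface are ours. A value drifting towards 0 as N grows would raise concern about Assumption 2 (e).

```python
import numpy as np

def assumption_2e_statistic(X, rho_hat, beta_hat):
    """Sample counterpart of Equation (5), evaluated at consistent estimates.

    X has shape (N, T, k). Builds zeta_1, zeta_2 and C at rho_hat, forms
    M_zeta, and returns the scaled quadratic-form difference.
    """
    N, T, k = X.shape
    zeta2 = rho_hat ** np.arange(T)                      # (1, rho, ..., rho^{T-1})'
    zeta1 = np.concatenate(([0.0], np.cumsum(zeta2[:-1])))
    Z = np.column_stack([zeta1, zeta2])                  # zeta = (zeta_1, zeta_2)
    Mz = np.eye(T) - Z @ np.linalg.solve(Z.T @ Z, Z.T)   # M_zeta
    C = sum(np.diag(np.full(T - m, rho_hat ** (m - 1)), -m) for m in range(1, T))
    quad, cross, Sxx = 0.0, np.zeros(k), np.zeros((k, k))
    for i in range(N):
        Cxb = C @ X[i] @ beta_hat                        # C X_i beta
        quad += Cxb @ Mz @ Cxb
        cross += X[i].T @ Mz @ Cxb
        Sxx += X[i].T @ Mz @ X[i]
    return (quad - cross @ np.linalg.solve(Sxx, cross)) / N
```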
The MLE of the common parameters, ρ, β and σ², are not consistent due to the presence of the incidental parameters $f_i$, whose number increases with N. For fixed T, it is impossible for the MLE of $f_i$ to be consistent. When the predetermined regressor $y_{i,t-1}$ is included, the MLE of ρ will be correlated with that of $f_i$ and will also be inconsistent. To obtain consistent estimators of the common parameters, Lancaster [11] suggested the following way to reparameterize the fixed effect:

$$f_i = g_i \exp\!\left[-\phi(\rho)\right] - \frac{1}{T}\iota' X_i \beta, \tag{6}$$

where $g_i$ is the new fixed effect, ι is a vector of ones and $\phi(\rho)$ is defined as

$$\phi(\rho) = \frac{1}{T}\sum_{t=1}^{T-1}\frac{T-t}{t}\rho^t. \tag{7}$$
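The correction term is cheap to evaluate. A small sketch (our own helper names) for $\phi(\rho)$ and its derivative $h(\rho) = d\phi(\rho)/d\rho$, which reappears in Equation (22):

```python
import numpy as np

def phi(rho, T):
    """Lancaster's correction term, Equation (7): (1/T) sum_{t=1}^{T-1} ((T-t)/t) rho^t."""
    t = np.arange(1, T)
    return np.sum((T - t) / t * rho ** t) / T

def h(rho, T):
    """Derivative of phi, Equation (22): sum_{t=1}^{T-1} ((T-t)/T) rho^{t-1}."""
    t = np.arange(1, T)
    return np.sum((T - t) / T * rho ** (t - 1))
```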
We use the following prior for ρ, β, σ² and $g = (g_1, g_2, \ldots, g_N)'$:

$$p(g, \sigma^2, \rho, \beta) = p(g_1)\cdots p(g_N)\, p(\sigma^2)\, p(\rho)\, p(\beta \mid \sigma^2) \propto \frac{1}{\sigma^2}\, I(\rho_L \leq \rho \leq \rho_U)\, p(\beta \mid \sigma^2, X). \tag{8}$$

In other words, a flat prior for g and Jeffreys' prior for σ² are used. ρ is uniformly distributed over $[\rho_L, \rho_U]$. The specifications of $\rho_L$ and $\rho_U$ will be discussed in Proposition 4 later. $p(\beta \mid \sigma^2)$ takes the form of the g-prior in [14]:

$$\beta \mid \sigma^2, X \sim N\!\left(0,\; \sigma^2\left(\eta\sum_{i=1}^N X_i' H X_i\right)^{-1}\right), \tag{9}$$

where $X = (X_1', X_2', \ldots, X_N')'$. The strength of the prior depends on the value of η: the smaller η is, the less informative our prior. As discussed in Section 4, to ensure model selection consistency we can choose $0 < \eta(N) = O(N^\alpha)$ for $\alpha < 0$. For our simulation studies below, we choose $\eta = \frac{1}{NT}$. The posterior results of the model are summarized below.
Proposition 3. The posterior distributions of the parameters in our model take the following forms:

$$g_i \mid y_i, y_{i,0}, \sigma^2, \rho \;\overset{i.i.d.}{\sim}\; N\!\left(\frac{e^{\phi(\rho)}\,\iota'(y_i - y_{i,-1}\rho)}{T},\; \frac{\sigma^2}{T}\exp[2\phi(\rho)]\right), \tag{10}$$

$$\beta \mid \sigma^2, \rho, Y, Y_0, X \sim N\!\left(\frac{\left(\sum_{i=1}^N X_i'HX_i\right)^{-1}\sum_{i=1}^N X_i'H(y_i - y_{i,-1}\rho)}{\eta+1},\; \frac{\sigma^2\left(\sum_{i=1}^N X_i'HX_i\right)^{-1}}{\eta+1}\right), \tag{11}$$

$$\sigma^2 \mid \rho, Y, Y_0, X \sim IG\!\left(N(T-1),\; a\rho^2 - 2b\rho + c\right), \tag{12}$$

$$p(\rho \mid Y, Y_0, X) \propto I(\rho_L < \rho < \rho_U)\exp\{N\psi(\rho)\}, \tag{13}$$

where $Y_0 = (y_{1,0}, y_{2,0}, \ldots, y_{N,0})'$, $Y = (y_1', y_2', \ldots, y_N')'$ and

$$\psi(\rho) = \phi(\rho) - \frac{T-1}{2}\ln\left(\frac{a}{N}\rho^2 - \frac{2b}{N}\rho + \frac{c}{N}\right), \tag{14}$$

$$a = \sum_{i=1}^N y_{i,-1}'Hy_{i,-1} - \frac{1}{\eta+1}\sum_{i=1}^N y_{i,-1}'HX_i\left(\sum_{i=1}^N X_i'HX_i\right)^{-1}\sum_{i=1}^N X_i'Hy_{i,-1}, \tag{15}$$

$$b = \sum_{i=1}^N y_{i,-1}'Hy_i - \frac{1}{\eta+1}\sum_{i=1}^N y_{i,-1}'HX_i\left(\sum_{i=1}^N X_i'HX_i\right)^{-1}\sum_{i=1}^N X_i'Hy_i, \tag{16}$$

$$c = \sum_{i=1}^N y_i'Hy_i - \frac{1}{\eta+1}\sum_{i=1}^N y_i'HX_i\left(\sum_{i=1}^N X_i'HX_i\right)^{-1}\sum_{i=1}^N X_i'Hy_i. \tag{17}$$

$IG(\cdot)$ denotes the inverted gamma distribution with degrees of freedom $N(T-1)$ and mean $\frac{a\rho^2 - 2b\rho + c}{N(T-1) - 2}$.
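The quantities a, b, c and the marginal log posterior kernel ψ(ρ) are all simple cross-products. A sketch of their computation (assuming the phi() helper above; all names are ours):

```python
import numpy as np

def abc_psi(y, y0, X, eta):
    """a, b, c of Equations (15)-(17) and the psi of Equation (14).

    y is N x T, y0 has length N, X is N x T x k. H = I - ii'/T and
    y_lag stacks (y_{i,0}, ..., y_{i,T-1}) row by row.
    """
    N, T = y.shape
    H = np.eye(T) - np.ones((T, T)) / T
    y_lag = np.column_stack([y0, y[:, :-1]])
    Sxx = sum(X[i].T @ H @ X[i] for i in range(N))
    Sxl = sum(X[i].T @ H @ y_lag[i] for i in range(N))   # X_i' H y_{i,-1}
    Sxy = sum(X[i].T @ H @ y[i] for i in range(N))       # X_i' H y_i
    Sll = sum(y_lag[i] @ H @ y_lag[i] for i in range(N))
    Sly = sum(y_lag[i] @ H @ y[i] for i in range(N))
    Syy = sum(y[i] @ H @ y[i] for i in range(N))
    Sxx_inv = np.linalg.inv(Sxx)
    a = Sll - Sxl @ Sxx_inv @ Sxl / (eta + 1)            # Equation (15)
    b = Sly - Sxl @ Sxx_inv @ Sxy / (eta + 1)            # Equation (16)
    c = Syy - Sxy @ Sxx_inv @ Sxy / (eta + 1)            # Equation (17)

    def psi(rho):                                        # Equation (14)
        return phi(rho, T) - (T - 1) / 2 * np.log(a/N*rho**2 - 2*b/N*rho + c/N)

    return a, b, c, psi
```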
Note that a and c in Equations (15) and (17) are close to the sums of squared residuals (SSR) obtained by regressing $y_{i,-1}$ and $y_i$ respectively on the fixed effects and $X_i$. $\phi(\rho)$ in Equation (14) is the term from Lancaster's reparameterization, which corrects the local mode of the marginal posterior of ρ to make it consistent. Dhaene and Jochmans [8] showed that the modified profile likelihood function with $\phi(\rho)$ can be infinite as $\rho \to \infty$. Analogous to their results, we show in Appendix D that the marginal posterior of ρ will be infinite, and hence improper, when $\rho \to \infty$ or when T is odd and $\rho \to -\infty$. Lancaster [11] noted such behaviour of ρ's marginal posterior in simulations, but did not discuss much how to specify the boundary points. In Proposition 4, we provide a data dependent way to specify $\rho_L$ and $\rho_U$, which is necessary for model comparison to avoid Bartlett's paradox. First note that the probability limit of ψ(ρ) is

$$\underline{\psi}(\rho) = \operatorname*{plim}_{N\to\infty}\psi(\rho) = \phi(\rho) - \frac{T-1}{2}\ln d(\rho), \tag{18}$$

where

$$d(\rho) = \operatorname*{plim}_{N\to\infty}\left(\frac{a}{N}\rho^2 - \frac{2b}{N}\rho + \frac{c}{N}\right). \tag{19}$$
Proposition 4. If $X_i$ is the true set of exogenous regressors used to generate $y_i$, under Assumptions 1 and 2, asymptotically the marginal posterior of ρ in Equation (13) will have more than one stationary point satisfying $\underline{\psi}'(\rho) = 0$: 3 stationary points when T is odd and 2 when T is even, regardless of the true value of ρ. The local posterior mode, which is a consistent point estimator, is the stationary point nearest to the MLE satisfying $\underline{\psi}''(\rho) < 0$ asymptotically, which the other stationary point(s) do not satisfy. $\rho_U$ can be specified as the stationary point on the right of the posterior mode. When T is odd, $\rho_L$ can be specified as the stationary point on the left of the posterior mode; when T is even, $\rho_L$ could be chosen as a function of N such that $\rho_L(N) < 0$ is sufficiently small.

Choosing the boundary points as in Proposition 4 ensures that the marginal posterior of ρ is proper and that its support contains the true value of ρ asymptotically. $\rho_L$ and $\rho_U$ are different from the boundary points in the constrained maximization in Dhaene and Jochmans [8], who only considered parameter estimation. The interval between our boundary points is wider than theirs, since we want to preserve the bell-shaped part of the posterior density curve for model comparison. Another point to note is that when the true exogenous regressors are included, the local posterior mode will exist regardless of the true value of ρ (even if it is 1) due to Assumption 2 (e). Next we investigate the consequences when we cannot include the true regressors.
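Numerically, the stationary points in Proposition 4 can be located by scanning $\psi'(\rho)$ for sign changes. A sketch, with names and the default grid being our own choices:

```python
import numpy as np
from scipy.optimize import brentq

def stationary_points(psi_prime, lo=-10.0, hi=10.0, n=4001):
    """Roots of psi'(rho) = 0, cf. Proposition 4.

    Scans a grid for sign changes and refines each bracketed root by
    Brent's method. The local mode is the root with psi'' < 0; the
    neighbouring roots serve as rho_U (and rho_L when T is odd).
    """
    grid = np.linspace(lo, hi, n)
    vals = np.array([psi_prime(r) for r in grid])
    sign_flips = np.flatnonzero(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)
    return [brentq(psi_prime, grid[j], grid[j + 1]) for j in sign_flips]

# psi_prime can be a numerical derivative of psi from the previous sketch:
# psi_prime = lambda r, eps=1e-6: (psi(r + eps) - psi(r - eps)) / (2 * eps)
```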

3. Motivations and Methods to Compare Different Model Specifications

In empirical applications, researchers are often faced with many possible regressors suggested by different economic theories to be included into Equation (1). Different models are defined by the inclusion of different combinations of the exogenous regressors and by whether or not the lagged dependent variable is present. Proposition 5 below implies there is no guarantee that the posterior mode in Equation (13) is a consistent estimator if some true regressors are excluded from the model.
Proposition 5. The posterior mode in Equation (13) is a consistent estimator for ρ if and only if we have either

$$h_2(\beta, \underline{\rho}) = h_3(\beta) = 0, \tag{20}$$

or

$$\frac{(T-1)\,h_2(\beta, \underline{\rho})}{h_3(\beta)} = -h(\underline{\rho}), \tag{21}$$

where

$$h(\rho) = \sum_{t=1}^{T-1}\frac{T-t}{T}\rho^{t-1} = \frac{d\phi(\rho)}{d\rho} = \frac{1}{T}\iota'\zeta_1 = -\operatorname{trace}(C'H), \tag{22}$$

$$h_2(\beta, \underline{\rho}) = \operatorname*{plim}_{N\to\infty}\frac{1}{N}\left[\sum_{i=1}^N y_{i,-1}'H\underline{X}_i\beta - \sum_{i=1}^N y_{i,-1}'HX_i\left(\sum_{i=1}^N X_i'HX_i\right)^{-1}\sum_{i=1}^N X_i'H\underline{X}_i\beta\right], \quad h_3(\beta) = \operatorname*{plim}_{N\to\infty}\frac{1}{N}\left[\sum_{i=1}^N \beta'\underline{X}_i'H\underline{X}_i\beta - \sum_{i=1}^N \beta'\underline{X}_i'HX_i\left(\sum_{i=1}^N X_i'HX_i\right)^{-1}\sum_{i=1}^N X_i'H\underline{X}_i\beta\right]. \tag{23}$$
Here $\underline{X}$ represents the regressors in the true model and X denotes the regressors we actually include in our candidate model, while $\underline{\rho}$ is the true value of ρ.

The values of $h_2(\beta, \underline{\rho})$ and $h_3(\beta)$ depend on how the true regressors and the included regressors are related, apart from the values of β and $\underline{\rho}$. For $h_2(\beta, \underline{\rho}) = h_3(\beta) = 0$ to be satisfied, it suffices that the true regressors $\underline{X}$ are a subset of X. When some true regressors are excluded, the model will suffer omitted variable bias unless Equation (21) holds. Given Assumptions 1 and 2, one example for Equation (21) to hold could be that all the true regressors are covariance stationary and have no serial correlation; the included regressors have zero correlation with the true regressors; moreover, $\zeta_1'H\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^N E(f_i\underline{X}_i)\beta = \zeta_2'H\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^N E(y_{i,0}\underline{X}_i)\beta = 0$. In this restrictive case, it will be possible to estimate ρ consistently without any true regressors included.
To avoid inconsistent estimation due to model misspecification, one could include all the potential regressors in the model. In finite samples, however, that could inflate the posterior variances of the coefficients of the true regressors if too many irrelevant regressors are included. The simulation studies in Section 5 reveal that while including all the regressors does not influence the estimation of ρ in comparison to other consistent approaches, it leads to substantially higher RMSE when estimating β. Hence appropriate procedures for model selection are desirable. In a Bayesian framework, one can evaluate different model specifications, denoted by $M_j$ below, by their posterior model probabilities, which can be calculated as

$$p(M_j \mid Y, Y_0, X^j) = \frac{p(M_j)\,p(Y \mid Y_0, X^j, M_j)}{p(Y \mid Y_0)} = \frac{p(M_j)\,p(Y \mid Y_0, X^j, M_j)}{\sum_{i=1}^{2^{K+1}} p(M_i)\,p(Y \mid Y_0, X^i, M_i)}, \tag{24}$$

where $X^j$ denotes the regressors included under $M_j$ and $p(Y \mid Y_0, X^j, M_j)$ is the marginal likelihood, obtained by integrating out ρ in Equation (13) or (43) in Appendix C. K is the number of all potential exogenous regressors. The total number of models is therefore $2^{K+1}$. $p(M_j)$ is the prior model probability of model j. For finite samples, Ley and Steel [15] showed that the choice of prior model probability affects the posterior results to a large extent when the number of potential regressors is large compared to the sample size. In what follows, we focus on the asymptotic behaviour of posterior model probabilities and assume all the models are equally probable a priori. The posterior model probability $p(M_j \mid Y, Y_0, X^j)$ will hence only depend on $p(Y \mid Y_0, X^j, M_j)$.
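With equal prior model probabilities, Equation (24) reduces to normalizing the marginal likelihoods, which is best done on the log scale. A sketch (our own helpers, assuming the log marginal likelihoods have already been computed model by model):

```python
import numpy as np
from itertools import chain, combinations

def all_models(K):
    """Enumerate all 2^(K+1) models: each subset of the K regressors, with and without the lag."""
    subsets = chain.from_iterable(combinations(range(K), r) for r in range(K + 1))
    return [(frozenset(s), lag) for s in subsets for lag in (False, True)]

def posterior_model_probs(log_marglik):
    """Equation (24) under equal prior model probabilities.

    log_marglik: dict mapping each model key to log p(Y | Y0, X_j, M_j).
    Subtracting the maximum before exponentiating avoids overflow.
    """
    keys = list(log_marglik)
    lml = np.array([log_marglik[m] for m in keys])
    w = np.exp(lml - lml.max())
    return dict(zip(keys, w / w.sum()))
```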

4. Consistency in Model Selection

In this section, we discuss the situations when the posterior model probability of the true model will tend to 1 as N tends to infinity. We will also analyze whether Bayesian information criterion (BIC) based on biased MLE is consistent in model selection.
For static panel models when the true value of ρ is zero and the lagged dependent variable is not included as a regressor, the analysis of Bayes factors is similar to that of Fernandez et al. [16]. In our context, we can ensure model selection consistency by setting η as a function of N with 0 < η ( N ) = O ( N α ) for α < 0 . As for BIC, it is consistent in model selection for static panel models.
Let us now consider the case when our candidate model ($M_1$) contains $y_{i,-1}$ and $X_i^1$. $M_1$ is compared to either $M_0$, which has $X_i^0$ but no $y_{i,-1}$, or $M_2$, which has $X_i^2$ and $y_{i,-1}$. $X_i^j$ denotes the exogenous regressors included under model $M_j$ for $j = 0, 1, 2$, which satisfy Assumption 2. $X_i^1$ can be the same as or different from $X_i^0$, while $X_i^2$ is different from $X_i^1$. The Bayes factors respectively are:

$$\frac{p(Y \mid Y_0, X^1, M_1)}{p(Y \mid Y_0, X^0, M_0)} = \left(\frac{\eta}{\eta+1}\right)^{\frac{k_1-k_0}{2}} \frac{\int_{\rho_{L1}}^{\rho_{U1}} \exp\{N\psi(\rho \mid M_1)\}\,d\rho}{(\rho_{U1}-\rho_{L1})\left(\frac{c_{|M_0}}{N}\right)^{-\frac{N(T-1)}{2}}}, \tag{25}$$

$$\frac{p(Y \mid Y_0, X^1, M_1)}{p(Y \mid Y_0, X^2, M_2)} = \frac{\rho_{U2}-\rho_{L2}}{\rho_{U1}-\rho_{L1}} \left(\frac{\eta}{\eta+1}\right)^{\frac{k_1-k_2}{2}} \frac{\int_{\rho_{L1}}^{\rho_{U1}} \exp\{N\psi(\rho \mid M_1)\}\,d\rho}{\int_{\rho_{L2}}^{\rho_{U2}} \exp\{N\psi(\rho \mid M_2)\}\,d\rho}, \tag{26}$$

where $k_j$ denotes the number of columns in $X_i^j$; $a_{|M_j}$, $b_{|M_j}$ and $c_{|M_j}$ in $\psi(\rho \mid M_j)$ defined in Equation (14) are calculated by replacing $X_i$ with $X_i^j$ in Equations (15) to (17) for $j = 0, 1, 2$. Multiplied by $\frac{1}{N}$, they have the following probability limits under Assumptions 1 and 2 with $\eta(N) = O(N^\alpha)$ and $\alpha < 0$:

$$\operatorname*{plim}_{N\to\infty}\frac{1}{N}a_{|M_j} = \underline{a}_{|M_j}, \tag{27}$$

$$\operatorname*{plim}_{N\to\infty}\frac{1}{N}b_{|M_j} = \underline{b}_{|M_j} = \underline{a}_{|M_j}(\underline{\rho} + \gamma_{|M_j}), \tag{28}$$

$$\operatorname*{plim}_{N\to\infty}\frac{1}{N}c_{|M_j} = \underline{c}_{|M_j} = \underline{\rho}^2\underline{a}_{|M_j} + 2\underline{a}_{|M_j}\underline{\rho}\gamma_{|M_j} + h_3(\beta_{|M_j}) + (T-1)\sigma^2, \tag{29}$$

$$\gamma_{|M_j} = \frac{h_2(\beta_{|M_j}, \underline{\rho}) - \sigma^2 h(\underline{\rho})}{\underline{a}_{|M_j}}. \tag{30}$$
$\gamma_{|M_j}$ stands for the Nickell MLE bias of ρ under $M_j$. We can see the MLE bias comes from two sources: the incidental parameter part ($\sigma^2 h(\underline{\rho})$) and the model misspecification part ($h_2(\beta_{|M_j}, \underline{\rho})$). Proposition 4 shows that when the model is correctly specified, the local posterior mode is a consistent estimator for ρ. In the simulation studies in Section 5 we find that when some combinations of the wrong exogenous regressors are included, the marginal posterior density of ρ can be either monotonically increasing or U-shaped, depending on the value of T, and does not have a local maximum. When we find such a wrong model, we will assign 0 as its posterior model probability and will not estimate the model. In Propositions 6 and 7 below, we consider the cases when the local maximum exists for $\underline{\psi}(\rho \mid M_j)$ in Equation (18) and show the sufficient conditions under which the Bayes factors in Equations (25) and (26) can lead to the selection of the true model asymptotically. Denote $\rho^*_{|M_j}$ as the local maximum of $\underline{\psi}(\rho \mid M_j)$ in $(\rho_{Lj}, \rho_{Uj})$ with $\underline{\psi}''(\rho^*_{|M_j} \mid M_j) < 0$.
Proposition 6. When $M_1$ is the true model, i.e., $\underline{\rho} \neq 0$ and $X_i^1$ is the set of true regressors to generate Y, as N increases, $\frac{p(Y|Y_0,X^1,M_1)}{p(Y|Y_0,X^0,M_0)}$ in Equation (25) will tend to infinity if the following holds:

$$\phi(\underline{\rho}) + \frac{T-1}{2}\ln\left(\frac{\underline{c}_{|M_0}}{(T-1)\sigma^2}\right) > 0. \tag{31}$$

When $M_0$ is the true model, i.e., $X_i^0$ is the set of true regressors and $\underline{\rho} = 0$, as N increases, $\frac{p(Y|Y_0,X^1,M_1)}{p(Y|Y_0,X^0,M_0)}$ in Equation (25) will tend to 0 if either of the following is satisfied:

(a) we have

$$-\phi(\rho^*_{|M_1}) + \frac{T-1}{2}\ln\left(\frac{d(\rho^*_{|M_1} \mid M_1)}{(T-1)\sigma^2}\right) > 0; \tag{32}$$

if Equation (21) is true under $M_1$, Equation (32) will hold;

(b) Equation (20) is true under $M_1$. In this case, the left hand side of Equation (32) is equal to 0.
Proposition 7. When $M_2$ is the true model and $M_1$ is the misspecified model, as N increases, $\frac{p(Y|Y_0,X^1,M_1)}{p(Y|Y_0,X^2,M_2)}$ in Equation (26) will tend to 0 if either of the following holds:

(a) we have

$$\phi(\underline{\rho}) - \phi(\rho^*_{|M_1}) + \frac{T-1}{2}\ln\left(\frac{d(\rho^*_{|M_1} \mid M_1)}{(T-1)\sigma^2}\right) > 0; \tag{33}$$

if Equation (21) is true under $M_1$, Equation (33) will hold;

(b) Equation (20) is true under $M_1$. In this case, the left hand side of Equation (33) is equal to 0.
Since both Equations (20) and (21) imply that the local posterior mode in Equation (13) is a consistent estimator for ρ, from Propositions 6 and 7 we can see that if the posterior mode is consistent under the misspecified model, the misspecified model will not be chosen by the Bayes factor (model selection will be consistent). In Appendix D, we show that h(ρ) is positive over ℝ when T is an even number. This implies ϕ(ρ) is an increasing function over ℝ. Also note that ϕ(0) = 0. Hence ϕ(ρ) < 0 for ρ < 0 and it is possible for Equation (31) to be violated when T is even and $\underline{\rho}$ is a negative number. As shown in the last paragraph of Appendix E, though Equation (31) could be violated in the extreme case of T = 2 and $\underline{\rho} < 0$, fortunately, apart from this extreme case, violation of Equation (31) could only occur when $\underline{\rho} < -1$ for T being an even number greater than 2, which may not be relevant for most economic applications with $\underline{\rho} \in [-1, 1]$.

Note that Equation (32) is the special case of Equation (33) with $\underline{\rho} = 0$. When the posterior mode is not consistent under the misspecified model, it is difficult to state under what circumstances Equation (33) is or is not satisfied, since $\rho^*$ generally does not have a closed form. By construction, $\rho^*_{|M_1}$ is a local minimum of the left hand side of Equation (33). In our simulation studies in Section 5, we calculate Equation (32) or (33) under different settings when model selection errors based on Bayes factors occur. We cannot find a single occurrence of either Equation (32) or (33) being violated except in the cases when Equation (20) is true, that is, when the candidate model nests the true model. It appears that the left hand side of Equation (33) could be interpreted as how close the candidate model is to nesting the true model. Note that with real data, it is difficult to check Equation (20), but one can assess whether Equation (33) is violated by replacing $d(\rho \mid M_j)$ with $\frac{a_{|M_j}}{N}\rho^2 - \frac{2b_{|M_j}}{N}\rho + \frac{c_{|M_j}}{N}$ and supplanting σ² and $\underline{\rho}$ by their consistent estimates, e.g., those from the model including all the potential regressors.
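The check suggested above is straightforward to code. A sketch of the empirical left hand side of Equation (33), with $d(\rho \mid M_j)$ replaced by its sample counterpart and σ², $\underline{\rho}$ by consistent estimates (the helper and its interface are ours; phi() is the earlier sketch):

```python
import numpy as np

def lhs_equation_33(rho_star, rho_hat, sigma2_hat, a, b, c, N, T):
    """Empirical left hand side of Equation (33) for a candidate model.

    rho_star: local posterior mode under the candidate model;
    rho_hat, sigma2_hat: consistent estimates, e.g., from the model
    including all potential regressors; a, b, c: Equations (15)-(17).
    """
    d_hat = a/N * rho_star**2 - 2*b/N * rho_star + c/N   # sample analogue of d(rho*)
    return (phi(rho_hat, T) - phi(rho_star, T)
            + (T - 1) / 2 * np.log(d_hat / ((T - 1) * sigma2_hat)))
```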
Proposition 8 below shows when BIC based on the biased MLE is consistent in model selection. BIC for the model with and without the lagged dependent variable is defined respectively below,

$$BIC_{lag} = NT\left[\ln\left(\frac{c - \frac{b^2}{a}}{NT}\right) + \ln 2\pi + 1\right] + (1 + k + N)\ln(NT), \tag{34}$$

$$BIC_{no\,lag} = NT\left[\ln\left(\frac{c}{NT}\right) + \ln 2\pi + 1\right] + (k + N)\ln(NT), \tag{35}$$

where a, b and c are defined respectively in Equations (15), (16) and (17) with η = 0, and k is the number of exogenous regressors included. The model with the smaller BIC value will be preferred.
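Both criteria are one-liners once a, b and c are available with η = 0. A sketch (our naming):

```python
import numpy as np

def bic(a, b, c, N, T, k, with_lag):
    """BIC of Equations (34)-(35); a, b, c must be computed with eta = 0.

    Smaller is better. The last term penalizes the N fixed effects,
    the k regression coefficients and, if present, rho.
    """
    ssr = c - b**2 / a if with_lag else c                # concentrated SSR
    n_par = (1 if with_lag else 0) + k + N
    return N*T * (np.log(ssr / (N*T)) + np.log(2*np.pi) + 1) + n_par * np.log(N*T)
```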
Proposition 8. For the comparison of the two models in Equation (25), when $M_0$ is the true model, BIC will be consistent if the following is satisfied:

$$(T-1)\sigma^2 - \underline{c}_{|M_1} + \frac{\underline{b}_{|M_1}^2}{\underline{a}_{|M_1}} < 0. \tag{36}$$

However, if Equation (20) is true under $M_1$, the left hand side of Equation (36) will be greater than 0 and BIC will be inconsistent.

When $M_1$ is the true model, BIC will be consistent in model selection if the following condition is met:

$$\underline{c}_{|M_0} - \underline{c}_{|M_1} + \frac{\underline{b}_{|M_1}^2}{\underline{a}_{|M_1}} > 0. \tag{37}$$

If $X_i^0$ is the same as $X_i^1$ and the probability limit of $\hat{\rho}_{MLE}$ is equal to 0, the left hand side of Equation (37) will be 0 and BIC will be inconsistent.

For the comparison of the two models in Equation (26), when $M_1$ is the true model, BIC will be consistent in model selection if the following holds:

$$\underline{c}_{|M_2} - \underline{c}_{|M_1} - \frac{\underline{b}_{|M_2}^2}{\underline{a}_{|M_2}} + \frac{\underline{b}_{|M_1}^2}{\underline{a}_{|M_1}} > 0. \tag{38}$$
Conditions (36), (37) and (38) are just sufficient, not necessary, conditions for BIC to be consistent in model selection, since BIC has a penalty term against over-parameterization (the last term in Equations (34) and (35)). Note that $\hat{\rho}_{MLE} = \frac{b}{a}$ and $\operatorname{plim}_{N\to\infty}\hat{\rho}_{MLE} = \underline{\rho} + \gamma$ from Equations (27) and (28), where γ is the Nickell bias. The violation of Equations (36) and (37) is related to the hypothesis test $H_0: \rho = 0$. When $\operatorname{plim}_{N\to\infty}\hat{\rho}_{MLE} = \gamma$ ($\underline{\rho} = 0$) and $X_i^1 = X_i^0$, the probability of making type I errors based on classical test statistics, such as Wald or likelihood ratio (LR), will be 1 and BIC will choose $M_1$ asymptotically, with the left hand side of Equation (36) being $\underline{a}_{|M_1}\gamma_{|M_1}^2 > 0$; when $\operatorname{plim}_{N\to\infty}\hat{\rho}_{MLE} = 0$ and $X_i^1 = X_i^0$, the probability of making type II errors will be 1 asymptotically and BIC will choose $M_0$ even if $\underline{\rho} \neq 0$, with the left hand side of Equation (37) being $\underline{a}_{|M_1}(\underline{\rho} + \gamma_{|M_1})^2 = 0$. When incidental parameters are present, Cox and Reid [17] suggest using the likelihood conditional on the MLE of the orthogonalized incidental parameters to construct LR statistics. In practice, if we find $\hat{\rho}_{MLE}$ is close to 0 or to the estimated Nickell bias, we should be cautious about using BIC for model selection. For Equation (38), as shown in Appendix G, if $M_1$ is the true model and $X_i^2$ nests $X_i^1$, the left hand side of Equation (38) will be less than or equal to 0 asymptotically, depending on whether $\underline{a}_{|M_2}$ is less than or equal to $\underline{a}_{|M_1}$. Though Equation (38) is violated when $\underline{a}_{|M_2} = \underline{a}_{|M_1}$, BIC can still favour $M_1$ since there are more parameters under $M_2$. However, if $\underline{a}_{|M_2} < \underline{a}_{|M_1}$, which could happen when $f_i$ is highly correlated with all the potential regressors, BIC will choose the wrong model $M_2$ asymptotically, as shown in Section 5.3. Given the SSR interpretation of a in Equation (15) (with η = 0), the practical implication of this result is that if BIC chooses the model with all the regressors included, which always has the smallest $\frac{a}{N}$ in finite samples compared to other models, we should be cautious with such a choice in application.

5. Simulation Studies

In this section we use Monte Carlo simulation to verify the claims in Propositions 6, 7 and 8 and to investigate the impact of model uncertainty on point estimation. The number of simulations is 1000. We set T = 4, σ² = 1, $\eta = \frac{1}{NT}$, $\rho_L = -N$ when T is even, and the number of possible regressors (K) to 8. We select 4 regressors out of the 8 to generate the dependent variable. The coefficient values of the chosen regressors are 0.1, 0.3, 1 and 2 respectively. We draw $f_i$ and $y_{i,0}$ independently from U[−4, 4]. For each simulation, we calculate the posterior model probabilities and the BIC of all the models and evaluate the performances of the two criteria. In Proposition 5, we show that the posterior mode is not a consistent estimator of ρ when neither Equation (20) nor (21) holds, which is possible when the regressors have collinearity and serial correlation. We generate the potential regressors to be covariance stationary and make them serially correlated and correlated with each other. The details of the data generating process (DGP) can be found in Appendix A. There are three parameters controlling the properties of the regressors: $\sigma_X^2 = 5.33$ (the variance, with the same value as those of $f_i$ and $y_{i,0}$), s = 0.5 (the autocorrelation coefficient) and λ = 1 (between 0 and 1; the closer to 0, the higher the correlation among the regressors). The settings are the same for the subsequent simulation exercises unless otherwise stated. The results of robustness checks with other values of $\sigma_X^2$, s and λ are shown in Appendix B.

5.1. When Model Selection is Consistent

The model selection performance results for different values of $\underline{\rho} > -1$ appear similar and are available upon request. If some true regressors are excluded, Equation (20) or (21) would be violated more often under $\underline{\rho} = 1$ than when $\underline{\rho}$ takes other positive values. The results presented in Table 1 are based on $\underline{\rho} = 1$. The "ER" column shows the error rates of Bayes factors while the "ERBIC" column contains those of BIC. For N = 40, BIC performs better than Bayes factors. As the sample size increases, the error rates of the two criteria get closer and both decrease, which implies both are consistent in model selection. Note that the coefficient of one of the exogenous regressors in the true model is equal to 0.1, which is close to 0. Models selected based on Bayes factors often cannot pick up this regressor when N = 40. The column "nest" indicates how often the model chosen by Bayes factors only omits the regressor with coefficient 0.1 or is the same as the true model. In other words, the true model nests the chosen model. Comparing this column to "nestbic", we can see that the models chosen by Bayes factors are more often nested inside the true model with the less important regressor excluded than the models from BIC. Column "ER11" shows the proportions of errors committed when the true model and the model chosen by Bayes factors both include $y_{i,-1}$ but have different exogenous regressors. We can see that all the errors made by Bayes factors and BIC are due to the inclusion of the wrong set of exogenous regressors rather than to omitting $y_{i,-1}$. Hence there is no point in checking whether or not Equation (31) or (37) is violated. When the errors of ER11 or ERBIC11 occurred, we checked whether Equation (33) or (38) was violated. For this and the following simulation exercises, we did not find any violations of Equation (33). Equation (38) is only violated, with its left hand side being 0, when the chosen model ($M_2$) includes all the regressors of the true model ($M_1$). In this case, BIC is still consistent. In other words, the errors of ER11 or ERBIC11 are fixable with larger sample sizes for both selection criteria.
Table 1. Simulation results when both criteria are consistent in model selection.

N | ER | ERBIC | ER11 | ERBIC11 | Nest | Nestbic
40 | 0.834 | 0.762 | 1 | 1 | 0.799 | 0.704
100 | 0.543 | 0.510 | 1 | 1 | 0.829 | 0.777
200 | 0.300 | 0.299 | 1 | 1 | 0.862 | 0.830
500 | 0.122 | 0.110 | 1 | 1 | 0.902 | 0.907
1000 | 0.064 | 0.064 | 1 | 1 | 0.943 | 0.942

5.2. When Equation (31) is Violated for Bayes Factors

In Section 4, we mentioned that when T is even and $\underline{\rho}$ is a small negative number, it is possible for Equation (31) to be violated. Under the settings in Appendix A, the left hand side of Equation (31) often has a root of $\underline{\rho}$ between −7.4 and −7.2 when the true regressors are included. If $\underline{\rho}$ is less than the root, Equation (31) will be violated. In our next exercise, we set $\underline{\rho} = -7.4$ and run the simulations again. The results are in Table 2. We can see that Bayes factors cannot select the true model even once out of the 1000 simulations for all sample sizes, while the error rates of BIC gradually decrease with N. All the Bayes factor errors are made when the chosen model does not contain $y_{i,-1}$ (see "ER10") and Equation (31) is violated. Similar problems with Bayes factors arise when T = 2 and $-1 < \underline{\rho} < 0$, as explained in Appendix E. Table 3 shows the simulation results for such a situation, when $\underline{\rho} = -0.9$, σ² = 100 and the true model does not have $X_i$, while other settings are the same as before. Bayes factors again show no signs of model selection consistency, almost always due to the violation of Equation (31). The "noreg" column shows how often, among the errors made by Bayes factors, the chosen model only includes the fixed effects with no other regressors. As sample size increases, Bayes factors tend to make more such errors, which BIC never commits.
Table 2. Simulation results when Equation (31) is violated with $\underline{\rho} = -7.4$.

N | ER | ERBIC | ER10 | no(31) | ER10BIC | Nest | Nestbic
40 | 1 | 0.787 | 1 | 1 | 0 | 0 | 0.688
100 | 1 | 0.552 | 1 | 1 | 0 | 0 | 0.756
200 | 1 | 0.300 | 1 | 1 | 0 | 0 | 0.832
500 | 1 | 0.139 | 1 | 1 | 0 | 0 | 0.886
1000 | 1 | 0.064 | 1 | 1 | 0 | 0 | 0.941
Table 3. Simulation results when Equation (31) is violated with $\underline{\rho} = -0.9$, T = 2, σ² = 100 and no $X_i$ in the true model.

N | ER | ERBIC | ER10 | no(31) | ERBIC10 | Noreg | Noregbic
40 | 0.910 | 0.696 | 0.936 | 0.998 | 0 | 0.621 | 0
100 | 0.873 | 0.559 | 0.956 | 1 | 0 | 0.751 | 0
200 | 0.854 | 0.440 | 0.977 | 1 | 0 | 0.833 | 0
500 | 0.858 | 0.360 | 0.986 | 1 | 0 | 0.893 | 0
1000 | 0.896 | 0.273 | 0.991 | 1 | 0 | 0.921 | 0

5.3. When Equation (36), (37) or (38) is Violated for BIC

Bayes factors perform poorly in model selection when Equation (31) is violated, which takes place under rather extreme situations. Next we show that BIC can perform poorly under more common circumstances, which are more plausible in economic applications. As discussed in Proposition 8, if $\underline{\rho} = 0$, BIC could asymptotically choose the model with the true exogenous regressor(s) and $y_{i,-1}$ over the true model. For the next simulation exercise, we change $\underline{\rho}$ to 0. The results are shown in Table 4. Bayes factors now have smaller error rates while BIC cannot identify the true model. As expected, BIC always chooses the models with $y_{i,-1}$ (see "ER01BIC"), while the proportion of errors violating Equation (36) (see "no(36)") gets higher for bigger sample sizes. Column "cnestbic" shows how often the model chosen by BIC nests the true model when the errors of ER01BIC occur. The values in this column are just slightly smaller than those in "no(36)", which indicates that a high proportion of the violations of Equation (36) happen when the chosen model nests the true model.

Another situation of poor BIC performance is when Equation (37) is close to violation. In Proposition 8, we mentioned that if $\operatorname{plim}_{N\to\infty}\hat{\rho}_{MLE} = 0$ under the true model, a candidate model with the same exogenous regressors as those of the true model will violate Equation (37). In our next experiment, we do not include any exogenous regressors in the true model and set $\underline{\rho} = 0.0756$ to make $\hat{\rho}_{MLE}$ close to 0. If the candidate model ($M_0$) only has fixed effects, the left hand side of Equation (37) is close to but slightly above 0. The simulation results are given in Table 5. We can see that BIC error rates gradually increase to near 1 with the sample size. Column "noregbic" indicates the proportion of BIC errors committed when the chosen model only includes fixed effects. Note that the values in this column are the same as those in "ER10BIC", which also get closer to 1 with sample size. Clearly, the poor performance of BIC in this scenario should be related to Equation (37).
Table 4. Simulation results when Equation (36) is violated with $\underline{\rho} = 0$ and T = 4.

N | ER | ERBIC | ER01 | ER01BIC | no(36) | Cnestbic | Nest | Nestbic
40 | 0.844 | 1 | 0 | 1 | 0.243 | 0.230 | 0.777 | 0
100 | 0.572 | 1 | 0 | 1 | 0.524 | 0.511 | 0.823 | 0
200 | 0.290 | 1 | 0 | 1 | 0.777 | 0.760 | 0.864 | 0
500 | 0.104 | 1 | 0 | 1 | 0.941 | 0.925 | 0.924 | 0
1000 | 0.042 | 1 | 0 | 1 | 0.996 | 0.985 | 0.965 | 0
Table 5. Simulation results when Equation (37) is violated with $\underline{\rho} = 0.0756$ and no exogenous regressors included in the true model.

N | ER | ERBIC | ER10 | no(37) | ER10BIC | Noreg | Noregbic
40 | 0.952 | 0.968 | 0.973 | 0 | 0.876 | 0.660 | 0.876
100 | 0.808 | 0.971 | 0.943 | 0 | 0.947 | 0.762 | 0.947
200 | 0.540 | 0.976 | 0.887 | 0 | 0.968 | 0.720 | 0.968
500 | 0.132 | 0.985 | 0.311 | 0 | 0.981 | 0.303 | 0.981
1000 | 0.056 | 0.991 | 0 | 0 | 0.990 | 0 | 0.990
Finally we show a case when Equation (38) is violated. Note that the left hand side of Equation (38) asymptotically depends on $\underline{a}_{|M_1}$ (calculated under the true model) and $\underline{a}_{|M_2}$ (calculated under the wrong candidate model). If $\underline{a}_{|M_1} > \underline{a}_{|M_2}$, BIC will be inconsistent, which could happen when $M_2$ nests $M_1$ and $f_i$ is highly correlated with all the potential exogenous regressors. In our next exercise, we set T = 3, $\underline{\rho} = 1$ and generate $y_{i,0}$ and $f_i^*$ from U[−1, 1]. When we generate $\tilde{x}_{i,t}$ in Equation (40), we set s = 0.9. $f_i$ is generated as $f_i = f_i^* + 10\frac{1}{TK}\sum_{t=1}^T\sum_{h=1}^K x_{i,t,h}$. In the true model, no exogenous regressors are included, which implies any candidate model including $y_{i,-1}$ nests or is the same as the true model. The results in Table 6 show that BIC is not consistent, with increasing error rates as the sample size gets larger than 200, and all the errors are of type ERBIC11. For all the errors made by BIC, we have found that Equation (38) is violated with $\underline{a}_{|M_1} > \underline{a}_{|M_2}$. In a few cases, $\underline{a}_{|M_1}$ is very close to $\underline{a}_{|M_2}$. The column with the heading $\frac{\underline{a}_{|M_2}}{\underline{a}_{|M_1}} < 0.999$ in Table 6 indicates the percentage of the errors where $\underline{a}_{|M_2}$ is smaller than $\underline{a}_{|M_1}$ by more than 0.1% of its value. We can see that the majority of the errors happen when $\underline{a}_{|M_2}$ is smaller by more than a tiny fraction of $\underline{a}_{|M_1}$. The column headed $E\!\left(\frac{\underline{a}_{|M_2}}{\underline{a}_{|M_1}}\right)$ shows the sample average of $\frac{\underline{a}_{|M_2}}{\underline{a}_{|M_1}}$ over all the errors, which gets smaller with the sample size. This implies BIC tends to choose the model with lower $\underline{a}$ in comparison to the true model at larger sample sizes. Note that the simulation results are sensitive to the parameter settings. If we change T to 4 while keeping other settings the same, among the BIC errors $\underline{a}_{|M_2}$ will be virtually the same as $\underline{a}_{|M_1}$ and BIC will show decreasing error rates, which, though, are higher than those of Bayes factors for different sample sizes. In this case, we need to change $\underline{\rho}$ and s to make BIC inconsistent. The results are available upon request.
Table 6. Simulation results when Equation (38) is violated.

N | ER | ERBIC | ER11 | ERBIC11 | $\frac{\underline{a}_{|M_2}}{\underline{a}_{|M_1}} < 0.999$ | $E\!\left(\frac{\underline{a}_{|M_2}}{\underline{a}_{|M_1}}\right)$
40 | 0.209 | 0.439 | 1 | 1 | 0.968 | 0.878
100 | 0.119 | 0.352 | 1 | 1 | 0.977 | 0.866
200 | 0.072 | 0.336 | 1 | 1 | 0.973 | 0.858
500 | 0.050 | 0.406 | 1 | 1 | 0.990 | 0.811
1000 | 0.039 | 0.600 | 1 | 1 | 0.993 | 0.782
To sum up, it is possible for Equation (36), (37) or (38) to be violated and BIC can be inconsistent in model selection under more common circumstances than Bayes factors.

5.4. Point Estimation

Judging from the previous simulation results, we can see that if we simply select the model with the highest posterior model probability to provide the estimates of interest, the chances are high that the selected model is not the true model, especially when N is small, regardless of which criterion we use. Next we investigate how model uncertainty impacts point estimation. We set $\underline{\rho} = 1$ and the number of simulations equal to 2000. We then evaluate the performances of different consistent point estimators.

Table 7 shows the root mean squared errors (RMSE) with the cross-section sample size (N) equal to 40. The true values of ρ and β are shown in the column "True". There are 8 potential regressors, 4 of which are not included in the true model and hence have coefficients equal to 0. The column "Top" shows the RMSE resulting from the posterior mode of the model with the highest posterior model probability, the column "All" shows the results from the model which includes all the potential regressors, while the values in the column "BMA" are from the posterior mode average across different models with weights equal to the posterior model probabilities. To evaluate the significance of a regressor in the Bayesian context, we can calculate the sum of the posterior model probabilities of all the models which include the regressor. The RMSE in the columns headed with percentage numbers are calculated based on a given inclusion probability criterion: if the inclusion probability of a regressor is lower than the percentage number of the column, we simply use zero as its point estimate; otherwise, we use the BMA estimate. From Table 7, we can see that the model including all the potential regressors has much higher RMSE for all the parameters except ρ than the other methods. BMA has smaller RMSE for almost all the parameters than the top model criterion, and it tends to have lower RMSE than the inclusion probability criteria for parameters different from 0 but larger RMSE for parameters equal to 0. A higher inclusion probability tends to give smaller RMSE when the true value of the parameter is 0 and higher RMSE for non-zero parameters. The last row of Table 7 shows the sum of RMSE in each column, which is a measure of the overall performance of the different criteria. We can see that BMA and the various inclusion probability criteria are all better than the top model and the all-inclusive model. The sum of RMSE is smallest when we set the inclusion probability to 50%.
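The BMA and inclusion-probability estimators compared below can be assembled directly from the posterior model probabilities. A sketch (names and interface ours, reusing the model keys from the earlier enumeration sketch):

```python
import numpy as np

def bma_and_inclusion(post_probs, post_modes, K, threshold=0.5):
    """BMA point estimates of beta with an inclusion-probability cutoff.

    post_probs: dict model -> posterior model probability (Equation (24));
    post_modes: dict model -> length-K vector of posterior modes of beta,
    zeros in the positions of excluded regressors. Regressors whose
    inclusion probability falls below `threshold` get a point estimate
    of 0, mirroring the 50% rule discussed in the text.
    """
    bma, incl = np.zeros(K), np.zeros(K)
    for m, p in post_probs.items():
        regressors, _lag = m
        bma += p * post_modes[m]
        for j in regressors:
            incl[j] += p          # sum of probabilities of models containing j
    return np.where(incl >= threshold, bma, 0.0), incl
```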
Table 7. Root mean squared errors (RMSE) of point estimators when N = 40.

Parameter | True | Top | BMA | All | 30% | 40% | 50% | 60% | 70% | 80%
ρ | 1 | 0.017 | 0.017 | 0.017 | 0.017 | 0.017 | 0.017 | 0.017 | 0.017 | 0.017
β | 0.1 | 0.110 | 0.090 | 4.954 | 0.096 | 0.099 | 0.101 | 0.102 | 0.103 | 0.103
β | 0.3 | 0.126 | 0.114 | 4.263 | 0.117 | 0.121 | 0.128 | 0.138 | 0.151 | 0.167
β | 0 | 0.072 | 0.066 | 3.567 | 0.062 | 0.060 | 0.058 | 0.056 | 0.049 | 0.046
β | 0 | 0.071 | 0.057 | 5.219 | 0.054 | 0.052 | 0.049 | 0.045 | 0.042 | 0.036
β | 0 | 0.051 | 0.044 | 2.254 | 0.039 | 0.036 | 0.030 | 0.027 | 0.023 | 0.019
β | 1 | 0.119 | 0.104 | 6.033 | 0.105 | 0.105 | 0.106 | 0.111 | 0.114 | 0.121
β | 0 | 0.057 | 0.053 | 3.777 | 0.049 | 0.047 | 0.041 | 0.038 | 0.033 | 0.025
β | 2 | 0.118 | 0.108 | 6.573 | 0.108 | 0.108 | 0.113 | 0.113 | 0.120 | 0.128
Sum | | 0.739 | 0.652 | 36.656 | 0.647 | 0.643 | 0.643 | 0.646 | 0.653 | 0.661
To add more insight into inclusion probabilities, we present the error rates of in/excluding the wrong/right regressor based on different inclusion probability criteria in Table 8 and compare them to those from the top model. Similar to the findings for RMSE, higher inclusion probabilities tend to give larger error rates for non-zero parameters and smaller error rates for the zero parameters. The last row shows the average error rates of the different columns, of which the highest value appears when the 10% criterion is used, and the majority of those errors are from the zero parameters. Note that for a particular regressor, the prior inclusion probability is 50% in our setting. If the posterior inclusion probability is no less than 50%, it implies the data confirm or strengthen the prior. The top model criterion has a smaller average error rate than almost all the inclusion probability criteria except 40% and 50%.
Table 8. The error rates of excluding or including a regressor based on different criteria when N = 40.

Parameter | True | Top | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80%
ρ | 1 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
β | 0.1 | 0.826 | 0.181 | 0.526 | 0.682 | 0.789 | 0.856 | 0.894 | 0.927 | 0.955
β | 0.3 | 0.119 | 0.004 | 0.019 | 0.046 | 0.070 | 0.110 | 0.150 | 0.205 | 0.270
β | 0 | 0.053 | 0.647 | 0.209 | 0.105 | 0.060 | 0.042 | 0.026 | 0.016 | 0.011
β | 0 | 0.052 | 0.638 | 0.217 | 0.114 | 0.073 | 0.042 | 0.023 | 0.013 | 0.008
β | 0 | 0.048 | 0.654 | 0.226 | 0.103 | 0.064 | 0.032 | 0.019 | 0.011 | 0.006
β | 1 | 0.003 | 0.000 | 0.001 | 0.001 | 0.001 | 0.002 | 0.003 | 0.004 | 0.006
β | 0 | 0.043 | 0.633 | 0.204 | 0.107 | 0.064 | 0.037 | 0.023 | 0.012 | 0.005
β | 2 | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 | 0.001 | 0.001 | 0.001 | 0.002
Avg. | | 0.127 | 0.307 | 0.156 | 0.129 | 0.125 | 0.124 | 0.127 | 0.132 | 0.140
Table 9 presents the RMSE sums and average error rates under different sample sizes. BMA has smaller RMSE than the top model estimators for all sizes, while the top model average error rate is in general close to the minimum across the various inclusion probability criteria. The minimum of the RMSE sums is usually attained when the inclusion probability is at or above 50%, while the minimum average error rates appear at around 50%. Therefore, under our simulation settings, for point estimation it seems sensible to use a 50% inclusion probability to decide whether or not a regressor should be included.
Table 9. Sum of RMSE and averages of error rates.

 | Sum of RMSE | | | | | Average Error Rates | | | |
N | 40 | 100 | 200 | 500 | 1000 | 40 | 100 | 200 | 500 | 1000
Top | 0.739 | 0.535 | 0.299 | 0.189 | 0.107 | 0.127 | 0.083 | 0.046 | 0.015 | 0.005
BMA | 0.652 | 0.459 | 0.288 | 0.171 | 0.101 | N.A. | N.A. | N.A. | N.A. | N.A.
10% | 0.653 | 0.460 | 0.287 | 0.171 | 0.100 | 0.307 | 0.178 | 0.116 | 0.062 | 0.029
20% | 0.653 | 0.460 | 0.286 | 0.170 | 0.099 | 0.156 | 0.102 | 0.063 | 0.030 | 0.012
30% | 0.647 | 0.461 | 0.284 | 0.169 | 0.098 | 0.129 | 0.085 | 0.049 | 0.021 | 0.008
40% | 0.643 | 0.461 | 0.285 | 0.169 | 0.097 | 0.125 | 0.081 | 0.046 | 0.017 | 0.006
50% | 0.643 | 0.429 | 0.284 | 0.168 | 0.096 | 0.124 | 0.082 | 0.048 | 0.015 | 0.005
60% | 0.646 | 0.437 | 0.278 | 0.169 | 0.096 | 0.127 | 0.087 | 0.051 | 0.015 | 0.004
70% | 0.653 | 0.447 | 0.279 | 0.170 | 0.098 | 0.132 | 0.092 | 0.056 | 0.017 | 0.005
80% | 0.661 | 0.437 | 0.279 | 0.161 | 0.100 | 0.140 | 0.099 | 0.063 | 0.021 | 0.007
90% | 0.685 | 0.439 | 0.305 | 0.171 | 0.102 | 0.153 | 0.111 | 0.077 | 0.028 | 0.008

6. Conclusions

In this paper, we investigated consistent parameter estimation and model selection for the linear dynamic panel model. We use the fixed effect reparameterization proposed by Lancaster [11] combined with our data dependent prior for estimation and calculate Bayes factors to compare different model specifications. We recommend that model selection should precede parameter estimation, since Lancaster's fixed effect transformation may not necessarily lead to consistent estimation when some true exogenous regressors are excluded. We have given the conditions under which Bayes factors or BIC can lead to consistency in model selection and have shown that Bayes factors could be inconsistent in model selection when the number of time periods is 2, or when the true autoregressive coefficient is less than −1. Such situations could be rare in most economic applications. BIC based on the biased MLE can be inconsistent when the fixed effects are highly correlated with all the potential exogenous regressors or when the true autoregressive coefficient is 0 or its MLE is close to 0, which are more likely to happen in reality.
When model uncertainty is substantial, e.g., with small sample sizes, we argue for the use of Bayesian model averaging, which can produce point estimators with smaller RMSE than the model with the highest posterior model probability in our simulation exercises. Inclusion probability criteria can be helpful for reducing estimation risk and for deciding which regressor(s) should be chosen. We recommend using 50% (the prior inclusion probability) to decide the inclusion of a regressor, which usually produces the smallest RMSE and average error rates in our simulation exercises. It would be promising for future research to extend Lancaster's reparameterization to higher order AR models and to consider lag order selection along with regressor selection.

Acknowledgments

The author wishes to thank Roberto Leon Gonzalez for his patient and insightful guidance and Gary Koop for his long-term encouragement and helpful advice. The author is also very grateful to the editor, Kerry Patterson, and three anonymous referees, whose comments and suggestions led to considerable improvement of the paper. The author is responsible for all the remaining errors.

Conflicts of Interest

The author declares no conflict of interest.

Appendix

A. The DGP of Exogenous Regressors

We first draw $x_{i,t}^*$ ($K \times 1$) from i.i.d. $N(0, \sigma_X^2 I_K)$ and then generate $\tilde{x}_{i,t}$ as follows,

$$\tilde{x}_{i,t} = s\,\tilde{x}_{i,t-1} + \sqrt{1 - s^2}\,x_{i,t}^*, \tag{40}$$

with $\tilde{x}_{i,0} = x_{i,0}^*$. s is the first order autocorrelation. Denote $\tilde{X}_i = (\tilde{x}_{i,1}, \ldots, \tilde{x}_{i,T})'$, and let $\tilde{X}_{i,j}$ and $X_{i,h}$ be the jth and hth columns of $\tilde{X}_i$ and $X_i$ respectively, where

$$X_{i,h} = \sum_{j=1}^K q_{h,j}\tilde{X}_{i,j}, \qquad h = 1, 2, \ldots, K. \tag{41}$$

Define $z_{K\times 1} = \left(\frac{1}{\sqrt{K}}, \ldots, \frac{1}{\sqrt{K}}\right)'$. $q_h = (q_{h,1}, \ldots, q_{h,K})'$ is drawn from the angular central Gaussian (ACG) distribution with $q_h'q_h = 1$ and parameter $zz' + \lambda(I - zz')$ (see [18]). $q_h$ can be viewed as an orientation (direction) in $\mathbb{R}^K$. If λ = 1, $q_h$ will be uniformly distributed; if λ is closer to 0, $q_h$ will be closer to the orientation of z, i.e., the regressors generated thereby will have higher correlation. Note that under our data generating design, any element in $X_i$ will have mean 0 and variance $\sigma_X^2$. The correlation coefficient of any two elements in $X_i$ is the same across i and can be calculated as

$$\operatorname{corr}(X_{i,t,h}, X_{i,\tilde{t},\tilde{h}}) = s^{|t-\tilde{t}|}\sum_{j=1}^K q_{h,j}q_{\tilde{h},j}, \qquad t = 1, 2, \ldots, T, \; h = 1, 2, \ldots, K. \tag{42}$$
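A sketch of this DGP follows; the sampler for $q_h$ normalizes a draw from $N(0, zz' + \lambda(I - zz'))$, which is the standard way to obtain an ACG variate. All function names and defaults are ours, and λ must be strictly positive here.

```python
import numpy as np

def generate_regressors(N, T, K, sigma2_X=5.33, s=0.5, lam=1.0, seed=0):
    """DGP of Appendix A: AR(1) latent regressors mixed through ACG directions."""
    rng = np.random.default_rng(seed)
    z = np.full(K, 1.0 / np.sqrt(K))
    Sigma = np.outer(z, z) + lam * (np.eye(K) - np.outer(z, z))
    Q = np.linalg.cholesky(Sigma) @ rng.standard_normal((K, K))
    Q /= np.linalg.norm(Q, axis=0)            # columns q_h ~ ACG, q_h'q_h = 1
    x_star = rng.normal(0, np.sqrt(sigma2_X), size=(N, T + 1, K))
    x_tilde = np.empty((N, T + 1, K))
    x_tilde[:, 0] = x_star[:, 0]              # x_tilde_{i,0} = x*_{i,0}
    for t in range(1, T + 1):                 # Equation (40)
        x_tilde[:, t] = s * x_tilde[:, t - 1] + np.sqrt(1 - s**2) * x_star[:, t]
    return x_tilde[:, 1:] @ Q                 # Equation (41): X_{i,h} = sum_j q_{h,j} x_tilde_j
```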

B. Properties of the Exogenous Regressors in the Simulation

Here we carry out some robustness checks of our simulation results under different settings. Apart from the conditions in Propositions 6 to 8, the model selection performance of Bayes factors and BIC is also sensitive to the properties of $X_i$ through the values of $\sigma_X^2$, s and λ. We first reduce $\sigma_X^2$ to 1.33 to obtain the results in Table 10. The error rates of the two criteria are all higher across sample sizes than those in Table 1, while the nest rates are all lower. Similar model selection deterioration could also occur with inflated error variances (σ²). Hence model selection performance is affected by the relative strength of the signal compared to the noise, which is determined by their variances.

Next we show that the levels of serial correlation and collinearity in the regressors also affect model selection performance. Recall from Appendix A that s is the first order autocorrelation and λ controls the level of collinearity. To have regressors with no collinearity, we can set $q_h$ to be the hth column of an identity matrix of dimension K. We set $\underline{\rho} = 1$ and N = 200. The error rates under different levels of serial correlation and collinearity are shown in Table 11. We can see that cross-regressor correlation and positive serial correlation are harmful for model selection. If different regressors are orthogonal to each other, Bayes factors and BIC will have lower error rates than when collinearity is present, while the highest error rates appear when s is 0.9 under all levels of collinearity. One intriguing phenomenon is that negative serial correlation seems to enhance model selection performance in most cases in comparison to positive or no serial correlation.
Table 10. Simulation results when $\underline{\rho} = 1$ and $\sigma_X^2 = 1.33$.

N | ER | ERBIC | ER11 | ERBIC11 | Nest | Nestbic
40 | 0.962 | 0.958 | 1 | 1 | 0.501 | 0.497
100 | 0.886 | 0.872 | 1 | 1 | 0.736 | 0.716
200 | 0.767 | 0.747 | 1 | 1 | 0.827 | 0.812
500 | 0.500 | 0.470 | 1 | 1 | 0.869 | 0.856
1000 | 0.221 | 0.217 | 1 | 1 | 0.910 | 0.896
Table 11. Error rates under different levels of serial correlation and collinearity for $\underline{\rho} = 1$ and N = 200.

 | Bayes Factors | | | | | BIC | | | |
λ \ s | −0.9 | −0.5 | 0 | 0.5 | 0.9 | −0.9 | −0.5 | 0 | 0.5 | 0.9
orthogonal | 0.045 | 0.039 | 0.061 | 0.085 | 0.569 | 0.098 | 0.081 | 0.089 | 0.127 | 0.543
λ = 1 | 0.135 | 0.154 | 0.174 | 0.276 | 0.779 | 0.148 | 0.178 | 0.186 | 0.278 | 0.762
λ = 0.01 | 0.638 | 0.661 | 0.696 | 0.803 | 0.972 | 0.600 | 0.627 | 0.678 | 0.794 | 0.966

C. Proof of Proposition 3 and Proposition 5

Here we use a different way of derivation from Lancaster [11]. In brief, we attempt to find a correction function attached to the marginal posterior density of ρ such that the mode of the marginal posterior is a consistent estimator for ρ. We first reparameterize the fixed effect as

$$f_i = g_i\, r(\rho) - \frac{1}{T}\iota' X_i \beta,$$

where $r(\rho)$ is a function of ρ, which we will determine later. The derivation of the conditional posterior distribution $p(g_i, \beta, \sigma^2 \mid \rho, Y, Y_0)$ follows standard Bayesian techniques, see e.g., [19] Chapter 10. The details are available upon request. Here we just show the result after $g_i$, β and σ² are integrated out:

$$p(\rho \mid Y, Y_0)\, p(Y \mid Y_0) = \frac{\Gamma\!\left(\frac{N(T-1)}{2}\right)}{T^{\frac{N}{2}}(2\pi)^{\frac{N(T-1)}{2}}}\, \frac{I(\rho_L < \rho < \rho_U)}{\rho_U - \rho_L} \left(\frac{\eta}{\eta+1}\right)^{\frac{k}{2}} N^{-\frac{N(T-1)}{2}} \left(\frac{a}{N}\rho^2 - \frac{2b}{N}\rho + \frac{c}{N}\right)^{-\frac{N(T-1)}{2}} r^{-N}(\rho). \tag{43}$$

Taking logs and differentiating both sides with respect to ρ produces

$$\frac{d\ln p(\rho \mid Y, Y_0)}{d\rho} = -\frac{N(T-1)\left(\frac{a}{N}\rho - \frac{b}{N}\right)}{\frac{a}{N}\rho^2 - \frac{2b}{N}\rho + \frac{c}{N}} - N\frac{d\ln r(\rho)}{d\rho}.$$

Setting the above equal to 0, we obtain

$$\frac{d\ln r(\rho)}{d\rho} = -\frac{(T-1)\left(\frac{a}{N}\rho - \frac{b}{N}\right)}{\frac{a}{N}\rho^2 - \frac{2b}{N}\rho + \frac{c}{N}}. \tag{44}$$

Suppose for now that we have included the true regressors in our model. Taking the probability limit of the right hand side by using Equations (27), (28) and (29) and evaluating both sides at $\underline{\rho}$ gives

$$\frac{d\ln r(\underline{\rho})}{d\rho} = -h(\underline{\rho}).$$

Solving the above differential equation, we have

$$r(\rho) = \exp[-\phi(\rho)], \tag{45}$$

where $\phi(\rho)$ is given in Equation (7). By replacing $r(\rho)$ with $\exp[-\phi(\rho)]$ in Equation (43) and dropping the terms not involving ρ, we obtain the result in Equation (13).
When some true regressors are excluded from the model, the differential Equation (44) now becomes, at $\underline{\rho}$,

$$\frac{d\ln r(\underline{\rho})}{d\rho} = \frac{(T-1)\left[h_2(\beta, \underline{\rho}) - \sigma^2 h(\underline{\rho})\right]}{h_3(\beta) + (T-1)\sigma^2}. \tag{46}$$

If the solution in Equation (45) were still valid, we should have

$$\frac{(T-1)\sigma^2 h(\underline{\rho}) - (T-1)h_2(\beta, \underline{\rho})}{h_3(\beta) + (T-1)\sigma^2} = h(\underline{\rho}).$$

So unless we have either $\frac{(T-1)h_2(\beta,\underline{\rho})}{h_3(\beta)} = -h(\underline{\rho})$ or $h_2(\beta, \underline{\rho}) = h_3(\beta) = 0$, Equation (45) will not be a solution of Equation (46). In other words, the reparameterization of the fixed effect in Equation (6) cannot lead to consistent estimation of ρ.

D. Proof of Proposition 4

To prove the claims in Proposition 4, we first need to prove Lemma 9 and Lemma 10.

Lemma 9. For $T \geq 3$: when T is odd, the polynomial $h(\rho)$ is strictly increasing over $(-\infty, \infty)$ and has only one real root, which lies in $[-2, -1)$; when T is even, the polynomial $h(\rho)$ is greater than 0 with no real roots, and is strictly decreasing for $\rho < -1$ and strictly increasing for $\rho > -1$, with $-1$ as the minimum point.

Proof. When T = 3, we have $h(\rho) = \frac{1}{3}(\rho + 2)$; when T = 4, we have $h(\rho) = \frac{1}{4}(\rho^2 + 2\rho + 3)$. These two cases obviously satisfy the claims in Lemma 9. For T > 4, note that

$$h'(\rho) = \frac{dh(\rho)}{d\rho} = \sum_{t=1}^{T-2}\frac{t(T-t-1)}{T}\rho^{t-1} = \frac{(T-2)\rho^T - T\rho^{T-1} + T\rho - (T-2)}{T(\rho-1)^3},$$

and the Sturm sequence of $Th'(\rho)(\rho-1)^3$ is

$$\left\{Th'(\rho)(\rho-1) = \sum_{t=0}^{T-2}(2t-T+2)\rho^t,\;\; \frac{1}{(\rho-1)^2}\frac{d\left[h'(\rho)(\rho-1)^3\right]}{d\rho} = \sum_{t=1}^{T-2}t\rho^{t-1},\;\; \sum_{t=1}^{T-3}(T-t-2)\rho^{t-1},\;\; (T-2)^2\right\}. \tag{47}$$

Table 12 shows the signs of the Sturm sequence at $\rho = \pm\infty$ for T even. We can see that the difference between the numbers of sign changes when ρ moves from $-\infty$ to ∞ is 2. By Sturm's theorem, this implies there are two real roots of $Th'(\rho)(\rho-1)^3 = 0$ in $(-\infty, \infty)$. Clearly, ρ = 1 is one real root. In other words, $h'(\rho)$ has only one real root in $(-\infty, \infty)$, which is $\rho = -1$, and $h'(\rho) > 0$ for $\rho > -1$, $h'(\rho) < 0$ for $\rho < -1$. Therefore $h(\rho)$ is strictly decreasing for $\rho < -1$ and strictly increasing for $\rho > -1$, with $\rho = -1$ as the minimum point. Similarly, checking the difference between the numbers of sign changes in Table 13, we find that $h'(\rho)$ has no real roots and $h'(\rho) > 0$. Hence $h(\rho)$ is strictly increasing over the real line when T is odd.
Table 12. Sturm sequence of $Th'(\rho)(\rho-1)^3$ when T is even and greater than 4.

ρ | $Th'(\rho)(\rho-1) = \sum_{t=0}^{T-2}(2t-T+2)\rho^t$ | $\sum_{t=1}^{T-2}t\rho^{t-1}$ | $\sum_{t=1}^{T-3}(T-t-2)\rho^{t-1}$ | $(T-2)^2$
−∞ | + | − | + | +
∞ | + | + | + | +

Table 13. Sturm sequence of $Th'(\rho)(\rho-1)^3$ when T is odd and greater than 4.

ρ | $Th'(\rho)(\rho-1) = \sum_{t=0}^{T-2}(2t-T+2)\rho^t$ | $\sum_{t=1}^{T-2}t\rho^{t-1}$ | $\sum_{t=1}^{T-3}(T-t-2)\rho^{t-1}$ | $(T-2)^2$
−∞ | − | + | − | +
∞ | + | + | + | +
We can write $h(\rho) = \frac{\rho^T - T\rho + T - 1}{T(\rho-1)^2}$, and the Sturm sequence of $Th(\rho)(\rho-1)^2$ is

$$\left\{Th(\rho)(\rho-1) = \sum_{t=1}^{T-1}\rho^t - (T-1),\;\; \frac{1}{\rho-1}\frac{d\left[h(\rho)(\rho-1)^2\right]}{d\rho} = \sum_{t=0}^{T-2}\rho^t,\;\; T-1\right\}. \tag{48}$$

From Table 14, we can see that $h(\rho)$ does not have real roots, and hence is greater than 0, when T is an even number. Table 15 shows $h(\rho)$ has only one real root when T is odd. Since $h(-2) = \frac{(-2)^T + 3T - 1}{9T} \leq 0$, $h(-1) = \frac{T-1}{2T} > 0$ and $h(\rho)$ is strictly increasing when T is an odd number no less than 3, the real root of $h(\rho) = 0$ must lie between $-2$ and $-1$.
Table 14. Sturm sequence of $T h(\rho)(\rho-1)^2$ when $T$ is even and greater than or equal to 3.

$\rho$ | $h(\rho)$ | $\sum_{t=1}^{T-1}\rho^t - T + 1$ | $\sum_{t=0}^{T-2}\rho^t$ | $T-1$
$-\infty$ | + | − | + | +
$+\infty$ | + | + | + | +
Table 15. Sturm sequence of $T h(\rho)(\rho-1)^2$ when $T$ is odd and greater than or equal to 3.

$\rho$ | $h(\rho)$ | $\sum_{t=1}^{T-1}\rho^t - T + 1$ | $\sum_{t=0}^{T-2}\rho^t$ | $T-1$
$-\infty$ | − | + | − | +
$+\infty$ | + | + | + | +
 ☐
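As a cross-check on Lemma 9, the short numerical sketch below (taking the coefficient form $h(\rho) = \frac{1}{T}\sum_{t=1}^{T-1}(T-t)\rho^{t-1}$, the polynomial whose closed form is used in the proof) verifies the root locations and monotonicity claims for $T = 3,\ldots,11$.

```python
# Numerical spot-checks of Lemma 9, taking h(rho) = (1/T) * sum_{t=1}^{T-1} (T-t) rho^{t-1}.
import numpy as np

def h_coeffs(T):
    # coefficients of h in increasing powers of rho: (T-1)/T, (T-2)/T, ..., 1/T
    return np.array([(T - t) / T for t in range(1, T)])

for T in range(3, 12):
    c = h_coeffs(T)
    roots = np.roots(c[::-1])                   # np.roots expects decreasing powers
    real = roots[np.abs(roots.imag) < 1e-8].real
    if T % 2 == 1:                              # odd T: one real root in [-2, -1)
        assert len(real) == 1 and -2 - 1e-9 <= real[0] < -1, (T, real)
    else:                                       # even T: no real roots, h > 0
        assert len(real) == 0, (T, real)
    grid = np.linspace(-5, 5, 2001)
    hp = np.polyval(np.polyder(c[::-1]), grid)  # h'(rho) on a grid
    if T % 2 == 1:
        assert (hp > 0).all()                   # strictly increasing
    else:                                       # decreasing left of -1, increasing right
        assert (hp[grid < -1 - 1e-6] < 0).all() and (hp[grid > -1 + 1e-6] > 0).all()
print("Lemma 9 spot-checks passed for T = 3, ..., 11")
```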
Additionally, we need the following lemma to show that the true value of ρ is a local posterior mode asymptotically.
Lemma 10. Under Assumptions 1 and 2, we have $\operatorname{plim}_{N\to\infty}\frac{a}{N} = \underline{a} > \sigma^2\,\mathrm{trace}(C'HC)$, where $\rho$ is evaluated at its true value $\underline{\rho}$ in $C$. Also, we have $\mathrm{trace}(C'HC) \ge \frac{2h^2(\underline{\rho})}{T-1}$, where the equal sign holds only for $T = 2$, and $\mathrm{trace}(C'HC) \ge \frac{2h^2(\underline{\rho})}{T-1} + h'(\underline{\rho})$, where the equal sign holds for $T = 2$ or $\underline{\rho} = 1$. In other words, the following are true:
$$\underline{a} > \frac{2\sigma^2 h^2(\underline{\rho})}{T-1},$$
$$\underline{a} > \frac{2\sigma^2 h^2(\underline{\rho})}{T-1} + h'(\underline{\rho})\,\sigma^2.$$
Proof. Substituting Equation (3) into the right-hand side of Equation (15) gives
$$a = \sum_{i=1}^{N}(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta + Cu_i)'H(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta + Cu_i) - \frac{1}{\eta+1}\sum_{i=1}^{N}(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta + Cu_i)'HX_i\left[\sum_{i=1}^{N}X_i'HX_i\right]^{-1}\sum_{i=1}^{N}X_i'H(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta + Cu_i).$$
Since we assume $E(u_i \mid X_i, f_i, y_{i,0}) = 0$ and set $\eta = O(N^{\alpha})$ with $\alpha < 0$, $\frac{a}{N}$ is asymptotically equivalent to $\frac{\tilde{a}}{N}$, where $\tilde{a}$ is defined as
$$\begin{aligned}
\tilde{a} ={}& \sum_{i=1}^{N}u_i'C'HCu_i + \sum_{i=1}^{N}(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta)'H(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta) \\
&- \sum_{i=1}^{N}(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta)'HX_i\left[\sum_{i=1}^{N}X_i'HX_i\right]^{-1}\sum_{i=1}^{N}X_i'H(f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta) \\
={}& \sum_{i=1}^{N}u_i'C'HCu_i + \sum_{i=1}^{N}(y_{i-} - Cu_i)'H(y_{i-} - Cu_i) - \sum_{i=1}^{N}(y_{i-} - Cu_i)'HX_i\left[\sum_{i=1}^{N}X_i'HX_i\right]^{-1}\sum_{i=1}^{N}X_i'H(y_{i-} - Cu_i).
\end{aligned}$$
Note that $\frac{1}{N}\left(\tilde{a} - \sum_{i=1}^{N}u_i'C'HCu_i\right)$ is non-negative, since it is equal to $\frac{1}{N}$ multiplied by the SSR obtained by regressing $f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta$ on the fixed effects and $X_i$, i.e.,
$$f_i\zeta_1 + y_{i,0}\zeta_2 + CX_i\beta = q_i\iota + X_i\vartheta + \varepsilon_i,$$
where $q_i$ denotes the fixed effect scalar and $\varepsilon_i = (\varepsilon_{i1}, \varepsilon_{i2}, \ldots, \varepsilon_{iT})'$ is the error term in the regression. Assumption 2 (e) rules out $\operatorname{plim}_{N\to\infty}\sum_{i=1}^{N}\frac{\varepsilon_i'\varepsilon_i}{N} = 0$. Note that $(1-\rho)\zeta_1 + \zeta_2 = \iota$. When $T \ge 3$, pre-multiplying both sides of Equation (53) by $M_\zeta$ yields
$$M_\zeta CX_i\beta = M_\zeta X_i\vartheta + M_\zeta\varepsilon_i.$$
Therefore one can use Equation (5) to check Assumption 2 (e), which ensures that $\frac{SSR}{N}$ from Equation (53) is strictly positive asymptotically and
$$\underline{a} > \operatorname*{plim}_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}u_i'C'HCu_i = \sigma^2\,\mathrm{trace}(C'HC).$$
Hence Equation (49) holds strictly if $\mathrm{trace}(C'HC) \ge \frac{2h^2(\underline{\rho})}{T-1}$. Similarly, to prove Equation (50), one needs to show $\mathrm{trace}(C'HC) \ge \frac{2h^2(\underline{\rho})}{T-1} + h'(\underline{\rho})$. The proof of these two inequalities can be found in the proof of Lemma 3 in Dhaene and Jochmans [8], by noting that $V_0^{LB} = \frac{\mathrm{trace}(C'HC)}{T-1}$, $b_0 = \frac{h(\underline{\rho})}{T-1}$ and $c_0 = \frac{h'(\underline{\rho})}{T-1}$, where $V_0^{LB}$, $b_0$ and $c_0$ are the notations used in their paper. ☐
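The trace inequalities in Lemma 10 can likewise be spot-checked numerically. The sketch below assumes $C$ is the $T\times T$ matrix with $(t,s)$ entry $\rho^{t-s-1}$ for $t > s$ and zeros elsewhere (the reduced-form matrix implicit in the proof) and $H = I_T - \iota\iota'/T$; the equality cases at $T = 2$ and $\underline{\rho} = 1$ are asserted as well.

```python
# Numerical spot-checks of Lemma 10's trace inequalities.
import numpy as np

def h(rho, T):
    return sum((T - t) * rho**(t - 1) for t in range(1, T)) / T

def hprime(rho, T):
    return sum((T - t) * (t - 1) * rho**(t - 2) for t in range(2, T)) / T

def trace_CHC(rho, T):
    C = np.zeros((T, T))
    for i in range(T):
        for j in range(i):
            C[i, j] = rho**(i - j - 1)
    H = np.eye(T) - np.ones((T, T)) / T
    return np.trace(C.T @ H @ C)

rng = np.random.default_rng(0)
for T in range(2, 9):
    for rho in np.concatenate(([1.0], rng.uniform(-2, 2, 20))):
        tr = trace_CHC(rho, T)
        assert tr >= 2 * h(rho, T)**2 / (T - 1) - 1e-10
        bound = 2 * h(rho, T)**2 / (T - 1) + hprime(rho, T)
        assert tr >= bound - 1e-10
        if T == 2 or rho == 1.0:
            assert abs(tr - bound) < 1e-8      # equality at T = 2 or rho = 1
print("Lemma 10 spot-checks passed")
```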
From Equation (14), we can see that $\psi(\rho) = \phi(\rho) - l(\rho)$, where $l(\rho) = \frac{T-1}{2}\ln\!\left(\frac{a}{N}\rho^2 - \frac{2b}{N}\rho + \frac{c}{N}\right)$. The posterior mode is found by checking the intersection points of $h(\rho)$, i.e., $\frac{d\phi(\rho)}{d\rho}$, and $l'(\rho)$, which is
$$l'(\rho) = \frac{(T-1)\left(\frac{a}{N}\rho - \frac{b}{N}\right)}{\frac{a}{N}\rho^2 - \frac{2b}{N}\rho + \frac{c}{N}}.$$
Assuming that the true exogenous regressors are included, so that $h_2(\beta,\underline{\rho}) = h_3(\beta) = 0$, and using Equations (27) to (30), we can find that the probability limit of $l'(\rho)$, denoted as $\underline{l}'(\rho)$, is
$$\underline{l}'(\rho) = \operatorname*{plim}_{N\to\infty} l'(\rho) = \frac{(T-1)(\rho - \underline{\rho} - \gamma)}{(\rho - \underline{\rho} - \gamma)^2 + \frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]}.$$
From Equation (49) in Lemma 10, we know that $\frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right] > \frac{\sigma^4 h^2(\underline{\rho})}{\underline{a}^2} \ge 0$. Hence the denominator in Equation (57) is positive and $\underline{l}'(\rho) \ge (<)\ 0$ for $\rho \ge (<)\ \underline{\rho} + \gamma$. Moreover,
$$\underline{l}''(\rho) = \operatorname*{plim}_{N\to\infty} l''(\rho) = -\,(T-1)\,\frac{(\rho - \underline{\rho} - \gamma)^2 - \frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]}{\left\{(\rho - \underline{\rho} - \gamma)^2 + \frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]\right\}^2}.$$
The denominator above is positive. The polynomial in the numerator has two roots: $\underline{\rho} + \gamma \pm \sqrt{\frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]}$. Using Equation (49) in Lemma 10 again, we have $\underline{\rho} + \gamma + \sqrt{\frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]} > \underline{\rho} - \frac{\sigma^2 h(\underline{\rho})}{\underline{a}} + \frac{\sigma^2}{\underline{a}}\lvert h(\underline{\rho})\rvert$ and $\underline{\rho} + \gamma - \sqrt{\frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]} < \underline{\rho} - \frac{\sigma^2 h(\underline{\rho})}{\underline{a}} - \frac{\sigma^2}{\underline{a}}\lvert h(\underline{\rho})\rvert$. Therefore the true value of $\rho$, i.e., $\underline{\rho}$, lies in between the two roots of $\underline{l}''(\rho) = 0$, where $\underline{l}'(\rho)$ is increasing.14 When $\rho$ is larger (less) than the bigger (smaller) root, $\underline{l}'(\rho)$ will be decreasing. Since $\underline{l}'(\rho) \ge (<)\ 0$ for $\rho \ge (<)\ \underline{\rho} + \gamma$, we can see that $\lim_{\rho\to\pm\infty}\underline{l}'(\rho) = 0$. Since $\underline{l}'(\underline{\rho}) = h(\underline{\rho})$, $h(\rho)$ therefore intersects $\underline{l}'(\rho)$ at $\underline{\rho}$. Define $\underline{\psi}(\rho) = \phi(\rho) - \operatorname{plim}_{N\to\infty} l(\rho)$. Evaluating its second order derivative at the true value of $\rho$ yields $\underline{\psi}''(\underline{\rho}) = h'(\underline{\rho}) - \frac{(T-1)\underline{a} - 2\sigma^2 h^2(\underline{\rho})}{(T-1)\sigma^2}$. Using Equation (50) in Lemma 10, we can find $\underline{\psi}''(\underline{\rho}) < 0$. In other words, $\underline{\rho}$ is a local maximum for $\underline{\psi}(\rho)$. When $T$ is even, because $h(\rho) > 0$ is increasing after $-1$ and $\underline{l}'(\rho)$ is decreasing beyond the bigger root of $\underline{l}''(\rho) = 0$, $h(\rho)$ should intersect $\underline{l}'(\rho)$ again, at the point $\rho_U$ in Figure 1, which is larger than both $\underline{\rho}$ and the bigger root of $\underline{l}''(\rho) = 0$. We can see that $\underline{\psi}(\rho)$ has a bell shape over $(-\infty, \rho_U]$ when $T$ is even. However, for $\rho > \rho_U$, $\underline{\psi}(\rho)$ is increasing and hence $\lim_{\rho\to\infty}\underline{\psi}(\rho) = \infty$. To ensure that the marginal posterior of $\rho$ is proper, we have to restrict the bounds of $\rho$ in our estimation. Similarly, when $T$ is odd, since $h(\rho)$ is strictly increasing, $h(\rho)$ should intersect $\underline{l}'(\rho)$ on the left hand side of $\underline{\rho}$. There should be three intersection points, as shown in Figure 2. Since $\underline{\psi}(\rho)$ is decreasing for $\rho < \rho_L$ due to $\underline{\psi}'(\rho) < 0$, we have $\lim_{\rho\to-\infty}\underline{\psi}(\rho) = \infty$ when $T$ is odd. Choosing $\rho_L$ and $\rho_U$ in the way described by Proposition 4 ensures that $\underline{\psi}(\rho)$ evaluated at the end points is smaller than $\underline{\psi}(\underline{\rho})$, and hence that the marginal posterior density of $\rho$ is proper.
Figure 1. Intersection points of $h(\rho)$ and $\underline{l}'(\rho)$ when $T = 6$, $\underline{\rho} = 1$, $\sigma^2 = 1$, $\sigma_f^2 = 1$, $\sigma_{y_0}^2 = 0.1$, $E(f_i) = E(y_{i,0}) = E(f_i y_{i,0}) = 0$ and there are no exogenous regressors.
Figure 2. Intersection points of $h(\rho)$ and $\underline{l}'(\rho)$ when $T = 5$, $\underline{\rho} = 1$, $\sigma^2 = 1$, $\sigma_f^2 = 1$, $\sigma_{y_0}^2 = 0.1$, $E(f_i) = E(y_{i,0}) = E(f_i y_{i,0}) = 0$ and there are no exogenous regressors.
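The intersection structure in the figures can be reproduced numerically. The sketch below uses the settings of Figure 1; it approximates $\underline{a}$ by Monte Carlo (with no regressors, $a$ reduces to $\sum_i y_{i-}'Hy_{i-}$), takes $\gamma = -\sigma^2 h(\underline{\rho})/\underline{a}$ as in the text, and counts the crossings of $h(\rho)$ and $\underline{l}'(\rho)$; two crossings appear for even $T$, as in Figure 1.

```python
# A numerical sketch of Figure 1 (even T). Settings follow the caption:
# T = 6, rho = 1, sigma^2 = sigma_f^2 = 1, sigma_y0^2 = 0.1, zero means,
# no exogenous regressors; lbar'(rho) is taken from Equation (57).
import numpy as np

def h(rho, T):
    return sum((T - t) * rho**(t - 1) for t in range(1, T)) / T

T, rho0, s2 = 6, 1.0, 1.0
rng = np.random.default_rng(0)
N = 200_000
f = rng.normal(0.0, 1.0, N)                   # fixed effects
y = np.empty((N, T + 1))
y[:, 0] = rng.normal(0.0, np.sqrt(0.1), N)    # initial observations
for t in range(1, T + 1):
    y[:, t] = rho0 * y[:, t - 1] + f + rng.normal(0.0, 1.0, N)
ylag = y[:, :-1]                              # y_{i,-}
H = np.eye(T) - np.ones((T, T)) / T
abar = np.einsum('it,ts,is->i', ylag, H, ylag).mean()   # Monte Carlo a-bar

gamma = -s2 * h(rho0, T) / abar
s = s2 * (abar * (T - 1) - s2 * h(rho0, T)**2) / abar**2
grid = np.linspace(-3.0, 8.0, 4000)
lbar_prime = (T - 1) * (grid - rho0 - gamma) / ((grid - rho0 - gamma)**2 + s)
diff = np.array([h(r, T) for r in grid]) - lbar_prime
crossings = grid[:-1][np.diff(np.sign(diff)) != 0]
print(crossings)   # two crossings: one near rho = 1, one at rho_U > rho
```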

E. Proof of Proposition 6

To prove Propositions 6 and 7, essentially we need to simplify the integral(s) appearing in the Bayes factor. One way to do this is Laplace's method, the details of which can be found in [20,21]. Under the assumption that there exists only one solution $\rho^*$ in $(\rho_L, \rho_U)$ for $\psi'(\rho) = 0$, with $\psi''(\rho^*) < 0$, the integral appearing in the Bayes factor can be written as
$$\int_{\rho_L}^{\rho_U}\exp\{N\psi(\rho)\}\,d\rho = \sqrt{\frac{2\pi}{-N\psi''(\rho^*)}}\,\exp\{N\psi(\rho^*)\}\left[1 + O\!\left(\frac{1}{N}\right)\right].$$
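As a quick illustration of this approximation, the toy example below compares the Laplace formula with exact quadrature for a made-up $\psi(\rho)$ (not the paper's $\psi$) with a single interior mode; the relative error shrinks at the $1/N$ rate.

```python
# Toy check of the Laplace approximation above on psi(r) = -(r-0.3)^2 + 0.1*r^3.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

psi = lambda r: -(r - 0.3)**2 + 0.1 * r**3
d2psi = lambda r: -2.0 + 0.6 * r                    # psi''(r), negative at the mode

rstar = minimize_scalar(lambda r: -psi(r), bounds=(-1, 1), method='bounded').x
for N in (10, 100, 1000):
    exact, _ = quad(lambda r: np.exp(N * psi(r)), -1, 1)
    laplace = np.sqrt(2 * np.pi / (-N * d2psi(rstar))) * np.exp(N * psi(rstar))
    print(N, abs(exact / laplace - 1))              # relative error shrinks like 1/N
```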
Building on Equation (18), the first and second order derivatives of $\underline{\psi}(\rho)$ are
$$\underline{\psi}'(\rho) = h(\rho) - \frac{(T-1)(\rho - \underline{\rho} - \gamma)}{\rho^2 - 2\rho(\underline{\rho} + \gamma) + \underline{\rho}^2 + 2\underline{\rho}\gamma + \frac{(T-1)\sigma^2 + h_3(\beta)}{\underline{a}}},$$
$$\underline{\psi}''(\rho) = h'(\rho) - (T-1)\,\frac{\rho^2 - 2\rho(\underline{\rho} + \gamma) + \underline{\rho}^2 + 2\underline{\rho}\gamma + \frac{(T-1)\sigma^2 + h_3(\beta)}{\underline{a}} - 2(\rho - \underline{\rho} - \gamma)^2}{\left[\rho^2 - 2\rho(\underline{\rho} + \gamma) + \underline{\rho}^2 + 2\underline{\rho}\gamma + \frac{(T-1)\sigma^2 + h_3(\beta)}{\underline{a}}\right]^2}.$$
If the chosen set of regressors can lead to consistent estimation of $\rho$, i.e., either Equation (20) or (21) is satisfied, evaluating Equations (18), (60) and (61) at $\underline{\rho}$ will give
$$\underline{\psi}(\underline{\rho}) = \phi(\underline{\rho}) - \frac{T-1}{2}\ln\left[(T-1)\sigma^2 + h_3(\beta)\right], \qquad \underline{\psi}'(\underline{\rho}) = 0, \qquad \underline{\psi}''(\underline{\rho}) = h'(\underline{\rho}) - \frac{\underline{a}(T-1)}{(T-1)\sigma^2 + h_3(\beta)} + \frac{2h^2(\underline{\rho})}{T-1}.$$
The Bayes factor in Equation (25) is
$$\frac{p(Y\mid Y_0, M_1)}{p(Y\mid Y_0, M_0)} = \left(\frac{\eta}{\eta+1}\right)^{\frac{k_1-k_0}{2}}\frac{\sqrt{\dfrac{2\pi}{-N\psi''(\rho^*_{|M_1}\mid M_1)}}\,\exp\{N\psi(\rho^*_{|M_1}\mid M_1)\}}{(\rho_{U1} - \rho_{L1})\left(\dfrac{c_{|M_0}}{N}\right)^{-\frac{N(T-1)}{2}}}\left[1 + O\!\left(\frac{1}{N}\right)\right].$$
Asymptotically, replacing $\psi''(\rho^*_{|M_1}\mid M_1)$, $\psi(\rho^*_{|M_1}\mid M_1)$ and $\frac{c_{|M_0}}{N}$ by their probability limits, and $\frac{\eta}{\eta+1}$ by $O(N^{\alpha})$ with $\alpha < 0$ (our prior choice for $\eta$), should not affect the analysis of the Bayes factor. Define
$$\xi_{10} = \frac{O\!\left(N^{\frac{\alpha(k_1-k_0)}{2}}\right)}{\rho_{U1} - \rho_{L1}}\sqrt{\frac{2\pi}{-N\underline{\psi}''(\rho^*_{|M_1}\mid M_1)}}\,\exp\left\{N\underline{\psi}(\rho^*_{|M_1}\mid M_1) + \frac{N(T-1)}{2}\ln\underline{c}_{|M_0}\right\},$$
which should have the same asymptotic behaviour as Equation (62). If $X_{i1}$ is the true set of regressors used to generate $Y$ (so $h_2(\beta,\underline{\rho}\mid M_1) = h_3(\beta\mid M_1) = 0$ and $\rho^*_{|M_1} = \underline{\rho}$), $\xi_{10}$ can be written as
$$\xi_{10} = \frac{O\!\left(N^{\frac{\alpha(k_1-k_0)}{2}}\right)}{\rho_U - \rho_L}\sqrt{\frac{2\pi}{-N\left[h'(\underline{\rho}) - \frac{\underline{a}_{|M_1}}{\sigma^2} + \frac{2h^2(\underline{\rho})}{T-1}\right]}}\,\exp\left\{N\phi(\underline{\rho}) + \frac{N(T-1)}{2}\ln\frac{\underline{c}_{|M_0}}{(T-1)\sigma^2}\right\}.$$
So we can guarantee that $\frac{p(Y\mid Y_0,M_1)}{p(Y\mid Y_0,M_0)}$ tends to infinity, given $\underline{\rho} \ne 0$, as long as Equation (31) holds.
Now let us consider the case when the true model is $M_0$ in Equation (25), i.e., $\underline{\rho}$ is 0 and $X_{i0}$ is the set of true regressors. $\xi_{10}$ takes the following form,
$$\begin{aligned}\xi_{10} &= \frac{\left[(T-1)\sigma^2\right]^{\frac{N(T-1)}{2}}}{\rho_{U1} - \rho_{L1}}\,O\!\left(N^{\frac{\alpha(k_1-k_0)}{2}}\right)\sqrt{\frac{2\pi}{-N\underline{\psi}''(\rho^*_{|M_1}\mid M_1)}}\,\exp\{N\underline{\psi}(\rho^*_{|M_1}\mid M_1)\} \\ &= \frac{O\!\left(N^{\frac{\alpha(k_1-k_0)}{2}}\right)}{\rho_{U1} - \rho_{L1}}\sqrt{\frac{2\pi}{-N\underline{\psi}''(\rho^*_{|M_1}\mid M_1)}}\,\exp\left\{N\phi(\rho^*_{|M_1}) + \frac{N(T-1)}{2}\ln\frac{(T-1)\sigma^2}{d(\rho^*_{|M_1}\mid M_1)}\right\}.\end{aligned}$$
If Equation (32) holds, then the Bayes factor in Equation (25) will tend to 0 for a large sample size. If $M_1$ is misspecified but $\rho$ can still be consistently estimated, i.e., $\rho^*_{|M_1} = 0$, Equation (65) can be simplified as
$$\xi_{10} = \frac{O\!\left(N^{\frac{\alpha(k_1-k_0)}{2}}\right)}{\rho_{U1} - \rho_{L1}}\sqrt{\frac{2\pi}{-N\underline{\psi}''(0\mid M_1)}}\,\exp\left\{\frac{N(T-1)}{2}\ln\frac{(T-1)\sigma^2}{(T-1)\sigma^2 + h_3(\beta\mid M_1)}\right\}.$$
If Equation (21) holds, some true regressors must be excluded from $M_1$ and hence Equation (32) is true with $h_3(\beta\mid M_1) > 0$. If Equation (20) holds, we have $h_3(\beta\mid M_1) = 0$ and $k_1 > k_0$. The Bayes factor will therefore tend to 0 when $N$ tends to infinity.
Finally, we show when Equation (31) will be violated. If $M_0$ only includes the true exogenous regressors, we should have $h_2(\beta,\underline{\rho}\mid M_0) = h_3(\beta\mid M_0) = 0$ and $\underline{a}_{|M_0} > \sigma^2\,\mathrm{trace}(C'HC)$ as in Equation (55). The following should be true:
$$\phi(\underline{\rho}) + \frac{T-1}{2}\ln\left[\frac{\underline{a}_{|M_0}\underline{\rho}^2 - 2\underline{\rho}\,\sigma^2 h(\underline{\rho})}{(T-1)\sigma^2} + 1\right] > \upsilon(\underline{\rho}) = \phi(\underline{\rho}) + \frac{T-1}{2}\ln\left[1 + \frac{\underline{\rho}^2\sum_{j=0}^{T-2}(T-j-1)\underline{\rho}^{2j}}{T-1} - \frac{\underline{\rho}^2\sum_{j=0}^{T-2}\left(\sum_{i=0}^{j}\underline{\rho}^{i}\right)^2}{T(T-1)} - \frac{2\underline{\rho}\,h(\underline{\rho})}{T-1}\right].$$
When $T = 2$, the right hand side of Equation (67), i.e., $\upsilon(\underline{\rho})$, becomes $\frac{\underline{\rho}}{2} + \frac{1}{2}\ln\left(1 - \underline{\rho} + \frac{\underline{\rho}^2}{2}\right)$, which is an increasing function with $\upsilon(0) = 0$. Hence $\upsilon(\underline{\rho})$ is less than 0 when $\underline{\rho} < 0$. The left hand side of Equation (67) can be negative for $\underline{\rho} < 0$ if $\sigma^2$ is much larger than $E(f_i^2)$ and $E(y_{i,0}^2)$.15 When $T$ is an odd number greater than or equal to 3, $\upsilon(\rho)$ is positive for all $\rho \ne 0$. When $T$ is even and greater than 2, $\upsilon(\rho)$ is positive for $\rho \in (-1,\infty)$, $\rho \ne 0$, and has a root less than $-1$; if $\underline{\rho}$ is less than that root, $\upsilon(\underline{\rho})$ will be negative. By direct calculation, we find that as $T$ increases the root of $\upsilon(\rho)$ gets closer to $-1$ from the left. Hence, to sum up, Equation (31) will hold for $\underline{\rho} \in (-1,\infty)$ when $T$ is any integer greater than or equal to 3 and $h_2(\beta,\underline{\rho}\mid M_0) = h_3(\beta\mid M_0) = 0$.
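The behaviour of the root of $\upsilon(\rho)$ can be checked by direct computation. The sketch below evaluates the reconstructed $\upsilon(\rho)$ above, assuming $\phi(\rho) = \frac{1}{T}\sum_{t=1}^{T-1}\frac{T-t}{t}\rho^t$ (consistent with the $T = 2$ reduction in footnote 15), and locates its negative root for even $T$; the roots lie below $-1$ and drift toward $-1$ as $T$ grows.

```python
# A sketch locating the negative root of upsilon(rho) for even T, using the
# reconstructed expression from Equation (67); phi is an assumption stated above.
import numpy as np
from scipy.optimize import brentq

def h(rho, T):
    return sum((T - t) * rho**(t - 1) for t in range(1, T)) / T

def phi(rho, T):
    return sum(((T - t) / t) * rho**t for t in range(1, T)) / T

def upsilon(rho, T):
    inside = (1.0
              + rho**2 * sum((T - j - 1) * rho**(2 * j) for j in range(T - 1)) / (T - 1)
              - rho**2 * sum(sum(rho**i for i in range(j + 1))**2
                             for j in range(T - 1)) / (T * (T - 1))
              - 2 * rho * h(rho, T) / (T - 1))
    return phi(rho, T) + 0.5 * (T - 1) * np.log(inside)

for T in (4, 6, 8, 10):
    print(T, brentq(lambda r: upsilon(r, T), -50.0, -1.001))  # roots below -1
```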

F. Proof of Proposition 7

By Laplace's method, we can write Equation (26) as
$$\frac{p(Y\mid Y_0, X_1, M_1)}{p(Y\mid Y_0, X_2, M_2)} = \frac{\rho_{U2} - \rho_{L2}}{\rho_{U1} - \rho_{L1}}\left(\frac{\eta}{\eta+1}\right)^{\frac{k_1-k_2}{2}}\sqrt{\frac{\psi''(\rho^*_{|M_2}\mid M_2)}{\psi''(\rho^*_{|M_1}\mid M_1)}}\,\exp\left\{N\left[\psi(\rho^*_{|M_1}\mid M_1) - \psi(\rho^*_{|M_2}\mid M_2)\right]\right\}\left[1 + O\!\left(\frac{1}{N}\right)\right].$$
Suppose the true model is $M_2$. Similarly to the previous section, by dropping $\frac{\rho_{U2}-\rho_{L2}}{\rho_{U1}-\rho_{L1}}\sqrt{\underline{\psi}''(\underline{\rho}\mid M_2)\big/\underline{\psi}''(\rho^*_{|M_1}\mid M_1)} = O(1)$, we can obtain the corresponding $\xi_{12}$,
$$\xi_{12} = O\!\left(N^{\frac{\alpha(k_1-k_2)}{2}}\right)\exp\left\{N\left[\phi(\rho^*_{|M_1}) - \phi(\underline{\rho})\right] + \frac{N(T-1)}{2}\ln\frac{d(\underline{\rho}\mid M_2)}{d(\rho^*_{|M_1}\mid M_1)}\right\}.$$
Note that $d(\underline{\rho}\mid M_2) = (T-1)\sigma^2$. So if Equation (33) is satisfied, the Bayes factor is consistent in model selection. If $M_1$, despite being misspecified, can still lead to consistent estimation of $\rho$, $\xi_{12}$ becomes
$$\xi_{12} = O\!\left(N^{\frac{\alpha(k_1-k_2)}{2}}\right)\left[\frac{(T-1)\sigma^2}{(T-1)\sigma^2 + h_3(\beta\mid M_1)}\right]^{\frac{N(T-1)}{2}}.$$
If Equation (21) holds, we will have $h_3(\beta\mid M_1) > 0$ and hence Equation (33); if Equation (20) holds, $M_1$ will nest $M_2$ ($k_1 > k_2$) with $h_3(\beta\mid M_1) = 0$. In both cases, $\frac{p(Y\mid Y_0,X_1,M_1)}{p(Y\mid Y_0,X_2,M_2)}$ will tend to 0.

G. Proof of Proposition 8

The likelihood function takes the following form,
$$p(Y\mid\theta, Y_0) = (2\pi)^{-\frac{TN}{2}}\left(\sigma^2\right)^{-\frac{NT}{2}}\prod_{i=1}^{N}\exp\left\{-\frac{1}{2\sigma^2}\left[y_i - y_{i-}\rho - \iota f_i - X_i\beta\right]'\left[y_i - y_{i-}\rho - \iota f_i - X_i\beta\right]\right\}.$$
By taking the log of the likelihood function and solving the first order conditions, we can obtain the maximum likelihood estimators as follows,
$$\widehat{\sigma^2} = \frac{1}{NT}\sum_{i=1}^{N}\left[y_i - y_{i-}\hat{\rho} - \iota\hat{f}_i - X_i\hat{\beta}\right]'\left[y_i - y_{i-}\hat{\rho} - \iota\hat{f}_i - X_i\hat{\beta}\right],$$
$$\hat{f}_i = \frac{\iota'\left(y_i - y_{i-}\hat{\rho} - X_i\hat{\beta}\right)}{T},$$
$$\hat{\beta} = \left[\sum_{i=1}^{N}X_i'HX_i\right]^{-1}\sum_{i=1}^{N}X_i'H\left(y_i - y_{i-}\hat{\rho}\right),$$
$$\hat{\rho} = \frac{b}{a}.$$
Substituting the above into the log of Equation (71) multiplied by $-2$, and adding the appropriate penalty (the number of parameters multiplied by the natural log of the sample size), yields the BIC equations in (34) and (35). A smaller BIC value indicates evidence in favour of the model. Let us now look at the case of Equation (25). When $X_{i1}$ are the true regressors generating $Y_i$, the BIC difference between $M_0$ and $M_1$ is
$$BIC_{|M_0} - BIC_{|M_1} = NT\left[\ln\frac{c_{|M_0}}{N} - \ln\left(\frac{c_{|M_1}}{N} - \frac{(b_{|M_1}/N)^2}{a_{|M_1}/N}\right)\right] + (k_0 - k_1 - 1)\ln(NT).$$
Asymptotically speaking, replacing $\frac{a}{N}$, $\frac{b}{N}$ and $\frac{c}{N}$ by $\underline{a}$, $\underline{b}$ and $\underline{c}$ defined in Equations (27) to (30), respectively, should not affect the analysis. Define $\omega_{01}$ as
$$\begin{aligned}\omega_{01} &= NT\ln\frac{\underline{c}_{|M_0}}{\underline{c}_{|M_1} - \underline{b}_{|M_1}^2/\underline{a}_{|M_1}} + (k_0 - k_1 - 1)\ln(NT) \\ &= NT\ln\frac{(T-1)\sigma^2 + h_3(\beta\mid M_0) + \underline{a}_{|M_0}\underline{\rho}^2 + 2\underline{\rho}\,h_2(\beta,\underline{\rho}\mid M_0) - 2\underline{\rho}\,\sigma^2 h(\underline{\rho})}{(T-1)\sigma^2 + h_3(\beta\mid M_1) - \left[h_2(\beta,0\mid M_1) - \sigma^2 h(\underline{\rho})\right]^2/\underline{a}_{|M_1}} + (k_0 - k_1 - 1)\ln(NT).\end{aligned}$$
If $M_1$ is the true model, we should have $\omega_{01} > 0$ and $h_3(\beta\mid M_1) = h_2(\beta,0\mid M_1) = 0$; inside the natural log of the first term, the numerator should be larger than the denominator. That is, we should have Equation (37) stated in Proposition 8, or
$$h_3(\beta\mid M_0) + \underline{a}_{|M_0}\underline{\rho}^2 + 2\underline{\rho}\,h_2(\beta,\underline{\rho}\mid M_0) - 2\underline{\rho}\,\sigma^2 h(\underline{\rho}) + \frac{\sigma^4 h^2(\underline{\rho})}{\underline{a}_{|M_1}} > 0.$$
If $X_{i0}$ is the same as $X_{i1}$, we will have $\underline{a}_{|M_0} = \underline{a}_{|M_1} = \underline{a}$, $k_1 = k_0$ and $h_2(\beta,\underline{\rho}\mid M_0) = h_3(\beta\mid M_0) = 0$. The left-hand side of Equation (78) will then become $\frac{\left[\underline{a}\underline{\rho} - \sigma^2 h(\underline{\rho})\right]^2}{\underline{a}}$. If $\underline{\rho} - \frac{\sigma^2 h(\underline{\rho})}{\underline{a}} = 0$, i.e., $\underline{\rho} + \gamma = \operatorname{plim}_{N\to\infty}\hat{\rho}_{MLE} = 0$, we will have $BIC_{|M_0} - BIC_{|M_1} < 0$ asymptotically, which means that we will prefer $M_0$ over $M_1$ even though $\underline{\rho} \ne 0$. In a situation like this, model selection is not consistent. The problem with BIC also arises when $M_0$ is the true model with $\underline{\rho} = 0$. Now $\omega_{01}$ is
$$\omega_{01} = NT\ln\frac{(T-1)\sigma^2}{\underline{c}_{|M_1} - \underline{b}_{|M_1}^2/\underline{a}_{|M_1}} + (k_0 - k_1 - 1)\ln(NT).$$
To have $\omega_{01} < 0$, we should have Equation (36) in Proposition 8, or
$$\frac{\left[h_2(\beta,0\mid M_1) - \frac{(T-1)\sigma^2}{T}\right]^2}{\underline{a}_{|M_1}} - h_3(\beta\mid M_1) < 0.$$
If $h_2(\beta,0\mid M_1) = h_3(\beta\mid M_1) = 0$, we will have $\omega_{01} > 0$, which implies inconsistency in model selection. For the case of Equation (26), the corresponding $\omega_{21}$ is
$$\omega_{21} = NT\ln\frac{\underline{c}_{|M_2} - \underline{b}_{|M_2}^2/\underline{a}_{|M_2}}{\underline{c}_{|M_1} - \underline{b}_{|M_1}^2/\underline{a}_{|M_1}} + (k_2 - k_1)\ln(NT).$$
BIC will be consistent if we have Equation (38) stated in Proposition 8 or
$$\underline{a}_{|M_1}\underline{a}_{|M_2}\,h_3(\beta\mid M_2) + \underline{a}_{|M_2}\,\sigma^4 h^2(\underline{\rho}) - \underline{a}_{|M_1}\left[h_2(\beta,\underline{\rho}\mid M_2) - \sigma^2 h(\underline{\rho})\right]^2 > 0.$$
If $X_{i2}$ nests $X_{i1}$, the left-hand side of Equation (82) can be simplified as $(\underline{a}_{|M_2} - \underline{a}_{|M_1})\,\sigma^4 h^2(\underline{\rho})$, with $\underline{a}_{|M_2} \le \underline{a}_{|M_1}$. When $\underline{a}_{|M_2} = \underline{a}_{|M_1}$, $\omega_{21}$ becomes $(k_2 - k_1)\ln(NT)$ with $k_2 > k_1$; BIC is therefore consistent. But if $\underline{a}_{|M_2} < \underline{a}_{|M_1}$, $\omega_{21}$ in Equation (81) will be dominated by its first term (negative) asymptotically, and hence BIC is inconsistent.
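To see the mechanism behind this inconsistency, the simulation sketch below (an illustrative DGP, not one from the paper's experiments) shows the MLE $\hat{\rho} = b/a$ converging to $\underline{\rho} + \gamma$ with $\gamma = -\sigma^2 h(\underline{\rho})/\underline{a}$ rather than to $\underline{\rho}$; whenever the two effects offset so that $\underline{\rho} + \gamma = 0$, BIC cannot distinguish the dynamic model from the static one.

```python
# A simulation sketch of the Nickell-type limit of the MLE rho_hat = b/a.
# Made-up settings: T = 5, rho = 0.5, sigma^2 = 1, f_i and y_{i,0} standard
# normal, no exogenous regressors (so a = sum_i y_{i,-}' H y_{i,-} and
# b = sum_i y_{i,-}' H y_i).
import numpy as np

def h(rho, T):
    return sum((T - t) * rho**(t - 1) for t in range(1, T)) / T

T, rho0, N = 5, 0.5, 100_000
rng = np.random.default_rng(1)
f = rng.normal(size=N)
y = np.empty((N, T + 1))
y[:, 0] = rng.normal(size=N)
for t in range(1, T + 1):
    y[:, t] = rho0 * y[:, t - 1] + f + rng.normal(size=N)

H = np.eye(T) - np.ones((T, T)) / T            # within transformation
ylag, ycur = y[:, :-1], y[:, 1:]
a = np.einsum('it,ts,is->', ylag, H, ylag)
b = np.einsum('it,ts,is->', ylag, H, ycur)
print(b / a, rho0 - h(rho0, T) / (a / N))      # rho_hat and rho + gamma nearly coincide
```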

References

  1. M. Nerlove. “Experimental Evidence on the Estimation of Dynamic Economic Relations from a Time Series of Cross-Sections.” Econ. Stud. Quart. 18 (1968): 42–74.
  2. S. Nickell. “Biases in Dynamic Models with Fixed Effects.” Econometrica 49 (1981): 1417–1426.
  3. T. Lancaster. “The incidental parameter problem since 1948.” J. Econom. 95 (2000): 391–413.
  4. C. Hsiao. Analysis of Panel Data, 2nd ed. Cambridge, UK: Cambridge University Press, 2003.
  5. C. Bester, and C. Hansen. “A Penalty Function Approach to Bias Reduction in Nonlinear Panel Models with Fixed Effects.” J. Bus. Econ. Stat. 27 (2009): 131–148.
  6. J. Hahn, and G. Kuersteiner. “Bias reduction for dynamic nonlinear panel models with fixed effects.” Econ. Theory 27 (2011): 1152–1191.
  7. M. Arellano, and S. Bonhomme. “Robust Priors in Nonlinear Panel Data Models.” Econometrica 77 (2009): 489–536.
  8. G. Dhaene, and K. Jochmans. Likelihood Inference in an Autoregression with Fixed Effects. Discussion Paper. Paris, France: Sciences Po, 2013.
  9. D. Andrews, and B. Lu. “Consistent model and moment selection procedures for GMM estimation with application to dynamic panel data models.” J. Econom. 101 (2001): 123–164.
  10. Y. Lee, and P.C. Phillips. “Model Selection in the Presence of Incidental Parameters.” J. Econom., 2014.
  11. T. Lancaster. “Orthogonal Parameters and Panel Data.” Rev. Econ. Stud. 69 (2002): 647–666.
  12. D.J. Poirier. Intermediate Statistics and Econometrics: A Comparative Approach. Cambridge, MA, USA: MIT Press, 1995.
  13. H. White. Asymptotic Theory for Econometricians. Upper Saddle River, NJ, USA: Prentice Hall, 2001.
  14. A. Zellner. “On Assessing Prior Distributions and Bayesian Regression Analysis with G-prior Distribution.” In Bayesian Inference and Decision Techniques: Essays in Honour of Bruno de Finetti. Edited by P.K. Goel and A. Zellner. Amsterdam, The Netherlands: North-Holland, 1986, pp. 233–243.
  15. E. Ley, and M.F. Steel. “On the Effect of Prior Assumptions in Bayesian Model Averaging with Applications to Growth Regression.” J. Appl. Econom. 24 (2009): 651–674.
  16. C. Fernandez, E. Ley, and M.F. Steel. “Benchmark Priors for Bayesian Model Averaging.” J. Econom. 100 (2001): 381–427.
  17. D.R. Cox, and N. Reid. “Parameter Orthogonality and Approximate Conditional Inference.” J. R. Stat. Soc. Ser. B 49 (1987): 1–39.
  18. Y. Chikuse. Statistics on Special Manifolds. Berlin/Heidelberg, Germany: Springer, 2003.
  19. G. Koop, D.J. Poirier, and J.L. Tobias. Bayesian Econometric Methods. Cambridge, UK: Cambridge University Press, 2007.
  20. L. Tierney, and J. Kadane. “Accurate Approximations for Posterior Moments and Marginal Densities.” J. Am. Stat. Assoc. 81 (1986): 82–86.
  21. R.E. Kass, L. Tierney, and J.B. Kadane. “The Validity of Posterior Expansions Based on Laplace’s Method.” In Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George A. Barnard. Edited by S. Geisser, J.S. Hodges, S.J. Press and A. Zellner. Amsterdam, The Netherlands: North-Holland, 1990.
  • 1They treat the bias as a result of finite time periods (finite sample bias) and remove the first-order bias in the Taylor expansion.
  • 2One could withhold some sample while calculating the expression to see how its value changes with N.
  • 3They are SSR when η = 0.
  • 4When $T$ is even, $\lim_{\rho\to-\infty}\exp\{N\underline{\psi}(\rho)\} = 0$. For model comparison, we need a proper prior for $\rho$ and hence $\rho_L$ must be finite for finite $N$. In practice, we can choose $\rho_L$ to ensure that $[\rho_L, \rho_U]$ contains a unique posterior mode when the true set of regressors is included. In the subsequent simulations, we choose $\rho_L = -N$ when $T$ is even.
  • 5Note that $h_3(\beta)$ is the probability limit of $\frac{1}{N}$ times the SSR obtained by regressing $\underline{X}_i\beta$ on the fixed effects and $X_i$.
  • 6Due to the assumptions in the example, $\frac{(T-1)h_2(\beta,\underline{\rho})}{h_3(\beta)} = \frac{(T-1)\,\mathrm{trace}\left\{C'H\sum_{i=1}^{N}\left[\mathrm{Var}(\underline{X}_i\beta) + HE(\underline{X}_i\beta)E(\underline{X}_i\beta)'\right]\right\}}{\mathrm{trace}\left\{H\sum_{i=1}^{N}\left[\mathrm{Var}(\underline{X}_i\beta) + HE(\underline{X}_i\beta)E(\underline{X}_i\beta)'\right]\right\}}$. Since $\sum_{i=1}^{N}\mathrm{Var}(\underline{X}_i\beta)/N$ is proportional to $I_T$ and $HE(\underline{X}_i\beta)E(\underline{X}_i\beta)' = 0$, it follows that $\frac{(T-1)h_2(\beta,\underline{\rho})}{h_3(\beta)} = \mathrm{trace}(C'H) = h(\underline{\rho})$.
  • 7That is how often the model with the highest posterior model probability is not the true model.
  • 8In the subsequent discussion, we use ER10 to denote the proportion of errors made when the true model includes $y_{i-}$ while the chosen model does not, and ER01 for when the true model does not include $y_{i-}$ while the chosen model does. Note that either ER10 + ER11 = 1 or ER01 + ER00 = 1. The notations for BIC are defined similarly.
  • 9Since the correlation among different regressors is random in our simulation, $\underline{a}_{|M_0}$ and the root are also random.
  • 10The absolute value of the Nickell bias under $\underline{\rho} = 1$ is bigger than when $\underline{\rho} = -1$. None of the conditions for model selection consistency are found to be violated.
  • 11We have also obtained the results of finite sample biases of different point estimators (available upon request), which show that the top model point estimators are generally less biased than other criteria and hence the higher top model RMSE should be due to larger estimator variances.
  • 12When $\lambda = 1$ ($0.01$), by simulation, we find that the 2.5th, 50th and 97.5th percentiles of $|q_{hz}|$ are around 0.012 (0.12), 0.26 (0.94) and 0.73 (0.99), respectively.
  • 13Strictly speaking, the right hand side should be multiplied by an arbitrary constant not involving ρ.
  • 14Note that $\underline{a}$ and $\sigma^2$ are positive. When $h(\underline{\rho}) \ge 0$, we have $\underline{\rho} + \gamma + \sqrt{\frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]} > \underline{\rho}$ and $\underline{\rho} + \gamma - \sqrt{\frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]} < \underline{\rho} - \frac{2\sigma^2 h(\underline{\rho})}{\underline{a}} \le \underline{\rho}$; when $h(\underline{\rho}) < 0$, we have $\underline{\rho} + \gamma + \sqrt{\frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]} > \underline{\rho} - \frac{2\sigma^2 h(\underline{\rho})}{\underline{a}} > \underline{\rho}$ and $\underline{\rho} + \gamma - \sqrt{\frac{\sigma^2}{\underline{a}^2}\left[\underline{a}(T-1) - \sigma^2 h^2(\underline{\rho})\right]} < \underline{\rho}$.
  • 15When $T = 2$, $E(f_i y_{i,0}) = 0$ and the true model contains no exogenous regressors, $\phi(\underline{\rho}) + \frac{T-1}{2}\ln\left[\frac{\underline{a}_{|M_0}\underline{\rho}^2 - 2\underline{\rho}\sigma^2 h(\underline{\rho})}{(T-1)\sigma^2} + 1\right]$ equals $\frac{\underline{\rho}}{2} + \frac{1}{2}\ln\left\{1 - \underline{\rho} + \left[\frac{E(f_i^2)}{2} + E(y_{i,0}^2)\left(\frac{\underline{\rho}^2}{2} - \underline{\rho} + \frac{1}{2}\right) + \frac{\sigma^2}{2}\right]\frac{\underline{\rho}^2}{\sigma^2}\right\}$, which approaches $\upsilon(\underline{\rho})$ as $\frac{E(f_i^2)}{\sigma^2}$ and $\frac{E(y_{i,0}^2)}{\sigma^2}$ get smaller.
