Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of the common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. The model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging (BMA) to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities through simulations.


Introduction
For a panel linear regression model with lags of the dependent variable as regressors (a dynamic panel model) and agent-specific fixed effects, the maximum likelihood estimators (MLE) of the common parameters, whose number does not change with sample size, are inconsistent when the number of time periods is small and fixed; see Nerlove [1] and Nickell [2]. This problem, known as the "incidental parameter problem", has been reviewed by Lancaster [3]. A plethora of studies have been undertaken to obtain consistent estimators for the common parameters in dynamic panel models. Among them there are two main approaches: one is to use the generalized method of moments (GMM), see the overview in Hsiao [4]; the other is based on modified profile or integrated likelihood, see e.g., the recent works by Bester and Hansen [5], Hahn and Kuersteiner [6], Arellano and Bonhomme [7], and Dhaene and Jochmans [8]. Researchers using these two approaches usually presume the moment conditions or the parametric models are correctly specified, and the issue of model selection has attracted relatively less attention. Correct model specification is very important: without it, consistent parameter estimation cannot be achieved. Andrews and Lu [9] proposed model and moment selection criteria (MMSC) in the GMM context based on J-test statistics to address the issue. However, for dynamic panel models, GMM suffers from the weak instrument problem when the coefficient of the lagged dependent variable is close to 1. Hence MMSC is unlikely to work in such a situation.
Lee and Phillips [10] used the bias-reducing prior from Arellano and Bonhomme [7] to develop an integrated likelihood information criterion to study lag order selection in dynamic panel models. The prior in Arellano and Bonhomme [7] is designed to obtain first-order (in the time dimension) unbiased estimators.¹ Lancaster [11] suggested a way to reparameterize the fixed effects to achieve consistent estimation (not just first-order) in the panel. While Lee and Phillips [10] only considered stationary data in their application, we show that it is possible for Lancaster's method to handle non-stationary data. Different from Lee and Phillips [10], our paper focuses on the selection of exogenous regressors rather than lag order selection. For the purpose of model comparison, proper priors must be used for parameters not common to all the models to avoid Bartlett's paradox when Bayes factors are used (see e.g., [12]). Dhaene and Jochmans [8] found that the modified profile likelihood with Lancaster's correction term can be infinite over an infinite parameter support, which implies that, in a Bayesian context, a prior ensuring a proper posterior distribution should be used. We develop a data-dependent proper prior to combine with Lancaster's reparameterization to calculate Bayes factors, and find that model selection based on Bayes factors is inconsistent only in very extreme situations, such as when the number of time periods is 2 or when the true value of the lag coefficient is less than −1. On the other hand, model selection based on the Bayesian information criterion (BIC) with the parameters evaluated at the biased MLE can be inconsistent under more common circumstances. From an empirical point of view, researchers are often confronted with a large number of possible regressors and hence many possible models. Model uncertainty leads to estimation risk, especially in small samples, since the estimates from a misspecified model could be far away from the true parameter values and hence misleading. From our simulations, we find that Bayesian model averaging (BMA) can reduce such risk and produce point estimators with lower root mean squared errors (RMSE).
The plan of the paper is as follows. Section 2 summarizes the model and the posterior results, with the estimation strategies discussed. Section 3 gives our motivations to compare different model specifications and shows the conditions under which our estimator will be consistent when the model is misspecified. Section 4 presents the conditions under which Bayes factors and BIC can be consistent in model selection. In Section 5, we carry out simulation studies to verify our claims before Section 6 concludes.

¹ They treat the bias as a result of finite time periods (finite-sample bias) and remove the first-order bias in the Taylor expansion.

The Model and the Estimation
Here we investigate the first-order autoregressive linear panel model with a fixed effect f_i,

y_{i,t} = f_i + y_{i,t−1} ρ + x_{i,t}′β + u_{i,t},  i = 1, . . ., N,  t = 1, . . ., T,  (1)

where ρ is a scalar and x_{i,t} is a k × 1 vector of explanatory variables. Denote u_i = (u_{i,1}, u_{i,2}, . . ., u_{i,T})′, X_i = (x_{i,1}, x_{i,2}, . . ., x_{i,T})′ and y_i = (y_{i,1}, y_{i,2}, . . ., y_{i,T})′. We can rewrite Equation (1) in the vector form

y_i = f_i ι + y_{i_} ρ + X_i β + u_i,  (2)

where ι is a T × 1 vector of ones and y_{i_} = (y_{i,0}, y_{i,1}, . . ., y_{i,T−1})′. By repeated substitution, we can obtain

y_{i_} = f_i ζ_1 + y_{i,0} ζ_2 + C X_i β + C u_i,  (3)

where ζ_1 and ζ_2 are T × 1 vectors and C is a T × T matrix, all known functions of ρ. The following are the assumptions we use throughout the paper.
for some δ > 0, all i = 1, 2, . . ., N, t = 1, 2, . . ., T and h = 1, 2, . . ., k, where x_{i,t,h} denotes the hth element of x_{i,t}; (c) k and T are finite; (d) is finite and uniformly positive definite (see [13], p. 22), where H = I_T − ιι′/T; (e) for any finite value of ρ, the following expression is uniformly positive given sufficiently large N. Assumption 1 implies that X_i, f_i and y_{i,0} are strictly exogenous. In comparison to the i.i.d. regularity conditions in Lancaster [11], Assumption 2 (a)-(d) allows the distributions of X_i, f_i and y_{i,0} to be heterogeneous across cross-sectional units, with slightly more rigorous conditions on their moments, so that the asymptotic results in the paper hold. Assumption 2 (e) is used to simplify the proofs of Proposition 4 and Lemma 10 in Appendix D. Its purpose is to prevent the (within-group) regression of (f_i ζ_1 + y_{i,0} ζ_2 + C X_i β) on fixed effects and X_i from having a perfect fit asymptotically (i.e., R-squared tending to 1 as N increases) and to ensure that the true value of ρ is asymptotically the local mode of its marginal posterior (discussed later). When β = 0 (no exogenous regressors in the model), Assumption 2 (e) rules out f_i = 0. When T ≥ 3, if Assumption 2 (e) is satisfied and β ≠ 0, as shown in Appendix D (Equation (53) and its discussion), the probability limit in Equation (5) should also be strictly positive. In practice, one could calculate the expression after plim in Equation (5) to check Assumption 2 (e), with ρ and β replaced by their consistent estimates. If the value of the expression decreases towards 0 with N, there would be concern about Assumption 2 (e). We would think such a case should be very rare with real data.
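As a concrete reference point, the data generating process in Equation (1) can be simulated in a few lines. The uniform draws for f_i and y_{i,0} follow the simulation design used later in the paper; the i.i.d. normal regressors are a simplifying assumption of this sketch (the paper's own regressor process, in Appendix A, is richer).

```python
import numpy as np

def simulate_panel(N, T, rho, beta, sigma2=1.0, seed=0):
    """Simulate y_{i,t} = f_i + rho*y_{i,t-1} + x_{i,t}'beta + u_{i,t}.

    Illustrative DGP: f_i and y_{i,0} are drawn from U[-4, 4] as in the
    paper's simulations; the regressors here are i.i.d. normal, a
    simplification relative to the paper's Appendix A process.
    """
    rng = np.random.default_rng(seed)
    k = len(beta)
    f = rng.uniform(-4, 4, size=N)           # fixed effects
    y0 = rng.uniform(-4, 4, size=N)          # initial conditions
    X = rng.normal(size=(N, T, k))           # exogenous regressors (simplified)
    u = rng.normal(scale=np.sqrt(sigma2), size=(N, T))
    y = np.empty((N, T))
    y_lag = y0.copy()
    for t in range(T):
        y[:, t] = f + rho * y_lag + X[:, t, :] @ beta + u[:, t]
        y_lag = y[:, t]
    return y, y0, X, f
```

The returned arrays (N × T outcomes, initial values, N × T × k regressors, fixed effects) are the inputs used by the posterior computations below.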
The MLE of the common parameters ρ, β and σ² are not consistent due to the presence of the incidental parameters f_i, whose number increases with N. For fixed T, it is impossible for the MLE of f_i to be consistent. When the predetermined regressor y_{i,t−1} is included, the MLE of ρ will be correlated with that of f_i and will also be inconsistent. To obtain consistent estimators of the common parameters, Lancaster [11] suggested reparameterizing the fixed effect as f_i = g_i e^{φ(ρ)}, where g_i is the new fixed effect, ι is a vector of ones and φ(ρ) is defined as

φ(ρ) = (1/T) Σ_{t=1}^{T−1} ((T − t)/t) ρ^t.

We use the following prior for ρ, β, σ² and g = (g_1, g_2, ..., g_N):

p(ρ, β, σ², g) ∝ (1/σ²) p(β|σ²) 1(ρ_L ≤ ρ ≤ ρ_U).

In other words, a flat prior for g and Jeffreys' prior for σ² are used, and ρ is uniformly distributed over [ρ_L, ρ_U]. The specifications of ρ_L and ρ_U will be discussed in Proposition 4 later. p(β|σ²) takes the form of the g-prior in [14], a normal prior centred at zero whose covariance is proportional to σ²/η and built from X = (X_1, X_2, . . ., X_N). The strength of the prior depends on the value of η: the smaller η is, the less informative is our prior. As discussed in Section 4, to ensure model selection consistency we can choose 0 < η(N) = O(N^α) for α < 0. For our simulation studies below, we choose η = 1/(NT). The posterior results of the model are summarized below.

Proposition 3. The posterior distributions of the parameters in our model take the following forms: where Y_0 = (y_{1,0}, y_{2,0}, . . ., y_{N,0}), Y = (y_1, y_2, . . ., y_N) and IG(·) denotes the inverted gamma distribution with N(T − 1) degrees of freedom and mean (aρ² − 2bρ + c)/(N(T − 1) − 2).
Note that a and c in Equations (15) and (17) are close to the sums of squared residuals (SSR) obtained by regressing y_{i_} and y_i, respectively, on fixed effects and X_i. φ(ρ) in Equation (14) is the term from Lancaster's reparameterization, which corrects the local mode of the marginal posterior of ρ to make it consistent. Dhaene and Jochmans [8] showed that the modified profile likelihood function with φ(ρ) can be infinite as ρ → ∞. Analogously to their results, we show in Appendix D that the marginal posterior of ρ will be infinite, and hence improper, when ρ → ∞ or when T is odd and ρ → −∞. Lancaster [11] noted such behaviour of ρ's marginal posterior in simulations, but did not discuss how to specify the boundary points. In Proposition 4, we provide a data-dependent way to specify ρ_L and ρ_U, which is necessary for model comparison to avoid Bartlett's paradox. First note the probability limit of ψ(ρ).

Proposition 4. If X_i is the true set of exogenous regressors used to generate y_i, then under Assumptions 1 and 2, asymptotically the marginal posterior of ρ in Equation (13) will have more than one stationary point satisfying ψ(ρ) = 0: 3 stationary points when T is odd and 2 when T is even, regardless of the true value of ρ. The local posterior mode, which is a consistent point estimator, is asymptotically the stationary point nearest to the MLE at which ψ′(ρ) < 0, a condition the other stationary point(s) do not satisfy. ρ_U can be specified as the stationary point to the right of the posterior mode. When T is odd, ρ_L can be specified as the stationary point to the left of the posterior mode; when T is even, ρ_L can be chosen as a function of N such that ρ_L(N) < 0 is sufficiently small.⁴
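The SSR reading of a, b and c suggests a direct computation. The sketch below, in our own notation and with η = 0 (as in the BIC definitions of Section 4), within-transforms the data, projects out the included regressors, and forms the three cross products, so that aρ² − 2bρ + c is the SSR at a given ρ; the biased within-group MLE is then ρ̂ = b/a, as noted in Section 4.

```python
import numpy as np

def within_abc(y, y0, X):
    """Cross products behind Equations (15)-(17) with eta = 0 (a sketch).

    y : N x T array of y_{i,t};  y0 : length-N initial values
    X : N x T x k array of exogenous regressors
    a and c are (close to) the SSRs from regressing the within-transformed
    y_{i_} and y_i on the within-transformed X; b is the cross product.
    """
    N, T = y.shape
    k = X.shape[2]
    y_lag = np.column_stack([y0, y[:, :-1]])      # y_{i_}
    H = np.eye(T) - np.ones((T, T)) / T           # within-group transform
    yw = (y @ H).reshape(-1)                      # H is symmetric
    ylw = (y_lag @ H).reshape(-1)
    Xw = np.einsum('st,ntk->nsk', H, X).reshape(N * T, k)
    # project the included regressors out of both transformed variables
    ry = yw - Xw @ np.linalg.lstsq(Xw, yw, rcond=None)[0]
    rl = ylw - Xw @ np.linalg.lstsq(Xw, ylw, rcond=None)[0]
    a, b, c = rl @ rl, rl @ ry, ry @ ry
    return a, b, c

# The biased within-group MLE of rho is b / a.
```

The quadratic aρ² − 2bρ + c is minimized at ρ = b/a, which is the Nickell-biased MLE discussed below.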
Choosing the boundary points as in Proposition 4 ensures that the marginal posterior of ρ is proper and that its support contains the true value of ρ asymptotically. ρ_L and ρ_U are different from the boundary points of the constrained maximization in Dhaene and Jochmans [8], who only considered parameter estimation. The interval between our boundary points is wider than theirs, since we want to preserve the bell-shaped part of the posterior density curve for model comparison. Another point to note is that when the true exogenous regressors are included, the local posterior mode will exist regardless of the true value of ρ (even if it is 1) due to Assumption 2.2 (e). Next we investigate the consequences when we cannot include the true regressors.
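Proposition 4 translates into a simple numerical recipe: scan ψ for sign changes, refine each root, keep the root at which ψ is decreasing that lies nearest the MLE, and read ρ_U (and, for odd T, ρ_L) off the neighbouring stationary points. The sketch below assumes the user supplies ψ as a callable; the paper's ψ is built from a, b, c and φ (Equation (14), not reproduced here).

```python
import numpy as np

def posterior_mode_and_bounds(psi, rho_mle, lo=-10.0, hi=10.0, ngrid=2001):
    """Locate stationary points psi(rho) = 0 on a grid and pick the local
    posterior mode per Proposition 4: the root nearest the MLE at which
    psi is decreasing. Assumes at least one such root lies in [lo, hi].
    """
    grid = np.linspace(lo, hi, ngrid)
    vals = np.array([psi(r) for r in grid])
    roots = []
    for i in range(ngrid - 1):
        if vals[i] == 0.0:
            roots.append(grid[i])
        elif vals[i] * vals[i + 1] < 0:              # sign change: bisect
            a_, b_ = grid[i], grid[i + 1]
            for _ in range(60):
                m = 0.5 * (a_ + b_)
                if psi(a_) * psi(m) <= 0:
                    b_ = m
                else:
                    a_ = m
            roots.append(0.5 * (a_ + b_))
    eps = 1e-6
    modes = [r for r in roots if psi(r + eps) - psi(r - eps) < 0]
    mode = min(modes, key=lambda r: abs(r - rho_mle))
    right = [r for r in roots if r > mode + 1e-8]    # candidate rho_U
    left = [r for r in roots if r < mode - 1e-8]     # candidate rho_L (odd T)
    return mode, (max(left) if left else None), (min(right) if right else None)
```

For even T there is no stationary point to the left of the mode, so the function returns None there and ρ_L must be set by hand (e.g., ρ_L = −N as in the simulations).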

Motivations and Methods to Compare Different Model Specifications
In empirical applications, researchers are often faced with many possible regressors, suggested by different economic theories, to be included in Equation (1). Different models are defined by the inclusion of different combinations of the exogenous regressors and by whether or not the lagged dependent variable is present. Proposition 5 below implies that there is no guarantee that the posterior mode in Equation (13) is a consistent estimator if some true regressors are excluded from the model.

Proposition 5. The posterior mode in Equation (13) is a consistent estimator for ρ if and only if either Equation (20) or Equation (21) holds.

⁴ When T is even, lim_{ρ→−∞} ψ(ρ) = 0. For model comparison, we need a proper prior for ρ and hence ρ_L must be finite for finite N. In practice, we can choose ρ_L to ensure that [ρ_L, ρ_U] contains a unique posterior mode when the true set of regressors is included. In the subsequent simulations, we choose ρ_L = −N when T is even.
Here X̃ represents the regressors in the true model and X denotes the regressors we actually include in our candidate model, while ρ̃ is the true value of ρ.
The values of h_2(β, ρ) and h_3(β) depend on how the true regressors and the included regressors are related, apart from the values of β and ρ. For h_2(β, ρ) = h_3(β) = 0 to be satisfied, it suffices that the true regressors X̃ are a subset of X.⁵ When some true regressors are excluded, the model will suffer from omitted-variable bias unless Equation (21) holds. Given Assumptions 1 and 2, one example in which Equation (21) holds is that all the true regressors are covariance stationary with no serial correlation and the included regressors have zero correlation with the true regressors. In this restrictive case, it is possible to estimate ρ consistently without any true regressors included.⁶ To avoid inconsistent estimation due to model misspecification, one could include all the potential regressors in the model. In finite samples, however, that could inflate the posterior variances of the coefficients of the true regressors if too many irrelevant regressors are included. The simulation studies in Section 5 reveal that while including all the regressors does not influence the estimation of ρ in comparison to other consistent approaches, it leads to substantially higher RMSE when estimating β. Hence appropriate procedures for model selection are desirable. In a Bayesian framework, one can evaluate different model specifications, denoted by M_j below, by their posterior model probabilities, which can be calculated as

p(M_j | Y, Y_0, X_j) = p(Y | Y_0, X_j, M_j) p(M_j) / Σ_l p(Y | Y_0, X_l, M_l) p(M_l),

where X_j denotes the regressors included under M_j and p(Y | Y_0, X_j, M_j) is the marginal likelihood, obtained by integrating out ρ in Equation (13) or (43) in Appendix C.
K is the number of all potential exogenous regressors, so the total number of models is 2^{K+1}. p(M_j) is the prior probability of model j. For finite samples, Ley and Steel [15] showed that the choice of prior model probabilities can affect the posterior results to a large extent when the number of potential regressors is large compared to the sample size. In what follows, we focus on the asymptotic behaviour of posterior model probabilities and assume all models are equally probable a priori. The posterior model probability p(M_j | Y, Y_0, X_j) will hence depend only on p(Y | Y_0, X_j, M_j).
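With equal prior model probabilities, the computation above reduces to normalizing the marginal likelihoods, which in practice is done on the log scale. A small helper of our own (not from the paper):

```python
import numpy as np

def posterior_model_probs(log_ml):
    """Turn log marginal likelihoods log p(Y|Y_0, X_j, M_j) into posterior
    model probabilities under equal prior model probabilities, using the
    log-sum-exp trick for numerical stability."""
    log_ml = np.asarray(log_ml, dtype=float)
    w = np.exp(log_ml - log_ml.max())    # subtract the max before exponentiating
    return w / w.sum()
```

The Bayes factor between M_j and M_l is then simply exp(log_ml[j] − log_ml[l]).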

⁵ Note that h_3(β) is the probability limit of 1/N times the SSR obtained by regressing X̃_i β on fixed effects and X_i.
⁶ Due to the assumptions in the example,

Consistency in Model Selection
In this section, we discuss the conditions under which the posterior model probability of the true model tends to 1 as N tends to infinity. We also analyze whether the Bayesian information criterion (BIC) based on the biased MLE is consistent in model selection.
For static panel models, when the true value of ρ is zero and the lagged dependent variable is not included as a regressor, the analysis of Bayes factors is similar to that of Fernandez et al. [16]. In our context, we can ensure model selection consistency by setting η as a function of N with 0 < η(N) = O(N^α) for α < 0. As for BIC, it is consistent in model selection for static panel models.
Let us now consider the case when our candidate model (M_1) contains y_{i_} and X_{i1}. M_1 is compared to either M_0, which has X_{i0} but no y_{i_}, or M_2, which has X_{i2} and y_{i_}. X_{ij} denotes the exogenous regressors included under model M_j for j = 0, 1, 2, which satisfy Assumption 2.1 and 2.2. X_{i1} can be the same as or different from X_{i0}, while X_{i2} is different from X_{i1}. The Bayes factors are given in Equations (25) and (26), respectively, where k_j denotes the number of columns in X_{ij}; a|M_j, b|M_j and c|M_j in ψ(ρ|M_j), defined in Equation (14), are calculated by replacing X_i with X_{ij} in Equations (15) to (17) for j = 0, 1, 2. Multiplied by 1/N, they have the following probability limits under Assumptions 1 and 2 with η(N) = O(N^α) and α < 0: γ|M_j stands for the Nickell MLE bias of ρ under M_j. We can see that the MLE bias results from two sources: the incidental-parameter part (σ²h(ρ)) and the model-misspecification part (h_2(β|M_j, ρ|M_j)).
Proposition 4 shows that when the model is correctly specified, the local posterior mode is a consistent estimator of ρ. In the simulation studies in Section 5, we find that when some combination of wrong exogenous regressors is included, the marginal posterior density of ρ can be either monotonically increasing or U-shaped, depending on the value of T, and does not have a local maximum. When we find such a wrong model, we assign it a posterior model probability of 0 and do not estimate it. In Propositions 6 and 7 below, we consider the cases when the local maximum of ψ(ρ|M_j) in Equation (18) exists and show sufficient conditions under which the Bayes factors in Equations (25) and (26) lead to the selection of the true model asymptotically. Denote by ρ*|M_j the local maximum under M_j.

Proposition 6. When M_1 is the true model, i.e., the true value of ρ is nonzero and X_{i1} is the set of true regressors used to generate Y, as N increases, the Bayes factor p(Y|Y_0, X_1, M_1)/p(Y|Y_0, X_0, M_0) in Equation (25) will tend to infinity if Equation (31) holds. When M_0 is the true model, i.e., X_{i0} is the set of true regressors and the true value of ρ is 0, as N increases, p(Y|Y_0, X_1, M_1)/p(Y|Y_0, X_0, M_0) in Equation (25) will tend to 0 if either of the following is satisfied: (a) Equation (32) holds; if Equation (21) is true under M_1, Equation (32) will hold; (b) Equation (20) is true under M_1, in which case the left-hand side of Equation (32) is equal to 0.
Proposition 7. When M_2 is the true model and M_1 is the misspecified model, as N increases, the Bayes factor p(Y|Y_0, X_1, M_1)/p(Y|Y_0, X_2, M_2) in Equation (26) will tend to 0 if either of the following holds: (a) Equation (33) holds; if Equation (21) is true under M_1, Equation (33) will hold; (b) Equation (20) is true under M_1, in which case the left-hand side of Equation (33) is equal to 0.
Since both Equations (20) and (21) imply that the local posterior mode in Equation (13) is a consistent estimator for ρ, we can see from Propositions 6 and 7 that if the posterior mode is consistent under the misspecified model, the misspecified model will not be chosen by the Bayes factor (model selection will be consistent). In Appendix D, we show that h(ρ) is positive over the real line when T is an even number. This implies that φ(ρ) is an increasing function over the real line. Also note that φ(0) = 0. Hence φ(ρ) < 0 for ρ < 0, and it is possible for Equation (31) to be violated when T is even and the true value of ρ is negative. As shown in the last paragraph of Appendix E, though Equation (31) can be violated in the extreme case of T = 2 with negative true ρ, apart from this case a violation of Equation (31) can only occur when the true ρ is less than −1 for T an even number greater than 2, which may not be relevant for most economic applications with ρ ∈ [−1, 1].
Note that Equation (32) is the special case of Equation (33) with the true value of ρ equal to 0. When the posterior mode is not consistent under the misspecified model, it is difficult to state under what circumstances Equation (33) is or is not satisfied, since ρ* generally does not have a closed form. By construction, ρ*|M_1 is a local minimum of the left-hand side of Equation (33). In our simulation studies in Section 5, we calculate Equation (32) or (33) under different settings when model selection errors based on Bayes factors occur. We cannot find a single occurrence of either Equation (32) or (33) being violated except in the cases when Equation (20) is true, that is, when the candidate model nests the true model. It appears that the left-hand side of Equation (33) can be interpreted as a measure of how close the candidate model is to nesting the true model. Note that with real data it is difficult to check Equation (20), but one can assess whether Equation (33) is violated by replacing d(ρ|M_j) with N and supplanting σ² and ρ by their consistent estimates, e.g., those from the model including all the potential regressors.
Proposition 8 below shows when BIC based on the biased MLE is consistent in model selection. BIC for the models with and without the lagged dependent variable is defined in Equations (34) and (35), respectively, where a, b and c are defined in Equations (15), (16) and (17) with η = 0, and k is the number of exogenous regressors included. The model with the smaller BIC value is preferred.
Proposition 8. For the comparison of the two models in Equation (25), when M_0 is the true model, BIC will be consistent if Equation (36) is satisfied. However, if Equation (20) is true under M_1, the left-hand side of Equation (36) will be greater than 0 and BIC will be inconsistent.
When M_1 is the true model, BIC will be consistent in model selection if Equation (37) is met. If X_{i0} is the same as X_{i1} and the probability limit of ρ̂_MLE is equal to 0, the left-hand side of Equation (37) will be 0 and BIC will be inconsistent.
For the comparison of the two models in Equation (26), when M_1 is the true model, BIC will be consistent in model selection if Equation (38) holds. Conditions (36), (37) and (38) are sufficient but not necessary for BIC to be consistent in model selection, since BIC has a penalty term against over-parameterization (the last term in Equations (34) and (35)). Note that ρ̂_MLE = b/a, and its probability limit follows from Equations (27) and (28), where γ is the Nickell bias. The violation of Equations (36) and (37) is related to the hypothesis test H_0: ρ = 0. Given the SSR interpretation of a in Equation (15) (with η = 0), the practical implication of this result is that if BIC chooses the model with all the regressors included, which always has the smallest a/N in finite samples compared to other models, we should be cautious about such a choice in applications.
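Since Equations (34) and (35) are built from a, b and c with η = 0, the comparison can be sketched with the standard BIC form n log(σ̂²) + p log n, where σ̂² = SSR/n. The effective sample size n and the exact parameter counts below are our assumptions for illustration, not the paper's expressions.

```python
import numpy as np

def bic_dynamic(a, b, c, k, n):
    """BIC sketch for the model WITH the lagged dependent variable.

    The concentrated SSR is c - b**2/a, i.e., a*rho**2 - 2*b*rho + c
    evaluated at the biased MLE rho = b/a; k exogenous coefficients
    plus one lag coefficient are penalized (an assumed count).
    """
    ssr = c - b ** 2 / a
    return n * np.log(ssr / n) + (k + 1) * np.log(n)

def bic_static(c, k, n):
    """BIC sketch for the model WITHOUT the lagged dependent variable:
    the SSR is simply c."""
    return n * np.log(c / n) + k * np.log(n)
```

The model with the smaller value is preferred, so a strong lag fit (b²/a large relative to the penalty) favours the dynamic specification.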

Simulation Studies
In this section we use Monte Carlo simulation to verify the claims in Propositions 6, 7 and 8 and to investigate the impact of model uncertainty on point estimation. The number of simulations is 1000. We set T = 4, σ² = 1, η = 1/(NT), ρ_L = −N when T is even, and the number of possible regressors to 8. We select 4 regressors out of the 8 (K) to generate the dependent variable. The coefficient values of the chosen regressors are 0.1, 0.3, 1 and 2, respectively. We draw f_i and y_{i,0} independently from U[−4, 4]. For each simulation, we calculate the posterior model probabilities and the BIC of all the models and evaluate the performance of the two criteria. In Proposition 5, we show that the posterior mode is not a consistent estimator of ρ when neither Equation (20) nor (21) holds, which is possible when the regressors exhibit collinearity and serial correlation. We therefore generate the potential regressors to be covariance stationary and make them serially correlated and correlated with each other. The details of the data generating process (DGP) can be found in Appendix A. There are three parameters controlling the properties of the regressors: σ²_X = 5.33 (the variance, equal to those of f_i and y_{i,0}), s = 0.5 (the autocorrelation coefficient) and λ = 1 (between 0 and 1; the closer to 0, the higher the correlation among the regressors). These settings are the same for the subsequent simulation exercises unless otherwise stated. The results of robustness checks with other values of σ²_X, s and λ are shown in Appendix B.
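The exact regressor process is in Appendix A (not reproduced here); one plausible construction matching the three controls above is a stationary AR(1) in t with coefficient s and cross-regressor correlation induced through a common shock whose weight grows as λ shrinks. Everything below is our own illustrative reading, not the paper's Appendix A code.

```python
import numpy as np

def gen_regressors(N, T, K, sigma2_X=5.33, s=0.5, lam=1.0, seed=0):
    """Hypothetical regressor DGP: each of the K regressors is a stationary
    AR(1) in t with coefficient s and stationary variance sigma2_X; a
    common innovation with weight sqrt(1 - lam) correlates the regressors
    (lam = 1 gives independent regressors; lam must lie in (0, 1])."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(sigma2_X * (1 - s ** 2))   # innovation sd preserving sigma2_X
    X = np.zeros((N, T + 1, K))
    X[:, 0, :] = rng.normal(scale=np.sqrt(sigma2_X), size=(N, K))
    for t in range(1, T + 1):
        common = rng.normal(scale=sd, size=(N, 1))   # shared across regressors
        idio = rng.normal(scale=sd, size=(N, K))     # regressor-specific
        e = np.sqrt(lam) * idio + np.sqrt(1 - lam) * common
        X[:, t, :] = s * X[:, t - 1, :] + e
    return X[:, 1:, :]                                # drop the burn-in period
```

Starting from the stationary distribution keeps the unconditional variance at σ²_X in every period.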

When Model Selection is Consistent
The model selection performance results for different values of ρ > −1 appear similar and are available upon request. If some true regressors are excluded, Equation (20) or (21) would be violated more often under ρ = −1 than when ρ takes other values. The results presented in Table 1 are therefore based on ρ = −1. The "ER" column shows the error rates of Bayes factors, while the "ERBIC" column contains those of BIC. For N = 40, BIC performs better than Bayes factors. As the sample size increases, the error rates of the two criteria get closer and both decrease, which implies both are consistent in model selection. Note that the coefficient of one of the exogenous regressors in the true model is equal to 0.1, which is close to 0. Models selected by Bayes factors often fail to pick up this regressor when N = 40. The column "nest" indicates how often the model chosen by Bayes factors only omits the regressor with coefficient 0.1 or is the same as the true model; in other words, the true model nests the chosen model. Comparing this column to "nestbic", we can see that the models chosen by Bayes factors are more often nested inside the true model, with the less important regressor excluded, than the models from BIC. The column "ER11" shows the proportion of errors committed when the true model and the model chosen by Bayes factors both include y_{i_} but have different exogenous regressors. We can see that all the errors made by Bayes factors and BIC are due to the inclusion of the wrong set of exogenous regressors rather than the omission of y_{i_}. Hence there is no point in checking whether Equation (31) or (37) is violated. When the errors of ER11 or ERBIC11 occurred, we checked whether Equation (33) or (38) was violated. For this and the following simulation exercises, we did not find any violations of Equation (33). Equation (38) is only violated, with its left-hand side being 0, when the chosen model (M_2) includes all the regressors of the true model (M_1); in this case, BIC is still consistent. In other words, the errors of ER11 or ERBIC11 are fixable with larger sample sizes for both selection criteria.

When Equation (31) is Violated for Bayes Factors
In Section 4, we mentioned that when T is even and the true ρ is sufficiently negative, it is possible for Equation (31) to be violated. Under the settings in Appendix A, the left-hand side of Equation (31) often has a root in ρ between −7.4 and −7.2 when the true regressors are included. If the true ρ is less than the root, Equation (31) will be violated. In our next exercise, we set ρ = −7.4 and run the simulations again. The results are in Table 2. We can see that Bayes factors cannot select the true model even once out of the 1000 simulations for any sample size, while the error rates of BIC gradually decrease with N. All the Bayes factor errors are made when the chosen model does not contain y_{i_} (see "ER10") and Equation (31) is violated. Similar problems with Bayes factors arise when T = 2 and −1 < ρ < 0, as explained in Appendix E. Table 3 shows the simulation results for such a situation with ρ = −0.9, σ² = 100 and no X_i in the true model, while the other settings are the same as before. Bayes factors again show no signs of model selection consistency, almost always due to the violation of Equation (31). The "noreg" column shows how often, among the errors made by Bayes factors, the chosen model only includes the fixed effects with no other regressors. As the sample size increases, Bayes factors tend to make more such errors, which BIC never commits. So far, Bayes factors have been inconsistent only when Equation (31) is violated, which takes place under rather extreme situations. Next we show that BIC can perform poorly under more common circumstances, which are more plausible in economic applications. As discussed in Proposition 8, if the true ρ = 0, BIC could asymptotically choose a model with the true exogenous regressor(s) and y_{i_} over the true model. For the next simulation exercise, we change ρ to 0. The results are shown in Table 4.
Bayes factors now have smaller error rates, while BIC cannot identify the true model. As expected, BIC always chooses models with y_{i_} (see "ER01BIC"), and the proportion of errors violating Equation (36) ("no(36)") gets higher for bigger sample sizes. Column "cnestbic" shows how often the model chosen by BIC nests the true model when the errors of ER01BIC occur. The values in this column are only slightly smaller than those in "no(36)", which indicates that a high proportion of the violations of Equation (36) happen when the chosen model nests the true model.
Another situation of poor BIC performance is when Equation (37) is close to violation. In Proposition 8, we mentioned that if plim_{N→∞} ρ̂_MLE = 0 under the true model, a candidate model with the same exogenous regressors as those of the true model will violate Equation (37). In our next experiment, we do not include any exogenous regressors in the true model and set ρ = 0.0756 to make ρ̂_MLE close to 0. If the candidate model (M_0) only has fixed effects, the left-hand side of Equation (37) is close to but slightly above 0. The simulation results are given in Table 5. We can see that the BIC error rates gradually increase to near 1 with the sample size. Column "noregbic" indicates the proportion of BIC errors committed when the chosen model only includes fixed effects. Note that the values in this column are the same as those in "ER10BIC", which also get closer to 1 with the sample size. Clearly, the poor performance of BIC in this scenario is related to Equation (37).

If Equation (38) is violated, BIC will be inconsistent, which could happen when M_2 nests M_1 and f_i is highly correlated with all the potential exogenous regressors. In our next exercise, we set T = 3, ρ = −1 and generate y_{i,0} and f*_i from U[−1, 1]. When we generate x_{i,t} in Equation (40), we set s = −0.9. f_i is generated from f*_i and the regressors x_{i,t,h}. In the true model, no exogenous regressors are included, which implies that any candidate model including y_{i_} nests or is the same as the true model. The results in Table 6 show that BIC is not consistent, with increasing error rates as the sample size gets larger than 200, and all the errors are of type ERBIC11. For all the errors made by BIC, we have found that Equation (38) is violated; the ratio of a|M_2 to a|M_1 computed from all the errors gets smaller with the sample size. This implies that BIC tends to choose the model with the lower a in comparison to the true model at larger sample sizes. Note that the simulation results are sensitive to the parameter settings. If we change T to 4 while keeping the other settings the same, among the BIC errors a|M_2 will be virtually the same as a|M_1 and BIC will show decreasing error rates, which, though, are higher than those of Bayes factors across sample sizes. In this case, we need to change ρ and s to make BIC inconsistent. The results are available upon request. To sum up, it is possible for Equation (36), (37) or (38) to be violated, and BIC can be inconsistent in model selection under more common circumstances than Bayes factors.

Point Estimation
Judging from the previous simulation results, if we simply select the model with the highest posterior model probability to provide the estimates of interest, the chances are high that the selected model is not the true model, especially when N is small, regardless of which criterion we use. Next we investigate how model uncertainty affects point estimation. We set ρ = 1 and the number of simulations equal to 2000, and then evaluate the performances of different consistent point estimators.10

Table 7 shows the root mean squared errors (RMSE) with the cross section sample size (N) equal to 40. The true values of ρ and β are shown under the column "True". There are 8 potential regressors, 4 of which are not included in the true model and hence have coefficients equal to 0. The column "Top" shows the RMSE resulting from the posterior mode of the model with the highest posterior model probability, the column "All" shows the results from the model which includes all the potential regressors, while the values in the column "BMA" are from the posterior mode average over different models with weights equal to the posterior model probabilities. To evaluate the significance of a regressor in the Bayesian context, we can calculate the sum of the posterior model probabilities of all the models which include that regressor. The RMSE in the columns headed with percentage numbers are calculated based on this inclusion probability criterion: if the inclusion probability for a regressor is lower than the percentage number of the column, we simply use zero as its point estimate; otherwise, we use the BMA estimate.

From Table 7, we can see that the model including all the potential regressors has much higher RMSE for all the parameters except ρ than the other methods. BMA has smaller RMSE than the top model criterion for almost all the parameters,11 and it tends to have lower RMSE than the inclusion probability criteria for parameters different from 0 but larger RMSE for parameters equal to 0. A higher inclusion probability threshold tends to give smaller RMSE when the true value of the parameter is 0 but higher RMSE for non-zero parameters. The last row of Table 7 shows the sum of RMSE in each column, which is a measure of the overall performance of each criterion. BMA and the various inclusion probability criteria are all better than the top model and the all-inclusive model, and the sum of RMSE is smallest when we set the inclusion probability threshold to 50%.

To add more insight, we present the error rates of excluding the right or including the wrong regressor based on different inclusion probability criteria in Table 8 and compare them to those from the top model. Similar to the findings for RMSE, higher inclusion probability thresholds tend to give larger error rates for non-zero parameters but smaller error rates for the zero parameters. The last row shows the average error rates of the different columns, of which the highest value appears when the 10% criterion is used, with the majority of the errors coming from the zero parameters. Note that for a particular regressor, the prior inclusion probability is 50% in our setting; if the posterior inclusion probability is no less than 50%, the data confirm or strengthen the prior. The top model criterion has a smaller average error rate than all the inclusion probability criteria except 40% and 50%.

Table 9 presents the RMSE sums and average error rates under different sample sizes. BMA has smaller RMSE than the top model estimators for all sizes, while the top model average error rate is in general close to the minimum over the various inclusion probability criteria. The minima of the RMSE sums are usually attained when the inclusion probability threshold is at or above 50%, while the minimum average error rates appear at around 50%. Therefore, under our simulation settings, it seems sensible to use a 50% inclusion probability to decide whether or not a regressor should be included for point estimation.

10 The absolute value of the Nickell bias under ρ = 1 is bigger than when ρ = −1. None of the conditions for model selection consistency are found to be violated.
11 We have also obtained the finite sample biases of the different point estimators (available upon request), which show that the top model point estimators are generally less biased than the other criteria; the higher top model RMSE should therefore be due to larger estimator variances.
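The estimators compared above can be sketched in a few lines. The model probabilities and per-model posterior modes below are hypothetical stand-ins for the quantities computed in our simulations; the thresholding rule mirrors the inclusion probability criterion described in the text.

```python
# Sketch of BMA point estimation with an inclusion-probability threshold.
# The posterior model probabilities and per-model posterior modes are
# hypothetical stand-ins for the quantities computed in the paper.

def bma_estimate(models, probs, threshold=0.5):
    """models: list of dicts mapping regressor name -> posterior mode
    (a regressor absent from a dict is excluded from that model).
    probs: posterior model probabilities, summing to 1.
    A coefficient's BMA estimate is its probability-weighted average
    (counting 0 when excluded); it is set to 0 whenever its posterior
    inclusion probability falls below `threshold`."""
    names = sorted({k for m in models for k in m})
    est = {}
    for name in names:
        # posterior inclusion probability: total mass of models containing it
        incl = sum(p for m, p in zip(models, probs) if name in m)
        bma = sum(p * m.get(name, 0.0) for m, p in zip(models, probs))
        est[name] = bma if incl >= threshold else 0.0
    return est

models = [{'x1': 0.52, 'x2': 0.10}, {'x1': 0.48}, {'x2': 0.12}]
probs = [0.5, 0.4, 0.1]
print(bma_estimate(models, probs, threshold=0.5))
```

Raising `threshold` toward 100% zeroes out weakly supported regressors, which is exactly why higher thresholds help for true-zero coefficients but hurt for non-zero ones.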

Conclusions
In this paper, we investigated consistent parameter estimation and model selection for the linear dynamic panel model. We used the fixed effect reparameterization proposed by Lancaster [11] combined with our data dependent prior for estimation, and calculated Bayes factors to compare different model specifications. We recommend that model selection precede parameter estimation, since Lancaster's fixed effect transformation does not necessarily lead to consistent estimation when some true exogenous regressors are excluded. We have given the conditions under which Bayes factors or BIC lead to consistent model selection, and have shown that Bayes factors can be inconsistent in model selection when the number of time periods is 2 or when the true autoregressive coefficient is less than −1; such situations should be rare in most economic applications. BIC based on the biased MLE can be inconsistent when the fixed effects are highly correlated with all the potential exogenous regressors, or when the true autoregressive coefficient is 0 or its MLE is close to 0, which is more likely to happen in practice.
When model uncertainty is substantial, e.g., with small sample sizes, we argue for the use of Bayesian model averaging, which produced point estimators with smaller RMSE than the model with the highest posterior model probability in our simulation exercises. Inclusion probability criteria can help reduce estimation risk and decide which regressor(s) should be chosen; we recommend using 50% (the prior inclusion probability) to decide the inclusion of a regressor, which usually produced the smallest RMSE and average error rates in our simulations. A promising direction for future research is to extend Lancaster's reparameterization to higher order AR models and to consider lag order selection alongside regressor selection.

B. Properties of the Exogenous Regressors in the Simulation
Here we perform some robustness checks of our simulation results under different settings. Apart from the conditions in Propositions 6 to 8, the model selection performance of Bayes factors and BIC is also sensitive to the properties of X_i through the values of σ_X², s and λ. We first reduce σ_X² to 1.33 to obtain the results in Table 10. The error rates of the two criteria are higher for all sample sizes than those in Table 1, while the nest rates are all lower. Similar deterioration in model selection could also occur with inflated error variances (σ²). Hence model selection performance is affected by the relative strength of the signal compared to the noise, which is determined by their variances.
Next we show that the levels of serial correlation and collinearity in the regressors also affect model selection performance. Recall from Section A that s is the first order autocorrelation and λ controls the level of collinearity. To obtain regressors with no collinearity, we can set q_h to be the hth column of an identity matrix of dimension K. We set ρ = 1 and N = 200. The error rates under different levels of serial correlation and collinearity are shown in Table 11. Cross-regressor correlation and positive serial correlation are harmful for model selection: if the regressors are orthogonal to each other, Bayes factors and BIC have lower error rates than when collinearity is present, while the highest error rates appear when s is 0.9, at every level of collinearity. One intriguing phenomenon is that negative serial correlation seems to enhance model selection performance in most cases, compared to positive or no serial correlation.
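As an illustration of how such regressor designs can be generated, the sketch below draws AR(1) regressors whose innovations are mixed across columns. The mixing matrix is our own stand-in for the q_h/λ construction of Section A (not reproduced in this excerpt); lam = 0 gives orthogonal regressors and larger lam induces more collinearity.

```python
# Sketch: generating K regressors with first-order serial correlation s
# and cross-sectional collinearity. The mixing matrix is a hypothetical
# stand-in for the q_h / lambda construction in Section A of the paper.
import numpy as np

def simulate_X(N, T, K, s, lam, sigma2_X=1.33, seed=0):
    """X[i, t, k]: AR(1) over t with coefficient s; innovations are mixed
    across the K columns so lam > 0 induces collinearity."""
    rng = np.random.default_rng(seed)
    # lam = 0: identity mixing, i.e., orthogonal regressors
    mix = (1 - lam) * np.eye(K) + lam * np.ones((K, K)) / K
    X = np.zeros((N, T, K))
    X[:, 0] = rng.standard_normal((N, K)) @ mix.T * np.sqrt(sigma2_X)
    innov_sd = np.sqrt(sigma2_X * (1 - s**2))  # keeps variance roughly sigma2_X
    for t in range(1, T):
        X[:, t] = s * X[:, t - 1] + rng.standard_normal((N, K)) @ mix.T * innov_sd
    return X

X = simulate_X(N=200, T=6, K=4, s=0.5, lam=0.3)
print(X.shape)  # (200, 6, 4)
```

Varying s and lam over a grid and rerunning the model selection exercise is how the sensitivity pattern in Table 11 can be reproduced under these assumptions.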

C. Proof of Proposition 3 and Proposition 5
Here we use a different derivation from Lancaster [11]. In brief, we attempt to find a correction function, attached to the marginal posterior density of ρ, such that the mode of the marginal posterior is a consistent estimator for ρ. We first reparameterize the fixed effect in terms of g_i and r(ρ), where r(ρ) is a function of ρ to be determined later. The derivation of the conditional posterior distribution p(g_i, β, σ²|ρ, Y, Y_0) follows standard Bayesian techniques, see e.g., [19], Chapter 10; the details are available upon request. Here we just show the results after g_i, β and σ² are integrated out.
Taking logs and differentiating both sides with respect to ρ, then setting the derivative equal to 0, we obtain the first order condition. Suppose for now that we have included the true regressors in our model. Taking the probability limit of the right hand side, using Equations (27), (28) and (29), and evaluating both sides at ρ gives a differential equation for r(ρ). Solving it yields r(ρ) = exp[−φ(ρ)], where φ(ρ) is given in Equation (7). Replacing r(ρ) with exp[−φ(ρ)] in Equation (43) and dropping the terms not involving ρ gives the result in Equation (13).
When some true regressors are excluded from the model, the differential Equation (44) becomes Equation (46). If the solution in Equation (45) were still valid, the stated identity would have to hold. So unless either the left hand side equals h(ρ) or h_2(β, ρ) = h_3(β) = 0, Equation (45) will not be a solution of Equation (46). In other words, the reparameterization of the fixed effect in Equation (6) cannot lead to consistent estimation of ρ.

D. Proof of Proposition 4
To prove the claims in Proposition 4, we first need to prove Lemma 9 and Lemma 10.
Lemma 9.For T ≥ 3, when T is odd, the polynomial h(ρ) is strictly increasing over (−∞, ∞) and has only one real root in [−2, −1); when T is even, the polynomial h(ρ) is greater than 0 with no real roots and is strictly decreasing for ρ < −1 and strictly increasing for ρ > −1 with −1 as the minimum point.
From Table 14, we can see that h(ρ) does not have real roots, and hence is greater than 0, when T is an even number. Table 15 shows that h(ρ) has only one real root when T is odd. Since 2T > 0 and h(ρ) is strictly increasing when T is an odd number no less than 3, the real root of h(ρ) = 0 must lie between −2 and −1.
Table 14. Sturm sequence of T h(ρ)(ρ − 1)² when T is even and greater than or equal to 3.
Table 15. Sturm sequence of T h(ρ)(ρ − 1)² when T is odd and greater than or equal to 3.
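Sturm's theorem, the device behind Tables 14 and 15, counts the distinct real roots of a polynomial in an interval from the sign changes of its Sturm sequence. Below is a self-contained sketch with exact rational arithmetic; the cubic at the bottom is a hypothetical stand-in, since h(ρ) itself is not reproduced in this excerpt.

```python
# Sketch: Sturm's theorem for counting real roots of a polynomial.
# Coefficients are listed highest degree first; exact Fractions avoid
# floating-point sign errors in the sequence.
from fractions import Fraction

def polydiv_rem(a, b):
    # Remainder of polynomial division a / b.
    a = a[:]
    while len(a) >= len(b) and any(a):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)
    while a and a[0] == 0:  # strip leading zeros of the remainder
        a.pop(0)
    return a

def sturm_sequence(p):
    # p, p', then negated remainders until a constant is reached.
    dp = [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]
    seq = [p, dp]
    while len(seq[-1]) > 1:
        r = polydiv_rem(seq[-2], seq[-1])
        if not r:
            break
        seq.append([-c for c in r])
    return seq

def evaluate(p, x):
    v = Fraction(0)
    for c in p:  # Horner's rule
        v = v * x + c
    return v

def sign_changes(vals):
    signs = [(v > 0) - (v < 0) for v in vals if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def count_real_roots(p, lo, hi):
    # Sturm's theorem: number of distinct real roots in (lo, hi].
    seq = sturm_sequence(p)
    return (sign_changes([evaluate(q, lo) for q in seq])
            - sign_changes([evaluate(q, hi) for q in seq]))

# Stand-in for h(rho): rho^3 + 2rho^2 + 2rho + 1 = (rho + 1)(rho^2 + rho + 1)
p = [Fraction(c) for c in (1, 2, 2, 1)]
print(count_real_roots(p, Fraction(-10), Fraction(10)))  # 1 real root
```

Evaluating the sequence at the interval endpoints, as done symbolically at ±∞ in the tables, delivers the root counts claimed in Lemma 9.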

Additionally, we need the following lemma (Lemma 10) to show that the true value of ρ is a local posterior mode asymptotically; the equal sign in the lemma holds for T = 2 or ρ = 1, or when there are no exogenous regressors.

E. Proof of Proposition 6
To prove Propositions 6 and 7, we essentially need to simplify the integral(s) appearing in the Bayes factor. One way to do so is Laplace's method, the details of which can be found in [20,21]. Under the assumption that there exists only one solution ρ* in (ρ_L, ρ_U) to ψ′(ρ) = 0 with ψ″(ρ*) < 0, the integral appearing in the Bayes factor admits a Laplace approximation. Building on Equation (18), we obtain the first and second order derivatives of ψ(ρ) in Equations (60) and (61). If the chosen set of regressors leads to consistent estimation of ρ, i.e., either Equation (20) or (21) is satisfied, we can evaluate Equations (18), (60) and (61) at ρ. Replacing the terms of the Bayes factor in Equation (25) by their probability limits, and η/(η + 1) by O(N^α) with α < 0 (our prior choice for η), should not affect the analysis of the Bayes factor. Define ξ_10, which has the same asymptotic behaviour as Equation (62). If X_i1 is the true set of regressors generating Y (so h_2(β, ρ|M_1) = h_3(β|M_1) = 0 and ρ*|M_1 = ρ), ξ_10 can be written as in Equation (64), and p(Y|Y_0, M_1)/p(Y|Y_0, M_0) tends to infinity given ρ ≠ 0 as long as Equation (31) holds. Now consider the case when the true model is M_0 in Equation (25), i.e., ρ is 0 and X_i0 is the set of true regressors; ξ_10 then takes the form in Equation (65). If Equation (32) holds, the Bayes factor in Equation (25) tends to 0 for large sample sizes. If M_1 is misspecified but ρ can still be consistently estimated, i.e., ρ*|M_1 = 0, Equation (65) simplifies to exp{(N(T − 1)/2) ln[(T − 1)σ²/((T − 1)σ² + h_3(β|M_1))]} up to the remaining factors. If Equation (21) holds, some true regressors are excluded from M_1 and hence Equation (32) holds with h_3(β|M_1) > 0; if Equation (20) holds, we have h_3(β|M_1) = 0 and k_1 ≥ k_0. The Bayes factor will therefore tend to 0 as N tends to infinity.
Finally, we show when Equation (31) will be violated. If M_0 only includes the true exogenous regressors, we should have h_2(β, ρ|M_0) = h_3(β|M_0) = 0 and a|M_0 > σ² trace(C HC) as in Equation (55), so that Equation (67) should hold. When T = 2, the right hand side of Equation (67), i.e., υ(ρ), is an increasing function with υ(0) = 0; hence υ(ρ) is less than 0 when ρ < 0. The left hand side of Equation (67) can be negative for ρ < 0 if σ² is much larger than E(f_i²) and E(y_{i,0}²). When T is an odd number greater than or equal to 3, υ(ρ) is positive for ρ ∈ ℝ. When T is even and greater than 2, υ(ρ) is positive for ρ ∈ (−1, ∞) and has a root less than −1; if ρ is less than the root, υ(ρ) is negative. By direct calculation, we find that as T increases, the root of υ(ρ) gets closer to −1 from the left. To sum up, Equation (31) will hold for ρ ∈ (−1, ∞) when T is any integer greater than or equal to 3 and h_2(β, ρ|M_0) = h_3(β|M_0) = 0.
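The Laplace approximation invoked above, ∫ exp(Nψ(ρ)) dρ ≈ exp(Nψ(ρ*)) √(2π/(N|ψ″(ρ*)|)) at an interior maximizer ρ*, can be checked numerically. The quadratic ψ below is a toy stand-in, since the paper's ψ depends on the data.

```python
# Numerical sketch of Laplace's method:
#   ∫ exp(N ψ(ρ)) dρ ≈ exp(N ψ(ρ*)) sqrt(2π / (N |ψ''(ρ*)|)),
# where ψ'(ρ*) = 0 and ψ''(ρ*) < 0. The quadratic psi is a toy stand-in
# for the log-marginal-likelihood term in the Bayes factor.
import math

def laplace_approx(psi_at_max, d2psi_at_max, N):
    return math.exp(N * psi_at_max) * math.sqrt(2 * math.pi / (N * abs(d2psi_at_max)))

def psi(r):
    return -0.5 * (r - 0.5) ** 2   # maximizer rho* = 0.5, psi'' = -1

N = 200
lo, hi, m = -1.0, 2.0, 60000
h = (hi - lo) / m
# midpoint-rule benchmark for the exact integral over (lo, hi)
exact = sum(math.exp(N * psi(lo + (i + 0.5) * h)) for i in range(m)) * h
approx = laplace_approx(psi(0.5), -1.0, N)
print(exact, approx)
```

As N grows the integrand concentrates around ρ*, which is why the approximation error vanishes at the rate used in the proof.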

G. Proof of Proposition 8
The likelihood function is given in Equation (71). Taking the log of the likelihood function and solving the first order conditions, we obtain the maximum likelihood estimators. Substituting these into the log of (71) multiplied by −2 and adding the penalty term (the number of parameters multiplied by the natural log of the sample size) yields the BIC equations in (34).
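The construction above is the standard BIC recipe: minus twice the maximized log-likelihood plus k ln(n). A minimal sketch follows, using a simple Gaussian location-scale model as an assumed stand-in for the panel likelihood in Equation (71).

```python
# Sketch of the BIC recipe used above: BIC = -2 * max log-likelihood
# + k * ln(n). Illustrated with a Gaussian location-scale model; the
# paper's panel likelihood (Equation (71)) is not reproduced here.
import math

def bic(loglik, k, n):
    return -2.0 * loglik + k * math.log(n)

def gaussian_mle_loglik(y):
    # MLE: sample mean and (biased) variance, plugged back into the
    # log-likelihood, which collapses to -n/2 * (ln(2*pi*s2) + 1).
    n = len(y)
    mu = sum(y) / n
    s2 = sum((v - mu) ** 2 for v in y) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1.0)

y = [0.9, 1.1, 1.3, 0.7, 1.0, 1.2, 0.8]
ll = gaussian_mle_loglik(y)
print(bic(ll, k=2, n=len(y)))
```

The k ln(n) penalty is what makes BIC consistent under the conditions of Proposition 8; the failures discussed in the text arise because the plug-in MLE of ρ, not the penalty, is biased.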

ξ_10 = (2h_2(ρ)/(T − 1)) exp{Nφ(ρ) + (N(T − 1)/2) ln[c|M_0/((T − 1)σ²)]}. (64)

So we can guarantee that p(Y|Y_0, M_1) dominates asymptotically. Comparing M_1 with another candidate model M_2, we can obtain the corresponding ξ_12,

ξ_12 = O(N^{α(k_1 − k_2)/2}) exp{N[φ(ρ*|M_1) − φ(ρ) + ((T − 1)/2) ln(d(ρ|M_2)/d(ρ*|M_1))]}. (69)

Note that d(ρ|M_2) = (T − 1)σ². So if Equation (33) is satisfied, the Bayes factor is consistent in model selection. If M_2, despite being misspecified, can still lead to consistent estimation of ρ, ξ_12 becomes O(N^{α(k_1 − k_2)/2}).

The probability of making type I errors based on classical test statistics, such as Wald or likelihood ratio (LR), will be 1, and BIC will choose M_1 asymptotically, with the left hand side of Equation (36) being a|M_1 γ²|M_1 > 0; when plim_{N→∞} ρ̂_MLE = 0 and X_i1 = X_i0, the probability of making type II errors will be 1 asymptotically and BIC will choose M_0 even if ρ ≠ 0, with the left hand side of Equation (37) being a|M_1 (ρ + γ|M_1)² = 0. When incidental parameters are present, Cox and Reid [17] suggest using the likelihood conditional on the MLE of the orthogonalized incidental parameters to construct LR statistics. In practice, if we find ρ̂_MLE close to 0 or close to the estimated Nickell bias, we should be cautious about using BIC for model selection.

For Equation (38), as shown in Appendix G, if M_1 is the true model and X_i2 nests X_i1, the left hand side of Equation (38) will be less than or equal to 0 asymptotically, depending on whether a|M_2 is less than or equal to a|M_1. Though Equation (38) is violated when a|M_2 = a|M_1, BIC can still favour M_1 since there are more parameters under M_2. However, if a|M_2 < a|M_1, which could happen when f_i is highly correlated with all the potential regressors, BIC will choose the wrong model M_2 asymptotically, as shown in Section 5.3.

Table 1. Simulation results when both criteria are consistent in model selection.

Table 5. Simulation results when Equation (37) is violated, with ρ = 0.0756 and no exogenous regressors included in the true model.

The errors involve a|M_1 (calculated under the true model) and a|M_2 (calculated under the wrong candidate model): Equation (38) is violated when a|M_1 > a|M_2. For a few cases, a|M_1 is very close to a|M_2. The column with the heading < 0.999 in Table 6 indicates the percentage of the errors for which a|M_2 is smaller than a|M_1 by more than 0.1% of its value. We can see that the majority of the errors happen when a|M_2 is smaller than a|M_1 by more than a tiny fraction.

Table 7. Root mean squared errors (RMSE) of point estimators when N = 40.

Table 8. Error rates of excluding or including a regressor based on different criteria when N = 40.

Table 9. Sums of RMSE and average error rates.

Table 11. Error rates under different levels of serial correlation and collinearity for ρ = 1 and N = 200.