Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices

Abstract: A parametric model whose information matrix is singular at a certain true value of the parameter vector is irregular. In such an irregular case, the maximum likelihood estimator usually converges at a rate slower than the √n-rate attained in the regular case. We propose to estimate such models by the adaptive lasso maximum likelihood and propose an information criterion to select the involved tuning parameter. We show that the penalized maximum likelihood estimator has the oracle properties. The method can implement model selection and estimation simultaneously, and the estimator always has the usual √n-rate of convergence.


Introduction
It has long been noted that some parametric models may have singular information matrices but still be identifiable. For example, Silvey (1959) finds that the score statistic in a single-parameter identifiable model can be zero for all data, and Cox and Hinkley (1974) notice that a zero score can arise in the estimation of variance component parameters. Zero or linearly dependent scores imply that information matrices are singular. Other examples include, among others, parametric mixture models that include one homogeneous distribution (Kiefer 1982), simultaneous equations models (Sargan 1983), the sample selection model (Lee and Chesher 1986), the stochastic frontier function model (Lee 1993), and a finite mixture model (Chen 1995). Some authors have considered the asymptotic distribution of the maximum likelihood estimator (MLE) in irregular cases with singular information matrices. Cox and Hinkley (1974) show that the asymptotic distribution of the MLE of variance components can be found after a power reparameterization. Lee (1993) derives the asymptotic distribution of the MLE for parameters in a stochastic frontier function model with a singular information matrix by several reparameterizations, so that the transformed model has a nonsingular information matrix. Rotnitzky et al. (2000) consider a general parametric model where the rank of the information matrix is one less than the number of parameters, and derive the asymptotic distribution of the MLE by reparameterizations and by investigating higher order Taylor expansions of the first order conditions. Typically, the MLEs of some components of the parameter vector in the irregular case may have slower than √n-rates of convergence and non-normal asymptotic distributions, while the MLE in the regular case has the √n-rate of convergence and is asymptotically normally distributed. As a result, for inference purposes, one may need to first test whether the parameter vector takes a certain value at which the information matrix is singular.
We consider the case where the irregularity of a singular information matrix occurs when a subvector of the parameter vector takes a specific true value, while the information matrix at any other value is nonsingular. For example, a zero true value of a variance parameter in the stochastic frontier function model, and zero true values of the correlation coefficient and of the coefficients of variables in the selection equation of a sample selection model, can lead to singular information matrices (Lee and Chesher 1986). For such a model, if the true value of the subvector is known and imposed, the restricted model will usually have a nonsingular information matrix for the remaining parameters, and the MLE has the usual √n-rate of convergence. This reminds us of the oracle properties of the lasso in linear regressions, i.e., it may select the correct model with probability approaching one (w.p.a.1.) and the resulting estimator behaves as if we knew the true model (Fan and Li 2001). In this paper, we propose to estimate an irregular parametric model by a penalized maximum likelihood (PML) which appends a lasso penalty term to the likelihood function. Without loss of generality, we consider the situation where the information matrix is singular at a zero true value θ_20 of a subvector θ_2 of the parameter vector θ.¹ We expect that a PML with oracle properties for parametric models can avoid the slow rate of convergence and nonstandard asymptotic distribution of the irregular case. We penalize θ_2 using the Euclidean norm, as for the group lasso (Yuan and Lin 2006), since the interest is in whether the whole vector θ_2, rather than its individual components, is zero. The penalty term is constructed to be adaptive by using an initial consistent estimator, as for the adaptive lasso (Zou 2006) and adaptive group lasso (Wang and Leng 2008), so that the PML can have the oracle properties. In the irregular case, the initial estimate used to construct the adaptive penalty term has a slower rate of convergence than those in the literature, but the lasso approach can still be applied if the tuning parameter is properly selected. We prove the oracle properties under regularity conditions. Consequently, the PML can implement model selection and estimation simultaneously. Because the model with θ_20 ≠ 0 and the restricted one with θ_20 = 0 imposed both have nonsingular information matrices, the PML estimator (PMLE) always has the √n-rate of convergence and standard asymptotic distributions.

The PML criterion function has a tuning parameter in the penalty term. In the asymptotic analysis, the tuning parameter is assumed to have a certain order so that the PML can have the oracle properties. In finite samples, the tuning parameter needs to be chosen. For least squares shrinkage methods, the generalized cross validation (GCV) and information criteria such as the Akaike information criterion (AIC) and Bayesian information criterion (BIC) are often used. While the GCV and AIC cannot identify the true model consistently (Wang et al. 2007), the BIC can (Wang and Leng 2007; Wang et al. 2007; Wang et al. 2009). Zhang et al. (2010) propose a general information criterion (GIC) that nests the AIC and BIC and show its consistency in model selection. Following Zhang et al. (2010), we propose to choose the tuning parameter by maximizing an information criterion. We show that the procedure is consistent in model selection under regularity conditions. Because of the irregularity in the model, the proposed information criterion can differ from the traditional AIC, BIC and GIC.

Jin and Lee (2017) show that, in a matrix exponential spatial specification model, the covariance matrix of the gradient vector of the nonlinear two stage least squares (N2SLS) criterion function can be singular when a subvector of the parameter vector has true value zero. They consider the penalized lasso N2SLS estimation of that model. This paper generalizes the lasso method to the ML estimation of the several cited models with singular information matrices. For the model in Jin and Lee (2017), the true parameter vector is in the interior of the parameter space. However, for some irregular models cited above, the true parameter vector is on the boundary of the parameter space. We thus also consider the boundary case in this paper.
The PML approach proposed in this paper can be applied to all of the parametric models with singular information matrices mentioned above, e.g., the sample selection model and the stochastic frontier function model. Since the PMLE has the √n-rate of convergence for the components that are not super-consistently estimated, we expect the PMLE to outperform the unrestricted MLE in finite samples for such models in the irregular case, e.g., in terms of smaller root mean squared errors and shorter confidence intervals. The rest of the paper is organized as follows. Section 2 presents the PML estimation procedure for general parametric models with singular information matrices. Section 3 discusses specifically the PMLEs of the sample selection model and the stochastic frontier function model. Section 4 reports some Monte Carlo results. Section 5 concludes. In Appendix A, we derive the asymptotic distribution of the MLE of the sample selection model in the irregular case. Proofs are in Appendix B.

PMLE for Parametric Models
Let the data (y_1, ..., y_n) be i.i.d. with the probability density function (pdf) f(y; θ_0), a member of the family of pdf's f(y; θ), θ ∈ Θ, if the y's are continuous random variables. If the y's are discrete, f(y; θ) is a probability mass function; if the y's are mixed continuous and discrete random variables, f(y; θ) is a mixed probability mass and density function. Assumption 1 is a standard condition for the consistency of the MLE (Newey and McFadden 1994).
Assumption 1. Suppose that y_i, i = 1, ..., n, are i.i.d. with pdf (or mixed probability mass and density function) f(y_i; θ_0) and (i) f(y_i; θ) ≠ f(y_i; θ_0) with positive probability for any θ ≠ θ_0; (ii) θ_0 ∈ Θ, which is compact; (iii) ln f(y_i; θ) is continuous at each θ ∈ Θ with probability one; and (iv) E[sup_{θ∈Θ} |ln f(y_i; θ)|] < ∞.

Rothenberg (1971) shows that, if the information matrix of a parametric model has constant rank in an open neighborhood of the true parameter vector, then local identification of the parameters is equivalent to nonsingularity of the information matrix at the true parameter vector. Local identification is necessary but not sufficient for global identification. For the examples in the introduction, the information matrix of a parametric model is singular when the true parameter vector takes a certain value, but it is nonsingular at other values. Thus, the result in Rothenberg (1971) does not apply, but the parameters may still be identifiable in all cases.
We consider the case where the information matrix of the likelihood function is singular at θ_0, with a subvector θ_20 of θ_0 being zero. We propose to estimate θ = (θ_1', θ_2')' by maximizing the following penalized likelihood function:

$$\max_{\theta \in \Theta}\ \big[L_n(\theta) - \lambda_n \|\tilde{\theta}_{2n}\|^{-\mu} \|\theta_2\|\big]\, I(\tilde{\theta}_{2n} \neq 0) + L_n(\theta_1, 0)\, I(\tilde{\theta}_{2n} = 0), \qquad (1)$$

where L_n(θ) = (1/n) ∑_{i=1}^n l_i(θ) is the log likelihood function divided by n with l_i(θ) = ln f(y_i; θ), λ_n > 0 is a tuning parameter, µ > 0 is a constant, θ̃_2n is an initial consistent estimator of θ_2, which can be the MLE or any other consistent estimator, ‖·‖ denotes the Euclidean norm, and I(·) is the indicator function. The PMLE θ̂_n maximizes (1).

Assumption 2. θ̃_2n = θ_20 + o_p(1).
The initial estimator θ̃_2n can be zero in value, especially when θ_20 is on the boundary of the parameter space, e.g., a zero variance parameter in the stochastic frontier function model of Section 3.2. With a zero value for θ̃_2n, the PMLE of θ_2 in (1) is set to zero, and the value of the PMLE equals that of the restricted MLE with the restriction θ_2 = 0 imposed. The tuning parameter λ_n must be positive and tend to zero as the sample size increases.
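To make the estimation procedure concrete, below is a minimal Python sketch of the maximization of (1) for a generic model (the paper's own experiments use MATLAB). The function and argument names (pml_estimate, loglik, idx2) are ours; loglik is assumed to return the average log likelihood L_n(θ), and the derivative-free optimizer is one convenient choice, not necessarily the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def pml_estimate(loglik, theta_init, idx2, lam, mu=4.0, tol=1e-5):
    """Maximize the PML criterion (1): L_n(theta) minus the adaptive group
    lasso penalty lam * ||theta2_init||^(-mu) * ||theta_2||. If the initial
    estimate of theta_2 is (numerically) zero, impose theta_2 = 0 and
    maximize the restricted likelihood over theta_1 only."""
    theta_init = np.asarray(theta_init, dtype=float)
    idx2 = np.asarray(idx2)
    norm_init = np.linalg.norm(theta_init[idx2])
    free = np.setdiff1d(np.arange(theta_init.size), idx2)
    if norm_init < tol:
        # restricted MLE branch: theta_2 = 0 imposed
        def neg_restricted(t1):
            th = np.zeros(theta_init.size)
            th[free] = t1
            return -loglik(th)
        res = minimize(neg_restricted, theta_init[free], method="Nelder-Mead")
        theta = np.zeros(theta_init.size)
        theta[free] = res.x
        return theta
    weight = lam * norm_init ** (-mu)  # adaptive weight from the initial estimate
    def neg_penalized(th):
        return -(loglik(th) - weight * np.linalg.norm(th[idx2]))
    res = minimize(neg_penalized, theta_init, method="Nelder-Mead")
    theta = res.x.copy()
    if np.linalg.norm(theta[idx2]) < tol:  # treat tiny estimates as exact zeros
        theta[idx2] = 0.0
    return theta
```

Treating estimates below a small tolerance as exact zeros mirrors the convention used in the Monte Carlo section, where an estimate is regarded as zero if it is smaller than 10^{-5}.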
The PMLE θ̂_n is consistent as long as λ_n tends to zero as n tends to infinity, as required in Assumption 3.
The convergence rate of θ̂_n can be derived under regularity conditions. Let Θ = Θ_1 × Θ_2, where Θ_1 and Θ_2 are, respectively, the parameter spaces of θ_1 and θ_2. We investigate the case where θ_20 is on the boundary of Θ_2 as well as the case where θ_20 is in the interior int(Θ_2) of Θ_2. The rest of the parameters θ_10 are always in the interior of Θ_1. The following regularity condition is required.
Assumption 4 requires, among other conditions, that E[∂l_i(θ_10, 0)/∂θ_1 · ∂l_i(θ_10, 0)/∂θ_1'] exists and is nonsingular when θ_20 = 0, and that E[∂l_i(θ_0)/∂θ · ∂l_i(θ_0)/∂θ'] exists and is nonsingular when θ_20 ≠ 0. In the literature, several irregular models have parameters on the boundary: the model on simplified components of variances in Cox and Hinkley (1974, p. 117), the mixture model in Kiefer (1982), and the stochastic frontier function model in Aigner et al. (1977).² For these models, a scalar parameter θ_2 is always nonnegative, but irregularity occurs when θ_20 = 0 on the boundary. True parameters other than θ_20 are in the interior of their parameter spaces. We thus assume that θ_20 is a scalar when it can be on the boundary of its parameter space.³ Conditions (iv)-(vii) in Assumption 4 are standard. Note that for the partial derivative with respect to θ_2 at a θ_20 on the boundary, only perturbations within Θ_2 are considered, as for the (left/right) partial derivatives in Andrews (1999). The convexity of Θ_1 and Θ_2 makes such derivatives well defined, and convexity is relevant when the mean value theorem is applied to the log likelihood function.
For our main focus in this paper, the information matrix is singular at θ_20 = 0. However, our lasso estimation method is also applicable to regular models whose information matrix is nonsingular even at θ_20 = 0. The following proposition provides such generality.
Proposition 2 derives the rate of convergence of the PMLE θ̂_n in the case of a nonsingular information matrix. When θ_20 ≠ 0, we have assumed in Assumption 4 that the information matrix is nonsingular. When θ_20 = 0, Proposition 2 is relevant in the event that the PML is formulated with a reparameterized model that has a nonsingular information matrix, with θ representing the reparameterized unknown parameters.
We now consider whether the PMLE has the sparsity property, i.e., whether θ̂_2n equals zero w.p.a.1. when θ_20 = 0. For the lasso penalty function, λ_n and the initial consistent estimator θ̃_n are required to have certain orders of convergence for the sparsity property.

2
As pointed out by an anonymous referee, our PML approach can also be applied to interesting economic models such as disequilibrium models and structural change models. For a market possibly in disequilibrium, equilibrium is characterized by a parameter value on the boundary (Goldfeld and Quandt 1975; Quandt 1978). Structural changes can also be characterized by parameters on the boundary. Thus, our PML approach can be applied to those models with singular information matrices.

3
This implies that θ_0 ∈ int(Θ) when θ_20 ≠ 0, which simplifies the later presentation of the asymptotic distribution of the PMLE. In the case that θ_2 ∈ R^{k_2} with k_2 ≥ 2 and θ_20 is allowed to be on the boundary of Θ_2, when θ_20 ≠ 0, some components of θ_20 can still be on the boundaries of their parameter spaces, and the asymptotic distributions of their PMLEs will then be nonstandard.
According to Rotnitzky et al. (2000), in the case that the information matrix is singular with rank one less than the number of parameters k, there exists a reparameterization such that the MLE of one of the transformed parameter components converges at a rate slower than √n, while the remaining k − 1 transformed components converge at the √n-rate. As a result, some components of the MLE in terms of the original parameter vector have slower than √n-rates of convergence, while the remaining components may have the √n-rate. In this case, for θ̃_n as a whole, s < 1/2 in Assumption 5 if θ̃_n is the MLE. Assumption 5 (i) can be satisfied if λ_n is selected to have a relatively slow rate of convergence to 0. The condition differs from those in the literature due to the irregularity issue we are considering. In the case that the PML is formulated with a reparameterized model that has a nonsingular information matrix and θ represents the reparameterized unknown parameter vector, Assumption 5 (ii) is relevant with s = 1/2 if θ̃_n is the MLE.
The oracle properties of the PMLE, including the sparsity property, are presented in Proposition 3.⁴ When θ_20 = 0, the PMLE θ̂_2n of θ_2 equals zero w.p.a.1., and θ̂_1n has the same asymptotic distribution as the MLE would have if we knew θ_20 = 0.

Proposition 3. Under Assumptions 1-5, if θ_20 = 0, then lim_{n→∞} P(θ̂_2n = 0) = 1, and θ̂_1n has the same asymptotic distribution as the restricted MLE of θ_1 with θ_2 = 0 imposed.

We next turn to the case with θ_20 ≠ 0. The consistency of θ̂_n for θ_0 in Proposition 1 guarantees that P(θ̂_2n ≠ 0) goes to 1 if θ_20 ≠ 0. By Proposition 2, in order that θ̂_n converge to θ_0 with √n-consistency and without a first order asymptotic impact of λ_n when θ_20 ≠ 0, we need to select λ_n to converge to zero at the order o(n^{-1/2}).

Assumption 6. λ_n = o(n^{-1/2}).
Assumptions 5 and 6 need to be coordinated, as they impose opposite requirements. Taking λ_n = O(n^{-τ}) for some τ > 1/2, Assumption 6 holds. Assumption 5 (i) can then be satisfied if we take µ large enough that µs > τ > 1/2. For such a τ to exist, it is necessary to take µ > 1/(2s) for a given s. For the regular case in Assumption 5 (ii), the value of µ is relatively more flexible.
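As a concrete illustration (our arithmetic, using the rates reported for the sample selection model in Section 3.1, where the slowest initial rate is s = 1/6), the choices used later in the paper satisfy these restrictions:

$$s = \tfrac{1}{6},\quad \mu = 4 \ \Rightarrow\ \mu s = \tfrac{2}{3} > \tau \ \text{for any } \tau \in \big(\tfrac{1}{2}, \tfrac{2}{3}\big),\qquad \text{so } \lambda_n = O(n^{-\tau}) = o(n^{-1/2}).$$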
Proposition 4. Under Assumptions 1-4 and 6, if θ_20 ≠ 0, then θ̂_n is √n-consistent.

We next consider the selection of the tuning parameter λ_n. To make explicit the dependence of the PMLE θ̂_n on a tuning parameter λ, denote the PMLE θ̂_λ = arg max_{θ∈Θ} {[L_n(θ) − λ‖θ̃_2n‖^{-µ}‖θ_2‖] I(θ̃_2n ≠ 0) + L_n(θ_1, 0) I(θ̃_2n = 0)} for a given λ.⁵ Let Λ = [0, λ_max] be an interval from which the tuning parameter λ is selected, where λ_max is a finite positive number. We propose to select the tuning parameter that maximizes the following information criterion:

$$H_n(\lambda) = L_n(\hat{\theta}_\lambda) + \Gamma_n\, I(\hat{\theta}_{2\lambda} = 0), \qquad (2)$$

where {Γ_n} is a positive sequence of constants and θ̂_2λ is the PMLE of θ_2 for a given λ. That is, given Γ_n, the selected tuning parameter is λ̂_n = arg max_{λ∈Λ} H_n(λ). The term Γ_n is an extra bonus for setting θ_2 to zero. Some conditions on Γ_n are also needed.
Proposition 2 is proved for the case of a nonsingular information matrix, similarly to Fan and Li (2001). That method cannot be used in the case of a singular information matrix. However, the sparsity property can still be established using only the consistency of θ̂_n under Assumption 5 (i).
To balance the order requirements Γ_n → 0 and n^{2s}Γ_n → ∞, Γ_n can be taken to be O(n^{-s}). As this order changes with s, the information criterion in (2) can differ from traditional ones such as the AIC, BIC and Hannan-Quinn information criterion.
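Continuing the Python sketch above, a grid search implementation of this selection rule might look as follows (the function select_lambda and the grid lambdas are ours; gamma_n plays the role of Γ_n):

```python
def select_lambda(loglik, theta_init, idx2, lambdas, gamma_n, mu=4.0, tol=1e-5):
    """Select the tuning parameter maximizing the information criterion (2):
    H_n(lambda) = L_n(theta_hat_lambda) + Gamma_n * I(theta2_hat_lambda = 0),
    where Gamma_n is the bonus for setting theta_2 to zero."""
    best_lam, best_h, best_theta = None, -np.inf, None
    for lam in lambdas:
        theta = pml_estimate(loglik, theta_init, idx2, lam, mu=mu, tol=tol)
        bonus = gamma_n if not np.any(theta[idx2]) else 0.0
        h = loglik(theta) + bonus
        if h > best_h:
            best_lam, best_h, best_theta = lam, h, theta
    return best_lam, best_theta  # selected lambda and the corresponding PMLE
```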
Proposition 5. Under Assumptions 1-7, w.p.a.1., the information criterion H_n(λ) in (2) is not maximized over Λ at any λ that fails to identify the true model.

Proposition 5 states that model selection by the tuning parameter selection procedure is consistent. It implies that any λ that fails to identify the true model would asymptotically not be selected by the information criterion in (2) as an optimal tuning parameter, because such a λ is less favorable than λ̂_n, which identifies the true model asymptotically.

Examples
In this section, we illustrate the PMLEs of the sample selection model and of the stochastic frontier function model. In the irregular case, the true parameter vector is in the interior of its parameter space for the sample selection model, but on the boundary for the stochastic frontier function model.

The Sample Selection Model
We consider the sample selection model in Lee and Chesher (1986), which can have a singular information matrix. The model is

$$y_i^* = z_i'\gamma + u_i, \qquad y_i = x_i'\beta + \epsilon_i, \qquad i = 1, \ldots, n, \qquad (3)$$

where n is the sample size, (x_i, z_i) is the ith observation on exogenous variables, and the vectors (ε_i, u_i), for i = 1, ..., n, are independently distributed as the bivariate normal

$$N\left(0, \begin{pmatrix} \sigma^2 & \rho\sigma \\ \rho\sigma & 1 \end{pmatrix}\right).$$

The variable y_i^* is not observed, but a binary indicator I_i is observed to be 1 if and only if y_i^* ≥ 0, and I_i is 0 otherwise. The variable y_i is observed only when I_i = 1. Let θ = (β', σ², γ', ρ)', β = (β_1, β_2')' and γ = (γ_1, γ_2')', where β_1 and γ_1 are, respectively, the coefficients of the intercept terms in the outcome and selection equations. According to Lee and Chesher (1986), when x_i contains an intercept term but the true values of γ_2 and the correlation coefficient ρ are zero, elements of the score vector are linearly dependent and the information matrix is singular.⁶ For this model, the true parameter vector θ_0 which causes the irregularity is in the interior of the parameter space.
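For reference, here is a sketch of the average log likelihood L_n(θ) of model (3), assuming the standard bivariate normal selection likelihood; the function name and the parameter ordering θ = (β', σ², γ', ρ)' follow our reading of the text:

```python
import numpy as np
from scipy.stats import norm

def selection_loglik(theta, y, X, Z, I):
    """Average log likelihood L_n(theta) of the sample selection model (3).
    For I_i = 0 the contribution is ln Phi(-z_i'gamma); for I_i = 1 it is
    the outcome density times the conditional selection probability."""
    kx, kz = X.shape[1], Z.shape[1]
    beta = theta[:kx]
    sig = np.sqrt(theta[kx])
    gamma = theta[kx + 1:kx + 1 + kz]
    rho = theta[-1]
    zg = Z @ gamma
    sel = I == 1
    ll = np.empty(len(I), dtype=float)
    ll[~sel] = norm.logcdf(-zg[~sel])   # P(I_i = 0) = Phi(-z_i'gamma)
    e = (y[sel] - X[sel] @ beta) / sig  # standardized outcome residuals
    ll[sel] = (norm.logpdf(e) - np.log(sig)
               + norm.logcdf((zg[sel] + rho * e) / np.sqrt(1.0 - rho ** 2)))
    return ll.mean()
```

That is, the I_i = 1 contribution is ln φ((y_i − x_i'β)/σ) − ln σ + ln Φ((z_i'γ + ρ(y_i − x_i'β)/σ)/√(1 − ρ²)), the usual form for this model.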
Let θ_1 = (β', σ², γ_1)' and θ_2 = (γ_2', ρ)'. The PML criterion function for model (3) with the MLE θ̃_2n = (γ̃_2n', ρ̃_n)' is

$$\big[L_n(\theta) - \lambda_n \|\tilde{\theta}_{2n}\|^{-\mu} \|\theta_2\|\big]\, I(\tilde{\theta}_{2n} \neq 0) + L_n(\theta_1, 0)\, I(\tilde{\theta}_{2n} = 0). \qquad (4)$$

Since γ̃_2n = O_p(n^{-1/2}) and ρ̃_n = O_p(n^{-1/6}), Assumptions 5 (i) and 6 hold when µ is greater than 3. By Assumption 7, in the information criterion function (2), Γ_n should satisfy Γ_n → 0 and n^{1/3}Γ_n → ∞. Following the discussion of deriving the asymptotic distribution of the MLE via reparameterizations, the criterion function for the PMLE can alternatively be formulated with the function L_n3(η, r) of the transformed parameters, penalizing ω = (γ_2', r)' with r = ρ³ through the adaptive group lasso term λ_n‖ω̃_n‖^{-µ_1}‖ω‖ as in (1), where η is the transformed parameter vector defined in Appendix A (equation (5)). While γ_2 enters the penalty terms of (4) and (5) in the same way, this is not the case for ρ: it is ρ in (4) but ρ³ in (5). Since L_n3(η, r) has a nonsingular information matrix, by Proposition 2 the PMLE has the order O_p(n^{-1/2} + λ_n), and Assumption 5 (ii) is relevant. Thus, for the PML criterion function (5), as long as µ_1 > 0, no further condition on µ_1 is needed. Furthermore, Assumption 7 for Γ_n in the information criterion function (2), with Γ_n → 0 and nΓ_n → ∞ as n → ∞, is relevant, and we can take Γ_n = O(n^{-1/2}).
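Putting the pieces together for the PMLE-o of this model, a hypothetical usage of the earlier sketches might look as follows; the data arrays y, X, Z, I and the initial MLE theta_mle are assumed to exist, and the index set and λ-grid are ours:

```python
n = len(y)
idx2 = np.array([4, 5])  # positions of gamma_2 and rho when kx = kz = 2
loglik = lambda th: selection_loglik(th, y, X, Z, I)
lam_hat, theta_pmle = select_lambda(
    loglik, theta_mle, idx2,
    lambdas=np.linspace(0.0, 1.0, 21),  # a grid over [0, lambda_max]
    gamma_n=0.26 / np.sqrt(n),          # Gamma_n = 0.26 n^{-1/2}, as in Section 4
    mu=4.0,                             # mu = 4, as in Section 4
)
```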

The Stochastic Frontier Function Model
Consider the following stochastic frontier function model:

$$y_i = x_i'\beta + u_i + v_i, \qquad i = 1, \ldots, n,$$

where x_i is a k-dimensional vector of exogenous variables which contains a constant term, the disturbance u_i ≤ 0 represents technical inefficiency, v_i represents uncontrollable disturbance, and u_i and v_i are independent. Following the literature, u_i is assumed to be half normal with pdf f(u) = [2/(√(2π) σ_1)] exp(−u²/(2σ_1²)), u ≤ 0, and v_i ∼ N(0, σ_2²). As in Aigner et al. (1977), let δ = σ_1/σ_2 and σ² = σ_1² + σ_2². For a random sample of size n, the log likelihood function divided by n is

$$L_n(\theta) = \ln\frac{2}{\sqrt{2\pi}} - \frac{1}{2}\ln\sigma^2 + \frac{1}{n}\sum_{i=1}^{n}\Big[\ln\Phi\Big(-\frac{\delta(y_i - x_i'\beta)}{\sigma}\Big) - \frac{(y_i - x_i'\beta)^2}{2\sigma^2}\Big],$$

where θ = (β', σ², δ)'. In this model, δ is nonnegative and, in the irregular case, the true parameter δ_0 = 0 lies on the boundary, representing the absence of technical inefficiency. According to Lee (1993), when δ_0 = 0, the information matrix is singular and the MLE of δ has the convergence rate n^{-1/6}; when δ_0 ≠ 0, the information matrix has full rank and the MLE has the √n-rate of convergence. The asymptotic distribution of the MLE when δ_0 = 0 is derived by transforming the model into one with a nonsingular information matrix via several reparameterizations. The PML estimation can thus be formulated similarly to that of the sample selection model, using either the original model or the transformed model. Note that, in finite samples, the MLE of δ can be zero with positive probability regardless of whether δ_0 = 0 or not. A necessary and sufficient condition for the MLE of δ to be zero is ∑_{i=1}^n ε̂_i³ ≥ 0, where the ε̂_i's are the least squares residuals (Lee 1993).
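A Python sketch of this average log likelihood, together with Lee's zero-MLE check under our reading of the (garbled) condition as nonnegative skewness of the least squares residuals, is below; the function names are ours:

```python
import numpy as np
from scipy.stats import norm

def frontier_loglik(theta, y, X):
    """Average log likelihood of the stochastic frontier model with
    eps_i = u_i + v_i, half-normal u_i <= 0, v_i ~ N(0, sigma_2^2),
    delta = sigma_1/sigma_2 and sigma^2 = sigma_1^2 + sigma_2^2."""
    k = X.shape[1]
    beta, sig2, delta = theta[:k], theta[k], theta[k + 1]
    eps = y - X @ beta
    return np.mean(np.log(2.0) - 0.5 * np.log(2.0 * np.pi * sig2)
                   - eps ** 2 / (2.0 * sig2)
                   + norm.logcdf(-delta * eps / np.sqrt(sig2)))

def mle_delta_is_zero(y, X):
    """Check the condition for the MLE of delta to be zero: the least
    squares residuals are non-negatively skewed (sum of cubes >= 0)."""
    ehat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum(ehat ** 3)) >= 0.0
```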

Monte Carlo
In this section, we report results from some Monte Carlo experiments for both the sample selection model and the stochastic frontier function model.The code files are written and run in MATLAB.

The Sample Selection Model
For the sample selection model, there are two exogenous variables in x_i in the experiments: an intercept term and a variable drawn randomly from the standard normal distribution. The true vector of coefficients of x_i is (1, 1)'. There are also two exogenous variables in z_i: an intercept term with true coefficient 1, and a variable randomly drawn from the standard normal distribution whose true coefficient is 2, 0.5 or 0. Two values of σ_0², 2 and 0.5, are considered. The true correlation ρ_0 is 0.7, −0.7, 0.3, −0.3 or 0. In the information criterion function (2) for the tuning parameter selection, µ is set to 4 and Γ_n = 0.26n^{-1/2}.⁸ An estimate is regarded as zero if it is smaller than 10^{-5} in magnitude. The number of Monte Carlo repetitions is 1000. The sample sizes considered are n = 200 and 600.
Table 1 reports the probabilities that the PMLEs select the right model, i.e., the probabilities of the PMLEs of θ_2 being zero when θ_20 = 0, and being nonzero when θ_20 ≠ 0. We use PMLE-o and PMLE-t to denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. When γ_20 = 2 or 0.5, with the sample size n = 200, the probabilities are 1 or very close to 1; with the sample size n = 600, all probabilities are 1. When γ_20 = 0 and ρ_0 = 0, the PMLEs estimate θ_2 = (γ_2', ρ)' as zero with high probabilities: higher than 95% for the PMLE-o and higher than 69% for the PMLE-t. The PMLE-o has higher probabilities of estimating θ_2 as zero than the PMLE-t. As the sample size increases from 200 to 600, the correct model selection probabilities of the PMLE-o increase while those of the PMLE-t decrease. When γ_20 = 0 but ρ_0 ≠ 0, the PMLEs estimate θ_2 as nonzero with very low probabilities. With γ_20 = 0, the scores are approximately linearly dependent for |ρ| < 1. In finite samples, even though ρ_0 ≠ 0, identification can be weak and the MLE behaves similarly to the case with ρ_0 = 0, with large bias and variance, as seen in Tables 4 and 5 below. As a result, the PMLEs, which use the MLEs to construct the penalty terms, have low probabilities of estimating θ_2 as nonzero.
Table 2 presents the biases, standard errors (SE) and root mean squared errors (RMSE) of the estimates when γ_20 = 2. For a nonzero true parameter value, the biases, SEs and RMSEs are divided by the absolute value of the true parameter value. The upper panel is for the sample size n = 200. The restricted MLE, denoted MLE-r, usually has the largest bias, because it imposes the wrong restriction θ_2 = 0. The MLE, PMLE-o and PMLE-t have almost identical summary statistics. Their biases and SEs are relatively low; e.g., the biases of the estimates of ρ are all below or equal to 0.012, or 2.5% for a nonzero true ρ_0, and the SEs are all below or equal to 0.246. As the SEs dominate the biases, the RMSEs have magnitudes similar to those of the SEs. As the value of ρ_0 changes, the biases, SEs and RMSEs do not change much. When σ_0² decreases from 2 to 0.5, all estimates of β_1, β_2 and σ² tend to have smaller biases and SEs, but those of γ_1, γ_2 and ρ show little change. As the sample size increases to 600, all estimates have smaller biases, SEs and RMSEs.

8
In theory, the information criterion (2) achieves model selection consistency as long as Γ_n satisfies the order requirement in Assumption 7. However, the finite sample performance depends on the choice of Γ_n. From the proof of Proposition 5, when θ_20 ≠ 0, for large enough n, Γ_n should be smaller than the difference between the values of the expected log density at the true parameter vector and at the probability limit of the restricted MLE with the restriction θ_2 = 0 imposed. When θ_20 = 0, Γ_n should be larger than the difference between the values of the likelihood function divided by n at the MLE and at the restricted MLE. For θ_20 = 0, σ_0² = 2 and n = 200, we compute the second difference 1000 times, and set Γ_n = kn^{-1/2} equal to the sample mean plus 2 times the standard error, which yields k = 0.26. We then set Γ_n = 0.26n^{-1/2} in all cases and for all sample sizes. We also tried setting Γ_n = kn^{-1/2} equal to the sample mean plus zero to four times the standard error. The results are relatively sensitive to the choice of k. We leave the theoretical study of the choice of the constant in Γ_n to future research.
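A sketch of this calibration in Python, continuing the earlier sketches: simulate_data, fit_mle and fit_mle_r are placeholder hooks for the model at hand, and whether "standard error" means the standard deviation of the simulated differences or of their mean is ambiguous in the text (we use the former):

```python
def calibrate_gamma_constant(simulate_data, fit_mle, fit_mle_r, n=200, reps=1000):
    """Calibrate k in Gamma_n = k * n^{-1/2}: simulate under theta_20 = 0,
    record L_n at the MLE minus L_n at the restricted MLE, and set
    k * n^{-1/2} to the sample mean plus two standard errors."""
    diffs = np.empty(reps)
    for r in range(reps):
        data = simulate_data(n)
        # each fit is assumed to return the maximized average log likelihood L_n
        diffs[r] = fit_mle(data) - fit_mle_r(data)
    target = diffs.mean() + 2.0 * diffs.std(ddof=1)
    return target * np.sqrt(n)  # k such that k * n^{-1/2} = target
```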
Table 3 illustrates the biases, SEs and RMSEs of the estimates when γ_20 = 0.5. The patterns are similar to those in Table 2. With a smaller γ_20, the biases and SEs of the estimates of β_2, γ_1 and γ_2 tend to be smaller, but those of β_1, σ² and ρ are larger.
Table 4 reports the biases, SEs and RMSEs when γ_20 = 0 but ρ_0 ≠ 0. We observe that the MLE has relatively large biases and SEs. For n = 200, the biases of the estimates of ρ can be as high as 0.46 in absolute value, or higher than 100%, and the SEs can be as high as 0.72. While the biases of the MLE are usually smaller than those of the MLE-r, the SEs are usually much larger, especially for β_1, σ² and ρ. In terms of RMSEs, the MLE shows no advantage over the MLE-r. The biases of the PMLE-o are usually smaller than those of the MLE-r and larger than those of the MLE, but the SEs of the PMLE-o are generally smaller than those of the MLE. The PMLE-t has smaller biases than the PMLE-o but larger SEs in most cases, behaving more like the MLE. That is consistent with Table 1, since the PMLE-t estimates θ_2 as nonzero with higher probabilities. The RMSEs of the PMLEs are usually smaller than those of the MLE but larger than those of the MLE-r. In this case, even though the PML methods do not provide good probabilities of selecting the nonzero model, the shrinkage feature of the lasso does deliver smaller RMSEs than those of the unconstrained MLE.
The results for γ_20 = 0 and ρ_0 = 0 are reported in Table 5. As expected, the MLE-r usually has the smallest biases, SEs and RMSEs, since it imposes the correct restriction θ_2 = 0. The biases, SEs and RMSEs of the PMLEs are between those of the MLE-r and the MLE. The PMLE-o of β_1, σ², γ_2 and ρ has significantly smaller biases, SEs and RMSEs than the MLE. The biases, SEs and RMSEs of the PMLE-t are smaller than those of the MLE but larger than those of the PMLE-o, since it estimates θ_2 as nonzero with higher probabilities. Note that the MLEs of β_1, σ² and ρ have relatively very large SEs, and the MLE of σ² has very large biases, which can exceed 50%. With a smaller σ_0², the estimates generally have smaller biases, SEs and RMSEs. As n increases to 600, the summary statistics of the PMLE-o become very similar to those of the MLE-r, and all estimates have smaller biases, SEs and RMSEs in general.

The Stochastic Frontier Function Model
In the Monte Carlo experiments for the stochastic frontier function model, there are three explanatory variables in x: an intercept term, a variable randomly drawn from the standard normal distribution, and a variable randomly drawn from the centered chi-squared distribution χ²(2) − 2. The true coefficient vector β_0 of the explanatory variables is (1, 1, 1)'. We fix σ_20² = 1, thus σ_0² = δ_0² + 1, where δ_0 is 2, 1, 0.5, 0.25, 0.1 or 0. For the PML criterion function (1) using the original likelihood function, µ is set to 4, and Γ_n in the information criterion (2) is taken to be Γ_n = 0.1n^{-1/2}, chosen in a way similar to that for the sample selection model. For the PML criterion function using the transformed likelihood function as in (5), 3µ_1 = 4 (i.e., µ_1 = 4/3) and Γ_n = 0.1n^{-1/2}.

Table 6 reports the probabilities that the PMLEs select the right model. For sample size n = 200, when δ_0 = 2, both the PMLE-o and PMLE-t estimate δ as nonzero with probabilities higher than 80%. However, when δ_0 = 1, 0.5, 0.25 or 0.1, the PMLEs estimate δ as nonzero with very low probabilities. With δ_0 = 0, the PMLEs estimate δ as zero with probabilities higher than 85%. There is a weak identification issue for the stochastic frontier function model similar to that for the sample selection model: when δ_0 is close to zero, the scores are approximately linearly dependent, where θ_10 = (β_0', σ_0²)' and ψ_0 = φ(0)/[1 − Φ(0)] enter the dependence relation. Thus, when δ_0 is nonzero but small, the MLE, and hence the PMLEs, can perform poorly, as can be seen from Table 7. When the sample size increases from 200 to 600, the probabilities for δ_0 = 2 and δ_0 = 0 increase, but the others decrease, except that of the PMLE-o with δ_0 = 1.
Table 7 presents the biases, SEs and RMSEs of the MLE, PMLE-o, PMLE-t and the MLE-r with the restriction δ = 0 imposed, even though δ_0 ≠ 0. Since the MLE-r imposes the wrong restriction, it has very large biases for β_1, σ² and δ, but it generally has the smallest SEs. The MLE, PMLE-o and PMLE-t of β_2 and β_3 have similar features. For δ_0 = 2, 1 and 0.5, the biases of the PMLEs of β_1, σ² and δ are generally larger than those of the MLE, but smaller than those of the MLE-r. The SEs of the PMLEs are larger than those of the MLE for δ_0 = 2 and 1, but smaller for smaller values of δ_0. For δ_0 = 0.25 and 0.1, even though the PMLEs estimate δ as zero with high probabilities, they have smaller biases, SEs and RMSEs than the MLE in almost all cases. As the sample size n increases, all estimates have smaller SEs and the MLE has smaller biases, but the biases of the MLE-r and the PMLEs may become smaller or larger.
The biases, SEs and RMSEs of the estimators when δ_0 = 0 are presented in Table 8. The estimators of all estimation methods have similar summary statistics for β_2 and β_3. For the other parameters, the MLE-r has the smallest biases, SEs and RMSEs, since it imposes the correct restriction δ = 0. The PMLEs have much smaller biases, SEs and RMSEs than the MLE. The biases, SEs and RMSEs of the PMLE-o are smaller than those of the PMLE-t. As the sample size increases to 600, the summary statistics of the PMLE-o become very close to those of the MLE-r. For all estimates, we observe smaller biases, SEs and RMSEs at the larger sample size.

Conclusions
In this paper, we investigate the estimation of parametric models with singular information matrices by the PML based on the adaptive (group) lasso. An irregular model has a singular information matrix occurring at a zero subvector θ_20 of the true parameter vector θ_0, while its information matrices at other parameter values are nonsingular. In addition, if we knew that θ_20 is zero, the restricted model would always have a nonsingular information matrix. We show that the PMLEs have the oracle properties. Consequently, the PMLEs always have the √n-rate of convergence, no matter whether θ_20 = 0 or not, while the MLEs usually have slower than √n-rates of convergence and their asymptotic distributions might not be normal when θ_20 = 0. The PML can conduct model selection and estimation simultaneously. As examples, we consider the PMLEs of the sample selection model and the stochastic frontier function model, which can be formulated with either the original structural parameters of interest or transformed parameters. Our Monte Carlo results show that the PMLE formulated with the original parameters generally performs well and outperforms the reparameterized one in terms of smaller RMSEs.


Appendix A. Asymptotic Distribution of the MLE of the Sample Selection Model in the Irregular Case

The log likelihood function divided by n for model (3) is

$$L_n(\theta) = \frac{1}{n}\sum_{i=1}^{n}\Big\{(1 - I_i)\ln\Phi(-z_i'\gamma) + I_i\Big[\ln\phi\Big(\frac{y_i - x_i'\beta}{\sigma}\Big) - \ln\sigma + \ln\Phi\Big(\frac{z_i'\gamma + \rho(y_i - x_i'\beta)/\sigma}{\sqrt{1 - \rho^2}}\Big)\Big]\Big\},$$

with φ(·) being the standard normal pdf. It is known that the variance-covariance matrix of a vector of random variables is positive definite if and only if there is no linear relation among the components of the random vector (Rao 1973, p. 107). Under the assumed regularity conditions, one can easily show that when ρ_0 ≠ 0, the gradients (A2)-(A5) at θ_0 are linearly independent w.p.a.1., and hence the limit of (1/n)I_n(θ_0), where I_n(θ_0) is the information matrix with sample size n, is positive definite. Thus, there are no irregularities in the model when ρ_0 ≠ 0, and the MLE is √n-consistent and asymptotically normal. However, when ρ_0 = 0 together with γ_20 = 0, there are irregularities in the model. With ρ_0 = 0, the first order derivatives are linearly independent as long as x and φ(z'γ_0)/Φ(z'γ_0) are linearly independent, which will usually be the case if z contains some relevant continuous exogenous variables with nonzero coefficients. However, when the non-intercept variables in z have coefficients equal to zero, φ(z_i'γ_0)/Φ(z_i'γ_0) is a constant for all i, and the first component of ∂L_n(α_0, 0)/∂β and ∂L_n(α_0, 0)/∂ρ are linearly dependent, as x contains an intercept term. It follows that the information matrix must be singular. We consider this irregularity below. Let ψ_0 = φ(γ_10)/Φ(γ_10), and let Ξ_n denote the submatrix of the information matrix corresponding to α with sample size n. The limit of Ξ_n/n has full rank under the assumed regularity conditions. Thus, the rank of the information matrix is one less than the total number of parameters. The sample selection model (3) has irregularities similar to the stochastic frontier function model in Lee (1993), with the exception that the true parameter vector is not on the boundary of a parameter space. The asymptotic distribution of its MLE can be derived similarly. The method in Rotnitzky et al. (2000) can also be used, but the method in Lee (1993) is simpler for this particular model.
Consider the transformation of (α', θ_2')' to (ξ', θ_2')' defined by ξ = α − ρK_1, where K_1 = (σ_0ψ_0, 0_{1×(k_x+1)})' with k_x being the number of variables in x. (For the reparameterization in Lee (1993), the parameters σ and ψ in K_1 are not taken to be the true values; both methods work, but the method here might be simpler in computation.) At ρ_0 = 0, ξ_0 = α_0. Define L_n1(ξ, ρ) as the log likelihood divided by n in terms of ξ and θ_2. By (A10), ∂L_n1(ξ_0, 0)/∂ρ = 0, i.e., the derivative of L_n1(ξ, θ_2) with respect to ρ at (ξ_0, 0) is zero. The derivative can be interpreted as a residual vector, and the linear dependence relation (A10) implies that this residual vector must be zero. Furthermore, by (A12) and (A7), a second linear dependence arises, involving the (k_x + 1)th component ξ_{k_x+1} of ξ. This is a second irregularity of the model. Following Lee (1993) and Rotnitzky et al. (2000), consider the transformation of (κ', ρ)' to (η', ρ)' defined by η = κ − (1/2)ρ²K_2, where κ = (ξ', γ_2')' and K_2 = [0_{1×k_x}, 2σ_0²ψ_0(ψ_0 + γ_10), 0_{1×(k_z+1)}]' with k_z being the number of parameters in z, and define the function L_n2(η, ρ) accordingly. At ρ_0 = 0, η_0 = κ_0. By (A13) and the linear dependence relation in (A14), the first and second order derivatives of L_n2(η, ρ) with respect to ρ at (η_0, 0) are zero, so it is necessary to investigate the third order derivative of L_n2(η, ρ) with respect to ρ at (η_0, 0). By (A18) and (A10), this third order derivative is not linearly dependent on ∂L_n2(η_0, 0)/∂η. Under this circumstance, as in Rotnitzky et al. (2000), the asymptotic distribution of the MLE can be derived by investigating higher order Taylor expansions of the first order condition of L_n2(η, ρ). For the stochastic frontier function model, Lee (1993) shows that the asymptotic distribution of the MLE can be derived by considering one more reparameterization. We employ the approach in Lee (1993).¹⁰ A Taylor expansion of ∂L_n2(η_0, ρ)/∂ρ around ρ = 0 up to the second order, using (A20) and (A21), motivates the transformation of (η, ρ) to (η, r) defined by r = ρ³ (A24), with the function L_n3(η, r) defined accordingly. From (A27) and (A23), ∂L_n3(η_0, 0)/∂η and ∂L_n3(η_0, 0)/∂r are linearly independent. Then the information matrix for L_n3(η, r) is nonsingular, and the MLE (η̂_n', r̂_n)' has a standard asymptotic normal distribution. The complete transformation for the model combines the three reparameterizations above, and its inverse recovers the original parameters.


Appendix B. Proofs

Proof of Proposition 2. Let α_n = n^{-1/2} + λ_n. As in Fan and Li (2001), we show that for any given ε > 0, there exists a large enough constant C such that

$$P\Big\{\sup_{\|u\| = C} Q_n(\theta_0 + \alpha_n u) < Q_n(\theta_0)\Big\} \geq 1 - \epsilon,$$

where Q_n denotes the PML criterion function in (1). We consider the two cases θ_20 = 0 and θ_20 ≠ 0 separately. (i) θ_20 = 0. Note that Taylor's theorem still holds when some parameters are on the boundary (Andrews 1999, Theorem 6), as the parameter space is convex. Then, by a first order Taylor expansion in u at 0, w.p.a.1., we obtain (A36), where u_2 is the subvector of u consisting of its last p elements and ū lies between u and 0.
Under Assumption 5 (i), the first term on the l.h.s. of (A36) is of order o_p(1), but the maximum of the components of the second term in absolute value goes to infinity w.p.a.1.; then (A36) cannot hold with positive probability. Under Assumption 5 (ii), the first term on the l.h.s. of (A36) multiplied by n^{-1/2} is of order O_p(1), but the maximum of the components of the second term in absolute value multiplied by n^{-1/2} goes to infinity w.p.a.1.; then (A36) cannot hold with positive probability either. Hence, P(θ̂_2n = 0) → 1 as n → ∞.
Combining the results in the above two cases, we have the result in the proposition.

Table 1 .
Probabilities that the PMLEs of the sample selection model select the right model. When θ_20 ≠ 0, the numbers in the table are the probabilities that the PMLEs of θ_2 are nonzero; when θ_20 = 0, the numbers are the probabilities that the PMLEs of θ_2 are zero. The PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions.

Table 2 .
The biases, standard errors (SE) and root mean squared errors (RMSE) of the estimators when γ_20 = 2 in the sample selection model. The MLE-r denotes the restricted MLE with the restriction θ_2 = 0 imposed, and the PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. (β_10, β_20, γ_10) = (1, 1, 1).

Table 3 .
The biases, SEs and RMSEs of the estimators when γ_20 = 0.5 in the sample selection model.

Table 4 .
The biases, SEs and RMSEs of the estimators when γ_20 = 0 and ρ_0 ≠ 0 in the sample selection model. The MLE-r denotes the restricted MLE with the restriction θ_2 = 0 imposed, and the PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. (β_10, β_20, γ_10) = (1, 1, 1).

Table 5 .
The biases, SEs and RMSEs of the estimators when γ_20 = 0 and ρ_0 = 0 in the sample selection model.

Table 6 .
Probabilities that the PMLEs of the stochastic frontier function model select the right model. When δ_0 ≠ 0, the numbers in the table are the probabilities that the PMLEs of δ are nonzero; when δ_0 = 0, the numbers are the probabilities that the PMLEs of δ are zero. The PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions.

Table 7 .
The biases, SEs and RMSEs of the estimators when δ_0 ≠ 0 in the stochastic frontier function model. The MLE-r denotes the restricted MLE with the restriction δ = 0 imposed, and the PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. β_0 = (1, 1, 1)'. Corresponding to δ_0 = 2, 1, 0.5, 0.25 and 0.1, the true value of σ² is σ_0² = 5, 2, 1.25, 1.0625 and 1.01.

Table 8 .
The biases, SEs and RMSEs of the estimators when δ_0 = 0 in the stochastic frontier function model. The MLE-r denotes the restricted MLE with the restriction δ = 0 imposed, and the PMLE-o and PMLE-t denote the PMLEs obtained from the criterion functions formulated using, respectively, the original and transformed likelihood functions. The three numbers in each cell are bias[SE]RMSE. β_0 = (1, 1, 1)' and σ_0² = 1.