Article

A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

by Michael Pfaffermayr 1,2
1 Department of Economics, University of Innsbruck, Universitaetsstrasse 15, Innsbruck 6020, Austria
2 Austrian Institute of Economic Research, P.O. Box 91, Vienna A-1103, Austria
Econometrics 2014, 2(4), 151-168; https://doi.org/10.3390/econometrics2040151
Submission received: 31 July 2014 / Revised: 10 October 2014 / Accepted: 14 October 2014 / Published: 23 October 2014

Abstract

The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test on whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.
JEL classifications:
C23; C21

1. Introduction

The assumption of bivariate normal and homoskedastic disturbances is a prerequisite for the consistency of the maximum likelihood estimator of the Heckman sample selection model. Moreover, some studies focus on the prediction of counterfactuals based on the Heckman sample selection model, taking into account both changes in participation and outcome, which is often only feasible under the assumption of bivariate normality.1 Lastly, under the assumption of bivariate normality one can then estimate the Heckman sample selection model by maximum likelihood methods that are less sensitive to weak exclusion restrictions.
Before employing alternative semiparametric estimators that do not need the normality assumption (see, e.g., Newey, 2009 [3]), it seems useful to test the underlying normality assumption of sample selection models. So far, the literature offers several approaches to test this hypothesis.2 Bera et al. (1984) [6] develop an LM test for normality of the disturbances in the general Pearson framework, which implies testing the moments up to order four. Lee (1984) [7] proposes Lagrange multiplier tests within the bivariate Edgeworth series of distributions. Van der Klaauw and Koning (1993) [8] derive LR tests in a similar setting, while Montes-Rojas (2011) [9] proposes LM and $C(\alpha)$ tests that are likewise based on bivariate Edgeworth series expansions, but robust to local misspecification in nuisance distributional parameters. In general, these approaches tend to lead to complicated test statistics that are sometimes difficult to implement in standard econometric software. More importantly, some of these tests for bivariate normality exhibit unsatisfactory performance in Monte Carlo simulations and reject too often in small to medium sample sizes, especially if the parameter of the Mills' ratio is high in absolute value (see, e.g., Montes-Rojas, 2011 [9], Table 1). This motivates Montes-Rojas (2011) [9] to focus on the assumptions of the two-step estimator, which are less restrictive, namely a normal marginal distribution of the disturbances of the selection equation and a linear conditional expectation of the disturbances of the outcome equation. He proposes to test for marginal normality and for linearity of the conditional expectation of the outcome model separately, and shows that the corresponding locally size-robust test statistics based on the two-step estimator perform well in terms of size and power.
In a possibly neglected, but very valuable paper, Meijer and Wansbeek (2007) [10] embed the two-step estimator of the Heckman sample selection model in a GMM framework. In addition, they argue that within this framework it is straightforward to add moment conditions for Wald tests of the assumption of bivariate normality and homoskedasticity of the disturbances. Their approach does not attempt to develop a most powerful test; rather, they intend to design a relatively simple test for normality that can be used as an alternative to the existing tests. The test can be interpreted as a conditional moment test and checks whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. For $H_0$ to hold, the test additionally requires normally distributed disturbances of the selection equation and the absence of heteroskedasticity in both the outcome and the selection equation.
Meijer and Wansbeek (2007) [10] do not explicitly derive the corresponding test statistic, nor do they provide Monte Carlo simulations on its performance in finite samples. The present contribution takes up their approach, arguing that a GMM-based pseudo-score LM test is well suited to test the hypothesis of bivariate normality and is easy to calculate. The derived LM test is similar to the widely used Jarque and Bera (1980) [11] LM test, and in the absence of sample selection it reverts to their LM test statistic. Monte Carlo simulations show good performance of the proposed test for samples of size 1000 or larger, especially if a powerful exclusion restriction is available.

2. The GMM-Based Pseudo-Score LM Test for Normality

In a cross-section of n units the Heckman (1979) [12] sample selection model is given as
$$
\begin{aligned}
y_{1i}^{*} &= z_i'\gamma + u_{1i} \\
y_{2i}^{*} &= x_i'\beta + u_{2i} \\
d_i &= \begin{cases} 1 & \text{if } y_{1i}^{*} > 0 \\ 0 & \text{otherwise} \end{cases} \\
y_{2i} &= \begin{cases} y_{2i}^{*} & \text{if } d_i = 1 \\ \text{unobserved} & \text{if } d_i = 0 \end{cases}
\end{aligned}
$$
where $y_{1i}^{*}$ and $y_{2i}^{*}$ denote latent random variables. The outcome variable $y_{2i}^{*}$ is observed if the latent variable $y_{1i}^{*} > 0$ or, equivalently, if $d_i = 1$. $z_i$ is a $k_1 \times 1$ vector containing the exogenous variables of the selection equation and $x_i$ is the $k_2 \times 1$ vector of the exogenous variables of the outcome equation. $z_i$ may include the variables in $x_i$, but also additional ones so that an exclusion restriction holds. $\gamma$ and $\beta$ denote the corresponding parameter vectors. Under $H_0$ the disturbances are assumed to be distributed as bivariate normal, i.e.,
$$
\begin{pmatrix} u_{1i} \\ u_{2i} \end{pmatrix} \Big|\, x_i, z_i \;\sim\; N\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \tau \\ \tau & \sigma^2 \end{pmatrix} \right)
$$
It is easy to show that under these assumptions
$$
p_i \equiv E[d_i] = \Phi(z_i'\gamma), \qquad
\lambda_i \equiv E[u_{1i} \mid u_{1i} > -z_i'\gamma] = \frac{\phi(-z_i'\gamma)}{1 - \Phi(-z_i'\gamma)} = \frac{\phi(z_i'\gamma)}{\Phi(z_i'\gamma)}
$$
where $\lambda_i$ denotes the inverse Mills' ratio. Under the normality assumption one can specify $u_{2i} = \tau u_{1i} + \varepsilon_i$ so that $\varepsilon_i \sim \text{iid}\; N(0, \sigma^2 - \tau^2)$. $\varepsilon_i$ is independent of $u_{1i}$ as $E[\varepsilon_i u_{1i}] = E[(u_{2i} - \tau u_{1i})\, u_{1i}] = \tau - \tau = 0$. Since $E[u_{1i} \mid d_i = 1] = \lambda_i$, it holds that $E[\tau(u_{1i} - \lambda_i) + \varepsilon_i \mid d_i = 1] = 0$. Therefore, the two-step Heckman sample selection model includes the estimated inverse Mills' ratio in the outcome equation as an additional regressor. For the observed outcomes at $d_i = 1$ the model can be written as
$$
y_{2i}^{*} = x_i'\beta + \tau\lambda_i + \tau(u_{1i} - \lambda_i) + \varepsilon_i \equiv w_i'\alpha + \tau v_i + \varepsilon_i
$$
where $v_i = u_{1i} - \lambda_i$ and $E[y_{2i}^{*} \mid d_i = 1] = x_i'\beta + \tau\lambda_i$.
Meijer and Wansbeek (2007) [10] embed the two-step Heckman sample selection estimator in a GMM framework and demonstrate that the estimation can be based on
$$
\begin{aligned}
h_{1,1i}(\theta_1) &\equiv \frac{(d_i - p_i)\,\phi_i}{p_i(1-p_i)}\, z_i && (k_1 \times 1) \\
h_{1,2i}(\theta_1) &\equiv d_i\, w_i\, (y_i - w_i'\alpha) = d_i\, w_i\, (\tau v_i + \varepsilon_i) && (k_2 \times 1) \\
h_{1,3i}(\theta_1) &\equiv d_i\left[(y_i - w_i'\alpha)^2 - \varphi_{2,i}\right] = d_i\left[(\varepsilon_i + \tau v_i)^2 - \varphi_{2,i}\right] && (1 \times 1)
\end{aligned}
$$
where $\theta_1 = (\gamma', \beta', \tau, \sigma)'$ and $\varphi_{k,i} = E\!\left[(\tau v_i + \varepsilon_i)^k \mid d_i = 1\right]$, $k = 2, 3, 4$. Note that there are as many parameters as moment conditions, so the model is just-identified.
The first set of moment equations is based on $\bar h_{1,1}(\theta_1) = \frac{1}{n}\sum_{i=1}^{n} h_{1,1i}(\theta_1)$ and refers to the score of the Probit model. Since these moment conditions do not include the parameters entering $h_{1,2i}$ and $h_{1,3i}$ (i.e., $\beta$, $\tau$, $\sigma$) and are exactly identified, estimation can proceed in steps: In the first step, one solves $\frac{1}{n}\sum_{i=1}^{n} h_{1,1i}(\hat\gamma) = 0$, and in the second step one solves the sample moment condition $\bar h_{1,2}(\theta_1) = \frac{1}{n}\sum_{i=1}^{n} h_{1,2i}(\hat\gamma, \hat\beta, \hat\tau) = 0$ using the estimate $\hat\gamma$ derived in the first stage. This leads to the two-step Heckman estimator, which first estimates a Probit model, inserts the estimated Mills' ratio $\hat\lambda_i$ as an additional regressor in the outcome equation and applies OLS. Lastly, from $\bar h_{1,3}(\theta_1) = \frac{1}{n}\sum_{i=1}^{n} h_{1,3i}(\hat\gamma, \hat\beta, \hat\tau, \hat\sigma) = 0$ one obtains an estimator of $\sigma^2$.
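For illustration, this stepwise procedure can be sketched in a few lines of Python. This is a minimal sketch, not the Stata ado-file mentioned in the Acknowledgments; the function name two_step_heckman is chosen here for exposition, and the $\sigma^2$ step uses the closed form $\varphi_{2,i} = \sigma^2 - \tau^2\lambda_i(\lambda_i + z_i'\gamma)$ implied by the truncated normal distribution under $H_0$.

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

def two_step_heckman(d, y, Z, X):
    """Two-step Heckman estimator: Probit, then OLS with the inverse Mills ratio.

    d: (n,) selection indicator; y: (n,) outcome, used only where d == 1;
    Z: (n, k1) selection regressors; X: (n, k2) outcome regressors (both incl. constant).
    """
    # Step 1: Probit score moments h_{1,1i} -> gamma_hat and the inverse Mills ratio
    gamma_hat = sm.Probit(d, Z).fit(disp=0).params
    zg = Z @ gamma_hat
    lam = norm.pdf(zg) / norm.cdf(zg)

    # Step 2: moments h_{1,2i} -> OLS of y on w_i = (x_i', lambda_i)' for the selected sample
    sel = d == 1
    W = np.column_stack([X[sel], lam[sel]])
    alpha_hat, *_ = np.linalg.lstsq(W, y[sel], rcond=None)
    beta_hat, tau_hat = alpha_hat[:-1], alpha_hat[-1]

    # Step 3: moment h_{1,3i} -> sigma^2, using
    # phi_{2,i} = sigma^2 - tau^2 * lambda_i * (lambda_i + z_i'gamma) under H0
    resid = y[sel] - W @ alpha_hat
    delta = lam[sel] * (lam[sel] + zg[sel])
    sigma2_hat = np.mean(resid**2) + tau_hat**2 * np.mean(delta)
    return gamma_hat, beta_hat, tau_hat, sigma2_hat
```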
As Meijer and Wansbeek (2007) [10] remark, a rough and simple test for normality can be based on two additional moment conditions that compare the third and fourth moments of the estimated residuals of the outcome equation, $y_i - w_i'\hat\alpha$, with their theoretical counterparts based on the truncated normal distribution. These moment conditions use
$$
\begin{aligned}
h_{2,1i}(\theta_1, \theta_2) &\equiv d_i\left[(y_i - w_i'\alpha)^3 - \varphi_{3,i} - \xi\right] = d_i\left[(\tau v_i + \varepsilon_i)^3 - \varphi_{3,i} - \xi\right] && (1 \times 1) \\
h_{2,2i}(\theta_1, \theta_2) &\equiv d_i\left[(y_i - w_i'\alpha)^4 - \varphi_{4,i} - \kappa\right] = d_i\left[(\tau v_i + \varepsilon_i)^4 - \varphi_{4,i} - \kappa\right] && (1 \times 1)
\end{aligned}
$$
Thereby, $\theta_2 = (\xi, \kappa)'$ denotes additional parameters that are zero under normality. More importantly, under $H_0$ the expectations $\varphi_{k,i}$ can be derived recursively from the moments of the truncated normal distribution as shown in the Appendix (see also Meijer and Wansbeek, 2007, pp. 45–46 [10]). In general, these moments depend on the parameters $\theta_1$ and, especially, on the inverse Mills' ratio $\lambda_i$ and the parameter $\tau$.
To detect violations of the normality assumption, one can test $H_0$: $\xi = 0$ and $\kappa = 0$ against $H_1$: $\xi \neq 0$ and/or $\kappa \neq 0$. Although this hypothesis checks the third and fourth moments of the disturbances of the two-step outcome Equation (1), it can only be true if $\varphi_{3,i}$ and $\varphi_{4,i}$ are the correct expected values. Therefore, the test additionally requires the moment conditions $E[h_{1,1i}] = 0$ and $E[h_{1,2i}] = 0$ to hold, so that the parameters of both the selection equation and the outcome equation are consistently estimated. The present hypothesis is somewhat more restrictive than that tested, e.g., in Montes-Rojas (2011) [9], who emphasizes that the Heckman two-step estimator is robust to distributional misspecification if (i) the marginal distribution of $u_{1i}$ is normal and (ii) $E[u_{2i} \mid u_{1i}] = \tau u_{1i}$, i.e., the conditional expectation is linear.3
In addition, $H_0$ also requires the absence of heteroskedasticity (see Meijer and Wansbeek, 2007, p. 46) [10]. To give an example, assume that $u_{1i}$ and $u_{2i} = \tau u_{1i} + \varepsilon_i$ are bivariate normal, but the variances of $\varepsilon_i$ differ across $i$ and are given as $\sigma_i^2 - \tau^2$ (see also DGP6 in the Monte Carlo set-up below and the excess kurtosis of DGP6 in Table 1 below). Then, it follows that $\varphi_{k,i} = \sum_{j=0}^{k}\binom{k}{j} E[\varepsilon_i^{k-j}]\,\tau^j \psi_{j,i}$, where $\psi_{k,i} \equiv E[v_i^k \mid d_i = 1] = \sum_{j=0}^{k}\binom{k}{j}\mu_{j,i}\,(-\lambda_i)^{k-j}$ and $\mu_{j,i} = E[u_{1i}^j \mid u_{1i} > -z_i'\gamma]$ (see the Appendix). In this case, we have $E[\varepsilon_i^2] = \sigma_i^2 - \tau^2$ and $E[\varepsilon_i^4] = 3(\sigma_i^2 - \tau^2)^2$, while the corresponding odd moments are zero. Hence, under heteroskedasticity $\varphi_{4,i}$ differs from that obtained under $H_0$, which assumes $E[\varepsilon_i^2] = \sigma^2 - \tau^2$ and $E[\varepsilon_i^4] = 3(\sigma^2 - \tau^2)^2$, and the population moment condition $E[h_{2,2i}(\theta_1, \theta_2)] = 0$ is violated. Hence, a test based on these moments should also be able to detect heteroskedasticity, although not in the most efficient way.
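A small numerical check makes the fourth-moment violation concrete (the numbers below are arbitrary and purely illustrative; they do not correspond to DGP6): averaging $\varepsilon_i^4$ over heteroskedastic variances yields approximately $3\,E[(\sigma_i^2 - \tau^2)^2]$, which exceeds the homoskedastic value $3(\sigma^2 - \tau^2)^2$ by Jensen's inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# sigma_i^2 - tau^2 varies across i but has the same average as under H0 (0.25 here)
s2_i = 0.25 + 0.2 * rng.uniform(-1.0, 1.0, n)
eps = rng.normal(0.0, np.sqrt(s2_i))        # heteroskedastic normal disturbances

m4_het = np.mean(eps**4)                    # approx. 3 * mean(s2_i**2)
m4_h0 = 3.0 * np.mean(s2_i)**2              # fourth moment assumed under H0
print(m4_het, m4_h0)                        # m4_het > m4_h0 by Jensen's inequality
```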
Applying a pseudo-score LM test (Newey and West, 1987 [13]; Hall, 2005 [14]) in this GMM framework leads to a $\chi^2(2)$ test statistic that can be calculated easily. In order to derive the LM test statistic, define $\bar h(\theta) = \frac{1}{n}\sum_{i=1}^{n} h_i(\theta)$ and $\bar\Psi(\theta) = \frac{1}{n}\sum_{i=1}^{n} h_i(\theta)\, h_i(\theta)'$, where $h_i(\theta) \equiv (h_{1,1i}', h_{1,2i}', h_{1,3i}, h_{2,1i}, h_{2,2i})'$. It is assumed that $\Psi_0 = \operatorname{plim}_{n\to\infty} \bar\Psi(\theta_0)$ exists, is positive definite and invertible. Under standard assumptions, it holds that
$$
n^{1/2}\,\bar h(\theta_0) \xrightarrow{d} N(0, \Psi_0), \qquad n^{1/2}\,(\hat\theta - \theta_0) \xrightarrow{d} N(0, A_0)
$$
where the subscript 0 indicates that $H_0$ is assumed. Thereby, $A_0 = G_0^{-1}\,\Psi_0\, G_0^{-1\prime}$ and $G_0$ is the probability limit of $\bar G(\theta_0) = \frac{1}{n}\sum_{i=1}^{n} \left.\frac{\partial h_i(\theta)}{\partial \theta'}\right|_{\theta = \theta_0}$. Note that $\bar G(\theta_0)$ is invertible as the model is just-identified.
Under $H_0$ the moment conditions $E[h_{2,i}(\theta_1, \theta_2)]$, with $h_{2,i}(\theta_1, \theta_2) \equiv (h_{2,1i}, h_{2,2i})'$, referring to the third and fourth moments of the outcome equation, are zero at $\xi = 0$ and $\kappa = 0$, and the separability result in Ahn and Schmidt (1995, Section 4) [15] can be applied. Denoting the restricted estimates under $H_0$ by a tilde, using the invertibility of $\bar G(\tilde\theta)$ and the partitioned inverse of $\Psi_n(\tilde\theta) = E[\bar\Psi(\tilde\theta)]$, the pseudo-score LM test statistic can be derived as (see the Appendix for details):
$$
LM(\tilde\theta) = n\, \bar h_2(\tilde\theta)' \left[\Psi_{n,22}(\tilde\theta) - \Psi_{n,21}(\tilde\theta)\,\Psi_{n,11}(\tilde\theta)^{-1}\,\Psi_{n,12}(\tilde\theta)\right]^{-1} \bar h_2(\tilde\theta)
$$
Thereby, $\bar h_2(\theta) = \frac{1}{n}\sum_{i=1}^{n} h_{2,i}(\theta)$ and we use $\bar h_1(\tilde\theta) = \frac{1}{n}\sum_{i=1}^{n} h_{1,i}(\tilde\theta) = 0$, where $h_{1,i}(\theta) = (h_{1,1i}', h_{1,2i}', h_{1,3i})'$, as well as the partitioned inverse (see the Appendix)
$$
\begin{aligned}
\Psi_{n,11}(\theta) &= \frac{1}{n}\begin{pmatrix}
Z'VZ & 0 & 0 \\
* & W_1'\Sigma_1 W_1 & \sum_{d_i=1} w_i\,\varphi_{3,i} \\
* & * & \sum_{d_i=1}\left(\varphi_{4,i} - \varphi_{2,i}^2\right)
\end{pmatrix} \\[4pt]
\Psi_{n,22}(\theta) &= \frac{1}{n}\sum_{i=1}^{n}\begin{pmatrix}
p_i\left(\varphi_{6,i} - \varphi_{3,i}^2\right) & p_i\left(\varphi_{7,i} - \varphi_{3,i}\varphi_{4,i}\right) \\
p_i\left(\varphi_{7,i} - \varphi_{3,i}\varphi_{4,i}\right) & p_i\left(\varphi_{8,i} - \varphi_{4,i}^2\right)
\end{pmatrix} \\[4pt]
\Psi_{n,12}(\theta) &= \frac{1}{n}\sum_{i=1}^{n}\begin{pmatrix}
0 & 0 \\
p_i\, w_i\, \varphi_{4,i} & p_i\, w_i\, \varphi_{5,i} \\
p_i\left(\varphi_{5,i} - \varphi_{2,i}\varphi_{3,i}\right) & p_i\left(\varphi_{6,i} - \varphi_{4,i}\varphi_{2,i}\right)
\end{pmatrix}
\end{aligned}
$$
where $V = \operatorname{diag}\!\left(\frac{\phi_1^2}{p_1(1-p_1)}, \ldots, \frac{\phi_n^2}{p_n(1-p_n)}\right)$, $p_i = P(d_i = 1)$, $Z_{(n\times k_1)} = (z_1, \ldots, z_n)'$, $W_{(n\times k_2)} = (w_1, \ldots, w_n)'$, and $\Sigma = \operatorname{diag}(\varphi_{2,1}, \ldots, \varphi_{2,n})$. $\Sigma_1$ is obtained from $\Sigma$ by deleting all rows and columns referring to $d_i = 0$, and similarly $W_1$. $\Psi_n(\theta)$ can be consistently estimated by plugging in $\tilde\theta$. In addition, Meijer and Wansbeek (2007) [10] show that one can substitute $d_i$ for $p_i$ so that only information on the observed units is necessary. Note, however, that the summation runs over all observations (zeros and ones in $d_i$).
Under standard assumptions it follows that under $H_0$ we have $LM(\tilde\theta) \xrightarrow{d} \chi^2(2)$ (see Newey and West, 1987, pp. 781–782 [13] and Theorems 5.6 and 5.7 in Hall, 2005 [14]). In the absence of sample selection ($\tau = 0$) it holds that $\varphi_{3,i} = \varphi_{5,i} = 0$, while $\varphi_{2,i} = \sigma^2$ and $\varphi_{4,i} = 3\sigma^4$, and the LM test statistic reverts to that of Jarque and Bera (1980) [11].
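As a computational sketch, the statistic can be assembled from the stacked per-observation moment contributions evaluated at the restricted estimates. The helper below is illustrative only; for simplicity it uses the outer-product estimate $\bar\Psi(\tilde\theta)$ in place of the expectation-based blocks $\Psi_n(\tilde\theta)$ derived above, which is an assumption of this sketch rather than the formula used in the paper.

```python
import numpy as np
from scipy.stats import chi2

def pseudo_score_lm(H):
    """Pseudo-score LM statistic from an (n x m) array H whose i-th row stacks
    h_i(theta_tilde); the last two columns are h_{2,1i} and h_{2,2i}.

    Simplification: Psi is estimated by the outer product (1/n) sum_i h_i h_i'
    (the paper works with its expectation instead).
    """
    n = H.shape[0]
    hbar2 = H[:, -2:].mean(axis=0)              # sample means of the extra moments
    Psi = (H.T @ H) / n
    P11, P12 = Psi[:-2, :-2], Psi[:-2, -2:]
    P21, P22 = Psi[-2:, :-2], Psi[-2:, -2:]
    S = P22 - P21 @ np.linalg.solve(P11, P12)   # Psi_22 - Psi_21 Psi_11^{-1} Psi_12
    lm = float(n * hbar2 @ np.linalg.solve(S, hbar2))
    return lm, chi2.sf(lm, df=2)                # p-value from the chi^2(2) limit
```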

3. Monte Carlo Simulation

Monte Carlo simulations may shed light on the performance of the proposed LM test in finite samples. The simulations are based on a design that has been used previously by van der Klaauw and Koning (1993) [8] and Montes-Rojas (2011) [9], but include a few modifications. The simulated model is specified as
$$
y_{1i}^{*} = 1\cdot z_{1i} + 1\cdot x_{2i} + 1 + u_{1i}, \qquad
y_{2i}^{*} = 0.5\, x_{1i} - 0.5\, x_{2i} + 1 + u_{2i}
$$
where $\rho \in \{-0.8, -0.4, 0.4, 0.8\}$ and $\sigma^2 \in \{0.25, 1\}$. The explanatory variables $x_{1i}$, $x_{2i}$ and $z_{1i}$ are generated as iid $N(0,3)$, $N(0,3)$ and $U(-3,3)$, respectively. With respect to the disturbances $u_{1i}$ and $u_{2i}$, the following data generating processes are considered (a short simulation sketch follows the description of the experiments below). Note that DGP1–DGP3 imply $Var[u_{1i}] = 1$ and $Var[u_{2i}] = 0.25$. In contrast, van der Klaauw and Koning (1993) [8] and Montes-Rojas (2011) [9] consider the case with $Var[u_{2i}] = 5$ and thus obtain less precise estimates of the slope parameters of the outcome equation.
  • DGP1:
    $(u_{1i}, u_{2i})' \sim \text{iid}\; N\!\left(\begin{pmatrix}0\\0\end{pmatrix}, \begin{pmatrix}1 & 0.5\rho \\ 0.5\rho & 0.25\end{pmatrix}\right)$
  • DGP2:
    $\varepsilon_{1i} \sim t(10)$, $\varepsilon_{2i} \sim t(10)$, $\varepsilon_{1i}$ and $\varepsilon_{2i}$ being independent.
    $u_{1i} = \varepsilon_{1i}\big/(10/8)^{1/2}$
    $u_{2i} = \sigma(1-\rho^2)^{1/2}\,\varepsilon_{2i}\big/(10/8)^{1/2} + \rho\sigma\, u_{1i}$
    The degrees of freedom are set to 10 to guarantee that the moments up to order 4 exist.
  • DGP3:
    $\varepsilon_{1i} \sim \chi^2(20)$, $\varepsilon_{2i} \sim \chi^2(30)$
    $u_{1i} = (\varepsilon_{1i} - 20)/\sqrt{40}$
    $u_{2i} = \sigma(1-\rho^2)^{1/2}\,(\varepsilon_{2i} - 30)/\sqrt{60} + \rho\sigma\, u_{1i}$
  • DGP4:
    $\varepsilon_{1i} \sim N(0,1)$, $\varepsilon_{2i} \sim \chi^2(30)$ and are independent.
    $u_{1i} = \varepsilon_{1i}$
    $u_{2i} = \sigma(1-\rho^2)^{1/2}\,(\varepsilon_{2i} - 30)/\sqrt{60} + \rho\sigma\, u_{1i}$
  • DGP5:
    $\varepsilon_{1i} \sim \chi^2(20)$, $\varepsilon_{2i} \sim N(0,1)$ and are independent.
    $u_{1i} = (\varepsilon_{1i} - 20)/\sqrt{40}$
    $u_{2i} = \sigma(1-\rho^2)^{1/2}\,\varepsilon_{2i} + \rho\sigma\, u_{1i}$
  • DGP6:
    $\varepsilon_{1i} \sim N(0,1)$, $\varepsilon_{2i} \sim N(0, 0.25)$, $\varepsilon_{1i}$ and $\varepsilon_{2i}$ being independent.
    $c_i = \left[1 + \left(e^{x_{1i}/\sqrt{3}} - e^{1/2}\right)\big/\left(e(e-1)\right)^{1/2}\right]^{1/2}$
    $u_{1i} = \varepsilon_{1i}$
    $u_{2i} = \sigma(1-\rho^2)^{1/2}\, 2 c_i\, \varepsilon_{2i} + \rho\sigma\, u_{1i}$
  • DGP7:
    $\varepsilon_{1i} \sim N(0,1)$, $\varepsilon_{2i} \sim N(0,1)$, $\varepsilon_{1i}$ and $\varepsilon_{2i}$ being independent.
    $c_i = \left[1 + \left(e^{x_{2i}/\sqrt{3}} - e^{1/2}\right)\big/\left(e(e-1)\right)^{1/2}\right]^{1/2}$
    $u_{1i} = c_i\,\varepsilon_{1i}$
    $u_{2i} = \sigma(1-\rho^2)^{1/2}\,\varepsilon_{2i} + \sigma\rho\, u_{1i}$
DGP1 serves as a reference to assess the size of the pseudo-score LM test. The second DGP deviates from the bivariate normal in terms of a higher kurtosis, while DGP3 exhibits both higher skewness and kurtosis than the normal. DGP4 allows for deviations from normality in the outcome equation, while keeping the normality assumption in the selection equation. DGP5 reverses this pattern: the disturbances of the outcome equation are normal and those of the selection equation are not. DGP6 and DGP7 introduce heteroskedasticity in the outcome or the selection equation, respectively. In case of the latter two, the variances of $u_{1i}$ and $u_{2i}$ are normalized to an average of 1 and 0.25, respectively. Note that the explanatory variables are held fixed in repeated samples.
Overall, for these DGPs four experiments are considered. In the baseline Experiment 1 (first row of the figures of graphs) 37% of the data remain unobserved, and in the absence of sample selection the implied $R^2$ amounts to $1 - 0.25/1.75 = 0.86$, using $Var(u_{2i}) = 0.25$ and $Var(y_{2i}^{*}) = 1.75$. Experiment 2 (second row of the figures of graphs) analyzes the performance of the Heckman two-step estimator under a weaker exclusion restriction, assuming $z_{1i} \sim \text{iid}\; U(-1,1)$ so that $Var(z_{1i}) = 1/3$. Experiment 3 (third row of the figures of graphs) sets the constant of the selection equation to zero so that 49% instead of 37% of the units are unobserved. Lastly, Experiment 4 (fourth row of the figures of graphs) considers a weaker fit in the outcome equation, setting $Var(u_{2i}) = 1$ so that in the absence of sample selection we have $R^2 = 0.43$.
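For concreteness, a single baseline draw (Experiment 1 under DGP1) can be generated along the following lines. This is an illustrative sketch of the design as reconstructed above, not the author's simulation code; the seed and the value of ρ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho, sigma = 1000, 0.4, 0.5            # sigma^2 = 0.25 in the baseline

# Regressors (held fixed over replications in the paper)
x1 = rng.normal(0.0, np.sqrt(3.0), n)
x2 = rng.normal(0.0, np.sqrt(3.0), n)
z1 = rng.uniform(-3.0, 3.0, n)            # Var(z1) = 3; provides the exclusion restriction

# DGP1: bivariate normal disturbances with Corr(u1, u2) = rho
cov = np.array([[1.0, sigma * rho], [sigma * rho, sigma**2]])
u1, u2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Latent indices and observed data (Experiment 1: roughly 37% of units unobserved)
y1_star = z1 + x2 + 1.0 + u1
y2_star = 0.5 * x1 - 0.5 * x2 + 1.0 + u2
d = (y1_star > 0).astype(int)
y2 = np.where(d == 1, y2_star, np.nan)
print(f"share unobserved: {1 - d.mean():.2f}")
```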
Table 1 summarizes the average variance, skewness and kurtosis of the generated disturbances $u_{1i}$ and $u_{2i}$ under Experiment 1. In DGP2–DGP7, depending on ρ, the average kurtosis of $u_{2i}$ varies between 3.00 and 5.68, while the kurtosis of $u_{1i}$ lies between 3.07 and 3.58 in DGP5. In the other DGPs the kurtosis of $u_{1i}$ is held constant, taking values 2.99 (DGPs 1, 4 and 6), 3.97 (DGP2), 3.58 (DGP3) and 5.71 (DGP7), respectively. The skewness coefficient of the generated disturbances is zero for all DGPs except for DGP3, with corresponding values of 0.63 ($u_{1i}$) and −0.21 to 0.43 ($u_{2i}$), and DGP5, where the skewness of $u_{1i}$ varies between 0.14 and 0.63.
Table 1. Variance, Skewness and Kurtosis of the simulated disturbances.
DGP    ρ       Var(u1)  Skew(u1)  Kurt(u1)   Var(u2)  Skew(u2)  Kurt(u2)
1      all     1.00     0.00      2.99       0.25     0.00      2.99
2      −0.8    1.00     0.00      3.97       0.25     0.00      3.52
2      −0.4    1.00     0.00      3.97       0.25     0.00      3.70
2      0.0     1.00     0.00      3.97       0.25     0.00      3.96
2      0.4     1.00     0.00      3.97       0.25     0.00      3.70
2      0.8     1.00     0.00      3.97       0.25     0.00      3.52
3      −0.8    1.00     0.63      3.58       0.25     −0.21     3.29
3      −0.4    1.00     0.63      3.58       0.25     0.35      3.28
3      0.0     1.00     0.63      3.58       0.25     0.51      3.38
3      0.4     1.00     0.63      3.58       0.25     0.43      3.28
3      0.8     1.00     0.63      3.58       0.25     0.43      3.29
4      −0.8    1.00     0.00      2.99       0.25     0.11      3.05
4      −0.4    1.00     0.00      2.99       0.25     0.39      3.27
4      0.0     1.00     0.00      2.99       0.25     0.51      3.38
4      0.4     1.00     0.00      2.99       0.25     0.39      3.27
4      0.8     1.00     0.00      2.99       0.25     0.11      3.04
5      −0.8    1.00     0.63      3.58       0.25     0.00      2.99
5      −0.4    1.00     0.63      3.58       0.25     0.00      2.99
5      0.0     1.00     0.63      3.58       0.25     0.00      2.99
5      0.4     1.00     0.63      3.58       0.25     0.00      2.99
5      0.8     1.00     0.63      3.58       0.25     0.00      2.99
6      −0.8    1.00     0.00      2.99       0.25     0.00      3.34
6      −0.4    1.00     0.00      2.99       0.25     0.00      4.89
6      0.0     1.00     0.00      2.99       0.25     0.00      5.68
6      0.4     1.00     0.00      2.99       0.25     0.00      4.89
6      0.8     1.00     0.00      2.99       0.25     0.00      3.35
7      −0.8    0.99     0.00      5.71       0.25     0.00      4.11
7      −0.4    0.99     0.00      5.71       0.25     0.00      3.06
7      0.0     0.99     0.00      5.71       0.25     0.00      2.99
7      0.4     0.99     0.00      5.71       0.25     0.00      3.06
7      0.8     0.99     0.00      5.71       0.25     0.00      4.11
Following Davidson and MacKinnon (1998) [16], size and power are analyzed in terms of size-discrepancy and power-size curves. The former is based on the empirical cumulative distribution function of the p-values $p_r$, defined as $F(q) = \frac{1}{R}\sum_{r=1}^{R} I(p_r \le q)$, where $R$ is the number of Monte Carlo replications. The size-discrepancy curves are defined as plots of $F(q) - q$ against $q$ under the assumption that $H_0$ holds and DGP1 is the correct one. In addition, one can use a Kolmogorov–Smirnov test to see whether $F(q) - q$ differs significantly from 0 (see Davidson and MacKinnon 1998, p. 11) [16]. The size-power curves plot power against size, i.e., $F_{H_1}(q)$ against $F_{H_0}(q)$. In both plots $q \in [0, 0.15]$ and the step size is 0.001. An important feature of this procedure is that it avoids size adjustments of the power curves if the tests reject too often under $H_0$.
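These curves are straightforward to compute from the simulated p-values. The helper names below are illustrative; the last comment states an assumption: if the ±0.0096 band quoted below is the usual pointwise 5% Kolmogorov–Smirnov bound $1.36/\sqrt{R}$, it would correspond to roughly $R \approx 20{,}000$ replications, which is not stated explicitly in the text.

```python
import numpy as np

def ecdf_on_grid(pvals, grid):
    """F(q) = (1/R) * #{r : p_r <= q}, evaluated on a grid of nominal sizes q."""
    p = np.sort(np.asarray(pvals))
    return np.searchsorted(p, grid, side="right") / p.size

q_grid = np.arange(0.001, 0.1501, 0.001)          # q in [0, 0.15], step 0.001

def size_discrepancy(p_h0, grid=q_grid):
    """F_H0(q) - q, plotted against q (p-values simulated under DGP1)."""
    return ecdf_on_grid(p_h0, grid) - grid

def power_size_curve(p_h0, p_h1, grid=q_grid):
    """Returns (F_H0(q), F_H1(q)) so that power can be plotted against size."""
    return ecdf_on_grid(p_h0, grid), ecdf_on_grid(p_h1, grid)

# Assumption: a pointwise 5% Kolmogorov-Smirnov band of +/- 1.36/sqrt(R);
# +/- 0.0096 would then correspond to roughly R = 20,000 replications.
def ks_band(R):
    return 1.36 / np.sqrt(R)
```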
Figure 1 exhibits the size-discrepancy plots for Experiments 1–4 and sample sizes $n = 500, 1000, 2000$. The plots show that the pseudo-score LM test is properly sized for $\rho = -0.4$ and $\rho = 0.4$ in all experiments, while it slightly over-rejects at $\rho = -0.8$ and $\rho = 0.8$, especially at a small sample size ($n = 500$). For example, at a nominal test size of 0.05 and a sample size of 1000, the size of the LM test is too high by 0.012 at $\rho = 0.8$. For $\rho = -0.4$ and $\rho = 0.4$ the size-discrepancy is within the Kolmogorov–Smirnov 5% confidence bound of ±0.0096 for p-values smaller than 0.1. A similar result has also been mentioned in Montes-Rojas (2011) [9] for the robust LM and $C(\alpha)$ tests. A weaker exclusion restriction, setting $Var(z_{1i}) = 1/3$ in Experiment 2, increases the size-discrepancy at high absolute values of ρ (Experiment 2, row 2 of Figure 1), but hardly affects the size of the test at $|\rho| = 0.4$; the size-discrepancy remains within the confidence bounds at medium values of ρ. Increasing the share of unobserved values to 0.49 (Experiment 3, row 3 of Figure 1) hardly affects the size-discrepancy. Lastly, Experiment 4 (last row of Figure 1) shows that a weaker fit ($Var(u_{2i}) = 1$) does not result in a larger size distortion as compared to the baseline in the first row of Figure 1. As one would expect, a larger number of observations generally enhances the performance of the LM test (see the last column in Figure 1). However, the large-sample approximation improves relatively slowly with sample size under a weak exclusion restriction at high absolute values of ρ (cf. the second row of graphs in Figure 1).
Figure 2, Figure 3 and Figure 4 present the power-size plots of the pseudo-score LM test for DGPs 2–3, 4–5 and 6–7, respectively. In general, and in line with the literature, for all DGPs referring to the alternative hypothesis we observe lower power of the pseudo-score LM test at high absolute values of ρ, especially so at $\rho = -0.8$. If the distribution of the disturbances of the outcome equation exhibits both skewness and excess kurtosis (DGP3), the simulated power of the pseudo-score LM test is higher than under a symmetric distribution with fatter tails than the normal, except at $\rho = -0.8$. Furthermore, for DGP3 the power is generally lower at $\rho = -0.8$ as compared to large positive values ($\rho = 0.8$), which reflects differences in the skewness of the distribution of $u_{2i}$ with respect to ρ (cf. Table 1).
Figure 3 illustrates the power of the pseudo-score LM test under non-normality in either the outcome (DGP4) or the selection equation (DGP5), but not in both. Under DGP4 the pseudo-score LM test exhibits high power at intermediate absolute values of ρ, while at high absolute values of ρ the power tends to be lower, as the weight of $u_{1i}$ (which is assumed to be normal) in the disturbances of the outcome equation is higher. In case of DGP5 the pattern is reversed: deviations from normality are only detected at high absolute values of ρ. Indeed, under DGP5 the test has no power at all at $\rho = 0$, since in this case the truncation of $u_{1i}$ has no effect and the disturbances of the outcome equation are normal. This result can be found in all four experiments considered.
Figure 4 presents the size-power plots of DGP6 and DGP7 and refers to heteroskedasticity. DGP6 allows for heteroskedasticity in the outcome equation and DGP7 in the selection equation. The power-size curves indicate that the pseudo-score LM test is also able to detect this type of deviation from the model assumptions, as heteroskedasticity translates into pronounced excess kurtosis of the disturbances of the outcome equation. For DGP6 this is the case at medium to low values of |ρ|. DGP7 introduces heteroskedasticity in the Probit selection model. In this case, the LM test exhibits power at high absolute values of ρ, but has virtually no power at $\rho = -0.4$ and $0.4$. The reason is that the nominal kurtosis of $u_{2i}$ is hardly affected (amounting to 3.06, see Table 1), and the bias of the Mills' ratio and of the estimated coefficients of the outcome equation, especially that of the Mills' ratio, turns out to be small in comparison.
Figure 1. Size-discrepancy plot.
Figure 2. Size-power plot, DGP1–DGP3, n = 1000.
Figure 3. Size-power plot, DGP1, DGP4 and DGP5, n = 1000.
Figure 4. Size-power plot, DGP1, DGP6 and DGP7, n = 1000.
Comparing the first and second row of graphs in Figure 2, Figure 3 and Figure 4 indicates that not much power is lost with the weaker exclusion restriction. A higher share of unobserved units tends to slightly reduce the power of the LM test, as one would expect (see the graphs in row 3 vs. row 1 in Figure 2, Figure 3 and Figure 4). Comparing the first and the last row in Figure 2, Figure 3 and Figure 4 indicates that a weaker fit (i.e., $Var(u_{2i})$ increased from 0.25 to 1) does not result in a significant loss of power. Lastly, as expected, a larger sample size improves the power of the pseudo-score LM test across the board.4

4. Conclusions

Using Meijer and Wansbeek's (2007) [10] GMM approach for two-step estimators of the Heckman sample selection model, this paper introduces a pseudo-score LM test to check the assumption of normality and homoskedasticity of the disturbances, a prerequisite for the consistency of this estimator. The GMM-based pseudo-score LM test is easy to calculate and similar to the widely used Jarque and Bera (1980) [11] LM test. Indeed, in the absence of sample selection it reverts to their LM test statistic. In particular, the test checks whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. Under $H_0$, normally distributed disturbances of the selection equation and the absence of heteroskedasticity in both the outcome and the selection equation are additionally required.
Monte Carlo simulations show good performance of the pseudo-score LM test for samples of size 1000 or larger and a powerful exclusion restriction. However, in line with other tests of the normality assumption of the Heckman sample selection model proposed in the literature the pseudo-score LM test tends to be oversized, although only slightly, if the correlation of the disturbances of the selection and the outcome equation is high in absolute value or if the exclusion restrictions are weak. Hence, this test can be recommended for sample sizes of 1000 or larger.

Acknowledgments

I am very grateful to Tom Wansbeek and two anonymous referees for detailed and constructive comments on an earlier draft. A Stata ado-file for this test is available at: http://homepage.uibk.ac.at/ c43236/publications.html.

Appendix

Deriving $E\!\left[(\tau v_i + \varepsilon_i)^k \mid d_i = 1\right]$:

Let $Z \sim N(0,1)$ and consider $\mu_k(a_i) = E[Z^k \mid Z > a_i]$, $k = 1, 2, \ldots$. The derivation of the moments of $Z \mid Z > a_i$ uses the following recursive formula (Meijer and Wansbeek, 2007, p. 45) [10]:
$$
\mu_0(a_i) = 1, \qquad \mu_1(a_i) = \lambda_i, \qquad \mu_k(a_i) = (k-1)\,\mu_{k-2}(a_i) + a_i^{k-1}\lambda_i, \quad k \geq 2
$$
Setting $a_i = -z_i'\gamma$ and abbreviating $\mu_k(a_i) = \mu_{k,i}$, one obtains
$$
\psi_{k,i} \equiv E[v_i^k \mid d_i = 1] = \sum_{j=0}^{k}\binom{k}{j}\,\mu_{j,i}\,(-\lambda_i)^{k-j}
$$
and based on these results one can calculate the moments of $(\tau v_i + \varepsilon_i)^k$ as
$$
\varphi_{k,i} \equiv E\!\left[(\tau v_i + \varepsilon_i)^k \mid d_i = 1\right]
= E\!\left[\sum_{j=0}^{k}\binom{k}{j}\,\varepsilon_i^{k-j}(\tau v_i)^j \,\Big|\, d_i = 1\right]
= \sum_{j=0}^{k}\binom{k}{j}\,\mu_{\varepsilon, k-j}\,\tau^j\,\psi_{j,i}
$$
where $\mu_{\varepsilon, k-j} \equiv E[\varepsilon_i^{k-j}]$.
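These recursions are easy to implement. The sketch below uses illustrative helper names and assumes SciPy's standard normal density and survival functions; it computes $\mu_{k,i}$, $\psi_{k,i}$ and $\varphi_{k,i}$ under $H_0$ with truncation point $a_i = -z_i'\gamma$ and $\varepsilon_i \sim N(0, \sigma^2 - \tau^2)$.

```python
import numpy as np
from math import comb
from scipy.stats import norm

def truncated_normal_moments(a, kmax):
    """mu_k(a) = E[Z^k | Z > a] for Z ~ N(0,1):
    mu_0 = 1, mu_1 = lambda = phi(a)/(1 - Phi(a)), mu_k = (k-1) mu_{k-2} + a^(k-1) lambda."""
    lam = norm.pdf(a) / norm.sf(a)
    mu = [1.0, lam]
    for k in range(2, kmax + 1):
        mu.append((k - 1) * mu[k - 2] + a ** (k - 1) * lam)
    return mu, lam

def phi_moments(a, tau, sigma2, kmax=8):
    """phi_k = E[(tau*v + eps)^k | d=1] under H0, with v = u1 - lambda and
    eps ~ N(0, sigma^2 - tau^2) independent of u1; a = -z'gamma is the truncation point."""
    mu, lam = truncated_normal_moments(a, kmax)
    # psi_k = E[v^k | d=1] = sum_j C(k,j) mu_j (-lambda)^(k-j)
    psi = [sum(comb(k, j) * mu[j] * (-lam) ** (k - j) for j in range(k + 1))
           for k in range(kmax + 1)]
    # moments of eps ~ N(0, s2): zero for odd k, (k-1)!! * s2^(k/2) for even k
    s2 = sigma2 - tau**2
    m_eps = [0.0 if k % 2 else s2 ** (k // 2) * np.prod(np.arange(1, k, 2))
             for k in range(kmax + 1)]
    # phi_k = sum_j C(k,j) E[eps^(k-j)] tau^j psi_j
    return [sum(comb(k, j) * m_eps[k - j] * tau**j * psi[j] for j in range(k + 1))
            for k in range(kmax + 1)]
```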

Pseudo-score LM test:

Denoting the GMM estimates under $H_0$ by $\tilde\theta$, the pseudo-score LM test can be written as (see Hayashi, 2000, pp. 491–493 [17]; Newey and West, 1987, p. 780 [13]; and Hall, 2005, p. 162 [14]):5
$$
LM = n\, \bar h(\tilde\theta)'\, \Psi_n(\tilde\theta)^{-1}\, \bar G(\tilde\theta)\left(\bar G(\tilde\theta)'\, \Psi_n(\tilde\theta)^{-1}\, \bar G(\tilde\theta)\right)^{-1} \bar G(\tilde\theta)'\, \Psi_n(\tilde\theta)^{-1}\, \bar h(\tilde\theta)
$$
where $\Psi_n(\tilde\theta) = E[\bar\Psi(\tilde\theta)]$ is a consistent estimator of $\Psi_0$ under $H_0$. Using the fact that $\bar G(\tilde\theta)$ is invertible yields the LM test statistic as
$$
LM = n\, \bar h(\tilde\theta)'\, \Psi_n(\tilde\theta)^{-1}\, \bar h(\tilde\theta)
$$
which can be further simplified using the partitioned inverse
$$
LM = n\, \bar h_2(\tilde\theta)' \left[\Psi_{n,22}(\tilde\theta) - \Psi_{n,21}(\tilde\theta)\,\Psi_{n,11}(\tilde\theta)^{-1}\,\Psi_{n,12}(\tilde\theta)\right]^{-1} \bar h_2(\tilde\theta)
$$
since $\bar h_1(\tilde\theta) = 0$.

Variance of moments:

Under fairly general conditions (see Amemiya, 1985, Section 3.4) [18], $\lim_{n\to\infty} E[\bar\Psi(\theta)] = \operatorname{plim}_{n\to\infty} \bar\Psi(\theta)$, and in the formulas for the asymptotic covariance matrix one can replace $\bar\Psi(\theta)$ by its expectation. Note that $\Psi_n(\theta_0) = E[\bar\Psi(\theta_0)]$ can be estimated consistently in the usual way by $\Psi_n(\tilde\theta)$. To obtain the estimate $\Psi_n(\tilde\theta)$, we partition $\Psi_n(\theta)$ in accordance with $\bar h(\theta) = (\bar h_1(\theta)', \bar h_2(\theta)')'$ as
$$
\Psi_n(\theta) = \begin{pmatrix} \Psi_{n,11}(\theta) & \Psi_{n,12}(\theta) \\ \Psi_{n,12}(\theta)' & \Psi_{n,22}(\theta) \end{pmatrix}
$$
Using
$$
h_{1,i}(\theta)\, h_{1,i}(\theta)' =
\begin{pmatrix}
\frac{(d_i - p_i)\phi_i}{p_i(1-p_i)}\, z_i \\[2pt]
d_i\, w_i\, (\tau v_i + \varepsilon_i) \\[2pt]
d_i\left[(\tau v_i + \varepsilon_i)^2 - \varphi_{2,i}\right]
\end{pmatrix}
\begin{pmatrix}
\frac{(d_i - p_i)\phi_i}{p_i(1-p_i)}\, z_i \\[2pt]
d_i\, w_i\, (\tau v_i + \varepsilon_i) \\[2pt]
d_i\left[(\tau v_i + \varepsilon_i)^2 - \varphi_{2,i}\right]
\end{pmatrix}'
=
\begin{pmatrix}
\frac{(d_i - p_i)^2\phi_i^2}{p_i^2(1-p_i)^2}\, z_i z_i' &
\frac{(d_i - p_i)\phi_i}{p_i(1-p_i)}\, d_i\, (\tau v_i + \varepsilon_i)\, z_i w_i' &
\frac{(d_i - p_i)\phi_i}{p_i(1-p_i)}\, d_i\left[(\tau v_i + \varepsilon_i)^2 - \varphi_{2,i}\right] z_i \\
* & d_i\, w_i w_i'\, (\tau v_i + \varepsilon_i)^2 &
d_i\left[(\tau v_i + \varepsilon_i)^3 - \varphi_{2,i}(\tau v_i + \varepsilon_i)\right] w_i \\
* & * & d_i\left[(\tau v_i + \varepsilon_i)^2 - \varphi_{2,i}\right]^2
\end{pmatrix}
$$
one obtains for the off-diagonal elements:
$$
\begin{aligned}
\frac{1}{n}\sum_{i=1}^{n} E\!\left[\frac{(d_i - p_i)\phi_i}{p_i(1-p_i)}\, d_i\, (\tau v_i + \varepsilon_i)\, z_i w_i'\right] &= 0 \\
\frac{1}{n}\sum_{i=1}^{n} E\!\left[\frac{(d_i - p_i)\phi_i}{p_i(1-p_i)}\, d_i\left[(\tau v_i + \varepsilon_i)^2 - \varphi_{2,i}\right] z_i\right] &= 0 \\
\frac{1}{n}\sum_{i=1}^{n} E\!\left[d_i\left[(\varepsilon_i + \tau v_i)^3 - \varphi_{2,i}(\tau v_i + \varepsilon_i)\right] w_i\right] &= \frac{1}{n}\sum_{i=1}^{n} p_i\, \varphi_{3,i}\, w_i
\end{aligned}
$$
Some of the explanatory variables summarized in $w_i$ may not be observed at $d_i = 0$. However, one can use the reasoning in Meijer and Wansbeek (2007) [10] and establish
$$
\begin{aligned}
\operatorname*{plim}_{n\to\infty} \frac{1}{n}\left(W_1'W_1\right) - \lim_{n\to\infty} \frac{1}{n}\, W'\Pi W
&= \operatorname*{plim}_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} d_i w_i w_i' - \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} p_i w_i w_i' = 0 \\
\operatorname*{plim}_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} (d_i - p_i)\, \varphi_{k,i}\, w_i &= 0, \quad k = 1, \ldots, 8 \\
\operatorname*{plim}_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} (d_i - p_i)\, \varphi_{k,i}\, \varphi_{l,i} &= 0, \quad k, l = 1, \ldots, 4.
\end{aligned}
$$
Here, $\Pi = \operatorname{diag}(p_1, \ldots, p_n)$ and $W_1$ is derived from $W$ by skipping all rows with $d_i = 0$. Hence, one can use
$$
\Psi_{n,11}(\theta) = \frac{1}{n}\begin{pmatrix}
Z'VZ & 0 & 0 \\
* & W_1'\Sigma_1 W_1 & \sum_{d_i=1} w_i\,\varphi_{3,i} \\
* & * & \sum_{d_i=1}\left(\varphi_{4,i} - \varphi_{2,i}^2\right)
\end{pmatrix}
$$
where $V = \operatorname{diag}\!\left(\frac{\phi_1^2}{p_1(1-p_1)}, \ldots, \frac{\phi_n^2}{p_n(1-p_n)}\right)$, $W_{(n\times k_2)} = (w_1, \ldots, w_n)'$, and $\Sigma = \operatorname{diag}(\varphi_{2,1}, \ldots, \varphi_{2,n})$. $\Sigma_1$ is obtained from $\Sigma$ by deleting all rows and columns referring to $d_i = 0$, and similarly $W_1$. Similar arguments yield at $\xi = \kappa = 0$
$$
\Psi_{n,22}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\begin{pmatrix}
p_i\left(\varphi_{6,i} - \varphi_{3,i}^2\right) & p_i\left(\varphi_{7,i} - \varphi_{3,i}\varphi_{4,i}\right) \\
p_i\left(\varphi_{7,i} - \varphi_{3,i}\varphi_{4,i}\right) & p_i\left(\varphi_{8,i} - \varphi_{4,i}^2\right)
\end{pmatrix}
$$
and
$$
\Psi_{n,12}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\begin{pmatrix}
0 & 0 \\
p_i\, w_i\, \varphi_{4,i} & p_i\, w_i\, \varphi_{5,i} \\
p_i\left(\varphi_{5,i} - \varphi_{2,i}\varphi_{3,i}\right) & p_i\left(\varphi_{6,i} - \varphi_{4,i}\varphi_{2,i}\right)
\end{pmatrix}
$$
Again, one can insert $d_i$ for $p_i$. Applying the formula for the partitioned inverse yields the simplification of the pseudo-score LM test statistic:
$$
LM = n\, \bar h(\tilde\theta)'\, \Psi_n(\tilde\theta)^{-1}\, \bar h(\tilde\theta)
= n \begin{pmatrix} 0', \; \bar h_2(\tilde\theta)' \end{pmatrix}
\begin{pmatrix} \Psi_{n,11}(\tilde\theta) & \Psi_{n,12}(\tilde\theta) \\ \Psi_{n,21}(\tilde\theta) & \Psi_{n,22}(\tilde\theta) \end{pmatrix}^{-1}
\begin{pmatrix} 0 \\ \bar h_2(\tilde\theta) \end{pmatrix}
= n\, \bar h_2(\tilde\theta)' \left[\Psi_{n,22}(\tilde\theta) - \Psi_{n,21}(\tilde\theta)\,\Psi_{n,11}(\tilde\theta)^{-1}\,\Psi_{n,12}(\tilde\theta)\right]^{-1} \bar h_2(\tilde\theta)
$$
which is asymptotically distributed as $\chi^2(2)$ under $H_0$.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. S.T. Yen, and J. Rosinski. “On the marginal effects of variables in the log-transformed sample selection models.” Econ. Lett. 100 (2008): 4–8. [Google Scholar] [CrossRef]
  2. K.E. Staub. “A causal interpretation of extensive and intensive margin effects in generalized Tobit models.” Rev. Econ. Stat. 96 (2014): 371–375. [Google Scholar] [CrossRef]
  3. W.K. Newey. “Two-step series estimation of sample selection models.” Econom. J. 12 (2009): 217–229. [Google Scholar] [CrossRef]
  4. C.L. Skeels, and F. Vella. “A Monte Carlo investigation of the sampling behavior of conditional moment tests in Tobit and Probit models.” J. Econom. 92 (1999): 275–294. [Google Scholar] [CrossRef]
  5. D.M. Drukker. “Bootstrapping a conditional moments test for normality after Tobit estimation.” Stata J. 2 (2002): 125–139. [Google Scholar]
  6. A.K. Bera, C.M. Jarque, and L.-F. Lee. “Testing the normality assumption in limited dependent variable models.” Int. Econ. Rev. 25 (1984): 563–578. [Google Scholar] [CrossRef]
  7. L.-F. Lee. “Tests for the bivariate normal distribution in econometric models with selectivity.” Econometrica 52 (1984): 843–863. [Google Scholar] [CrossRef]
  8. B. Van der Klaauw, and R.H. Koning. “Testing the normality assumption in the sample selection model with an application to travel demand.” J. Bus. Econ. Stat. 21 (1993): 31–42. [Google Scholar] [CrossRef]
  9. G.V. Montes-Rojas. “Robust misspecification tests for the Heckman’s two-step estimator.” Econom. Rev. 30 (2011): 154–172. [Google Scholar] [CrossRef]
  10. E. Meijer, and T. Wansbeek. “The sample selection model from a method of moments perspective.” Econom. Rev. 26 (2007): 25–51. [Google Scholar] [CrossRef]
  11. C. Jarque, and A. Bera. “Efficient tests for normality, homoskedasticity and serial independence of regression residuals.” Econ. Lett. 6 (1980): 255–259. [Google Scholar] [CrossRef]
  12. J.J. Heckman. “Sample selection bias as a specification error.” Econometrica 47 (1979): 153–161. [Google Scholar] [CrossRef]
  13. W.K. Newey, and K.D. West. “Hypothesis testing with efficient method of moments estimation.” Int. Econ. Rev. 28 (1987): 777–787. [Google Scholar] [CrossRef]
  14. A.R. Hall. Generalized Method of Moments. Oxford, UK: Oxford University Press, 2005. [Google Scholar]
  15. S.C. Ahn, and P. Schmidt. “A separability result for GMM estimation, with applications to GLS prediction and conditional Moment Tests.” Econom. Rev. 14 (1995): 19–34. [Google Scholar] [CrossRef]
  16. R. Davidson, and J.G. MacKinnon. “Graphical methods for investigating the size and power of hypothesis tests.” Manch. Sch. 66 (1998): 1–26. [Google Scholar] [CrossRef]
  17. F. Hayashi. Econometrics. Princeton, NJ, USA; Oxford, UK: Princeton University Press, 2000. [Google Scholar]
  18. T. Amemiya. Advanced Econometrics. Cambridge, MA, USA: Harvard University Press, 1985. [Google Scholar]
  • 1.An example is the estimation of gravity models of bilateral trade flows with missing and/or zero trade. Here, the assumption of bivariate normality turns out to be important for deriving comparative static results with respect to changes in the extensive and intensive margins of trade following Yen and Rosinski (2008) [1] and Staub (2014) [2].
  • 2.There is also work available that proposes normality tests for the Tobit model (see Skeels and Vella, 1999 [4] and Drukker, 2002 [5]).
  • 3.Specifically, Montes-Rojas (2011) [9] mentions the case where $u_{1i} \sim N(0,1)$, $u_{2i} = \tau u_{1i} + \varepsilon_i$, with $u_{1i}$ and $\varepsilon_i$ being independent, but $\varepsilon_i$ does not follow a normal distribution. In this case, when forming $\varphi_{3,i}$ and $\varphi_{4,i}$, the moments $E[\varepsilon_i^k]$ are left unrestricted and estimated from the residuals of the second-stage outcome equation.
  • 4.The corresponding figures for a larger sample size of n = 2000 are available upon request from the author.
  • 5.Newey and West (1987) [13] propose to use the unrestricted estimator $\bar\Psi(\hat\theta)$, a route that is not followed here.
