# Removing Specification Errors from the Usual Formulation of Binary Choice Models



Federal Reserve Board (Retired), Washington, DC 20551, USA

Department of Mathematics (Retired), American University, Washington, DC 20016, USA

Department of Mathematics (Retired), Temple University, Philadelphia, PA 19122, USA

Department of Economics, New York University, 44 West Fourth Street, 7-90 New York, NY 10012, USA

Leicester University, Room Astley Clarke 116, University Road, Leicester LE1 7RH, UK

Bank of Greece, 21 El. Venizelos Ave., 10250 Athens, Greece

Monetary Policy Council, Bank of Greece, 21 El. Venizelos Ave., Athens 10250, Greece

Author to whom correspondence should be addressed.

Current Address: 6333 Brocketts Crossing, Kingstowne, VA 22315, USA

Academic Editor: Kerry Patterson

Received: 22 December 2015 / Revised: 19 April 2016 / Accepted: 19 May 2016 / Published: 3 June 2016

(This article belongs to the Special Issue Discrete Choice Modeling)

We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.

It is well-known that binary choice models are subject to certain specification errors. It can be shown that the usual approach of adding an error term to a mathematical function leads to a model with nonunique coefficients and error term. In this model, the conditional expectation of the dependent variable given the included regressors does not always exist. Even when it exists, its functional form may be unknown. The nonunique error term is interpreted as representing the net effect of omitted regressors on the dependent variable. Pratt and Schlaifer (1988, p. 34) [1] pointed out that omitted regressors are not unique and as a result, the condition that the included regressors be independent of “the” excluded variables themselves is “meaningless”. There are cases where the correlation between the nonunique error term and the included regressors can be made to appear and disappear at the whim of an arbitrary choice between two observationally equivalent models. To avoid these problems, we specify models with unique coefficients and error terms without misspecifying their correct functional forms. The unique error term of a model is a function of certain “sufficient sets” of omitted regressors. We derive these sufficient sets for a binary choice model in this paper. In the usual approach, omitted regressors constituting the error term of a model do not introduce omitted-regressor biases into the coefficients of the included regressors. In our approach, they do so.

Following the usual approach, Yatchew and Griliches (1984) [2]1 showed that if one of two uncorrelated regressors included in a simple binary choice model is omitted, then the estimator of the coefficient on the remaining regressor will be inconsistent. They also showed that, if the disturbances in a latent regression model are heteroscedastic, then the maximum likelihood estimators that assume homoscedasticity are inconsistent and the covariance matrix is inappropriate. In this paper, we show that the use of a latent regression model with unique coefficients and error term changes their results. Our binary choice model is different from those of such researchers as Yatchew and Griliches [2], Cramer (2006) [4], and Wooldridge (2002, Chapter 15) [5]. The concept of unique coefficients and error term is distinctive to our work. Specifically, we do not assume any incorrect functional form, and we account for relevant omitted regressors, measurement errors, and correlations between the excluded and included regressors. Our model features varying coefficients (VCs), in which we interpret the VC on a continuous regressor as a function of three quantities: (i) the bias-free partial derivative of the dependent variable with respect to the continuous regressor; (ii) omitted-regressor biases; and (iii) measurement-error biases. This interpretation of the VCs is unique to our work and allows us to focus on the bias-free parts (i.e., the partial derivatives) of the VCs.

The remainder of this paper comprises three sections. Section 2 summarizes a recent derivation by Swamy, Mehta, Tavlas and Hall (2014) [6] of all the terms involved in a binary choice model with unique coefficients and error term. The section also provides the conditions under which such a model can be consistently estimated. Section 3 presents an empirical example. Section 4 concludes. An Appendix at the end of the paper has two sections. The first section compares the relative generality of assumptions underlying different linear and nonlinear models. The second section derives the information matrix for a binary choice model with unique coefficients and error term.

Greene (2012, pp. 681–683) [3] described various situations under which the use of discrete choice models is called for. In what follows, we develop a discrete choice model that is free of several specification errors. To explain, we begin with the following specification:
where i indexes n individuals, ${\psi}_{i}({x}_{i1}^{*},...,{x}_{i{L}_{i}}^{*})$ is a mathematical function, and its arguments are mathematical variables. Let ${\psi}_{i}(.)$ be shorthand for this function. We do not observe ${y}_{i}^{*}$ but view the outcome of a discrete choice as a reflection of the underlying mathematical function in Equation (1). We only observe whether a choice is made or not (see Greene (2012, p. 686) [3]).

$${y}_{i}^{*}={\psi}_{i}({x}_{i1}^{*},...,{x}_{i{L}_{i}}^{*})$$

Therefore, our observation is
where the choice “not made” is indicated by the value 0 and the choice “made” is indicated by the value 1, i.e., ${y}_{i}$ takes either 0 or 1. An example of model (2), provided in Greene (2012, Example 17.1, pp. 683–684) [3], is a situation involving labor force participation where a respondent either works or seeks work (${y}_{i}$ = 1) or does not (${y}_{i}$ = 0) in the period in which a survey was taken.2

$${y}_{i}=1\text{ if }{y}_{i}^{*}>0$$

$${y}_{i}=0\text{ if }{y}_{i}^{*}\le 0$$
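The observation rule in (2) can be sketched as follows; the latent values are purely illustrative assumptions, since ${y}_{i}^{*}$ is never observed in practice.

```python
import numpy as np

# Hypothetical latent values y* for five individuals (illustrative only).
y_star = np.array([0.7, -1.2, 0.0, 2.3, -0.4])

# Observation rule (2): y = 1 if y* > 0 and y = 0 if y* <= 0.
y = (y_star > 0).astype(int)
print(y.tolist())  # only the sign of y* is observed
```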

In Equation (1), ${x}_{i}^{*}=({x}_{i1}^{*},...,{x}_{i,{L}_{i}}^{*})\prime $ is ${L}_{i}\times 1$, and ${L}_{i}$ denotes the total number of the arguments of ${\psi}_{i}({x}_{i1}^{*},...,{x}_{i,{L}_{i}}^{*})={\psi}_{i}(.)$; there are no omitted arguments needing an error term. Equation (1) is a mathematical equation that holds exactly. Inexactness and a stochastic error term enter (1) only when we derive the appropriate error term and make a distributional assumption about it. The reasons for this are explained below. The number ${L}_{i}$ may depend on i. This dependence occurs when the number of arguments of ${\psi}_{i}(.)$ is different for different individuals. For example, before deciding whether or not to make a large purchase, each consumer makes a marginal benefit/marginal cost calculation based on the utilities achieved by making the purchase, by not making the purchase, and by using the available funds to purchase something else. The difference between benefit and cost, as an unobserved variable ${y}_{i}^{*}$, can vary across consumers if they have different utility functions with different arguments. These variations can show up in the form of different arguments of ${\psi}_{i}(.)$ for different consumers.

It should be noted that (1) represents our departure from the usual approach of adding a nonunique error term to a mathematical function and making a “meaningless” assumption about the error term. Pratt and Schlaifer (1988, p. 34) [1] severely criticized this approach. To avoid this criticism, in (1) we have taken all the x-variables and the variables constituting the error term $\mathsf{\epsilon}$ in Cramer’s (2006) [4] latent regression model (or e in Wooldridge’s (2002, p. 457) [5]) and included them in ${\psi}_{i}(.)$ as its arguments. In addition, we have also included in ${\psi}_{i}(.)$ all relevant pre-existing conditions as arguments.3

The problem that researchers face is that of uncovering the correct functional form of ${\psi}_{i}(.)$.4 However, any false relations can be shown to have been eliminated when we control for all relevant pre-existing conditions. To make use of this observation due to Skyrms (1988, p. 59) [7], we incorporate these pre-existing conditions into ${\psi}_{i}(.)$ by letting some of the elements of ${x}_{i}^{*}$ represent these conditions.5 Clearly, we have no way of knowing what these pre-existing conditions might be, how to measure them (if we knew them), or how many there may be. To control for them, we use the following straightforward approach: we assume that all relevant pre-existing conditions appear as arguments of the function ${\psi}_{i}(.)$ in Equation (1). Therefore, when we take the partial derivative of ${\psi}_{i}(.)$ with respect to ${x}_{ij}^{*}$, a determinant of ${y}_{i}^{*}$ included in ${\psi}_{i}(.)$ as its argument, the values of all pre-existing conditions are automatically held constant. This action is important because it sets the partial derivative $\partial {y}_{i}^{*}/\partial {x}_{ij}^{*}$ equal to zero whenever the relation of ${y}_{i}^{*}$ to ${x}_{ij}^{*}$, an element of ${x}_{i}^{*}$, is false (see Skyrms (1988, p. 59) [7]).

The function ${\psi}_{i}(.)$ in (1) is exact and mathematical in nature, without any relevant omitted arguments. Moreover, its unknown functional form is not misspecified. Therefore, it does not require an error term; indeed, it would be incorrect to add an error term to ${\psi}_{i}(.)$. We refer to (1) as “a minimally restricted mathematical equation,” the reason being that no restriction other than the normalization rule, that the coefficient of ${y}_{i}^{*}$ is unity, is imposed on (1). Without this restriction, the function ${\psi}_{i}(.)$ is difficult to identify. The reason why no other restriction is imposed on it is that we want (1) to be a real-world relationship. With such a relationship we can estimate the causal effects of a treatment. Basmann (1988, pp. 73, 99) [8] argued that causality is a property of the real world. We define real-world relations as those that are not misspecified. Causal relations are unique in the real world. This is the reason why we insist that the coefficients and error term of our model be unique. From Basmann’s (1988, p. 98) [8] definition it follows that (1) is free of the most serious objection, i.e., non-uniqueness, which occurs when stationarity-producing transformations of observable variables are used.6 We do not use such variables in (1).

Basmann (1988, pp. 73, 99) [8] emphasized that the word “causality” designates a property of the real world. Hence we work only with appropriate real-world relationships to evaluate causal effects.

We define real-world relationships as those that are not subject to any specification errors. It is possible to avoid some of those errors, as we show below. Real-world relationships and their properties are always true and unique. Such relationships cannot be found, however, by imposing severe restrictions, because such restrictions can be false. Examples of these restrictions are given in the Appendix to the paper. For example, certain separability conditions are imposed on (1) to obtain (A1) in the Appendix. These conditions are so severe that their truth is doubtful. For this reason, (A1) may not be a real-world relationship and may not possess the causality property. Again in the Appendix, (A2) is a general condition of statistical independence which is very strong. Model (A5) of the Appendix, with a linear functional form, could be misspecified.7

To avoid misspecifications of the unknown correct functional form of (1), we change the problem of estimating (1) to the problem of estimating some of its partial derivatives in
where, for $\ell =1$, … , ${L}_{i}$, ${\alpha}_{i\ell}^{*}=\frac{\partial {\psi}_{i}(.)}{\partial {x}_{i\ell}^{*}}$ if ${x}_{i\ell}^{*}$ is continuous and $=\frac{\mathsf{\Delta}{\psi}_{i}(.)}{\mathsf{\Delta}{x}_{i\ell}^{*}}$ with the right sign if ${x}_{i\ell}^{*}$ is discrete having zero as one of its possible values, ∆ is the first-difference operator, and the intercept ${\alpha}_{i0}^{*}={y}_{i}^{*}-{\sum}_{\ell =1}^{{L}_{i}}{x}_{i\ell}^{*}{\alpha}_{i\ell}^{*}$. In words, this intercept is the error of approximation due to approximating ${y}_{i}^{*}$ by ${\sum}_{\ell =1}^{{L}_{i}}{x}_{i\ell}^{*}{\alpha}_{i\ell}^{*}$. Therefore, model (3) with zero intercept does not misspecify the unknown functional form of (1) if the error of approximation is truly zero, and model (3) with nonzero intercept is the same as (1) with no misspecification of its functional form, since ${y}_{i}^{*}-{\sum}_{\ell =1}^{{L}_{i}}{x}_{i\ell}^{*}{\alpha}_{i\ell}^{*}+{\sum}_{\ell =1}^{{L}_{i}}{x}_{i\ell}^{*}{\alpha}_{i\ell}^{*}={y}_{i}^{*}$. This is how we deal with the problem of the unknown functional form of (1). Note that no separability conditions need to be imposed on (1) to write it in the form of (3). This is the advantage of (3) over (A1).

$${y}_{i}^{*}={\alpha}_{i0}^{*}+{x}_{i1}^{*}{\alpha}_{i1}^{*}+\cdots +{x}_{i{L}_{i}}^{*}{\alpha}_{i{L}_{i}}^{*}$$

Note that in the above definition of the partial derivative (${\alpha}_{i\ell}^{*}$), the values of all the arguments of ${\psi}_{i}(.)$ (including all relevant pre-existing conditions) other than ${x}_{i\ell}^{*}$ are held constant. These partial derivatives are different from those that can be derived from (A1) with ${\epsilon}_{i}$ suppressed. This is because, in taking the latter derivatives, the values of ${x}_{i,K+1}^{*},...,{x}_{i{L}_{i}}^{*}$ are not held constant.

Equation (3) is not a false relationship, since we held the values of all relevant pre-existing conditions constant in deriving its coefficients. The regression in (3) has the minimally restricted equation in (1) as its basis. The coefficients of (3) are constants if (1) is linear and are variables otherwise. In the latter case, the coefficients of (3) can be functions of all of the arguments of ${\psi}_{i}(.)$. Any function of the form (1) with unknown functional form can be written as linear in variables and nonlinear in coefficients, as in (3). We have already established that this linear-in-variables and nonlinear-in-coefficients model has the correct functional form if its intercept is zero and is the same as (1) otherwise. In either case, (3) does not have a misspecified functional form. In this paper, we take advantage of this procedure.
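To see that the rewriting in (3) is exact whatever the functional form, the following sketch assumes a hypothetical ${\psi}_{i}(.)$ (an illustration only, since the paper treats the functional form of (1) as unknown), takes its partial derivatives as the ${\alpha}^{*}$ coefficients, and sets the intercept to the residual approximation error.

```python
import numpy as np

# A hypothetical psi(.) -- purely illustrative, since the paper leaves the
# functional form of (1) unknown.
def psi(x1, x2):
    return np.exp(0.5 * x1) + x1 * x2 ** 2

x1, x2 = 1.3, -0.8
y_star = psi(x1, x2)

# Coefficients of (3): partial derivatives of psi evaluated at (x1, x2).
a1 = 0.5 * np.exp(0.5 * x1) + x2 ** 2   # d psi / d x1
a2 = 2.0 * x1 * x2                      # d psi / d x2

# Intercept of (3): the error of approximating y* by x1*a1 + x2*a2.
a0 = y_star - (x1 * a1 + x2 * a2)

# The linear-in-variables, nonlinear-in-coefficients form (3) is exact.
assert np.isclose(a0 + x1 * a1 + x2 * a2, y_star)
```

Because this $\psi$ is nonlinear, the intercept is nonzero, and (3) still reproduces (1) identically by construction.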

Not all elements of ${\mathit{x}}_{i}^{*}$ are measured; suppose that the first K of them are measured. This assumption needs the restriction that min$({L}_{1},...,{L}_{n})>K$. Even these measurements may contain errors, so that the observed argument ${x}_{ij}$ is equal to the sum of the true value ${x}_{ij}^{*}$ and a measurement error, denoted by ${\nu}_{ij}^{*}$.8 The arguments ${x}_{ig}^{*}$, $g$ = K + 1, … , ${L}_{i}$, for which no data are available, are treated as regressors omitted from (3).9 These are of two types: (i) unobserved determinants of ${y}_{i}^{*}$; and (ii) all relevant pre-existing conditions. We know nothing about these two types of variables. With these variables present in (3), we cannot estimate it. Again, these variables should be eliminated from (3) without misspecifying (1). To do so, we consider the “auxiliary” relations of each ${x}_{ig}^{*}$ to ${x}_{i1}^{*},...,{x}_{iK}^{*}$. Such relations are: for g = K + 1, … , ${L}_{i}$,
where ${\lambda}_{igj}^{*}=\frac{\partial {x}_{ig}^{*}}{\partial {x}_{ij}^{*}}$, holding the values of all the regressors of (A8) other than ${x}_{ij}^{*}$ constant, if ${x}_{ij}^{*}$ is continuous, and $=\frac{\mathsf{\Delta}{x}_{ig}^{*}}{\mathsf{\Delta}{x}_{ij}^{*}}$ with the right sign if ${x}_{ij}^{*}$ is discrete taking zero as one of its possible values, and ${\lambda}_{ig0}^{*}={x}_{ig}^{*}-{\sum}_{j=1}^{K}{x}_{ij}^{*}{\lambda}_{igj}^{*}$. This intercept is the portion of ${x}_{ig}^{*}$ remaining after the effect ${\sum}_{j=1}^{K}{x}_{ij}^{*}{\lambda}_{igj}^{*}$ of ${x}_{i1}^{*},...,{x}_{iK}^{*}$ on ${x}_{ig}^{*}$ has been subtracted from it.

$${x}_{ig}^{*}={\lambda}_{ig0}^{*}+{x}_{i1}^{*}{\lambda}_{ig1}^{*}+\cdots +{x}_{iK}^{*}{\lambda}_{igK}^{*}$$

In (4), there are ${L}_{i}-K$ relationships. The intercept ${\lambda}_{ig0}^{*}$ is the error due to approximating the relationship between the gth omitted regressor and all the included regressors (“the correct relationship”) by ${\sum}_{j=1}^{K}{x}_{ij}^{*}{\lambda}_{igj}^{*}$. If this error of approximation is truly zero, then Equation (4) with zero intercept has the same functional form as the correct relationship. In the alternative case where the error of approximation is not zero, (4) is the same as the correct relationship, since ${x}_{ig}^{*}-{\sum}_{j=1}^{K}{x}_{ij}^{*}{\lambda}_{igj}^{*}+{\sum}_{j=1}^{K}{x}_{ij}^{*}{\lambda}_{igj}^{*}={x}_{ig}^{*}$. In either case, (4) does not misspecify the correct functional form. According to Pratt and Schlaifer (1988, p. 34) [1], the condition that the included regressors be independent of “the” omitted regressors themselves is meaningless. This statement supports (4) but not the usual assumption that the error term of a latent regression model is uncorrelated with or independent of the included regressors.10 The problem is that omitted regressors are not unique, as Pratt and Schlaifer (1988, p. 34) [1] proved.
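The split performed by (4) can be mimicked numerically. The sketch below is an assumption-laden stand-in: the relations in (4) are exact, whereas here a least-squares projection on simulated data merely illustrates how an omitted regressor decomposes into an included-regressors' effect plus a remainder.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 200, 2

# Simulated included regressors (with an intercept column) and one omitted
# regressor correlated with them; all numbers are hypothetical.
X = np.column_stack([np.ones(n), rng.normal(size=(n, K))])
x_g = 0.6 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Fitted part: the included-regressors' effect on the omitted regressor.
coef, *_ = np.linalg.lstsq(X, x_g, rcond=None)
effect = X @ coef

# Remainder: plays the role of the intercept lambda*_{ig0}, the "sufficient
# set" portion that enters the unique error term of (5).
remainder = x_g - effect

# The two pieces recombine to the omitted regressor exactly, as in (4).
assert np.allclose(effect + remainder, x_g)
```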

Substituting the right-hand side of Equation (4) for ${x}_{ig}^{*}$ in (3) gives

$${y}_{i}^{*}={\alpha}_{i0}^{*}+{\sum}_{g=K+1}^{{L}_{i}}{\lambda}_{ig0}^{*}{\alpha}_{ig}^{*}+{\sum}_{j=1}^{K}{x}_{ij}^{*}\left({\alpha}_{ij}^{*}+{\sum}_{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}\right)$$

The error term, the intercept, and the coefficients of the nonconstant regressors of this model are ${\sum}_{g=K+1}^{{L}_{i}}{\lambda}_{ig0}^{*}{\alpha}_{ig}^{*}$, ${\alpha}_{i0}^{*}$, and $({\alpha}_{ij}^{*}+{\sum}_{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*})$, j = 1, … , K, respectively.

$${\alpha}_{ij}^{*}=\frac{\partial {\psi}_{i}(.)}{\partial {x}_{ij}^{*}}\text{or}=\frac{\mathsf{\Delta}{\psi}_{i}(.)}{\mathsf{\Delta}{x}_{ij}^{*}},\text{}j=\text{}1,\text{}\dots \text{},K$$

These partial derivatives have the correct functional form if the ${x}_{ij}^{*}$’s are continuous.
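The substitution that produces (5) is an algebraic identity, which the following sketch verifies on hypothetical numbers (the dimensions and random draws are illustrative assumptions, not data from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 2, 4                        # 2 included, 2 omitted regressors (hypothetical)
alpha = rng.normal(size=L + 1)     # alpha*_{i0}, ..., alpha*_{iL}
lam0 = rng.normal(size=L - K)      # lambda*_{ig0}, g = K+1, ..., L
lam = rng.normal(size=(L - K, K))  # lambda*_{igj}

# Included regressors; omitted regressors generated exactly by (4).
x_inc = rng.normal(size=K)
x_om = lam0 + lam @ x_inc

# Model (3): y* = alpha0 + all L regressors times their coefficients.
y_star = alpha[0] + x_inc @ alpha[1:K + 1] + x_om @ alpha[K + 1:]

# Model (5): intercept, unique error term, and coefficients carrying
# omitted-regressor biases.
intercept = alpha[0]
error = lam0 @ alpha[K + 1:]
coefs = alpha[1:K + 1] + lam.T @ alpha[K + 1:]

# The substitution is an identity: (5) reproduces (3) exactly.
assert np.isclose(intercept + error + x_inc @ coefs, y_star)
```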

Swamy et al. (2014, pp. 197, 199, 217–219) [6] proved that the coefficients $({\alpha}_{ij}^{*}+{\sum}_{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*})$ and the error term ${\sum}_{g=K+1}^{{L}_{i}}{\lambda}_{ig0}^{*}{\alpha}_{ig}^{*}$ of (5) are unique.

The equations in (4) play a crucial role in Swamy et al.’s (2014) [6] proof of the uniqueness of the coefficients and error term of (5). If we had taken the sum ${x}_{i,K+1}^{*}{\alpha}_{i,K+1}^{*}$ + $\cdots $ + ${x}_{i{L}_{i}}^{*}{\alpha}_{i{L}_{i}}^{*}$ of the last ${L}_{i}-K$ terms on the right-hand side of (3) as its error term, then we would have obtained a nonunique error term, because omitted regressors are not unique. What (4) has done is split each gth omitted regressor into a “sufficient set” and an included-regressors’ effect. The sufficient set times the coefficient of the gth omitted regressor becomes a term in the (unique) error term of (5), and the included-regressors’ effect times the coefficient of the gth omitted regressor becomes a term in the omitted-regressor biases of the coefficients of (5). This is not the usual procedure, in which the whole of each omitted regressor goes into the formation of an error term and no part of it becomes a term in omitted-regressor biases. The usual procedure leads to nonunique coefficients and error term. In the procedure of Yatchew and Griliches (YG), it is only included regressors that, when omitted, introduce omitted-regressor biases into the (nonunique) coefficients on the remaining included regressors. Yatchew and Griliches (1984) [2], Wooldridge (2002) [5], and Cramer (2006) [4] followed the usual procedure. Without using (4), it is not possible to derive a model with unique coefficients and error term.

Substituting the terms on the right-hand sides of equations ${x}_{ij}^{*}={x}_{ij}-{\nu}_{ij}^{*}$, j = 1, … ,K, for ${x}_{ij}^{*}$, j = 1, … ,K, respectively, in (5) gives a model of the form
where the intercept is defined as
and the other terms are defined as
where the set ${S}_{1}$ contains all the regressors of Equation (7) that are continuous, the set ${S}_{2}$ contains all the regressors of (7) that can take the value zero with positive probability, the factor $(1-\frac{{\nu}_{ij}^{*}}{{x}_{ij}})$ in the first line of Equation (9) comes from the equation ${x}_{ij}^{*}={x}_{ij}-{\nu}_{ij}^{*}=(1-\frac{{\nu}_{ij}^{*}}{{x}_{ij}}){x}_{ij}$, and this factor does not appear in the second line of Equation (9) because ${x}_{ij}\in {S}_{2}$ can take the value zero with positive probability.

$${y}_{i}^{*}={\gamma}_{i0}+{x}_{i1}{\gamma}_{i1}+\cdots +{x}_{iK}{\gamma}_{iK}$$

$${\gamma}_{i0}={\alpha}_{i0}^{*}+{\displaystyle \sum _{g=K+1}^{{L}_{i}}{\lambda}_{ig0}^{*}{\alpha}_{ig}^{*}-{\displaystyle \sum _{{\nu}_{ij}^{*}\in {S}_{2}}{\nu}_{ij}^{*}({\alpha}_{ij}^{*}+{\displaystyle \sum _{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*})}}}$$

$$\begin{array}{cc}\hfill {x}_{ij}{\gamma}_{ij}& ={x}_{ij}(1-\frac{{\nu}_{ij}^{*}}{{x}_{ij}})({\alpha}_{ij}^{*}+{\displaystyle \sum _{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}})\text{ if }{x}_{ij}\in {S}_{1}\hfill \\ & ={x}_{ij}({\alpha}_{ij}^{*}+{\displaystyle \sum _{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}})\text{ if }{x}_{ij}\in {S}_{2}\hfill \end{array}$$
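The role of the factor $(1-{\nu}_{ij}^{*}/{x}_{ij})$ in the first line of (9) can be checked with hypothetical scalars; here `a` stands for the bias-free derivative ${\alpha}_{ij}^{*}$ and `b` for the combined omitted-regressor bias ${\sum}_{g}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}$ (all values are illustrative assumptions).

```python
import numpy as np

# Hypothetical scalars: observed regressor, its measurement error, the
# bias-free derivative a, and a combined omitted-regressor bias b.
x_obs, nu = 2.0, 0.3
a, b = 1.5, -0.4

x_true = x_obs - nu                     # x* = x - nu

# First line of (9), for a continuous regressor in S1:
# gamma = (1 - nu/x)(alpha* + omitted-regressor bias), so x * gamma
# recovers the true-regressor term x* (alpha* + bias) exactly.
gamma = (1.0 - nu / x_obs) * (a + b)
assert np.isclose(x_obs * gamma, x_true * (a + b))
```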

Equation (7) implies that a model is correctly specified if it is derived by inserting measurement errors at the appropriate places in a model with unique coefficients and error term (see Swamy et al. (2014, p. 199) [6]).

Under our approach, measurement errors do not become random variables until distributional assumptions are made about them.

(i) The unknown functional form of (1) is not allowed to become the source of a specification error in (3); (ii) the uniqueness of the coefficients and error term of (5) eliminates the specification error resulting from non-unique coefficients and error term; (iii) Pratt and Schlaifer (1988, p. 34) [1] pointed out that the requirement that the included regressors be independent of the excluded regressors themselves is “meaningless”; the specification error introduced by making this meaningless assumption is avoided by taking a correct function of certain “sufficient sets” of omitted regressors as the error term of (5); (iv) the specification error of ignoring measurement errors when they are present is avoided by placing them at the appropriate places in (5). It should be noted that when we affirm that (7) is free of specification errors, we mean that it is free of specification errors (i)–(iv). Using (3)–(6), we have derived a real-world relationship in (7) that is free of specification errors (i)–(iv). Thus, our approach affirms that any relationship suffering from any one of these specification errors is definitely not a real-world relationship.

In Section 2.2.3, we have seen that the relationships between each omitted regressor and the included regressors in (4) introduce omitted-regressor biases into the coefficients on the regressors of (5). We pointed out in the last paragraph of that section that this is not how Yatchew and Griliches (YG) derived omitted-regressor biases. They work with models in which the coefficients and error terms do not satisfy our definition of uniqueness. YG considered a simple binary choice model and omitted one of its two included regressors. According to YG, this omission introduces omitted-regressor bias into the coefficient on the regressor that is allowed to remain. The results proved by YG are: (i) even if the omitted regressor is uncorrelated with the remaining included regressor, the coefficient on the latter regressor will be inconsistent; (ii) if the errors in the underlying regression are heteroscedastic, then the maximum likelihood estimators that ignore the heteroscedasticity are inconsistent and the covariance matrix is inappropriate (see also Greene (2012, p. 713) [3]). We do not omit any of the included regressors from (5) to generate omitted-regressor biases. For YG, omitted regressors in (4) are those that generate the error term in their latent regression model. Equation (5)’s error term is a function of those variables that satisfy Pratt and Schlaifer’s definition of “sufficient sets” of our omitted regressors. Thus, YG’s concepts of omitted regressors, included regressors, and error terms are different from ours. Their model is subject to specification errors (i)–(iv) listed in the previous section. YG’s assumptions about the error term of their model are questionable because of its non-uniqueness. Unless its coefficients and error term are unique, no model can represent a real-world relationship, which is unique.
According to YG, Wooldridge, and Cramer, the regressors constituting the error term of a latent regression model do not produce omitted-regressor biases. Their omitted-regressor bias is not the same as those in (5). YG’s results cannot be obtained from our model (7). Their nonunique heteroscedastic error term is different from our unique heteroscedastic error term in (7). It can be shown that the results of YG arose as a direct consequence of ignoring our omitted-regressor and measurement-error biases in (9). Omitted regressors constituting the YG model’s (non-unique) error term also introduce omitted-regressor biases in our sense, but not in their sense. Furthermore, the YG model suffers from all four specification errors (i)–(iv), which Equations (3)–(5) and (7)–(9) avoid.

To recapitulate, misspecifications of the correct functional form of (1) are avoided by expressing it in the form of Equation (3). If the sum ${\sum}_{g=K+1}^{{L}_{i}}{x}_{ig}^{*}{\alpha}_{ig}^{*}$ of the last ${L}_{i}-K$ terms on the right-hand side of (3) is treated as its error term, then this error term is not unique (see Swamy et al. (2014, p. 197) [6]). Suppose that the coefficients of (3) are constants. Then the correlation between the nonunique error term and the first K regressors of (3) can be made to appear and disappear at the whim of an arbitrary choice between two observationally equivalent forms of (3), as shown by Swamy et al. (2014, pp. 217–218) [6]. To eliminate this difficulty, a model with unique coefficients and error term is derived by substituting the right-hand side of Equation (4) for the omitted regressor ${x}_{ig}^{*}$ in (3) for every $g=K+1,...,{L}_{i}$. Equation (7) shows what the terms of an equation look like when the equation is made free of specification errors (i)–(iv). For each continuous ${x}_{ij}$ with j > 0 in (7), its coefficient contains the bias-free partial derivative $\frac{\partial {\psi}_{i}(.)}{\partial {x}_{ij}^{*}}$ and omitted-regressor and measurement-error biases.

The partial derivative ($\partial {y}_{i}^{*}/\partial {x}_{ij}^{*}$) components of the coefficients (${\gamma}_{ij}$, j = 1, … , K) of (7) are the objects of our estimation. For this purpose, we parameterize (7) using our knowledge of the probability model governing the observations in (7). We assume that for j = 0, 1, … , K:
where ${z}_{i0}\equiv 1$, the $\pi $’s are fixed parameters, the z’s drive the coefficients of (7) and are, therefore, called “coefficient drivers.” These drivers are observed. We will explain below how to select these drivers. The errors (${\epsilon}_{ij}$’s) are included in Equation (10) because the p + 1 drivers may not be able to explain all variation in ${\gamma}_{ij}$.

$${\gamma}_{ij}={z}_{i0}{\pi}_{j0}+{z}_{i1}{\pi}_{j1}+\cdots +{z}_{ip}{\pi}_{jp}+{\epsilon}_{ij}$$

We use the following matrix notation: ${x}_{i}=(1,{x}_{i1},...,{x}_{iK})\prime $ is $(\mathrm{K}+1)\times 1$, ${z}_{i}=(1,{z}_{i1},...,{z}_{ip})\prime $ is $(p+1)\times 1$, ${{\pi}^{\prime}}_{j}=({\pi}_{j0},{\pi}_{j1},...,{\pi}_{jp})$ is $1\times (p+1)$, ${\gamma}_{ij}={{\pi}^{\prime}}_{j}{z}_{i}$ is a scalar, $\Pi $ is the (K + 1) $\times $ ($p$ + 1) matrix having ${{\mathit{\pi}}^{\prime}}_{j}$ as its jth row, and ${\epsilon}_{i}=({\epsilon}_{i0},...,{\epsilon}_{iK})\prime $ is $(K+1)\times 1$. Substituting the right-hand side of (10) for ${\gamma}_{ij}$ in (7) gives

$${y}_{i}^{*}={{x}^{\prime}}_{i}\Pi {z}_{i}+{{x}^{\prime}}_{i}{\epsilon}_{i}$$
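Equations (10) and (11) can be sketched with hypothetical dimensions and random draws (all values here are illustrative assumptions): each coefficient is generated by the drivers plus an error, and substituting (10) into (7) yields the bilinear form (11).

```python
import numpy as np

rng = np.random.default_rng(2)
K, p = 2, 1                                      # hypothetical dimensions
x = np.concatenate([[1.0], rng.normal(size=K)])  # (K+1)-vector, leading 1
z = np.concatenate([[1.0], rng.normal(size=p)])  # (p+1)-vector of coefficient drivers
Pi = rng.normal(size=(K + 1, p + 1))             # fixed parameter matrix
eps = rng.normal(size=K + 1)                     # errors of the coefficient equations

# Equation (10): each coefficient gamma_ij is driven by the z's plus an error.
gamma = Pi @ z + eps

# Equation (11): y* = x'Pi z + x'eps is just (7) with (10) substituted in.
y_star = x @ gamma
assert np.isclose(y_star, x @ Pi @ z + x @ eps)
```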

Assumption 1. The regressors of Equation (7) are conditionally independent of their coefficients, given the coefficient drivers.

Assumption 2. For all i, let $\mathrm{g}({\mathit{x}}_{\mathit{i}},{\mathit{z}}_{\mathit{i}})$ be a Borel function of $({\mathit{x}}_{\mathit{i}},{\mathit{z}}_{\mathit{i}})$, $E\left|{y}_{i}^{*}\right|<\infty $, and $E\left|{y}_{i}^{*}\mathrm{g}\right({x}_{i},{z}_{i}\left)\right|<\infty $.

Assumption 3. For i, ${i}^{\prime}$ = 1, …, n, $\mathit{E}\left({\mathit{\epsilon}}_{\mathit{i}}\right|{\mathit{x}}_{\mathit{i}},{\mathit{z}}_{\mathit{i}})=0$, $\mathit{E}\left({\mathit{\epsilon}}_{\mathit{i}}{{\mathit{\epsilon}}^{\prime}}_{\mathit{i}}\right|{\mathit{x}}_{\mathit{i}},{\mathit{z}}_{\mathit{i}})={\mathit{\sigma}}_{\mathit{\epsilon}}^{2}{\mathsf{\Delta}}_{\mathit{\epsilon}}$, and $\mathit{E}\left({\mathit{\epsilon}}_{\mathit{i}}{{\mathit{\epsilon}}^{\prime}}_{{\mathit{i}}^{\prime}}\right|{\mathit{x}}_{\mathit{i}},{\mathit{z}}_{\mathit{i}})=0$ if $i\ne {i}^{\prime}$.

In terms of a homoscedastic error term, Equation (11) can be written as
where ${\mathsf{\Delta}}_{\epsilon}$ is positive definite.

$${y}_{i}^{*}/{\sigma}_{\epsilon}\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}={{x}^{\prime}}_{i}\Pi {z}_{i}/{\sigma}_{\epsilon}\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}+{{x}^{\prime}}_{i}{\epsilon}_{i}/{\sigma}_{\epsilon}\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}$$

Under Assumptions 1–3, the conditional expectation
exists (see Rao (1973, p. 97) [14]).

$$\mathrm{E}\left({y}_{i}^{*}\right|\text{}{x}_{i},{z}_{i})={{x}^{\prime}}_{i}\Pi {z}_{i}$$

The parameters of model (11) to be estimated are $\Pi $ and ${\mathit{\sigma}}_{\mathit{\epsilon}}^{2}{\mathsf{\Delta}}_{\mathit{\epsilon}}$. Due to the lack of observations on the dependent variable ${y}_{i}^{*}$, not all of these parameters are identified. Therefore, we need to impose some restrictions. The following two restrictions are imposed on model (11):

- (i)
- The ${\sigma}_{\epsilon}^{2}$ in ${\sigma}_{\epsilon}^{2}{\mathsf{\Delta}}_{\epsilon}$ cannot be estimated, since there is no information about it in the data. To solve this problem, we set ${\sigma}_{\epsilon}^{2}$ equal to 1.
- (ii)
- From (2) it follows that the conditional probability that ${y}_{i}$ = 1 (or ${y}_{i}^{*}$ > 0) given ${\mathit{x}}_{\mathit{i}}$ and ${\mathit{z}}_{\mathit{i}}$ is$$\text{Prob}({y}_{i}^{*}>0|{x}_{i},{z}_{i})=\text{Prob}({{x}^{\prime}}_{i}{\epsilon}_{i}>-{{x}^{\prime}}_{i}\Pi {z}_{i}|{x}_{i},{z}_{i})$$

For symmetric distributions, such as the normal,
where F(.|.) is the conditional distribution function of ${{x}^{\prime}}_{i}{\epsilon}_{i}$. The conditional probability that ${y}_{i}=0$ (or ${y}_{i}^{*}\le 0$) given ${x}_{i}$ and ${z}_{i}$ is $1-F({{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}|{x}_{i},{z}_{i})$. The conditional probability that ${y}_{i}=1$ (or ${y}_{i}^{*}>0$) given ${x}_{i}$ and ${z}_{i}$ is $F({{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}|{x}_{i},{z}_{i})$. F(.|.) in (15) denotes the conditional normal distribution function of the random variable ${{x}^{\prime}}_{i}{\epsilon}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}$ with mean zero and unit variance; ${f}_{i}$ is the density function of the standard normal. Let ${\mathit{\delta}}_{\mathit{\epsilon}}$ be the column stack of ${\mathsf{\Delta}}_{\epsilon}$. To exploit the symmetry property of ${\mathsf{\Delta}}_{\epsilon}$, we add together the two elements of $({{\mathit{x}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}})$ corresponding to the (j, ${j}^{\prime}$) and (${j}^{\prime}$, j) elements of ${\mathsf{\Delta}}_{\epsilon}$ in ${{\mathit{x}}^{\prime}}_{\mathit{i}}{\mathsf{\Delta}}_{\epsilon}{\mathit{x}}_{\mathit{i}}$ and eliminate the (${j}^{\prime}$, j) element of ${\mathsf{\Delta}}_{\epsilon}$ from ${\mathit{\delta}}_{\mathit{\epsilon}}$ for j = 0, 1, … , K . These operations change the (1 × (K + 1)^{2}) vector $({{\mathit{x}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}})$ to the (1 × (K + 1)(K + 2)/2) vector, denoted by $\left(\overline{{{\mathit{x}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}}}\right)$, and change the (K + 1)^{2} × 1 vector ${\mathit{\delta}}_{\mathit{\epsilon}}$ to the [(K + 1)(K + 2)/2] × 1 vector, denoted by ${\overline{\mathit{\delta}}}_{\mathit{\epsilon}}$.

$$\text{Prob}({y}_{i}^{*}\text{}\text{}0|{x}_{i},{z}_{i})=\text{Prob}({{x}^{\prime}}_{i}{\epsilon}_{i}{{x}^{\prime}}_{i}\Pi {z}_{i}|{x}_{i},{z}_{i})=F({{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}|{x}_{i},{z}_{i})$$
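As a numerical illustration of (15), the sketch below evaluates the choice probability for hypothetical values of ${x}_{i}$, ${z}_{i}$, and $\Pi$, taking F to be the standard normal distribution function and setting ${\mathsf{\Delta}}_{\epsilon}$ = I; all numbers are made up for illustration only.

```python
import math
import numpy as np

def choice_prob(x, Pi, z, Delta_eps):
    """Prob(y_i = 1 | x_i, z_i) = F(x_i' Pi z_i / sqrt(x_i' Delta_eps x_i)), standard normal F."""
    index = x @ Pi @ z                                  # x_i' Pi z_i
    scale = math.sqrt(x @ Delta_eps @ x)                # sqrt(x_i' Delta_eps x_i)
    return 0.5 * (1.0 + math.erf(index / scale / math.sqrt(2.0)))  # Phi(.)

# Hypothetical dimensions and values: K + 1 = 2 included regressors, p + 1 = 3 drivers.
x = np.array([1.0, 2.0])                                # x_i (first element is the intercept)
z = np.array([1.0, 0.5, -1.0])                          # z_i (coefficient drivers)
Pi = np.array([[0.3, 0.1, 0.0],
               [0.2, 0.0, 0.1]])                        # Pi is (K + 1) x (p + 1)
p1 = choice_prob(x, Pi, z, np.eye(2))                   # Prob(y_i = 1 | x_i, z_i)
p0 = 1.0 - p1                                           # Prob(y_i = 0 | x_i, z_i)
```

The symmetry of F used in (15) shows up here as choice_prob with −Π equaling 1 minus choice_prob with Π.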

The maximum likelihood (ML) method is used to estimate the elements of $\Pi $ and ${\mathsf{\Delta}}_{\epsilon}$. To do so, each observation is treated as a single draw from a binomial distribution. The model with success probability $F({{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}|{x}_{i},{z}_{i})$ and independent observations leads to the joint probability

$$\text{Prob}({Y}_{1}={y}_{1},{Y}_{2}={y}_{2},...,{Y}_{n}={y}_{n}|{x}_{i},{z}_{i})=\prod _{{y}_{i}=0}[1-F({{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}|{x}_{i},{z}_{i})]\prod _{{y}_{i}=1}F({{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}|{x}_{i},{z}_{i})$$

The likelihood function for a sample of n observations can be written as

$$\mathrm{L}(\Pi ,{\mathsf{\Delta}}_{\epsilon}|\text{data})={\displaystyle \prod _{i=1}^{n}[\mathit{F}(}{{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}|{x}_{i},{z}_{i})]{}^{{y}_{i}}[1-\mathit{F}({{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}|{x}_{i},{z}_{i})]{}^{1-{\mathit{y}}_{\mathit{i}}}$$

This equation gives

$$\mathrm{ln}\mathrm{L}={\displaystyle \sum _{i=1}^{n}\left\{{\mathit{y}}_{\mathit{i}}\mathrm{ln}\mathit{F}\left(\frac{({{\mathit{z}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}}){\pi}^{Long}}{\sqrt{\left(\overline{{{\mathit{x}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}}\right)+(1-{\mathit{y}}_{\mathit{i}})\text{ln}\left[1-\mathit{F}\left(\frac{({{\mathit{z}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}}){\mathit{\pi}}^{\mathit{L}\mathit{o}\mathit{n}\mathit{g}}}{\sqrt{\left(\overline{{{\mathit{x}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}}\right)\right]\right\}}$$

where $\otimes $ denotes the Kronecker product and ${\mathit{\pi}}^{Long}$ is the column stack of $\Pi $.
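The ML estimation described above can be sketched numerically. Assuming a standard normal F and ${\mathsf{\Delta}}_{\epsilon}$ = I (so the denominator reduces to $\sqrt{{{x}^{\prime}}_{i}{x}_{i}}$), the code below evaluates ln L in (18) on simulated data and climbs it with simple gradient ascent on the score; the simulated data and crude optimizer are stand-ins for a proper ML routine, not the procedure used in the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def Phi(t):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):
    """Standard normal density."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def loglik(pi, X, Z, y):
    """ln L of (18) with Delta_eps = I, so the scale factor is sqrt(x_i'x_i)."""
    ll = 0.0
    for xi, zi, yi in zip(X, Z, y):
        w = np.kron(zi, xi) / np.linalg.norm(xi)     # (z_i' ⊗ x_i') / sqrt(x_i'x_i)
        F = min(max(Phi(w @ pi), 1e-12), 1.0 - 1e-12)
        ll += yi * math.log(F) + (1 - yi) * math.log(1.0 - F)
    return ll

def score(pi, X, Z, y):
    """Left-hand side of the likelihood equations (19) with Delta_eps = I."""
    g = np.zeros_like(pi)
    for xi, zi, yi in zip(X, Z, y):
        w = np.kron(zi, xi) / np.linalg.norm(xi)
        t = w @ pi
        F = min(max(Phi(t), 1e-12), 1.0 - 1e-12)
        g += phi(t) * (yi - F) / (F * (1.0 - F)) * w
    return g

# Simulated data; pi_true is a made-up stand-in, not an estimate from the paper.
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # x_i = (1, x_i1)'
Z = np.column_stack([np.ones(n), rng.normal(size=n)])   # z_i = (1, z_i1)'
pi_true = np.array([0.5, 1.0, -0.5, 0.3])               # column stack of a 2 x 2 Pi
y = np.array([1 if rng.random() < Phi(np.kron(zi, xi) @ pi_true / np.linalg.norm(xi)) else 0
              for xi, zi in zip(X, Z)])

pi_hat = np.zeros(4)
for _ in range(200):                                    # crude gradient ascent on ln L
    pi_hat += 0.01 * score(pi_hat, X, Z, y) / n
```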

Even in the case where ${\mathsf{\Delta}}_{\epsilon}$ is identified, a positive definite estimate of it may not be obtained unless the log likelihood function in (18) is maximized subject to the restriction that ${\mathsf{\Delta}}_{\epsilon}$ is positive definite. Furthermore, such constrained estimates of ${\mathit{\pi}}^{Long}$ and ${\overline{\mathit{\delta}}}_{\mathit{\epsilon}}$ do not satisfy the following likelihood equations:

$$\frac{\partial \text{ln}\mathrm{L}}{\partial {\mathit{\pi}}^{Long}}={\displaystyle \sum _{i=1}^{n}\left[\frac{{f}_{i}({y}_{i}-{F}_{i})}{{F}_{i}(1-{F}_{i})}\right]}\frac{({z}_{i}\otimes {x}_{i})}{\sqrt{(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}}=0$$

$$\frac{\partial \text{ln}\mathrm{L}}{\partial {\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}={\displaystyle \sum _{i=1}^{n}\left[\frac{{f}_{i}({y}_{i}-{F}_{i})}{{F}_{i}(1-{F}_{i})}\right]}({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}){\mathit{\pi}}^{Long}(-\frac{1}{2}){\left[\frac{1}{(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}\right]}^{\frac{3}{2}}(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}{)}^{\prime}=0$$

where ${F}_{i}$ stands for $\mathit{F}\left(\frac{({{\mathit{z}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}}){\mathit{\pi}}^{\mathit{L}\mathit{o}\mathit{n}\mathit{g}}}{\sqrt{\left(\overline{{{\mathit{x}}^{\prime}}_{\mathit{i}}\otimes {{\mathit{x}}^{\prime}}_{\mathit{i}}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}}\right)$ and ${f}_{i}$ is the derivative of ${F}_{i}$.

We now show that ${\mathsf{\Delta}}_{\epsilon}$ is not identified. The log likelihood function in (18) has the property that it does not change when ${\mathit{\pi}}^{Long}$ is multiplied by a positive constant $\mathsf{\kappa}$ and ${\overline{\mathsf{\delta}}}_{\mathsf{\epsilon}}$ inside the square root by ${\mathsf{\kappa}}^{2}$. This can be seen clearly from

$$\mathrm{ln}\mathrm{L}={\displaystyle \sum _{i=1}^{n}\left\{{y}_{i}\text{ln}\mathit{F}\left(\frac{({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i})\left(\kappa {\pi}^{Long}\right)}{\sqrt{\overline{({{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i})}\left({\mathit{\kappa}}^{2}{\overline{\mathit{\delta}}}_{\mathit{\epsilon}}\right)}}\right)+(1-{y}_{i})\text{ln}\left[1-\mathit{F}\left(\frac{({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i})\left(\kappa {\pi}^{Long}\right)}{\sqrt{\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right)\left({\mathit{\kappa}}^{2}{\overline{\mathit{\delta}}}_{\mathit{\epsilon}}\right)}}\right)\right]\right\}}$$

An implication of this property is that if ln L in (18) attains its maximum value at $({\widehat{\mathsf{\pi}}}^{Long}{}^{\prime},{{\widehat{\overline{\mathsf{\delta}}}}^{\prime}}_{\mathsf{\epsilon}})^{\prime}$, then $(\mathsf{\kappa}{\widehat{\mathsf{\pi}}}^{Long}{}^{\prime},{\mathsf{\kappa}}^{2}{{\widehat{\overline{\mathsf{\delta}}}}^{\prime}}_{\mathsf{\epsilon}})^{\prime}$ is another point at which ln L attains its maximum value. Consequently, solving Equations (19) and (20) for ${\mathsf{\pi}}^{Long}$ and ${\overline{\mathsf{\delta}}}_{\mathsf{\epsilon}}$ gives an infinity of solutions. None of these solutions is consistent because ${\mathsf{\Delta}}_{\epsilon}$ is not identified. For this reason we set ${\mathsf{\Delta}}_{\epsilon}$ = $\mathrm{I}$. After inserting this value into Equation (19), we solve it for ${\mathsf{\pi}}^{Long}$. This solution is taken as the maximum likelihood estimate of ${\mathsf{\pi}}^{Long}$.
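The scale invariance that drives the non-identification of ${\mathsf{\Delta}}_{\epsilon}$ can be checked numerically: multiplying ${\mathsf{\pi}}^{Long}$ by κ and the variance matrix by κ² leaves ln L unchanged. A minimal sketch with arbitrary simulated data and a standard normal F:

```python
import math
import numpy as np

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def loglik(pi, Delta, X, Z, y):
    """ln L of (18) for a general positive definite Delta."""
    ll = 0.0
    for xi, zi, yi in zip(X, Z, y):
        t = (np.kron(zi, xi) @ pi) / math.sqrt(xi @ Delta @ xi)
        F = min(max(Phi(t), 1e-12), 1.0 - 1e-12)
        ll += yi * math.log(F) + (1 - yi) * math.log(1.0 - F)
    return ll

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
Z = np.column_stack([np.ones(50), rng.normal(size=50)])
y = rng.integers(0, 2, size=50)
pi = np.array([0.4, -0.2, 0.1, 0.3])   # arbitrary pi^Long
Delta = np.eye(2)

kappa = 3.7                            # rescaling pi by kappa and Delta by kappa^2 ...
gap = abs(loglik(pi, Delta, X, Z, y) - loglik(kappa * pi, kappa**2 * Delta, X, Z, y))
assert gap < 1e-9                      # ... leaves ln L unchanged
```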

The information matrix, denoted by $\mathrm{I}\left({\pi}^{Long}\right)$, is

$$\mathrm{I}\left({\pi}^{Long}\right)={E}\left[-\frac{{\partial}^{2}\text{ln}\mathit{L}}{\partial {\pi}^{Long}\partial \left({\pi}^{Long}\right)\prime}\right]$$

where the elements of this matrix are given in Equation (A8) with ${\mathsf{\Delta}}_{\epsilon}$ = $\mathrm{I}$.

Suppose that ${\mathsf{\Delta}}_{\epsilon}$ = I. Then the positive definiteness of (22) is a necessary condition for ${\pi}^{Long}$ to be identifiable on the basis of the observed variables in (17). If the likelihood equations in (19) have a unique solution, then the inverse of the information matrix in (22) gives the covariance matrix of the limiting distribution of the ML estimator of ${\pi}^{Long}$. Suppose instead that the solution of (19) is not unique. In this case, if Lehmann and Casella’s (1998, p. 467, (5.5)) [15] method of solving (19) for ${\pi}^{Long}$ is followed, then the square roots of the diagonal elements of the inverse of (22), evaluated at Lehmann and Casella’s solutions of (19), give the large-sample standard errors of the estimate of ${\pi}^{Long}$.

The estimates of the coefficients of (7) are obtained by replacing the $\mathsf{\pi}$’s and ${\mathsf{\epsilon}}_{ij}$ of (10) by their maximum likelihood estimates and the mean value zero, respectively. We do not get the correct estimates of the components of ${\gamma}_{ij}$ in (9) from its estimate unless its two different functional forms in (9) and (10) are reconciled. For a continuous ${x}_{ij}$ with j > 0, we recognize that its coefficient ${\gamma}_{ij}$ in (9) and ${\gamma}_{ij}$ in (10) are the same. Therefore, the sum ${z}_{i0}{\pi}_{j0}+{z}_{i1}{\pi}_{j1}+\cdots +{z}_{ip}{\pi}_{jp}+{\epsilon}_{ij}$ in (10) is equal to the function $(1-\frac{{\nu}_{ij}^{*}}{{x}_{ij}})({\alpha}_{ij}^{*}+{\displaystyle \sum _{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}})$ in (9). We have already shown that ${\alpha}_{ij}^{*}$ is equal to $\frac{\partial {\psi}_{i}(.)}{\partial {x}_{ij}^{*}}$ and that the sum of omitted-regressor and measurement-error biases (ORMEB) is equal to $\{{\displaystyle \sum _{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}}-\frac{{v}_{ij}^{*}}{{x}_{ij}}({\alpha}_{ij}^{*}+{\displaystyle \sum _{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}})\}$.

Equation (9) has the form

$${\gamma}_{ij}=(1-{\mathrm{D}}_{ij})({\mathrm{A}}_{ij}+{\mathrm{B}}_{ij})$$

where ${\mathrm{D}}_{ij}=(\frac{{v}_{ij}^{*}}{{x}_{ij}})$, ${\mathrm{A}}_{ij}={\alpha}_{ij}^{*}$, and ${\mathrm{B}}_{ij}={\displaystyle \sum _{g=K+1}^{{L}_{{}_{i}}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}}$.

Equations (9) and (10) imply that

$${\widehat{\pi}}_{j0}+{\displaystyle \sum _{h=1}^{p}{z}_{ih}{\widehat{\pi}}_{jh}}=(1-{\widehat{\mathrm{D}}}_{ij})({\widehat{\mathrm{A}}}_{ij}+{\widehat{\mathrm{B}}}_{ij})$$

where the $\widehat{\pi}$’s are the ML estimates of the $\pi $’s derived in Section 2.4. We do not know how to predict ${\epsilon}_{ij}$ and, therefore, we set it equal to its mean value, which is zero. Equation (24) reconciles the discrepancies between the functional forms of (9) and (10). We have the ML estimates of all the unknown parameters on the left-hand side of Equation (24). From these estimates, it can be determined that for individual i and regressors ${x}_{ij}$, j = 1, …, K:

The estimate of the partial derivative

$$({\alpha}_{ij}^{*})={\widehat{\mathrm{A}}}_{ij}={(1-{\widehat{\mathrm{D}}}_{ij})}^{-1}({\widehat{\pi}}_{j0}+{\displaystyle \sum _{h\in {G}_{1}}{z}_{ih}{\widehat{\pi}}_{jh})}$$

The estimate of omitted-regressor bias

$$({\displaystyle \sum _{g=K+1}^{{L}_{{}_{i}}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}})={\widehat{\mathrm{B}}}_{ij}={(1-{\widehat{\mathrm{D}}}_{ij})}^{-1}({\displaystyle \sum _{h\in {G}_{2}}{z}_{ih}{\widehat{\pi}}_{jh}})$$

The estimate of measurement-error bias

$$[-\text{}(\frac{{\nu}_{ij}^{*}}{{x}_{ij}})({\alpha}_{ij}^{*}+{\displaystyle \sum _{g=K+1}^{{L}_{i}}{\lambda}_{igj}^{*}{\alpha}_{ig}^{*}})]=-{\widehat{\mathrm{D}}}_{ij}({\widehat{\mathrm{A}}}_{ij}+{\widehat{\mathrm{B}}}_{ij})$$

where the p + 1 coefficient drivers are allocated either to a group denoted by ${G}_{1}$ or to a group denoted by ${G}_{2}$; ${G}_{2}=p+1-{G}_{1}$. The unknowns in formulas (25)–(27) are ${\widehat{\mathrm{D}}}_{ij}$ and ${G}_{1}$. We discuss how to determine these unknowns below.
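The decomposition in (24)–(26) can be illustrated with a small sketch. The $\widehat{\pi}$ values and driver values below are hypothetical, ${\widehat{\mathrm{D}}}_{ij}$ is set to zero (measurement error assumed negligible), and the allocation of drivers to ${G}_{1}$ and ${G}_{2}$ is an arbitrary example; here ${G}_{1}$ and ${G}_{2}$ are written as index sets.

```python
import numpy as np

# Hypothetical ML estimates pi_hat_jh and driver values z_ih for one individual, h = 0, ..., 6.
pi_hat = np.array([-4.2, 0.17, -0.002, 0.52, 0.026, 0.07, -1.3e-5])
z = np.array([1.0, 45.0, 2025.0, 1.0, 47.0, 12.0, 23000.0])
terms = pi_hat * z                       # the terms of pi_hat_j0 + sum_h z_ih pi_hat_jh

D_hat = 0.0                              # D_hat_ij = 0: negligible measurement error
G1 = [0, 5]                              # intercept plus drivers assigned to the bias-free part
G2 = [1, 2, 3, 4, 6]                     # remaining drivers -> omitted-regressor bias

A_hat = terms[G1].sum() / (1.0 - D_hat)  # formula (25)
B_hat = terms[G2].sum() / (1.0 - D_hat)  # formula (26)
gamma_hat = (1.0 - D_hat) * (A_hat + B_hat)
assert abs(gamma_hat - terms.sum()) < 1e-9   # the pieces reassemble into (24)
```

The assertion simply checks that the two pieces add back up to the whole coefficient, which is what the reconciliation in (24) requires when ${\widehat{\mathrm{D}}}_{ij}$ = 0.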

The type of data Greene (2012, pp. 244–246, Example 8.9) [3] used can tell us about ${\widehat{\mathrm{D}}}_{ij}$. Which of the terms in ${\widehat{\pi}}_{j0}+{\displaystyle \sum _{h=1}^{p}{z}_{ih}{\widehat{\pi}}_{jh}}$ should go into $({\widehat{\pi}}_{j0}+{\displaystyle \sum _{h\in {G}_{1}}{z}_{ih}{\widehat{\pi}}_{jh})}$ can be decided after examining the sign and magnitude of each term in ${\widehat{\pi}}_{j0}+{\displaystyle \sum _{h=1}^{p}{z}_{ih}{\widehat{\pi}}_{jh}}$. If we are not sure of any particular value of ${G}_{1}$, then we can present the estimated kernel functions for ${\alpha}_{ij}^{*}$, i = 1, …, n, for various values of ${G}_{1}$.

Regarding ${\mathrm{D}}_{ij}$ we can make the following assumption:

Assumption 4. For all i and j, the measurement error ${v}_{ij}^{*}$ forms a negligible proportion $(\frac{{v}_{ij}^{*}}{{x}_{ij}})$ of ${x}_{ij}$.

Alternatively, the percentage $(\frac{{v}_{ij}^{*}}{{x}_{ij}})$ × 100 can be specified if we have the type of data Greene (2012, pp. 244–246, Example 8.9) [3] had. If such data are not available, then we can make Assumption 4. Under this assumption, ${(1-{\widehat{\mathrm{D}}}_{ij})}^{-1}$ in (25) and (26) is equated to 1, and the number of unknown quantities in formulas (25) and (26) is reduced to 1.

Under these assumptions, we can obtain the estimates ${\widehat{\mathrm{A}}}_{ij}$ and their standard errors. These standard errors are based on those of the $\widehat{\pi}$’s involved in ${\widehat{\mathrm{A}}}_{ij}$. If the estimate of ${\mathrm{A}}_{ij}$ given by formula (25) is accurate, then our estimate of the partial derivative ${\alpha}_{ij}^{*}$ is free of omitted-regressor and measurement-error biases, and also of specification errors (i)–(iv) listed in Section 2.2.5.

The choice of the dependent variable and regressors to be included in (7) is entirely dictated by the partial derivatives we want to learn. Learning a partial derivative, say ${\alpha}_{ij}^{*}=\frac{\partial {y}_{i}^{*}}{\partial {x}_{ij}^{*}}$, requires (i) the use of ${y}_{i}^{*}$ and ${x}_{ij}^{*}$ as the dependent variable and a regressor of (7), respectively; (ii) the use of the z’s in (25) and (26) as the coefficient drivers in (10); and (iii) the use of the values of ${G}_{1}$ and ${\mathrm{D}}_{ij}$ in (25). These requirements show that learning about one partial derivative is more straightforward than learning about several. Therefore, in our practical work we will include in our basic model (7) only one non-constant regressor besides the intercept.

It should be remembered that the coefficient drivers in (10) are different from the regressors in (7). There are also certain requirements that the coefficient drivers should satisfy. They explain variations in the components of the coefficients of (7), as is clear from Equations (25) and (26). After deciding that we want to learn about ${\alpha}_{ij}^{*}=\frac{\partial {y}_{i}^{*}}{\partial {x}_{ij}^{*}}$ and knowing from (23) that this ${\alpha}_{ij}^{*}$ is only a part of the coefficient ${\gamma}_{ij}$ of the regressor ${x}_{ij}^{*}$ in the (${y}_{i}^{*}$, ${x}_{ij}^{*}$ )-relationship, we need to include in (10) those coefficient drivers that facilitate accurate evaluation of the formulas (25)–(27). Initially, we do not know what such coefficient drivers are. We have decided to use as coefficient drivers those variables that economists include in their models of the (${y}_{i}^{*}$, ${x}_{ij}^{*}$)-relationship as additional explanatory variables. Specifically, instead of using them as additional regressors we use them as coefficient drivers in (10).13^{,}14 It follows from Equations (25) and (26) that among all the coefficient drivers included in (10) there should be one subset of ${G}_{1}$ coefficient drivers that is highly correlated with the bias-free partial derivative part and another subset of ${G}_{2}$ coefficient drivers that is highly correlated with the omitted-regressor bias of the jth coefficient of (7).15

If ${G}_{1}$ and ${\mathrm{D}}_{ij}$ are unknown, as they usually are, then we should make alternative assumptions about them and compare the results obtained under these alternative assumptions.

The marginal effect of any one of the included regressors on the probability that ${y}_{i}=1$ is

$$\frac{\partial \text{Prob}({y}_{i}=1|{x}_{i},{z}_{i})}{\partial {x}_{i}}={f}_{i}({{x}^{\prime}}_{i}\Pi {z}_{i}/\sqrt{{{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}}\text{}|{x}_{i},{z}_{i})\left(\frac{{{z}^{\prime}}_{i}{\Pi}^{\prime}}{{\left({{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}\right)}^{(1/2)}}-\frac{{{x}^{\prime}}_{i}\Pi {z}_{i}\left({\mathsf{\Delta}}_{\epsilon}{x}_{i}\right)}{{\left({{x}^{\prime}}_{i}{\mathsf{\Delta}}_{\epsilon}{x}_{i}\right)}^{(3/2)}}\right)$$

where we set ${\mathsf{\Delta}}_{\epsilon}$ = $\mathrm{I}$.

These effects are impure because they involve omitted-regressor and measurement-error biases. It is not easy to integrate omitted-regressor and measurement-error biases out of the probability in (15).16
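The marginal-effect formula in (28) can be cross-checked against a finite-difference derivative of the probability in (15). The values of ${x}_{i}$, ${z}_{i}$, and $\Pi$ below are hypothetical, F is taken to be the standard normal, and ${\mathsf{\Delta}}_{\epsilon}$ = I:

```python
import math
import numpy as np

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def prob(x, Pi, z, Delta):
    """Prob(y_i = 1 | x_i, z_i) from (15)."""
    return Phi((x @ Pi @ z) / math.sqrt(x @ Delta @ x))

def marginal_effect(x, Pi, z, Delta):
    """The derivative in (28): phi(t) * (Pi z / q - (x'Pi z) Delta x / q^3), q = sqrt(x'Delta x)."""
    q = math.sqrt(x @ Delta @ x)
    s = x @ Pi @ z
    return phi(s / q) * (Pi @ z / q - s * (Delta @ x) / q**3)

x = np.array([1.0, 1.5])
z = np.array([1.0, -0.5, 2.0])
Pi = np.array([[0.2, 0.1, -0.3],
               [0.4, 0.0, 0.1]])
Delta = np.eye(2)

me = marginal_effect(x, Pi, z, Delta)
h = 1e-6                                   # central finite differences as a numerical check
num = np.array([(prob(x + h * e, Pi, z, Delta) - prob(x - h * e, Pi, z, Delta)) / (2 * h)
                for e in np.eye(2)])
assert np.allclose(me, num, atol=1e-5)
```

Note that the intercept element of ${x}_{i}$ is differentiated along with the rest; only the elements corresponding to genuine regressors are of substantive interest.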

This section is designed to give some specific empirical examples of the type of misspecification usually found in actual data sets. Several authors have studied the earnings–education relationship. We are also interested in learning about the partial derivative of earnings with respect to the education of individuals. For this purpose, we set up the model

$${y}_{i}^{*}={\gamma}_{i0}+{x}_{i1}^{*}{\gamma}_{i1}$$

where ${y}_{i}^{*}$ denotes unobserved earnings, ${x}_{i1}^{*}$ denotes unobserved education, and the components of ${\gamma}_{i0}$ and ${\gamma}_{i1}$ are given in (8) and (9), respectively. Equation (29) is derived in the same way that (7) is derived. Like (7), Equation (29) is devoid of four specification errors. In our empirical work, we use ${x}_{i1}$ = years of schooling as a proxy for education.

Greene (2012, p. 14) [3] used model (29) after changing it to a fixed coefficient model with an added error term, in which the dependent variable is the log of earnings (hourly wage times hours worked). He pointed out that this model neglects the fact that most people have higher incomes when they are older than when they are young, regardless of their education. Therefore, Greene argued that the coefficient on education will overstate the marginal impact of education on earnings. He further pointed out that if age and education are positively correlated, then his regression model will associate all the observed increases in income with increases in education. Greene concluded that a better specification would account for the effect of age. He also pointed out that income tends to rise less rapidly in the later earning years than in the early ones. To accommodate this phenomenon and the age effect, Greene (2012, p. 14) [3] included in his model the variables age and age^{2}.

Recognizing the difficulties in measuring education pointed out by Greene (2012, p. 221) [3], we measure education as years of schooling plus measurement error. Another problem Greene discussed is the endogeneity of education. We handle this problem by making Assumptions II and III. Under these assumptions, the conditional expectation $\mathrm{E}\left({y}_{i}^{*}\right|{x}_{i},{z}_{i})={{x}^{\prime}}_{i}\Pi {z}_{i}$ exists. Other regressors Greene (2012, p. 708) [3] included in his labor supply model are kids, husband’s age, husband’s education, and family income.

The question that arises is the following: How should we handle the variables mentioned in the previous two paragraphs? Researchers who have studied the earnings–education relationship have often included these variables as additional explanatory variables in earnings and education equations with fixed coefficients. Greene (2012, p. 699) [3] also included the interaction between age and education as an additional explanatory variable. Previous studies, however, dealt with fixed coefficient models and did not involve VCs of the type in (29). The coefficients of the earnings–education relationship in (29) have unwanted omitted-regressor and measurement-error biases as their portions. We need to separate these biases from the corresponding partial derivatives, as shown in (7) and (9). How do we perform this separation? Based on the derivation in (1)–(9), we use the variables identified in the previous two paragraphs as the coefficient drivers.

When these coefficient drivers are included, the following two equations get added to Equation (29):

$${\gamma}_{ij}={z}_{i0}{\pi}_{j0}+{z}_{i1}{\pi}_{j1}+\cdots +{z}_{i6}{\pi}_{j6}+{\epsilon}_{ij},\text{\hspace{1em}}j=0,1$$

where ${z}_{i0}$ = 1 for all i, ${z}_{i1}$ = Wife’s Age, ${z}_{i2}$ = Wife’s ${\text{Age}}^{2}$, ${z}_{i3}$ = Kids, ${z}_{i4}$ = Husband’s age, ${z}_{i5}$ = Husband’s education, and ${z}_{i6}$ = Family income.

It can be seen from (11) that Equation (30) with j = 0 makes the coefficient drivers act as additional regressors in (29), and Equation (30) with j = 1 introduces interactions between education and each of the coefficient drivers. Greene (2012, p. 699) [3] informed us that binary choice models with interaction terms have received considerable attention in recent applications. Note that for j = 1, h = 1, …, 6, ${\pi}_{jh}$ should not be equated to $\frac{{\partial}^{2}{y}_{i}^{*}}{\partial {x}_{i1}^{*}\partial {z}_{ih}}$ because ${\gamma}_{i1}$ is not equal to $\frac{\partial {y}_{i}^{*}}{\partial {x}_{i1}^{*}}$.

Appendix Table F5.1 of Greene (2012) [3] contains 753 observations used in the Mroz study of the labor supply behavior of married women. We use these data in this section. Of the 753 married women in the sample, 428 were participants and the remaining 325 were nonparticipants in the formal labor market. This means that ${y}_{i}$ = 1 for 428 observations and ${y}_{i}$ = 0 for 325 observations. The data on ${x}_{i1}$ and the z’s for these 753 married women are obtained from Greene’s Appendix Table F5.1. Using these data and applying an iteratively rescaled generalized least squares method to (29) and (30) we obtain

$$\begin{array}{l}{\widehat{\gamma}}_{i0}=\underset{\left(81.0820\right)}{27.6573}+\underset{\left(3.9504\right)}{0.1316}{\mathrm{z}}_{i1}\text{}-\text{}\underset{\left(0.0441\right)}{0.0049}{z}_{i2}-\underset{\left(7.0849\right)}{11.9494}{z}_{i3}-\underset{\left(0.6261\right)}{0.4414}{z}_{i4}\\ -\underset{\left(0.7910\right)}{1.4708}{z}_{i5}+\underset{\left(0.0002\right)}{0.0003}{z}_{i6}\end{array}$$

$$\begin{array}{l}{\widehat{\gamma}}_{i1}=-\underset{\left(6.9397\right)}{4.2328}+\underset{\left(0.3405\right)}{0.1696}{z}_{i1}\text{}-\underset{\left(0.0038\right)}{0.0019}{z}_{i2}+\underset{\left(0.6076\right)}{0.5168}{z}_{i3}+\underset{\left(0.0550\right)}{0.0261}{z}_{i4}\text{}\\ +\underset{\left(0.0676\right)}{0.0702}{z}_{i5}\text{}-\underset{\left(0.000019\right)}{0.000013}{z}_{i6}\end{array}$$

From these equations we have computed the estimates of ${\gamma}_{i0}$ and ${\gamma}_{i1}$ and their standard errors. To conserve space, Table 1 below is restricted to contain these quantities only for five married women.
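The arithmetic behind such a computation can be sketched by evaluating (31) and (32) at one driver vector. The coefficient values below are the point estimates reported above; the driver values describe a hypothetical married woman, not an observation from the Mroz data, and standard errors are omitted from this sketch.

```python
import numpy as np

# Point estimates from (31) and (32); the z values below are hypothetical.
g0 = np.array([27.6573, 0.1316, -0.0049, -11.9494, -0.4414, -1.4708, 0.0003])   # (31)
g1 = np.array([-4.2328, 0.1696, -0.0019, 0.5168, 0.0261, 0.0702, -0.000013])    # (32)
z = np.array([1.0, 40.0, 1600.0, 2.0, 42.0, 12.0, 30000.0])  # const, age, age^2, kids, h. age, h. edu, income

gamma_i0 = g0 @ z   # estimate of gamma_i0 for this hypothetical woman
gamma_i1 = g1 @ z   # estimate of gamma_i1 for this hypothetical woman
```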

From (23) we obtain

$${\widehat{\gamma}}_{i1}=(1-{\widehat{\mathrm{D}}}_{i1})({\widehat{\mathrm{A}}}_{i1}+{\widehat{\mathrm{B}}}_{i1})$$

Our interest is in the partial derivative ${\widehat{\mathrm{A}}}_{i1}={\alpha}_{i1}^{*}=\frac{\partial {y}_{i}^{*}}{\partial {x}_{i1}^{*}}$, which is the bias-free portion of ${\widehat{\gamma}}_{i1}$. This partial derivative measures the “impact” of the ith married woman’s education on her earnings. Our prior belief is that the right sign for this bias-free portion is positive. It is now appropriate to use the formula $({\alpha}_{i1}^{*})={\widehat{\mathrm{A}}}_{i1}={(1-{\widehat{\mathrm{D}}}_{i1})}^{-1}({\widehat{\pi}}_{10}+{\displaystyle \sum _{h\in {G}_{1}}{z}_{ih}{\widehat{\pi}}_{1h})}$ in (25) with j = 1 to estimate ${\alpha}_{i1}^{*}=\frac{\partial {y}_{i}^{*}}{\partial {x}_{i1}^{*}}$. We assume that ${\widehat{\mathrm{D}}}_{i1}$ is negligible. We need to choose the terms in the sum $({\widehat{\pi}}_{10}+{\displaystyle \sum _{h\in {G}_{1}}{z}_{ih}{\widehat{\pi}}_{1h})}$ from the terms on the right-hand side of Equation (32). It can be seen from this equation that if we retain the estimate ${\widehat{\pi}}_{10}$ = −$\underset{\left(6.9397\right)}{4.2328}$ in the sum, then the sum does not give a positive estimate of ${\alpha}_{i1}^{*}$ for any combination of the six coefficient drivers in (32). Therefore, we remove ${\widehat{\pi}}_{10}$ from $({\widehat{\pi}}_{10}+{\displaystyle \sum _{h\in {G}_{1}}{z}_{ih}{\widehat{\pi}}_{1h})}$. We expect the impact of education on earnings to be small. To obtain the smallest possible positive estimate of ${\alpha}_{i1}^{*}$, we choose the smallest positive term on the right-hand side of (32). This term is +$\underset{\left(0.0676\right)}{0.0702}{z}_{i5}$. Hence we set the z’s other than ${z}_{i5}$ in $\sum _{h\in {G}_{1}}{z}_{ih}{\widehat{\pi}}_{1h}$ equal to zero. Thus, we obtain ${G}_{1}$ = 1 and ${\widehat{\alpha}}_{i1}^{*}$ = +$\underset{\left(0.0676\right)}{0.0702}{z}_{i5}$. The value of ${z}_{i5}$ times 0.0702 gives the estimate of the impact of the ith married woman’s education on her earnings.

To conserve space, we present the values of ${\widehat{\alpha}}_{i1}^{*}$ = +$\underset{\left(0.0676\right)}{0.0702}{z}_{i5}$ only for i = 1, …, 5 in Table 2. The impact estimates for all 753 married women are presented in the form of a histogram and a kernel density function in Figure 1 below. We interpret the estimate $\underset{\left(0.0676\right)}{0.0702}{z}_{i5}$ to imply that an additional year of schooling is associated with a $\underset{\left(0.0676\right)}{0.0702}{z}_{i5}$ × 100 percent increase in earnings. This impact of education on earnings is different for different married women. The impact of a wife’s education on her earnings is 0.0702 times her husband’s education.18 Our results in Table 2 and Figure 1 below show that the greater the years of schooling of a husband, the larger the impact of his wife’s education on her earnings. However, the estimates of ${\alpha}_{i1}^{*}$ appear to be high, at least for some married women whose husbands had more years of schooling. Therefore, they may contain some omitted-regressor biases.

A histogram and a kernel density function, presented in Figure 1, are much more revealing than a table containing the values $0.0702{z}_{i5}$, i = 1, …, 753, and their standard errors would be. Such a table would also occupy a lot of space without telling us much. We are using Figure 1 as a descriptive device. The kernel density function in Figure 1 is multimodal. All the estimates in this figure have the correct signs.
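The kernel density device behind such a figure can be sketched as follows. The husband-education values are simulated stand-ins, not the Mroz data, and the bandwidth is an arbitrary choice; the kernel estimator itself is the plain Gaussian one.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Plain Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(2)
husband_edu = rng.integers(6, 18, size=753).astype(float)  # hypothetical years of schooling
impacts = 0.0702 * husband_edu                             # impact estimates 0.0702 * z_i5
grid = np.linspace(impacts.min() - 0.5, impacts.max() + 0.5, 400)
dens = gaussian_kde(impacts, grid, bandwidth=0.08)
mass = (dens * (grid[1] - grid[0])).sum()                  # total mass should be close to 1
```

Because the impacts take only a dozen distinct values here, a small bandwidth produces a multimodal density, much as described for Figure 1.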

Greene’s (2012, p. 708) [3] estimate 0.0904 of the coefficient of education in his estimated labor supply model is not comparable to the estimates in Figure 1 because (i) his model is different from our model; (ii) the dependent variable of his labor supply model in Greene (2012, p. 683) [3] is the log of the dependent variable of our model (29); and (iii) our definition of $\frac{\partial {y}_{i}^{*}}{\partial {x}_{i1}^{*}}$ in (3) is different from Greene’s (2012, p. 14) [3] definition of $\frac{\partial {y}_{i}}{\partial {x}_{i1}}$. Greene’s estimate is some kind of average estimate applicable to all 753 married women. It is unreasonable to expect his average estimate to be close to the estimate for each married woman. We will now show that, given the 6 coefficient drivers in (32), it is not possible to reduce the magnitudes of all the estimates in Figure 1 without changing the positive sign of some of the estimates in the left tail of Figure 1 to a negative sign. This is what has happened in Figure 2. To reduce the magnitudes of the estimates of the bias-free parts ${\widehat{\mathrm{A}}}_{i1}={\alpha}_{i1}^{*}=\frac{\partial {y}_{i}^{*}}{\partial {x}_{i1}^{*}}$ of the ${\widehat{\gamma}}_{i1}$’s given in Figure 1 for all i, we use the alternative estimates, $0.0702{z}_{i5}-0.000013{z}_{i6}$, of the bias-free parts of the ${\widehat{\gamma}}_{i1}$’s, called “modified ${\widehat{\mathrm{A}}}_{i1}$, i = 1, …, 753.” The histogram and kernel density function for the modified ${\widehat{\mathrm{A}}}_{i1}$ are given in Figure 2 below.

Five of the estimates in the left tail of this figure have the wrong (negative) sign. More wrong signs will occur if we try to further reduce the magnitudes of the modified estimates. The kernel density function of the modified estimates is unimodal, unlike the kernel density function in Figure 1. The range of the modified estimates is smaller than that of the estimates in Figure 1.

From these results it is incorrect to conclude that the conventional discrete choice models and their method of estimation give better and unambiguous results than the latent regression model in (11) and (14) and formula (25). The reasons are the following: (i) the conventional models, including discrete choice models, suffer from the four specification errors listed in Section 2.2.5, whereas the model in (11) and (14) is free of these errors; (ii) the conventional latent regression models have nonunique coefficients and error terms, whereas the model in (11) and (14) is based on model (5), which has unique coefficients and a unique error term. How can a model with nonunique coefficients and error term give unambiguous results? (iii) The conventional method of estimating discrete choice models appears to be simple because these models are based on the assumption that “the” omitted regressors constituting their error terms do not introduce omitted-regressor biases into the coefficients of their included regressors. The model in (11) and (14) is not based on any such assumption; (iv) Pratt and Schlaifer pointed out that in the conventional model the condition that its regressors be independent of “the” omitted regressors constituting its error term is meaningless. The error terms of the model in (11) and (14) are not functions of “the” omitted regressors.

We have removed four major specification errors from the conventional formulation of probit and logit models. A reformulation of Yatchew and Griliches’ (YG’s) probit model so that it is devoid of these specification errors changes their results. We also find that their model has nonunique coefficients and a nonunique error term. YG assume that the omitted regressors constituting the error term of their model do not introduce omitted-regressor biases into the coefficients of the included regressors. We have developed a method of calculating the bias-free partial derivative portions of the coefficients of a correctly specified probit model.

We are grateful to four referees for their thoughtful comments.

All authors contributed equally to the paper.

The authors declare no conflict of interest.

In this Appendix, we show that any of the models estimated in the econometric literature is more restrictive than (1). We also show that these restrictions, when imposed on (1), lead to several specification errors.

It is widely assumed that the error term in an econometric model arises because of omitted regressors influencing the dependent variable. We can use appropriate separability and other conditions of Felipe and Fisher (2003) [17] to separate the included regressors, ${x}_{i1}^{*},...,{x}_{iK}^{*}$, from the omitted regressors, ${x}_{i,K+1}^{*},...,{x}_{i{L}_{i}}^{*}$, so that (1) can be written as

$${y}_{i}^{*}={\psi}_{i1}({x}_{i1}^{*},...,{x}_{iK}^{*})+{\psi}_{i2}({x}_{i,K+1}^{*},...,{x}_{i,{L}_{i}}^{*})={\psi}_{i1}({x}_{i1}^{*},...,{x}_{iK}^{*};{\beta}_{1},...,{\beta}_{p})+{\epsilon}_{i}$$

where ${\epsilon}_{i}$ = ${\psi}_{i2}({x}_{i,K+1}^{*},...,{x}_{i,{L}_{i}}^{*})$ is a function of omitted regressors. Let ${\epsilon}_{i}$ be the random error term and let ${\psi}_{i1}({x}_{i1}^{*},...,{x}_{iK}^{*})$ be equal to ${\psi}_{i1}({x}_{i1}^{*},...,{x}_{iK}^{*};{\beta}_{1},...,{\beta}_{p})$, which is an unknown function of ${x}_{i1}^{*},...,{x}_{iK}^{*}$.20 Let ${\beta}_{1},...,{\beta}_{p}$ be the fixed parameters representing the constant features of model (A1). From the above derivation we know the type of conditions which, when imposed on (1), give exactly the model in Greene (2012, p. 181, (7-3)) [3].

The separability conditions used to rewrite (1) in the form of (A1) are very restrictive, as shown by Felipe and Fisher (2003) [17]. Furthermore, in his scrutiny of the Rotterdam School demand models, Goldberger (1987) [19] pointed out that the treatment of any features of (1) as constant parameters such as ${\beta}_{1},...,{\beta}_{p}$ may be questioned and these parameters are not unique.21 Use of non-unique parameters is a specification error. Therefore, the functional form of (A1) is most probably misspecified.

Skyrms (1988, p. 59) [7] made the important point that spurious correlations disappear when we control for all relevant pre-existing conditions.22 Even though some of the regressors ${x}_{i,K+1}^{*},...,{x}_{i,{L}_{i}}^{*}$ represent all relevant pre-existing conditions in our formulation of (A1), they cannot be controlled for, as we should to eliminate false (spurious) correlations, since they are included in the error term of (A1). Therefore, in (A1), the correlations between ${y}_{i}^{*}$ and some of ${x}_{i1}^{*},...,{x}_{iK}^{*}$ can be spurious.

Karlsen, Myklebust and Tjøstheim (KMT) (2007) [10] considered a model of the type (A1) for time series data. They assumed that $\left\{{\epsilon}_{t}\right\}$ is an unobserved stationary process and that $\{{X}_{t1}^{*},...,{X}_{tK}^{*}\}$ and $\left\{{Y}_{t}^{*}\right\}$ are both observed nonstationary processes of unit-root type. White (1980, 1982) [11,12] also considered (A1) for time-series data and assumed that the ${\epsilon}_{t}$’s are serially independent and are distributed with mean zero and constant variance. He also assumed that ${\epsilon}_{t}$ is uncorrelated with $\{{X}_{t1}^{*},...,{X}_{tK}^{*}\}$ for all t. Pratt and Schlaifer (1988, 1984) [1,9] argued that these assumptions are meaningless because they concern ${\epsilon}_{t}$, which is not unique and is composed of variables of which we know nothing. Any distributional assumption about a nonunique error term is arbitrary.

Consider (A1) again. Let X = ${\psi}_{i1}({x}_{i1}^{*},...,{x}_{iK}^{*})$, Y = ${y}_{i}^{*}$, and M = ${\epsilon}_{i}$ be three random variables. Then X and M are statistically independent if their joint distribution can be expressed as the product of their marginal distributions. This condition cannot be verified directly.

Let H(X) and K(M) be functions of X and M, respectively. As Whittle (1976) [20] pointed out, we must live with the idea that, for given random variables such as M and X, we may only be able to assert the validity of the condition

E[H(X)K(M)] = E[H(X)]E[K(M)]  (A2)

where the functions H and K are such that E[H(X)] < $\infty $ and E[K(M)] < $\infty $. If condition (A2) holds only for certain functions H and K, then we cannot say that X and M are independent. Suppose that Equation (A2) holds only for linear K, so that E[H(X)M] = E(M)E[H(X)] for any H for which E[H(X)] < $\infty $. This equation is equivalent to $\mathrm{E}\left({\epsilon}_{i}\right|x)=\mathrm{E}({\epsilon}_{i})$, which shows that the disturbance at observation i is mean independent of x at i. This may be true for all i in the sample. This mean independence implies Greene’s (2012, p. 183) [3] assumption (3).
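The gap between mean independence and full independence can be checked numerically. In the following sketch (our construction, not the paper's), M is mean independent of X, so E[H(X)M] = E(M)E[H(X)] holds for every H, yet condition (A2) fails for K(M) = M², so X and M are not independent.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
x = rng.normal(size=n)
m = x * rng.normal(size=n)    # E[M|X] = 0, but Var(M|X) = X**2

h = x**2                      # one choice of H(X) with finite expectation
mean_lin = np.mean(h * m)             # ~ E[H(X)]E[M] = 0: (A2) holds for linear K
lhs_sq = np.mean(h * m**2)            # E[H(X)M**2] = E[X**4] = 3
rhs_sq = np.mean(h) * np.mean(m**2)   # E[H(X)]E[M**2] = 1: (A2) fails for K(M) = M**2
print(mean_lin, lhs_sq, rhs_sq)
```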

Let us now drop condition (A2) and assume instead that:

(A2.1) H(X) is a Borel function of X,

(A2.2) E|Y| < ∞,

(A2.3) E|YH(X)| < ∞.

Using these assumptions, Rao (1973, p. 97) [14] proved that

E[H(X)Y | X = x] = H(x)E(Y | x)  (A3)

and

E{H(X)[Y − E(Y | X)]} = 0  (A4)

Equations (A3) and (A4) prove that, under conditions (A2.1)–(A2.3), E(Y | x) exists.23
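A quick Monte Carlo check of (A4) under a simulated design of our own: with Y = sin(X) + noise, so that E(Y|X) = sin(X), the deviation Y − E(Y|X) is orthogonal to any Borel function H(X) satisfying the moment conditions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.normal(size=n)
cond_mean = np.sin(x)                 # E(Y|X) in this simulated design
y = cond_mean + rng.normal(size=n)    # Y = E(Y|X) + deviation

# Sample analogue of (A4) for three Borel functions H.
orths = [np.mean(H(x) * (y - cond_mean)) for H in (lambda v: v**2, np.cos, np.abs)]
print(orths)                          # each entry is near zero
```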

Under the necessary and sufficient conditions of Kagan, Linnik and Rao’s (KLR’s) (1973, pp. 11–12) [22] lemma (reproduced in Swamy and von zur Muehlen (1988, pp. 114–115) [23]), the following two equations hold almost surely:

$$\mathrm{E}({Y}_{i}^{*}|{x}_{i1}^{*},...,{x}_{iK}^{*})={\displaystyle \sum _{j=0}^{K}{x}_{ij}^{*}{\beta}_{j}},\text{ with }{x}_{i0}^{*}\equiv 1,\text{ for }i=1,...,n$$ (A5)

and

$$\mathrm{Var}({Y}_{i}^{*}|{x}_{i1}^{*},...,{x}_{iK}^{*})={\sigma}_{\epsilon}^{2},\text{ a finite, positive constant, for all }i=1,\dots ,n$$ (A6)
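One sufficient case in which (A5) and (A6) hold is joint normality of ${Y}_{i}^{*}$ and the regressors. The following sketch (a simulated design of ours, not the paper's data) bins on x and verifies that, within each bin, the conditional mean of Y* is linear in x and the deviation from it has the same variance everywhere.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)  # jointly normal (Y*, X*)
e = y - (1.0 + 2.0 * x)                            # deviation from E(Y*|x)

# Within x-bins: the conditional mean is linear, the deviation variance constant.
edges = [-2.0, -1.0, 0.0, 1.0, 2.0]
stats = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (x >= lo) & (x < hi)
    lin_gap = y[sel].mean() - (1.0 + 2.0 * x[sel].mean())  # ~ 0, matching (A5)
    stats.append((lin_gap, e[sel].var()))                  # var ~ 0.25, matching (A6)
print(stats)
```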

If the conditions of KLR’s lemma are not satisfied, then (A5) and (A6) are not the correct first and second conditional moments of ${Y}_{i}^{*}$. The problem is that we cannot know a priori whether or not these conditions are satisfied. The conditions of KLR’s Lemma are not satisfied if $\{{x}_{t1}^{*},...,{x}_{tK}^{*}\}$ and $\left\{{y}_{t}^{*}\right\}$ are integrated series.24 Furthermore, $\left\{{y}_{t}^{*}\right\}$ cannot be made stationary by first differencing it once or more than once because of the nonlinearity of ${\psi}_{t1}({x}_{t1}^{*},...,{x}_{tK}^{*})$. In these cases, we can use, as Berenguer-Rico and Gonzalo (2013) [24] do, the concepts of summability, cosummability and balanced relationship to analyze model (A1). Clearly the conditions of KLR’s lemma are stronger than White’s assumptions which, in turn, are stronger than KMT’s (2007) [10] assumptions. It is clear that KMT’s assumptions are not always satisfied.
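The claim that first differencing cannot induce stationarity when ${\psi}_{t1}$ is nonlinear can be illustrated by simulation (our sketch, with an assumed quadratic ψ): with ${y}_{t}^{*}={w}_{t}^{2}$ and ${w}_{t}$ a random walk, the variance of the first difference of ${y}_{t}^{*}$ keeps growing with t.

```python
import numpy as np

rng = np.random.default_rng(4)
paths, T = 5_000, 500
u = rng.normal(size=(paths, T))
w = np.cumsum(u, axis=1)      # a unit-root (random walk) regressor
y = w**2                      # nonlinear psi applied to the random walk
dy = np.diff(y, axis=1)       # first difference of y*_t

var_early = dy[:, 49].var()   # theoretical Var = 4*50 + 2 = 202 at this date
var_late = dy[:, 498].var()   # theoretical Var = 4*499 + 2 = 1998: still growing
print(var_early, var_late)
```

Since the variance of the differenced series depends on t, the usual unit-root toolkit does not apply, which motivates the summability concepts of Berenguer-Rico and Gonzalo cited above.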

Consider the log-likelihood function in (18). For this function,

$$\begin{array}{l}\frac{{\partial}^{2}\mathrm{ln}L}{\partial {\mathit{\pi}}^{Long}\partial ({\mathit{\pi}}^{Long}{)}^{\prime}}=\frac{\partial}{\partial \left({\mathit{\pi}}^{Long}\right)\prime}\text{}{\displaystyle \sum _{i=1}^{n}\left[\frac{{y}_{i}{f}_{i}}{{F}_{i}}-\frac{(1-{y}_{i}){f}_{i}}{(1-{F}_{i})}\right]}\text{}\frac{({z}_{i}\otimes {x}_{i})}{\sqrt{\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}}\\ ={\displaystyle \sum _{i=1}^{n}[{y}_{i}(\frac{{{f}^{\prime}}_{i}}{{F}_{i}}-\frac{{f}_{i}^{2}}{{F}_{i}^{2}})-(1-{y}_{i})(\frac{{{f}^{\prime}}_{i}}{(1-{F}_{i})}+\frac{{f}_{i}^{2}}{{(1-{F}_{i})}^{2}})]}\text{}\frac{({z}_{i}\otimes {x}_{i})({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i})}{\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}\end{array}$$ (A7)

where ${{f}^{\prime}}_{i}$ is the derivative of ${f}_{i}$ with respect to its scalar argument.

$\mathrm{E}\left({y}_{i}\right)=1\times {F}_{i}+0\times (1-{F}_{i})={F}_{i}$. Using this result in (A7) gives (A8), where the condition that n > (K + 1)(p + 1) is needed for the matrix on the right-hand side of (A8) to be positive definite. Similarly, taking the expectation of both sides of Equation (A11) gives (A12), where the condition that n > (K + 1)(K + 1) is needed for the matrix on the right-hand side of (A12) to be positive definite.

$$\mathrm{E}\left[-\frac{{\partial}^{2}\mathrm{ln}L}{\partial {\mathit{\pi}}^{Long}\partial ({\mathit{\pi}}^{Long}{)}^{\prime}}\right]=\text{}{\displaystyle \sum _{i=1}^{n}\frac{{f}_{i}^{2}}{{F}_{i}(1-{F}_{\mathrm{i}})}}\text{}\frac{({z}_{i}\otimes {x}_{i})({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i})}{\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}\text{}$$
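As a numerical check of this expectation step, consider a simplified probit sketch of our own, with index ${x}_{i}^{\prime}\beta$ in place of the paper's normalized index. The analytic matrix $\sum_i {f}_{i}^{2}/({F}_{i}(1-{F}_{i}))\,{x}_{i}{x}_{i}^{\prime}$ should match the Monte Carlo expectation of the outer product of the score (the information matrix equality), and its positive definiteness requires enough observations relative to the parameter count, mirroring the n > (K + 1)(p + 1) condition.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(5)
n, K = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([0.5, -1.0, 0.25])
idx = X @ beta
F = 0.5 * (1.0 + np.vectorize(erf)(idx / sqrt(2.0)))  # Phi(index)
f = np.exp(-0.5 * idx**2) / sqrt(2.0 * pi)            # phi(index)

# Analytic expected negative Hessian: sum_i f_i^2/(F_i(1-F_i)) x_i x_i'.
wts = f**2 / (F * (1.0 - F))
info = (X * wts[:, None]).T @ X

# Monte Carlo E[score score']: score = sum_i (y_i - F_i) f_i/(F_i(1-F_i)) x_i.
reps = 20_000
Y = (rng.random((reps, n)) < F).astype(float)
S = ((Y - F) * (f / (F * (1.0 - F)))) @ X             # (reps, K) score draws
mc_info = S.T @ S / reps

print(np.linalg.eigvalsh(info).min() > 0)             # positive definite
print(np.abs(mc_info - info).max() / np.abs(info).max())  # small relative gap
```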

$$\begin{array}{l}\frac{{\partial}^{2}\text{ln}\mathrm{L}}{\partial {\pi}^{Long}\partial {\overline{{\mathit{\delta}}^{\prime}}}_{\mathit{\epsilon}}}=\frac{\partial}{\partial {\overline{{\mathit{\delta}}^{\prime}}}_{\mathit{\epsilon}}}\text{}{\displaystyle \sum _{i=1}^{n}\frac{({z}_{i}\otimes {x}_{i})}{\sqrt{\overline{({{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i})}{\overline{\delta}}_{\epsilon}}}\left[\frac{{y}_{i}{f}_{i}}{{F}_{i}}-\frac{(1-{y}_{i}){f}_{i}}{(1-{F}_{i})}\right]}\\ ={{\displaystyle \sum _{i=1}^{n}\frac{({z}_{i}\otimes {x}_{i})}{\sqrt{\overline{({{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i})}{\overline{\delta}}_{\epsilon}}}\left[{y}_{i}(\frac{{{f}^{\prime}}_{i}}{{F}_{i}}-\frac{{\mathit{f}}_{\mathit{i}}^{2}}{{\mathit{F}}_{\mathit{i}}^{2}})-(1-{y}_{i})(\frac{{{f}^{\prime}}_{i}}{(1-{F}_{i})}+\frac{{\mathit{f}}_{\mathit{i}}^{2}}{{(1-{F}_{i})}^{2}})\right]}}_{}\\ \times (-\frac{1}{2})\frac{({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}){\pi}^{Long}\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right)}{{\left(\right(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\left){\overline{\delta}}_{\epsilon}\right)}^{3/2}}+{\displaystyle \sum _{i=1}^{n}\left[\frac{{y}_{i}{f}_{i}}{{F}_{i}}-\frac{(1-{y}_{i}){f}_{i}}{(1-{F}_{i})}\right]}(-\frac{1}{2})\frac{({z}_{i}\otimes {x}_{i})\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right)}{{\left(\right(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\left){\overline{\delta}}_{\epsilon}\right)}^{3/2}}\end{array}$$

$$\mathrm{E}\left[-\frac{{\partial}^{2}\text{ln}\mathrm{L}}{\partial {\pi}^{Long}\partial {{\delta}^{\prime}}_{\epsilon}}\right]={\displaystyle \sum _{i=1}^{n}\frac{{\mathit{f}}_{\mathit{i}}^{2}}{{\mathrm{F}}_{i}(1-{\mathrm{F}}_{i})}}\text{}\frac{({z}_{i}\otimes {x}_{i})}{\sqrt{\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right){\overline{\delta}}_{\epsilon}}}(-\frac{1}{2})\frac{({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}){\pi}^{Long}\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right)}{{\left(\right(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\left){\overline{\delta}}_{\epsilon}\right)}^{3/2}}\text{}$$

$$\begin{array}{l}\frac{{\partial}^{2}\text{ln}L}{\partial {\overline{\delta}}_{\epsilon}\partial {\overline{{\delta}^{\prime}}}_{\epsilon}}=\frac{\partial}{\partial {\overline{{\delta}^{\prime}}}_{\epsilon}}\text{}{\displaystyle \sum _{i=1}^{n}\left[\frac{{y}_{i}{f}_{i}}{{F}_{i}}-\frac{(1-{y}_{i}){f}_{i}}{(1-{F}_{i})}\right]}({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}){\mathit{\pi}}^{Long}(-\frac{1}{2}){\left[\frac{1}{\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}\right]}^{\frac{3}{2}}(\overline{{x}_{i}{}^{\prime}\otimes {x}_{i}{}^{\prime}})\prime \\ ={\displaystyle \sum _{i=1}^{n}\left[{y}_{i}(\frac{{{f}^{\prime}}_{i}}{{F}_{i}}-\frac{{f}_{i}^{2}}{{F}_{i}^{2}})-(1-{y}_{i})(\frac{{{f}^{\prime}}_{i}}{(1-{F}_{i})}+\frac{{f}_{i}^{2}}{{(1-{F}_{i})}^{2}})\right]}\\ \times [({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}){\mathit{\pi}}^{Long}{]}^{2}(\frac{1}{4}){\left[\frac{1}{\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}\right]}^{3}(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}})\prime (\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}})\\ +{\displaystyle \sum _{i=1}^{n}\left[\frac{{y}_{i}{f}_{i}}{{F}_{i}}-\frac{(1-{y}_{i}){f}_{i}}{(1-{F}_{i})}\right]}\\ \times ({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}){\mathit{\pi}}^{Long}(-\frac{1}{2})(-\frac{3}{2}){\left[\frac{1}{\left(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}}\right){\overline{\mathit{\delta}}}_{\mathit{\epsilon}}}\right]}^{\frac{5}{2}}\overline{({x}_{i}{}^{\prime}\otimes {x}_{i}{}^{\prime})\prime}(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}})\end{array}$$

$$\mathrm{E}\left[-\frac{{\partial}^{2}\text{lnL}}{\partial {\overline{\delta}}_{\epsilon}\partial {\overline{{\delta}^{\prime}}}_{\epsilon}}\right]={\displaystyle \sum _{i=1}^{n}\frac{{\mathit{f}}_{\mathit{i}}^{2}}{{\mathrm{F}}_{i}(1-{\mathrm{F}}_{i})}}(\frac{1}{4})\frac{{\left[\right({{z}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}\left){\pi}^{Long}\right]}^{2}}{\left[\right(\overline{{{x}^{\prime}}_{i}\otimes {x}_{i}{}^{\prime}}){\overline{\delta}}_{\epsilon}{]}^{3}}(\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}})\prime (\overline{{{x}^{\prime}}_{i}\otimes {{x}^{\prime}}_{i}})\text{}$$

- J.W. Pratt, and R. Schlaifer. “On the Interpretation and Observation of Laws.” J. Econom. 39 (1988): 23–52. [Google Scholar] [CrossRef]
- A. Yatchew, and Z. Griliches. “Specification Error in Probit Models.” Rev. Econom. Stat. 66 (1984): 134–139. [Google Scholar] [CrossRef]
- W. Greene. Econometric Analysis, 7th ed. Upper Saddle River, NJ, USA: Pearson, Prentice Hall, 2012. [Google Scholar]
- J.S. Cramer. “Robustness of Logit Analysis: Unobserved Heterogeneity and Misspecified Disturbances, Discussion Paper 2006/07.” Amsterdam, The Netherlands: Department of Quantitative Economics, Amsterdam School of Economics, 2006. [Google Scholar]
- J.M. Wooldridge. Econometric Analysis of Cross-Section and Panel Data. Cambridge, MA, USA: The MIT Press, 2002. [Google Scholar]
- P.A.V.B. Swamy, J.S. Mehta, G.S. Tavlas, and S.G. Hall. “Small Area Estimation with Correctly Specified Linking Models.” In Recent Advances in Estimating Nonlinear Models, with Applications in Economics and Finance. Edited by J. Ma and M. Wohar. New York, NY, USA: Springer, 2014, pp. 193–228. [Google Scholar]
- B. Skyrms. “Probability and Causation.” J. Econom. 39 (1988): 53–68. [Google Scholar] [CrossRef]
- R.L. Basmann. “Causality Tests and Observationally Equivalent Representations of Econometric Models.” J. Econom. 39 (1988): 69–104. [Google Scholar] [CrossRef]
- J.W. Pratt, and R. Schlaifer. “On the Nature and Discovery of Structure (with discussion).” J. Am. Stat. Assoc. 79 (1984): 9–21. [Google Scholar] [CrossRef]
- H.A. Karlsen, T. Myklebust, and D. Tjøstheim. “Nonparametric Estimation in a Nonlinear Cointegration Type Model.” Ann. Stat. 35 (2007): 252–299. [Google Scholar] [CrossRef]
- H. White. “Using Least Squares to Approximate Unknown Regression Functions.” Int. Econ. Rev. 21 (1980): 149–170. [Google Scholar] [CrossRef]
- H. White. “Maximum Likelihood Estimation of Misspecified Models.” Econometrica 50 (1982): 1–25. [Google Scholar] [CrossRef]
- J. Pearl. Causality. Cambridge, UK: Cambridge University Press, 2000. [Google Scholar]
- C.R. Rao. Linear Statistical Inference and Its Applications, 2nd ed. New York, NY, USA: John Wiley & Sons, 1973. [Google Scholar]
- E.L. Lehmann, and G. Casella. Theory of Point Estimation. New York, NY, USA: Springer-Verlag, Inc., 1998. [Google Scholar]
- P.A.V.B. Swamy, G.S. Tavlas, and S.G. Hall. “On the Interpretation of Instrumental Variables in the Presence of Specification Errors.” Econometrics 3 (2015): 55–64. [Google Scholar] [CrossRef]
- J. Felipe, and F.M. Fisher. “Aggregation in Production Functions: What Applied Economists Should Know.” Metroeconomica 54 (2003): 208–262. [Google Scholar] [CrossRef]
- J.J. Heckman, and D. Schmierer. “Tests of Hypotheses Arising in the Correlated Random Coefficient Model.” Econ. Modell. 27 (2010): 1355–1367. [Google Scholar] [CrossRef] [PubMed]
- A.S. Goldberger. Functional Form and Utility: A Review of Consumer Demand Theory. Boulder, CO, USA: Westview Press, 1987. [Google Scholar]
- P. Whittle. Probability. New York, NY, USA: John Wiley & Sons, 1976. [Google Scholar]
- J.J. Heckman, and E.J. Vytlacil. “Structural Equations, Treatment Effects and Econometric Policy Evaluation.” Econometrica 73 (2005): 669–738. [Google Scholar] [CrossRef]
- A.M. Kagan, Y.V. Linnik, and C.R. Rao. Characterization Problems in Mathematical Statistics. New York, NY, USA: John Wiley & Sons, 1973. [Google Scholar]
- P.A.V.B. Swamy, and P. von zur Muehlen. “Further Thoughts on Testing for Causality with Econometric Models.” J. Econom. 39 (1988): 105–147. [Google Scholar] [CrossRef]
- V. Berenguer-Rico, and J. Gonzalo. “Summability of Stochastic Processes: A Generalization of Integration and Co-integration Valid for Non-linear Processes.” Unpublished work, Departamento de Economía, Universidad Carlos III de Madrid, 2013. [Google Scholar]

^{1} See also Greene (2012, Chapter 17, p. 713) [3].

^{2} We will show below that the inconsistency problems Yatchew and Griliches (1984) [2] pointed out with the probit and logit models are eliminated by replacing these models by the model in (1) and (2).

^{3} We explain in the next paragraph why we have included these conditions.

^{4} Some researchers may believe that there is no such thing as the true functional form of (1). Whenever we talk of the correct functional form of (1), we mean the functional form of (1) that is appropriate to the particular binary choice in (2).

^{5} Here we are using Skyrms’ (1988, p. 59) [7] definition of the term “all relevant pre-existing conditions.”

^{6} This is Basmann’s (1988, pp. 73, 99) [8] statement.

^{8} We postpone making stochastic assumptions about measurement errors.

^{9} The label “omitted” means that we would remove them from (3).

^{12} A similar admissibility condition for covariates is given in Pearl (2000, p. 79) [13]. Pearl (2000, p. 99) [13] also gives an equation that forms a connection between the opaque phrase “the value that the coefficient vector of (3) would take in unit i, had ${X}_{i}=({X}_{i1},...,{X}_{iK}{)}^{\prime}$ been ${x}_{i}=({x}_{i1},...,{x}_{iK}{)}^{\prime}$” and the physical processes that transfer changes in ${X}_{i}$ into changes in ${y}_{i}^{*}$.

^{13} We illustrate this procedure in Section 3 below.

^{14} Pratt and Schlaifer (1988) [1] consider what they call “concomitants” that absorb “proxy effects” and include them as additional regressors in their model. The result in (9) calls for Equation (10), which justifies our label for its right-hand side variables.

^{15} An important difference between coefficient drivers and instrumental variables is that a valid instrument is one that is uncorrelated with the error term, which often proves difficult to find, particularly when the error term is nonunique. For a valid driver, we need variables that satisfy Equations (25) and (26). On the problems with instrumental variables, see Swamy, Tavlas, and Hall (2015) [16].

^{16} These biases are not involved in Wooldridge’s marginal effects because, according to that researcher, omitted regressors constituting his model’s error term do not introduce omitted-regressor biases into the coefficients of the included regressors.

^{17} The standard errors of the estimates are given in parentheses below the estimates for five married women. The estimates and their standard errors for other married women are available from the authors upon request.

^{18} According to Greene (2012, p. 708) [3], it would be natural to assume that all the determinants of a wife’s labor force participation would be correlated with the husband’s hours, which is defined as a linear stochastic function of the husband’s age and education and the family income. Our inclusion of husband’s variables in (32) is consistent with this assumption.

^{19} The standard errors of the estimates are given in parentheses below the estimates for five married women. These estimates and standard errors for other married women are available from I-Lok Chang upon request.

^{20} Another widely cited work that utilized a set of separability conditions is that of Heckman and Schmierer (2010) [18]. These authors postulated a threshold crossing model which assumes separability between observables Z that affect choice and an unobservable V. They used a function of Z as an instrument and used the distribution of V to define a fundamental treatment parameter known as the marginal treatment effect.

^{21} The “uniqueness” is defined in Section 2.2.3.

^{22} We have been using the cross-sectional subscript i so far. We change this subscript to the time subscript t wherever the topic under discussion requires the use of the latter subscript.

^{23} This proof is relevant to Heckman’s interpretation that, in any of his models, the error term is the deviation of the dependent variable from its conditional expectation (see Heckman and Vytlacil (2005) [21]). Conditions (A2.1)–(A2.3) do not always hold, and hence this conditional expectation does not always exist.

^{24} A nonstationary series is integrated of order d if it becomes stationary after being first differenced d times (see Greene (2012, p. 943) [3]). If $\left\{{y}_{t}^{*}\right\}$ in (A1) is a nonstationary series of this type, then it cannot be made stationary by first differencing it once or more than once if ${\psi}_{t1}({x}_{t1}^{*},...,{x}_{tK}^{*})$ is nonlinear. Basmann (1988, p. 98) [8] acknowledged that a model representation is not free of the most serious objection, i.e., nonuniqueness, if stationarity-producing transformations of its observable dependent variable are used.

${\widehat{\mathit{\gamma}}}_{\mathit{i}0}$ (Standard Error) | ${\widehat{\mathit{\gamma}}}_{\mathit{i}1}$ (Standard Error)
---|---
−13.280 (6.5787) | 1.2811 (0.5535)
−5.2539 (9.1608) | 0.7930 (0.7830)
−15.221 (4.9202) | 1.5029 (0.4197)
−21.577 (11.213) | 1.8393 (0.9942)
−9.2086 (7.8416) | 1.0386 (0.6504)

$0.0702{z}_{i5}$
---
0.8419 (0.8110)
0.6314 (0.6083)
0.8419 (0.8110)
0.7016 (0.6758)
0.8419 (0.8110)

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license ( http://creativecommons.org/licenses/by/4.0/).