Article

The Data-Constrained Generalized Maximum Entropy Estimator of the GLM: Asymptotic Theory and Inference

by Ron Mittelhammer 1,*, Nicholas Scott Cardell 2 and Thomas L. Marsh 3

1 Economic Sciences and Statistics, Washington State University, Pullman, WA 99164, USA
2 Salford Systems, San Diego, CA 92126, USA
3 Economic Sciences and IMPACT Center, Washington State University, Pullman, WA 99164, USA
* Author to whom correspondence should be addressed.
Entropy 2013, 15(5), 1756-1775; https://doi.org/10.3390/e15051756
Submission received: 7 April 2013 / Revised: 23 April 2013 / Accepted: 7 May 2013 / Published: 14 May 2013

Abstract

Maximum entropy methods of parameter estimation are appealing because they impose no additional structure on the data, other than that explicitly assumed by the analyst. In this paper we prove that the data-constrained GME estimator of the general linear model is consistent and asymptotically normal. The approach we take in establishing the asymptotic properties concomitantly identifies a new computationally efficient method for calculating GME estimates. Formulae are developed to compute asymptotic variances and to perform Wald, likelihood ratio, and Lagrange multiplier statistical tests on model parameters. Monte Carlo simulations are provided to assess the performance of the GME estimator in both large and small sample situations. Furthermore, we extend our results to maximum cross-entropy estimators and indicate a variant of the GME estimator that is unbiased. Finally, we discuss the relationship of GME estimators to Bayesian estimators, pointing out the conditions under which an unbiased GME estimator would be efficient.

1. Introduction

Information theoretic estimators have been receiving increasing attention in the econometric-statistics literature [1,2,3,4,5,6,7]. For instance, [3] proposed an information theoretic estimator based on minimization of the Kullback-Leibler Information Criterion as an alternative to optimally-weighted generalized method of moments estimation. This estimator handles weakly dependent data generating mechanisms, and under reasonable regularity assumptions it is consistent and asymptotically normally distributed. Subsequently, [1] proposed an information theoretic estimator based on minimization of the Cressie-Read discrepancy statistic as an alternative approach to inference in moment condition models. The authors of [1] identified a special case of the Cressie-Read statistic, the Kullback-Leibler Information Criterion (e.g., maximum entropy), as being preferred over other estimators (e.g., empirical likelihood) because of its efficiency and robustness properties. Special issues of the Journal of Econometrics (March 2002) and Econometric Reviews (May 2008) were devoted to this topic of information theoretic estimators.
Historically, information theoretic estimators have been motivated in several ways. The Cressie-Read statistic directly minimizes an information-based concept of closeness between the estimated and empirical distribution [1]. Alternatively, the maximum entropy principle is based on an axiomatic approach that defines a unique objective function to measure the uncertainty of a collection of events [8,9,10]. Interest in maximum entropy estimators stems from the prospect of recovering and processing information when the underlying sampling model is incompletely or incorrectly known and the data are limited, partial, or incomplete [10]. To date, the principle of maximum entropy has been applied in an abundance of circumstances, including the fields of econometrics and statistics [11,12,13,14,15,16,17], economic theory and applications [18,19,20,21,22,23,24], accounting and finance [25,26,27], and resources and agricultural economics [28,29,30,31,32]. Moreover, widely used econometric software packages now incorporate procedures to calculate maximum entropy estimators in their latest releases (e.g., SAS, SHAZAM, and GAUSSX).
In most cases, rigorous investigation of the small and large sample properties of information theoretic estimators has lagged far behind empirical applications [3]. Exceptions include [1,2,3], who examined information theoretic alternatives to generalized method of moments estimation; [14], who derived the statistical properties of the generalized maximum entropy estimator in the context of modeling multinomial response data; and [10], who provided asymptotic properties for the moment-constrained generalized maximum entropy (GME) estimator of the general linear model (showing it is asymptotically equivalent to ordinary least squares). An alternative information theoretic estimator of the general linear model (GLM), yet to be rigorously investigated but arising in empirical applications (e.g., [24]), is the purely data-constrained formulation of the generalized maximum entropy estimator [10]. In a purely data-constrained formulation the regression model itself, as opposed to moment conditions derived from it, represents the constraining function on the entropy objective function. In the maximum entropy framework, unlike the ordinary least squares or maximum likelihood estimators of the GLM, moment constraints are not necessary to uniquely identify parameter estimates. Moreover, there exist distinct differences between the data- and moment-constrained versions of the GME estimator of the GLM. Indeed, [10] have shown the data-constrained GME estimator to be mean square error superior to the moment-constrained GME estimator of the GLM in selected Monte Carlo experiments.
Our paper contributes to the econometric literature in several ways. First, regularity conditions are identified that provide a solid foundation from which to develop statistical properties of the data-constrained GME estimator of the GLM and hypothesis tests on model parameters. Given the regularity conditions, we define a conditional maximum entropy function to rigorously prove consistency and asymptotic normality. As demonstrated in this paper, the data-constrained GME estimator is not asymptotically equivalent to the moment-constrained GME estimator or the ordinary least squares estimator. However, the GME estimator is shown to be nearly asymptotically efficient. Moreover, we derive formulae to compute the asymptotic variance of the proposed estimator. This allows us to define classical Wald, likelihood ratio, and Lagrange multiplier tests for testing hypotheses about model parameters.
Second, theoretical extensions to unbiased, cross-entropy, and Bayesian estimation are also identified. Further, we demonstrate that the GME specification can be extended from finite-discrete parameter and error spaces to infinite-continuous parameter and error spaces. Alternative formulations of the data-constrained GME estimator of the GLM under selected regularity conditions, and the implications for the properties of the estimator, are also discussed.
Third, to complement the theoretical results, Monte Carlo experiments are used to compare the performance of the data-constrained GME estimates to least squares estimates for small and medium-sized samples. The performance of the GME estimator is tested relative to selected distributions of the errors, to the user-supplied supports of the parameters and errors, and to its robustness to model misspecification. Monte Carlo experiments are also performed to examine the size and power of the Wald, likelihood ratio, and Lagrange multiplier test statistics.
Fourth, insights into computational efficiency and guidelines for setting the boundaries of parameter and error support spaces are discussed. The conditional maximum entropy formulation utilized in the proof of the asymptotic properties provides the basis for a new computationally efficient method of calculating GME estimates. The approach involves a nonlinear search over a K-vector of coefficient parameters, which is much more efficient than numerical approaches proposed elsewhere in the literature. Finally, practical guidelines for setting the boundaries of parameter and error support spaces are analyzed and discussed.

2. The Data-Constrained GME Formulation

Let $Y = X\beta + \varepsilon$ represent the general linear model, with $Y$ an $N \times 1$ dependent variable vector, $X$ a fixed $N \times K$ matrix of explanatory variables, $\beta$ a $K \times 1$ vector of parameters, and $\varepsilon$ an $N \times 1$ vector of disturbance terms. (All of our results can be extended to stochastic $X$. For example, if the $X_i$ are iid with $\mathrm{Var}(X_i) = \Omega$, a positive definite matrix, then the asymptotic properties are identical to those developed below.) The GME rule for defining the estimator of the unknown $\beta$ in the general linear model formulation is given by $\hat{\beta} = Z\hat{p}$, with $\hat{p} = (\hat{p}_1', \ldots, \hat{p}_K')'$ derived from the following constrained maximum entropy problem:
$$\max_{p_k, w_i \; \forall k, i} \; \left( -\sum_{k=1}^{K} p_k' \ln(p_k) - \sum_{i=1}^{N} w_i' \ln(w_i) \right) \tag{1}$$
subject to:
$$Y = XZp + Vw$$
$$\mathbf{1}' p_k = 1 \;\; \forall k$$
$$\mathbf{1}' w_i = 1 \;\; \forall i$$
$$p_k > [0], \; w_i > [0], \;\; \forall i, k.$$
In the preceding formulation, the matrices $Z$ and $V$ are $K \times KM$ and $N \times NJ$ matrices of support points for the $\beta$ and $\varepsilon$ vectors, respectively, defined as:
$$Z = \begin{pmatrix} z_1' & 0 & \cdots & 0 \\ 0 & z_2' & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & z_K' \end{pmatrix} \quad \text{and} \quad V = \begin{pmatrix} v_1' & 0 & \cdots & 0 \\ 0 & v_2' & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & v_N' \end{pmatrix},$$
where $z_k = (z_{k1}, \ldots, z_{kM})'$ is an $M \times 1$ vector such that $z_{k1} \le z_{k2} \le \cdots \le z_{kM}$ and $\beta_k \in (z_{k1}, z_{kM})$ $\forall k = 1, \ldots, K$, and similarly $v_i = (v_{i1}, \ldots, v_{iJ})'$ is a $J \times 1$ vector such that $v_{i1} \le v_{i2} \le \cdots \le v_{iJ}$ and $\varepsilon_i \in (v_{i1}, v_{iJ})$ $\forall i = 1, \ldots, N$. (In their original formulation, [10] required $\varepsilon_i$ to be contained in a fixed interval with arbitrarily high probability. Here we assume such an event occurs with probability one.) The $M \times 1$ vectors $p_k$ and the $J \times 1$ vectors $w_i$ are weight vectors having nonnegative elements that sum to unity and are used to represent the $\beta$ and $\varepsilon$ vectors as $\beta = Zp$ for $p = (p_1', \ldots, p_K')'$, and $\varepsilon = Vw$ for $w = (w_1', \ldots, w_N')'$.
The basic principle underlying the estimator $\hat{\beta} = Z\hat{p}$ for $\beta$ is to choose an estimate that incorporates only the information available. In this way the maximum entropy estimator is not constrained by any extraneous assumptions. The information used is the observed information contained in the data, the information contained in the constraints on the admissible values of $\beta$, and the information inherent in the structure of the model, including the choice of the supports for the $\beta_k$'s. In effect, the information set used in estimation is shrunk to the boundary of the observed data and the parameter constraint information. Because the objective function value increases as the weights in $p_k$ and $w_i$ become more uniformly distributed, any deviation from uniformity represents the effect of the data constraints on the weighting of the support points used for representing $\beta$ and $\varepsilon$. This fact also motivates the interpretation of the GME as a shrinkage-type estimator that, in the absence of constraints on $\beta$, will shrink $\hat{\beta}$ to the centers of the supports defined in the specification of $Z$. We next establish consistency and asymptotic normality results for the GME estimator under general regularity conditions on the specification of the estimation problem.
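To make the reparameterization concrete, the following minimal Python sketch (with hypothetical supports chosen for illustration, not values prescribed by the paper) builds the block-diagonal $Z$ matrix and verifies that uniform weights, which maximize entropy in the absence of binding data constraints, map to the centers of the supports:

```python
import numpy as np
from scipy.linalg import block_diag

K, M = 2, 3                       # two coefficients, three support points each
z = [np.array([-5.0, 0.0, 5.0]),  # hypothetical support for beta_1
     np.array([-2.0, 1.0, 4.0])]  # hypothetical support for beta_2
Z = block_diag(*[zk.reshape(1, -1) for zk in z])  # K x KM block-diagonal Z

# With no binding data constraints, maximum entropy yields uniform weights,
# so beta_hat = Z p shrinks to the centers of the supports.
p_uniform = np.full(K * M, 1.0 / M)
print(Z @ p_uniform)              # -> [0.  1.], the support midpoints
```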

3. Consistency and Asymptotic Normality of the GME Estimator

Regularity Conditions. To establish asymptotic results for the GME estimator, we utilize the following regularity conditions for the problem of estimating $\beta$ in $Y = X\beta + \varepsilon$.
R1. The $\varepsilon_i$'s are iid with $c_1 + \delta \le \varepsilon_i \le c_J - \delta$ for some $\delta > 0$ and large enough finite positive $c_J = -c_1$.
R2. The pdf of $\varepsilon_i$, $f(\varepsilon_i)$, is symmetric around 0 with variance $\sigma^2$.
R3. $\beta_k \in (\beta_k^L, \beta_k^H)$, for finite $\beta_k^L$ and $\beta_k^H$, $k = 1, \ldots, K$.
R4. $X$ has full column rank.
R5. $\frac{1}{N}(X'X)$ is $O(1)$, and the smallest eigenvalue of $\frac{1}{N}(X'X)$ exceeds $\epsilon$ for some $\epsilon > 0$ and all $N > N^*$, where $N^*$ is some positive integer.
R6. $\frac{1}{N}(X'X) \to \Omega$, a finite positive definite symmetric matrix.
Note that condition R1 states that the support of $\varepsilon_i$ is contained in the interior of some large enough closed finite interval $[c_1, c_J]$. Condition R3 states that the true value of the parameter $\beta_k$ can be enclosed within some open interval $(\beta_k^L, \beta_k^H)$. The conditions R4-R6 on $X$ are familiar analogues to typical assumptions made in the least squares context for establishing asymptotic properties of the least squares estimator of $\beta$. We utilize condition R6 to simplify the demonstration of asymptotic normality, but the result can be established under weaker conditions, as alluded to in the proof. Finally, our proof of the asymptotic results will utilize symmetry of the disturbance distribution, which is the content of condition R2.
Reformulated GME Rule. The asymptotic results are derived within the context of the following representation of the GME model, expressed in scalar notation to facilitate exposition of the proof. The GME representation described below is completely consistent with the formulation in Section 2 under the condition that the support points represented by the vector $v_i$ are chosen to be symmetrically dispersed around 0. We use the same vector of support points for each of the $\varepsilon_i$'s, consistent with the iid nature of the disturbances, and so henceforth $v_\ell$ refers to the common $\ell$th scalar support point in the development below. The representation is also more general than the representation in Section 2 in the sense that different numbers of support points can be used to represent the different $\beta_k$ parameters. The constrained maximum entropy problem is as follows:
$$\max_{b, p, w} \; \left( -\sum_{k=1}^{K} \sum_{\ell=1}^{J_k} p_{k\ell} \ln(p_{k\ell}) - \sum_{i=1}^{N} \sum_{\ell=1}^{J} w_{i\ell} \ln(w_{i\ell}) \right)$$
subject to:
C1. $\sum_{\ell=1}^{J_k} z_{k\ell}\, p_{k\ell} = b_k$, $\;\beta_k^L = z_{k1} \le z_{k2} \le \cdots \le z_{kJ_k} = \beta_k^H$, $\;k = 1, \ldots, K$
C2. $\sum_{\ell=1}^{J} v_\ell\, w_{i\ell} = e_i = y_i - X_i b \equiv e_i(b)$, $\;i = 1, \ldots, N$
C3. $c_1 = v_1 \le v_2 \le \cdots \le v_J = c_J$
C4. $v_\ell = -v_{J-\ell+1}$ (thus, for $J$ odd, $v_{(J+1)/2} = 0$)
C5. $\sum_{\ell=1}^{J_k} p_{k\ell} = 1$, $\;k = 1, \ldots, K$
C6. $\sum_{\ell=1}^{J} w_{i\ell} = 1$, $\;i = 1, \ldots, N$
As will become apparent, the nonnegativity restrictions on $p_{k\ell}$ and $w_{i\ell}$ are inherently enforced by the structure of the optimization problem itself, and thus need not be explicitly incorporated into the constraint set.
Asymptotic Properties. The following theorem establishes the consistency and asymptotic normality of the GME estimator of β in the GLM.
Theorem. Under regularity conditions R1-R5, the GME estimator $\hat{\beta} = Z\hat{p}$ is a consistent estimator of $\beta$. With the addition of regularity condition R6, the GME estimator is asymptotically normally distributed as:
$$\hat{\beta} \overset{a}{\sim} N\!\left( \beta, \; \frac{\sigma_\gamma^2}{N \xi^2}\, \Omega^{-1} \right)$$
for appropriate definitions of $\sigma_\gamma^2$, $\xi$, and $\Omega$.
Proof. Define the maximized entropy function, conditional on $b = \tau$, as:
$$F(\tau) = \max_{p, w \,:\, b = \tau,\, (C1)-(C6)} \; \left( -\sum_{k=1}^{K} \sum_{\ell=1}^{J_k} p_{k\ell} \ln(p_{k\ell}) - \sum_{i=1}^{N} \sum_{\ell=1}^{J} w_{i\ell} \ln(w_{i\ell}) \right) \tag{2}$$
The optimal value of $w_i = (w_{i1}, \ldots, w_{iJ})'$ in the conditionally-maximized entropy function is given by:
$$w_i(\tau) = \arg\max_{w_i \,:\, C6,\; \sum_{\ell=1}^{J} v_\ell w_{i\ell} = e_i(\tau)} \left( -\sum_{\ell=1}^{J} w_{i\ell} \ln(w_{i\ell}) \right),$$
which is the maximizing solution to the Lagrangian:
$$L_{w_i} = -\sum_{\ell=1}^{J} w_{i\ell} \ln(w_{i\ell}) + \lambda_i^w \left( \sum_{\ell=1}^{J} w_{i\ell} - 1 \right) + \gamma_i \left( \sum_{\ell=1}^{J} v_\ell\, w_{i\ell} - e_i(\tau) \right).$$
The optimal value of $w_{i\ell}$ is then:
$$w_{i\ell}(\gamma(e_i(\tau))) = w_\ell(\gamma(e_i(\tau))) = \frac{e^{\gamma(e_i(\tau))\, v_\ell}}{\sum_{m=1}^{J} e^{\gamma(e_i(\tau))\, v_m}}, \quad \ell = 1, \ldots, J, \tag{3}$$
where $\gamma(e_i(\tau))$ is the optimal value of the Lagrange multiplier $\gamma_i$ under the condition $b = \tau$, and $w_\ell(\gamma) \equiv \frac{e^{\gamma v_\ell}}{\sum_{m=1}^{J} e^{\gamma v_m}}$. It follows from the symmetry of the $v_\ell$'s around zero that:
$$\sum_{\ell=1}^{J} v_\ell\, w_\ell(\gamma(e_i(\tau))) = -\sum_{\ell=1}^{J} v_\ell\, w_\ell(\gamma(-e_i(\tau))) \tag{4}$$
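To illustrate the inner solution, the following sketch (illustrative supports; not the authors' code) evaluates the softmax form of $w_\ell(\gamma)$ in (3) and inverts the constraint $\sum_\ell v_\ell w_\ell(\gamma) = e_i$ for $\gamma(e_i)$ by a root search, which is well defined because the left-hand side is strictly increasing in $\gamma$:

```python
import numpy as np
from scipy.optimize import brentq

v = np.array([-10.0, 0.0, 10.0])        # symmetric error support, as in Section 5

def w(gamma, v):
    """Entropy-maximizing weights for multiplier gamma: a softmax over gamma*v."""
    u = np.exp(gamma * v - np.max(gamma * v))   # stabilized against overflow
    return u / u.sum()

def gamma_of_e(e, v):
    """Solve sum_l v_l w_l(gamma) = e for gamma by bracketing a sign change."""
    return brentq(lambda g: v @ w(g, v) - e, -50.0, 50.0)

g = gamma_of_e(1.5, v)
print(g, v @ w(g, v))                   # recovers e = 1.5
print(gamma_of_e(-1.5, v))              # equals -gamma(1.5), by symmetry of v
```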
Similarly, the optimal value of $p_k = (p_{k1}, \ldots, p_{kJ_k})'$ in the conditionally-maximized entropy function is given by:
$$p_k(\tau_k) = \arg\max_{p_k \,:\, C5,\; \sum_{\ell=1}^{J_k} z_{k\ell} p_{k\ell} = \tau_k} \left( -\sum_{\ell=1}^{J_k} p_{k\ell} \ln(p_{k\ell}) \right),$$
which is the maximizing solution to the Lagrangian:
$$L_{p_k} = -\sum_{\ell=1}^{J_k} p_{k\ell} \ln(p_{k\ell}) + \lambda_k^p \left( \sum_{\ell=1}^{J_k} p_{k\ell} - 1 \right) + \eta_k \left( \sum_{\ell=1}^{J_k} z_{k\ell}\, p_{k\ell} - \tau_k \right).$$
The optimal value of $p_{k\ell}$ is then:
$$p_{k\ell}(\tau_k) = \frac{e^{\eta_k(\tau_k)\, z_{k\ell}}}{\sum_{m=1}^{J_k} e^{\eta_k(\tau_k)\, z_{km}}}, \quad k = 1, \ldots, K, \tag{5}$$
where $\eta_k(\tau_k)$ is the optimal value of the Lagrange multiplier $\eta_k$ under the condition $b_k = \tau_k$.
Substituting the optimal solutions for the $p_{k\ell}$'s and $w_{i\ell}$'s into (2) yields the conditional maximum value function:
$$F(\tau) = -\sum_{k=1}^{K} \left( \eta_k(\tau_k)\, \tau_k - \ln\!\Big( \sum_{m=1}^{J_k} e^{\eta_k(\tau_k) z_{km}} \Big) \right) - \sum_{i=1}^{N} \left( \gamma(e_i(\tau))\, e_i(\tau) - \ln\!\Big( \sum_{m=1}^{J} e^{\gamma(e_i(\tau)) v_m} \Big) \right). \tag{6}$$
Define the gradient vector of $F(\tau)$ as $G(\tau) = \frac{\partial F(\tau)}{\partial \tau}$, so that:
$$G_k(\tau) = \frac{\partial F(\tau)}{\partial \tau_k} = -\eta_k(\tau_k) + \sum_{i=1}^{N} \gamma(e_i(\tau))\, X_{ik}, \quad k = 1, \ldots, K,$$
and thus $G(\tau) = -\eta(\tau) + X'\gamma(e(\tau))$, where $\eta(\tau)$ and $\gamma(e(\tau))$ are $K \times 1$ and $N \times 1$ vectors of Lagrange multipliers. It follows that the Hessian matrix of $F(\tau)$ is given by:
$$H(\tau) = \frac{\partial^2 F(\tau)}{\partial \tau\, \partial \tau'} = \frac{\partial G(\tau)}{\partial \tau'} = -\begin{pmatrix} \frac{\partial \eta_1(\tau_1)}{\partial \tau_1} & 0 & \cdots & 0 \\ 0 & \frac{\partial \eta_2(\tau_2)}{\partial \tau_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{\partial \eta_K(\tau_K)}{\partial \tau_K} \end{pmatrix} + X' \frac{\partial \gamma(e(\tau))}{\partial \tau'}.$$
Regarding the functional form of the derivatives of the Lagrange multipliers appearing in the definition of $H(\tau)$, it follows from (C2) that:
$$\sum_{\ell=1}^{J} v_\ell\, \frac{\partial w_\ell(\gamma(e_i(\tau)))}{\partial \gamma(e_i(\tau))}\, \frac{\partial \gamma(e_i(\tau))}{\partial e_i(\tau)} = 1,$$
so that, from (3):
$$\frac{\partial \gamma(e_i(\tau))}{\partial e_i(\tau)} = \left( \sum_{\ell=1}^{J} v_\ell^2\, w_\ell(\gamma(e_i(\tau))) - e_i^2(\tau) \right)^{-1}.$$
Then, from (C2), $\frac{\partial e_i(\tau)}{\partial \tau_k} = -X_{ik}$, and thus:
$$H_{k\ell}(\tau) = -\sum_{i=1}^{N} X_{ik} X_{i\ell} \left( \sum_{m=1}^{J} v_m^2\, w_m(\gamma(e_i(\tau))) - e_i^2(\tau) \right)^{-1} \quad \text{for } \ell \ne k.$$
Also, based on (C1):
$$\frac{\partial \eta_k(\tau_k)}{\partial \tau_k} = \left( \sum_{\ell=1}^{J_k} z_{k\ell}^2\, p_{k\ell}(\tau_k) - \tau_k^2 \right)^{-1},$$
so that:
$$H_{kk}(\tau) = -\sum_{i=1}^{N} X_{ik}^2 \left( \sum_{\ell=1}^{J} v_\ell^2\, w_\ell(\gamma(e_i(\tau))) - e_i^2(\tau) \right)^{-1} - \left( \sum_{\ell=1}^{J_k} z_{k\ell}^2\, p_{k\ell}(\tau_k) - \tau_k^2 \right)^{-1}.$$
Because the denominators of the terms in the definition of the $H_{k\ell}$'s are positive valued (each is the variance of a discrete distribution defined by the weights $w_\ell$ or $p_{k\ell}$), it follows that $H(\tau)$ is a negative definite matrix, because $X'X$ is positive definite.
Now consider the case where $\tau = \beta$, so that:
$$e_i(\beta) = y_i - X_i \beta = \varepsilon_i = \sum_{\ell=1}^{J} v_\ell\, w_\ell(\gamma(e_i(\beta))), \quad i = 1, \ldots, N,$$
where the $\varepsilon_i$'s are iid with mean zero, and thus the multipliers $\gamma(e_i(\beta))$ solving:
$$\varepsilon_i = \sum_{\ell=1}^{J} v_\ell\, \frac{e^{\gamma(e_i(\beta))\, v_\ell}}{\sum_{m=1}^{J} e^{\gamma(e_i(\beta))\, v_m}}$$
are iid as well. Because $\varepsilon_i$ is bounded in the interior of $[v_1, v_J]$, the range of $\gamma(e_i(\beta)) \equiv \gamma(\varepsilon_i)$ is bounded as well. In addition, $\gamma(e_i(\beta))$ is symmetrically distributed around zero because the $\varepsilon_i$'s are so distributed and, from (4):
$$\varepsilon_i = \zeta = \sum_{\ell=1}^{J} v_\ell\, w_\ell(\gamma(\zeta)) \;\;\Longrightarrow\;\; -\varepsilon_i = -\zeta = \sum_{\ell=1}^{J} v_\ell\, w_\ell(-\gamma(\zeta)),$$
so that $\gamma(-\zeta) = -\gamma(\zeta)$. It follows that $E(\gamma(e_i(\beta))) = 0$, the $\gamma(\varepsilon_i) \equiv \gamma(e_i(\beta))$'s are iid, and $\gamma(\varepsilon_i)$ has finite variance, say $\mathrm{Var}(\gamma(\varepsilon_i)) = \sigma_\gamma^2$. Then, using a multivariate version of Liapounov's central limit theorem, and given condition R6 (asymptotic normality can be established without regularity condition R6; in fact, the boundedness properties of the $X$ matrix stated in R5 would be sufficient; see [33] for a related proof under the weaker regularity conditions):
$$\frac{1}{\sqrt{N}}\, G(\beta) = \frac{1}{\sqrt{N}} \left( -\eta(\beta) + X'\gamma(e(\beta)) \right) \overset{d}{\to} N\!\left( [0],\, \sigma_\gamma^2\, \Omega \right)$$

3.1. Consistency

For any $\tau$, represent the conditional maximum value function $F(\tau)$ by a second order Taylor series around $\beta$ as:
$$F(\tau) = F(\beta) + G(\beta)'(\tau - \beta) + \frac{1}{2} (\tau - \beta)' H(\beta^*) (\tau - \beta), \tag{7}$$
where $\beta^*$ lies between $\tau$ and $\beta$. The value of the quadratic term in the expansion can be bounded by:
$$\frac{1}{2} (\tau - \beta)' H(\beta^*) (\tau - \beta) \le -\frac{1}{2}\, \lambda_s\!\left( -\frac{1}{N} H(\beta^*) \right) N\, \| \tau - \beta \|^2, \tag{8}$$
where $\lambda_s\!\left( -\frac{1}{N} H(\beta^*) \right)$ denotes the smallest eigenvalue of $-\frac{1}{N} H(\beta^*)$ and $\|a\| \equiv \left( \sum_{k=1}^{K} a_k^2 \right)^{1/2}$ [34]. The smallest eigenvalue exhibits a positive lower bound given by $\left( \frac{1}{c_J^2} \right) \lambda_s\!\left( \frac{1}{N} X'X \right)$, whatever the value of $\beta^*$.
The value of the linear term in the expansion is bounded in probability; that is, $\forall \alpha > 0$ and for $N > N(\alpha)$, there exists a finite $A(\alpha)$ such that:
$$P\!\left( |G(\beta)'(\tau - \beta)| < \sqrt{N}\, A(\alpha)\, \|\tau - \beta\|, \;\forall \tau \right) > 1 - \alpha, \tag{9}$$
because $\frac{1}{\sqrt{N}} G(\beta) \overset{d}{\to} N([0], \sigma_\gamma^2 \Omega)$. It follows from Equations (7)-(9) that, for all $\delta > 0$, $P\!\left( \max_{\tau : \|\beta - \tau\| > \delta} (F(\tau)) < F(\beta) \right) \to 1$ as $N \to \infty$. Thus $\hat{\beta} = \arg\max_\tau (F(\tau)) \overset{p}{\to} \beta$, and the GME estimator of $\beta$ is consistent.

3.2. Asymptotic Normality

Expand $G(b)$ in a Taylor series around $\beta$, where $\hat{\beta} = \arg\max_\tau F(\tau)$ is the GME estimator of $\beta$, to obtain:
$$G(\hat{\beta}) = G(\beta) + H(\beta^*)(\hat{\beta} - \beta),$$
where $\beta^*$ is between $\hat{\beta}$ and $\beta$. In general, different $\beta^*$ points will be required to represent the different coordinate functions in $G(\hat{\beta})$. At the optimum, $G(\hat{\beta}) = [0]$, and $\hat{\beta}$ is a consistent estimator of $\beta$; therefore $\beta^* \overset{p}{\to} \beta$, and:
$$\sqrt{N}\, (\hat{\beta} - \beta) \;\overset{\underline{\underline{d}}}{=}\; -\left( \frac{1}{N} H(\beta) \right)^{-1} \frac{1}{\sqrt{N}}\, G(\beta), \tag{10}$$
where $\overset{\underline{\underline{d}}}{=}$ denotes equivalence of limiting distributions. Using $e_i(\beta) \equiv \varepsilon_i$, note that:
$$\frac{1}{N} H(\beta) = -\frac{1}{N} \sum_{i=1}^{N} X_i' X_i \left( \sum_{\ell=1}^{J} v_\ell^2\, w_\ell(\gamma(\varepsilon_i)) - \varepsilon_i^2 \right)^{-1} + O_p\!\left( \frac{1}{N} \right),$$
where the $\left( \sum_{\ell=1}^{J} v_\ell^2\, w_\ell(\gamma(\varepsilon_i)) - \varepsilon_i^2 \right)^{-1}$, $i = 1, \ldots, N$, are iid. It follows from R6 that $\frac{1}{N} H(\beta) \overset{p}{\to} -\xi \Omega$ with $\xi = E\!\left( \left( \sum_{\ell=1}^{J} v_\ell^2\, w_\ell(\gamma(\varepsilon_i)) - \varepsilon_i^2 \right)^{-1} \right)$. Recalling that $\frac{1}{\sqrt{N}} G(\beta) \overset{d}{\to} N([0], \sigma_\gamma^2 \Omega)$, Slutsky's Theorem [34] implies that:
$$\sqrt{N}\, (\hat{\beta} - \beta) \overset{d}{\to} N\!\left( [0],\, \frac{\sigma_\gamma^2}{\xi^2}\, \Omega^{-1} \right)$$
Note that, holding the support of $\varepsilon$ constant, one can reduce the interval $(c_1, c_J)$. As $\delta \to 0$, the asymptotic variance of $\sqrt{N}(\hat{\beta} - \beta)$ may tend to zero, but cannot grow without bound. For example, if at $\delta = 0$ there exists $\epsilon > 0$ such that $P(\varepsilon_i \le k) \ge \epsilon (k - c_1)$ for all $k \in (c_1, c_J)$ (and $P(\varepsilon_i \ge k) \ge \epsilon (c_J - k)$ for all $k \in (c_1, c_J)$), then $\lim_{\delta \to 0} \sigma_\gamma^2 \xi^{-2} = 0$.
Also note that, for large samples, the parameters' reliance on their supports vanishes. In contrast, the supports on the errors influence the computed covariance matrix. Finally, for non-homogeneous errors, the covariance matrix estimator could be adjusted following a standard White covariance correction.

3.3. Cross-Entropy Extensions

To extend the previous asymptotic results to the case of cross-entropy maximization [10], first suppose that $z_{k\ell} = z_{k,\ell+1}$ and/or $v_\ell = v_{\ell+1}$ for some $\ell$. Let $z^*_{k\ell}$, $\ell = 1, \ldots, J^*_k$, and $v^*_\ell$, $\ell = 1, \ldots, J^*$, denote the distinct values among the $z_{k\ell}$'s and $v_\ell$'s, respectively, and let $a_{k\ell}$ and $\alpha_\ell$ denote the respective multiplicities of the values $z^*_{k\ell}$ and $v^*_\ell$. From Equations (3) and (5), $w_{i\ell}(\gamma(e_i(\tau))) \equiv w_{im}(\gamma(e_i(\tau)))$ if $v_\ell = v_m$, and $p_{k\ell}(\tau_k) \equiv p_{km}(\tau_k)$ if $z_{k\ell} = z_{km}$. Thus, the maximization problem given by Equation (2) and conditions C1-C6 is equivalent to:
$$\max_{b, p, w} \left( -\sum_{k=1}^{K} \sum_{\ell=1}^{J^*_k} p^*_{k\ell} \ln\!\left( \frac{p^*_{k\ell}}{a_{k\ell}} \right) - \sum_{i=1}^{N} \sum_{\ell=1}^{J^*} w^*_{i\ell} \ln\!\left( \frac{w^*_{i\ell}}{\alpha_\ell} \right) \right) \tag{11}$$
with obvious changes being made to C1-C6. The only alterations needed to the preceding proof are:
$$w^*_{i\ell}(\gamma(e_i(\tau))) = w^*_\ell(\gamma(e_i(\tau))) = \frac{\alpha_\ell\, e^{\gamma(e_i(\tau))\, v^*_\ell}}{\sum_{m=1}^{J^*} \alpha_m\, e^{\gamma(e_i(\tau))\, v^*_m}}, \quad \ell = 1, \ldots, J^*, \;\text{ and} \tag{12}$$
$$p^*_{k\ell}(\tau_k) = \frac{a_{k\ell}\, e^{\eta_k(\tau_k)\, z^*_{k\ell}}}{\sum_{m=1}^{J^*_k} a_{km}\, e^{\eta_k(\tau_k)\, z^*_{km}}}, \quad k = 1, \ldots, K. \tag{13}$$
More generally, the same representation (11)-(13) applies for any $a_{k\ell} > 0$, $\alpha_\ell > 0$. Furthermore, Equations (12) and (13) are homogeneous of degree zero in $(\alpha_1, \ldots, \alpha_{J^*})$ and $(a_{k1}, \ldots, a_{kJ^*_k})$, respectively. Thus, without loss of generality, the normalization conditions:
$$\sum_{\ell=1}^{J^*} \alpha_\ell \equiv 1 \quad \text{and} \quad \sum_{\ell=1}^{J^*_k} a_{k\ell} \equiv 1$$
can be imposed.
Using Equations (11), (12), and (13), we have characterized the maximum cross-entropy solution. Upon substitution of Equations (11)-(13) into the appropriate arguments, all results, including the results in the next section on statistical testing, apply to the maximum cross-entropy paradigm.
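As a small illustration of Equation (12), the sketch below (illustrative values) tilts the softmax weights by a reference distribution $\alpha$; at $\gamma = 0$ the weights reduce to $\alpha$ itself, so the reference distribution plays the role of a prior:

```python
import numpy as np

def w_cross(gamma, v, alpha):
    """Cross-entropy analogue of w_l(gamma): alpha-weighted exponential tilt."""
    u = alpha * np.exp(gamma * v - np.max(gamma * v))
    return u / u.sum()

v = np.array([-10.0, 0.0, 10.0])
alpha = np.array([0.25, 0.5, 0.25])      # reference distribution over supports
print(w_cross(0.0, v, alpha))            # at gamma = 0 the weights equal alpha
print(w_cross(0.1, v, alpha))            # data constraints tilt the weights
```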

4. Statistical Tests

The GME estimator $\hat{\beta} = Z\hat{p}$ is consistent and asymptotically normally distributed. Therefore, asymptotically valid normal and $\chi^2$ test statistics can be used to test hypotheses about $\beta$. For empirical implementation of such tests, a consistent estimate of the asymptotic covariance matrix of $\hat{\beta}$ is required. An estimate of $\frac{1}{N \xi^2} \Omega^{-1}$ is straightforwardly obtained by calculating $M(\hat{\beta})^{-1} (X'X) M(\hat{\beta})^{-1}$, where:
$$M(\hat{\beta}) = \sum_{i=1}^{N} X_i' X_i \left( \sum_{\ell=1}^{J} v_\ell^2\, w_\ell(\gamma(e_i(\hat{\beta}))) - e_i(\hat{\beta})^2 \right)^{-1}.$$
An estimate of the variance, $\sigma_\gamma^2$, of the $\gamma_i$'s can be constructed as $\hat{\sigma}_\gamma^2(\hat{\beta}) = \frac{1}{N} \sum_{i=1}^{N} \gamma(e_i(\hat{\beta}))^2$. Then the asymptotic covariance matrix of $\hat{\beta}$ can be estimated by:
$$\widehat{\mathrm{Var}}(\hat{\beta}) = \hat{\sigma}_\gamma^2(\hat{\beta})\, M(\hat{\beta})^{-1} (X'X)\, M(\hat{\beta})^{-1}.$$
Alternatively, $\xi$ can be estimated by:
$$\hat{\xi}(\hat{\beta}) = \frac{1}{N} \sum_{i=1}^{N} \left( \sum_{\ell=1}^{J} v_\ell^2\, w_\ell(\gamma(e_i(\hat{\beta}))) - e_i(\hat{\beta})^2 \right)^{-1}.$$
Then:
$$\widehat{\mathrm{Var}}(\hat{\beta}) = \frac{\hat{\sigma}_\gamma^2(\hat{\beta})}{\hat{\xi}^2(\hat{\beta})}\, (X'X)^{-1}.$$
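The following sketch assembles these covariance estimators numerically. It assumes residuals $e_i(\hat{\beta})$ from a fitted model are available; purely for illustration, OLS residuals on simulated data stand in for the GME residuals, and the error supports are those used in Section 5:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
N = 200
v = np.array([-10.0, 0.0, 10.0])                  # error support points
X = np.column_stack([np.ones(N), rng.normal(2.0, 1.0, N)])
y = X @ np.array([2.0, 1.0]) + rng.normal(0.0, 1.0, N)
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]  # stand-in residuals e_i(beta_hat)

def w(g):
    u = np.exp(g * v - np.max(g * v))             # stabilized softmax weights
    return u / u.sum()

def gamma_of_e(ei):
    return brentq(lambda g: v @ w(g) - ei, -50.0, 50.0)

gam = np.array([gamma_of_e(ei) for ei in e])
d = np.array([(v**2) @ w(g) - ei**2 for g, ei in zip(gam, e)])  # per-obs Var_w(v)

sigma2_gamma = np.mean(gam**2)                    # estimate of sigma_gamma^2
M = (X.T * (1.0 / d)) @ X                         # M(beta_hat)
var_1 = sigma2_gamma * np.linalg.inv(M) @ X.T @ X @ np.linalg.inv(M)  # first form
xi_hat = np.mean(1.0 / d)                         # xi_hat
var_2 = (sigma2_gamma / xi_hat**2) * np.linalg.inv(X.T @ X)           # second form
print(np.sqrt(np.diag(var_1)), np.sqrt(np.diag(var_2)))
```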

4.1. Asymptotically Normal Tests

Because $T_Z = \frac{\hat{\beta}_k - \beta_k^0}{\sqrt{\widehat{\mathrm{Var}}(\hat{\beta})_{kk}}}$ is asymptotically $N(0,1)$ under the null hypothesis $H_0 : \beta_k = \beta_k^0$, the statistic $T_Z$ can be used to test hypotheses about the values of the $\beta_k$'s.
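A minimal sketch of the $T_Z$ test with hypothetical numbers (any consistent estimates of $\hat{\beta}_k$ and $\widehat{\mathrm{Var}}(\hat{\beta})_{kk}$ could be plugged in):

```python
import numpy as np
from scipy.stats import norm

beta_k_hat, beta_k0, var_kk = 0.92, 1.0, 0.0016   # hypothetical values
T_Z = (beta_k_hat - beta_k0) / np.sqrt(var_kk)
print(T_Z, 2 * norm.sf(abs(T_Z)))                 # statistic and two-sided p-value
```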

4.2. Wald Tests

Wald tests of linear restrictions on the elements of $\beta$ can be expressed in the usual form. Let $H_0 : R\beta = r$ be the null hypothesis to be tested, where $R$ is an $L \times K$ matrix with $\mathrm{rank}(R) = L \le K$. Then $\sqrt{N}(R\hat{\beta} - r) \overset{d}{\to} N\!\left( 0,\, R\, (\sigma_\gamma^2 \xi^{-2} \Omega^{-1})\, R' \right)$. Thus, the Wald test statistic has a $\chi^2$ limiting distribution as:
$$T_W = (R\hat{\beta} - r)' \left( R\, \widehat{\mathrm{Var}}(\hat{\beta})\, R' \right)^{-1} (R\hat{\beta} - r) \overset{d}{\to} \chi_L^2$$
under the null hypothesis $H_0$. Similarly, for nonlinear restrictions $g(\beta) = [0]$, where $g(\beta)$ is a continuously differentiable $L$-dimensional vector function with $q(b) = \frac{\partial g(b)}{\partial b'}$ and $\mathrm{rank}(q(\beta)) = L \le K$, it follows that:
$$T_W = g(\hat{\beta})' \left( q(\hat{\beta})\, \widehat{\mathrm{Var}}(\hat{\beta})\, q(\hat{\beta})' \right)^{-1} g(\hat{\beta}) \overset{d}{\to} \chi_L^2.$$
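The linear Wald statistic translates directly into code; the sketch below uses hypothetical estimates and a hypothetical covariance matrix to test $H_0 : \beta_2 = 1, \beta_3 = -1$ in the notation of the Monte Carlo design of Section 5:

```python
import numpy as np
from scipy.stats import chi2

beta_hat = np.array([2.1, 0.9, -1.05, 3.02])     # hypothetical GME estimates
V = np.diag([0.04, 0.01, 0.01, 0.005])           # hypothetical Var(beta_hat)
R = np.array([[0, 1, 0, 0], [0, 0, 1, 0]])       # H0: beta_2 = 1, beta_3 = -1
r = np.array([1.0, -1.0])

T_W = (R @ beta_hat - r) @ np.linalg.inv(R @ V @ R.T) @ (R @ beta_hat - r)
print(T_W, chi2.sf(T_W, df=R.shape[0]))          # statistic and p-value
```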

4.3. Likelihood Ratio Tests

To establish a pseudo-likelihood ratio test of functional restrictions on the $\beta$ vector, first note that:
$$F(\hat{\beta}) - F(\beta) \overset{d}{\to} \frac{1}{2} \left( \frac{1}{\sqrt{N}} G(\beta) \right)' \frac{1}{\xi}\, \Omega^{-1} \left( \frac{1}{\sqrt{N}} G(\beta) \right),$$
which follows from Equations (7) and (10) and the fact that $\frac{1}{N} H(\beta) \overset{p}{\to} -\xi \Omega$. Thus:
$$\frac{2\, \hat{\xi}(\hat{\beta})}{\hat{\sigma}_\gamma^2(\hat{\beta})} \left( F(\hat{\beta}) - F(\beta) \right) \overset{d}{\to} \chi_K^2.$$
Now let $\hat{\beta}_R$ be a restricted GME estimator of $\beta$. Thus, $\hat{\beta}_R = \arg\max_{b : Rb = r} (F(b))$ for a linear null hypothesis $H_0 : R\beta = r$, or $\hat{\beta}_R = \arg\max_{b : g(b) = 0} (F(b))$ for a general null hypothesis $H_0 : g(\beta) = [0]$. As before, let $L = \mathrm{rank}(R) \le K$ for a linear hypothesis or $L = \mathrm{rank}(q(\beta)) \le K$ for a general hypothesis.
Then:
$$\frac{2\, \hat{\xi}(\hat{\beta})}{\hat{\sigma}_\gamma^2(\hat{\beta})} \left( F(\hat{\beta}) - F(\hat{\beta}_R) \right) \overset{d}{\to} \chi_L^2$$
under the null hypothesis.

4.4. Lagrange Multiplier Tests

Define $R$, $r$, $g$, $q$, and $\hat{\beta}_R$ as above. Then a Lagrange multiplier test of functional restrictions on $\beta$ can be based on the fact that:
$$\frac{1}{\hat{\sigma}_\gamma^2(\hat{\beta}_R)}\, G(\hat{\beta}_R)'\, (X'X)^{-1}\, G(\hat{\beta}_R) \overset{d}{\to} \chi_L^2$$
under the null hypothesis.

5. Monte Carlo Simulations

A Monte Carlo experiment was conducted to explore the sampling behavior of estimators and test statistics based on the generalized maximum entropy estimator. The data were generated based on a linear model containing an intercept term, a dichotomous explanatory variable, and two continuously measured explanatory variables. The results of the Monte Carlo experiment also add perspective to the simulation results on the bias and mean square error of the maximum entropy estimator generated previously by [10].
The linear model $Y = X\beta + \varepsilon$ is specified as $Y = 2 + 1X_1 - 1X_2 + 3X_3 + \varepsilon$, where $X_1$ is a discrete random variable such that $X_{i1} \sim \text{iid Bernoulli}(0.5)$, observations on the pair of explanatory random variables $(X_{i2}, X_{i3})$ are generated as iid outcomes of $N\!\left( \begin{pmatrix} 2 \\ 5 \end{pmatrix}, \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix} \right)$ that are censored at the mean $\pm 3$ standard deviations, and outcomes of the disturbance term are defined as $\varepsilon = \left( \sum_{i=1}^{12} U_i \right) - 6$, where $U_i \sim \text{iid Uniform}(0,1)$. The support points for the disturbance terms were specified as $V = (-10, 0, 10)'$ (recall C2 and C3) for all experiments. Three different sets of support points were specified for the $\beta$-vector, given by:
$$Z_I = \begin{pmatrix} -2 & 2 & 6 \\ -3 & 1 & 5 \\ -5 & -1 & 3 \\ -1 & 3 & 7 \end{pmatrix}, \quad Z_{II} = \begin{pmatrix} -3 & 1 & 5 \\ -4 & 0 & 4 \\ -4 & 0 & 4 \\ 0 & 4 & 8 \end{pmatrix}, \quad \text{and} \quad Z_{III} = \begin{pmatrix} -10 & 0 & 10 \\ -10 & 0 & 10 \\ -10 & 0 & 10 \\ -10 & 0 & 10 \end{pmatrix}$$
(recall C1; each row gives the three support points for the corresponding element of $\beta$). The support points in $Z_I$ were chosen to be most favorable to the GME estimator, in that the elements of the true $\beta$-vector are located at the centers of their respective supports and the widths of the supports are relatively narrow. The supports represented by $Z_{II}$ are tilted to the left of the true $\beta_1$ and $\beta_2$ and to the right of the true $\beta_3$ and $\beta_4$ by 1 unit, with the widths of the supports being the same as their counterparts in $Z_I$. The last set of supports, represented by $Z_{III}$, are wider and effectively define an upper bound of 10 on the absolute values of each of the elements of $\beta$.
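For concreteness, the following sketch generates one replication of this design (the random seed and generator choices are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)                  # illustrative seed
n = 100
beta = np.array([2.0, 1.0, -1.0, 3.0])

x1 = rng.binomial(1, 0.5, n)                     # Bernoulli(0.5) regressor
mean = np.array([2.0, 5.0])
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
x23 = rng.multivariate_normal(mean, cov, n)
x23 = np.clip(x23, mean - 3.0, mean + 3.0)       # censor at mean +/- 3 sd (sd = 1)

eps = rng.uniform(0, 1, (n, 12)).sum(axis=1) - 6.0  # approx N(0,1), bounded

X = np.column_stack([np.ones(n), x1, x23])
y = X @ beta + eps
print(np.linalg.lstsq(X, y, rcond=None)[0])      # OLS benchmark on one sample
```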
To explore the respective sizes of the various tests presented in Section 4, the hypothesis $H_0 : \beta_2 = c$ was tested using the $T_Z$ test, and the hypothesis $H_0 : \beta_2 = c, \beta_3 = d$ was tested using the Wald, pseudo-likelihood, and Lagrange multiplier tests, with $c$ and $d$ set equal to the true values of $\beta_2$ and $\beta_3$, i.e., $c = 1$ and $d = -1$. Critical values of the tests were based on their respective asymptotic distributions and a 0.05 level of significance. An observation on the power of the respective tests was obtained by performing a test of significance whereby $c = d = 0$ in the preceding hypotheses. All scenarios were analyzed using 10,000 Monte Carlo repetitions, and sample sizes of n = 25, 100, 400, and 1,600 were examined. In the course of calculating values of the test statistics, both unrestricted and restricted (by $\beta_2 = c$ and/or $\beta_3 = d$) GME estimators needed to be calculated. Therefore, bias and mean square error measures relating to these and the least squares estimators were calculated as well. Monte Carlo results for the test statistics and for the unrestricted GME and OLS estimators are presented in Table 1 and Table 2, respectively, while results relating to the restricted GME and OLS estimators are presented in Table 3. Because the choice of which asymptotic covariance matrix to use in calculating the $T_Z$ and Wald tests was inconsequential, only results for the second suggested covariance matrix representation are presented here.
Regarding properties of the test statistics, their behavior under a true $H_0$ is consistent with the behavior expected from the respective asymptotic distributions when n is large (sample size of 1,600), their sizes being approximately 0.05 regardless of the choice of support for $\beta$. The sizes of the tests remain within 0.01 of their asymptotic size when n decreases to 400, except for the Lagrange multiplier test under support $Z_{II}$, which has a slightly larger size. Across all support choices and ranging over all sample sizes from small to large, the sizes of the $T_Z$ and Wald tests remain in the 0 to 0.10 range; for $Z_I$ supports and small sample sizes, the sizes of the tests are substantially less than 0.05. Results were similar for the pseudo-likelihood and Lagrange multiplier tests, except for the cases of $Z_{II}$ support and n ≤ 100, where the size of the test increased to as high as 0.36 for the pseudo-likelihood test and 0.73 for the Lagrange multiplier test when n = 25.
Table 1. Rejection Probabilities for True ($\beta_2 = 1$, $\beta_3 = -1$) and False ($\beta_2 = \beta_3 = 0$) Hypotheses. For the $T_Z$ test the true and false hypotheses are $\beta_2 = 1$ and $\beta_2 = 0$; for the Wald, pseudo-likelihood, and Lagrange multiplier tests they are ($\beta_2 = 1$, $\beta_3 = -1$) and ($\beta_2 = \beta_3 = 0$).

Supports   n      Tz             Wald           Pseudo-Likelihood   Lagrange Multiplier
                  True    False  True    False  True    False       True    False
Z_I        25     0.000   0.825  0.004   0.998  0.021   1.000       0.059   1.000
           100    0.017   0.999  0.022   1.000  0.038   1.000       0.056   1.000
           400    0.041   1.000  0.042   1.000  0.048   1.000       0.053   1.000
           1600   0.047   1.000  0.046   1.000  0.049   1.000       0.050   1.000
Z_II       25     0.101   0.047  0.080   0.894  0.357   0.980       0.734   0.995
           100    0.085   0.996  0.067   1.000  0.114   1.000       0.172   1.000
           400    0.053   1.000  0.048   1.000  0.058   1.000       0.066   1.000
           1600   0.052   1.000  0.052   1.000  0.055   1.000       0.057   1.000
Z_III      25     0.038   0.670  0.070   0.967  0.097   0.980       0.088   0.972
           100    0.045   0.999  0.050   1.000  0.057   1.000       0.052   1.000
           400    0.045   1.000  0.050   1.000  0.051   1.000       0.050   1.000
           1600   0.051   1.000  0.051   1.000  0.052   1.000       0.051   1.000
The powers of the tests were all substantial in rejecting false null hypotheses, except for the $T_Z$ test in the case of $Z_{II}$ support and the smallest sample size, the latter result being indicative of a notably biased test. Overall, the choice of support did impact the power of the tests for rejecting the errant hypotheses, although the effect was small for all but the $T_Z$ test.
In the case of the unrestricted estimators and the most favorable support choice ($Z_I$), the GME estimator dominated the OLS estimator in terms of MSE, and the GME superiority was substantial for sample sizes of n ≤ 100 (Table 2). The GME-$Z_I$ estimator and, of course, the OLS estimator were unbiased, with the GME-$Z_I$ estimator exhibiting substantially smaller variances for smaller n. The choice of support has a significant effect on the bias and MSE of the GME estimator for small sample sizes. Neither the GME-$Z_{II}$ nor the GME-$Z_{III}$ estimator dominates the OLS estimator, although the GME-$Z_{III}$ estimator is generally the better estimator across the various sample sizes. When n = 25, the GME-$Z_{II}$ estimator offers notable improvement over OLS for estimating three of the four elements of $\beta$, but is significantly worse for estimating $\beta_2$. For larger sample sizes, the GME-$Z_{II}$ estimator is generally inferior to the OLS estimator. Although the centers of the $Z_{III}$ support are on average further from the true $\beta$'s than are the centers of the $Z_{II}$ support, the wider widths of the former result in a superior GME estimator.
The results for the restricted GME estimators in Table 3 indicate that under the errant constraints $\beta_2 = \beta_3 = 0$, the GME estimator dominates the OLS estimator for all sample sizes and for all support choices. The superiority of the GME estimator is substantial for smaller sample sizes, but dissipates as the sample size increases. The results suggest a misspecification robustness of the GME estimator that deserves further investigation.
Table 2. $E(\hat{\beta}_i)$ and Mean Square Error Measures, Unrestricted Estimators.

Estimator   n      β1 = 2          β2 = 1          β3 = −1          β4 = 3
                   E(β̂1)   MSE    E(β̂2)   MSE    E(β̂3)    MSE    E(β̂4)   MSE
GME-Z_I     25     2.000   0.015   1.001   0.038   −1.001   0.028   3.000   0.006
            100    2.003   0.034   1.003   0.026   −1.000   0.011   2.999   0.004
            400    2.000   0.032   1.001   0.009   −1.000   0.003   3.000   0.002
            1600   2.000   0.014   1.000   0.002   −1.000   0.001   3.000   0.001
GME-Z_II    25     1.022   0.977   0.484   0.309   −0.840   0.058   3.182   0.040
            100    1.306   0.519   0.826   0.056   −0.966   0.013   3.139   0.023
            400    1.672   0.141   0.960   0.010   −0.996   0.003   3.066   0.006
            1600   1.892   0.026   0.991   0.002   −1.000   0.001   3.022   0.001
GME-Z_III   25     1.278   0.757   0.946   0.131   −0.881   0.069   3.092   0.028
            100    1.709   0.252   0.995   0.037   −0.978   0.014   3.046   0.011
            400    1.914   0.068   0.999   0.010   −0.996   0.003   3.015   0.003
            1600   1.978   0.017   0.999   0.002   −0.999   0.001   3.004   0.001
OLS         25     1.997   1.342   1.002   0.181   −1.002   0.066   3.001   0.065
            100    2.009   0.283   1.003   0.041   −1.000   0.014   2.998   0.014
            400    2.001   0.068   1.001   0.010   −1.000   0.003   3.000   0.003
            1600   2.000   0.017   1.000   0.003   −1.000   0.001   3.000   0.001
Table 3. $E(\hat{\beta}_i)$ and Mean Square Error Measures, Restricted Estimators Under the Errant Restriction $\beta_2 = \beta_3 = 0$.

Estimator   n      β1 = 2          β4 = 3
                   E(β̂1)   MSE    E(β̂4)   MSE
GME-Z_I     25     2.078   0.041   2.681   0.011
            100    2.340   0.191   2.630   0.142
            400    2.689   0.537   2.600   0.196
            1600   2.898   0.832   2.520   0.232
GME-Z_II    25     1.064   0.915   2.885   0.018
            100    1.603   0.234   2.772   0.056
            400    2.330   0.169   2.630   0.140
            1600   2.776   0.628   2.543   0.210
GME-Z_III   25     1.686   0.589   2.750   0.084
            100    2.468   0.542   2.601   0.172
            400    2.842   0.823   2.530   0.225
            1600   2.958   0.948   2.508   0.243
OLS         25     3.011   3.342   2.497   0.342
            100    3.013   1.575   2.497   0.274
            400    3.005   1.138   2.499   0.256
            1600   2.999   1.030   2.500   0.251

Asymmetric Error Supports

We present further Monte Carlo simulations to show that regularity condition R2, which assumes symmetry of the disturbance term, is not a necessary condition for identification of the GME slope parameters. It is demonstrated below that if the supports of the error distribution are asymmetric, then only the intercept term of the GME regression estimator is asymptotically biased.
The Monte Carlo experiments that follow are identical to those above except for the specification of the user-supplied support points for the error terms and the underlying true error distribution. To illustrate the impact of asymmetric errors, experiments are based on one set of support points symmetric about zero, $V_I = (-10, 0, 10)'$, and two sets of support points not symmetric about zero, $V_{II} = (-5, 5, 15)'$ and $V_{III} = (-5, 0, 15)'$. The support $V_{II}$ is a simple translation of $V_I$ by five positive units in magnitude, retaining symmetry centered about 5. The asymmetric support $V_{III}$ translates the truncation points by five positive units in magnitude, but retains the center support point of 0. The true error distribution is generated in two ways: a symmetric distribution specified as a N(0,1) distribution truncated at (−3,3), and an asymmetric distribution specified as a Beta(3,2) distribution translated and scaled from support (0,1) to (−3,3), with mean 0.6. Supports on the coefficients are retained as $Z_I$, providing symmetric support points about the true coefficient values.
The Monte Carlo experiments presented in Table 4 and Table 5 are generated for sample sizes 25, 100, and 400, with 1,000 replications for each sample size. Consider first the case in which the true distribution is symmetric about zero. Slope coefficients for error supports that are not symmetric about zero appear biased in smaller sample sizes. However, the bias and MSE of the slope coefficients decrease as the sample size increases. Next, suppose the true distribution is asymmetric. For both symmetric and asymmetric supports, only the intercept terms are persistently biased, diverging from the true parameter values as the sample size increases. These results demonstrate the robustness of the GME slope coefficients to asymmetric error distributions and user-supplied supports.
Table 4. Mean and MSE of 1,000 Monte Carlo Simulations with True Distribution Symmetric; Symmetric and Asymmetric Error Supports and Coefficient Support $Z_I$.

Estimator       n     β1 = 2           β2 = 1          β3 = −1          β4 = 3
                      E(β̂1)    MSE    E(β̂2)   MSE    E(β̂3)    MSE    E(β̂4)   MSE
GME-Z_I,V_I     25    2.002    0.016   1.003   0.042   −1.000   0.030   2.997   0.007
                100   2.000    0.033   1.001   0.026   −1.002   0.011   3.002   0.004
                400   2.000    0.035   1.001   0.010   −0.998   0.003   2.999   0.002
GME-Z_I,V_II    25    1.259    0.585   0.815   0.101   −1.009   0.048   2.209   0.636
                100   0.208    3.258   0.804   0.071   −0.944   0.020   2.381   0.389
                400   −1.144   9.903   0.868   0.028   −0.959   0.005   2.640   0.132
GME-Z_I,V_III   25    1.506    0.271   0.875   0.069   −1.005   0.038   2.476   0.282
                100   0.752    1.598   0.875   0.045   −0.961   0.015   2.602   0.163
                400   −0.235   5.024   0.925   0.015   −0.977   0.004   2.794   0.044
OLS             25    2.014    1.321   1.007   0.204   −0.998   0.069   2.993   0.065
                100   1.999    0.280   1.001   0.042   −1.002   0.014   3.002   0.014
                400   2.001    0.075   1.001   0.011   −0.997   0.003   2.999   0.003
Table 5. Mean and MSE of 1,000 Monte Carlo Simulations with True Distribution Asymmetric; Symmetric and Asymmetric Error Supports and Coefficient Support $Z_I$.

Estimator       n     β1 = 2           β2 = 1          β3 = −1          β4 = 3
                      E(β̂1)    MSE    E(β̂2)   MSE    E(β̂3)    MSE    E(β̂4)   MSE
GME-Z_I,V_I     25    2.089    0.031   1.038   0.060   −1.005   0.041   3.094   0.018
                100   2.233    0.108   1.023   0.033   −1.006   0.016   3.071   0.010
                400   2.427    0.229   1.015   0.012   −1.004   0.005   3.033   0.004
GME-Z_I,V_II    25    1.358    0.449   0.843   0.103   −1.021   0.057   2.305   0.496
                100   0.410    2.583   0.826   0.073   −0.966   0.019   2.463   0.294
                400   −0.860   8.209   0.890   0.025   −0.966   0.006   2.700   0.092
GME-Z_I,V_III   25    1.597    0.190   0.905   0.075   −1.019   0.049   2.574   0.193
                100   0.964    1.129   0.889   0.055   −0.967   0.020   2.674   0.112
                400   0.126    3.553   0.946   0.016   −0.981   0.005   2.835   0.030
OLS             25    2.600    2.324   1.041   0.261   −1.009   0.097   2.998   0.099
                100   2.616    0.813   1.001   0.052   −0.999   0.020   2.997   0.021
                400   2.610    0.471   1.003   0.013   −1.000   0.005   2.997   0.005

6. Further Results

Unbiased GME Estimation. It is apparent from the proof of the theorem in Section 3 that the $-\sum_{\ell=1}^{J_k} p_{k\ell} \ln(p_{k\ell})$ terms are asymptotically uninformative. It is instructive to note that if these terms are deleted from the GME objective function, and the resulting objective function is then maximized through choosing $b$ and $w$ subject to constraints C2-C4 and C6, the resulting GME estimator is in fact unbiased for estimating $\beta$. This follows because the $\varepsilon_i$'s are iid, mean zero, and symmetrically distributed around zero, and the new estimator, say $\tilde{\beta}$, is such that $\tilde{\beta} - \beta$ is a symmetric function of the $\varepsilon_i$'s.
Bayesian Analogues. As pointed out by [35], maximum entropy methods can be motivated as an empirical Bayes rule. We expand on their analogy by noting a strong formal parallel to the traditional Bayesian framework of inference. In particular, one can view $-\sum_{k=1}^{K} \sum_{\ell=1}^{J_k} p_{k\ell} \ln(p_{k\ell})$ as the maximum entropy analogue to the log of a non-normalized Bayesian prior, and $-\sum_{i=1}^{N} \sum_{\ell=1}^{J} w_{i\ell} \ln(w_{i\ell})$ as the maximum entropy analogue to the non-normalized log of the probability density kernel or log-likelihood function. For any given set of support points $Z$ and $V$, we can define functions $f_{\beta_k}$ and $f_\varepsilon$ by:
$$f_{\beta_k}(b_k) = \frac{e^{-\sum_{\ell=1}^{J_k} p_{k\ell}(b_k) \ln(p_{k\ell}(b_k))}}{\int_{z_{k1}}^{z_{kJ_k}} e^{-\sum_{\ell=1}^{J_k} p_{k\ell}(\tau_k) \ln(p_{k\ell}(\tau_k))}\, d\tau_k} \quad \text{and} \quad f_\varepsilon(x) = \frac{e^{-\sum_{\ell=1}^{J} w_\ell(x) \ln(w_\ell(x))}}{\int_{v_1}^{v_J} e^{-\sum_{\ell=1}^{J} w_\ell(y) \ln(w_\ell(y))}\, dy}$$
Then for $\varepsilon \sim \text{iid } f_\varepsilon$, the maximum likelihood estimator of $\beta$ is $\tilde{\beta}$, and if one adds the priors $\beta_k \sim \text{ind } f_{\beta_k}$, then $\hat{\beta}$ is the Bayesian posterior mode estimator of $\beta$. We note the following consequences of these equivalences. First, if the support points $v_1, \ldots, v_J$ can be chosen so that $f_\varepsilon$ is very close to the true distribution of $\varepsilon$, then the GME estimator should be nearly asymptotically efficient. Second, in finite samples the prior information influences $\hat{\beta}$, so that $\hat{\beta}$ is generally not unbiased. Third, the support points used in the GME estimator have no particular relationship to the points of support of the distribution of a discrete random variable; the distributions $f_\varepsilon$ and $f_{\beta_k}$ are absolutely continuous for any choice of $Z$ and $V$.
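A numerical sketch of the implied error density $f_\varepsilon$ (using the three-point error support from Section 5): exponentiate the maximized entropy at each value $x$, then normalize by numerical integration:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

v = np.array([-10.0, 0.0, 10.0])                 # error supports, as in Section 5

def softmax(g):
    u = np.exp(g * v - np.max(g * v))
    return u / u.sum()

def kernel(x):
    """exp(entropy of w(x)), where gamma solves sum_l v_l w_l(gamma) = x."""
    g = brentq(lambda t: v @ softmax(t) - x, -60.0, 60.0)
    w = softmax(g)
    return np.exp(-np.sum(w * np.log(w + 1e-300)))  # guard against w_l = 0

Zc, _ = quad(kernel, v[0] + 1e-6, v[-1] - 1e-6)  # normalizing constant
f_eps = lambda x: kernel(x) / Zc
print(f_eps(0.0), f_eps(5.0))                    # the density is maximal at x = 0
```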
The previous Monte Carlo results illustrate the Bayesian-like character of the maximum entropy results. The GME with reasonably narrow points of support centered on the true values of β dominated the OLS estimator and was sometimes far better. On the other hand, the GME performed poorly when the points of support were similarly narrow and mis-centered by only one-eighth the range of the points of support. In the latter case, mean squared errors were often much worse than OLS and biases were often substantial. Finally, wider points of support, even though they were the most mis-centered of the cases examined, were quite similar to OLS results for moderate to large sample sizes, and provided some degree of improvement over OLS for small samples.
Finally, the GME approach is a special case of generalized cross entropy, which incorporates a reference probability distribution over support points. This allows a direct method of including prior information, akin to a Bayesian framework. However, in a classical sense, the empirical estimation strategies are inherently different.
GME Calculation Method. The conditional maximum entropy formulation (2) utilized in the proof of the asymptotic results represents the basis for a computationally efficient method of obtaining GME estimates. In particular, maximizing $F(\tau)$ through choice of $\tau$ involves a nonlinear search over a vector of relatively low dimension (K), as opposed to searching over the (KM + NJ)-dimensional space of (p, w) values. In the process of concentrating the objective function, note that the needed Lagrange multiplier functions $\eta_k(\tau_k)$ and $\gamma(e_i(\tau))$ can be expressed as elementary functions for three or fewer support points, and still exist in closed form (using inverse hyperbolic functions) for support vectors having five elements. As a point of comparison, the calculation of GME estimates in the Monte Carlo experiment with N = 1,600 was completed in a matter of seconds on a 133 MHz personal computer. Such a calculation would be intractable in the space of (p, w) values. We note further that the dual algorithm of [10] would still involve a search over a space of dimension N = 1,600, which would be infeasible here and in other problems in which the number of data points is large.
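The following sketch (an illustrative implementation under our own simulation choices, not the authors' original code) carries out exactly this concentrated search: the multiplier equations for $\eta_k(\tau_k)$ and $\gamma(e_i(\tau))$ are solved numerically per coordinate, and $F(\tau)$ is maximized over the K-vector $\tau$ alone:

```python
import numpy as np
from scipy.optimize import brentq, minimize
from scipy.special import logsumexp

rng = np.random.default_rng(1)
N = 200
X = np.column_stack([np.ones(N), rng.normal(2.0, 1.0, N)])
y = X @ np.array([2.0, 1.0]) + (rng.uniform(0, 1, (N, 12)).sum(axis=1) - 6.0)

v = np.array([-10.0, 0.0, 10.0])                 # error support
Z = [np.array([-2.0, 2.0, 6.0]),                 # coefficient supports, Z_I style
     np.array([-3.0, 1.0, 5.0])]

def mult(target, s):
    """Solve sum_l s_l w_l(m) = target for multiplier m, with w a softmax in m*s."""
    def mean_s(m):
        u = np.exp(m * s - np.max(m * s))
        return s @ (u / u.sum())
    return brentq(lambda m: mean_s(m) - target, -60.0, 60.0)

def F(tau):
    """Concentrated entropy objective, Equation (6); -inf outside feasibility."""
    e = y - X @ tau
    if any(not (zk[0] < tk < zk[-1]) for zk, tk in zip(Z, tau)) \
       or e.min() <= v[0] or e.max() >= v[-1]:
        return -np.inf
    val = 0.0
    for zk, tk in zip(Z, tau):                   # coefficient entropy terms
        eta = mult(tk, zk)
        val += logsumexp(eta * zk) - eta * tk
    for ei in e:                                 # error entropy terms
        g = mult(ei, v)
        val += logsumexp(g * v) - g * ei
    return val

# Nonlinear search over the K-vector tau only, starting at the support midpoints.
res = minimize(lambda t: -F(t), x0=np.array([2.0, 1.0]), method="Nelder-Mead")
print(res.x)                                     # GME estimate of beta
```

The search space here has dimension K = 2 regardless of N, which is what makes the concentrated formulation fast relative to optimizing over the full (p, w) space.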

7. Conclusions

We have shown that the data-constrained GME estimator of the GLM is consistent and asymptotically normal as long as the coefficients and errors obey the constraints of the constrained maximum entropy problem. Furthermore, we have demonstrated the possibility that the GME estimator can be asymptotically efficient. Thus, depending on the distribution of the errors, GME may be more or less efficient than alternatives such as least squares. We performed Monte Carlo tests showing that the quality of the GME estimates depends on the quality of the supports chosen. The Monte Carlo results suggest that GME with wide supports will often perform better than OLS while providing some robustness to misspecification.
We have shown how all the conventional types of asymptotic tests can be calculated for GME estimates. In the Monte Carlo study these asymptotic tests performed extremely well in samples of 400 or more. In smaller samples the tests performed less well, particularly when the supports were narrow, although some of the results were quite acceptable. We have also demonstrated that all our results can be applied to a maximum cross-entropy estimator. While our focus has been on asymptotic properties, we have also shown how the entropy terms involving the coefficients play a role analogous to a Bayesian prior. Furthermore, these terms are asymptotically uninformative and can be omitted if the researcher wishes to use an unbiased GME estimator.

References

  1. Imbens, G.; Spady, R.; Johnson, P. Information Theoretic Approaches to Inference in Moment Condition Models. Econometrica 1998, 66, 333–357. [Google Scholar] [CrossRef]
  2. Imbens, G. A New Approach to Generalized Method of Moments Estimation, Harvard Institute of Economic Research Discussion Paper No. 1633. Harvard University: Cambridge, MA, USA, 1993. [Google Scholar]
  3. Kitamura, Y.; Stutzer, M. An Information-Theoretic Alternative to Generalized Method of Moments Estimation. Econometrica 1997, 65, 861–874. [Google Scholar] [CrossRef]
  4. Cressie, N.; Read, T.R.C. Multinomial goodness of fit tests. J. Roy. Stat. Soc. B 1984, 46, 440–464. [Google Scholar]
  5. Pompe, B. On Some Entropy Measures in Data Analysis. Chaos Solitons Fractals 1994, 4, 83–96. [Google Scholar] [CrossRef]
  6. Seidenfeld, T. Entropy and Uncertainty. Philos. Sci. 1986, 53, 467–491. [Google Scholar] [CrossRef]
  7. Judge, G.G.; Mittelhammer, R.C. An Information Theoretic Approach to Econometrics; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  8. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  9. Jaynes, E.T. Information Theory and Statistical Mechanics. In Statistical Physics; Ford, K., Ed.; Benjamin: New York, NY, USA, 1963; p. 181. [Google Scholar]
  10. Golan, A.; Judge, G.; Miller, D. Maximum Entropy Econometrics: Robust Estimation with Limited Data; Wiley & Sons: New York, NY, USA, 1996. [Google Scholar]
  11. Zellner, A.; Highfield, R.A. Calculation of maximum entropy distributions and approximation of marginal posterior distributions. J. Econometrics 1988, 37, 195–209. [Google Scholar] [CrossRef]
  12. Soofi, E. Information Theoretic Regression Methods. In Applying Maximum Entropy to Econometric Problems (Advances in Econometrics); Fomby, T., Hill, R.C., Eds.; Emerald Group Publishing Limited: London, UK, 1997. [Google Scholar]
  13. Ryu, H.K. Maximum entropy estimation of density and regression functions. J. Econometrics 1993, 56, 397–440. [Google Scholar] [CrossRef]
  14. Golan, A.; Judge, G.; Perloff, J.M. A maximum entropy approach to recovering information from multinomial response data. JASA 1996, 91, 841–853. [Google Scholar] [CrossRef]
  15. Vinod, H.D. Maximum Entropy Ensembles for Time Series Inference in Economics. Asian Econ. 2006, 17, 955–978. [Google Scholar] [CrossRef]
  16. Holm, J. Maximum entropy Lorenz curves. J. Econometrics 1993, 59, 377–389. [Google Scholar] [CrossRef]
  17. Marsh, T.L.; Mittelhammer, R.C. Generalized Maximum Entropy Estimation of a First Order Spatial Autoregressive Model. In Spatial and Spatiotemporal Econometrics (Advances in Econometrics); Pace, R.K., LeSage, J.P., Eds.; Emerald Group Publishing Limited: London, UK, 2004. [Google Scholar]
  18. Krebs, T. Statistical Equilibrium in One-Step Forward Looking Economic Models. JET 1997, 73, 365–394. [Google Scholar] [CrossRef]
  19. Golan, A.; Judge, G.; Karp, L. A maximum entropy approach to estimation and inference in dynamic models or counting fish in the sea using maximum entropy. JEDC 1996, 20, 559–582. [Google Scholar] [CrossRef]
  20. Kattuman, P.A. On the size Distribution of Establishments of Large Enterprises: An Analysis for UK Manufacturing; University of Cambridge: Cambridge, UK, 1995. [Google Scholar]
  21. Callen, J.L.; Kwan, C.C.Y.; Yip, P.C.Y. Foreign-Exchange Rate Dynamics: An Empirical Study Using Maximum Entropy Spectral Analysis. J. Bus. Econ. Stat. 1985, 3, 149–155. [Google Scholar]
  22. Bellacicco, A.; Russo, A. Dynamic Updating of Labor Force Estimates: JARES. Labor 1991, 5, 165–175. [Google Scholar]
  23. Sengupta, J.K. The maximum entropy approach in production frontier estimation. Math. Soc. Sci. 1992, 25, 41–57. [Google Scholar] [CrossRef]
  24. Fraser, I. An application of maximum entropy estimation: The demand for meat in the United Kingdom. Appl. Econ. 2000, 32, 45–59. [Google Scholar] [CrossRef]
  25. Lev, B.; Theil, H. A Maximum Entropy Approach to the Choice of Asset Depreciation. J. Accounting Res. 1978, 16, 286–293. [Google Scholar] [CrossRef]
  26. Stutzer, M. A simple nonparametric approach to derivative security valuation. J. Financ. 1996, 51, 1633–1652. [Google Scholar] [CrossRef]
  27. Buchen, P.W.; Kelly, M. The Maximum Entropy Distribution of an Asset Inferred from Option Prices. J. Financ. and Quant. Anal. 1996, 31, 143–159. [Google Scholar] [CrossRef]
  28. Preckel, P.V. Least squares and entropy: A penalty function perspective. Am. J. Agr. Econ. 2001, 83, 366–377. [Google Scholar] [CrossRef]
  29. Paris, Q.; Howitt, R. An Analysis of Ill-Posed Production Problems Using Maximum Entropy. Am. J. Agr. Econ. 1998, 80, 124–138. [Google Scholar] [CrossRef]
  30. Lence, S.H.; Miller, D.J. Recovering Output-Specific Inputs from Aggregate Input Data: A Generalized Cross-Entropy Approach. Am. J. Agr. Econ. 1998, 80, 852–867. [Google Scholar] [CrossRef]
  31. Miller, D.J.; Plantinga, A.J. Modeling Land Use Decisions with Aggregate Data. Am. J. Agr. Econ. 1999, 81, 180–194. [Google Scholar] [CrossRef]
  32. Fernandez, L. Recovering Wastewater Treatment Objectives: An Application of Entropy Estimation for Inverse Control Problems. In Advances in Econometrics, Applying Maximum Entropy to Econometric Problems; Fomby, T., Hill, R.C., Eds.; Jai Press Inc.: London, UK, 1997. [Google Scholar]
  33. White, H. Asymptotic Theory for Econometricians; Academic Press: New York, NY, USA, 1984. [Google Scholar]
  34. Rao, C.R. Linear Statistical Inference and Its Applications, 2nd ed.; Wiley & Sons: New York, NY, USA, 1973. [Google Scholar]
  35. Miller, D.; Judge, G.; Golan, A. Robust Estimation and Conditional Inference with Noisy Data; University of California: Berkeley, CA, USA, 1996. [Google Scholar]
