Consistency and Generalization Bounds for Maximum Entropy Density Estimation

We investigate the statistical properties of maximum entropy density estimation, both for the complete data case and the incomplete data case. We show that under certain assumptions, the generalization error can be bounded in terms of the complexity of the underlying feature functions. This allows us to establish the universal consistency of maximum entropy density estimation.


Introduction
The maximum entropy (ME) principle, originally proposed by Jaynes in 1957 [1], is an effective method for combining different sources of evidence from complex, yet structured, natural systems. It has since been widely applied in science, engineering and economics. In machine learning, ME was first popularized by Della Pietra et al. [2], who applied it to induce overlapping features for a Markov random field model of natural language. Later, it was applied to other machine learning areas, such as information fusion [3] and reinforcement learning [4]. It is now well known that for complete data, the ME principle is equivalent to maximum likelihood estimation (MLE) in a Markov random field. In fact, these two problems are exact duals of one another. Recently, Wang et al. [5] proposed the latent maximum entropy (LME) principle to extend Jaynes' maximum entropy principle to deal with hidden variables, and demonstrated its effectiveness in many statistical models, such as mixture models, Boltzmann machines and language models [6]. We show that LME is different from both Jaynes' maximum entropy principle and maximum likelihood estimation, but it often yields better estimates in the presence of hidden variables and limited training data. This paper investigates the statistical properties of maximum entropy density estimation for both the complete and incomplete data cases.
Large sample asymptotic convergence results for MLE are typically based on point estimation analysis [7] in parametric models. Although point estimators have been extensively studied in the statistics literature since Fisher, these analyses typically do not consider generalization ability. Vapnik and Chervonenkis famously reformulated the problem of MLE for density estimation in the framework of empirical risk minimization and provided the first necessary and sufficient conditions for consistency [8,9]. However, the model they considered is still in a Fisher-Wald parametric setting. Barron and Sheu [10] considered a density estimation problem very similar to the one we address here, but restricted to the one-dimensional case within a bounded interval. Their analysis cannot be easily generalized to the high dimensional case. Recently, Dudik et al. [11] analyzed regularized maximum entropy density estimation with inequality constraints and derived generalization bounds for this model. However, once again, their analysis does not easily extend beyond the specific model considered.
Some researchers have studied the consistency of maximum likelihood estimators under the Hellinger divergence [12], which is a particularly convenient measure for studying maximum likelihood estimation in a general distribution-free setting. However, Kullback-Leibler divergence is a more natural measure for probability distributions and is closely related to the perplexity measure used in language modeling and speech recognition research [13,14]. Moreover, convergence in the Kullback-Leibler divergence always establishes consistency in terms of the Hellinger divergence [12], but not vice versa. Therefore, we concentrate on using the Kullback-Leibler divergence in our analysis.
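The one-way implication can be checked numerically. The following sketch (with arbitrary discrete distributions, purely for illustration) verifies the standard inequality D(p ‖ q) ≥ 2H²(p, q), which is why convergence in Kullback-Leibler divergence forces convergence in Hellinger distance, while the converse fails in general:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def hellinger_sq(p, q):
    """Squared Hellinger distance H^2(p, q) = 1 - sum_i sqrt(p_i q_i)."""
    return float(1.0 - np.sum(np.sqrt(p * q)))

# Random strictly positive distributions on a 10-point space.
for _ in range(1000):
    p = rng.dirichlet(np.ones(10))
    q = rng.dirichlet(np.ones(10))
    # D(p||q) >= 2 H^2(p,q) follows from -log t >= 1 - t.
    assert kl(p, q) >= 2.0 * hellinger_sq(p, q) - 1e-12
```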
In this paper, we investigate consistency and generalization bounds for maximum entropy density estimation with respect to the Kullback-Leibler divergence. The main technique we use in our analysis is Rademacher complexity, first used by Koltchinskii and Panchenko [15] to analyze the generalization error of combined classification methods, such as boosting, support vector machines and neural networks. Since then, the convenience of Rademacher analysis has been exploited by many to analyze various learning problems in classification and regression. For example, Rakhlin et al. [16] have used this technique to derive risk bounds for the density estimation of mixture models, which basically belong to directed graphical models using a conditional parameterization. Here, we use the Rademacher technique to analyze the generalization error of maximum entropy density estimation for general Markov random fields.

Maximum Entropy Density Estimation: Complete Data
Let X ∈ X be a random variable. Given a set of feature functions F(x) = {f_1(x), ..., f_N(x)} specifying properties one would like to match in the data, the maximum entropy principle states that we should select a probability model, p(x), from the space of all probability distributions, P(x), over X, to maximize entropy subject to the constraint that the feature expectations are preserved:

max_{p∈P(x)} −∫_X p(x) log p(x) µ(dx) subject to ∫_X f_i(x) p(x) µ(dx) = ∫_X f_i(x) p_0(x) µ(dx), i = 1, ..., N, (1)

where p_0(x) denotes the unknown underlying true density and µ denotes a given σ-finite measure on X. If X is finite or countably infinite, then µ is the counting measure, and integrals reduce to sums. If X is a subset of a finite dimensional space, µ is the Lebesgue measure. If X is a combination of both cases, µ will be a combination of both measures. The dual problem is:

max_{λ∈Ω} Σ_{i=1}^N λ_i ∫_X f_i(x) p_0(x) µ(dx) − log Z_λ, (3)

where Z_λ = ∫_X exp(⟨λ, f(x)⟩) µ(dx). We will use the following notation and terminology throughout the analysis below. Define:

Ω = {λ ∈ R^N : Z_λ < ∞},

and let:

E(x) = {p_λ(x) = Z_λ^{-1} exp(⟨λ, f(x)⟩) : λ ∈ Ω}

denote the exponential family induced by the set of feature functions. The restriction, λ ∈ Ω, will guarantee that the maximum likelihood estimate is an interior point of the set of λ's for which p_λ(x) is defined. The optimal solution, p_λ̂(x), of Equation (1) or Equation (3) is called the information projection [10,17] of p_0(x) to the exponential family, E(x).
In practice, the true distribution, p_0(x), is not known; instead, a collection of data X̃ = (x_1, ..., x_M) sampled from p_0(x) is given. Therefore, instead of using the true distribution, p_0(x), we use the empirical distribution, p̂(x), to calculate the feature expectations. The ME principle then becomes:

max_{p∈P(x)} −∫_X p(x) log p(x) µ(dx) subject to ∫_X f_i(x) p(x) µ(dx) = (1/M) Σ_{j=1}^M f_i(x_j), i = 1, ..., N, (4)

and the dual problem using these constraints becomes:

max_{λ∈Ω} (1/M) Σ_{j=1}^M ⟨λ, f(x_j)⟩ − log Z_λ, (6)

which is exactly maximum likelihood estimation over the exponential family. The optimal solution, p_λ*(x), of Equation (4) or Equation (6) is the information projection of p̂(x) to the exponential family, E(x); see Figure 1.
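For intuition, the dual problem is easy to solve numerically on a small discrete space. The sketch below (toy feature functions and sample; all names are illustrative, not from the paper) fits p_λ*(x) by gradient ascent on the dual and confirms that the fitted model reproduces the empirical feature expectations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete space X = {0, ..., 4} with two hypothetical feature functions.
Xs = np.arange(5)
F = np.stack([Xs / 4.0, (Xs / 4.0) ** 2])        # F[i, x] = f_i(x)

# Sample from some "true" p0 and form empirical feature expectations.
p0 = rng.dirichlet(np.ones(5))
sample = rng.choice(Xs, size=2000, p=p0)
f_bar = F[:, sample].mean(axis=1)

def model(lam):
    """Exponential family member p_lam(x) proportional to exp(<lam, f(x)>)."""
    w = np.exp(lam @ F)
    return w / w.sum()

# Dual ascent: the gradient of <lam, f_bar> - log Z_lam is f_bar - E_{p_lam}[f],
# so a maximizer matches the empirical feature expectations exactly.
lam = np.zeros(2)
for _ in range(20000):
    lam += 1.0 * (f_bar - F @ model(lam))

assert np.allclose(F @ model(lam), f_bar, atol=1e-6)
```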

Figure 1.
In the space of all probability distributions, P(x), over X, p_λ̂(x) is the information projection of the true (but unknown) distribution, p_0(x), to the exponential family, E, and p_λ*(x) is the information projection of the empirical distribution, p̂(x), to E.

Consistency and Generalization Bounds of Estimation Error
There are two standard ways to measure the quality of the estimate, p λ * (x).
One approach, based on the Kullback-Leibler divergence, was first considered by Barron and Sheu [10]. Basically, it uses the following well-known Pythagorean property (see Lemma 3 in [10]):

D(p_0(x) ‖ p_λ(x)) = D(p_0(x) ‖ p_λ̂(x)) + D(p_λ̂(x) ‖ p_λ(x)) for all p_λ(x) ∈ E(x),

and, in particular, the decomposition:

D(p_0(x) ‖ p_λ*(x)) = D(p_0(x) ‖ p_λ̂(x)) + D(p_λ̂(x) ‖ p_λ*(x)).

Here, the first term, D(p_0(x) ‖ p_λ̂(x)), is the approximation error introduced by the bias of the set of feature functions, F, which measures how closely the feature functions are able to approximate the true probability distribution. The second term, D(p_λ̂(x) ‖ p_λ*(x)), is the estimation error for densities in the exponential family, introduced by the variance of using a finite sample size. These two terms resemble the bias-variance tradeoff in least squares cost estimation [18]. The approximation error always exists unless the set of feature functions, F, is rich enough [19]. In this paper, we assume that the set of feature functions is given and study how close p_λ̂(x) is to p_λ*(x).
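The Pythagorean identity can be verified numerically on a small discrete space. In this sketch (hypothetical features and distributions, chosen only for illustration), p_λ̂ is computed by matching the true feature expectations, and the identity is then checked against an arbitrary member of the family:

```python
import numpy as np

rng = np.random.default_rng(2)

Xs = np.arange(6)
F = np.stack([Xs / 5.0, np.cos(Xs)])            # two hypothetical features

p0 = rng.dirichlet(np.ones(6))                  # "true" distribution
mu = F @ p0                                     # true feature expectations

def model(lam):
    w = np.exp(lam @ F)
    return w / w.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# Information projection p_hat of p0 onto the family: match E_{p0}[f].
lam_hat = np.zeros(2)
for _ in range(20000):
    lam_hat += 0.5 * (mu - F @ model(lam_hat))
p_hat = model(lam_hat)

# Pythagorean property: for any member p_lam of the family,
#   D(p0 || p_lam) = D(p0 || p_hat) + D(p_hat || p_lam).
lam = rng.normal(size=2)
p_lam = model(lam)
lhs = kl(p0, p_lam)
rhs = kl(p0, p_hat) + kl(p_hat, p_lam)
assert abs(lhs - rhs) < 1e-6
```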
Another way to evaluate the quality of p λ * (x) is to measure the difference between the best expected likelihood and the empirical likelihood of the estimate, which is a more desirable approach, because we can directly calculate the empirical likelihood of the estimate from the training data.
Both approaches, in fact, fall under the umbrella of Vapnik's empirical risk minimization principle [8] for density estimation.Here, we use the first approach to show that the maximum entropy solution converges to the best possible solution, p λ(x), and the second approach to show that the value of empirical likelihood converges to the maximum expected likelihood.

Maximum Entropy Principle
Exploiting tools from empirical process theory, including symmetrization, concentration and contraction inequalities, Koltchinskii and Panchenko [15] were able to give bounds on the generalization error of boosting, support vector machines (SVMs) and neural networks. We show through the following theorem that we can use similar tools to establish generalization bounds for the estimation error of maximum entropy density estimation. All proofs can be found in the Appendix.
Theorem 1. Assume sup_{λ∈Ω} ‖λ‖_1 < ∞ and sup_{f∈F, x∈X} ‖f(x)‖_∞ < ∞. Then there exist 0 < ζ < α < ∞, such that, for any η ∈ (0, 1), with a probability of at least 1 − η, where M is the number of instances, N(F(x), ε, d_x) is the random covering number [20,21] for linear combinations of functions in F(x) at scale ε with empirical Euclidean distance d_x on sample X̃, and C_1 and C_2 are positive constants that do not depend on the instance.
If the linear combination of feature functions belongs to a VC-subgraph class with Vapnik-Chervonenkis (VC) dimension V, it is well known that the Dudley integral is bounded by √V [12,20,22]. Rakhlin et al. [16] first applied a similar technique to derive generalization bounds for the density estimation of mixture models. Here, we apply the technique to derive bounds for maximum entropy density estimation in general Markov random fields. Even though the analysis we use is standard, our results show that the generalization bound is not related to the log partition function, but instead is upper bounded by the covering number of linear combinations of feature functions. This is an important observation, because the log partition function, a fundamental quantity associated with any graph-structured distribution, is NP-hard to compute in general [23].
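To make the complexity quantities concrete, the following sketch (with a hypothetical bounded feature matrix, not taken from the paper) estimates the empirical Rademacher average of l_1-constrained linear combinations by Monte Carlo and compares it against the finite-class (Massart) bound; for an l_1 ball, the inner supremum reduces to an l_∞ norm of the Rademacher average:

```python
import numpy as np

rng = np.random.default_rng(3)

M, N = 200, 5
Fx = rng.uniform(-1.0, 1.0, size=(N, M))    # f_j(x_i) for a hypothetical sample

# Empirical Rademacher complexity of {x -> <lam, f(x)> : ||lam||_1 <= 1}.
# sup_{||lam||_1 <= 1} <lam, v> = ||v||_inf, so the sup is computable in closed form.
trials = 2000
sigma = rng.choice([-1.0, 1.0], size=(trials, M))
rad = float(np.abs(sigma @ Fx.T / M).max(axis=1).mean())

# Massart's finite-class lemma applied to the 2N extreme points (+/- each
# feature) bounds the complexity of the whole l1 ball.
massart = float(np.sqrt((Fx ** 2).sum(axis=1)).max()
                * np.sqrt(2.0 * np.log(2 * N)) / M)

assert rad <= massart
```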
Although the assumption that sup_{f∈F, x∈X} ‖f(x)‖_∞ < ∞ seems rather restrictive, most of the graphical models studied in machine learning [23] are, in fact, discrete Markov random fields, which always satisfy this condition.
To eliminate the assumption of boundedness on the parameters and feature functions, we use a result adapted from [24].
Theorem 2. Assume that there exists a positive number, K(F), such that for all τ > 0, where λ• are parameters such that E_{p_λ•(x)} f(x) = f̄(x), the empirical feature expectation. Then, for all λ ∈ Ω, we have, with a probability of at least 1 − η, By the above result, Theorem 1 can be proven by replacing C_2 with K(F). Since the value, K(F), is hard to determine in practice, we state our results in terms of the bound on feature functions instead. Nevertheless, the reader should keep in mind that this bound can be replaced by K(F) in all our results.
Using the result of [21], the following theorem gives a useful bound on the covering number.
Theorem 3. Assume sup_{λ∈Ω} ‖λ‖_2 < a and sup_{f∈F, x∈X} ‖f(x)‖_2 < b. Then: We then have the following corollaries. The first is a direct consequence of Theorem 1. The second provides a generalization bound on the expected value of the estimation error, which can be shown using the proof in [16,20].
Corollary 1. Universal consistency: If ∫ √(log N(F(x), ε, d_x)) dε is bounded, then p_λ*(x) will converge to p_λ̂(x) in terms of the Kullback-Leibler divergence with rate O(1/√M), independent of the form of the true distribution, p_0. Corollary 2. Under the conditions of Theorem 1, the generalization bound of the expected estimation error is: Next, we turn to the second approach to evaluating the quality of p_λ*(x); namely, by measuring the difference between the expected log-likelihood and the empirical log-likelihood. This is the approach used by Dudik et al. [11].
Then, from Theorem 1 and the McDiarmid concentration inequality, we obtain the following result for the sample log-likelihood, which is similar to Dudik et al. [11].
Theorem 4. There exist 0 < ζ < α < ∞, such that, with a probability of at least 1 − η: By the Glivenko-Cantelli theorem [9], we know that the empirical distribution converges to the true distribution. Therefore, under certain conditions, the entropy of the empirical distribution will also converge to the entropy of the true distribution. Whenever such convergence holds, we can combine this with Theorem 4 to show that D(p_0(x) ‖ p_λ̂(x)) − D(p̂(x) ‖ p_λ*(x)) → 0 as M → ∞, establishing a stronger form of consistency than Corollary 1.

Regularized Maximum Entropy Principle
In many statistical modeling settings, the constraints used in the ME principle are subject to errors arising from the empirical nature of the data. This is particularly true in domains with sparse, high dimensional data. One way to gain robustness, though, is to relax the constraints and add a penalty to the entropy, leading to the regularized ME (RME) principle [25]:

max_{p∈P(x), a∈R^N} −∫_X p(x) log p(x) µ(dx) − U(a) subject to ∫_X f_i(x) p(x) µ(dx) − (1/M) Σ_{j=1}^M f_i(x_j) = a_i, i = 1, ..., N. (16)

Here, a = (a_1, ..., a_N)^T, a_i is the error for each constraint, and U : R^N → R is a cost function that has a value of zero at zero. The function penalizes any constraint violations, and can be used to penalize deviations in the more reliably observed constraints to a greater degree than deviations in less reliably observed constraints.
The dual problem of RME becomes:

max_{λ} (1/M) Σ_{j=1}^M ⟨λ, f(x_j)⟩ − log Z_λ − U*(λ), (17)

which is equivalent to maximum a posteriori (MAP) estimation with prior U*(λ). Note that the prior, U*(λ), is derived from the penalty function, U, over errors a, by setting U* to the convex (Fenchel) conjugate [26,27] of U; i.e., U*(λ) = sup_a [⟨λ, a⟩ − U(a)]. This function is always convex, regardless of the convexity of U. Conversely, given the convex conjugate cost function, U*, the corresponding penalty function, U, can be derived by using the property of Fenchel biconjugation [26]; that is, the conjugate of the conjugate of a convex function is the original convex function, U = U**. Conventionally, U(a) is chosen to be nonnegative and have a minimum of zero at zero. To illustrate, consider a quadratic penalty, U(a) = Σ_{i=1}^N a_i²/(2σ_i²), whose conjugate U*(λ) = Σ_{i=1}^N σ_i² λ_i²/2 specifies a Gaussian prior on λ. A different example can be obtained by considering the Laplacian prior on λ, U*(λ) = ‖λ‖_1 = Σ_{i=1}^N |λ_i|, which leads to the penalty function U(a) = 0 if ‖a‖_∞ ≤ 1 and U(a) = ∞ otherwise. However, there is an important aspect of the validity of choosing a legal (log) prior, U*, which has been ignored in previous studies on the RME principle [11,25,28,29]. To see this, consider the following. By plugging the true distribution, p_0(x), instead of the empirical distribution into Equation (16), one obtains a population version of the problem, whose dual replaces the empirical expectations in Equation (17) with expectations under p_0(x). Since p_λ̂(x) is the information projection of p_0(x) to the exponential family, E(x), we must have U(â) = U*(λ̂) = 0 and â = 0, where â denotes the a corresponding to λ̂. Moreover, as a penalty term, the prior, U*(λ), should be nonnegative with a minimum of zero. We call a prior satisfying these conditions a legal prior. Both the standard Gaussian prior and the standard Laplacian prior, U*(λ) = ‖λ‖_1, fail to satisfy U*(λ̂) = 0. The correct choices for these priors should be U*(λ − λ̂) = Σ_{i=1}^N σ_i² (λ_i − λ̂_i)²/2 for a Gaussian prior and U*(λ − λ̂) = ‖λ − λ̂‖_1 for a Laplacian prior, respectively. Consequently, U(a) should be chosen as U(a) + ⟨a, λ̂⟩. Note that U(a) + ⟨a, λ̂⟩ does have a value of zero at zero, but it is no longer nonnegative. If we let p_λ′ denote the solution to Equation (17), we then obtain the following generalization bound on estimation error without any restrictions on U(a) or U*(λ).
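The conjugacy relations above can be confirmed numerically. This one-dimensional sketch (illustrative constants only) approximates U*(λ) = sup_a [⟨λ, a⟩ − U(a)] on a grid and checks both examples: the quadratic penalty yields a Gaussian log-prior, and the l_1 log-prior corresponds to a box-indicator penalty:

```python
import numpy as np

# Grid approximation of the convex conjugate U*(lam) = sup_a [lam*a - U(a)]
# in one dimension (N = 1), for illustration only.
a = np.linspace(-50.0, 50.0, 400001)

def conjugate(U_vals, lam):
    return float((lam * a - U_vals).max())

sigma2 = 2.0
U_quad = a ** 2 / (2.0 * sigma2)     # quadratic penalty on constraint errors

for lam in [-1.3, 0.0, 0.7, 2.1]:
    # Quadratic penalty <-> Gaussian log-prior: U*(lam) = sigma^2 lam^2 / 2.
    assert abs(conjugate(U_quad, lam) - sigma2 * lam ** 2 / 2.0) < 1e-3

# Laplacian log-prior U*(lam) = |lam| arises from the box penalty
# U(a) = 0 if |a| <= 1, +infinity otherwise (an l_inf-ball indicator).
U_box = np.where(np.abs(a) <= 1.0, 0.0, np.inf)
for lam in [-1.3, 0.0, 0.7, 2.1]:
    assert abs(conjugate(U_box, lam) - abs(lam)) < 1e-3
```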
We then have the following result that guarantees the universal consistency of RME. If we choose U*(λ) to be a legal prior, then we also have a further result.
Moreover, as M → ∞, both U*(λ′) → 0 and p_λ′(x) will converge to p_λ̂(x) in terms of their Kullback-Leibler divergence with rate O(1/√M), for any true distribution, p_0(x), and prior distribution.
The legal prior guarantees RME's consistency without introducing any regularization parameter, as commonly done in machine learning [9,19]. Since U*(λ′) ≥ 0 for any legal prior, this result shows that RME always obtains a lower generalization bound than ME, even without assuming the truth of the prior.
Using the result of Theorem 5 and the McDiarmid concentration inequality, we are able to derive a generalization bound on the difference between the best expected log-likelihood and the log of the MAP probability. This holds for any regularization cost function, as long as U*(λ̂) ≤ U*(λ′), without requiring that U*(λ) be convex. We note that Dudik et al. [11] gave a similar result, but only for the special case of using the l_1 norm as a penalty in the dual formulation; see Equation (17).
For a broad family of regularization functions, i.e., log priors in Equation (17), whose Hessian matrices in λ (the prior information matrices) are positive definite, we can improve the convergence rate from O(1/√M) to O(1/M). The techniques needed to prove the O(1/M) convergence rate were first proposed by Zhang [19,30,31] to show the consistency and generalization bounds of various classification methods based on convex risk optimization with the l_2-penalty.
Define the convex function: Consider training samples X̃_{M+1} = (x_1, ..., x_{M+1}). Let λ̃ be the solution of Equation (24) with training data X̃_{M+1}. Let λ̃_k be the solution of Equation (24) with training sample x_k removed from the set, X̃_{M+1}. We have the following lemma that extends the quadratic case considered in [19,31]. We prove the result in its full generality, though our proof is essentially a variant of the stability results of Zhang [19,30,31].
Lemma 1. Assume the Hessian matrix of U*(λ) is positive definite with smallest eigenvalue κ. Then, Finally, we can obtain the following leave-one-out error bound, which has a convergence rate of O(1/M).
Theorem 6. When the legal prior, U*(λ), is chosen such that its Hessian matrix in λ is positive definite with smallest eigenvalue κ, the expected generalization error can then be bounded as:
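The stability phenomenon behind Lemma 1 and the leave-one-out bound can be observed empirically: with a strongly convex regularizer, removing one training point moves the fitted parameters by O(1/M). The sketch below (toy features and illustrative constants, not from the paper) fits an l_2-regularized maximum entropy model with and without single points and compares the average leave-one-out parameter shift at two sample sizes:

```python
import numpy as np

rng = np.random.default_rng(5)

Xs = np.arange(4)
F = np.stack([Xs / 3.0, (Xs == 0).astype(float)])   # hypothetical features
kappa = 0.5            # smallest eigenvalue of the Hessian of U*(lam)

def model(lam):
    w = np.exp(lam @ F)
    return w / w.sum()

def fit(sample):
    """Minimize log Z(lam) - <lam, f_bar> + (kappa/2)||lam||^2 by gradient descent."""
    f_bar = F[:, sample].mean(axis=1)
    lam = np.zeros(2)
    for _ in range(4000):
        lam -= 0.5 * (F @ model(lam) - f_bar + kappa * lam)
    return lam

def loo_shift(M):
    """Average parameter shift when one of the M training points is removed."""
    sample = rng.choice(Xs, size=M, p=[0.1, 0.2, 0.3, 0.4])
    lam_full = fit(sample)
    shifts = [np.linalg.norm(lam_full - fit(np.delete(sample, k)))
              for k in range(16)]
    return float(np.mean(shifts))

# The strongly convex penalty makes the leave-one-out shift shrink at
# rate O(1/M): a 32x larger sample should shrink it well past 3x.
assert loo_shift(3200) < loo_shift(100) / 3.0
```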

Maximum Entropy Density Estimation: Incomplete Data
Here, we consider a variable to be "latent" or "hidden" if it is never observed, but causally affects the observations [6]. In practice, many of the natural patterns we wish to classify are the result of causal processes that have a hidden hierarchical structure, yielding data that does not report the values of latent variables [6]. Obtaining fully labeled data is tedious or impossible in many realistic cases. This motivates us to propose an unsupervised statistical learning technique, the latent maximum entropy (LME) principle, which is still formulated in terms of maximizing entropy, except that we must now change the problem formulation to respect hidden variables.
Following the terminology of the expectation-maximization (EM) algorithm [6], let Y ∈ Y be the observed incomplete data, Z ∈ Z be the missing data, and X = (Y, Z) ∈ X be the random variable denoting the complete data. For example, Y might be observed natural language in the form of text, and Z might be its missing syntactic and semantic information. If we let p(x) and p(y) denote the densities of X and Y, respectively, and let p(z|y) denote the conditional density of Z given Y, then p(y) = ∫_Z p(x) µ(dz) and p(x) = p(y) p(z|y). The LME principle can be stated as follows.
Given features f_1(x), ..., f_N(x), specifying the properties that we would like to match in the data, select a joint probability model, p(x), from the space of all probability distributions, P(x), over X, to maximize the entropy:

max_{p∈P(x)} −∫_X p(x) log p(x) µ(dx) (26)

subject to ∫_X f_i(x) p(x) µ(dx) = Σ_{y∈Ỹ} p̂(y) ∫_Z p(z|y) f_i(y, z) µ(dz), i = 1, ..., N, (27)

where x = (y, z), p̂(y) is the empirical distribution of the observed data, Ỹ = (y_1, ..., y_M) denotes the set of observed Y values and p(z|y) is the conditional distribution of the latent variables given the observed data. Intuitively, the constraints specify that we require the expectations of f_i(X) in the joint model to match their empirical expectations on the incomplete data, Y, taking into account the structure of the implied dependence of the unobserved component, Z, on Y. Note that the conditional distribution, p(z|y), implicitly encodes the latent structure and is a nonlinear mapping of p(x). That is, p(z|y) = p(y, z)/∫_Z p(y, z′) µ(dz′), where x = (y, z) and x′ = (y, z′) by definition. Clearly, p(z|y) is a nonlinear function of p(x), because of the division.
Unfortunately, there is no simple optimal solution for p(x) in Equations (26) and (27). However, a good approximation can be obtained by restricting the model to the log-linear family, p_λ(x) = Z_λ^{-1} exp(⟨λ, f(x)⟩), where Z_λ = ∫_X exp(⟨λ, f(x)⟩) µ(dx) is the normalization constant. This restriction provides a free parameter, λ_i, for each feature function, f_i(x). By adopting such a "log-linear" restriction, it turns out that we can formulate a practical algorithm for approximately satisfying the LME principle.
In [5], we exploited the following connection between LME and maximum likelihood estimation (MLE) to derive a practical training algorithm.

Theorem 7. [5] Under the log-linear assumption, maximizing the likelihood of log-linear models on incomplete data is equivalent to satisfying the feasibility constraints of the LME principle. That is, the only distinction between MLE and LME in log-linear models is that, among local maxima (feasible solutions), LME selects the model with the maximum entropy, whereas MLE selects the model with the maximum likelihood.
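Theorem 7 suggests an EM-style procedure: alternate an E-step that computes p_λ(z|y) with an M-step that solves a complete-data maximum entropy problem against the resulting expected features. The following sketch (a toy joint space with hypothetical features; the Newton M-step is one convenient choice, not the paper's algorithm) runs this loop, checks that the observed-data likelihood never decreases, and verifies that the limit satisfies the LME feasibility constraints:

```python
import numpy as np

Yv = np.arange(3)
pairs = np.array([(y, z) for y in Yv for z in (0, 1)])   # joint states x = (y, z)
F = np.stack([pairs[:, 0] / 2.0, (pairs[:, 0] == pairs[:, 1]).astype(float)])
y_of_x = pairs[:, 0]
p_hat = np.array([0.5, 0.2, 0.3])     # empirical distribution of observed y

def model(lam):
    w = np.exp(lam @ F)
    return w / w.sum()

def marginal(p):
    return np.array([p[y_of_x == y].sum() for y in Yv])

def loglik(lam):
    return float(p_hat @ np.log(marginal(model(lam))))

lam, prev = np.zeros(2), -np.inf
for _ in range(300):                   # EM iterations
    p = model(lam)
    q = p / marginal(p)[y_of_x]        # E-step: p_lam(z|y), indexed by joint state
    target = F @ (q * p_hat[y_of_x])   # sum_y p_hat(y) sum_z p(z|y) f(y, z)
    for _ in range(50):                # M-step: Newton on the complete-data dual
        p = model(lam)
        cov = (F * p) @ F.T - np.outer(F @ p, F @ p)
        lam += np.linalg.solve(cov + 1e-10 * np.eye(2), target - F @ p)
    cur = loglik(lam)
    assert cur >= prev - 1e-7          # EM never decreases the likelihood
    prev = cur

# At convergence, the LME feasibility constraints of Equation (27) hold.
p = model(lam)
q = p / marginal(p)[y_of_x]
assert np.allclose(F @ p, F @ (q * p_hat[y_of_x]), atol=1e-6)
```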
Define L(λ) = −Σ_{y∈Ỹ} p̂(y) log p_λ(y) and H(λ′, λ) = −Σ_{y∈Ỹ} p̂(y) ∫_Z p_λ′(z|y) log p_λ(z|y) µ(dz). The latter is the conditional entropy of the hidden variables over the observed sample data, Ỹ, which measures the uncertainty of the hidden variables. Then, in the case where λ is a feasible log-linear solution according to Equation (27), we have the following relationship between the likelihood over the observed data and the entropy of the joint model.

Corollary 5. [5]
If λ is in the set of feasible solutions, then: We will use the following notation and terminology throughout the analysis below. Denote the manifold of the nonlinear constraint set in Equation (27) as C. We then define p_λ̂(y) = arg max_{p_λ(x)∈E} Σ_{y∈Y} p_0(y) log p_λ(y) as the nearest point, in terms of D(p_0(y) ‖ p_λ(y)), from p_0(y) to the marginalized exponential family over z, using p_λ(y) = ∫_Z p_λ(y, z) µ(dz), where p_λ(x) ∈ E(x); see Figure 2.
Figure 2. The operator, T, denotes the marginalization of p(x) over z and maps the entire space of all probability distributions, P(x) over X, into the space of all probability distributions, P(y) over Y. Here, p_λ̂(y) is the information projection of p_0(y) to the marginalized exponential family, E; p_λ*(y) is the information projection of p̂(y) to the marginalized exponential family, E; and p_λ′(y) is the distribution that, in the joint model space, has the highest entropy among the intersection points of the exponential family, E, and the nonlinear constraint set, C. In [29,32], we formulated the regularized latent maximum entropy principle (RLME) as the following:

subject to: Again, a = (a_1, ..., a_N), where a_i is the error for each constraint, and U : R^N → R is a convex function with its minimum at zero. The standard maximum a posteriori (MAP) estimate minimizes the negative penalized log-likelihood, R(λ) = −Σ_y p̂(y) log p_λ(y) + U*(λ).
Our key result in [29,32] is that locally minimizing R(λ) is equivalent to satisfying the feasibility constraints in Equation (30) of the RLME principle.
Theorem 8. [29,32] Under the log-linear assumption, locally maximizing the posterior probability of log-linear models on incomplete data is equivalent to satisfying the feasibility constraints of the RLME principle. That is, the only distinction between MAP and RLME in log-linear models is that, among local maxima (feasible solutions), RLME selects the model with the maximum regularized entropy, whereas MAP selects the model with the maximum posterior probability.

Corollary 6. [29,32] If λ is in the set of feasible solutions of Equation (30), then R(λ) = H(p_λ) − U(a) − H(λ, λ).

Consistency and Generalization Bounds for Estimation Error
To measure the quality of the maximum likelihood and maximum entropy estimates, we do not consider the divergence of the models in the original joint space, P(x). Instead, we consider the marginalized models in the observed data space, P(y). However, to measure the divergence between models in the observed data space, we have to take the difference of D(p_0(y) ‖ p_λ*(y)) and D(p_0(y) ‖ p_λ̂(y)), even though, technically, the Pythagorean property no longer holds in this case. Nevertheless, this still gives a useful measure of the approximation quality.

Maximum Likelihood Estimate
We first establish consistency and provide generalization bounds for the maximum likelihood density estimate, p_λ*(y). Note that if we attempt to apply the technique from the complete data case directly, we obtain a bound governed by the covering number of the log of the marginal feature functions, F(y) = ∫_Z exp(⟨λ, f(y, z)⟩) µ(dz). Bounding the covering number of log F(y) is more difficult than bounding it for F(y) directly. To cope with this issue, we use the refined version of the Rademacher comparison inequality proposed in [24] to eliminate the log function. This is a slightly different approach than that taken by Rakhlin et al. [16], who instead use the contraction technique of [15,20,33] to derive bounds for mixture model density estimation. Here, we pursue a streamlined analysis that avoids working with the likelihood ratio (see also [34]), hence avoiding the second application of contraction, which results in tighter constants.
where N(F(y), ε, d_y) is the random covering number of the marginal feature functions, F(y) = ∫_Z exp(⟨λ, f(y, z)⟩) µ(dz), at scale ε with empirical Euclidean distance d_y on the sample data, Ỹ.
Similarly, we can eliminate the assumption of boundedness on the parameters and feature functions, F(y), by using a result adapted from [24].
Theorem 10. Assume that there exists a positive number, K(F), such that for all τ > 0: where λ• are the parameters achieving the maximum subject to Then, for all λ ∈ Ω, we have, with a probability of at least 1 − η, Using the above result, Theorem 9 can be proven by replacing C_4 with K(F). Again, since the value, K(F), is hard to determine in practice, we will state our results below in terms of a bound on the feature functions, but as before, the reader should bear in mind that this bound can be replaced by K(F).
From the results above, we can establish the following consistency property. Similar to the complete data case, using the result of Theorem 9 and the McDiarmid concentration inequality, we are also able to derive the generalization bound for the difference of the best expected log-likelihood and the maximum empirical log-likelihood.

Latent Maximum Entropy Estimate
Let p_λ′(y) denote the maximum entropy estimate of Equation (26) over the exponential family, E. We use techniques similar to the complete data regularized maximum entropy case (Section 2.1.2) to prove consistency and generalization bounds for the latent maximum entropy density estimate, p_λ′(y).
Theorem 11. (LME principle) Assume that for all λ ∈ Ω and all y ∈ Y, we have 0 < a ≤ F(y) ≤ b. Then, there exist 0 < ζ < α < ∞, such that, with a probability of at least 1 − η, Using this result, we can then easily establish the following consistency property.
Corollary 8. Universal consistency: If ∫ √(log N(F(y), ε, d_y)) dε is bounded and also E_{p̂(y)} log p_λ̂(y) ≤ E_{p̂(y)} log p_λ′(y), then p_λ′(y) will converge to p_λ̂(y) (in terms of the difference of the Kullback-Leibler divergences to the true distribution, p_0(y)) with rate O(1/√M), for any true distribution, p_0(y). Corollary 8 gives a sufficient condition, i.e., E_{p̂(y)} log p_λ̂(y) ≤ E_{p̂(y)} log p_λ′(y), that leads to the universal consistency of latent maximum entropy estimation. This perhaps partially explains our observations of the experimental results on synthetic data conducted in [5].
Note that in the proofs of Theorem 11 and Corollary 8, it is not necessary to restrict p_λ′ to be the model that has the globally maximum joint entropy over all feasible log-linear solutions. It turns out that the conclusion still holds for all feasible log-linear models, p_λ(y), that have greater empirical log-likelihood, E_{p̂(y)} log p_λ(y), than the empirical log-likelihood, E_{p̂(y)} log p_λ̂(y), of the optimal expected log-likelihood estimate, p_λ̂(y). That is, as the sample size grows, any of these feasible log-linear models will converge to p_λ̂(y) (in terms of the difference of the Kullback-Leibler divergences to the true distribution, p_0(y)) with rate O(1/√M).

Maximum a Posteriori Estimate
In a similar manner, it is straightforward to obtain the following generalization bound for the MAP estimate, p_λ′(y).
By the above theorem, one can easily obtain the following consistency result.
Corollary 9. Universal consistency: If ∫ √(log N(F(y), ε, d_y)) dε is bounded and U*(λ̂) ≤ U*(λ′), then p_λ′(y) will converge to p_λ̂(y), in terms of the difference of the Kullback-Leibler divergences to the true distribution, p_0(y), with rate O(1/√M), without assuming the form of the true distribution, p_0(y), or the true prior distribution.

Regularized Latent Maximum Entropy Estimate
We can also, in a similar manner, establish the following generalization bound for the RLME estimate, p_λ′(y).
Theorem 13. (RLME principle) Assume that for all λ ∈ Ω and all y ∈ Y, 0 < a ≤ F(y) ≤ b. Then, with a probability of at least 1 − η, By the above theorem, we can easily obtain the following consistency result.
Corollary 10. Universal consistency: If ∫ √(log N(F(y), ε, d_y)) dε is bounded and E_{p̂(y)} log p_λ̂(y) + U*(λ̂) ≤ E_{p̂(y)} log p_λ′(y) + U*(λ′), then p_λ′(y) will converge to p_λ̂(y), in terms of the difference of the Kullback-Leibler divergences to the true distribution, p_0(y), with rate O(1/√M), without assuming the form of the true distribution, p_0(y), or the true prior distribution.

Conclusions
We have investigated the statistical properties of using the maximum entropy principle for density estimation, in both the complete and incomplete data cases. For complete data, maximum entropy is equivalent to maximum likelihood estimation in a Markov random field. Here, we derived bounds on the generalization error based on the complexity of linear combinations of feature functions, and used this to establish a form of universal consistency. We then provided a similar analysis for regularized maximum entropy estimation, which, interestingly, yields a better generalization bound (and maintains consistency) for any legal prior. Moreover, if the information matrix of the prior is positive definite, we can further show that the convergence rate can be improved to O(1/M). For incomplete data, maximum entropy is no longer equivalent to maximum likelihood estimation, and the analysis becomes more difficult. Nevertheless, we established bounds on the generalization error of maximum likelihood in terms of the complexity of the marginalized feature functions, again achieving a form of universal consistency. With additional assumptions, we were able to extend this analysis to latent maximum entropy estimation and to prove its universal consistency, as well. Analogous conclusions can be drawn for the regularized settings. Finally, we note that an alternative analysis can be based on replacing the Kullback-Leibler divergence with the more general Bregman distance [35,36]. The analysis here can be easily extended to this more general setting.
In our future work, we are planning to study the trade-off between approximation error and estimation error to select the best set of feature functions.
Proof of Theorem 1.
Proof. The techniques we use are quite standard [15,16,20,37] and have appeared in many papers. To be concise, following lecture notes 14-15 in [37], the first key technique we use is the method of bounded differences [38]. Define: Then: By the McDiarmid concentration inequality [38], we have: Then: Therefore, with a probability of at least 1 − η, Next, we use the symmetrization technique of [33,39], which states that if: where ω = (ω_1, ..., ω_M) is a Rademacher sequence, then we have, with a probability of at least 1 − η, A classical result of Dudley establishes that Rademacher averages over linear combinations in F(x) are bounded by Dudley's entropy integral [16,20,22,37], where 0 < ζ < α < ∞, provided, as observed in [20,40], that sup_{λ∈Ω} ‖λ‖_∞ and sup_{f∈F, x∈X} ‖f(x)‖_∞ are bounded. One can then show that, with a probability of at least 1 − η, Therefore: by Equation (8), where the second inequality comes from the fact that (1/M) Σ_{j=1}^M log [p_λ̂(x_j)/p_λ*(x_j)] ≤ 0, since p_λ*(x) has maximum likelihood in the exponential family, E(x).
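The bounded-differences step can be illustrated numerically. For a sample mean of [0,1]-valued variables, changing any one coordinate moves the function by at most c_i = 1/M, and McDiarmid's inequality gives the deviation bound checked (by Monte Carlo, with illustrative constants) below:

```python
import numpy as np

rng = np.random.default_rng(7)

M, trials, t = 100, 20000, 0.1
# Sample mean of [0,1]-valued variables: bounded differences c_i = 1/M.
means = rng.uniform(0.0, 1.0, size=(trials, M)).mean(axis=1)
emp = float(np.mean(np.abs(means - 0.5) >= t))

# McDiarmid: P(|f - E f| >= t) <= 2 exp(-2 t^2 / sum_i c_i^2) = 2 exp(-2 M t^2).
bound = float(2.0 * np.exp(-2.0 * M * t ** 2))
assert emp <= bound
```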
Therefore, with a probability of at least $1-\eta$,

Proof. The proof is similar to the ME case. Consider the chain of inequalities:

by (7), where the second inequality follows from the fact that $[\mathbb{E}_{\tilde p} \log p_{\lambda} - U^*(\lambda)] - [\mathbb{E}_{\tilde p} \log p_{\hat\lambda} - U^*(\hat\lambda)] \le 0$, since $p_{\hat\lambda}$ maximizes the a posteriori objective over the exponential family $\mathcal{E}$. Therefore, with a probability of at least $1-\eta$,

Proof of Corollary 4.
Proof. Since $U^*(\lambda)$ is a legal prior, $U^*(\hat\lambda) = 0$, and we thus have the inequality of the last theorem. As $M \to \infty$, the right-hand side of this inequality converges to $-U^*(\lambda^*)$, which is nonpositive; however, the left-hand side is nonnegative. Therefore, we must have $U^*(\lambda^*) = 0$.
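The symmetrization step in the proofs above bounds deviations by a Rademacher average over the feature class. As an informal illustration (not the paper's construction), the empirical Rademacher average of linear combinations $\langle\lambda, f(x)\rangle$ with $\|\lambda\|_1 \le B$ can be estimated by Monte Carlo; the bound $B$, the random feature matrices and the sample sizes below are arbitrary illustrative choices:

```python
import numpy as np

# Illustrative sketch (not the paper's construction): a Monte Carlo
# estimate of the empirical Rademacher average
#     R_M = E_w [ sup_{||lambda||_1 <= B} (1/M) sum_j w_j <lambda, f(x_j)> ]
# for linear combinations of bounded feature functions.  Over an l1 ball,
# the supremum of a linear form is attained at a signed vertex, so
#     sup_{||lambda||_1 <= B} <lambda, v> = B * ||v||_inf.
rng = np.random.default_rng(0)

def empirical_rademacher(features, B=1.0, n_draws=2000, rng=rng):
    """features: (M, N) array; row j holds f_1(x_j), ..., f_N(x_j)."""
    M, _ = features.shape
    total = 0.0
    for _ in range(n_draws):
        w = rng.choice([-1.0, 1.0], size=M)   # Rademacher signs
        v = w @ features / M                  # (1/M) sum_j w_j f(x_j)
        total += B * np.max(np.abs(v))        # sup over the l1 ball
    return total / n_draws

# The average shrinks roughly like O(1/sqrt(M)) as the sample size grows.
small_M = empirical_rademacher(rng.uniform(-1, 1, size=(50, 5)))
large_M = empirical_rademacher(rng.uniform(-1, 1, size=(5000, 5)))
print(small_M, large_M)
```

The shrinking average is what drives the $O(1/\sqrt{M})$ rates in the generalization bounds.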
Proof of Lemma 1.
Proof. By the definition of $\hat\lambda$, we have:

The Bregman divergence of $L_\lambda$ for the Markov random field model can be written as:

which is always nonnegative. Then, we have:

By Taylor's expansion, there exists $\theta \in \Omega \subseteq \mathbb{R}^L$ such that:

where the last inequality is due to the assumption that the Hessian matrix of $U^*(\lambda)$ is positive definite with smallest eigenvalue $\kappa > 0$. Furthermore, by the definition of $\hat\lambda_k$, we have:

Combining the three inequalities in Equations (42), (43) and (44), we obtain:

where the last equality follows from Equation (41). Canceling $\|\hat\lambda - \hat\lambda_k\|$ from the above inequality yields the desired bound.
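For reference, the two standard facts invoked in this proof can be written out as follows (a sketch in generic notation; the paper's displays may use different symbols):

```latex
% Bregman divergence of a differentiable convex function L:
B_L(\lambda, \lambda')
  \;=\; L(\lambda) - L(\lambda')
      - \langle \nabla L(\lambda'),\, \lambda - \lambda' \rangle
  \;\ge\; 0 .

% If the Hessian of U^*(\lambda) is positive definite with smallest
% eigenvalue \kappa > 0, a second-order Taylor expansion at some \theta
% between \lambda and \lambda' gives
U^*(\lambda) - U^*(\lambda')
  - \langle \nabla U^*(\lambda'),\, \lambda - \lambda' \rangle
  \;=\; \tfrac{1}{2} (\lambda - \lambda')^{\top}
        \nabla^2 U^*(\theta)\, (\lambda - \lambda')
  \;\ge\; \tfrac{\kappa}{2}\, \| \lambda - \lambda' \|_2^2 .
```

The second display is exactly the strong convexity property that supplies the $\kappa$-dependent improvement in the rate.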
Proof of Theorem 6.
Proof. We use the same leave-one-out technique as in [19,30,31]. It follows from Lemma 1 that:

Therefore:

Summing over $k$ from $1$ to $M+1$, taking the expectation with respect to the training data and applying Jensen's inequality, we obtain:

Since $U^*(\lambda)$ is a legal prior, $U^*(\lambda) \ge 0$ and $U^*(\hat\lambda) = 0$; since $\hat\lambda$ is the optimal solution of Equation (24), we have:

Combining the inequalities in Equations (45) and (46) yields the result.

Proof of Theorem 9.
Proof. By working with $p_\lambda(y)$ and using the same techniques as before, i.e., the McDiarmid concentration inequality [38] and symmetrization [33,39], we have, with a probability of at least $1-\eta$,

Now, we apply the refined version of the Rademacher comparison inequality in Theorem 7 of Meir and Zhang [24], which states that for $l$-Lipschitz functions $\phi_i: \mathbb{R} \to \mathbb{R}$, $i = 1, \cdots, M$, one obtains the inequality:

By the arguments in [20,33], it is easy to show that the absolute value version is also valid:

Let $\phi(x) = \log(x)$, where $a \le x \le b$. It is easy to verify that $\phi(x)$ is $\frac{1}{a}$-Lipschitz, so:

Proof. Choose the function to be $\log p_\lambda(y)$, $\lambda \in \Omega$. Then, $\cosh(2\tau \log p_\lambda(y)) = \frac{1}{2}\left(p_\lambda(y)^{2\tau} + \frac{1}{p_\lambda(y)^{2\tau}}\right)$. By Theorem 3 in [24], if there exists a positive number $K(\mathcal{F})$ such that, for all $\tau > 0$,
$$\log \mathbb{E}_{p_0(y)} \sup_{\lambda\in\Omega} \left(p_\lambda(y)^{2\tau} + \frac{1}{p_\lambda(y)^{2\tau}}\right) \;\le\; \tau K(\mathcal{F})^2,$$
then, for all $\lambda \in \Omega$, Equation (41) holds with a probability of at least $1-\eta$.
Next, we take the derivative of $p_\lambda(y)^{2\tau} + \frac{1}{p_\lambda(y)^{2\tau}}$ with respect to $\lambda$ and set it to zero. After some routine calculation, we obtain, for $i = 1, \ldots, N$:
$$\left(p_\lambda(y)^{2\tau-1} - \frac{1}{p_\lambda(y)^{2\tau+1}}\right) p_\lambda(y) \left[\mathbb{E}_{p_\lambda(x)} f_i(x) - \mathbb{E}_{p_\lambda(z|y)} f_i(y,z)\right] = 0 .$$
Thus, $\lambda^{\bullet} = \arg\sup_{\lambda\in\Omega}\left(p_\lambda(y)^{2\tau} + \frac{1}{p_\lambda(y)^{2\tau}}\right)$ are those parameters that achieve $\mathbb{E}_{p_{\lambda^{\bullet}}(x)} f_i(x) = \mathbb{E}_{p_{\lambda^{\bullet}}(z|y)} f_i(y,z)$; moreover, they achieve the maximum of $p_\lambda(y)^{2\tau} + \frac{1}{p_\lambda(y)^{2\tau}}$. Even though these parameters can be uniquely determined for each fixed $y$, unlike in the complete data case, they may not coincide with the MLE or LME estimates, due to the existence of multiple feasible solutions.
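The Lipschitz claim used above can also be verified numerically; the interval endpoints $a = 0.5$, $b = 4$ below are arbitrary illustrative values:

```python
import numpy as np

# Numerical sanity check (illustrative endpoints): phi(x) = log(x)
# restricted to [a, b] is (1/a)-Lipschitz, since |phi'(x)| = 1/x <= 1/a
# for all x in [a, b].
a, b = 0.5, 4.0
L = 1.0 / a                      # claimed Lipschitz constant
xs = np.linspace(a, b, 200)

# Check |log x - log y| <= (1/a)|x - y| over all sampled pairs.
log_diffs = np.abs(np.log(xs)[:, None] - np.log(xs)[None, :])
bound = L * np.abs(xs[:, None] - xs[None, :])
print(bool(np.all(log_diffs <= bound + 1e-12)))   # prints True
```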
Proof of Theorem 11.
Proof. The coefficients are the same as in Theorem 8, and the proof is similar.

Corollary 7.
Universal consistency: if $\int_{\zeta}^{\alpha} \sqrt{\log \mathcal{N}(\mathcal{F}(y), \epsilon, d_y)}\, d\epsilon$ is bounded, then $p_{\lambda^*}(y)$ will converge to $p_{\hat\lambda}(y)$ (in terms of the difference of the Kullback-Leibler divergences to the true distribution, $p_0(y)$) at rate $O(\frac{1}{\sqrt{M}})$, regardless of the form of the true distribution $p_0(y)$.
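As an informal illustration of this kind of consistency statement (a toy one-parameter Bernoulli family, not the paper's model), one can watch the expected Kullback-Leibler divergence of the maximum likelihood estimate shrink as $M$ grows; the parameter $p_0 = 0.3$ and the trial counts are arbitrary choices:

```python
import numpy as np

# Toy illustration (a Bernoulli family, not the paper's model): the
# expected KL divergence KL(p0 || p_hat) of the maximum likelihood
# estimate p_hat shrinks as the sample size M grows.
rng = np.random.default_rng(1)

def kl_bernoulli(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def avg_kl(M, p0=0.3, trials=300, rng=rng):
    total = 0.0
    for _ in range(trials):
        x = rng.random(M) < p0                    # M Bernoulli(p0) draws
        # The MLE is the empirical mean; clip it away from {0, 1} so the
        # KL divergence stays finite.
        p_hat = np.clip(x.mean(), 1.0 / (2 * M), 1.0 - 1.0 / (2 * M))
        total += kl_bernoulli(p0, p_hat)
    return total / trials

kl_small_M, kl_large_M = avg_kl(100), avg_kl(10000)
print(kl_small_M, kl_large_M)   # the KL shrinks markedly as M grows
```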