Abstract
Estimators derived from a divergence criterion, such as φ-divergences, are generally more robust than maximum likelihood estimators. We are interested in particular in the so-called minimum dual φ-divergence estimator (MDφDE), an estimator built using a dual representation of φ-divergences. We present in this paper an iterative proximal point algorithm that permits the calculation of such an estimator. The algorithm contains by construction the well-known Expectation Maximization (EM) algorithm. Our work is based on the paper of Tseng on the likelihood function, and we provide some convergence properties by adapting his ideas. We improve Tseng's results by relaxing the identifiability condition on the proximal term, a condition which is not verified for most mixture models and is hard to verify for "non-mixture" ones. Convergence of the EM algorithm in a two-component Gaussian mixture is discussed in the spirit of our approach. Several experimental results on mixture models are provided to confirm the validity of the approach.
1. Introduction
The Expectation Maximization (EM) algorithm is a well-known method for calculating the maximum likelihood estimator of a model with incomplete data. For example, when working with mixture models in the context of clustering, the labels or classes of the observations are unknown during the training phase. Several variants of the EM algorithm have been proposed (see []). Another way to look at the EM algorithm is as a proximal point problem (see [,]). Indeed, one may rewrite the conditional expectation of the complete log-likelihood as the sum of the log-likelihood function and a distance-like function over the conditional densities of the labels given an observation. Generally, the proximal term has a regularization effect in the sense that a proximal point algorithm is more stable and frequently outperforms classical optimization algorithms (see []). Chrétien and Hero [] prove superlinear convergence of a proximal point algorithm derived from the EM algorithm. Notice that EM-type algorithms usually enjoy no more than linear convergence.
Taking into consideration the need for robust estimators, and the fact that the maximum likelihood estimator (MLE) is the least robust estimator among the class of divergence-type estimators that we present below, we generalize the EM algorithm (and Tseng's version []) by replacing the log-likelihood function by an estimator of a divergence between the true distribution of the data and the model. A φ-divergence in the sense of Csiszár [] is defined in the same way as in [] by:
where φ is a nonnegative strictly convex function. Examples of such divergences are: the Kullback–Leibler (KL) divergence, the modified KL divergence, and the Hellinger distance, among others. All these well-known divergences belong to the class of Cressie-Read functions [] defined by
for γ = 1, γ = 0 and γ = 1/2, respectively. For γ ∈ {0, 1}, the limit is calculated, and we denote φ_0 for the case of the modified KL and φ_1 for the KL.
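As an illustration, the Cressie–Read family can be evaluated numerically. The closed form below is the standard Cressie–Read expression, which we assume matches Formula (1); the γ ∈ {0, 1} limits are handled explicitly:

```python
import math

def phi_gamma(x, gamma):
    """Cressie-Read convex function evaluated at x > 0.

    gamma = 1 gives the KL case, gamma = 0 the modified KL case and
    gamma = 1/2 the Hellinger case; the limits at gamma in {0, 1}
    are handled explicitly.
    """
    if x <= 0:
        raise ValueError("phi_gamma is defined for x > 0")
    if gamma == 0:       # modified KL: -log(x) + x - 1
        return -math.log(x) + x - 1.0
    if gamma == 1:       # KL: x log(x) - x + 1
        return x * math.log(x) - x + 1.0
    return (x ** gamma - gamma * x + gamma - 1.0) / (gamma * (gamma - 1.0))
```

One can check that every member of the family vanishes at x = 1 and is nonnegative elsewhere, as required of a divergence generator.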
Since the calculation of the divergence uses the unknown true distribution, we need to estimate it. We consider the dual estimator of the divergence introduced independently by [,]. The use of this estimator is motivated by many reasons. Its minimum coincides with the MLE for the modified KL divergence. In addition, it has the same form for discrete and continuous models, and does not require any partitioning or smoothing.
Let (p_φ)_{φ∈Φ} be a parametric model with Φ ⊂ ℝ^d, and denote by φ_T the true vector of parameters. Let dx be the Lebesgue measure defined on ℝ. Suppose that, for any φ ∈ Φ, the probability measure P_φ is absolutely continuous with respect to dx, and denote p_φ the corresponding probability density. The dual estimator of the divergence given an n-sample is given by:
with α ∈ Φ an instrumental parameter. Al Mohamad [] argues that this formula works well under the model; however, when we are not under the model, this quantity largely underestimates the divergence between the true distribution and the model, and he proposes the following modification:
where K_{n,w} is the Rosenblatt–Parzen kernel estimate with window parameter w. In either case, the minimum dual φ-divergence estimator (MDφDE) is defined as the argument of the infimum of the dual approximation:
Asymptotic properties and consistency of these two estimators can be found in [,]. Robustness properties were also studied using the influence function approach in [,]. The kernel-based MDφDE (5) seems to be a better estimator than the classical MDφDE (4) in the sense that the former is robust whereas the latter generally is not. Under the model, the estimator given by (4) is, however, more efficient, especially when the true density of the data is unbounded. More investigation is needed in the context of unbounded densities, since we may use asymmetric kernels in order to improve the efficiency of the kernel-based MDφDE; see [] for more details.
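To fix ideas, the dual criterion can be sketched numerically for a one-dimensional Gaussian location model with the Hellinger divergence. The supremal form used below (an exact integral term minus an empirical term, with a supremum over the instrumental parameter α) follows the dual representation of φ-divergences; the grid search and the specific model are illustrative assumptions of this sketch:

```python
import numpy as np

# Hellinger case: phi(x) = 2(sqrt(x) - 1)^2, so phi'(x) = 2 - 2/sqrt(x)
def phi(x):
    return 2.0 * (np.sqrt(x) - 1.0) ** 2

def phi_prime(x):
    return 2.0 - 2.0 / np.sqrt(x)

def gauss(x, m):
    """Density of N(m, 1)."""
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

def dual_objective(theta, alpha, sample, grid):
    """Integral term (Riemann sum on `grid`) minus empirical term."""
    dx = grid[1] - grid[0]
    r_grid = gauss(grid, theta) / gauss(grid, alpha)
    integral = float(np.sum(phi_prime(r_grid) * gauss(grid, theta))) * dx
    r_emp = gauss(sample, theta) / gauss(sample, alpha)
    empirical = float(np.mean(r_emp * phi_prime(r_emp) - phi(r_emp)))
    return integral - empirical

def dual_divergence(theta, sample, alphas, grid):
    """Dual estimate: supremum over the instrumental parameter alpha."""
    return max(dual_objective(theta, a, sample, grid) for a in alphas)
```

The estimate is close to zero when the candidate parameter matches the data-generating one and grows with the mismatch, as expected of a divergence estimate.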
In this paper, we propose to calculate the MDφDE using an iterative procedure based on the work of Tseng [] on the log-likelihood function. This procedure has the form of a proximal point algorithm and extends the EM algorithm. Our convergence proof demands some regularity (continuity and differentiability) of the estimated divergence with respect to the parameter vector φ, which is not simply checked using (2). Recent results in the book of Rockafellar and Wets [] provide sufficient conditions to prove the continuity and differentiability of supremal functions of the form of (2) with respect to φ. Differentiability with respect to φ still remains a very hard task; therefore, our results also cover cases where the objective function is not differentiable.
The paper is organized as follows: in Section 2, we present the general context. We also present the derivation of our algorithm from the EM algorithm, passing by Tseng's generalization. In Section 3, we present some convergence properties. We discuss in Section 4 a variant of the algorithm with a theoretical global infimum, together with the example of the two-component Gaussian mixture model and a convergence proof of the EM algorithm in the spirit of our approach. Finally, Section 5 contains simulations confirming our claim about the efficiency and the robustness of our approach in comparison with the MLE. The algorithm is also applied to the so-called minimum density power divergence (MDPD) introduced by [].
2. A Description of the Algorithm
2.1. General Context and Notations
Let (X, Y) be a couple of random variables with joint probability density function parametrized by a vector of parameters φ. Let (X_1, Y_1), …, (X_n, Y_n) be n copies of (X, Y), independently and identically distributed. Finally, let (x_1, y_1), …, (x_n, y_n) be n realizations of the n copies of (X, Y). The x_i's are the unobserved data (labels) and the y_i's are the observations. The vector of parameters φ is unknown and needs to be estimated. The observed data are supposed to be real numbers, and the labels belong to a space not necessarily finite unless mentioned otherwise. The marginal density of the observed data is obtained by integrating the joint density over the labels with respect to μ, where μ is a measure defined on the label space (for example, the counting measure if we work with mixture models).
For a parametrized function f with a parameter a, we write f(x|a). We use the notation (φ^k)_k for sequences, with the index written above. The derivatives of a real-valued function ψ defined on ℝ are denoted ψ', ψ'', etc. We denote ∇f the gradient of a real function f defined on ℝ^d. For a generic function of two (vectorial) arguments, the gradient with respect to the first (vectorial) variable is denoted with the subscript 1. Finally, for any set A, int(A) denotes the interior of A.
2.2. EM Algorithm and Tseng’s Generalization
The EM algorithm estimates the unknown parameter vector by (see []):
where , and . By independence between the couples (X_i, Y_i), the previous iteration may be written as:
where h_i is the conditional density of the labels (at step k) given the observation y_i, which we suppose to be positive almost everywhere. It is well known that the EM iterations can be rewritten as a difference between the log-likelihood and a Kullback–Leibler distance-like function. Indeed,
The final line is justified by the fact that h_i is a density and therefore integrates to 1. The additional term does not depend on φ and, hence, can be omitted. We now have the following iterative procedure:
The previous iteration has the form of a proximal point maximization of the log-likelihood, i.e., a perturbation of the log-likelihood by a distance-like function defined on the conditional densities of the labels. Tseng [] generalizes this iteration by allowing any nonnegative convex function ψ to replace the convex function appearing in the KL-type term. Tseng's recurrence is defined by:
where J is the log-likelihood function and the proximal term D_ψ is given by:
for any real nonnegative convex function ψ such that ψ(1) = 0. D_ψ is nonnegative, and it vanishes if and only if the two conditional densities coincide almost everywhere.
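For a finite label space, a distance-like term of the type (8) can be sketched as follows. The array layout (rows index observations, columns index labels), the averaging over observations and the particular generator used are illustrative assumptions of this sketch:

```python
import numpy as np

def psi_kl(t):
    """psi(t) = t log(t) - t + 1: a classical generator, nonnegative
    and vanishing only at t = 1."""
    return t * np.log(t) - t + 1.0

def d_psi(h_new, h_old, psi=psi_kl):
    """Distance-like term between two arrays of conditional label
    densities: for each observation, integrate psi of the ratio against
    the old conditional density, then average over observations."""
    ratio = h_new / h_old
    return float(np.mean(np.sum(psi(ratio) * h_old, axis=1)))
```

By construction `d_psi(h, h) == 0`, and the term is positive as soon as the two conditional densities differ, mirroring the stated properties of D_ψ.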
2.3. Generalization of Tseng’s Algorithm
We use the relationship between maximizing the log-likelihood and minimizing the Kullback–Leibler divergence to generalize the previous algorithm. We therefore replace the log-likelihood function by an estimate of a divergence between the true distribution and the model. We use the dual estimators of the divergence presented earlier in the Introduction, (2) or (3), which we denote in the same manner unless mentioned otherwise. Our new algorithm is defined by:
where D_ψ is defined by (8). When the divergence is the modified KL (φ = φ_0), it is easy to see that we get recurrence (7). Indeed, for the case of (2) we have:
Using the fact that the first term above does not depend on φ, so that it does not count in the arg inf defining the next iterate, we easily get (7). The same applies for the case of (3). For notational simplicity, from now on, we redefine D_ψ with a normalization by n, i.e.,
Hence, our set of algorithms is redefined by:
We will see later that this iteration forces the divergence to decrease and that, under suitable conditions, it converges to a (local) minimum of the estimated divergence. It results that algorithm (11) is a way to calculate both the MDφDE (4) and the kernel-based MDφDE (5).
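Schematically, iteration (11) can be sketched in a generic form as follows; the grid search standing in for the exact arg-inf is an illustrative simplification, not the optimization method of the paper:

```python
import numpy as np

def proximal_point(objective, prox, phi0, candidates, max_iter=100, tol=1e-10):
    """Proximal-point iteration: at each step, minimise
    objective(.) + prox(., current iterate) over a candidate set."""
    phi = phi0
    for _ in range(max_iter):
        penalised = [objective(c) + prox(c, phi) for c in candidates]
        new_phi = candidates[int(np.argmin(penalised))]
        # by construction, the objective cannot increase along iterations
        if abs(objective(new_phi) - objective(phi)) < tol:
            return new_phi
        phi = new_phi
    return phi
```

On a toy quadratic objective with a quadratic proximal term, the iterates move monotonically toward the minimiser, illustrating the regularizing effect of the proximal term.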
3. Some Convergence Properties of
We show here how, according to some possible situations, one may prove convergence of the algorithm defined by (11). Let φ^0 be a given initialization, and define
which we suppose to be a subset of Φ. The idea of defining this set in this context is inherited from the paper of Wu [], which provided the first correct proof of convergence for the EM algorithm. Before going any further, we recall the following definition of a (generalized) stationary point.
Definition 1.
Let f be a real-valued function. If f is differentiable at a point φ* with ∇f(φ*) = 0, we say that φ* is a stationary point of f. If f is not differentiable at φ*, but the subgradient of f at φ*, say ∂f(φ*), exists and is such that 0 ∈ ∂f(φ*), then φ* is called a generalized stationary point of f.
Remark 1.
In the whole paper, the subgradient is defined for any function, not necessarily convex (see Definition 8.3 in [] for more details).
We will be using the following assumptions:
- A0.
- Functions are lower semicontinuous;
- A1.
- Functions and are defined and continuous on, respectively, and ;
- AC.
- Function is defined and continuous on Φ;
- A2.
- Φ^0 is a compact subset of int(Φ);
- A3.
- for all .
Recall also that we suppose that ψ(1) = 0. We relax the convexity assumption on the function ψ: we only suppose that ψ is nonnegative and that ψ(x) = 0 iff x = 1. In addition, ψ'(1) = 0.
Continuity and differentiability assumptions on the estimated divergence for the case of (3) can easily be checked using Lebesgue theorems. The continuity assumption for the case of (2) can be checked using Theorem 1.17 or Corollary 10.14 in []. Differentiability can also be checked using Corollary 10.14 or Theorem 10.31 in the same book. Concerning D_ψ, continuity and differentiability can be obtained merely by fulfilling the conditions of the Lebesgue theorems. When working with mixture models, we only need the continuity and differentiability of ψ and of the conditional densities h_i. The latter is easily deduced from regularity assumptions on the model. For assumption A2, there is no universal method; see Section 4.2 for an example. Assumption A3 can be checked using Lemma 2 in [].
We start the convergence properties by proving that the objective function decreases along the sequence of iterates, and we give a possible set of conditions for the existence of this sequence.
Proposition 1.
(a) Assume that the sequence is well defined in Φ, then , and (b) . (c) Assume A0 and A2 are verified, then the sequence is defined and bounded. Moreover, the sequence converges.
Proof.
We prove . We have by definition of the arginf:
We use the fact that for the right-hand side and that for the left-hand side of the previous inequality. Hence, .
We prove using the decreasing property previously proved in (a). We have by recurrence . The result follows directly by definition of .
We prove (c) by induction on k. For k = 0, φ^0 is clearly well defined, since we choose it. Note that the choice of the initial point of the sequence may influence the convergence of the sequence; see the example of the Gaussian mixture in Section 4.2. Suppose, for some k, that φ^k exists. We prove that the infimum is attained in Φ^0. Let φ be any vector at which the value of the optimized function is less than its value at φ^k. We have:
The first line follows from the nonnegativity of D_ψ. As the estimated divergence at such a φ does not exceed its value at φ^k, φ belongs to Φ^0. Thus, the infimum can be calculated over Φ^0 instead of Φ. Since Φ^0 is compact and the optimized function is lower semicontinuous (the sum of two lower semicontinuous functions), the infimum exists and is attained in Φ^0. We may now define φ^{k+1} to be a vector whose corresponding value is equal to the infimum.
Convergence of the sequence comes from the fact that it is non increasing and bounded. It is non increasing by virtue of (a). Boundedness comes from the lower semicontinuity of . Indeed, . The infimum of a proper lower semicontinuous function on a compact set exists and is attained on this set. Hence, the quantity exists and is finite. This ends the proof. □
Compactness in part (c) can be replaced by inf-compactness of function and continuity of with respect to its first argument. The convergence of the sequence is an interesting property, since, in general, there is no theoretical guarantee, or it is difficult to prove that the whole sequence converges. It may also continue to fluctuate around a minimum. The decrease of the error criterion between two iterations helps us decide when to stop the iterative procedure.
Proposition 2.
Suppose that A1 is verified, that Φ^0 is closed, and that ‖φ^{k+1} − φ^k‖ → 0.
- (a)
- If AC is verified, then any limit point of is a stationary point of ;
- (b)
- If AC is dropped, then any limit point of is a “generalized” stationary point of , i.e., zero belongs to the subgradient of calculated at the limit point.
Proof.
We prove . Let be a convergent subsequence of which converges to . First, , because is closed and the subsequence is a sequence of elements of (proved in Proposition 1b).
Let us now show that the subsequence also converges to . We simply have:
Since and , we conclude that .
By definition of φ^{k+1}, it achieves the infimum in recurrence (11), so that the gradient of the optimized function is zero:
Using the continuity assumptions A1 and AC of the gradients, one can pass to the limit with no problem:
However, the gradient because (recall that ) for any
which is equal to zero since . This implies that .
We prove (b). We use again the definition of the arginf. As the optimized function is not necessarily differentiable at the points of the sequence , a necessary condition for to be an infimum is that 0 belongs to the subgradient of the function on . Since is assumed to be differentiable, the optimality condition is translated into:
Since is continuous, then its subgradient is outer semicontinuous (see [] Chapter 8, Proposition 7). We use the same arguments presented in (a) to conclude the existence of two subsequences and which converge to the same limit . By definition of outer semicontinuity, and since , we have:
We want to prove that . By definition of the (outer) limsup (see [] Chapter 4, Definition 1 or Chapter 5B):
In our scenario, , , and . The continuity of with respect to both arguments and the fact that the two subsequences and converge to the same limit, imply that . Hence, . By inclusion (12), we get our result:
This ends the proof. □
The assumption used in Proposition 2 is not easy to check unless one has a closed formula for the iterates. The following proposition gives a method to prove such an assumption. The method seems simpler, but it is not verified in many mixture models (see Section 4.2 for a counterexample).
Proposition 3.
Assume that A1, A2 and A3 are verified; then ‖φ^{k+1} − φ^k‖ → 0. Thus, by Proposition 2 (according to whether AC is verified or not), any limit point of the sequence of iterates is a (generalized) stationary point of the estimated divergence.
Proof.
By contradiction, let us suppose that does not converge to 0. There exists a subsequence such that . Since belongs to the compact set , there exists a convergent subsequence such that . The sequence belongs to the compact set ; therefore, we can extract a further subsequence such that . Besides . Finally since the sequence is convergent, a further subsequence also converges to the same limit . We have proved the existence of a subsequence of such that does not converge to 0 and such that , with .
The real sequence converges as proved in Proposition 1c. As a result, both sequences and converge to the same limit being subsequences of the same convergent sequence. In the proof of Proposition 1, we can deduce the following inequality:
which is also verified for any substitution of k by a subsequence index. By passing to the limit on k, we get . However, the distance-like function is nonnegative, so that the limit is zero. Using assumption A3, this implies that . This contradicts the hypothesis that does not converge to 0.
The second part of the Proposition is a direct result of Proposition 2. □
Corollary 1.
Under assumptions of Proposition 3, the set of accumulation points of is a connected compact set. Moreover, if is strictly convex in the neighborhood of a limit point of the sequence , then the whole sequence converges to a local minimum of .
Proof.
Since the sequence of iterates is bounded and verifies ‖φ^{k+1} − φ^k‖ → 0, Theorem 28.1 in [] implies that its set of accumulation points is a connected compact set. It is not empty, since Φ^0 is compact. The remainder of the proof is a direct result of Theorem 3.3.1 from []. The strict concavity of the objective function around an accumulation point is replaced here by the strict convexity of the estimated divergence. □
Proposition 3 and Corollary 1 describe what we may hope to get from the sequence of iterates. Convergence of the whole sequence requires a local convexity assumption in the neighborhood of a limit point. Although simple, this assumption remains difficult to check, since we do not know where the limit points might be. In addition, assumption A3 is very restrictive, and is not verified in mixture models.
Propositions 2 and 3 were developed for the likelihood function in the paper of Tseng []. Similar results for a general class of functions replacing the likelihood, and which may not be differentiable (but are still continuous), are presented in []. In these results, assumption A3 is essential. Although this problem is avoided in [], their approach demands that the log-likelihood have a limit as the parameter tends to infinity. This is simply not verified for mixture models. We present a method similar to the one in [], based on the idea of Tseng [] of using the set Φ^0, which is valid for mixtures. We lose, however, the guarantee of consecutive decrease of the sequence.
Proposition 4.
Assume A1, AC and A2 are verified. Any limit point of the sequence of iterates is a stationary point of the estimated divergence. If AC is dropped, then 0 belongs to the subgradient of the estimated divergence calculated at the limit point.
Proof.
If the sequence of iterates converges, then the result follows simply from Proposition 2.
Suppose now that the sequence does not converge. Since Φ^0 is compact and the whole sequence lies in Φ^0 (proved in Proposition 1), there exists a convergent subsequence. The corresponding subsequence of preceding iterates does not necessarily converge; it is still contained in the compact Φ^0, so that we can extract a further subsequence which converges. The associated subsequence of iterates still converges to the same limit as before, because it is a subsequence of a convergent sequence. We have proved so far the existence of two convergent subsequences, of the iterates and of the preceding iterates, with a priori different limits. For simplicity, and without any loss of generality, we will work with these two subsequences.
Conserving previous notations, suppose that and . We use again inequality (13):
By taking the limits of the two parts of the inequality as k tends to infinity, and using the continuity of the two functions, we have
Recall that under A1–A2, the sequence of objective values converges, so that it has the same limit for any subsequence. We also use the fact that the distance-like function is nonnegative to deduce that its limit is zero. Looking closely at the definition of this divergence (10), we get that if the sum is zero, then each term is also zero, since all terms are nonnegative. This means that:
The integrands are nonnegative functions, so they vanish almost everywhere with respect to the measure defined on the space of labels.
The conditional densities are supposed to be positive (which can be ensured by a suitable choice of the initial point φ^0). On the other hand, ψ is chosen in such a way that ψ(x) = 0 iff x = 1. Therefore:
Since is, by definition, an infimum of , then the gradient of this function is zero on . It results that:
Taking the limit on k, and using the continuity of the derivatives, we get that:
Let us write explicitly the gradient of the second divergence:
We use now the identities (14), and the fact that , to deduce that:
This entails using (15) that .
Comparing the proved result with the notation considered at the beginning of the proof, we have proved that the limit of the subsequence is a stationary point of the objective function. Therefore, the final step is to deduce the same result on the original convergent subsequence . This is simply due to the fact that is a subsequence of the convergent sequence , hence they have the same limit.
When assumption AC is dropped, arguments similar to those used in the proof of Proposition 2b are employed. The optimality condition in (11) implies:
Function is continuous, hence its subgradient is outer semicontinuous and:
By definition of the limsup:
In our scenario, , , and . We have proved above in this proof that using only the convergence of , inequality (13) and the properties of . Assumption AC was not needed. Hence, . This proves that . Finally, using the inclusion (16), we get our result:
which ends the proof. □
The proof of the previous proposition is very similar to the proof of Proposition 2. The key idea is to use the sequence of conditional densities instead of the sequence . According to the application, one may be interested only in Proposition 1 or in Propositions 2–4. If one is interested in the parameters, Propositions 2 to 4 should be used, since we need a stable limit of . If we are only interested in minimizing an error criterion between the estimated distribution and the true one, Proposition 1 should be sufficient.
4. Case Studies
4.1. An Algorithm With Theoretically Global Infimum Attainment
We present a variant of algorithm (11) which theoretically ensures convergence to a global infimum of the objective function as soon as there exists a convergent subsequence of the iterates. The idea is the same as that of Theorem 3.2.4 in []. Define φ^{k+1} by:
The proof of convergence is very simple and does not depend on the differentiability of either of the two functions in the penalised criterion. We only assume A1 and A2 to be verified. Let (φ^{n_k})_k be a convergent subsequence and φ∞ its limit; existence is guaranteed by the compactness of Φ^0 and the fact that the whole sequence resides in Φ^0 (see Proposition 1b). Suppose also that the sequence of differences between consecutive iterates converges to 0 as k goes to infinity.
The assumptions of Theorem 3.2.4 from [] are now verified. Thus, following the same lines as in the proof of this theorem (inverting all inequalities, since we are minimizing instead of maximizing), we may prove that φ∞ is a global infimum of the estimated divergence, that is,
The problem with this approach is that it depends heavily on the fact that the optimization at each step of the algorithm is solved exactly. This does not happen in general, unless the function is convex or we dispose of an algorithm that can perfectly solve nonconvex optimization problems (in which case there would be no point in applying an iterative proximal algorithm; we would use such an optimization algorithm directly on the objective function). Although in our approach we use a similar assumption to prove the consecutive decrease of the objective values, we can replace the infimum calculation in (11) by the following: we require at each step a local infimum of the penalised criterion whose objective value is less than that of the previous term of the sequence. If we can no longer find any local minimum verifying this requirement, the procedure stops with φ^{k+1} = φ^k. This keeps all the proofs presented in this paper valid with no change.
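The accept-or-stop rule just described can be sketched as follows; `local_minimize` stands for any local optimization routine and is a hypothetical placeholder:

```python
def proximal_decrease_check(objective, prox, local_minimize, phi0, max_iter=100):
    """Variant of (11): accept a local minimiser of the penalised
    criterion only if it strictly decreases the objective; otherwise
    stop with phi^{k+1} = phi^k."""
    phi = phi0
    for _ in range(max_iter):
        cand = local_minimize(lambda p: objective(p) + prox(p, phi), phi)
        if objective(cand) < objective(phi):
            phi = cand          # accepted: the objective decreased
        else:
            break               # no decreasing local minimum: stop
    return phi
```

By design, the sequence of objective values produced by this variant is strictly decreasing until the stopping step, which is the only property the convergence proofs actually use.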
4.2. The Two-Component Gaussian Mixture
We suppose that the model is a mixture of two Gaussian densities, and that we are only interested in estimating the two means and the proportion λ. The use of η is to avoid cancellation of either of the two components, and to keep the positivity hypothesis on the conditional densities verified. We also suppose that the component variances are reduced (equal to 1). The model takes the form
Here, . The regularization term is defined by (8), where:
These functions are clearly of class C1 on int(Φ), and so is the regularization term. We prove that Φ^0 is closed and bounded, which is sufficient to conclude its compactness, since the space provided with the Euclidean distance is complete.
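For this model, the classical EM update (the likelihood case of the recurrence discussed above) can be sketched as follows; unit variances and the clipping of the proportion to [η, 1 − η] follow the setting above, with η = 0.05 as an illustrative value:

```python
import numpy as np

SQ2PI = np.sqrt(2.0 * np.pi)

def npdf(x, m):
    """N(m, 1) density (unit variances, as assumed in the model)."""
    return np.exp(-0.5 * (x - m) ** 2) / SQ2PI

def em_step(sample, lam, mu1, mu2, eta=0.05):
    """One EM iteration for the two-component Gaussian mixture."""
    w1 = lam * npdf(sample, mu1)
    w2 = (1.0 - lam) * npdf(sample, mu2)
    h = w1 / (w1 + w2)                 # conditional label densities
    lam_new = float(np.clip(np.mean(h), eta, 1.0 - eta))
    mu1_new = float(np.sum(h * sample) / np.sum(h))
    mu2_new = float(np.sum((1.0 - h) * sample) / np.sum(1.0 - h))
    return lam_new, mu1_new, mu2_new
```

Iterating this update on a well-separated two-component sample recovers the component means, and the log-likelihood never decreases along the iterations, in line with Proposition 1.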
If we use the dual estimator of the divergence given by (2), then assumption A0 can be verified using the maximum theorem of Berge []. There is still a great difficulty in studying the properties (closedness or compactness) of the set Φ^0. Moreover, all convergence properties of the sequence of iterates require the continuity of the estimated divergence with respect to φ. In order to prove this continuity, we need to assume that Φ is compact, i.e., assume that the means are included in a compact interval. Now, using Theorem 10.31 from [], the estimated divergence is continuous and differentiable almost everywhere with respect to φ.
The compactness assumption of Φ implies directly the compactness of . Indeed,
is then the inverse image by a continuous function of a closed set, so it is closed in Φ. Hence, it is compact.
Conclusion 1.
Using Propositions 4 and 1, if , the sequence defined through Formula (2) converges and there exists a subsequence which converges to a stationary point of the estimated divergence. Moreover, every limit point of the sequence is a stationary point of the estimated divergence.
If we use the kernel-based dual estimator given by (3) with a Gaussian kernel density estimator, then the estimated divergence is continuously differentiable over Φ, even if the means are not bounded. For example, take the function defined by (1). There is one condition which relates the window of the kernel, say w, to the value of γ. Indeed, using Formula (3), we can write
In order to study the continuity and the differentiability of the estimated divergence with respect to φ, it suffices to study the integral term. We have
The dominating term at infinity in the numerator is , whereas it is in the denominator. In order for the integrand to be bounded by an integrable function independently of , it now suffices that . That is , which is equivalent to . This argument also holds if we differentiate the integrand with respect to λ or either of the means. For γ = 2 (the Pearson χ²), we need . For γ = 1/2 (the Hellinger), there is no condition on w.
Closedness of Φ^0 is proved similarly to the previous case. Boundedness, however, must be treated differently, since Φ is no longer supposed to be compact. For simplicity, take . The idea is to choose an initialization for the proximal algorithm such that Φ^0 does not include unbounded values of the means. Continuity of the estimated divergence permits calculating its limits when either (or both) of the means tends to infinity. If both means go to infinity, then . Thus, for , we have . For , the limit is infinity. If only one of the means tends to ∞, then the corresponding component vanishes from the mixture. Thus, if we choose φ^0 such that:
then the algorithm starts at a point of Φ whose function value is less than the limits of the estimated divergence at infinity. By Proposition 1, the algorithm continues to decrease the value of the estimated divergence and never goes back to the limits at infinity. In addition, the definition of Φ^0 permits to conclude that if φ^0 is chosen according to conditions (18) and (19), then Φ^0 is bounded. Thus, Φ^0 becomes compact. Unfortunately, the threshold appearing in these conditions can only be calculated numerically. We will see next that, in the case of the likelihood function, a similar condition will be imposed for the compactness of Φ^0, and there will be no need for any numerical calculation.
Conclusion 2.
Using Propositions 4 and 1, under conditions (18) and (19) the sequence defined through Formula (3) converges and there exists a subsequence that converges to a stationary point of the estimated divergence. Moreover, every limit point of the sequence is a stationary point of the estimated divergence.
In the case of the likelihood, the set Φ^0 can be written as:
where J is the log-likelihood function of the Gaussian mixture model. The log-likelihood function is clearly of class C1 on int(Φ). We prove that Φ^0 is closed and bounded, which is sufficient to conclude its compactness, since the space provided with the Euclidean distance is complete.
Closedness. The set Φ^0 is the inverse image, by a continuous function (the log-likelihood), of a closed set. Therefore, it is closed in Φ.
Boundedness. By contradiction, suppose that Φ^0 is unbounded; then there exists a sequence of elements of Φ^0 which tends to infinity. Since λ is bounded, either of the two means tends to infinity. Suppose that both means tend to infinity; we then have . Any finite initialization will imply that , so that . Thus, it is impossible for both means to go to infinity.
Suppose that , and that converges (or that is bounded; in such case we extract a convergent subsequence) to . The limit of the likelihood has the form:
which is bounded by its value for and . Indeed, since , we have:
The right-hand side of this inequality is the likelihood of a Gaussian model, so that it is maximized when the mean equals the empirical mean of the sample. Thus, if φ^0 is chosen in a way that its log-likelihood exceeds this value, the case where one mean tends to infinity while the other stays bounded would never be allowed. The symmetric case is treated in the same way. In conclusion, with a choice of φ^0 such that:
the set Φ^0 is bounded.
This condition on φ^0 is very natural and means that we need to begin at a point at least better than the extreme cases where we only have one component in the mixture. It can easily be enforced by choosing a random vector φ^0 and calculating the corresponding log-likelihood value; if φ^0 does not verify the condition, we draw another random vector until the condition is satisfied.
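The redraw procedure just described can be sketched as follows; the threshold used (the best single-Gaussian fit, with its mean at the sample mean) is a conservative stand-in for the right-hand side of condition (20):

```python
import numpy as np

SQRT2PI = np.sqrt(2.0 * np.pi)

def loglik(sample, lam, mu1, mu2):
    """Log-likelihood of the two-component unit-variance mixture."""
    g = lambda m: np.exp(-0.5 * (sample - m) ** 2) / SQRT2PI
    return float(np.sum(np.log(lam * g(mu1) + (1.0 - lam) * g(mu2))))

def draw_valid_init(sample, rng, eta=0.05, max_tries=1000):
    """Redraw a random starting point until its log-likelihood beats
    the best one-component fit."""
    xbar = float(np.mean(sample))
    single = float(np.sum(-0.5 * (sample - xbar) ** 2 - np.log(SQRT2PI)))
    for _ in range(max_tries):
        lam = rng.uniform(eta, 1.0 - eta)
        mu1, mu2 = rng.uniform(sample.min(), sample.max(), size=2)
        if loglik(sample, lam, mu1, mu2) > single:
            return lam, mu1, mu2
    raise RuntimeError("no valid initialization found")
```

On a well-separated bimodal sample, a valid starting point is found after very few draws, since most mixtures with means near the two modes already beat any single-component fit.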
Conclusion 3.
Using Propositions 4 and 1, under condition (20) the sequence converges and there exists a subsequence which converges to a stationary point of the likelihood function. Moreover, every limit point of the sequence is a stationary point of the likelihood.
Assumption A3 is not fulfilled (this part applies to all the aforementioned situations). As mentioned in the paper of Tseng [], for the two-component Gaussian mixture example, by changing both means by the same amount and suitably adjusting λ, the value of the proximal term would be unchanged. We explore this more thoroughly by writing the corresponding equations. Let us suppose, by contradiction, that the proximal term vanishes for two distinct parameter vectors. By definition, it is given by a sum of nonnegative terms, which implies that all terms need to be equal to zero. The following lines are equivalent:
Viewing this set of n equations as an equality, at n points, of two polynomials of degree 1 in y, we deduce that as soon as we have two distinct observations, say y_1 and y_2, the two polynomials must have the same coefficients. Thus, the set of n equations is equivalent to the following two equations:
These two equations with three variables have an infinite number of solutions. Take, for example, .
Remark 2.
The previous conclusion can be extended to any two-component mixture of exponential families having the form:
One may write the corresponding n equations. The polynomial in has degree at most . Thus, given distinct observations, the two polynomials must have the same coefficients. Finally, if with , then assumption A3 does not hold.
Unfortunately, we have no information about the difference between consecutive terms, except in the case of , which corresponds to the classical EM recurrence:
Tseng [] has shown that we can prove directly that converges to 0.
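For concreteness, the classical EM recurrence for the two-component Gaussian mixture with known unit variances can be sketched as follows (a standard textbook form of the E- and M-steps, given as an illustration rather than a reproduction of the elided formula above):

```python
import numpy as np
from scipy.stats import norm

def em_step(theta, y):
    """One EM iteration for a two-component Gaussian mixture with unit variances."""
    lam, mu1, mu2 = theta
    # E-step: posterior probability of component 1 for each observation
    w1 = lam * norm.pdf(y, mu1, 1.0)
    w2 = (1.0 - lam) * norm.pdf(y, mu2, 1.0)
    t = w1 / (w1 + w2)
    # M-step: closed-form updates of the proportion and the two means
    lam_new = t.mean()
    mu1_new = np.sum(t * y) / np.sum(t)
    mu2_new = np.sum((1.0 - t) * y) / np.sum(1.0 - t)
    return (lam_new, mu1_new, mu2_new)
```

Iterating `em_step` from a valid starting point gives the sequence whose successive differences are discussed above.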
5. Simulation Study
We summarize the results of 100 experiments on 100 samples by giving the average of the estimates and of the error committed, together with the corresponding standard deviations. The error criterion is the total variation distance (TVD), which is calculated using the L1 distance. Indeed, the Scheffé lemma (see [] (Page 129)) states that:
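Scheffé's lemma identifies the TVD with half the L1 distance between the densities, which suggests the following numerical sketch (the grid limits and step below are arbitrary choices of ours):

```python
import numpy as np
from scipy.stats import norm

def tvd(p, q, lo=-20.0, hi=20.0, n=200001):
    """Total variation distance via Scheffe's lemma: TV(p, q) = (1/2) * ||p - q||_L1.
    p and q must be vectorized densities; a plain Riemann sum suffices here
    because both densities essentially vanish at the grid endpoints."""
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    return 0.5 * np.sum(np.abs(p(x) - q(x))) * dx
```

For two unit-variance Gaussians separated by μ, this reproduces the closed form 2Φ(μ/2) − 1, which is a convenient sanity check.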
The TVD measures the maximum error we may commit when using the estimated model in lieu of the true distribution. For the estimators based on divergences, we consider the Hellinger divergence, which corresponds to . We prefer the Hellinger divergence because we hope to obtain robust estimators without loss of efficiency (see []). is calculated with . The kernel-based MDφDE is calculated using the Gaussian kernel, with the window computed by Silverman's rule. We also included in the comparison the minimum density power divergence (MDPD) estimator of []. This estimator is defined by:
where . This is a Bregman divergence, and it is known to combine good efficiency and robustness for a suitable choice of the tradeoff parameter. According to the simulation results in [], the value of seems to give a good tradeoff between robustness against outliers and good performance under the model. Notice that the MDPD coincides with the MLE as a tends to zero. Thus, the methodology presented in this article applies to this estimator, and the proximal point algorithm can be used to calculate the MDPD. The proximal term is kept the same, i.e., .
Remark 3
(Note on the robustness of the estimators used) In Section 3, we proved under mild conditions that the proximal point algorithm (11) ensures the decrease of the estimated divergence. This means that when we use the dual Formulas (2) and (3), the proximal point algorithm (11) returns at convergence the estimators defined by (4) and (5), respectively. Similarly, if we use the density power divergence of Basu et al. [], then the proximal point algorithm returns at convergence the MDPD defined by (22). The robustness properties of the dual estimators (4) and (5) are studied in [] and [], respectively, using the influence function (IF) approach; the robustness properties of the MDPD are studied with the same approach in []. The MDφDE (4) generally has an unbounded IF (see [] Section 3.1), whereas the IF of the kernel-based MDφDE may be bounded, for example in a Gaussian model and for any divergence with , see [] Example 2. On the other hand, the MDPD generally has a bounded IF whenever the tradeoff parameter a is positive, in particular in the Gaussian model, and it becomes more robust as a increases (see Section 3.3 in []). Therefore, we expect the proximal point algorithm to produce robust estimators in the case of the kernel-based MDφDE and the MDPD, and thus to obtain better results than the MLE calculated using the EM algorithm.
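The Gaussian-kernel density estimate with Silverman window used by the kernel-based MDφDE can be sketched as follows (we use the common 0.9·min(σ̂, IQR/1.349)·n^(−1/5) variant of Silverman's rule; the exact variant used in the experiments is not specified above):

```python
import numpy as np

def silverman_bandwidth(y):
    """Rule-of-thumb window for a Gaussian kernel."""
    n = len(y)
    spread = min(np.std(y, ddof=1),
                 (np.percentile(y, 75) - np.percentile(y, 25)) / 1.349)
    return 0.9 * spread * n ** (-0.2)

def kde(y, h):
    """Gaussian kernel density estimate with window h; returns a vectorized density."""
    def f(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
        return np.mean(np.exp(-0.5 * ((x - y) / h) ** 2), axis=1) / (h * np.sqrt(2.0 * np.pi))
    return f
```

The resulting density estimate is what plays the role of the true density in the kernel-based dual formula.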
Simulations from two mixture models are given below—a Gaussian mixture and a Weibull mixture. The MLE for both mixtures was calculated using the EM algorithm.
Optimizations were carried out using the Nelder–Mead algorithm [] in the statistical environment R []. Numerical integrations in the Gaussian mixture were computed with the distrExIntegrate function of the package distrEx, a slight modification of the standard function integrate that performs a Gauss–Legendre quadrature whenever integrate returns an error. In the Weibull mixture, we used the integral function from the package pracma. Function integral offers a variety of numerical integration methods, such as Kronrod–Gauss quadrature, Romberg's method, Gauss–Richardson quadrature, Clenshaw–Curtis (not adaptive) and adaptive Simpson's method. Although function integral is slow, it performs better than other functions even when the integrand behaves relatively badly.
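The fallback behavior described for distrExIntegrate can be mimicked in a few lines (a sketch with names of our own choosing; this is not the R packages' actual code, and the fallback expects a vectorized integrand):

```python
import numpy as np
from scipy.integrate import quad

def integrate_with_fallback(f, a, b, n_gl=100):
    """Adaptive quadrature with a fixed-order Gauss-Legendre fallback,
    mimicking the behavior of distrExIntegrate described above."""
    try:
        val, _err = quad(f, a, b)
        if np.isfinite(val):
            return val
    except Exception:
        pass
    # fallback: n_gl-point Gauss-Legendre quadrature mapped from [-1, 1] to [a, b]
    nodes, weights = np.polynomial.legendre.leggauss(n_gl)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))
```

The fixed-order rule is not adaptive, but for the smooth integrands appearing in divergence criteria it is typically accurate to machine precision.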
5.1. The Two-Component Gaussian Mixture Revisited
We consider the Gaussian mixture (17) presented earlier, with true parameters , and known variances equal to 1. Contamination was done by adding, to the five lowest values in the original sample, random observations from the uniform distribution . We also added, to the five largest values, random observations from the uniform distribution . Results are summarized in Table 1. The EM algorithm was initialized according to condition (20). This condition gave good results under the model, but it did not always result in good estimates when outliers were added (the proportion converged towards 0 or 1), in which case the EM algorithm was reinitialized manually.
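One way to reproduce this contamination scheme is sketched below; since the uniform supports are elided above, the ranges `low` and `high`, as well as the mixture parameters in the sampler, are placeholders of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, lam, mu1, mu2):
    """Draw n observations from lam*N(mu1, 1) + (1 - lam)*N(mu2, 1)."""
    z = rng.random(n) < lam
    return np.where(z, rng.normal(mu1, 1.0, n), rng.normal(mu2, 1.0, n))

def contaminate(y, low=(-10.0, -5.0), high=(5.0, 10.0), k=5):
    """Shift the k smallest observations down and the k largest up by uniform
    draws; `low` and `high` are placeholder supports, since the actual
    uniform ranges are elided in the text above."""
    y = np.sort(np.asarray(y, dtype=float))
    y[:k] += rng.uniform(*low, size=k)
    y[-k:] += rng.uniform(*high, size=k)
    return y
```

The sample size stays fixed; only the tails are pushed outward, which is what stresses the proportion estimate in the EM algorithm.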
Table 1.
The mean and the standard deviation of the estimates and the errors committed in a 100 run experiment of a two-component Gaussian mixture. The true set of parameters is , .
Figure 1 shows the values of the estimated divergence for both Formulas (2) and (3) on a logarithmic scale at each iteration of the algorithm.
Figure 1.
Decrease of the (estimated) Hellinger divergence between the true density and the estimated model at each iteration in the Gaussian mixture. The left panel shows the values of the kernel-based dual Formula (3); the right panel shows the values of the classical dual Formula (2). Values are plotted on a logarithmic scale .
In our simulation results, the total variation distances of all four estimation methods are very close under the model. When outliers were added, the classical MDφDE was as sensitive as the maximum likelihood estimator: the error doubled. Both the kernel-based MDφDE and the MDPD are clearly robust, since their total variation distance increased only slightly under contamination.
5.2. The Two-Component Weibull Mixture Model
We consider a two-component Weibull mixture with unknown shapes and proportion . The scales are known and equal to . The density function is given by:
Contamination was done by replacing 10 randomly chosen observations in each sample by 10 i.i.d. observations drawn from a Weibull distribution with shape and scale . Results are summarized in Table 2. Note that it would have been preferable to use asymmetric kernels to build the kernel-based MDφDE, since they are advised for positive-supported distributions in order to reduce the bias at zero; see [] for a detailed comparison with symmetric kernels. This is not, however, the goal of this paper, and the use of symmetric kernels in this mixture model gave satisfactory results.
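The two-component Weibull mixture density can be sketched as follows (the shapes, proportion, and unit scales used in the test are illustrative, since the true values are elided above):

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_mixture_pdf(x, lam, nu1, nu2, scale1=1.0, scale2=1.0):
    """Density of a two-component Weibull mixture with shapes nu1, nu2;
    the scales are treated as known, as in the model above."""
    return (lam * weibull_min.pdf(x, nu1, scale=scale1)
            + (1.0 - lam) * weibull_min.pdf(x, nu2, scale=scale2))
```

This vectorized density is what the numerical integration routines discussed above are applied to when computing the divergence criteria.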
Table 2.
The mean and the standard deviation of the estimates and the errors committed in a 100-run experiment of a two-component Weibull mixture. The true set of parameters is .
Simulation results in Table 2 confirm once more the validity of our proximal point algorithm and the clear robustness of both the kernel-based MDφDE and the MDPD.
6. Conclusions
We introduced in this paper a proximal-point algorithm that permits the calculation of divergence-based estimators. We studied the theoretical convergence of the algorithm and verified it on a two-component Gaussian mixture. We performed several simulations confirming that the algorithm works and provides a way to calculate divergence-based estimators. We also applied our proximal algorithm to a Bregman divergence estimator (the MDPD), and the algorithm succeeded in producing the MDPD. Further investigation of the role of the proximal term, and a comparison with direct optimization methods to demonstrate the practical use of the algorithm, may be considered in future work.
Acknowledgments
The authors are grateful to Laboratoire de Statistique Théorique et Appliquée, Université Pierre et Marie Curie, for financial support.
Author Contributions
Michel Broniatowski proposed the use of a proximal-point algorithm to calculate the MDφDE and proposed basing this work on the paper of Tseng []. Diaa Al Mohamad proposed the generalization in Section 2.3 and provided all of the convergence results in Section 3. Diaa Al Mohamad also conceived the simulations. Finally, Michel Broniatowski contributed to improving the text written by Diaa Al Mohamad. Both authors have read and approved the final manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
References
- McLachlan, G.J.; Krishnan, T. The EM Algorithm and Extensions; Wiley: Hoboken, NJ, USA, 2007. [Google Scholar]
- Tseng, P. An Analysis of the EM Algorithm and Entropy-Like Proximal Point Methods. Math. Oper. Res. 2004, 29, 27–44. [Google Scholar] [CrossRef]
- Chrétien, S.; Hero, A.O. Generalized Proximal Point Algorithms and Bundle Implementations. Available online: http://www.eecs.umich.edu/techreports/systems/cspl/cspl-316.pdf (accessed on 25 July 2016).
- Goldstein, A.; Russak, I. How good are the proximal point algorithms? Numer. Funct. Anal. Optim. 1987, 9, 709–724. [Google Scholar] [CrossRef]
- Chrétien, S.; Hero, A.O. Acceleration of the EM algorithm via proximal point iterations. In Proceedings of the IEEE International Symposium on Information Theory, Cambridge, MA, USA, 16–21 August 1998.
- Csiszár, I. Eine informationstheoretische Ungleichung und ihre anwendung auf den Beweis der ergodizität von Markoffschen Ketten. Publ. Math. Inst. Hung. Acad. Sci. 1963, 8, 95–108. (In German) [Google Scholar]
- Broniatowski, M.; Keziou, A. Parametric estimation and tests through divergences and the duality technique. J. Multivar. Anal. 2009, 100, 16–36. [Google Scholar] [CrossRef]
- Cressie, N.; Read, T.R.C. Multinomial goodness-of-fit tests. J. R. Stat. Soc. Ser. B 1984, 46, 440–464. [Google Scholar]
- Broniatowski, M.; Keziou, A. Minimization of divergences on sets of signed measures. Stud. Sci. Math. Hung. 2006, 43, 403–442. [Google Scholar] [CrossRef]
- Liese, F.; Vajda, I. On Divergences and Informations in Statistics and Information Theory. IEEE Trans. Inf. Theory 2006, 52, 4394–4412. [Google Scholar] [CrossRef]
- Al Mohamad, D. Towards a better understanding of the dual representation of phi divergences. arXiv 2016, arXiv:1506.02166. [Google Scholar]
- Toma, A.; Broniatowski, M. Dual divergence estimators and tests: Robustness results. J. Multivar. Anal. 2011, 102, 20–36. [Google Scholar] [CrossRef]
- Rockafellar, R.T.; Wets, R.J.B. Variational Analysis, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
- Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M.C. Robust and Efficient Estimation by Minimising a Density Power Divergence. Biometrika 1998, 85, 549–559. [Google Scholar] [CrossRef]
- Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39, 1–38. [Google Scholar]
- Wu, C.F.J. On the Convergence Properties of the EM Algorithm. Ann. Stat. 1983, 11, 95–103. [Google Scholar] [CrossRef]
- Ostrowski, A. Solution of Equations and Systems of Equations; Academic Press: Cambridge, MA, USA, 1966. [Google Scholar]
- Chrétien, S.; Hero, A.O. On EM algorithms and their proximal generalizations. ESAIM Probabil. Stat. 2008, 12, 308–326. [Google Scholar] [CrossRef]
- Berge, C. Topological Spaces: Including a Treatment of Multi-valued Functions, Vector Spaces, and Convexity; Dover Publications: Mineola, NY, USA, 1963. [Google Scholar]
- Meister, A. Deconvolution Problems in Nonparametric Statistics; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
- Jiménez, R.; Shao, Y. On robustness and efficiency of minimum divergence estimators. Test 2001, 10, 241–248. [Google Scholar] [CrossRef]
- Nelder, J.A.; Mead, R. A Simplex Method for Function Minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
- The R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2013. [Google Scholar]
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).