Article

A Proximal Point Algorithm for Minimum Divergence Estimators with Application to Mixture Models †

by Diaa Al Mohamad * and Michel Broniatowski
Laboratoire de Statistique Théorique et Appliquée, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 2nd Conference on Geometric Science of Information, Palaiseau, France, 28–30 October 2015.
Entropy 2016, 18(8), 277; https://doi.org/10.3390/e18080277
Submission received: 11 June 2016 / Revised: 20 July 2016 / Accepted: 21 July 2016 / Published: 27 July 2016
(This article belongs to the Special Issue Differential Geometrical Theory of Statistics)

Abstract:
Estimators derived from a divergence criterion such as φ-divergences are generally more robust than the maximum likelihood ones. We are interested in particular in the so-called minimum dual φ-divergence estimator (MDφDE), an estimator built using a dual representation of φ-divergences. We present in this paper an iterative proximal point algorithm that permits the calculation of such an estimator. The algorithm contains by construction the well-known Expectation Maximization (EM) algorithm. Our work is based on the paper of Tseng on the likelihood function. We provide some convergence properties by adapting the ideas of Tseng. We improve Tseng's results by relaxing the identifiability condition on the proximal term, a condition which does not hold for most mixture models and is hard to verify for non-mixture ones. Convergence of the EM algorithm in a two-component Gaussian mixture is discussed in the spirit of our approach. Several experimental results on mixture models are provided to confirm the validity of the approach.

1. Introduction

The Expectation Maximization (EM) algorithm is a well-known method for calculating the maximum likelihood estimator of a model in the presence of incomplete data. For example, when working with mixture models in the context of clustering, the labels or classes of observations are unknown during the training phase. Several variants of the EM algorithm have been proposed (see [1]). Another way to look at the EM algorithm is as a proximal point problem (see [2,3]). Indeed, one may rewrite the conditional expectation of the complete log-likelihood as a sum of the log-likelihood function and a distance-like function over the conditional densities of the labels given an observation. Generally, the proximal term has a regularization effect in the sense that a proximal point algorithm is more stable and frequently outperforms classical optimization algorithms (see [4]). Chrétien and Hero [5] prove superlinear convergence of a proximal point algorithm derived from the EM algorithm. Notice that EM-type algorithms usually enjoy no more than linear convergence.
Taking into consideration the need for robust estimators, and the fact that the maximum likelihood estimator (MLE) is the least robust estimator among the class of divergence-type estimators that we present below, we generalize the EM algorithm (and the version of Tseng [2]) by replacing the log-likelihood function by an estimator of a φ-divergence between the true distribution of the data and the model. A φ-divergence in the sense of Csiszár [6] is defined in the same way as in [7] by:
D_\varphi(Q, P) = \int \varphi\!\left(\frac{dQ}{dP}(y)\right) dP(y),
where φ is a nonnegative strictly convex function. Examples of such divergences are the Kullback–Leibler (KL) divergence, the modified KL divergence and the Hellinger distance, among others. All these well-known divergences belong to the class of Cressie–Read functions [8] defined by
\varphi_\gamma(x) = \frac{x^\gamma - \gamma x + \gamma - 1}{\gamma(\gamma - 1)} \quad \text{for } \gamma \in \mathbb{R}\setminus\{0,1\}, \qquad (1)
for γ = 1, 0 and 1/2, respectively. For γ ∈ {0, 1}, the limit is calculated, and we denote φ_0(x) = −log x + x − 1 for the case of the modified KL and φ_1(x) = x log x − x + 1 for the KL.
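To make the definition concrete, here is a small R sketch (not from the original paper) that evaluates the Cressie–Read function φ_γ, treating the limiting cases γ = 0 and γ = 1 separately:

# Cressie-Read function phi_gamma(x); gamma = 0 and gamma = 1 are handled as limits
cressie_read <- function(x, gamma) {
  if (gamma == 0) return(-log(x) + x - 1)      # modified KL divergence
  if (gamma == 1) return(x * log(x) - x + 1)   # KL divergence
  (x^gamma - gamma * x + gamma - 1) / (gamma * (gamma - 1))
}
cressie_read(2, 0.5)   # Hellinger case, evaluated at x = 2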
Since the φ-divergence calculus uses the unknown true distribution, we need to estimate it. We consider the dual estimator of the divergence introduced independently by [9,10]. The use of this estimator is motivated by several reasons. Its minimum coincides with the MLE for φ(t) = −log(t) + t − 1. In addition, it has the same form for discrete and continuous models, and does not involve any partitioning or smoothing.
Let (P_ϕ)_{ϕ∈Φ} be a parametric model with Φ ⊂ R^d, and denote ϕ_T the true set of parameters. Let dy be the Lebesgue measure defined on R. Suppose that, for all ϕ ∈ Φ, the probability measure P_ϕ is absolutely continuous with respect to dy, and denote p_ϕ the corresponding probability density. The dual estimator of the φ-divergence given an n-sample y_1, …, y_n is given by:
\hat{D}_\varphi(p_\phi, p_{\phi_T}) = \sup_{\alpha\in\Phi}\left\{ \int \varphi'\!\left(\frac{p_\phi}{p_\alpha}(x)\right) p_\phi(x)\,dx - \frac{1}{n}\sum_{i=1}^n \varphi^{\#}\!\left(\frac{p_\phi}{p_\alpha}(y_i)\right)\right\}, \qquad (2)
with φ^#(t) = t φ'(t) − φ(t). Al Mohamad [11] argues that this formula works well under the model; however, when we are not under the model, this quantity largely underestimates the divergence between the true distribution and the model, and proposes the following modification:
\tilde{D}_\varphi(p_\phi, p_{\phi_T}) = \int \varphi'\!\left(\frac{p_\phi}{K_{n,w}}(x)\right) p_\phi(x)\,dx - \frac{1}{n}\sum_{i=1}^n \varphi^{\#}\!\left(\frac{p_\phi}{K_{n,w}}(y_i)\right), \qquad (3)
where K_{n,w} is the Rosenblatt–Parzen kernel estimate with window parameter w. Whether we use D̂_φ or D̃_φ, the minimum dual φ-divergence estimator (MDφDE) is defined as the argument of the infimum of the dual approximation:
\hat{\phi}_n = \arg\inf_{\phi\in\Phi} \hat{D}_\varphi(p_\phi, p_{\phi_T}), \qquad (4)
\tilde{\phi}_n = \arg\inf_{\phi\in\Phi} \tilde{D}_\varphi(p_\phi, p_{\phi_T}). \qquad (5)
Asymptotic properties and consistency of these two estimators can be found in [7,11]. Robustness properties were also studied using the influence function approach in [11,12]. The kernel-based MDφDE (5) seems to be a better estimator than the classical MDφDE (4) in the sense that the former is robust whereas the latter generally is not. Under the model, however, the estimator given by (4) is more efficient, especially when the true density of the data is unbounded. More investigation is needed in the context of unbounded densities, since we may use asymmetric kernels in order to improve the efficiency of the kernel-based MDφDE; see [11] for more details.
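As an illustration of how Formula (3) can be evaluated in practice, the following R sketch computes the kernel-based dual estimate for a univariate model. The names model_density, phi_prime and phi_sharp are ours (phi_prime stands for φ' and phi_sharp for φ^#), the Gaussian kernel window defaults to Silverman's rule, and the integration bounds may need to be adapted to the model support; this is only a sketch of the formula, not the authors' implementation.

# kernel-based dual estimate (3): integral term minus empirical term
dual_divergence_kernel <- function(phi_par, y, model_density, phi_prime, phi_sharp,
                                   w = bw.nrd0(y), lower = -Inf, upper = Inf) {
  # Rosenblatt-Parzen estimate with a Gaussian kernel and window w
  K <- function(x) sapply(x, function(u) mean(dnorm(u, mean = y, sd = w)))
  integrand <- function(x) phi_prime(model_density(x, phi_par) / K(x)) * model_density(x, phi_par)
  int_term <- integrate(integrand, lower, upper)$value
  emp_term <- mean(phi_sharp(model_density(y, phi_par) / K(y)))
  int_term - emp_term
}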
In this paper, we propose to calculate the MDφDE using an iterative procedure based on the work of Tseng [2] on the log-likelihood function. This procedure has the form of a proximal point algorithm, and extends the EM algorithm. Our convergence proof demands some regularity (continuity and differentiability) of the estimated divergence with respect to the parameter vector ϕ, which cannot simply be checked using (2). Recent results in the book of Rockafellar and Wets [13] provide sufficient conditions to prove continuity and differentiability of supremal functions of the form of (2) with respect to ϕ. Differentiability with respect to ϕ still remains a very hard task; therefore, our results cover cases where the objective function is not differentiable.
The paper is organized as follows: in Section 2, we present the general context. We also present the derivation of our algorithm from the EM algorithm, passing by Tseng's generalization. In Section 3, we present some convergence properties. We discuss in Section 4 a variant of the algorithm with a theoretical global infimum, and give an example of the two-component Gaussian mixture model together with a convergence proof of the EM algorithm in the spirit of our approach. Finally, Section 5 contains simulations confirming our claim about the efficiency and the robustness of our approach in comparison with the MLE. The algorithm is also applied to the so-called minimum density power divergence (MDPD) introduced by [14].

2. A Description of the Algorithm

2.1. General Context and Notations

Let (X, Y) be a couple of random variables with joint probability density function f(x, y|ϕ) parametrized by a vector of parameters ϕ ∈ Φ ⊂ R^d. Let (X_1, Y_1), …, (X_n, Y_n) be n copies of (X, Y) independently and identically distributed. Finally, let (x_1, y_1), …, (x_n, y_n) be n realizations of the n copies of (X, Y). The x_i's are the unobserved data (labels) and the y_i's are the observations. The vector of parameters ϕ is unknown and needs to be estimated. The observed data y_i are supposed to be real numbers, and the labels x_i belong to a space 𝒳 not necessarily finite unless mentioned otherwise. The marginal density of the observed data is given by p_ϕ(y) = ∫_𝒳 f(x, y|ϕ) dx, where dx is a measure defined on the label space (for example, the counting measure if we work with mixture models).
For a parametrized function f with a parameter a, we write f(x|a). We use the notation ϕ^k for sequences, with the index written as a superscript. The derivatives of a real valued function ψ defined on R are denoted ψ', ψ'', etc. We denote ∇f the gradient of a real function f defined on R^d. For a generic function of two (vectorial) arguments D(ϕ|θ), ∇_1 D(ϕ|θ) denotes the gradient with respect to the first (vectorial) variable. Finally, for any set A, we use int(A) to denote the interior of A.

2.2. EM Algorithm and Tseng’s Generalization

The EM algorithm estimates the unknown parameter vector by (see [15]):
\phi^{k+1} = \arg\max_{\phi\in\Phi} \mathbb{E}\left[\log f(\mathbf{X}, \mathbf{Y}|\phi) \,\middle|\, \mathbf{Y} = \mathbf{y}, \phi^k\right],
where X = ( X 1 , , X n ) , Y = ( Y 1 , , Y n ) and y = ( y 1 , , y n ) . By independence between the couples ( X i , Y i ) ’s, the previous iteration may be written as:
\phi^{k+1} = \arg\max_{\phi\in\Phi} \sum_{i=1}^n \mathbb{E}\left[\log f(X_i, Y_i|\phi) \,\middle|\, Y_i = y_i, \phi^k\right] = \arg\max_{\phi\in\Phi} \sum_{i=1}^n \int_{\mathcal{X}} \log\big(f(x, y_i|\phi)\big)\, h_i(x|\phi^k)\,dx,
where h_i(x|ϕ^k) = f(x, y_i|ϕ^k)/p_{ϕ^k}(y_i) is the conditional density of the labels (at step k) given y_i, which we suppose to be positive dx-almost everywhere. It is well known that the EM iterations can be rewritten as a difference between the log-likelihood and a Kullback–Leibler distance-like function. Indeed,
\phi^{k+1} = \arg\max_{\phi\in\Phi} \sum_{i=1}^n \int_{\mathcal{X}} \log\big(h_i(x|\phi)\, p_\phi(y_i)\big)\, h_i(x|\phi^k)\,dx
= \arg\max_{\phi\in\Phi} \sum_{i=1}^n \int_{\mathcal{X}} \log\big(p_\phi(y_i)\big)\, h_i(x|\phi^k)\,dx + \sum_{i=1}^n \int_{\mathcal{X}} \log\big(h_i(x|\phi)\big)\, h_i(x|\phi^k)\,dx
= \arg\max_{\phi\in\Phi} \sum_{i=1}^n \log p_\phi(y_i) + \sum_{i=1}^n \int_{\mathcal{X}} \log\!\left(\frac{h_i(x|\phi)}{h_i(x|\phi^k)}\right) h_i(x|\phi^k)\,dx + \sum_{i=1}^n \int_{\mathcal{X}} \log\big(h_i(x|\phi^k)\big)\, h_i(x|\phi^k)\,dx.
The second line is justified by the fact that h_i(·|ϕ^k) is a density, so it integrates to 1. The last term does not depend on ϕ and, hence, can be omitted. We now have the following iterative procedure:
\phi^{k+1} = \arg\max_{\phi\in\Phi} \sum_{i=1}^n \log p_\phi(y_i) + \sum_{i=1}^n \int_{\mathcal{X}} \log\!\left(\frac{h_i(x|\phi)}{h_i(x|\phi^k)}\right) h_i(x|\phi^k)\,dx. \qquad (7)
The previous iteration has the form of a proximal point maximization of the log-likelihood, i.e., a perturbation of the log-likelihood by a distance-like function defined on the conditional densities of the labels. Tseng [2] generalizes this iteration by allowing any nonnegative convex function ψ to replace the function t ↦ −log(t) + t − 1. Tseng's recurrence is defined by:
\phi^{k+1} = \arg\sup_{\phi\in\Phi}\left\{ J(\phi) - D_\psi(\phi, \phi^k)\right\},
where J is the log-likelihood function and D ψ is given by:
D_\psi(\phi, \phi^k) = \sum_{i=1}^n \int_{\mathcal{X}} \psi\!\left(\frac{h_i(x|\phi)}{h_i(x|\phi^k)}\right) h_i(x|\phi^k)\,dx, \qquad (8)
for any real nonnegative convex function ψ such that ψ(1) = ψ'(1) = 0. D_ψ(ϕ_1, ϕ_2) is nonnegative, and D_ψ(ϕ_1, ϕ_2) = 0 if and only if, for all i, h_i(x|ϕ_1) = h_i(x|ϕ_2), dx-almost everywhere.
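For a mixture model, the label space is finite, so the integral in (8) reduces to a finite sum and D_ψ is straightforward to evaluate. A minimal R sketch, assuming h_new[i, x] = h_i(x|ϕ) and h_old[i, x] = h_i(x|ϕ^k) are stored as n × (number of labels) matrices (the names are ours):

# proximal term (8) for a finite label space; psi is nonnegative with psi(1) = 0
D_psi <- function(h_new, h_old, psi) {
  sum(psi(h_new / h_old) * h_old)   # sum over observations i and labels x
}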

2.3. Generalization of Tseng’s Algorithm

We use the relationship between maximizing the log-likelihood and minimizing the Kullback–Leibler divergence to generalize the previous algorithm. We therefore replace the log-likelihood function by an estimate of a φ-divergence D_φ between the true distribution and the model. We use the dual estimators of the divergence presented earlier in the introduction, (2) or (3), which we denote in the same manner by D̂_φ unless mentioned otherwise. Our new algorithm is defined by:
\phi^{k+1} = \arg\inf_{\phi\in\Phi}\left\{ \hat{D}_\varphi(p_\phi, p_{\phi_T}) + \frac{1}{n} D_\psi(\phi, \phi^k)\right\},
where D_ψ(ϕ, ϕ^k) is defined by (8). When φ(t) = −log(t) + t − 1, it is easy to see that we recover recurrence (7). Indeed, for the case of (2) we have:
\hat{D}_\varphi(p_\phi, p_{\phi_T}) = \sup_{\alpha\in\Phi} \frac{1}{n}\sum_{i=1}^n \log\big(p_\alpha(y_i)\big) - \frac{1}{n}\sum_{i=1}^n \log\big(p_\phi(y_i)\big).
Using the fact that the first term in D̂_φ(p_ϕ, p_{ϕ_T}) does not depend on ϕ, so that it does not count in the arg inf defining ϕ^{k+1}, we easily get (7). The same applies for the case of (3). For notational simplicity, from now on, we redefine D_ψ with a normalization by n, i.e.,
D_\psi(\phi, \phi^k) = \frac{1}{n}\sum_{i=1}^n \int_{\mathcal{X}} \psi\!\left(\frac{h_i(x|\phi)}{h_i(x|\phi^k)}\right) h_i(x|\phi^k)\,dx. \qquad (10)
Hence, our set of algorithms is redefined by:
\phi^{k+1} = \arg\inf_{\phi\in\Phi}\left\{ \hat{D}_\varphi(p_\phi, p_{\phi_T}) + D_\psi(\phi, \phi^k)\right\}. \qquad (11)
We will see later that this iteration forces the divergence to decrease and that, under suitable conditions, it converges to a (local) minimum of D̂_φ(p_ϕ, p_{ϕ_T}). It follows that algorithm (11) is a way to calculate both the MDφDE (4) and the kernel-based MDφDE (5).
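A minimal R sketch of iteration (11) is given below, assuming an objective function objective(phi) that computes D̂_φ(p_ϕ, p_{ϕ_T}) (for instance through (2) or (3)), a function cond_prob(phi) returning the matrix of conditional label probabilities h_i(x|ϕ), and a proximal generator psi; all these names are ours. Each inner minimization uses Nelder–Mead, as in the simulations of Section 5, and the loop stops when the estimated divergence no longer decreases by more than a tolerance.

# proximal point iteration (11)
proximal_mdphide <- function(phi0, objective, cond_prob, psi, max_iter = 100, tol = 1e-6) {
  phi_k <- phi0
  for (k in seq_len(max_iter)) {
    h_old <- cond_prob(phi_k)
    # current proximal step: estimated divergence + normalized proximal term (10)
    step_obj <- function(phi) objective(phi) + sum(psi(cond_prob(phi) / h_old) * h_old) / nrow(h_old)
    phi_next <- optim(phi_k, step_obj, method = "Nelder-Mead")$par
    if (abs(objective(phi_k) - objective(phi_next)) < tol) { phi_k <- phi_next; break }
    phi_k <- phi_next
  }
  phi_k
}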

3. Some Convergence Properties of ϕ k

We show here how, according to some possible situations, one may prove convergence of the algorithm defined by (11). Let ϕ 0 be a given initialization, and define
\Phi^0 := \left\{\phi\in\Phi : \hat{D}_\varphi(p_\phi, p_{\phi_T}) \le \hat{D}_\varphi(p_{\phi^0}, p_{\phi_T})\right\},
which we suppose to be a subset of int(Φ). The idea of defining this set in this context is inherited from the paper of Wu [16], which provided the first correct proof of convergence for the EM algorithm. Before going any further, we recall the following definition of a (generalized) stationary point.
Definition 1. 
Let f : R^d → R be a real valued function. If f is differentiable at a point ϕ^* such that ∇f(ϕ^*) = 0, we then say that ϕ^* is a stationary point of f. If f is not differentiable at ϕ^* but the subgradient of f at ϕ^*, say ∂f(ϕ^*), exists and is such that 0 ∈ ∂f(ϕ^*), then ϕ^* is called a generalized stationary point of f.
Remark 1. 
In the whole paper, the subgradient is defined for any function, not necessarily convex (see Definition 8.3 in [13] for more details).
We will be using the following assumptions:
A0.
The functions ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T}) and D_ψ are lower semicontinuous;
A1.
The functions ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T}), D_ψ and ∇_1 D_ψ are defined and continuous on, respectively, Φ, Φ × Φ and Φ × Φ;
AC.
The function ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T}) is defined and continuous on Φ;
A2.
Φ^0 is a compact subset of int(Φ);
A3.
D_ψ(ϕ, ϕ̄) > 0 for all ϕ̄ ≠ ϕ ∈ Φ.
Recall also that we suppose that h_i(x|ϕ) > 0, dx-a.e. We relax the convexity assumption on the function ψ: we only suppose that ψ is nonnegative and that ψ(t) = 0 iff t = 1. In addition, ψ'(t) = 0 if t = 1.
Continuity and differentiability assumptions on the function ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T}) for the case of (3) can easily be checked using Lebesgue theorems. The continuity assumption for the case of (2) can be checked using Theorem 1.17 or Corollary 10.14 in [13]. Differentiability can also be checked using Corollary 10.14 or Theorem 10.31 in the same book. Concerning D_ψ, continuity and differentiability are obtained merely by fulfilling the conditions of the Lebesgue theorems. When working with mixture models, we only need the continuity and differentiability of ψ and of the functions h_i; the latter is easily deduced from regularity assumptions on the model. For assumption A2, there is no universal method; see Section 4.2 for an example. Assumption A3 can be checked using Lemma 2 in [2].
We start the convergence properties by proving that the objective function D̂_φ(p_ϕ|p_{ϕ_T}) decreases along the sequence (ϕ^k)_k, and give a possible set of conditions for the existence of the sequence (ϕ^k)_k.
Proposition 1. 
(a) Assume that the sequence (ϕ^k)_k is well defined in Φ; then D̂_φ(p_{ϕ^{k+1}}|p_{ϕ_T}) ≤ D̂_φ(p_{ϕ^k}|p_{ϕ_T}), and (b) ∀k, ϕ^k ∈ Φ^0. (c) Assume A0 and A2 are verified; then the sequence (ϕ^k)_k is defined and bounded. Moreover, the sequence (D̂_φ(p_{ϕ^k}|p_{ϕ_T}))_k converges.
Proof. 
We prove ( a ) . We have by definition of the arginf:
\hat{D}_\varphi(p_{\phi^{k+1}}, p_{\phi_T}) + D_\psi(\phi^{k+1}, \phi^k) \le \hat{D}_\varphi(p_{\phi^k}, p_{\phi_T}) + D_\psi(\phi^k, \phi^k).
We use the fact that D_ψ(ϕ^k, ϕ^k) = 0 for the right-hand side and that D_ψ(ϕ^{k+1}, ϕ^k) ≥ 0 for the left-hand side of the previous inequality. Hence, D̂_φ(p_{ϕ^{k+1}}, p_{ϕ_T}) ≤ D̂_φ(p_{ϕ^k}, p_{ϕ_T}).
We prove (b) using the decreasing property proved in (a). We have by recurrence, ∀k, D̂_φ(p_{ϕ^{k+1}}, p_{ϕ_T}) ≤ D̂_φ(p_{ϕ^k}, p_{ϕ_T}) ≤ D̂_φ(p_{ϕ^0}, p_{ϕ_T}). The result follows directly by definition of Φ^0.
We prove (c) by induction on k. For k = 0, clearly ϕ^0 is well defined since we choose it. The choice of the initial point ϕ^0 of the sequence may influence the convergence of the sequence; see the example of the Gaussian mixture in Section 4.2. Suppose, for some k ≥ 0, that ϕ^k exists. We prove that the infimum is attained in Φ^0. Let ϕ ∈ Φ be any vector at which the optimized function has a value not greater than its value at ϕ^k, i.e., D̂_φ(p_ϕ, p_{ϕ_T}) + D_ψ(ϕ, ϕ^k) ≤ D̂_φ(p_{ϕ^k}, p_{ϕ_T}) + D_ψ(ϕ^k, ϕ^k). We have:
\hat{D}_\varphi(p_\phi, p_{\phi_T}) \le \hat{D}_\varphi(p_\phi, p_{\phi_T}) + D_\psi(\phi, \phi^k) \le \hat{D}_\varphi(p_{\phi^k}, p_{\phi_T}) + D_\psi(\phi^k, \phi^k) = \hat{D}_\varphi(p_{\phi^k}, p_{\phi_T}) \le \hat{D}_\varphi(p_{\phi^0}, p_{\phi_T}).
The first inequality follows from the non negativity of D_ψ. As D̂_φ(p_ϕ, p_{ϕ_T}) ≤ D̂_φ(p_{ϕ^0}, p_{ϕ_T}), then ϕ ∈ Φ^0. Thus, the infimum can be calculated over Φ^0 instead of Φ. Since Φ^0 is compact and the optimized function is lower semicontinuous (the sum of two lower semicontinuous functions), the infimum exists and is attained in Φ^0. We may now define ϕ^{k+1} to be a vector whose corresponding value is equal to this infimum.
Convergence of the sequence (D̂_φ(p_{ϕ^k}, p_{ϕ_T}))_k comes from the fact that it is non increasing and bounded. It is non increasing by virtue of (a). Boundedness comes from the lower semicontinuity of ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}). Indeed, ∀k, D̂_φ(p_{ϕ^k}, p_{ϕ_T}) ≥ inf_{ϕ∈Φ^0} D̂_φ(p_ϕ, p_{ϕ_T}). The infimum of a proper lower semicontinuous function on a compact set exists and is attained on this set. Hence, the quantity inf_{ϕ∈Φ^0} D̂_φ(p_ϕ, p_{ϕ_T}) exists and is finite. This ends the proof.   □
Compactness in part (c) can be replaced by inf-compactness of the function ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T}) and continuity of D_ψ with respect to its first argument. The convergence of the sequence (D̂_φ(p_{ϕ^k}|p_{ϕ_T}))_k is an interesting property, since, in general, there is no theoretical guarantee, or it is difficult to prove, that the whole sequence (ϕ^k)_k converges; it may also continue to fluctuate around a minimum. The decrease of the error criterion D̂_φ(p_{ϕ^k}|p_{ϕ_T}) between two iterations helps us decide when to stop the iterative procedure.
Proposition 2. 
Suppose that A1 is verified, Φ^0 is closed and ‖ϕ^{k+1} − ϕ^k‖ → 0.
(a) 
If AC is verified, then any limit point of (ϕ^k)_k is a stationary point of ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T});
(b) 
If AC is dropped, then any limit point of (ϕ^k)_k is a "generalized" stationary point of ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T}), i.e., zero belongs to the subgradient of ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T}) calculated at the limit point.
Proof. 
We prove (a). Let (ϕ^{n_k})_k be a convergent subsequence of (ϕ^k)_k which converges to ϕ^∞. First, ϕ^∞ ∈ Φ^0, because Φ^0 is closed and the subsequence (ϕ^{n_k})_k is a sequence of elements of Φ^0 (proved in Proposition 1b).
Let us now show that the subsequence (ϕ^{n_k+1})_k also converges to ϕ^∞. We simply have:
\|\phi^{n_k+1} - \phi^\infty\| \le \|\phi^{n_k} - \phi^\infty\| + \|\phi^{n_k+1} - \phi^{n_k}\|.
Since ‖ϕ^{k+1} − ϕ^k‖ → 0 and ϕ^{n_k} → ϕ^∞, we conclude that ϕ^{n_k+1} → ϕ^∞.
By definition of ϕ^{n_k+1}, it achieves the infimum in recurrence (11), so that the gradient of the optimized function is zero at ϕ^{n_k+1}:
\nabla \hat{D}_\varphi(p_{\phi^{n_k+1}}, p_{\phi_T}) + \nabla_1 D_\psi(\phi^{n_k+1}, \phi^{n_k}) = 0.
Using the continuity assumptions A1 and AC of the gradients, one can pass to the limit with no problem:
\nabla \hat{D}_\varphi(p_{\phi^\infty}, p_{\phi_T}) + \nabla_1 D_\psi(\phi^\infty, \phi^\infty) = 0.
However, the gradient ∇_1 D_ψ(ϕ^∞, ϕ^∞) = 0 because (recall that ψ'(1) = 0) for any ϕ ∈ Φ
\nabla_1 D_\psi(\phi, \phi) = \sum_{i=1}^n \int_{\mathcal{X}} \frac{\nabla h_i(x|\phi)}{h_i(x|\phi)}\, \psi'\!\left(\frac{h_i(x|\phi)}{h_i(x|\phi)}\right) h_i(x|\phi)\,dx = \sum_{i=1}^n \int_{\mathcal{X}} \nabla h_i(x|\phi)\, \psi'(1)\,dx,
which is equal to zero since ψ'(1) = 0. This implies that ∇D̂_φ(p_{ϕ^∞}, p_{ϕ_T}) = 0.
We prove (b). We use again the definition of the arginf. As the optimized function is not necessarily differentiable at the points of the sequence (ϕ^k)_k, a necessary condition for ϕ^{k+1} to be an infimum is that 0 belongs to the subgradient of the function at ϕ^{k+1}. Since D_ψ(ϕ, ϕ^k) is assumed to be differentiable, the optimality condition translates into:
-\nabla_1 D_\psi(\phi^{k+1}, \phi^k) \in \partial \hat{D}_\varphi(p_{\phi^{k+1}}, p_{\phi_T}), \quad \forall k.
Since ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) is continuous, its subgradient is outer semicontinuous (see [13] Chapter 8, Proposition 7). We use the same arguments presented in (a) to conclude the existence of two subsequences (ϕ^{n_k})_k and (ϕ^{n_k+1})_k which converge to the same limit ϕ^∞. By definition of outer semicontinuity, and since ϕ^{n_k+1} → ϕ^∞, we have:
\limsup_{\phi^{n_k+1}\to\phi^\infty} \partial \hat{D}_\varphi(p_{\phi^{n_k+1}}, p_{\phi_T}) \subset \partial \hat{D}_\varphi(p_{\phi^\infty}, p_{\phi_T}). \qquad (12)
We want to prove that 0 ∈ limsup_{ϕ^{n_k+1}→ϕ^∞} ∂D̂_φ(p_{ϕ^{n_k+1}}, p_{ϕ_T}). By definition of the (outer) limsup (see [13] Chapter 4, Definition 1 or Chapter 5B):
\limsup_{\phi\to\phi^\infty} \partial \hat{D}_\varphi(p_\phi, p_{\phi_T}) = \left\{ u \;\middle|\; \exists \phi^k \to \phi^\infty, \ \exists u^k \to u \text{ with } u^k \in \partial \hat{D}_\varphi(p_{\phi^k}, p_{\phi_T}) \right\}.
In our scenario, ϕ^k = ϕ^{n_k+1}, u = 0 and u^k = −∇_1 D_ψ(ϕ^{n_k+1}, ϕ^{n_k}). The continuity of ∇_1 D_ψ with respect to both arguments and the fact that the two subsequences (ϕ^{n_k+1})_k and (ϕ^{n_k})_k converge to the same limit imply that u^k → −∇_1 D_ψ(ϕ^∞, ϕ^∞) = 0. Hence, u = 0 ∈ limsup_{ϕ^{n_k+1}→ϕ^∞} ∂D̂_φ(p_{ϕ^{n_k+1}}, p_{ϕ_T}). By the inclusion (12), we get our result:
0 \in \partial \hat{D}_\varphi(p_{\phi^\infty}, p_{\phi_T}).
This ends the proof.   □
The assumption ‖ϕ^{k+1} − ϕ^k‖ → 0 used in Proposition 2 is not easy to check unless one has a closed-form expression for ϕ^k. The following proposition gives a method to prove such an assumption. This method seems simpler, but it is not verified in many mixture models (see Section 4.2 for a counterexample).
Proposition 3. 
Assume that A1, A2 and A3 are verified; then ‖ϕ^{k+1} − ϕ^k‖ → 0. Thus, by Proposition 2 (according to whether AC is verified or not), any limit point of the sequence (ϕ^k)_k is a (generalized) stationary point of ϕ ↦ D̂_φ(p_ϕ|p_{ϕ_T}).
Proof. 
By contradiction, let us suppose that ‖ϕ^{k+1} − ϕ^k‖ does not converge to 0. There exist a subsequence and an ε > 0 such that ‖ϕ^{N_0(k)+1} − ϕ^{N_0(k)}‖ > ε for all k ≥ k_0. Since (ϕ^k)_k belongs to the compact set Φ^0, there exists a convergent subsequence (ϕ^{N_1∘N_0(k)})_k such that ϕ^{N_1∘N_0(k)} → ϕ̄. The sequence (ϕ^{N_1∘N_0(k)+1})_k belongs to the compact set Φ^0; therefore, we can extract a further subsequence (ϕ^{N_2∘N_1∘N_0(k)+1})_k such that ϕ^{N_2∘N_1∘N_0(k)+1} → ϕ̃. Besides, ϕ̄ ≠ ϕ̃. Finally, since the sequence (ϕ^{N_1∘N_0(k)})_k is convergent, the further subsequence (ϕ^{N_2∘N_1∘N_0(k)})_k also converges to the same limit ϕ̄. We have thus proved the existence of a subsequence of (ϕ^k)_k, say (ϕ^{N(k)})_k, such that ‖ϕ^{N(k)+1} − ϕ^{N(k)}‖ does not converge to 0 and such that ϕ^{N(k)+1} → ϕ̃, ϕ^{N(k)} → ϕ̄ with ϕ̄ ≠ ϕ̃.
The real sequence (D̂_φ(p_{ϕ^k}, p_{ϕ_T}))_k converges, as proved in Proposition 1c. As a result, both sequences (D̂_φ(p_{ϕ^{N(k)+1}}, p_{ϕ_T}))_k and (D̂_φ(p_{ϕ^{N(k)}}, p_{ϕ_T}))_k converge to the same limit, being subsequences of the same convergent sequence. From the proof of Proposition 1, we can deduce the following inequality:
\hat{D}_\varphi(p_{\phi^{k+1}}, p_{\phi_T}) + D_\psi(\phi^{k+1}, \phi^k) \le \hat{D}_\varphi(p_{\phi^k}, p_{\phi_T}), \qquad (13)
which is also verified for any substitution of k by N(k). By passing to the limit on k, we get D_ψ(ϕ̃, ϕ̄) ≤ 0. However, the distance-like function D_ψ is nonnegative, so that it must be zero. Using assumption A3, D_ψ(ϕ̃, ϕ̄) = 0 implies that ϕ̃ = ϕ̄. This contradicts the hypothesis that ‖ϕ^{k+1} − ϕ^k‖ does not converge to 0.
The second part of the Proposition is a direct result of Proposition 2.   □
Corollary 1. 
Under the assumptions of Proposition 3, the set of accumulation points of (ϕ^k)_k is a connected compact set. Moreover, if ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) is strictly convex in a neighborhood of a limit point of the sequence (ϕ^k)_k, then the whole sequence (ϕ^k)_k converges to a local minimum of D̂_φ(p_ϕ, p_{ϕ_T}).
Proof. 
Since the sequence (ϕ^k)_k is bounded and verifies ‖ϕ^{k+1} − ϕ^k‖ → 0, Theorem 28.1 in [17] implies that the set of accumulation points of (ϕ^k)_k is a connected compact set. It is not empty since Φ^0 is compact. The remainder of the proof is a direct result of Theorem 3.3.1 from [18]. The strict concavity of the objective function around an accumulation point is replaced here by the strict convexity of the estimated divergence.   □
Proposition 3 and Corollary 1 describe what we may hope to get from the sequence (ϕ^k)_k. Convergence of the whole sequence hinges on a local convexity assumption in the neighborhood of a limit point. Although simple, this assumption remains difficult to check since we do not know where the limit points might be. In addition, assumption A3 is very restrictive, and is not verified in mixture models.
Propositions 2 and 3 were developed for the likelihood function in the paper of Tseng [2]. Similar results for a general class of functions replacing D̂_φ and D_ψ, which may not be differentiable (but still continuous), are presented in [3]. In these results, assumption A3 is essential. Although in [18] this problem is avoided, their approach demands that the log-likelihood tends to −∞ as ‖ϕ‖ → ∞. This is simply not verified for mixture models. We present a method similar to the one in [18], based on the idea of Tseng [2] of using the set Φ^0, which is valid for mixtures. We lose, however, the guarantee of consecutive decrease of the sequence (ϕ^k)_k.
Proposition 4. 
Assume A1, AC and A2 are verified. Any limit point of the sequence (ϕ^k)_k is a stationary point of ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}). If AC is dropped, then 0 belongs to the subgradient of ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) calculated at the limit point.
Proof. 
If (ϕ^k)_k converges to, say, ϕ^∞, then the result follows simply from Proposition 2.
Suppose now that (ϕ^k)_k does not converge. Since Φ^0 is compact and ∀k, ϕ^k ∈ Φ^0 (proved in Proposition 1), there exists a subsequence (ϕ^{N_0(k)})_k such that ϕ^{N_0(k)} → ϕ̃. Let us take the subsequence (ϕ^{N_0(k)−1})_k. This subsequence does not necessarily converge; it is still contained in the compact Φ^0, so that we can extract a further subsequence (ϕ^{N_1∘N_0(k)−1})_k which converges to, say, ϕ̄. Now, the subsequence (ϕ^{N_1∘N_0(k)})_k converges to ϕ̃, because it is a subsequence of (ϕ^{N_0(k)})_k. We have proved so far the existence of two convergent subsequences, ϕ^{N(k)−1} and ϕ^{N(k)}, with a priori different limits. For simplicity and without any loss of generality, we will denote these subsequences by (ϕ^k)_k and (ϕ^{k+1})_k, respectively.
Keeping the previous notation, suppose that ϕ^{k+1} → ϕ̃ and ϕ^k → ϕ̄. We use again inequality (13):
\hat{D}_\varphi(p_{\phi^{k+1}}, p_{\phi_T}) + D_\psi(\phi^{k+1}, \phi^k) \le \hat{D}_\varphi(p_{\phi^k}, p_{\phi_T}).
By taking the limits of the two parts of the inequality as k tends to infinity, and using the continuity of the two functions, we have
\hat{D}_\varphi(p_{\tilde{\phi}}, p_{\phi_T}) + D_\psi(\tilde{\phi}, \bar{\phi}) \le \hat{D}_\varphi(p_{\bar{\phi}}, p_{\phi_T}).
Recall that, under A1 and A2, the sequence (D̂_φ(p_{ϕ^k}, p_{ϕ_T}))_k converges, so that it has the same limit for any subsequence, i.e., D̂_φ(p_{ϕ̃}, p_{ϕ_T}) = D̂_φ(p_{ϕ̄}, p_{ϕ_T}). We also use the fact that the distance-like function D_ψ is non negative to deduce that D_ψ(ϕ̃, ϕ̄) = 0. Looking closely at the definition of this divergence (10), we see that if the sum is zero, then each term is also zero since all terms are nonnegative. This means that:
\forall i \in \{1,\dots,n\}, \qquad \int_{\mathcal{X}} \psi\!\left(\frac{h_i(x|\tilde{\phi})}{h_i(x|\bar{\phi})}\right) h_i(x|\bar{\phi})\,dx = 0.
The integrands are nonnegative functions, so they vanish almost everywhere with respect to the measure d x defined on the space of labels.
\forall i \in \{1,\dots,n\}, \qquad \psi\!\left(\frac{h_i(x|\tilde{\phi})}{h_i(x|\bar{\phi})}\right) h_i(x|\bar{\phi}) = 0, \quad dx\text{-a.e.}
The conditional densities h_i are supposed to be positive (which can be ensured by a suitable choice of the initial point ϕ^0), i.e., h_i(x|ϕ̄) > 0, dx-a.e. Hence, ψ(h_i(x|ϕ̃)/h_i(x|ϕ̄)) = 0, dx-a.e. On the other hand, ψ is chosen in a way that ψ(z) = 0 iff z = 1. Therefore:
\forall i \in \{1,\dots,n\}, \qquad h_i(x|\tilde{\phi}) = h_i(x|\bar{\phi}), \quad dx\text{-a.e.} \qquad (14)
Since ϕ^{k+1} is, by definition, an infimum of ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) + D_ψ(ϕ, ϕ^k), the gradient of this function is zero at ϕ^{k+1}. It results that:
\nabla \hat{D}_\varphi(p_{\phi^{k+1}}, p_{\phi_T}) + \nabla_1 D_\psi(\phi^{k+1}, \phi^k) = 0, \quad \forall k.
Taking the limit on k, and using the continuity of the derivatives, we get that:
\nabla \hat{D}_\varphi(p_{\tilde{\phi}}, p_{\phi_T}) + \nabla_1 D_\psi(\tilde{\phi}, \bar{\phi}) = 0. \qquad (15)
Let us write explicitly the gradient of the second divergence:
\nabla_1 D_\psi(\tilde{\phi}, \bar{\phi}) = \sum_{i=1}^n \int_{\mathcal{X}} \frac{\nabla h_i(x|\tilde{\phi})}{h_i(x|\bar{\phi})}\, \psi'\!\left(\frac{h_i(x|\tilde{\phi})}{h_i(x|\bar{\phi})}\right) h_i(x|\bar{\phi})\,dx.
We use now the identities (14), and the fact that ψ'(1) = 0, to deduce that:
\nabla_1 D_\psi(\tilde{\phi}, \bar{\phi}) = 0.
This entails, using (15), that ∇D̂_φ(p_{ϕ̃}, p_{ϕ_T}) = 0.
Comparing this result with the notation considered at the beginning of the proof, we have proved that the limit of the subsequence (ϕ^{N_1∘N_0(k)})_k is a stationary point of the objective function. Therefore, the final step is to deduce the same result for the original convergent subsequence (ϕ^{N_0(k)})_k. This is simply due to the fact that (ϕ^{N_1∘N_0(k)})_k is a subsequence of the convergent sequence (ϕ^{N_0(k)})_k, hence they have the same limit.
When assumption AC is dropped, arguments similar to those used in the proof of Proposition 2b are employed. The optimality condition in (11) implies:
-\nabla_1 D_\psi(\phi^{k+1}, \phi^k) \in \partial \hat{D}_\varphi(p_{\phi^{k+1}}, p_{\phi_T}), \quad \forall k.
The function ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) is continuous, hence its subgradient is outer semicontinuous and:
\limsup_{\phi^{k+1}\to\tilde{\phi}} \partial \hat{D}_\varphi(p_{\phi^{k+1}}, p_{\phi_T}) \subset \partial \hat{D}_\varphi(p_{\tilde{\phi}}, p_{\phi_T}). \qquad (16)
By definition of the limsup:
\limsup_{\phi\to\tilde{\phi}} \partial \hat{D}_\varphi(p_\phi, p_{\phi_T}) = \left\{ u \;\middle|\; \exists \phi^k \to \tilde{\phi}, \ \exists u^k \to u \text{ with } u^k \in \partial \hat{D}_\varphi(p_{\phi^k}, p_{\phi_T}) \right\}.
In our scenario, ϕ^k = ϕ^{k+1}, u = 0 and u^k = −∇_1 D_ψ(ϕ^{k+1}, ϕ^k). We have proved above that ∇_1 D_ψ(ϕ̃, ϕ̄) = 0 using only the convergence of (D̂_φ(p_{ϕ^k}, p_{ϕ_T}))_k, inequality (13) and the properties of D_ψ; assumption AC was not needed. Hence, u^k → 0. This proves that u = 0 ∈ limsup_{ϕ^{k+1}→ϕ̃} ∂D̂_φ(p_{ϕ^{k+1}}, p_{ϕ_T}). Finally, using the inclusion (16), we get our result:
0 \in \partial \hat{D}_\varphi(p_{\tilde{\phi}}, p_{\phi_T}),
which ends the proof.   □
The proof of the previous proposition is very similar to the proof of Proposition 2. The key idea is to use the sequence of conditional densities h_i(x|ϕ^k) instead of the sequence (ϕ^k)_k. Depending on the application, one may be interested only in Proposition 1 or in Propositions 2–4. If one is interested in the parameters, Propositions 2 to 4 should be used, since we need a stable limit of (ϕ^k)_k. If we are only interested in minimizing an error criterion D̂_φ(p_ϕ, p_{ϕ_T}) between the estimated distribution and the true one, Proposition 1 should be sufficient.

4. Case Studies

4.1. An Algorithm With Theoretically Global Infimum Attainment

We present a variant of algorithm (11) which theoretically ensures convergence to a global infimum of the objective function D̂_φ(p_ϕ, p_{ϕ_T}) as soon as there exists a convergent subsequence of (ϕ^k)_k. The idea is the same as in Theorem 3.2.4 of [18]. Define ϕ^{k+1} by:
\phi^{k+1} = \arg\inf_{\phi\in\Phi}\left\{ \hat{D}_\varphi(p_\phi, p_{\phi_T}) + \beta_k D_\psi(\phi, \phi^k)\right\}.
The proof of convergence is very simple and does not depend on the differentiability of either of the two functions D̂_φ or D_ψ. We only assume A1 and A2 to be verified. Let (ϕ^{N(k)})_k be a convergent subsequence and let ϕ^∞ be its limit. Such a subsequence exists because of the compactness of Φ^0 and the fact that the whole sequence (ϕ^k)_k resides in Φ^0 (see Proposition 1b). Suppose also that the sequence (β_k)_k converges to 0 as k goes to infinity.
Now the assumptions of Theorem 3.2.4 from [18] are verified. Thus, following the same lines as the proof of that theorem (inverting all inequalities since we are minimizing instead of maximizing), we may prove that ϕ^∞ is a global infimum of the estimated divergence, that is,
\hat{D}_\varphi(p_{\phi^\infty}, p_{\phi_T}) \le \hat{D}_\varphi(p_\phi, p_{\phi_T}), \quad \forall \phi \in \Phi.
The problem with this approach is that it depends heavily on the fact that the infimum at each step of the algorithm is calculated exactly. This does not happen in general unless the function D̂_φ(p_ϕ, p_{ϕ_T}) + β_k D_ψ(ϕ, ϕ^k) is convex, or unless we dispose of an algorithm that can perfectly solve non convex optimization problems (in which case there is no point in applying an iterative proximal algorithm; we would use the optimization algorithm directly on the objective function D̂_φ(p_ϕ, p_{ϕ_T})). Although in our approach we use a similar assumption to prove the consecutive decrease of D̂_φ(p_ϕ, p_{ϕ_T}), we can replace the infimum calculation in (11) by the following two requirements. We require, at each step, that we find a local infimum of D̂_φ(p_ϕ, p_{ϕ_T}) + D_ψ(ϕ, ϕ^k) whose evaluation by ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) is less than the value at the previous term of the sequence ϕ^k. If we can no longer find any local minimum verifying this claim, the procedure stops with ϕ^{k+1} = ϕ^k. This ensures the validity of all the proofs presented in this paper with no change.

4.2. The Two-Component Gaussian Mixture

We suppose that the model (p_ϕ)_{ϕ∈Φ} is a mixture of two Gaussian densities, and that we are only interested in estimating the means μ = (μ_1, μ_2) ∈ R² and the proportion λ ∈ [η, 1−η]. The use of η is to avoid cancellation of either of the two components, and to keep the hypothesis h_i(x|ϕ) > 0 for x = 1, 2 verified. We also suppose that the component variances are known and equal to 1 (σ_i = 1). The model takes the form
p_{\lambda,\mu}(x) = \frac{\lambda}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x-\mu_1)^2} + \frac{1-\lambda}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x-\mu_2)^2}. \qquad (17)
Here, Φ = [ η , 1 - η ] × R 2 . The regularization term D ψ is defined by (8) where:
h_i(1|\phi) = \frac{\lambda e^{-\frac{1}{2}(y_i-\mu_1)^2}}{\lambda e^{-\frac{1}{2}(y_i-\mu_1)^2} + (1-\lambda) e^{-\frac{1}{2}(y_i-\mu_2)^2}}, \qquad h_i(2|\phi) = 1 - h_i(1|\phi).
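These conditional probabilities are immediate to compute; a small R sketch (the function name is ours), with phi = c(lambda, mu1, mu2):

# conditional label probabilities h_i(1|phi) and h_i(2|phi) for the two-component Gaussian mixture
cond_prob_gauss <- function(phi, y) {
  w1 <- phi[1] * dnorm(y, mean = phi[2], sd = 1)
  w2 <- (1 - phi[1]) * dnorm(y, mean = phi[3], sd = 1)
  h1 <- w1 / (w1 + w2)
  cbind(h1, 1 - h1)   # n x 2 matrix: columns are h_i(1|phi) and h_i(2|phi)
}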
The functions h_i are clearly of class C¹(int(Φ)), and so is D_ψ. We prove that Φ^0 is closed and bounded, which is sufficient to conclude its compactness, since the space [η, 1−η] × R² provided with the Euclidean distance is complete.
If we are using the dual estimator of the φ-divergence given by (2), then assumption A0 can be verified using the maximum theorem of Berge [19]. There is still a great difficulty in studying the properties (closedness or compactness) of the set Φ^0. Moreover, all convergence properties of the sequence (ϕ^k)_k require the continuity of the estimated φ-divergence D̂_φ(p_ϕ, p_{ϕ_T}) with respect to ϕ. In order to prove the continuity of the estimated divergence, we need to assume that Φ is compact, i.e., assume that the means are included in an interval of the form [μ_min, μ_max]. Now, using Theorem 10.31 from [13], ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) is continuous, and differentiable almost everywhere with respect to ϕ.
The compactness assumption of Φ implies directly the compactness of Φ^0. Indeed,
\Phi^0 = \left\{\phi\in\Phi : \hat{D}_\varphi(p_\phi, p_{\phi_T}) \le \hat{D}_\varphi(p_{\phi^0}, p_{\phi_T})\right\} = \left(\phi\mapsto\hat{D}_\varphi(p_\phi, p_{\phi_T})\right)^{-1}\!\left(\left(-\infty, \hat{D}_\varphi(p_{\phi^0}, p_{\phi_T})\right]\right).
Φ^0 is then the inverse image of a closed set by a continuous function, so it is closed in Φ. Hence, it is compact.
Conclusion 1. 
Using Propositions 4 and 1, if Φ = [ η , 1 - η ] × [ μ min , μ max ] 2 , the sequence ( D ^ φ ( p ϕ k , p ϕ T ) ) k defined through Formula (2) converges and there exists a subsequence ( ϕ N ( k ) ) which converges to a stationary point of the estimated divergence. Moreover, every limit point of the sequence ( ϕ k ) k is a stationary point of the estimated divergence.
If we are using the kernel-based dual estimator given by (3) with a Gaussian kernel density estimator, then the function ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) is continuously differentiable over Φ even if the means μ_1 and μ_2 are not bounded. Take, for example, φ = φ_γ defined by (1). There is one condition which relates the window of the kernel, say w, to the value of γ. Indeed, using Formula (3), we can write
\hat{D}_\varphi(p_\phi, p_{\phi_T}) = \frac{1}{\gamma-1}\int \frac{p_\phi^\gamma}{K_{n,w}^{\gamma-1}}(y)\,dy - \frac{1}{\gamma n}\sum_{i=1}^n \frac{p_\phi^\gamma}{K_{n,w}^\gamma}(y_i) - \frac{1}{\gamma(\gamma-1)}.
In order to study the continuity and the differentiability of the estimated divergence with respect to ϕ, it suffices to study the integral term. We have
\frac{p_\phi^\gamma}{K_{n,w}^{\gamma-1}}(y) = \frac{\left[\frac{\lambda}{\sqrt{2\pi}}\exp\!\left(-\frac{1}{2}(y-\mu_1)^2\right) + \frac{1-\lambda}{\sqrt{2\pi}}\exp\!\left(-\frac{1}{2}(y-\mu_2)^2\right)\right]^\gamma}{\left[\frac{1}{nw}\sum_{i=1}^n \exp\!\left(-\frac{(y-y_i)^2}{2w^2}\right)\right]^{\gamma-1}}.
The dominating term at infinity in the numerator is exp(−γy²/2), whereas it is exp(−(γ−1)y²/(2w²)) in the denominator. For the integrand to be bounded by an integrable function independently of ϕ = (λ, μ), it then suffices that −γ + (γ−1)/w² < 0, that is, −γw² + γ − 1 < 0, which is equivalent to γ(w² − 1) > −1. This argument also holds if we differentiate the integrand with respect to λ or either of the means μ_1 or μ_2. For γ = 2 (the Pearson's χ²), we need w² > 1/2. For γ = 1/2 (the Hellinger), there is no condition on w.
Closedness of Φ^0 is proved similarly to the previous case. Boundedness, however, must be treated differently, since Φ is not necessarily compact and is supposed to be Φ = [η, 1−η] × R². For simplicity, take φ = φ_γ. The idea is to choose an initialization ϕ^0 for the proximal algorithm in such a way that Φ^0 does not include unbounded values of the means. Continuity of ϕ ↦ D̂_φ(p_ϕ, p_{ϕ_T}) permits calculation of the limits when either (or both) of the means tends to infinity. If both means go to infinity, then p_ϕ(x) → 0 for every x. Thus, for γ ∈ (0, ∞)\{1}, we have D̂_φ(p_ϕ, p_{ϕ_T}) → 1/(γ(γ−1)). For γ < 0, the limit is infinity. If only one of the means tends to ∞, then the corresponding component vanishes from the mixture. Thus, if we choose ϕ^0 such that:
\hat{D}_\varphi(p_{\phi^0}, p_{\phi_T}) < \min\left(\frac{1}{\gamma(\gamma-1)},\ \inf_{\lambda,\mu} \hat{D}_\varphi(p_{(\lambda,\infty,\mu)}, p_{\phi_T})\right) \quad \text{if } \gamma\in(0,\infty)\setminus\{1\}, \qquad (18)
\hat{D}_\varphi(p_{\phi^0}, p_{\phi_T}) < \inf_{\lambda,\mu} \hat{D}_\varphi(p_{(\lambda,\infty,\mu)}, p_{\phi_T}) \quad \text{if } \gamma<0, \qquad (19)
then the algorithm starts at a point of Φ whose function value is below the limits of D̂_φ(p_ϕ, p_{ϕ_T}) at infinity. By Proposition 1, the algorithm will continue to decrease the value of D̂_φ(p_ϕ, p_{ϕ_T}) and never goes back to the limits at infinity. In addition, the definition of Φ^0 permits us to conclude that, if ϕ^0 is chosen according to conditions (18) and (19), then Φ^0 is bounded. Thus, Φ^0 becomes compact. Unfortunately, the value of inf_{λ,μ} D̂_φ(p_{(λ,∞,μ)}, p_{ϕ_T}) can only be calculated numerically. We will see next that, in the case of the likelihood function, a similar condition will be imposed for the compactness of Φ^0, and there will be no need for any numerical calculation.
Conclusion 2. 
Using Propositions 4 and 1, under conditions (18) and (19) the sequence ( D ^ φ ( p ϕ k , p ϕ T ) ) k defined through Formula (3) converges and there exists a subsequence ( ϕ N ( k ) ) that converges to a stationary point of the estimated divergence. Moreover, every limit point of the sequence ( ϕ k ) k is a stationary point of the estimated divergence.
In the case of the likelihood φ ( t ) = - log ( t ) + t - 1 , the set Φ 0 can be written as:
\Phi^0 = \left\{\phi\in\Phi : J_N(\phi) \ge J_N(\phi^0)\right\} = J_N^{-1}\left(\left[J_N(\phi^0), +\infty\right)\right),
where J_N is the log-likelihood function of the Gaussian mixture model. The log-likelihood function J_N is clearly of class C¹(int(Φ)). We prove that Φ^0 is closed and bounded, which is sufficient to conclude its compactness, since the space [η, 1−η] × R² provided with the Euclidean distance is complete.
Closedness. The set Φ 0 is the inverse image by a continuous function (the log-likelihood) of a closed set. Therefore it is closed in [ η , 1 - η ] × R 2 .
Boundedness. By contradiction, suppose that Φ^0 is unbounded; then there exists a sequence (ϕ^l)_l which tends to infinity. Since λ^l ∈ [η, 1−η], either μ_1^l or μ_2^l tends to infinity. Suppose that both μ_1^l and μ_2^l tend to infinity; we then have J_N(ϕ^l) → −∞. Any finite initialization ϕ^0 will imply that J_N(ϕ^0) > −∞, so that ∀ϕ ∈ Φ^0, J_N(ϕ) ≥ J_N(ϕ^0) > −∞. Thus, it is impossible for both μ_1^l and μ_2^l to go to infinity.
Suppose now that μ_1^l → ∞ and that μ_2^l converges (or that μ_2^l is bounded, in which case we extract a convergent subsequence) to μ_2. The limit of the likelihood has the form:
L(\lambda, \infty, \mu_2) = \prod_{i=1}^n \frac{1-\lambda}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(y_i-\mu_2)^2},
which is bounded by its value for λ = 0 and μ_2 = (1/n)∑_{i=1}^n y_i. Indeed, since 1 − λ ≤ 1, we have:
L(\lambda, \infty, \mu_2) \le \prod_{i=1}^n \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(y_i-\mu_2)^2}.
The right-hand side of this inequality is the likelihood of a Gaussian model N(μ_2, 1), so that it is maximized for μ_2 = (1/n)∑_{i=1}^n y_i. Thus, if ϕ^0 is chosen in such a way that J_N(ϕ^0) > J_N(0, ∞, (1/n)∑_{i=1}^n y_i), the case where μ_1 tends to infinity and μ_2 stays bounded can never occur. For the other case, where μ_2 → ∞ and μ_1 stays bounded, we choose ϕ^0 in such a way that J_N(ϕ^0) > J_N(1, (1/n)∑_{i=1}^n y_i, ∞). In conclusion, with a choice of ϕ^0 such that:
J_N(\phi^0) > \max\left( J_N\!\left(0, \infty, \tfrac{1}{n}\textstyle\sum_{i=1}^n y_i\right),\ J_N\!\left(1, \tfrac{1}{n}\textstyle\sum_{i=1}^n y_i, \infty\right) \right), \qquad (20)
the set Φ 0 is bounded.
This condition on ϕ^0 is very natural and means that we need to begin at a point which is at least better than the extreme cases where we only have one component in the mixture. This can easily be checked by choosing a random vector ϕ^0 and calculating the corresponding log-likelihood value. If J_N(ϕ^0) does not verify the previous condition, we draw another random vector, and so on until satisfaction.
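This rejection procedure can be written in a few lines of R; in the sketch below (our own helper names, with loglik(phi, y) the log-likelihood of the mixture), we use the fact that both terms of the max in (20) reduce to the log-likelihood of a single N(ȳ, 1) component.

# draw a random starting point phi0 = (lambda, mu1, mu2) satisfying condition (20)
draw_init <- function(y, loglik, eta = 0.01) {
  bound <- sum(dnorm(y, mean = mean(y), sd = 1, log = TRUE))   # value of both terms in (20)
  repeat {
    phi0 <- c(runif(1, eta, 1 - eta), runif(2, min(y), max(y)))
    if (loglik(phi0, y) > bound) return(phi0)
  }
}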
Conclusion 3. 
Using Propositions 4 and 1, under condition (20) the sequence ( J N ( ϕ k ) ) k converges and there exists a subsequence ( ϕ N ( k ) ) which converges to a stationary point of the likelihood function. Moreover, every limit point of the sequence ( ϕ k ) k is a stationary point of the likelihood.
Assumption A3 is not fulfilled (this part applies to all of the aforementioned situations). As mentioned in the paper of Tseng [2], for the two-component Gaussian mixture example, by changing μ_1 and μ_2 by the same amount and suitably adjusting λ, the value of h_i(x|ϕ) would be unchanged. We explore this more thoroughly by writing the corresponding equations. Let us suppose, absurdly, that for distinct ϕ and ϕ', we have D_ψ(ϕ|ϕ') = 0. By definition of D_ψ, it is given by a sum of nonnegative terms, which implies that all terms need to be equal to zero. The following lines are equivalent for all i ∈ {1, …, n}:
h_i(1|\lambda,\mu_1,\mu_2) = h_i(1|\lambda',\mu_1',\mu_2'),
\frac{\lambda e^{-\frac{1}{2}(y_i-\mu_1)^2}}{\lambda e^{-\frac{1}{2}(y_i-\mu_1)^2} + (1-\lambda)e^{-\frac{1}{2}(y_i-\mu_2)^2}} = \frac{\lambda' e^{-\frac{1}{2}(y_i-\mu_1')^2}}{\lambda' e^{-\frac{1}{2}(y_i-\mu_1')^2} + (1-\lambda')e^{-\frac{1}{2}(y_i-\mu_2')^2}},
\log\!\left(\frac{1-\lambda}{\lambda}\right) - \frac{1}{2}(y_i-\mu_2)^2 + \frac{1}{2}(y_i-\mu_1)^2 = \log\!\left(\frac{1-\lambda'}{\lambda'}\right) - \frac{1}{2}(y_i-\mu_2')^2 + \frac{1}{2}(y_i-\mu_1')^2.
Looking at this set of n equations as an equality of two polynomials of degree 1 in y at n points, we deduce that, as soon as we have two distinct observations, say y_1 and y_2, the two polynomials must have the same coefficients. Thus, the set of n equations is equivalent to the following two equations:
\mu_1 - \mu_2 = \mu_1' - \mu_2',
\log\!\left(\frac{1-\lambda}{\lambda}\right) + \frac{1}{2}\mu_1^2 - \frac{1}{2}\mu_2^2 = \log\!\left(\frac{1-\lambda'}{\lambda'}\right) + \frac{1}{2}\mu_1'^2 - \frac{1}{2}\mu_2'^2.
These two equations with three variables have an infinite number of solutions. Take, for example, μ_1 = 0, μ_2 = 1, λ = 2/3 and μ_1' = 1/2, μ_2' = 3/2, λ' = 1/2.
Remark 2. 
The previous conclusion can be extended to any two-component mixture of exponential families having the form:
p_\phi(y) = \lambda\, e^{\sum_{i=1}^{m_1} \theta_{1,i}\, y^i - F(\theta_1)} + (1-\lambda)\, e^{\sum_{i=1}^{m_2} \theta_{2,i}\, y^i - F(\theta_2)}.
One may write the corresponding n equations. The resulting polynomial in y_i has degree at most max(m_1, m_2). Thus, if one disposes of max(m_1, m_2) + 1 distinct observations, the two polynomials must have the same set of coefficients. Finally, if (θ_1, θ_2) ∈ R^{d−1} with d > max(m_1, m_2), then assumption A3 does not hold.
Unfortunately, we have no information about the difference between consecutive terms ‖ϕ^{k+1} − ϕ^k‖, except for the case of ψ(t) = φ(t) = −log(t) + t − 1, which corresponds to the classical EM recurrence:
\lambda^{k+1} = \frac{1}{n}\sum_{i=1}^n h_i(1|\phi^k), \qquad \mu_1^{k+1} = \frac{\sum_{i=1}^n y_i\, h_i(1|\phi^k)}{\sum_{i=1}^n h_i(1|\phi^k)}, \qquad \mu_2^{k+1} = \frac{\sum_{i=1}^n y_i\, h_i(2|\phi^k)}{\sum_{i=1}^n h_i(2|\phi^k)}.
Tseng [2] has shown that we can prove directly that ‖ϕ^{k+1} − ϕ^k‖ converges to 0.
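For completeness, one EM step of this recurrence in R, reusing the helper cond_prob_gauss sketched above (again an illustration rather than the code used for the simulations):

# one EM update for the two-component Gaussian mixture with unit variances
em_step <- function(phi, y) {
  h <- cond_prob_gauss(phi, y)               # n x 2 matrix of conditional label probabilities
  lambda <- mean(h[, 1])
  mu1 <- sum(y * h[, 1]) / sum(h[, 1])
  mu2 <- sum(y * h[, 2]) / sum(h[, 2])
  c(lambda, mu1, mu2)
}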

5. Simulation Study

We summarize the results of 100 experiments on 100 samples by giving the average of the estimates and of the error committed, together with the corresponding standard deviations. The error criterion is the total variation distance (TVD), which is calculated using the L_1 distance. Indeed, the Scheffé lemma (see [20], page 129) states that:
\sup_{A\in\mathcal{B}(\mathbb{R})} \left| P_\phi(A) - P_{\phi_T}(A) \right| = \frac{1}{2}\int_{\mathbb{R}} \left| p_\phi(y) - p_{\phi_T}(y) \right| dy.
The TVD gives a measure of the maximum error we may commit when we use the estimated model in place of the true distribution. We consider the Hellinger divergence for the estimators based on φ-divergences, which corresponds to φ(t) = ½(√t − 1)². We prefer the Hellinger divergence because we hope to obtain robust estimators without loss of efficiency (see [21]). D_ψ is calculated with ψ(t) = ½(t − 1)². The kernel-based MDφDE is calculated using the Gaussian kernel, and the window is calculated using Silverman's rule. We included in the comparison the minimum density power divergence (MDPD) of [14]. The estimator is defined by:
\hat{\phi}_n = \arg\inf_{\phi\in\Phi}\left\{ \int p_\phi^{1+a}(z)\,dz - \frac{a+1}{a}\,\frac{1}{n}\sum_{i=1}^n p_\phi^a(y_i)\right\} = \arg\inf_{\phi\in\Phi}\left\{ \mathbb{E}_{P_\phi}\!\left[p_\phi^a\right] - \frac{a+1}{a}\,\mathbb{E}_{P_n}\!\left[p_\phi^a\right]\right\}, \qquad (22)
where a ∈ (0, 1]. This is a Bregman divergence and is known to yield good efficiency and robustness for a suitable choice of the tradeoff parameter. According to the simulation results in [11], the value a = 0.5 seems to give a good tradeoff between robustness against outliers and performance under the model. Notice that the MDPD coincides with the MLE when a tends to zero. Thus, the methodology presented in this article is applicable to this estimator, and the proximal point algorithm can be used to calculate the MDPD. The proximal term is kept the same, i.e., ψ(t) = ½(t − 1)².
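As with the dual estimators, the MDPD objective is simple to evaluate numerically; the following R sketch (our own names; the integral may require finite bounds for some models) implements the criterion inside the arg inf of (22), which can then be minimized with optim, e.g. optim(phi0, mdpd_objective, y = y, model_density = p_model, method = "Nelder-Mead").

# MDPD objective of (22) with tradeoff parameter a
mdpd_objective <- function(phi_par, y, model_density, a = 0.5, lower = -Inf, upper = Inf) {
  int_term <- integrate(function(z) model_density(z, phi_par)^(1 + a), lower, upper)$value
  emp_term <- (a + 1) / a * mean(model_density(y, phi_par)^a)
  int_term - emp_term
}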
Remark 3 
(Note on the robustness of the used estimators). In Section 3, we proved under mild conditions that the proximal point algorithm (11) ensures the decrease of the estimated divergence. This means that, when we use the dual Formulas (2) and (3), the proximal point algorithm (11) returns at convergence the estimators defined by (4) and (5), respectively. Similarly, if we use the density power divergence of Basu et al. [14], then the proximal point algorithm returns at convergence the MDPD defined by (22). The robustness properties of the dual estimators (4) and (5) are studied in [12] and [11], respectively, using the influence function (IF) approach. On the other hand, the robustness properties of the MDPD are studied using the IF approach in [14]. The MDφDE (4) generally has an unbounded IF (see [12], Section 3.1), whereas the IF of the kernel-based MDφDE may be bounded, for example in a Gaussian model and for any φ-divergence with φ = φ_γ and γ ∈ (0, 1); see [11], Example 2. On the other hand, the MDPD generally has a bounded IF as soon as the tradeoff parameter a is positive, in particular in the Gaussian model. The MDPD becomes more robust as the tradeoff parameter a increases (see Section 3.3 in [14]). Therefore, we should expect the proximal point algorithm to produce robust estimators in the case of the kernel-based MDφDE and the MDPD, and thus to obtain better results than the MLE calculated using the EM algorithm.
Simulations from two mixture models are given below—a Gaussian mixture and a Weibull mixture. The MLE for both mixtures was calculated using the EM algorithm.
Optimizations were carried out using the Nelder–Mead algorithm [22] under the statistical tool R [23]. Numerical integrations in the Gaussian mixture were calculated using the distrExIntegrate function of package distrEx. It is a slight modification of the standard function integrate. It performs a Gauss–Legendre quadrature when function integrate returns an error. In the Weibull mixture, we used the integral function from package pracma. Function integral includes a variety of adaptive numerical integration methods such as Kronrod–Gauss quadrature, Romberg’s method, Gauss–Richardson quadrature, Clenshaw–Curtis (not adaptive) and (adaptive) Simpson’s method. Although function integral is slow, it performs better than other functions even if the integrand has a relatively bad behavior.
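The TVD errors reported in Tables 1 and 2 follow directly from Scheffé's identity; a minimal sketch of the computation (our own function name; in practice the infinite range may be truncated or a more robust quadrature used, as discussed above):

# total variation distance between two univariate densities, via Scheffe's lemma
tvd <- function(dens1, dens2, lower = -Inf, upper = Inf) {
  0.5 * integrate(function(y) abs(dens1(y) - dens2(y)), lower, upper)$value
}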

5.1. The Two-Component Gaussian Mixture Revisited

We consider the Gaussian mixture (17) presented earlier, with true parameters λ = 0.35, μ_1 = −2, μ_2 = 1.5 and known variances equal to 1. Contamination was done by adding to the five lowest values of the original sample random observations from the uniform distribution U[−5, −2]. We also added to the five largest values random observations from the uniform distribution U[2, 5]. Results are summarized in Table 1. The EM algorithm was initialized according to condition (20). This condition gave good results when we are under the model, whereas it did not always result in good estimates when outliers were added (the proportion converged towards 0 or 1), and the EM algorithm was then reinitialized manually.
Figure 1 shows the values of the estimated divergence for both Formulas (2) and (3) on a logarithmic scale at each iteration of the algorithm.
Concerning our simulation results, the total variation of all four estimation methods is very close when we are under the model. When we added outliers, the classical MDφDE was as sensitive as the maximum likelihood estimator: the error was doubled. Both the kernel-based MDφDE and the MDPD are clearly robust, since the total variation of these estimators under contamination increased only slightly.

5.2. The Two-Component Weibull Mixture Model

We consider a two-component Weibull mixture with unknown shapes ν_1 = 1.2, ν_2 = 2 and a proportion λ = 0.35. The scales are known and equal to σ_1 = 0.5, σ_2 = 2. The density function is given by:
p_\phi(x) = 2\lambda\nu_1 (2x)^{\nu_1-1} e^{-(2x)^{\nu_1}} + (1-\lambda)\,\frac{\nu_2}{2}\left(\frac{x}{2}\right)^{\nu_2-1} e^{-\left(\frac{x}{2}\right)^{\nu_2}}.
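Using the shape/scale parametrization of dweibull, this density can be coded directly; a small sketch assuming phi = c(lambda, nu1, nu2) and the known scales 0.5 and 2 (the function name is ours):

# two-component Weibull mixture density with known scales
p_weibull_mix <- function(x, phi) {
  phi[1] * dweibull(x, shape = phi[2], scale = 0.5) +
    (1 - phi[1]) * dweibull(x, shape = phi[3], scale = 2)
}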
Contamination was done by replacing 10 observations of each sample, chosen randomly, by 10 i.i.d. observations drawn from a Weibull distribution with shape ν = 0.9 and scale σ = 3. Results are summarized in Table 2. Notice that it would have been better to use asymmetric kernels in order to build the kernel-based MDφDE, since their use in the context of distributions supported on the positive real line is advised in order to reduce the bias at zero; see [11] for a detailed comparison with symmetric kernels. This is not, however, the goal of this paper. In addition, the use of symmetric kernels in this mixture model gave satisfactory results.
Simulations results in Table 2 confirm once more the validity of our proximal point algorithm and the clear robustness of both the kernel-based MD φ DE and the MDPD.

6. Conclusions

We introduced in this paper a proximal point algorithm that permits the calculation of divergence-based estimators. We studied the theoretical convergence of the algorithm and verified it in a two-component Gaussian mixture. We performed several simulations which confirmed that the algorithm works and provides a way to calculate divergence-based estimators. We also applied our proximal algorithm to a Bregman divergence estimator (the MDPD), and the algorithm succeeded in producing the MDPD. Further investigations about the role of the proximal term, and a comparison with direct optimization methods in order to show the practical use of the algorithm, may be considered in a future work.

Acknowledgments

The authors are grateful to Laboratoire de Statistique Théorique et Appliquée, Université Pierre et Marie Curie, for financial support.

Author Contributions

Michel Broniatowski proposed use of a proximal-point algorithm in order to calculate the MD φ DE. Michel Broniatowski proposed building a work based on the paper of [2]. Diaa Al Mohamad proposed the generalization in Section 2.3 and provided all of the convergence results in Section 3. Diaa Al Mohamad also conceived the simulations. Finally, Michel Broniatowski contributed to improving the text written by Diaa Al Mohamad. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. McLachlan, G.J.; Krishnan, T. The EM Algorithm and Extensions; Wiley: Hoboken, NJ, USA, 2007. [Google Scholar]
  2. Tseng, P. An Analysis of the EM Algorithm and Entropy-Like Proximal Point Methods. Math. Oper. Res. 2004, 29, 27–44. [Google Scholar] [CrossRef]
  3. Chrétien, S.; Hero, A.O. Generalized Proximal Point Algorithms and Bundle Implementations. Available online: http://www.eecs.umich.edu/techreports/systems/cspl/cspl-316.pdf (accessed on 25 July 2016).
  4. Goldstein, A.; Russak, I. How good are the proximal point algorithms? Numer. Funct. Anal. Optim. 1987, 9, 709–724. [Google Scholar] [CrossRef]
  5. Chrétien, S.; Hero, A.O. Acceleration of the EM algorithm via proximal point iterations. In Proceedings of the IEEE International Symposium on Information Theory, Cambridge, MA, USA, 16–21 August 1998.
  6. Csiszár, I. Eine informationstheoretische Ungleichung und ihre anwendung auf den Beweis der ergodizität von Markoffschen Ketten. Publ. Math. Inst. Hung. Acad. Sci. 1963, 8, 95–108. (In German) [Google Scholar]
  7. Broniatowski, M.; Keziou, A. Parametric estimation and tests through divergences and the duality technique. J. Multivar. Anal. 2009, 100, 16–36. [Google Scholar] [CrossRef]
  8. Cressie, N.; Read, T.R.C. Multinomial goodness-of-fit tests. J. R. Stat. Soc. Ser. B 1984, 46, 440–464. [Google Scholar]
  9. Broniatowski, M.; Keziou, A. Minimization of divergences on sets of signed measures. Stud. Sci. Math. Hung. 2006, 43, 403–442. [Google Scholar] [CrossRef]
  10. Liese, F.; Vajda, I. On Divergences and Informations in Statistics and Information Theory. IEEE Trans. Inf. Theory 2006, 52, 4394–4412. [Google Scholar] [CrossRef]
  11. Al Mohamad, D. Towards a better understanding of the dual representation of phi divergences. 2016; arXiv:1506.02166. [Google Scholar]
  12. Toma, A.; Broniatowski, M. Dual divergence estimators and tests: Robustness results. J. Multivar. Anal. 2011, 102, 20–36. [Google Scholar] [CrossRef]
  13. Rockafellar, R.T.; Wets, R.J.B. Variational Analysis, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  14. Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M.C. Robust and Efficient Estimation by Minimising a Density Power Divergence. Biometrika 1998, 85, 549–559. [Google Scholar] [CrossRef]
  15. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39, 1–38. [Google Scholar]
  16. Wu, C.F.J. On the Convergence Properties of the EM Algorithm. Ann. Stat. 1983, 11, 95–103. [Google Scholar] [CrossRef]
  17. Ostrowski, A. Solution of Equations and Systems of Equations; Academic Press: Cambridge, MA, USA, 1966. [Google Scholar]
  18. Chrétien, S.; Hero, A.O. On EM algorithms and their proximal generalizations. ESAIM Probabil. Stat. 2008, 12, 308–326. [Google Scholar] [CrossRef]
  19. Berge, C. Topological Spaces: Including a Treatment of Multi-valued Functions, Vector Spaces, and Convexity; Dover Publications: Mineola, NY, USA, 1963. [Google Scholar]
  20. Meister, A. Deconvolution Problems in Nonparametric Statistics; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  21. Jiménez, R.; Shao, Y. On robustness and efficiency of minimum divergence estimators. Test 2001, 10, 241–248. [Google Scholar] [CrossRef]
  22. Nelder, J.A.; Mead, R. A Simplex Method for Function Minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  23. The R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2013. [Google Scholar]
Figure 1. Decrease of the (estimated) Hellinger divergence between the true density and the estimated model at each iteration in the Gaussian mixture. The figure to the left is the curve of the values of the kernel-based dual Formula (3). The figure to the right is the curve of values of the classical dual Formula (2). Values are taken at a logarithmic scale log ( 1 + x ) .
Table 1. The mean and the standard deviation of the estimates and the errors committed in a 100-run experiment of a two-component Gaussian mixture. The true set of parameters is λ = 0.35, μ_1 = −2, μ_2 = 1.5.
Estimation Method          λ      sd(λ)   μ1      sd(μ1)   μ2     sd(μ2)   TVD    sd(TVD)
Without Outliers
Classical MDφDE            0.349  0.049   −1.989  0.207    1.511  0.151    0.061  0.029
New MDφDE–Silverman        0.349  0.049   −1.987  0.208    1.520  0.155    0.062  0.029
MDPD a = 0.5               0.360  0.053   −1.997  0.226    1.489  0.135    0.065  0.025
EM (MLE)                   0.360  0.054   −1.989  0.204    1.493  0.136    0.064  0.025
With 10% Outliers
Classical MDφDE            0.357  0.022   −2.629  0.094    1.734  0.111    0.146  0.034
New MDφDE–Silverman        0.352  0.057   −1.756  0.224    1.358  0.132    0.087  0.033
MDPD a = 0.5               0.364  0.056   −1.819  0.218    1.404  0.132    0.078  0.030
EM (MLE)                   0.342  0.064   −2.617  0.288    1.713  0.172    0.150  0.034
Table 2. The mean and the standard deviation of the estimates and the errors committed in a 100-run experiment of a two-component Weibull mixture. The true set of parameters is λ = 0.35, ν_1 = 1.2, ν_2 = 2.
Estimation Method          λ      sd(λ)   ν1     sd(ν1)   ν2     sd(ν2)   TVD    sd(TVD)
Without Outliers
Classical MDφDE            0.356  0.066   1.245  0.228    2.055  0.237    0.052  0.025
New MDφDE–Silverman        0.387  0.067   1.229  0.241    2.145  0.289    0.058  0.029
MDPD a = 0.5               0.354  0.068   1.238  0.230    2.071  0.345    0.056  0.029
EM (MLE)                   0.355  0.066   1.245  0.228    2.054  0.237    0.052  0.025
With 10% Outliers
Classical MDφDE            0.250  0.085   1.089  0.300    1.470  0.335    0.092  0.037
New MDφDE–Silverman        0.349  0.076   1.122  0.252    1.824  0.324    0.067  0.034
MDPD a = 0.5               0.322  0.077   1.158  0.236    1.858  0.344    0.060  0.029
EM (MLE)                   0.259  0.095   0.941  0.368    1.565  0.325    0.095  0.035
