Article

Statistical Divergences between Densities of Truncated Exponential Families with Nested Supports: Duo Bregman and Duo Jensen Divergences

Sony Computer Science Laboratories, Tokyo 141-0022, Japan
Entropy 2022, 24(3), 421; https://doi.org/10.3390/e24030421
Submission received: 2 March 2022 / Revised: 14 March 2022 / Accepted: 16 March 2022 / Published: 17 March 2022
(This article belongs to the Special Issue Information and Divergence Measures)

Abstract

By calculating the Kullback–Leibler divergence between two probability measures belonging to different exponential families dominated by the same measure, we obtain a formula that generalizes the ordinary Fenchel–Young divergence. Inspired by this formula, we define the duo Fenchel–Young divergence and report a majorization condition on its pair of strictly convex generators, which guarantees that this divergence is always non-negative. The duo Fenchel–Young divergence is also equivalent to a duo Bregman divergence. We show how to use these duo divergences by calculating the Kullback–Leibler divergence between densities of truncated exponential families with nested supports, and report a formula for the Kullback–Leibler divergence between truncated normal distributions. Finally, we prove that the skewed Bhattacharyya distances between truncated exponential families amount to equivalent skewed duo Jensen divergences.


1. Introduction

1.1. Exponential Families

Let $(\mathcal{X}, \Sigma)$ be a measurable space, and consider a regular minimal exponential family [1] $\mathcal{E}$ of probability measures $P_\theta$ all dominated by a base measure $\mu$ ($P_\theta \ll \mu$):
$\mathcal{E} = \{ P_\theta \ :\ \theta \in \Theta \}.$
The Radon–Nikodym derivatives or densities of the probability measures P θ with respect to μ can be written canonically as
$p_\theta(x) = \frac{\mathrm{d}P_\theta}{\mathrm{d}\mu}(x) = \exp\left( \theta^\top t(x) - F(\theta) + k(x) \right),$
where $\theta$ denotes the natural parameter, $t(x)$ the sufficient statistic [1,2,3,4], and $F(\theta)$ the log-normalizer [1] (or cumulant function). The optional auxiliary term $k(x)$ allows us to change the base measure $\mu$ into the measure $\nu$ such that $\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x) = e^{k(x)}$. The order $D$ of the family is the dimension of the natural parameter space $\Theta$:
$\Theta = \left\{ \theta \in \mathbb{R}^D \ :\ \int_{\mathcal{X}} \exp\left( \theta^\top t(x) + k(x) \right) \mathrm{d}\mu(x) < \infty \right\},$
where $\mathbb{R}$ denotes the set of reals. The sufficient statistic $t(x) = (t_1(x), \ldots, t_D(x))$ is a vector of $D$ functions. The sufficient statistic $t(x)$ is said to be minimal when the $D+1$ functions $1, t_1(x), \ldots, t_D(x)$ are linearly independent [1]. The sufficient statistics $t(x)$ are such that $\Pr[X \mid \theta] = \Pr[X \mid t(X)]$. That is, all information necessary for the statistical inference of parameter $\theta$ is contained in $t(X)$. Exponential families are characterized as families of parametric distributions with finite-dimensional sufficient statistics [1]. Exponential families $\{p_\lambda\}$ include among others the exponential, normal, gamma/beta, inverse gamma, inverse Gaussian, and Wishart distributions once a reparameterization $\theta = \theta(\lambda)$ of the parametric distributions $\{p_\lambda\}$ is performed to reveal their natural parameters [1].
When the sufficient statistic $t(x)$ is $x$, these exponential families [1] are called natural exponential families or tilted exponential families [5] in the literature. Indeed, the distributions $P_\theta$ of the exponential family $\mathcal{E}$ can be interpreted as distributions obtained by tilting the base measure $\mu$ [6]. In this paper, we consider either discrete exponential families like the family of Poisson distributions (univariate distributions of order $D=1$ with respect to the counting measure) or continuous exponential families like the family of normal distributions (univariate distributions of order $D=2$ with respect to the Lebesgue measure). The Radon–Nikodym derivative of a discrete exponential family is a probability mass function (pmf), and the Radon–Nikodym derivative of a continuous exponential family is a probability density function (pdf). The support of a pmf $p(x)$ is $\mathrm{supp}(p) = \{ x \in \mathbb{Z} \ :\ p(x) > 0 \}$ (where $\mathbb{Z}$ denotes the set of integers) and the support of a $d$-variate pdf $p(x)$ is $\mathrm{supp}(p) = \{ x \in \mathbb{R}^d \ :\ p(x) > 0 \}$. The Poisson distributions have support $\mathbb{N} \cup \{0\}$ where $\mathbb{N}$ denotes the set of natural numbers $\{1, 2, \ldots\}$. Densities of an exponential family all have coinciding support [1].

1.2. Truncated Exponential Families with Nested Supports

In this paper, we shall consider truncated exponential families [7] with nested supports. A truncated exponential family is a set of parametric probability distributions obtained by truncating the support of an exponential family. Truncated exponential families are exponential families, but their statistical inference is more subtle [8,9]. Let $\mathcal{E}_{\mathrm{Trunc}} = \{q_\theta\}$ be a truncated exponential family of $\mathcal{E} = \{p_\theta\}$ with nested supports $\mathrm{supp}(q_\theta) \subset \mathrm{supp}(p_\theta)$. The canonical decompositions of the densities $p_\theta$ and $q_\theta$ have the following expressions:
$p_\theta(x) = \exp\left( \theta^\top t(x) + k(x) - F(\theta) \right),$
$q_\theta(x) = \frac{p_\theta(x)}{Z_{\mathcal{X}_{\mathrm{Trunc}}}(\theta)} = \exp\left( \theta^\top t(x) + k(x) - F_{\mathrm{Trunc}}(\theta) \right),$
where the log-normalizer of the truncated exponential family is
$F_{\mathrm{Trunc}}(\theta) = F(\theta) + \log Z_{\mathcal{X}_{\mathrm{Trunc}}}(\theta),$
and $Z_{\mathcal{X}_{\mathrm{Trunc}}}(\theta)$ is a normalizing term that takes into account the truncated support $\mathcal{X}_{\mathrm{Trunc}}$. These equations show that densities of truncated exponential families only differ by their log-normalizer functions. Let $\mathcal{X}_{\mathrm{Trunc}} = \mathrm{supp}(q_\theta)$ denote the support of the distributions of $\mathcal{E}_{\mathrm{Trunc}}$ and $\mathcal{X} = \mathrm{supp}(p_\theta)$ the support of $\mathcal{E}$. Family $\mathcal{E}_{\mathrm{Trunc}}$ is a truncated exponential family of $\mathcal{E}$ that can be notationally written as $\mathcal{E}_{\mathcal{X}_{\mathrm{Trunc}}}$. Family $\mathcal{E}$ can also be interpreted as the (un)truncated exponential family $\mathcal{E}_{\mathcal{X}}$ with densities $p_\theta^{\mathcal{X}} = p_\theta$. A truncated exponential family $\mathcal{E}_{\mathcal{X}_{\mathrm{Trunc}}}$ of $\mathcal{E}$ is said to have nested support when $\mathcal{X}_{\mathrm{Trunc}} \subset \mathcal{X}$. For example, the family of half-normal distributions defined on the support $\mathcal{X}_{\mathrm{Trunc}} = [0, \infty)$ is a nested truncated exponential family of the family of normal distributions defined on the support $\mathcal{X} = (-\infty, \infty)$.

1.3. Kullback–Leibler Divergence between Exponential Family Distributions

For two $\sigma$-finite probability measures $P$ and $Q$ on $(\mathcal{X}, \Sigma)$ such that $P$ is dominated by $Q$ ($P \ll Q$), the Kullback–Leibler divergence between $P$ and $Q$ is defined by
$D_{\mathrm{KL}}[P : Q] = \int_{\mathcal{X}} \log \frac{\mathrm{d}P}{\mathrm{d}Q} \, \mathrm{d}P = E_P\left[ \log \frac{\mathrm{d}P}{\mathrm{d}Q} \right],$
where $E_P[X]$ denotes the expectation of a random variable $X \sim P$ [10]. When $P \not\ll Q$, we set $D_{\mathrm{KL}}[P : Q] = +\infty$. Gibbs' inequality [11] $D_{\mathrm{KL}}[P : Q] \geq 0$ shows that the Kullback–Leibler divergence (KLD for short) is always non-negative. The proof of Gibbs' inequality relies on Jensen's inequality and holds for the wide class of $f$-divergences [12] induced by convex generators $f(u)$:
$I_f[P : Q] = \int_{\mathcal{X}} f\left( \frac{\mathrm{d}Q}{\mathrm{d}P} \right) \mathrm{d}P \geq f\left( \int_{\mathcal{X}} \frac{\mathrm{d}Q}{\mathrm{d}P} \, \mathrm{d}P \right) = f(1).$
The KLD is an $f$-divergence obtained for the convex generator $f(u) = -\log u$.

1.4. Kullback–Leibler Divergence between Exponential Family Densities

It is well-known that the KLD between two distributions P θ 1 and P θ 2 of E amounts to computing an equivalent Fenchel–Young divergence [13]:
$D_{\mathrm{KL}}[P_{\theta_1} : P_{\theta_2}] = \int_{\mathcal{X}} p_{\theta_1}(x) \log \frac{p_{\theta_1}(x)}{p_{\theta_2}(x)} \, \mathrm{d}\mu(x) = Y_{F, F^*}(\theta_2, \eta_1),$
where $\eta = \nabla F(\theta) = E_{P_\theta}[t(x)]$ is the moment parameter [1] and
$\nabla F(\theta) = \left( \partial_{\theta_1} F(\theta), \ldots, \partial_{\theta_D} F(\theta) \right)^\top$
is the gradient of $F$ with respect to $\theta = [\theta_1, \ldots, \theta_D]^\top$. The Fenchel–Young divergence is defined for a pair of strictly convex conjugate functions [14] $F(\theta)$ and $F^*(\eta)$ related by the Legendre–Fenchel transform by
$Y_{F, F^*}(\theta_1, \eta_2) := F(\theta_1) + F^*(\eta_2) - \theta_1^\top \eta_2.$
Amari (1985) first introduced this formula as the canonical divergence of dually flat spaces in information geometry [15] (Equation 3.21), and proved that the Fenchel–Young divergence is obtained as the KLD between densities belonging to the same exponential family [15] (Theorem 3.7). Azoury and Warmuth (2001) expressed the KLD $D_{\mathrm{KL}}[P_{\theta_1} : P_{\theta_2}]$ using dual Bregman divergences in [13]:
$D_{\mathrm{KL}}[P_{\theta_1} : P_{\theta_2}] = B_F(\theta_2 : \theta_1) = B_{F^*}(\eta_1 : \eta_2),$
where a Bregman divergence [16] $B_F(\theta_1 : \theta_2)$ is defined for a strictly convex and differentiable generator $F(\theta)$ by:
$B_F(\theta_1 : \theta_2) := F(\theta_1) - F(\theta_2) - (\theta_1 - \theta_2)^\top \nabla F(\theta_2).$
Acharyya (2013) termed the divergence $Y_{F, F^*}$ the Fenchel–Young divergence in his PhD thesis [17], and Blondel et al. (2020) called such divergences Fenchel–Young losses in the context of machine learning [18] (Equation (9) in Definition 2). The divergence was also called the Legendre–Fenchel divergence by the author in [19]. The Fenchel–Young divergence stems from the Fenchel–Young inequality [14,20]:
$F(\theta_1) + F^*(\eta_2) \geq \theta_1^\top \eta_2,$
with equality if and only if $\eta_2 = \nabla F(\theta_1)$.
Figure 1 visualizes the 1D Fenchel–Young divergence and gives a geometric proof that $Y_{F, F^*}(\theta_1, \eta_2) \geq 0$ with equality if and only if $\eta_2 = F'(\theta_1)$. Indeed, by considering the behavior of the Legendre–Fenchel transformation under translations:
  • if $F_t(\theta) = F(\theta + t)$ then $F_t^*(\eta) = F^*(\eta) - \eta t$ for all $t \in \mathbb{R}$, and
  • if $F_\lambda(\theta) = F(\theta) + \lambda$ then $F_\lambda^*(\eta) = F^*(\eta) - \lambda$ for all $\lambda \in \mathbb{R}$,
we may assume without loss of generality that $F(0) = 0$. The derivative $F'(\theta)$ is strictly increasing and continuous since $F(\theta)$ is a strictly convex and differentiable function. Thus we have $F(\theta) = \int_0^\theta F'(u) \, \mathrm{d}u$ and $F^*(\eta) = \int_0^\eta (F^*)'(v) \, \mathrm{d}v = \int_0^\eta (F')^{-1}(v) \, \mathrm{d}v$.
Figure 1. Visualizing the Fenchel–Young divergence.
The Bregman divergence $B_F(\theta_1 : \theta_2)$ amounts to a dual Bregman divergence [13] between the dual parameters with swapped order: $B_F(\theta_1 : \theta_2) = B_{F^*}(\eta_2 : \eta_1)$ where $\eta_i = \nabla F(\theta_i)$ for $i \in \{1, 2\}$. Thus the KLD between two distributions $P_{\theta_1}$ and $P_{\theta_2}$ of $\mathcal{E}$ can be expressed equivalently as follows:
$D_{\mathrm{KL}}[P_{\theta_1} : P_{\theta_2}] = Y_{F, F^*}(\theta_2 : \eta_1) = B_F(\theta_2 : \theta_1) = B_{F^*}(\eta_1 : \eta_2) = Y_{F^*, F}(\eta_1 : \theta_2).$
The symmetrized Kullback–Leibler divergence $D_J[P_{\theta_1} : P_{\theta_2}]$ between two distributions $P_{\theta_1}$ and $P_{\theta_2}$ of $\mathcal{E}$ is called Jeffreys' divergence [21] and amounts to a symmetrized Bregman divergence [22]:
$D_J[P_{\theta_1} : P_{\theta_2}] = D_{\mathrm{KL}}[P_{\theta_1} : P_{\theta_2}] + D_{\mathrm{KL}}[P_{\theta_2} : P_{\theta_1}],$
$= B_F(\theta_2 : \theta_1) + B_F(\theta_1 : \theta_2),$
$= (\theta_2 - \theta_1)^\top (\eta_2 - \eta_1) =: S_F(\theta_1, \theta_2).$
Note that the Bregman divergence $B_F(\theta_1 : \theta_2)$ can also be interpreted as a surface area:
$B_F(\theta_1 : \theta_2) = \int_{\theta_2}^{\theta_1} \left( F'(\theta) - F'(\theta_2) \right) \mathrm{d}\theta.$
Figure 2 illustrates the sided and symmetrized Bregman divergences.
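As a quick sanity check of these identities, the following Python sketch (not part of the original article; it assumes NumPy and SciPy are available, and the helper names are ours) compares the KLD between two exponential distributions, computed by numerical integration, with the closed-form Bregman divergence $B_F(\theta_2 : \theta_1)$ for the generator $F(\theta) = -\log \theta$ of that family.

```python
# Illustrative check (assumes numpy and scipy): for exponential densities
# p_lambda(x) = lambda * exp(-lambda * x) with natural parameter theta = lambda
# and log-normalizer F(theta) = -log(theta), the KLD obtained by numerical
# integration matches the Bregman divergence B_F(theta2 : theta1).
import numpy as np
from scipy.integrate import quad

def kld_numerical(l1, l2):
    # D_KL[p_l1 : p_l2] = int_0^inf p_l1(x) * (log p_l1(x) - log p_l2(x)) dx
    log_ratio = lambda x: np.log(l1 / l2) + (l2 - l1) * x
    return quad(lambda x: l1 * np.exp(-l1 * x) * log_ratio(x), 0, np.inf)[0]

def bregman_F(theta1, theta2):
    # B_F(theta1 : theta2) = F(theta1) - F(theta2) - (theta1 - theta2) * F'(theta2)
    F = lambda t: -np.log(t)
    gradF = lambda t: -1.0 / t
    return F(theta1) - F(theta2) - (theta1 - theta2) * gradF(theta2)

l1, l2 = 0.7, 2.3
print(kld_numerical(l1, l2), bregman_F(l2, l1))  # both approx 1.0961
```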

1.5. Contributions and Paper Outline

We recall in Section 2 the formula obtained for the Kullback–Leibler divergence between two densities belonging to different exponential families [23] (Equation (29)). Inspired by this formula, we give a definition of the duo Fenchel–Young divergence induced by a pair of strictly convex functions $F_1$ and $F_2$ (Definition 1) in Section 3, and prove that the divergence is always non-negative provided that $F_1$ upper bounds $F_2$. We then define the duo Bregman divergence (Definition 2) corresponding to the duo Fenchel–Young divergence. In Section 4, we show that the Kullback–Leibler divergence between a truncated density and a density of the same parametric exponential family amounts to a duo Fenchel–Young divergence, or equivalently to a duo Bregman divergence on swapped parameters (Theorem 1). That is, we consider a truncated exponential family [7] $\mathcal{E}_1$ of an exponential family $\mathcal{E}_2$ such that the common support of the distributions of $\mathcal{E}_1$ is contained in the common support of the distributions of $\mathcal{E}_2$ and both canonical decompositions of the families coincide (see Equation (2)). In particular, when $\mathcal{E}_2$ is also a truncated exponential family of $\mathcal{E}$, we express the KLD between two truncated distributions as a duo Bregman divergence. As examples, we report the formula for the Kullback–Leibler divergence between two densities of truncated exponential families (Corollary 1), and illustrate the formula for the Kullback–Leibler divergence between truncated exponential distributions (Example 6) and for the Kullback–Leibler divergence between truncated normal distributions (Example 7).
In Section 5, we further consider the skewed Bhattacharyya distance between densities of truncated exponential families and prove that it amounts to a duo Jensen divergence (Theorem 2). Finally, we conclude in Section 6.

2. Kullback–Leibler Divergence between Different Exponential Families

Consider now two exponential families [1] $\mathcal{P}$ and $\mathcal{Q}$ defined by their Radon–Nikodym derivatives with respect to two positive measures $\mu_P$ and $\mu_Q$ on $(\mathcal{X}, \Sigma)$:
$\mathcal{P} = \left\{ P_\theta \ :\ \theta \in \Theta \right\},$
$\mathcal{Q} = \left\{ Q_{\theta'} \ :\ \theta' \in \Theta' \right\}.$
The corresponding natural parameter spaces are
$\Theta = \left\{ \theta \in \mathbb{R}^D \ :\ \int_{\mathcal{X}} \exp\left( \theta^\top t_P(x) + k_P(x) \right) \mathrm{d}\mu_P(x) < \infty \right\},$
$\Theta' = \left\{ \theta' \in \mathbb{R}^{D'} \ :\ \int_{\mathcal{X}} \exp\left( \theta'^\top t_Q(x) + k_Q(x) \right) \mathrm{d}\mu_Q(x) < \infty \right\}.$
The order of $\mathcal{P}$ is $D$, $t_P(x)$ denotes the sufficient statistics of $P_\theta$, and $k_P(x)$ is a term used to adjust/tilt the base measure $\mu_P$. Similarly, the order of $\mathcal{Q}$ is $D'$, $t_Q(x)$ denotes the sufficient statistics of $Q_{\theta'}$, and $k_Q(x)$ is an optional term used to adjust the base measure $\mu_Q$. Let $p_\theta$ and $q_{\theta'}$ denote the Radon–Nikodym derivatives with respect to the measures $\mu_P$ and $\mu_Q$, respectively:
$p_\theta = \frac{\mathrm{d}P_\theta}{\mathrm{d}\mu_P} = \exp\left( \theta^\top t_P(x) - F_P(\theta) + k_P(x) \right),$
$q_{\theta'} = \frac{\mathrm{d}Q_{\theta'}}{\mathrm{d}\mu_Q} = \exp\left( \theta'^\top t_Q(x) - F_Q(\theta') + k_Q(x) \right),$
where $F_P(\theta)$ and $F_Q(\theta')$ denote the corresponding log-normalizers of $\mathcal{P}$ and $\mathcal{Q}$, respectively:
$F_P(\theta) = \log \int \exp\left( \theta^\top t_P(x) + k_P(x) \right) \mathrm{d}\mu_P(x),$
$F_Q(\theta') = \log \int \exp\left( \theta'^\top t_Q(x) + k_Q(x) \right) \mathrm{d}\mu_Q(x).$
The functions F P and F Q are strictly convex and real analytic [1]. Hence, those functions are infinitely many times differentiable on their open natural parameter spaces.
Consider the KLD between $P_\theta \in \mathcal{P}$ and $Q_{\theta'} \in \mathcal{Q}$ such that $\mu_P = \mu_Q$ (and hence $P_\theta \ll Q_{\theta'}$). The KLD between $P_\theta$ and $Q_{\theta'}$ was first considered in [23]:
$D_{\mathrm{KL}}[P_\theta : Q_{\theta'}] = E_{P_\theta}\left[ \log \frac{\mathrm{d}P_\theta}{\mathrm{d}Q_{\theta'}} \right],$
$= E_{P_\theta}\left[ \theta^\top t_P(x) - \theta'^\top t_Q(x) - F_P(\theta) + F_Q(\theta') + k_P(x) - k_Q(x) \right] \quad \left( \text{since } \tfrac{\mathrm{d}\mu_P}{\mathrm{d}\mu_Q} = 1 \right),$
$= F_Q(\theta') - F_P(\theta) + \theta^\top E_{P_\theta}[t_P(x)] - \theta'^\top E_{P_\theta}[t_Q(x)] + E_{P_\theta}\left[ k_P(x) - k_Q(x) \right].$
Recall that the dual parameterization of an exponential family density $P_\theta$ is $P_\eta$ with $\eta = E_{P_\theta}[t_P(x)] = \nabla F_P(\theta)$ [1], and that the Fenchel–Young equality is $F(\theta) + F^*(\eta) = \theta^\top \eta$ for $\eta = \nabla F(\theta)$. Thus the KLD between $P_\theta$ and $Q_{\theta'}$ can be rewritten as
$D_{\mathrm{KL}}[P_\theta : Q_{\theta'}] = F_Q(\theta') + F_P^*(\eta) - \theta'^\top E_{P_\theta}[t_Q(x)] + E_{P_\theta}\left[ k_P(x) - k_Q(x) \right].$
This formula was reported in [23] and generalizes the Fenchel–Young divergence [17] obtained when $\mathcal{P} = \mathcal{Q}$ (with $t_P(x) = t_Q(x)$, $k_P(x) = k_Q(x)$, $F(\theta) = F_P(\theta) = F_Q(\theta)$, and $F^*(\eta) = F_P^*(\eta) = F_Q^*(\eta)$).
The formula of Equation (29) was illustrated in [23] with two examples: the KLD between Laplacian distributions and zero-centered Gaussian distributions, and the KLD between two Weibull distributions. Both these examples use the Lebesgue base measure for μ P and μ Q .
Let us report another example that uses the counting measure as the base measure for μ P and μ Q .
Example 1.
Consider the KLD between a Poisson probability mass function (pmf) and a geometric pmf. The canonical decompositions of the Poisson and geometric pmfs are summarized in Table 1. The KLD between a Poisson pmf p λ and a geometric pmf q p is equal to
$D_{\mathrm{KL}}[P_\lambda : Q_p] = F_Q(\theta') + F_P^*(\eta) - E_{P_\lambda}[t_Q(x)] \cdot \theta' + E_{P_\lambda}[k_P(x) - k_Q(x)],$
$= -\log p + \lambda \log \lambda - \lambda - \lambda \log(1 - p) - E_{P_\lambda}[\log x!].$
Since $E_{P_\lambda}[\log x!] = \sum_{k=0}^{\infty} e^{-\lambda} \lambda^k \frac{\log(k!)}{k!}$, we have
$D_{\mathrm{KL}}[P_\lambda : Q_p] = -\log p + \lambda \log \lambda - \lambda - \lambda \log(1 - p) - \sum_{k=0}^{\infty} e^{-\lambda} \lambda^k \frac{\log(k!)}{k!}.$
Note that we can calculate the KLD between two geometric distributions $Q_{p_1}$ and $Q_{p_2}$ as
$D_{\mathrm{KL}}[Q_{p_1} : Q_{p_2}] = B_{F_Q}(\theta(p_2) : \theta(p_1)),$
$= F_Q(\theta(p_2)) - F_Q(\theta(p_1)) - (\theta(p_2) - \theta(p_1)) \, \eta(p_1).$
We obtain:
$D_{\mathrm{KL}}[Q_{p_1} : Q_{p_2}] = \log \frac{p_1}{p_2} + \left( \frac{1}{p_1} - 1 \right) \log \frac{1 - p_1}{1 - p_2}.$
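The closed form of Example 1 can be verified numerically. The following Python sketch (not from the article; it assumes NumPy/SciPy and truncates the infinite sums at a large index) compares the closed-form expression with the KLD computed directly from the two pmfs.

```python
# Illustrative numerical check of the closed form for the KLD between a
# Poisson pmf (rate lam) and a geometric pmf (success probability p).
import numpy as np
from scipy.special import gammaln

lam, p = 3.0, 0.4
K = 200                                   # truncation index (Poisson tail negligible)
k = np.arange(K)
log_pois = k * np.log(lam) - lam - gammaln(k + 1)   # log Poisson pmf
log_geom = k * np.log(1 - p) + np.log(p)            # log geometric pmf
pois = np.exp(log_pois)

kld_direct = np.sum(pois * (log_pois - log_geom))

# Closed form: -log p + lam*log(lam) - lam - lam*log(1-p) - E_Poisson[log x!]
E_logfact = np.sum(pois * gammaln(k + 1))
kld_closed = -np.log(p) + lam * np.log(lam) - lam - lam * np.log(1 - p) - E_logfact
print(kld_direct, kld_closed)             # the two values should agree
```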

3. The Duo Fenchel–Young Divergence and Its Corresponding Duo Bregman Divergence

Inspired by the formula of Equation (29), we shall define the duo Fenchel–Young divergence using a dominance condition on a pair $(F_1(\theta), F_2(\theta))$ of strictly convex generators.
Definition 1
(duo Fenchel–Young divergence). Let $F_1(\theta)$ and $F_2(\theta)$ be two strictly convex functions such that $F_1(\theta) \geq F_2(\theta)$ for any $\theta \in \Theta_{12} = \mathrm{dom}(F_1) \cap \mathrm{dom}(F_2)$. Then the duo Fenchel–Young divergence $Y_{F_1, F_2^*}(\theta, \eta')$ is defined by
$Y_{F_1, F_2^*}(\theta, \eta') := F_1(\theta) + F_2^*(\eta') - \theta^\top \eta'.$
When $F_1(\theta) = F_2(\theta) =: F(\theta)$, we have $F_1^*(\eta) = F_2^*(\eta) =: F^*(\eta)$, and we retrieve the ordinary Fenchel–Young divergence [17]:
$Y_{F, F^*}(\theta, \eta') := F(\theta) + F^*(\eta') - \theta^\top \eta' \geq 0.$
Note that in Equation (35), we have $\eta' = \nabla F_2(\theta')$.
Property 1
(Non-negative duo Fenchel–Young divergence). The duo Fenchel–Young divergence is always non-negative.
Proof. 
The proof relies on the following reverse dominance property of the Legendre–Fenchel transform for strictly convex and differentiable conjugate functions:
Lemma 1
(Reverse majorization order of functions by the Legendre–Fenchel transform). Let $F_1(\theta)$ and $F_2(\theta)$ be two Legendre-type convex functions [14]. If $F_1(\theta) \geq F_2(\theta)$, then $F_2^*(\eta) \geq F_1^*(\eta)$.
Proof.
This property is graphically illustrated in Figure 3. The reverse dominance property of the Legendre–Fenchel transformation can be checked algebraically as follows:
$F_1^*(\eta) = \sup_{\theta \in \Theta} \{ \eta^\top \theta - F_1(\theta) \},$
$= \eta^\top \theta_1 - F_1(\theta_1) \quad (\text{with } \eta = \nabla F_1(\theta_1)),$
$\leq \eta^\top \theta_1 - F_2(\theta_1),$
$\leq \sup_{\theta \in \Theta} \{ \eta^\top \theta - F_2(\theta) \} = F_2^*(\eta).$
 □
Thus we have $F_1^*(\eta) \leq F_2^*(\eta)$ when $F_1(\theta) \geq F_2(\theta)$. Therefore it follows that $Y_{F_1, F_2^*}(\theta, \eta') \geq 0$ since we have
$Y_{F_1, F_2^*}(\theta, \eta') := F_1(\theta) + F_2^*(\eta') - \theta^\top \eta',$
$\geq F_1(\theta) + F_1^*(\eta') - \theta^\top \eta' = Y_{F_1, F_1^*}(\theta, \eta') \geq 0,$
where $Y_{F_1, F_1^*}$ is the ordinary Fenchel–Young divergence, which is guaranteed to be non-negative from the Fenchel–Young inequality. □
Figure 3. (a) Visual illustration of the Legendre–Fenchel transformation: $F^*(\eta)$ is measured as the vertical gap (left long black line with both arrows) between the origin and the hyperplane of "slope" $\eta$ tangent to $F$ at $\theta$, evaluated at $\theta = 0$. (b) The Legendre transforms $F_1^*(\eta)$ and $F_2^*(\eta)$ of two functions $F_1(\theta)$ and $F_2(\theta)$ such that $F_1(\theta) \geq F_2(\theta)$ reverse the dominance order: $F_2^*(\eta) \geq F_1^*(\eta)$.
We can express the duo Fenchel–Young divergence using the primal coordinate systems as a generalization of the Bregman divergence to two generators, which we term the duo Bregman divergence (see Figure 4):
$B_{F_1, F_2}(\theta : \theta') := Y_{F_1, F_2^*}(\theta, \eta') = F_1(\theta) - F_2(\theta') - (\theta - \theta')^\top \nabla F_2(\theta'),$
with $\eta' = \nabla F_2(\theta')$.
This generalized Bregman divergence is non-negative when $F_1(\theta) \geq F_2(\theta)$. Indeed, we check that
$B_{F_1, F_2}(\theta : \theta') = F_1(\theta) - F_2(\theta') - (\theta - \theta')^\top \nabla F_2(\theta'),$
$\geq F_2(\theta) - F_2(\theta') - (\theta - \theta')^\top \nabla F_2(\theta') = B_{F_2}(\theta : \theta') \geq 0.$
Definition 2
(duo Bregman divergence). Let $F_1(\theta)$ and $F_2(\theta)$ be two strictly convex functions such that $F_1(\theta) \geq F_2(\theta)$ for any $\theta \in \Theta_{12} = \mathrm{dom}(F_1) \cap \mathrm{dom}(F_2)$. Then the generalized Bregman divergence is defined by
$B_{F_1, F_2}(\theta : \theta') = F_1(\theta) - F_2(\theta') - (\theta - \theta')^\top \nabla F_2(\theta') \geq 0.$
Example 2.
Consider $F_1(\theta) = \frac{a}{2} \theta^2$ for $a > 0$. We have $\eta = a\theta$, $\theta = \frac{\eta}{a}$, and
$F_1^*(\eta) = \frac{\eta^2}{a} - \frac{a}{2} \frac{\eta^2}{a^2} = \frac{\eta^2}{2a}.$
Let $F_2(\theta) = \frac{1}{2} \theta^2$ so that $F_1(\theta) \geq F_2(\theta)$ for $a \geq 1$. We check that $F_1^*(\eta) = \frac{\eta^2}{2a} \leq F_2^*(\eta) = \frac{\eta^2}{2}$ when $a \geq 1$. The duo Fenchel–Young divergence is
$Y_{F_1, F_2^*}(\theta, \eta') = \frac{a}{2} \theta^2 + \frac{1}{2} \eta'^2 - \theta \eta' \geq 0,$
when $a \geq 1$. We can express the duo Fenchel–Young divergence in the primal coordinate systems as
$B_{F_1, F_2}(\theta, \theta') = \frac{a}{2} \theta^2 + \frac{1}{2} \theta'^2 - \theta \theta'.$
When $a = 1$, $F_1(\theta) = F_2(\theta) = \frac{1}{2} \theta^2 =: F(\theta)$, and we obtain $B_F(\theta, \theta') = \frac{1}{2} \| \theta - \theta' \|_2^2$, half the squared Euclidean distance, as expected. Figure 5 displays the graph plot of the duo Bregman divergence for several values of $a$.
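The non-negativity condition of Example 2 is easy to probe numerically. The short Python sketch below (not from the article; NumPy assumed, variable names ours) evaluates the duo Bregman divergence on a grid and shows that it stays non-negative for $a \geq 1$ but not for $a < 1$.

```python
# Grid check (illustrative) of Example 2: the duo Bregman divergence
# B_{F1,F2}(theta : theta') = (a/2)*theta^2 + (1/2)*theta'^2 - theta*theta'
# is non-negative when a >= 1, and can become negative when a < 1.
import numpy as np

def duo_bregman(a, t, tp):
    return 0.5 * a * t**2 + 0.5 * tp**2 - t * tp

grid = np.linspace(-5, 5, 201)
T, TP = np.meshgrid(grid, grid)
print(duo_bregman(2.0, T, TP).min() >= 0)   # True  (a = 2 >= 1)
print(duo_bregman(0.5, T, TP).min() >= 0)   # False (a = 1/2 < 1)
```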
Example 3.
Consider $F_1(\theta) = \theta^2$ and $F_2(\theta) = \theta^4$ on the domain $\Theta = [0, 1]$. We have $F_1(\theta) \geq F_2(\theta)$ for $\theta \in \Theta$. The convex conjugate of $F_1(\theta)$ is $F_1^*(\eta) = \frac{1}{4} \eta^2$. We have
$F_2^*(\eta) = \eta^{\frac{4}{3}} \left( \left( \tfrac{1}{4} \right)^{\frac{1}{3}} - \left( \tfrac{1}{4} \right)^{\frac{4}{3}} \right) = \frac{3}{4^{\frac{4}{3}}} \, \eta^{\frac{4}{3}},$
with $\eta_2(\theta) = 4 \theta^3$. Figure 6 plots the convex functions $F_1(\theta)$ and $F_2(\theta)$, and their convex conjugates $F_1^*(\eta)$ and $F_2^*(\eta)$. We observe that $F_1(\theta) \geq F_2(\theta)$ on $\theta \in [0, 1]$ and that $F_1^*(\eta) \leq F_2^*(\eta)$ on $H = [0, 2]$.
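Lemma 1 and Example 3 can also be illustrated by discretizing the Legendre–Fenchel transform. The following Python sketch (not from the article; NumPy assumed) approximates the conjugates of $F_1(\theta) = \theta^2$ and $F_2(\theta) = \theta^4$ over $\Theta = [0, 1]$ and checks that the dominance order is reversed.

```python
# Illustrative check of Lemma 1: the Legendre-Fenchel transform reverses dominance.
# On Theta = [0,1], F1(theta) = theta^2 >= F2(theta) = theta^4, so F1* <= F2* on [0,2].
import numpy as np

theta = np.linspace(0.0, 1.0, 2001)
F1, F2 = theta**2, theta**4

def conjugate(F, eta):
    # discrete Legendre-Fenchel transform: sup_theta { eta*theta - F(theta) }
    return np.max(eta[:, None] * theta[None, :] - F[None, :], axis=1)

eta = np.linspace(0.0, 2.0, 101)
F1_star, F2_star = conjugate(F1, eta), conjugate(F2, eta)
print(np.all(F1 >= F2))                       # True: F1 dominates F2 on [0,1]
print(np.all(F1_star <= F2_star + 1e-9))      # True: the conjugates are reversed
# compare with the closed forms eta^2/4 and 3*(eta/4)^(4/3)
print(np.max(np.abs(F1_star - eta**2 / 4)))           # small discretization error
print(np.max(np.abs(F2_star - 3 * (eta / 4)**(4/3))))
```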
We now state a property between dual duo Bregman divergences:
Property 2
(Dual duo Fenchel–Young and Bregman divergences). We have
$Y_{F_1, F_2^*}(\theta : \eta') = B_{F_1, F_2}(\theta : \theta') = B_{F_2^*, F_1^*}(\eta' : \eta) = Y_{F_2^*, F_1}(\eta' : \theta).$
Proof. 
From the equality cases of the Fenchel–Young inequality, we have $F_1(\theta) = \theta^\top \eta - F_1^*(\eta)$ for $\eta = \nabla F_1(\theta)$ and $F_2(\theta') = \theta'^\top \eta' - F_2^*(\eta')$ with $\eta' = \nabla F_2(\theta')$. Thus we have
$B_{F_1, F_2}(\theta : \theta') = F_1(\theta) - F_2(\theta') - (\theta - \theta')^\top \nabla F_2(\theta'),$
$= \theta^\top \eta - F_1^*(\eta) - \theta'^\top \eta' + F_2^*(\eta') - (\theta - \theta')^\top \eta',$
$= F_2^*(\eta') - F_1^*(\eta) - (\eta' - \eta)^\top \theta,$
$= B_{F_2^*, F_1^*}(\eta' : \eta).$
Recall that $F_1(\theta) \geq F_2(\theta)$ implies that $F_1^*(\eta) \leq F_2^*(\eta)$ (Lemma 1), and $\theta = \nabla F_1^*(\eta)$; therefore the dual duo Bregman divergence is non-negative:
$B_{F_2^*, F_1^*}(\eta' : \eta) = F_2^*(\eta') - F_1^*(\eta) - (\eta' - \eta)^\top \theta \geq F_1^*(\eta') - F_1^*(\eta) - (\eta' - \eta)^\top \nabla F_1^*(\eta) = B_{F_1^*}(\eta' : \eta) \geq 0.$
 □

4. Kullback–Leibler Divergence between Distributions of Truncated Exponential Families

Let $\mathcal{E}_1 = \{ P_\theta \ :\ \theta \in \Theta_1 \}$ be an exponential family of distributions all dominated by $\mu$ with Radon–Nikodym densities $p_\theta(x) = \exp\left( \theta^\top t(x) - F_1(\theta) + k(x) \right)$ defined on the support $\mathcal{X}_1$. Let $\mathcal{E}_2 = \{ Q_\theta \ :\ \theta \in \Theta_2 \}$ be another exponential family of distributions all dominated by $\mu$ with Radon–Nikodym densities $q_\theta(x) = \exp\left( \theta^\top t(x) - F_2(\theta) + k(x) \right)$ defined on the support $\mathcal{X}_2$ such that $\mathcal{X}_1 \subset \mathcal{X}_2$. Let $\tilde{p}_\theta(x) = \exp\left( \theta^\top t(x) + k(x) \right)$ be the common unnormalized density so that
$p_\theta(x) = \frac{\tilde{p}_\theta(x)}{Z_1(\theta)}$
and
$q_\theta(x) = \frac{\tilde{p}_\theta(x)}{Z_2(\theta)} = \frac{Z_1(\theta)}{Z_2(\theta)} \, p_\theta(x),$
with $Z_1(\theta) = \exp(F_1(\theta))$ and $Z_2(\theta) = \exp(F_2(\theta))$ the partition functions associated with the log-normalizers $F_1(\theta)$ and $F_2(\theta)$ of $\mathcal{E}_1$ and $\mathcal{E}_2$, respectively.
We have
$D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = \int_{\mathcal{X}_1} p_{\theta_1}(x) \log \frac{p_{\theta_1}(x)}{q_{\theta_2}(x)} \, \mathrm{d}\mu(x),$
$= \int_{\mathcal{X}_1} p_{\theta_1}(x) \log \frac{p_{\theta_1}(x)}{p_{\theta_2}(x)} \, \mathrm{d}\mu(x) + \int_{\mathcal{X}_1} p_{\theta_1}(x) \log \frac{Z_2(\theta_2)}{Z_1(\theta_2)} \, \mathrm{d}\mu(x),$
$= D_{\mathrm{KL}}[p_{\theta_1} : p_{\theta_2}] + \log Z_2(\theta_2) - \log Z_1(\theta_2).$
Since $D_{\mathrm{KL}}[p_{\theta_1} : p_{\theta_2}] = B_{F_1}(\theta_2 : \theta_1)$ and $\log Z_i(\theta) = F_i(\theta)$, we obtain
$D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = B_{F_1}(\theta_2 : \theta_1) + F_2(\theta_2) - F_1(\theta_2),$
$= F_1(\theta_2) - F_1(\theta_1) - (\theta_2 - \theta_1)^\top \nabla F_1(\theta_1) + F_2(\theta_2) - F_1(\theta_2),$
$= F_2(\theta_2) - F_1(\theta_1) - (\theta_2 - \theta_1)^\top \nabla F_1(\theta_1) =: B_{F_2, F_1}(\theta_2 : \theta_1).$
Observe that since $\mathcal{X}_1 \subset \mathcal{X}_2$, we have:
$F_2(\theta) = \log \int_{\mathcal{X}_2} \tilde{p}_\theta(x) \, \mathrm{d}\mu(x) \geq \log \int_{\mathcal{X}_1} \tilde{p}_\theta(x) \, \mathrm{d}\mu(x) =: F_1(\theta).$
Therefore $\Theta_2 \subseteq \Theta_1$, and the common natural parameter space is $\Theta_{12} = \Theta_1 \cap \Theta_2 = \Theta_2$.
Notice that the reverse Kullback–Leibler divergence $D_{\mathrm{KL}}^*[p_{\theta_1} : q_{\theta_2}] = D_{\mathrm{KL}}[q_{\theta_2} : p_{\theta_1}] = +\infty$ since $Q_{\theta_2} \not\ll P_{\theta_1}$.
Theorem 1
(Kullback–Leibler divergence between truncated exponential family densities). Let $\mathcal{E}_2 = \{ q_{\theta_2} \}$ be an exponential family with support $\mathcal{X}_2$, and $\mathcal{E}_1 = \{ p_{\theta_1} \}$ a truncated exponential family of $\mathcal{E}_2$ with support $\mathcal{X}_1 \subset \mathcal{X}_2$. Let $F_1$ and $F_2$ denote the log-normalizers of $\mathcal{E}_1$ and $\mathcal{E}_2$, and $\eta_1$ and $\eta_2$ the moment parameters corresponding to the natural parameters $\theta_1$ and $\theta_2$. Then the Kullback–Leibler divergence between a truncated density of $\mathcal{E}_1$ and a density of $\mathcal{E}_2$ is
$D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = Y_{F_2, F_1^*}(\theta_2 : \eta_1) = B_{F_2, F_1}(\theta_2 : \theta_1) = B_{F_1^*, F_2^*}(\eta_1 : \eta_2) = Y_{F_1^*, F_2}(\eta_1 : \theta_2).$
For example, consider the calculation of the KLD between an exponential distribution (viewed as half a Laplacian distribution, i.e., a Laplacian distribution truncated to the positive real support) and a Laplacian distribution defined on the real line.
Example 4.
Let $\mathbb{R}_{++} = \{ x \in \mathbb{R} \ :\ x > 0 \}$ denote the set of positive reals. Let $\mathcal{E}_1 = \{ p_\lambda(x) = \lambda \exp(-\lambda x), \ \lambda \in \mathbb{R}_{++}, \ x > 0 \}$ and $\mathcal{E}_2 = \{ q_\lambda(x) = \frac{\lambda}{2} \exp(-\lambda |x|), \ \lambda \in \mathbb{R}_{++}, \ x \in \mathbb{R} \}$ denote the exponential families of exponential distributions and Laplacian distributions, respectively. We have the sufficient statistic $t(x) = -|x|$ and natural parameter $\theta = \lambda$ so that $\tilde{p}_\theta(x) = \exp(-|x| \theta)$. The log-normalizers are $F_1(\theta) = -\log \theta$ and $F_2(\theta) = -\log \theta + \log 2$ (hence $F_2(\theta) \geq F_1(\theta)$). The moment parameter is $\eta = F_1'(\theta) = F_2'(\theta) = -\frac{1}{\theta} = -\frac{1}{\lambda}$. Thus using the duo Bregman divergence, we have:
$D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = B_{F_2, F_1}(\theta_2 : \theta_1),$
$= F_2(\theta_2) - F_1(\theta_1) - (\theta_2 - \theta_1) F_1'(\theta_1),$
$= \log 2 + \log \frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1} - 1.$
Moreover, we can interpret that divergence using the Itakura–Saito divergence [24]:
$D_{\mathrm{IS}}[\lambda_1 : \lambda_2] := \frac{\lambda_1}{\lambda_2} - \log \frac{\lambda_1}{\lambda_2} - 1 \geq 0.$
We have
$D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = D_{\mathrm{IS}}[\lambda_2 : \lambda_1] + \log 2 \geq 0.$
We check the result using the duo Fenchel–Young divergence:
$D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = Y_{F_2, F_1^*}(\theta_2 : \eta_1),$
with $F_1^*(\eta) = -1 + \log\left( -\frac{1}{\eta} \right)$:
$D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = Y_{F_2, F_1^*}(\theta_2 : \eta_1),$
$= -\log \lambda_2 + \log 2 - 1 + \log \lambda_1 + \frac{\lambda_2}{\lambda_1},$
$= \log \frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1} + \log 2 - 1.$
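A minimal numerical sketch of Example 4 (not from the article; NumPy/SciPy assumed) confirms the closed form by integrating the KLD between the exponential density and the Laplacian density directly.

```python
# Illustrative check of Example 4: the KLD between an exponential density
# p(x) = l1*exp(-l1*x) on (0, inf) and a Laplacian density q(x) = (l2/2)*exp(-l2*|x|)
# on the real line equals log(l1/l2) + l2/l1 + log(2) - 1.
import numpy as np
from scipy.integrate import quad

l1, l2 = 1.5, 0.8
# log(p(x)/q(x)) = log(2*l1/l2) + (l2 - l1)*x on the positive support
log_ratio = lambda x: np.log(2 * l1 / l2) + (l2 - l1) * x
kld_numerical = quad(lambda x: l1 * np.exp(-l1 * x) * log_ratio(x), 0, np.inf)[0]
kld_closed = np.log(l1 / l2) + l2 / l1 + np.log(2) - 1
print(kld_numerical, kld_closed)  # both approx 0.855
```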
Next, consider the calculation of the KLD between a half-normal distribution and a (full) normal distribution:
Example 5.
Consider $\mathcal{E}_1$ and $\mathcal{E}_2$ to be the scale family of half standard normal distributions and the scale family of standard normal distributions, respectively. We have $\tilde{p}_\theta(x) = \exp\left( -\frac{x^2}{2\sigma^2} \right)$ with $Z_1(\theta) = \sigma \sqrt{\frac{\pi}{2}}$ and $Z_2(\theta) = \sigma \sqrt{2\pi}$. Let the sufficient statistic be $t(x) = -\frac{x^2}{2}$ so that the natural parameter is $\theta = \frac{1}{\sigma^2} \in \mathbb{R}_{++}$. Here, we have both $\Theta_1 = \Theta_2 = \mathbb{R}_{++}$. For this example, we check that $Z_1(\theta) = \frac{1}{2} Z_2(\theta)$. We have $F_1(\theta) = -\frac{1}{2} \log \theta + \frac{1}{2} \log \frac{\pi}{2}$ and $F_2(\theta) = -\frac{1}{2} \log \theta + \frac{1}{2} \log(2\pi)$ (with $F_2(\theta) \geq F_1(\theta)$). We have $\eta = -\frac{1}{2\theta} = -\frac{\sigma^2}{2}$. The KLD between two half scale normal distributions is
$D_{\mathrm{KL}}[p_{\theta_1} : p_{\theta_2}] = B_{F_1}(\theta_2 : \theta_1),$
$= \frac{1}{2} \left( \log \frac{\sigma_2^2}{\sigma_1^2} + \frac{\sigma_1^2}{\sigma_2^2} - 1 \right).$
Since $F_1(\theta)$ and $F_2(\theta)$ differ only by a constant, and the Bregman divergence is invariant under adding an affine term to its generator, we have
$D_{\mathrm{KL}}[q_{\theta_1} : q_{\theta_2}] = B_{F_2}(\theta_2 : \theta_1),$
$= B_{F_1}(\theta_2 : \theta_1) = \frac{1}{2} \left( \log \frac{\sigma_2^2}{\sigma_1^2} + \frac{\sigma_1^2}{\sigma_2^2} - 1 \right).$
Moreover, we can interpret those Bregman divergences as half of an Itakura–Saito divergence:
$D_{\mathrm{KL}}[p_{\theta_1} : p_{\theta_2}] = D_{\mathrm{KL}}[q_{\theta_1} : q_{\theta_2}] = B_{F_2}(\theta_2 : \theta_1) = \frac{1}{2} D_{\mathrm{IS}}[\sigma_1^2 : \sigma_2^2].$
It follows that
$D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = B_{F_2, F_1}(\theta_2 : \theta_1) = F_2(\theta_2) - F_1(\theta_1) - (\theta_2 - \theta_1) F_1'(\theta_1),$
$= \frac{1}{2} \left( \log \frac{\sigma_2^2}{\sigma_1^2} + \frac{\sigma_1^2}{\sigma_2^2} + \log 4 - 1 \right),$
$= D_{\mathrm{KL}}[q_{\theta_1} : q_{\theta_2}] + \log 2.$
Since $\log 2 > 0$, we have $D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] \geq D_{\mathrm{KL}}[q_{\theta_1} : q_{\theta_2}]$.
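Example 5 can also be verified numerically. The following Python sketch (not from the article; it assumes NumPy/SciPy and uses the SciPy half-normal and normal densities) integrates the KLD between a half-normal and a full normal density and compares it with the closed form.

```python
# Illustrative check of Example 5: the KLD between a half-normal density of
# scale s1 (support [0, inf)) and a centered normal density N(0, s2^2) equals
# (1/2)*(log(s2^2/s1^2) + s1^2/s2^2 + log(4) - 1).
import numpy as np
from scipy.integrate import quad
from scipy.stats import halfnorm, norm

s1, s2 = 1.0, 2.0
integrand = lambda x: halfnorm.pdf(x, scale=s1) * (halfnorm.logpdf(x, scale=s1)
                                                   - norm.logpdf(x, scale=s2))
kld_numerical = quad(integrand, 0, np.inf)[0]
kld_closed = 0.5 * (np.log(s2**2 / s1**2) + s1**2 / s2**2 + np.log(4) - 1)
print(kld_numerical, kld_closed)  # both approx 1.011
```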
Thus the Kullback–Leibler divergence between a truncated density and another density of the same exponential family amounts to calculating a duo Bregman divergence on the reverse parameter order: $D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}] = B_{F_2, F_1}(\theta_2 : \theta_1)$. Let $D_{\mathrm{KL}}^*[p : q] := D_{\mathrm{KL}}[q : p]$ be the reverse Kullback–Leibler divergence. Then $D_{\mathrm{KL}}^*[q_{\theta_2} : p_{\theta_1}] = B_{F_2, F_1}(\theta_2 : \theta_1)$.
Notice that truncated exponential families are also exponential families but those exponential families may be non-steep [25].
Let $\mathcal{E}_1 = \{ p_\theta^{a_1, b_1} \}$ and $\mathcal{E}_2 = \{ p_\theta^{a_2, b_2} \}$ be two truncated exponential families of the exponential family $\mathcal{E} = \{ p_\theta = \frac{\mathrm{d}P_\theta}{\mathrm{d}\mu} \}$ with log-normalizer $F(\theta)$ such that
$p_\theta^{a_i, b_i}(x) = \frac{p_\theta(x)}{Z_{a_i, b_i}(\theta)},$
with $Z_{a_i, b_i}(\theta) = \Phi_\theta(b_i) - \Phi_\theta(a_i)$, where $\Phi_\theta(x)$ denotes the CDF of $p_\theta(x)$. Then the log-normalizer of $\mathcal{E}_i$ is $F_i(\theta) = F(\theta) + \log\left( \Phi_\theta(b_i) - \Phi_\theta(a_i) \right)$ for $i \in \{1, 2\}$.
Corollary 1
(Kullback–Leibler divergence between densities of truncated exponential families). Let $\mathcal{E}_i = \{ p_\theta^{a_i, b_i} \}$ be truncated exponential families of the exponential family $\mathcal{E} = \{ p_\theta \}$ with support $\mathcal{X}_i = [a_i, b_i] \subseteq \mathcal{X}$ (where $\mathcal{X}$ denotes the support of $\mathcal{E}$) for $i \in \{1, 2\}$. Then the Kullback–Leibler divergence between $p_{\theta_1}^{a_1, b_1}$ and $p_{\theta_2}^{a_2, b_2}$ is infinite if $[a_1, b_1] \not\subseteq [a_2, b_2]$ and has the following formula when $[a_1, b_1] \subseteq [a_2, b_2]$:
$D_{\mathrm{KL}}[p_{\theta_1}^{a_1, b_1} : p_{\theta_2}^{a_2, b_2}] = D_{\mathrm{KL}}[p_{\theta_1}^{a_1, b_1} : p_{\theta_2}^{a_1, b_1}] + \log \frac{Z_{a_2, b_2}(\theta_2)}{Z_{a_1, b_1}(\theta_2)}.$
Proof. 
We have $p_\theta^{a_1, b_1} = \frac{p_\theta}{Z_{a_1, b_1}(\theta)}$ and $p_\theta^{a_2, b_2} = \frac{p_\theta}{Z_{a_2, b_2}(\theta)}$. Therefore $p_\theta^{a_2, b_2} = p_\theta^{a_1, b_1} \, \frac{Z_{a_1, b_1}(\theta)}{Z_{a_2, b_2}(\theta)}$. Thus we have
$D_{\mathrm{KL}}[p_{\theta_1}^{a_1, b_1} : p_{\theta_2}^{a_2, b_2}] = \int_{\mathcal{X}_1} p_{\theta_1}^{a_1, b_1}(x) \log \frac{p_{\theta_1}^{a_1, b_1}(x)}{p_{\theta_2}^{a_2, b_2}(x)} \, \mathrm{d}\mu(x),$
$= \int_{\mathcal{X}_1} p_{\theta_1}^{a_1, b_1}(x) \log \frac{p_{\theta_1}^{a_1, b_1}(x)}{p_{\theta_2}^{a_1, b_1}(x)} \, \mathrm{d}\mu(x) + \log \frac{Z_{a_2, b_2}(\theta_2)}{Z_{a_1, b_1}(\theta_2)},$
$= D_{\mathrm{KL}}[p_{\theta_1}^{a_1, b_1} : p_{\theta_2}^{a_1, b_1}] + \log \frac{Z_{a_2, b_2}(\theta_2)}{Z_{a_1, b_1}(\theta_2)}.$
 □
Thus the KLD between truncated exponential family densities $p_{\theta_1}^{a_1, b_1}$ and $p_{\theta_2}^{a_2, b_2}$ amounts to the KLD between two densities sharing the same truncation support, plus an additive term given by the log-ratio of the masses of the two truncated supports evaluated at $\theta_2$. We shall illustrate the calculation of the KLD between truncated exponential families with two examples.
Example 6.
Consider the calculation of the KLD between a truncated exponential distribution $p_{\lambda_1}^{a_1, b_1}$ with support $\mathcal{X}_1 = [a_1, b_1]$ ($b_1 > a_1 \geq 0$) and another truncated exponential distribution $p_{\lambda_2}^{a_2, b_2}$ with support $\mathcal{X}_2 = [a_2, b_2]$ ($b_2 > a_2 \geq 0$). We have $p_\lambda(x) = \lambda \exp(-\lambda x)$ (density of the untruncated exponential family with natural parameter $\theta = \lambda$, sufficient statistic $t(x) = -x$, and log-normalizer $F(\theta) = -\log \theta$), $p_{\lambda_1}^{a_1, b_1} = \frac{1}{Z_{a_1, b_1}(\lambda_1)} p_{\lambda_1}(x)$, and $p_{\lambda_2}^{a_2, b_2} = \frac{1}{Z_{a_2, b_2}(\lambda_2)} p_{\lambda_2}(x)$. Let $\Phi_\lambda(x) = 1 - \exp(-\lambda x)$ denote the cumulative distribution function of the exponential distribution. We have $Z_{a, b}(\lambda) = \Phi_\lambda(b) - \Phi_\lambda(a)$ and
$F_{a, b}(\lambda) = F(\lambda) + \log\left( \Phi_\lambda(b) - \Phi_\lambda(a) \right) = -\log \lambda + \log\left( e^{-\lambda a} - e^{-\lambda b} \right).$
If $[a_1, b_1] \not\subseteq [a_2, b_2]$ then $D_{\mathrm{KL}}[p_{\lambda_1} : q_{\lambda_2}] = +\infty$. Otherwise, $[a_1, b_1] \subseteq [a_2, b_2]$, and the exponential family $\{ p_\lambda \}$ is a truncated exponential family of $\{ q_\lambda \}$. Using the computer algebra system Maxima (https://maxima.sourceforge.io/ accessed on 15 March 2022), we find that
$E_{p_\lambda^{a, b}}[x] = \frac{(1 + \lambda a) e^{-\lambda a} - (1 + \lambda b) e^{-\lambda b}}{\lambda \left( e^{-\lambda a} - e^{-\lambda b} \right)} = -F_{a, b}'(\lambda).$
Thus we have:
$D_{\mathrm{KL}}[p_{\lambda_1}^{a_1, b_1} : q_{\lambda_2}^{a_2, b_2}] = B_{F_2, F_1}(\theta_2 : \theta_1) = F_{a_2, b_2}(\lambda_2) - F_{a_1, b_1}(\lambda_1) - (\lambda_2 - \lambda_1) F_{a_1, b_1}'(\lambda_1),$
$= \log \frac{\lambda_1}{\lambda_2} + (\lambda_2 - \lambda_1) E_{p_{\lambda_1}^{a_1, b_1}}[x] + \log \frac{e^{-\lambda_2 a_2} - e^{-\lambda_2 b_2}}{e^{-\lambda_1 a_1} - e^{-\lambda_1 b_1}}.$
When $a_1 = a_2 = 0$ and $b_1 = b_2 = +\infty$, we recover the KLD between two exponential distributions $p_{\lambda_1}$ and $p_{\lambda_2}$:
$D_{\mathrm{KL}}[p_{\lambda_1} : p_{\lambda_2}] = B_F(\lambda_2 : \lambda_1),$
$= F(\theta_2) - F(\theta_1) - (\theta_2 - \theta_1) F'(\theta_1),$
$= \frac{\lambda_2}{\lambda_1} - \log \frac{\lambda_2}{\lambda_1} - 1 = D_{\mathrm{IS}}[\lambda_2 : \lambda_1].$
Note that the KLD between two truncated exponential distributions with the same truncation support $\mathcal{X} = [a, b]$ is
$D_{\mathrm{KL}}[p_{\lambda_1}^{a, b} : p_{\lambda_2}^{a, b}] = \log \frac{\lambda_1}{\lambda_2} + \log \frac{\Phi_{\lambda_2}(b) - \Phi_{\lambda_2}(a)}{\Phi_{\lambda_1}(b) - \Phi_{\lambda_1}(a)} + (\lambda_2 - \lambda_1) E_{p_{\lambda_1}^{a, b}}[x].$
We also check Corollary 1:
$D_{\mathrm{KL}}[p_{\lambda_1}^{a_1, b_1} : p_{\lambda_2}^{a_2, b_2}] = D_{\mathrm{KL}}[p_{\lambda_1}^{a_1, b_1} : p_{\lambda_2}^{a_1, b_1}] + \log \frac{Z_{a_2, b_2}(\lambda_2)}{Z_{a_1, b_1}(\lambda_2)}.$
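A numerical sketch of Example 6 (not from the article; NumPy/SciPy assumed, helper names ours) compares the closed form built from the truncated mean with the KLD obtained by numerical integration over the nested supports.

```python
# Illustrative check of Example 6: KLD between two truncated exponential
# densities on [a1,b1] and [a2,b2] with [a1,b1] contained in [a2,b2].
import numpy as np
from scipy.integrate import quad

def Z(a, b, lam):                  # mass of the exponential density on [a, b]
    return np.exp(-lam * a) - np.exp(-lam * b)

def trunc_pdf(x, lam, a, b):
    return lam * np.exp(-lam * x) / Z(a, b, lam)

def trunc_mean(lam, a, b):         # E[x] under the truncated exponential on [a, b]
    return ((1 + lam * a) * np.exp(-lam * a)
            - (1 + lam * b) * np.exp(-lam * b)) / (lam * Z(a, b, lam))

l1, a1, b1 = 2.0, 0.5, 2.0
l2, a2, b2 = 1.0, 0.0, 3.0         # [a1, b1] is contained in [a2, b2]

kld_numerical = quad(lambda x: trunc_pdf(x, l1, a1, b1)
                     * np.log(trunc_pdf(x, l1, a1, b1) / trunc_pdf(x, l2, a2, b2)),
                     a1, b1)[0]
kld_closed = (np.log(l1 / l2) + (l2 - l1) * trunc_mean(l1, a1, b1)
              + np.log(Z(a2, b2, l2) / Z(a1, b1, l1)))
print(kld_numerical, kld_closed)   # the two values should agree
```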
The next example shows how to compute the Kullback–Leibler divergence between two truncated normal distributions:
Example 7.
Let $N^{a, b}(m, s)$ denote a truncated normal distribution with support the open interval $(a, b)$ ($a < b$) and probability density function defined by:
$p_{m, s}^{a, b}(x) = \frac{1}{Z_{a, b}(m, s)} \exp\left( -\frac{(x - m)^2}{2 s^2} \right),$
where $Z_{a, b}(m, s)$ is related to the partition function [26] expressed using the cumulative distribution function (CDF) $\Phi_{m, s}(x)$:
$Z_{a, b}(m, s) = \sqrt{2\pi} \, s \left( \Phi_{m, s}(b) - \Phi_{m, s}(a) \right),$
with
$\Phi_{m, s}(x) = \frac{1}{2} \left( 1 + \mathrm{erf}\left( \frac{x - m}{\sqrt{2} \, s} \right) \right),$
where $\mathrm{erf}(x)$ is the error function:
$\mathrm{erf}(x) := \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \, \mathrm{d}t.$
Thus we have $\mathrm{erf}(x) = 2 \Phi(\sqrt{2} x) - 1$ where $\Phi(x) = \Phi_{0, 1}(x)$.
The pdf can also be written as
$p_{m, s}^{a, b}(x) = \frac{\frac{1}{s} \phi\left( \frac{x - m}{s} \right)}{\Phi\left( \frac{b - m}{s} \right) - \Phi\left( \frac{a - m}{s} \right)},$
where $\phi(x)$ denotes the standard normal pdf ($\phi(x) = p_{0, 1}^{-\infty, +\infty}(x)$):
$\phi(x) := \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{x^2}{2} \right),$
and $\Phi(x) = \Phi_{0, 1}(x) = \int_{-\infty}^x \phi(t) \, \mathrm{d}t$ is the standard normal CDF. When $a = -\infty$ and $b = +\infty$, we have $Z_{-\infty, \infty}(m, s) = \sqrt{2\pi} \, s$ since $\Phi(-\infty) = 0$ and $\Phi(+\infty) = 1$.
The density $p_{m, s}^{a, b}(x)$ belongs to an exponential family $\mathcal{E}_{a, b}$ with natural parameter $\theta = \left( \frac{m}{s^2}, -\frac{1}{2 s^2} \right)$, sufficient statistics $t(x) = (x, x^2)$, and log-normalizer:
$F_{a, b}(\theta) = -\frac{\theta_1^2}{4 \theta_2} + \log Z_{a, b}(\theta).$
The natural parameter space is $\Theta = \mathbb{R} \times \mathbb{R}_{--}$ where $\mathbb{R}_{--} = \{ x \in \mathbb{R} \ :\ x < 0 \}$ denotes the set of negative real numbers.
The log-normalizer can be expressed using the source parameters $(m, s)$ (which are not the mean and standard deviation when the support is truncated, hence the notation $m$ and $s$):
$F_{a, b}(m, s) = \frac{m^2}{2 s^2} + \log Z_{a, b}(m, s),$
$= \frac{m^2}{2 s^2} + \frac{1}{2} \log\left( 2\pi s^2 \right) + \log\left( \Phi_{m, s}(b) - \Phi_{m, s}(a) \right).$
We shall use the fact that the gradient of the log-normalizer of any exponential family distribution amounts to the expectation of the sufficient statistics [1]:
$\nabla F_{a, b}(\theta) = E_{p_{m, s}^{a, b}}[t(x)] = \eta.$
Parameter $\eta$ is called the moment or expectation parameter [1].
The mean $\mu(m, s; a, b) = E_{p_{m, s}^{a, b}}[x] = \partial_{\theta_1} F_{a, b}(\theta)$ and the variance $\sigma^2(m, s; a, b) = E_{p_{m, s}^{a, b}}[x^2] - \mu^2$ (with $E_{p_{m, s}^{a, b}}[x^2] = \partial_{\theta_2} F_{a, b}(\theta)$) of the truncated normal $p_{m, s}^{a, b}$ can be expressed using the following formulas [26,27] (page 25):
$\mu(m, s; a, b) = m - s \, \frac{\phi(\beta) - \phi(\alpha)}{\Phi(\beta) - \Phi(\alpha)},$
$\sigma^2(m, s; a, b) = s^2 \left( 1 - \frac{\beta \phi(\beta) - \alpha \phi(\alpha)}{\Phi(\beta) - \Phi(\alpha)} - \left( \frac{\phi(\beta) - \phi(\alpha)}{\Phi(\beta) - \Phi(\alpha)} \right)^2 \right),$
where $\alpha := \frac{a - m}{s}$ and $\beta := \frac{b - m}{s}$. Thus we have the following moment parameter $\eta = (\eta_1, \eta_2)$ with
$\eta_1(m, s; a, b) = E_{p_{m, s}^{a, b}}[x] = \mu(m, s; a, b),$
$\eta_2(m, s; a, b) = E_{p_{m, s}^{a, b}}[x^2] = \sigma^2(m, s; a, b) + \mu^2(m, s; a, b).$
Now consider two truncated normal distributions $p_{m_1, s_1}^{a_1, b_1}$ and $p_{m_2, s_2}^{a_2, b_2}$ with $[a_1, b_1] \subseteq [a_2, b_2]$ (otherwise, we have $D_{\mathrm{KL}}[p_{m_1, s_1}^{a_1, b_1} : p_{m_2, s_2}^{a_2, b_2}] = +\infty$). Then the KLD between $p_{m_1, s_1}^{a_1, b_1}$ and $p_{m_2, s_2}^{a_2, b_2}$ is equivalent to a duo Bregman divergence:
$D_{\mathrm{KL}}[p_{m_1, s_1}^{a_1, b_1} : p_{m_2, s_2}^{a_2, b_2}] = F_{a_2, b_2}(\theta_2) - F_{a_1, b_1}(\theta_1) - (\theta_2 - \theta_1)^\top \nabla F_{a_1, b_1}(\theta_1),$
$= \frac{m_2^2}{2 s_2^2} - \frac{m_1^2}{2 s_1^2} + \log \frac{Z_{a_2, b_2}(m_2, s_2)}{Z_{a_1, b_1}(m_1, s_1)} - \left( \frac{m_2}{s_2^2} - \frac{m_1}{s_1^2} \right) \eta_1(m_1, s_1; a_1, b_1) - \left( \frac{1}{2 s_1^2} - \frac{1}{2 s_2^2} \right) \eta_2(m_1, s_1; a_1, b_1).$
Note that $F_{a_2, b_2}(\theta) \geq F_{a_1, b_1}(\theta)$.
This formula is valid for (1) the KLD between two truncated normal distributions, or (2) the KLD between a truncated normal distribution and a (full support) normal distribution. Note that the formula depends on the erf function used in the CDF $\Phi$. Furthermore, when $a_1 = a_2 = -\infty$ and $b_1 = b_2 = +\infty$, we recover (3) the KLD between two univariate normal distributions, since $\log \frac{Z_{a_2, b_2}(m_2, s_2)}{Z_{a_1, b_1}(m_1, s_1)} = \log \frac{s_2}{s_1} = \frac{1}{2} \log \frac{s_2^2}{s_1^2}$:
$D_{\mathrm{KL}}[p_{m_1, s_1} : p_{m_2, s_2}] = \frac{1}{2} \left( \log \frac{s_2^2}{s_1^2} + \frac{s_1^2}{s_2^2} + \frac{(m_2 - m_1)^2}{s_2^2} - 1 \right).$
Note that for full support normal distributions, we have $\mu(m, s; -\infty, +\infty) = m$ and $\sigma^2(m, s; -\infty, +\infty) = s^2$.
The entropy of a truncated normal distribution (an exponential family [28]) is $h[p_{m, s}^{a, b}] = -\int_a^b p_{m, s}^{a, b}(x) \log p_{m, s}^{a, b}(x) \, \mathrm{d}x = -F^*(\eta) = F(\theta) - \theta^\top \eta$. We find that
$h[p_{m, s}^{a, b}] = \log\left( \sqrt{2\pi e} \, s \left( \Phi(\beta) - \Phi(\alpha) \right) \right) + \frac{\alpha \phi(\alpha) - \beta \phi(\beta)}{2 \left( \Phi(\beta) - \Phi(\alpha) \right)}.$
When $(a, b) = (-\infty, \infty)$, we have $\Phi(\beta) - \Phi(\alpha) = 1$ and $\alpha \phi(\alpha) - \beta \phi(\beta) = 0$ since $\beta = -\alpha$, $\phi(-x) = \phi(x)$ (an even function), and $\lim_{\beta \to +\infty} \beta \phi(\beta) = 0$. Thus we recover the differential entropy of a normal distribution: $h[p_{\mu, \sigma}] = \log\left( \sqrt{2\pi e} \, \sigma \right)$.
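The duo Bregman formula of Example 7 can be checked against a direct numerical integration. The following Python sketch (not from the article; it assumes NumPy/SciPy, uses scipy.stats.truncnorm for the truncated normal moments, and the helper names are ours) compares both computations for a pair of truncated normal densities with nested supports.

```python
# Illustrative check of Example 7: KLD between two truncated normal densities
# with nested supports [a1,b1] contained in [a2,b2], computed by numerical
# integration and by the duo Bregman closed form.
import numpy as np
from scipy.integrate import quad
from scipy.stats import truncnorm, norm

def Zab(m, s, a, b):               # sqrt(2*pi)*s*(Phi_{m,s}(b) - Phi_{m,s}(a))
    return np.sqrt(2 * np.pi) * s * (norm.cdf(b, m, s) - norm.cdf(a, m, s))

def trunc_pdf(x, m, s, a, b):
    return truncnorm.pdf(x, (a - m) / s, (b - m) / s, loc=m, scale=s)

def moments(m, s, a, b):           # eta = (E[x], E[x^2]) of the truncated normal
    d = truncnorm((a - m) / s, (b - m) / s, loc=m, scale=s)
    mu, var = d.mean(), d.var()
    return mu, var + mu**2

m1, s1, a1, b1 = 0.5, 1.0, -1.0, 2.0
m2, s2, a2, b2 = 0.0, 2.0, -3.0, 4.0   # [a1, b1] is contained in [a2, b2]

kld_numerical = quad(lambda x: trunc_pdf(x, m1, s1, a1, b1)
                     * np.log(trunc_pdf(x, m1, s1, a1, b1) / trunc_pdf(x, m2, s2, a2, b2)),
                     a1, b1)[0]

eta1, eta2 = moments(m1, s1, a1, b1)
kld_closed = (m2**2 / (2 * s2**2) - m1**2 / (2 * s1**2)
              + np.log(Zab(m2, s2, a2, b2) / Zab(m1, s1, a1, b1))
              - (m2 / s2**2 - m1 / s1**2) * eta1
              - (1 / (2 * s1**2) - 1 / (2 * s2**2)) * eta2)
print(kld_numerical, kld_closed)   # the two values should agree
```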

5. Bhattacharyya Skewed Divergence between Truncated Densities of an Exponential Family

The Bhattacharyya $\alpha$-skewed divergence [29,30] between two densities $p(x)$ and $q(x)$ with respect to $\mu$ is defined for a skewing scalar parameter $\alpha \in (0, 1)$ as:
$D_{\mathrm{Bhat}, \alpha}[p : q] := -\log \int_{\mathcal{X}} p(x)^\alpha q(x)^{1 - \alpha} \, \mathrm{d}\mu(x),$
where $\mathcal{X}$ denotes the support of the distributions. The Bhattacharyya distance is
$D_{\mathrm{Bhat}}[p, q] = D_{\mathrm{Bhat}, \frac{1}{2}}[p : q] = -\log \int_{\mathcal{X}} \sqrt{p(x) \, q(x)} \, \mathrm{d}\mu(x).$
The Bhattacharyya distance is not a metric distance since it does not satisfy the triangle inequality. The Bhattacharyya distance is related to the Hellinger distance [31] as follows:
$D_H[p, q] = \sqrt{ \frac{1}{2} \int_{\mathcal{X}} \left( \sqrt{p(x)} - \sqrt{q(x)} \right)^2 \mathrm{d}\mu(x) } = \sqrt{ 1 - \exp\left( -D_{\mathrm{Bhat}}[p, q] \right) }.$
The Hellinger distance is a metric distance.
Let $I_\alpha[p : q] := \int_{\mathcal{X}} p(x)^\alpha q(x)^{1 - \alpha} \, \mathrm{d}\mu(x)$ denote the skewed affinity coefficient, so that $D_{\mathrm{Bhat}, \alpha}[p : q] = -\log I_\alpha[p : q]$. Since $I_\alpha[p : q] = I_{1 - \alpha}[q : p]$, we have $D_{\mathrm{Bhat}, \alpha}[p : q] = D_{\mathrm{Bhat}, 1 - \alpha}[q : p]$.
Consider an exponential family $\mathcal{E} = \{ p_\theta \}$ with log-normalizer $F(\theta)$. Then it is well-known that the $\alpha$-skewed Bhattacharyya divergence between two densities of an exponential family amounts to a skewed Jensen divergence [30] (originally called Jensen difference in [32]):
$D_{\mathrm{Bhat}, \alpha}[p_{\theta_1} : p_{\theta_2}] = J_{F, \alpha}(\theta_1 : \theta_2),$
where the skewed Jensen divergence is defined by
$J_{F, \alpha}(\theta_1 : \theta_2) = \alpha F(\theta_1) + (1 - \alpha) F(\theta_2) - F(\alpha \theta_1 + (1 - \alpha) \theta_2).$
The convexity of the log-normalizer $F(\theta)$ ensures that $J_{F, \alpha}(\theta_1 : \theta_2) \geq 0$. The Jensen divergence can be extended to the full real range of $\alpha$ by rescaling it by $\frac{1}{\alpha (1 - \alpha)}$, see [33].
Remark 1.
The Bhattacharyya skewed divergence $D_{\mathrm{Bhat}, \alpha}[p : q]$ appears naturally as the negative of the log-normalizer of the exponential family induced by the exponential arc $\{ r_\alpha(x) \ :\ \alpha \in (0, 1) \}$ linking two densities $p$ and $q$, with $r_\alpha(x) \propto p(x)^\alpha q(x)^{1 - \alpha}$. This arc is an exponential family of order 1:
$r_\alpha(x) = \exp\left( \alpha \log p(x) + (1 - \alpha) \log q(x) - \log Z_\alpha(p : q) \right),$
$= \exp\left( \alpha \log \frac{p(x)}{q(x)} - F_{pq}(\alpha) \right) q(x).$
The sufficient statistic is $t(x) = \log \frac{p(x)}{q(x)}$, the natural parameter is $\alpha \in (0, 1)$, and the log-normalizer is $F_{pq}(\alpha) = \log Z_\alpha(p : q) = \log \int p(x)^\alpha q(x)^{1 - \alpha} \, \mathrm{d}\mu(x) = -D_{\mathrm{Bhat}, \alpha}[p : q]$. This shows that $D_{\mathrm{Bhat}, \alpha}[p : q]$ is concave with respect to $\alpha$ since log-normalizers $F_{pq}(\alpha)$ are always convex. Grünwald called those exponential families the likelihood ratio exponential families [34].
Now, consider calculating $D_{\mathrm{Bhat}, \alpha}[p_{\theta_1} : q_{\theta_2}]$ where $p_{\theta_1} \in \mathcal{E}_1$ with $\mathcal{E}_1$ a truncated exponential family of $\mathcal{E}_2$, and $q_{\theta_2} \in \mathcal{E}_2 = \{ q_\theta \}$. We have $q_\theta(x) = \frac{Z_1(\theta)}{Z_2(\theta)} p_\theta(x)$, where $Z_1(\theta)$ and $Z_2(\theta)$ are the partition functions of $\mathcal{E}_1$ and $\mathcal{E}_2$, respectively. Thus we have
$I_\alpha[p_{\theta_1} : q_{\theta_2}] = \left( \frac{Z_1(\theta_2)}{Z_2(\theta_2)} \right)^{1 - \alpha} I_\alpha[p_{\theta_1} : p_{\theta_2}],$
and the $\alpha$-skewed Bhattacharyya divergence is
$D_{\mathrm{Bhat}, \alpha}[p_{\theta_1} : q_{\theta_2}] = D_{\mathrm{Bhat}, \alpha}[p_{\theta_1} : p_{\theta_2}] - (1 - \alpha) \left( F_1(\theta_2) - F_2(\theta_2) \right).$
Therefore we obtain
$D_{\mathrm{Bhat}, \alpha}[p_{\theta_1} : q_{\theta_2}] = J_{F_1, \alpha}(\theta_1 : \theta_2) - (1 - \alpha) \left( F_1(\theta_2) - F_2(\theta_2) \right),$
$= \alpha F_1(\theta_1) + (1 - \alpha) F_2(\theta_2) - F_1(\alpha \theta_1 + (1 - \alpha) \theta_2),$
$=: J_{F_1, F_2, \alpha}(\theta_1 : \theta_2).$
We call $J_{F_1, F_2, \alpha}(\theta_1 : \theta_2)$ the duo Jensen divergence. Since $F_2(\theta) \geq F_1(\theta)$, we check that
$J_{F_1, F_2, \alpha}(\theta_1 : \theta_2) \geq J_{F_1, \alpha}(\theta_1 : \theta_2) \geq 0.$
Figure 7 illustrates graphically the duo Jensen divergence $J_{F_1, F_2, \alpha}(\theta_1 : \theta_2)$.
Theorem 2.
The $\alpha$-skewed Bhattacharyya divergence for $\alpha \in (0, 1)$ between a truncated density of $\mathcal{E}_1$ with log-normalizer $F_1(\theta)$ and another density of an exponential family $\mathcal{E}_2$ with log-normalizer $F_2(\theta)$ amounts to a duo Jensen divergence:
$D_{\mathrm{Bhat}, \alpha}[p_{\theta_1} : q_{\theta_2}] = J_{F_1, F_2, \alpha}(\theta_1 : \theta_2),$
where $J_{F_1, F_2, \alpha}(\theta_1 : \theta_2)$ is the duo skewed Jensen divergence induced by two strictly convex functions $F_1(\theta)$ and $F_2(\theta)$ such that $F_2(\theta) \geq F_1(\theta)$:
$J_{F_1, F_2, \alpha}(\theta_1 : \theta_2) = \alpha F_1(\theta_1) + (1 - \alpha) F_2(\theta_2) - F_1(\alpha \theta_1 + (1 - \alpha) \theta_2).$
In [30], it is reported that
$D_{\mathrm{KL}}[p_{\theta_1} : p_{\theta_2}] = B_F(\theta_2 : \theta_1),$
$= \lim_{\alpha \to 0} \frac{1}{\alpha} J_{F, \alpha}(\theta_2 : \theta_1) = \lim_{\alpha \to 0} \frac{1}{\alpha} J_{F, 1 - \alpha}(\theta_1 : \theta_2),$
$= \lim_{\alpha \to 0} \frac{1}{\alpha} D_{\mathrm{Bhat}, \alpha}[p_{\theta_2} : p_{\theta_1}] = \lim_{\alpha \to 0} \frac{1}{\alpha} D_{\mathrm{Bhat}, 1 - \alpha}[p_{\theta_1} : p_{\theta_2}].$
Indeed, using the first-order Taylor expansion
$F(\theta_1 + \alpha (\theta_2 - \theta_1)) \approx_{\alpha \to 0} F(\theta_1) + \alpha (\theta_2 - \theta_1)^\top \nabla F(\theta_1)$
when $\alpha \to 0$, we check that we have
$\frac{1}{\alpha} J_{F, \alpha}(\theta_2 : \theta_1) := \frac{F(\theta_1) + \alpha \left( F(\theta_2) - F(\theta_1) \right) - F(\theta_1 + \alpha (\theta_2 - \theta_1))}{\alpha},$
$\approx_{\alpha \to 0} \frac{F(\theta_1) + \alpha \left( F(\theta_2) - F(\theta_1) \right) - F(\theta_1) - \alpha (\theta_2 - \theta_1)^\top \nabla F(\theta_1)}{\alpha},$
$= F(\theta_2) - F(\theta_1) - (\theta_2 - \theta_1)^\top \nabla F(\theta_1),$
$=: B_F(\theta_2 : \theta_1).$
Thus we have $\lim_{\alpha \to 0} \frac{1}{\alpha} J_{F, \alpha}(\theta_2 : \theta_1) = B_F(\theta_2 : \theta_1)$.
Moreover, we have
$\lim_{\alpha \to 0} \frac{1}{\alpha} D_{\mathrm{Bhat}, 1 - \alpha}[p : q] = D_{\mathrm{KL}}[p : q].$
Similarly, we can prove that
$\lim_{\alpha \to 1} \frac{1}{1 - \alpha} J_{F_1, F_2, \alpha}(\theta_1 : \theta_2) = B_{F_2, F_1}(\theta_2 : \theta_1),$
which can be reinterpreted as
$\lim_{\alpha \to 1} \frac{1}{1 - \alpha} D_{\mathrm{Bhat}, \alpha}[p_{\theta_1} : q_{\theta_2}] = D_{\mathrm{KL}}[p_{\theta_1} : q_{\theta_2}].$
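Theorem 2 can be illustrated numerically on the densities of Example 4. The following Python sketch (not from the article; NumPy/SciPy assumed) computes the $\alpha$-skewed Bhattacharyya divergence between an exponential density and a Laplacian density by numerical integration and compares it with the duo Jensen divergence built from $F_1(\theta) = -\log \theta$ and $F_2(\theta) = -\log \theta + \log 2$.

```python
# Illustrative check of Theorem 2: the alpha-skewed Bhattacharyya divergence
# between an exponential density (truncated family, F1(theta) = -log theta)
# and a Laplacian density (F2(theta) = -log theta + log 2) equals the
# duo Jensen divergence J_{F1,F2,alpha}(theta1 : theta2).
import numpy as np
from scipy.integrate import quad

l1, l2, alpha = 1.5, 0.8, 0.3
p = lambda x: l1 * np.exp(-l1 * x)              # support (0, inf)
q = lambda x: 0.5 * l2 * np.exp(-l2 * abs(x))   # support R

# D_Bhat,alpha[p:q] = -log \int p^alpha q^(1-alpha) dmu (p vanishes on (-inf, 0])
affinity = quad(lambda x: p(x)**alpha * q(x)**(1 - alpha), 0, np.inf)[0]
d_bhat = -np.log(affinity)

F1 = lambda t: -np.log(t)
F2 = lambda t: -np.log(t) + np.log(2)
duo_jensen = alpha * F1(l1) + (1 - alpha) * F2(l2) - F1(alpha * l1 + (1 - alpha) * l2)
print(d_bhat, duo_jensen)                       # the two values should agree
```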

6. Concluding Remarks

We considered the Kullback–Leibler divergence between two parametric densities $p_{\theta_1} \in \mathcal{E}_1$ and $q_{\theta_2} \in \mathcal{E}_2$ belonging to truncated exponential families [7] $\mathcal{E}_1$ and $\mathcal{E}_2$ with nested supports, and we showed that their KLD is equivalent to a duo Bregman divergence on swapped parameter order (Theorem 1). This result generalizes the study of Azoury and Warmuth [13]. The duo Bregman divergence can be rewritten as a duo Fenchel–Young divergence using mixed natural/moment parameterizations of the exponential family densities (Definition 1). This second result generalizes the approach taken in information geometry [15,35]. We showed how to calculate the Kullback–Leibler divergence between two truncated normal distributions as a duo Bregman divergence. More generally, we proved that the skewed Bhattacharyya distance between two parametric densities of truncated exponential families amounts to a duo Jensen divergence (Theorem 2). We showed that, asymptotically, scaled duo Jensen divergences tend to duo Bregman divergences, generalizing a result of [30,33]. This study of duo divergences induced by a pair of generators was motivated by the formula obtained for the Kullback–Leibler divergence between two densities of two different exponential families originally reported in [23] (Equation (29)).
It would be interesting to find applications of the duo Fenchel–Young, Bregman, and Jensen divergences beyond the scope of calculating statistical distances between truncated exponential family densities. Note that in [36], the authors exhibit a relationship between densities with nested supports and quasi-convex Bregman divergences. However, the parametric densities considered there are not exponential families since their supports depend on the parameter. Recently, Khan and Swaroop [37] used the duo Fenchel–Young divergence in machine learning for knowledge-adaptation priors in the so-called change-regularizer task.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to thank the three reviewers for their helpful comments, which led to this improved paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Sundberg, R. Statistical Modelling by Exponential Families; Cambridge University Press: Cambridge, UK, 2019; Volume 12.
  2. Pitman, E.J.G. Sufficient Statistics and Intrinsic Accuracy. Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1936; Volume 32, pp. 567–579.
  3. Darmois, G. Sur les lois de probabilité à estimation exhaustive. CR Acad. Sci. Paris 1935, 260, 85.
  4. Koopman, B.O. On distributions admitting a sufficient statistic. Trans. Am. Math. Soc. 1936, 39, 399–409.
  5. Hiejima, Y. Interpretation of the quasi-likelihood via the tilted exponential family. J. Jpn. Stat. Soc. 1997, 27, 157–164.
  6. Efron, B.; Hastie, T. Computer Age Statistical Inference: Algorithms, Evidence, and Data Science; Cambridge University Press: Cambridge, UK, 2021; Volume 6.
  7. Akahira, M. Statistical Estimation for Truncated Exponential Families; Springer: Berlin/Heidelberg, Germany, 2017.
  8. Bar-Lev, S.K. Large sample properties of the MLE and MCLE for the natural parameter of a truncated exponential family. Ann. Inst. Stat. Math. 1984, 36, 217–222.
  9. Shah, A.; Shah, D.; Wornell, G. A Computationally Efficient Method for Learning Exponential Family Distributions. Adv. Neural Inf. Process. Syst. 2021, 34. Available online: https://proceedings.neurips.cc/paper/2021/hash/84f7e69969dea92a925508f7c1f9579a-Abstract.html (accessed on 15 March 2022).
  10. Keener, R.W. Theoretical Statistics: Topics for a Core Course; Springer: Berlin/Heidelberg, Germany, 2010.
  11. Cover, T.M. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 1999.
  12. Csiszár, I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Magyar Tud. Akad. Mat. Kutató Int. Közl. 1964, 8, 85–108.
  13. Azoury, K.S.; Warmuth, M.K. Relative loss bounds for on-line density estimation with the exponential family of distributions. Mach. Learn. 2001, 43, 211–246.
  14. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 2015.
  15. Amari, S.I. Differential-geometrical methods in statistics. Lect. Notes Stat. 1985, 28, 1.
  16. Bregman, L.M. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7, 200–217.
  17. Acharyya, S. Learning to Rank in Supervised and Unsupervised Settings Using Convexity and Monotonicity. Ph.D. Thesis, The University of Texas at Austin, Austin, TX, USA, 2013.
  18. Blondel, M.; Martins, A.F.; Niculae, V. Learning with Fenchel-Young losses. J. Mach. Learn. Res. 2020, 21, 1–69.
  19. Nielsen, F. An elementary introduction to information geometry. Entropy 2020, 22, 1100.
  20. Mitroi, F.C.; Niculescu, C.P. An Extension of Young's Inequality. Abstract and Applied Analysis; Hindawi: London, UK, 2011; Volume 2011.
  21. Jeffreys, H. The Theory of Probability; OUP Oxford: Oxford, UK, 1998.
  22. Nielsen, F.; Nock, R. Sided and symmetrized Bregman centroids. IEEE Trans. Inf. Theory 2009, 55, 2882–2904.
  23. Nielsen, F. On a variational definition for the Jensen-Shannon symmetrization of distances based on the information radius. Entropy 2021, 23, 464.
  24. Itakura, F.; Saito, S. Analysis synthesis telephony based on the maximum likelihood method. In Proceedings of the 6th International Congress on Acoustics, Tokyo, Japan, 21–28 August 1968; pp. 280–292.
  25. Del Castillo, J. The singly truncated normal distribution: A non-steep exponential family. Ann. Inst. Stat. Math. 1994, 46, 57–66.
  26. Burkardt, J. The Truncated Normal Distribution; Technical Report; Department of Scientific Computing Website, Florida State University: Tallahassee, FL, USA, 2014.
  27. Kotz, J.; Balakrishnan. Continuous Univariate Distributions, Volumes I and II; John Wiley and Sons: Hoboken, NJ, USA, 1994.
  28. Nielsen, F.; Nock, R. Entropies and cross-entropies of exponential families. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 3621–3624.
  29. Bhattacharyya, A. On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc. 1943, 35, 99–109.
  30. Nielsen, F.; Boltz, S. The Burbea-Rao and Bhattacharyya centroids. IEEE Trans. Inf. Theory 2011, 57, 5455–5466.
  31. Hellinger, E. Neue Begründung der Theorie quadratischer Formen von unendlichvielen Veränderlichen. J. Reine Angew. Math. 1909, 1909, 210–271.
  32. Rao, C.R. Diversity and dissimilarity coefficients: A unified approach. Theor. Popul. Biol. 1982, 21, 24–43.
  33. Zhang, J. Divergence function, duality, and convex analysis. Neural Comput. 2004, 16, 159–195.
  34. Grünwald, P.D. The Minimum Description Length Principle; MIT Press: Cambridge, MA, USA, 2007.
  35. Nielsen, F. The Many Faces of Information Geometry. Not. Am. Math. Soc. 2022, 69.
  36. Nielsen, F.; Hadjeres, G. Quasiconvex Jensen Divergences and Quasiconvex Bregman Divergences. In Workshop on Joint Structures and Common Foundations of Statistical Physics, Information Geometry and Inference for Learning; Springer: Berlin/Heidelberg, Germany, 2020; pp. 196–218.
  37. Emtiyaz Khan, M.; Swaroop, S. Knowledge-Adaptation Priors. arXiv 2021, arXiv:2106.08769.
Figure 2. Visualizing the sided and symmetrized Bregman divergences.
Figure 4. The duo Bregman divergence induced by two strictly convex and differentiable functions $F_1$ and $F_2$ such that $F_1(\theta) \geq F_2(\theta)$. We check graphically that $B_{F_1, F_2}(\theta : \theta') \geq B_{F_2}(\theta : \theta')$ (vertical gaps).
Figure 5. The duo half squared Euclidean distance $D_a^2(\theta : \theta') := \frac{a}{2} \theta^2 + \frac{1}{2} \theta'^2 - \theta \theta'$ is non-negative when $a \geq 1$: (a) half squared Euclidean distance ($a = 1$), (b) $a = 2$, (c) $a = \frac{1}{2}$, which shows that the divergence can then be negative since $a < 1$.
Figure 6. The Legendre transform reverses the dominance ordering: $F_1(\theta) = \theta^2 \geq F_2(\theta) = \theta^4 \Rightarrow F_1^*(\eta) \leq F_2^*(\eta)$ for $\theta \in [0, 1]$.
Figure 7. The duo Jensen divergence $J_{F_1, F_2, \alpha}(\theta_1 : \theta_2)$ is greater than the Jensen divergence $J_{F_1, \alpha}(\theta_1 : \theta_2)$ for $F_2(\theta) \geq F_1(\theta)$.
Table 1. Canonical decomposition of the Poisson and the geometric discrete exponential families.
Quantity | Poisson family $\mathcal{P}$ | Geometric family $\mathcal{Q}$
support | $\mathbb{N} \cup \{0\}$ | $\mathbb{N} \cup \{0\}$
base measure | counting measure | counting measure
ordinary parameter | rate $\lambda > 0$ | success probability $p \in (0, 1)$
pmf | $\frac{\lambda^x}{x!} \exp(-\lambda)$ | $(1 - p)^x p$
sufficient statistic | $t_P(x) = x$ | $t_Q(x) = x$
natural parameter | $\theta(\lambda) = \log \lambda$ | $\theta(p) = \log(1 - p)$
cumulant function | $F_P(\theta) = \exp(\theta)$ | $F_Q(\theta) = -\log(1 - \exp(\theta))$
 | $F_P(\lambda) = \lambda$ | $F_Q(p) = -\log(p)$
auxiliary term | $k_P(x) = -\log x!$ | $k_Q(x) = 0$
moment $\eta = E[t(x)]$ | $\eta = \lambda$ | $\eta = \frac{e^\theta}{1 - e^\theta} = \frac{1}{p} - 1$
negentropy ($F^*(\eta) = \theta \cdot \eta - F(\theta)$) | $F_P^*(\eta(\lambda)) = \lambda \log \lambda - \lambda$ | $F_Q^*(\eta(p)) = \left( \frac{1}{p} - 1 \right) \log(1 - p) + \log p$

