Divergences Induced by the Cumulant and Partition Functions of Exponential Families and Their Deformations Induced by Comparative Convexity

Exponential families are statistical models that serve as workhorses in statistics, information theory, and machine learning, among other fields. An exponential family can be normalized either subtractively by its cumulant or free energy function, or equivalently divisively by its partition function. Both the cumulant and partition functions are strictly convex and smooth, and thus induce corresponding pairs of Bregman and Jensen divergences. It is well known that skewed Bhattacharyya distances between the probability densities of an exponential family amount to skewed Jensen divergences induced by the cumulant function between their corresponding natural parameters, and that in limit cases the sided Kullback–Leibler divergences amount to reverse-sided Bregman divergences. In this work, we first show that the α-divergences between non-normalized densities of an exponential family amount to scaled α-skewed Jensen divergences induced by the partition function. We then show how comparative convexity with respect to a pair of quasi-arithmetic means allows both convex functions and their arguments to be deformed, thereby defining dually flat spaces with corresponding divergences when ordinary convexity is preserved.


Introduction
In information geometry [4], any strictly convex and smooth function induces a dually flat space (DFS) with a canonical divergence which can be expressed in charts either as dual Bregman divergences [9] or equivalently as dual Fenchel–Young divergences [33]. For example, the cumulant function of an exponential family [11] (also called free energy) generates a DFS: an exponential family manifold [36] whose canonical divergence yields the reverse Kullback–Leibler divergence. Another typical example is the negative entropy (negentropy for short) of a mixture family, which induces a DFS: a mixture family manifold whose canonical divergence yields the Kullback–Leibler divergence [33]. Any strictly convex and smooth function also induces a family of scaled skewed Jensen divergences [42,32] which includes in limit cases the sided forward and reverse Bregman divergences.
In §2, we show that there are two equivalent approaches to normalize an exponential family: either by its cumulant function or by its partition function. Since both the cumulant and partition functions are convex, they induce (i) families of scaled skewed Jensen divergences and (ii) dually flat spaces with Bregman divergences corresponding to statistical divergences (Proposition 1). In §3, we recall the well-known result that the statistical α-skewed Bhattacharyya distances between probability densities of an exponential family amount to scaled α-skewed Jensen divergences between their natural parameters. In §4, we prove that the α-divergences between unnormalized densities of an exponential family amount to scaled α-skewed Jensen divergences between their natural parameters (Proposition 5). More generally, we explain in §5 how to deform a convex function using comparative convexity [29]: when ordinary convexity of the deformed convex function is preserved, we thus get new skewed Jensen divergences and DFSs with corresponding Bregman divergences. §6 concludes this work with a discussion.

Let (X, A, µ) be a measure space where X denotes the sample set (e.g., a finite alphabet, N, R^d, the space of positive-definite matrices Sym++(d), etc.), A a σ-algebra on X (e.g., the power set 2^X, the Borel σ-algebra B(X), etc.), and µ a positive measure (e.g., the counting or Lebesgue measure) on the measurable space (X, A). An exponential family [11] is a set of probability distributions P = {P_λ : λ ∈ Λ}, all dominated by µ, such that their Radon–Nikodym densities p_λ(x) = dP_λ/dµ(x) can be expressed as p_λ(x) ∝ p̃_λ(x) = exp(⟨θ(λ), t(x)⟩) h(x), where θ(λ) is the natural parameter, t(x) is the sufficient statistic, h(x) is an auxiliary term used to define the base measure with respect to µ, and ⟨·, ·⟩ is an inner product defined on the parameter space Λ. The unnormalized positive density p̃_λ(x) is indicated with a tilde, and the normalized probability density
is obtained as p_λ(x) = p̃_λ(x)/Z(λ), where Z(λ) = ∫ p̃_λ(x) dµ(x) is the Laplace transform of µ. Exponential families include many well-known families of distributions: for example, the Bernoulli distributions, the Gaussian or normal distributions, the Gamma and Beta distributions, the Wishart distributions, the Poisson distributions, the Rayleigh distributions, etc. The categorical distributions (i.e., discrete distributions on a finite alphabet sample space) form an exponential family too, and exponential families can universally model arbitrarily closely any smooth density [12,30]. Furthermore, any two probability measures Q and R with densities q and r with respect to the dominating measure µ = (Q+R)/2 define an exponential family P_{Q,R} called the likelihood ratio exponential family [19], because the sufficient statistic is t(x) = log(q(x)/r(x)) (with auxiliary carrier term h(x) = r(x)); it is also called the Bhattacharyya arc since the cumulant function of P_{Q,R} is expressed as the negative of the skewed Bhattacharyya distances [24,32]. In machine learning, undirected graphical models [39] and energy-based models [28], including Markov random fields [25] and conditional random fields, are exponential families [13].
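As a quick numerical illustration of the divisive and subtractive normalizations (a sketch with our own helper names, not code from the paper), the following snippet takes the exponential distribution family in natural form, p̃_θ(x) = exp(−θx) on [0, ∞), computes the partition function Z(θ) = ∫ p̃_θ dµ by quadrature, and recovers the cumulant F(θ) = log Z(θ); the closed forms Z(θ) = 1/θ and F(θ) = −log θ serve as a check:

```python
import math

def unnormalized_density(theta, x):
    """Unnormalized density ~p_theta(x) = exp(<theta, t(x)>) with t(x) = -x, x >= 0."""
    return math.exp(-theta * x)

def partition_function_numeric(theta, upper=50.0, n=200_000):
    """Divisive normalizer Z(theta) = integral of ~p_theta over [0, upper], trapezoidal rule."""
    h = upper / n
    s = 0.5 * (unnormalized_density(theta, 0.0) + unnormalized_density(theta, upper))
    for i in range(1, n):
        s += unnormalized_density(theta, i * h)
    return s * h

theta = 2.0
Z = partition_function_numeric(theta)  # closed form: 1/theta = 0.5
F = math.log(Z)                        # subtractive normalizer (cumulant): -log(theta)
print(Z, F)
```

Either normalizer determines the other, mirroring the equivalence of the two normalization schemes discussed above.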
In the context of the λ-deformed exponential families [43] which generalize exponential families, the function Z(θ) is called the divisive normalization factor (Eq. 1) and the function F(θ) is called the subtractive normalization factor (Eq. 2). The function F(θ) is called the cumulant function because when X ∼ p_θ(x) is a random variable following a probability distribution of an exponential family, F(θ) appears in the cumulant generating function of X: K_X(t) = log E_X[e^{⟨t,X⟩}] = F(θ + t) − F(θ). The cumulant function is also called the log-normalizer, or the log-partition function in statistical physics. Since Z > 0 and F = log Z, we deduce that F ≤ Z since log x ≤ x for x > 0.
The order m of an exponential family is the dimension of its sufficient statistic t(x). The full natural parameter space Θ ⊆ R^m is defined by Θ = {θ : ∫ p̃_θ(x) dµ(x) < ∞}. The exponential family is said to be full when θ is allowed to range over the full parameter space Θ (curved exponential families [4] have natural parameters c(θ) ranging in a subset of Θ), and regular when Θ is an open (convex) domain. A full regular exponential family such that both t(x) = x and k(x) = 0 (writing h(x) = e^{k(x)}) is called a natural exponential family. Without loss of generality, we consider natural exponential families, or exponential families with k(x) = 0, in the remainder. It is well known that the cumulant function F(θ) is strictly convex and that the partition function Z(θ) is strictly log-convex [7] (see Appendix A):

Proposition 1 ([7]). The natural parameter space Θ of an exponential family is convex.
It can be shown that the cumulant and partition functions are smooth C^∞ analytic functions [11]. A remarkable property is that strictly log-convex functions are also strictly convex:

Proposition 3. A strictly log-convex function is strictly convex.

Proof. By definition, the function Z(θ) is strictly log-convex if and only if F = log Z is strictly convex, i.e., for all θ_0 ≠ θ_1 and α ∈ (0, 1), Z(αθ_0 + (1 − α)θ_1) < Z(θ_0)^α Z(θ_1)^{1−α}. By the weighted arithmetic mean–geometric mean (AM–GM) inequality [29], a^α b^{1−α} ≤ αa + (1 − α)b for a, b > 0, with equality if and only if a = b. Letting a = Z(θ_0) and b = Z(θ_1), we get Z(αθ_0 + (1 − α)θ_1) < Z(θ_0)^α Z(θ_1)^{1−α} ≤ αZ(θ_0) + (1 − α)Z(θ_1). That is, Z is strictly convex.
See the Appendix for the weaker statement that a log-convex function is convex, proved under the assumption of second-order differentiability.
The converse of Proposition 3 is not necessarily true: some convex functions are not log-convex, and thus the class of strictly log-convex functions is a proper subclass of the strictly convex functions. For example, θ² is convex but log-concave since (log θ²)″ = −2/θ² < 0 (Figure 1).

Remark 1. Since Z = exp(F) is strictly convex (Proposition 3), the cumulant function F is exponentially convex.
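The inclusion of strictly log-convex functions among strictly convex ones, and its strictness, can be probed numerically with a midpoint test (an illustrative sketch, with helper names of our own): the partition function Z(θ) = 1/θ of the exponential distribution family passes both the convexity and log-convexity tests, while θ² passes only the convexity test:

```python
import math

def midpoint_convex_gap(f, a, b):
    """Positive gap (f(a)+f(b))/2 - f((a+b)/2) certifies midpoint convexity of f on [a, b]."""
    return 0.5 * (f(a) + f(b)) - f(0.5 * (a + b))

Z = lambda t: 1.0 / t                  # partition function of the exponential distributions
logZ = lambda t: math.log(Z(t))        # cumulant F = log Z
sq = lambda t: t * t                   # convex but NOT log-convex
logsq = lambda t: math.log(sq(t))

a, b = 0.5, 3.0
print(midpoint_convex_gap(Z, a, b) > 0)       # Z convex
print(midpoint_convex_gap(logZ, a, b) > 0)    # Z log-convex
print(midpoint_convex_gap(sq, a, b) > 0)      # theta^2 convex
print(midpoint_convex_gap(logsq, a, b) < 0)   # but log(theta^2) concave
```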
A strictly log-convex function Z thus yields a pair of strictly convex generators, Z itself and F = log Z, and hence a pair of families of skewed Jensen divergences J_{F,α} and J_{Z,α}. For a strictly convex function F(θ), we define the symmetric Jensen divergence J_F(θ_1, θ_2) = (F(θ_1) + F(θ_2))/2 − F((θ_1 + θ_2)/2). Let B_Θ denote the set of real-valued strictly convex and differentiable functions defined on an open set Θ, called Bregman generators. We may equivalently consider the set of strictly concave and differentiable functions G(θ), letting F(θ) = −G(θ); see [40] (Equation 1).
Remark 2. The non-negativity of the Bregman divergences induced by the cumulant and partition functions provides criteria for checking the strict convexity or strict log-convexity of a C¹ function. The forward Bregman divergence B_F(θ_1 : θ_2) and the reverse Bregman divergence B_F(θ_2 : θ_1) can be unified with the α-skewed Jensen divergences J_{F,α}(θ_1 : θ_2) = αF(θ_1) + (1 − α)F(θ_2) − F(αθ_1 + (1 − α)θ_2) by rescaling J_{F,α} and allowing α to range over R [42,32]: J^s_{F,α}(θ_1 : θ_2) = (1/(α(1 − α))) J_{F,α}(θ_1 : θ_2), with lim_{α→0} J^s_{F,α}(θ_1 : θ_2) = B_F(θ_1 : θ_2) and lim_{α→1} J^s_{F,α}(θ_1 : θ_2) = B*_F(θ_1 : θ_2), where B*_F denotes the reverse Bregman divergence obtained by swapping the parameter order (reference duality [42]).
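The limit behavior of the scaled skewed Jensen divergence can be verified numerically (a sketch under our own naming, using the cumulant F(θ) = −log θ of the exponential distribution family as generator): for α near 0 the scaled Jensen divergence approaches the forward Bregman divergence, and for α near 1 the reverse one:

```python
import math

def F(theta):
    """Cumulant of the exponential distribution family: F(theta) = -log(theta)."""
    return -math.log(theta)

def gradF(theta):
    return -1.0 / theta

def scaled_jensen(F, alpha, t1, t2):
    """J^s_{F,alpha}(t1:t2) = [alpha F(t1) + (1-alpha) F(t2) - F(alpha t1 + (1-alpha) t2)] / (alpha (1-alpha))."""
    return (alpha * F(t1) + (1 - alpha) * F(t2)
            - F(alpha * t1 + (1 - alpha) * t2)) / (alpha * (1 - alpha))

def bregman(F, gradF, t1, t2):
    """B_F(t1:t2) = F(t1) - F(t2) - (t1 - t2) F'(t2)."""
    return F(t1) - F(t2) - (t1 - t2) * gradF(t2)

t1, t2 = 1.0, 3.0
print(scaled_jensen(F, 1e-6, t1, t2), bregman(F, gradF, t1, t2))      # alpha -> 0: forward
print(scaled_jensen(F, 1 - 1e-6, t1, t2), bregman(F, gradF, t2, t1))  # alpha -> 1: reverse
```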
Remark 3. Alternatively, we may rescale J_F by another factor κ(α).

In §3, we first recall the connections between these Jensen and Bregman divergences, which are divergences between parameters, and their statistical divergence counterparts between probability densities. We then introduce in §4 the novel connections between these parameter divergences and the α-divergences between unnormalized densities.

Divergences related to the cumulant function
Consider the scaled α-skewed Bhattacharyya distances [24,32] between two probability densities p(x) and q(x): D^s_{B,α}(p : q) = (1/(α(1 − α))) D_{B,α}(p : q), with D_{B,α}(p : q) = −log ∫ p(x)^α q(x)^{1−α} dµ(x). The scaled α-skewed Bhattacharyya distances can also be interpreted as scaled Rényi divergences [38]. Since D^s_{B,α}(p : q) tends to the Kullback–Leibler divergence D_KL(p : q) when α → 1 and to the reverse Kullback–Leibler divergence D*_KL(p : q) when α → 0, the sided KLDs are recovered in the limit. When both probability densities belong to the same exponential family E = {p_θ(x) : θ ∈ Θ} with cumulant function F(θ), we have the following proposition:

Proposition 4 ([32]). The scaled α-skewed Bhattacharyya distance between two probability densities p_{θ_1} and p_{θ_2} of an exponential family amounts to the scaled α-skewed Jensen divergence between their natural parameters: D^s_{B,α}(p_{θ_1} : p_{θ_2}) = J^s_{F,α}(θ_1 : θ_2).

Proof. The proof follows by first considering the α-skewed Bhattacharyya similarity coefficient ρ_α(p : q) = ∫ p^α q^{1−α} dµ.
For densities of E, we have p_{θ_1}^α p_{θ_2}^{1−α} = exp(⟨αθ_1 + (1 − α)θ_2, x⟩ − αF(θ_1) − (1 − α)F(θ_2)). Multiplying and dividing by exp(F(αθ_1 + (1 − α)θ_2)), and using the fact that ∫ exp(⟨θ̄, x⟩ − F(θ̄)) dµ = 1 since θ̄ = αθ_1 + (1 − α)θ_2 ∈ Θ, we get ρ_α(p_{θ_1} : p_{θ_2}) = exp(−J_{F,α}(θ_1 : θ_2)), and the proposition follows by taking the negative logarithm and rescaling. For practitioners in machine learning, it is well known that the Kullback–Leibler divergence between two probability densities p_{θ_1} and p_{θ_2} of an exponential family amounts to a Bregman divergence for the cumulant generator on swapped parameter order (e.g., [5] and [3]): D_KL(p_{θ_1} : p_{θ_2}) = B_F(θ_2 : θ_1). This is a particular instance of Eq. 13 obtained for α = 1. This formula has been further generalized in [31] by considering truncations of exponential family densities. Truncated exponential families are normalized exponential families which may not be regular [14] (i.e., the parameter space Θ may not be open).
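The identity D_KL(p_{θ_1} : p_{θ_2}) = B_F(θ_2 : θ_1) can be checked numerically on the exponential distribution family (an illustrative sketch with our own helper names): the KLD is computed by quadrature and compared against the swapped Bregman divergence for the cumulant F(θ) = −log θ:

```python
import math

def kl_exponential_numeric(l1, l2, upper=60.0, n=200_000):
    """KL(p_{l1} : p_{l2}) between exponential densities p_l(x) = l e^{-lx}, trapezoidal rule."""
    def integrand(x):
        p = l1 * math.exp(-l1 * x)
        return p * (math.log(l1 / l2) + (l2 - l1) * x)  # p log(p/q) expanded
    h = upper / n
    s = 0.5 * (integrand(0.0) + integrand(upper))
    for i in range(1, n):
        s += integrand(i * h)
    return s * h

def bregman_cumulant_swapped(t1, t2):
    """B_F(theta2 : theta1) for the cumulant F(theta) = -log(theta)."""
    F = lambda t: -math.log(t)
    gF = lambda t: -1.0 / t
    return F(t2) - F(t1) - (t2 - t1) * gF(t1)

l1, l2 = 2.0, 0.5
print(kl_exponential_numeric(l1, l2))    # statistical divergence
print(bregman_cumulant_swapped(l1, l2))  # parameter divergence: same value
```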

Divergences related to the partition function
The squared Hellinger distance [4] between two positive, potentially unnormalized, densities p and q is defined by D²_H(p, q) = (1/2) ∫ (√p − √q)² dµ. Notice that the squared Hellinger distance can be interpreted as the integral of the difference between the arithmetic mean A(p, q) = (p + q)/2 and the geometric mean G(p, q) = √(pq): D²_H(p, q) = ∫ (A(p(x), q(x)) − G(p(x), q(x))) dµ(x). This also proves that D_H(p, q) ≥ 0 since A ≥ G. The Hellinger distance D_H satisfies the metric axioms of distances. When considering unnormalized densities p̃_{θ_1} = exp(⟨t(x), θ_1⟩) and p̃_{θ_2} = exp(⟨t(x), θ_2⟩) of an exponential family E with partition function Z(θ) = ∫ p̃_θ dµ, we get D²_H(p̃_{θ_1}, p̃_{θ_2}) = (Z(θ_1) + Z(θ_2))/2 − Z((θ_1 + θ_2)/2) = J_Z(θ_1, θ_2), since √(p̃_{θ_1} p̃_{θ_2}) = p̃_{(θ_1+θ_2)/2}.
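This Hellinger-to-Jensen identity can be verified numerically (a sketch with our own helper names) on the family p̃_θ(x) = e^{−θx} over [0, ∞), whose partition function is Z(θ) = 1/θ: the quadrature of ∫(A − G) dµ should match the symmetric Jensen divergence J_Z:

```python
import math

def Z(theta):
    """Partition function of ~p_theta(x) = e^{-theta x} on [0, inf): Z(theta) = 1/theta."""
    return 1.0 / theta

def hellinger_sq_numeric(t1, t2, upper=80.0, n=200_000):
    """Integral of A(~p1,~p2) - G(~p1,~p2), i.e., the squared Hellinger distance."""
    def integrand(x):
        p, q = math.exp(-t1 * x), math.exp(-t2 * x)
        return 0.5 * (p + q) - math.sqrt(p * q)
    h = upper / n
    s = 0.5 * (integrand(0.0) + integrand(upper))
    for i in range(1, n):
        s += integrand(i * h)
    return s * h

t1, t2 = 1.0, 4.0
jensen_Z = 0.5 * (Z(t1) + Z(t2)) - Z(0.5 * (t1 + t2))  # J_Z(theta1, theta2)
print(hellinger_sq_numeric(t1, t2), jensen_Z)          # both ~ 0.225
```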
When considering unnormalized densities p̃_{θ_1} and p̃_{θ_2} of E, we get:

Proposition 5. The α-divergences between unnormalized densities of an exponential family amount to scaled α-skewed Jensen divergences between their natural parameters for the partition function: D_α(p̃_{θ_1} : p̃_{θ_2}) = J^s_{Z,α}(θ_1 : θ_2) for α ∉ {0, 1}. When α ∈ {0, 1}, the oriented Kullback–Leibler divergences between unnormalized exponential family densities amount to reverse Bregman divergences on their corresponding natural parameters for the partition function: D_KL(p̃_{θ_1} : p̃_{θ_2}) = B_Z(θ_2 : θ_1).

Proof. For α ∉ {0, 1}, consider D_α(p̃ : q̃) = (1/(α(1 − α))) ∫ (αp̃ + (1 − α)q̃ − p̃^α q̃^{1−α}) dµ. We have ∫ αp̃_{θ_1} dµ = αZ(θ_1), ∫ (1 − α)p̃_{θ_2} dµ = (1 − α)Z(θ_2), and ∫ p̃_{θ_1}^α p̃_{θ_2}^{1−α} dµ = Z(αθ_1 + (1 − α)θ_2), which yields the scaled α-skewed Jensen divergence J^s_{Z,α}(θ_1 : θ_2).

Notice that the KLD extended to unnormalized densities can be written as a generalized relative entropy, i.e., obtained as the difference of the extended cross-entropy minus the extended entropy (self cross-entropy): D_KL(p̃ : q̃) = H^×(p̃ : q̃) − H(p̃).

Remark 4. In general, let us consider two unnormalized positive densities p̃(x) and q̃(x), and let p(x) = p̃(x)/Z_p and q(x) = q̃(x)/Z_q denote their corresponding normalized densities (with normalizing factors Z_p = ∫ p̃ dµ and Z_q = ∫ q̃ dµ). The KLD between p̃ and q̃ can be expressed using the KLD between their normalized densities and the normalizing factors: D_KL(p̃ : q̃) = Z_p D_KL(p : q) + Z_p log(Z_p/Z_q) + Z_q − Z_p. Similarly, we have D_KL(p : q) = H^×(p : q) − H(p).
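Proposition 5 in the limit case α → 1 can also be checked numerically (an illustrative sketch with our own names): the extended KLD between the unnormalized densities p̃_θ(x) = e^{−θx}, computed by quadrature, should equal B_Z(θ_2 : θ_1) for the partition function Z(θ) = 1/θ:

```python
import math

def extended_kl_numeric(t1, t2, upper=80.0, n=200_000):
    """D_KL(~p_{t1} : ~p_{t2}) = integral of ~p1 log(~p1/~p2) + ~p2 - ~p1 for ~p_t(x) = e^{-tx}."""
    def integrand(x):
        p, q = math.exp(-t1 * x), math.exp(-t2 * x)
        return p * (t2 - t1) * x + q - p  # log(~p1/~p2) = (t2 - t1) x
    h = upper / n
    s = 0.5 * (integrand(0.0) + integrand(upper))
    for i in range(1, n):
        s += integrand(i * h)
    return s * h

def bregman_Z_swapped(t1, t2):
    """B_Z(theta2 : theta1) for Z(theta) = 1/theta, Z'(theta) = -1/theta^2."""
    Z = lambda t: 1.0 / t
    gZ = lambda t: -1.0 / (t * t)
    return Z(t2) - Z(t1) - (t2 - t1) * gZ(t1)

t1, t2 = 1.0, 3.0
print(extended_kl_numeric(t1, t2), bregman_Z_swapped(t1, t2))  # both ~ 4/3
```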
Notice that Eq. 20 allows us to derive an identity between B_Z and B_F. Let D_skl(a : b) = a log(a/b) + b − a be the scalar KLD for a > 0 and b > 0; then Eq. 20 can be rewritten using D_skl. The KLD between unnormalized densities p̃ and q̃ with support X can also be written as a definite integral of a scalar Bregman divergence: D_KL(p̃ : q̃) = ∫_X B_{f_skl}(p̃(x) : q̃(x)) dµ(x), where f_skl(x) = x log x − x. Since B_{f_skl}(a : b) ≥ 0 for all a > 0, b > 0, we deduce that D_KL(p̃ : q̃) ≥ 0, with equality iff p̃(x) = q̃(x) µ-almost everywhere.
More generally, the α-divergences between p̃_{θ_1} and p̃_{θ_2} can be written as scaled α-skewed Jensen divergences J^s_{Z,α}(θ_1 : θ_2) induced by the partition function, and the (signed) α-skewed Bhattacharyya distances are obtained accordingly.

Example 2. Consider the α-divergences between two unnormalized densities p̃_{σ²}: the partition function expressed with the natural parameter θ is strictly convex on Θ, and the unnormalized KLD between p̃_{σ_1²} and p̃_{σ_2²} follows from Proposition 5; in particular, we check that D_KL(p̃_{σ²} : p̃_{σ²}) = 0.
Similarly, for the Hellinger divergence, we check that D_H(p̃_{σ²}, p̃_{σ²}) = 0.

Comparative convexity
Log-convexity can be interpreted as a special case of comparative convexity with respect to a pair (M, N) of comparable weighted means [29]: a function Z is (M, N)-convex if and only if, for α ∈ [0, 1], Z(M(x, y; α, 1 − α)) ≤ N(Z(x), Z(y); α, 1 − α), and strictly (M, N)-convex iff the inequality is strict for α ∈ (0, 1) and x ≠ y. Log-convexity corresponds to (A, G)-convexity, i.e., convexity with respect to the weighted arithmetic and geometric means defined respectively by A(x, y; α, 1 − α) = αx + (1 − α)y and G(x, y; α, 1 − α) = x^α y^{1−α}. Ordinary convexity is (A, A)-convexity.
A weighted quasi-arithmetic mean [26] (also called a Kolmogorov–Nagumo mean [27]) is defined for a continuous and strictly increasing function h by M_h(x, y; α, 1 − α) = h^{−1}(αh(x) + (1 − α)h(y)). We let M_h(x, y) = M_h(x, y; 1/2, 1/2). Quasi-arithmetic means include the arithmetic mean obtained for h(u) = id(u) = u and the geometric mean for h(u) = log u, and more generally the power means, which are quasi-arithmetic means obtained for the family of generators h_p(u) = (u^p − 1)/p with inverses h_p^{−1}(u) = (1 + up)^{1/p} for p ≠ 0. In the limit p → 0, we have M_0(x, y) = G(x, y) for the generator lim_{p→0} h_p(u) = h_0(u) = log u.
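A quasi-arithmetic mean is straightforward to implement from its generator (an illustrative sketch with our own helper names); the snippet below recovers the arithmetic mean for p = 1 and approaches the geometric mean as p → 0, as stated above:

```python
import math

def quasi_arithmetic_mean(h, h_inv, x, y, alpha=0.5):
    """Weighted quasi-arithmetic mean M_h(x, y; alpha, 1-alpha) = h^{-1}(alpha h(x) + (1-alpha) h(y))."""
    return h_inv(alpha * h(x) + (1 - alpha) * h(y))

def power_mean(p, x, y, alpha=0.5):
    """Power mean from the generator h_p(u) = (u^p - 1)/p with inverse h_p^{-1}(u) = (1 + u p)^{1/p}."""
    h = lambda u: (u ** p - 1.0) / p
    h_inv = lambda u: (1.0 + u * p) ** (1.0 / p)
    return quasi_arithmetic_mean(h, h_inv, x, y, alpha)

x, y = 2.0, 8.0
print(power_mean(1.0, x, y))                             # arithmetic mean A(2, 8) = 5
print(power_mean(1e-9, x, y))                            # p -> 0: geometric mean G(2, 8) ~ 4
print(quasi_arithmetic_mean(math.log, math.exp, x, y))   # geometric mean exactly
```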

Proposition 6 ([1,34]). A function Z(θ) is strictly (M_ρ, M_τ)-convex with respect to two strictly increasing smooth functions ρ and τ if and only if the function F = τ ∘ Z ∘ ρ^{−1} is strictly convex.

Notice that the set of strictly increasing smooth functions forms a non-abelian group, with the group operation being function composition, the neutral element the identity function, and the inverse element the functional inverse.
For an (M_ρ, M_τ)-convex function Z(θ) which is also strictly convex, we can define a pair of Bregman divergences B_Z and B_F with F(θ) = τ(Z(ρ^{−1}(θ))), and a corresponding pair of skewed Jensen divergences.
Thus we have the following generic deformation scheme: in particular, when the function Z is deformed by strictly increasing power generators h_{p_1} and h_{p_2} for p_1 and p_2 in R as Z_{p_1,p_2}(θ) = h_{p_2}(Z(h_{p_1}^{−1}(θ))), the deformed function is strictly convex when Z is strictly (M_{p_1}, M_{p_2})-convex, and thus induces corresponding Bregman and Jensen divergences.

Example 3. Consider the partition function Z of an exponential family: we can deform Z smoothly into Z_p while preserving convexity by ranging p from −1 to +∞. We thus get a corresponding family of Bregman and Jensen divergences.
The proposed convex deformation using quasi-arithmetic mean generators differs from the interpolation of convex functions obtained by the technique of the proximal average [8].
Note that in [34], comparative convexity with respect to a pair of quasi-arithmetic means (M ρ , M τ ) is used to define a (M ρ , M τ )-Bregman divergence which turns out to be equivalent to a conformal Bregman divergence on the ρ-embedding of the parameters.

Dually flat spaces
We start with a refinement of the class of convex functions used to generate dually flat spaces:

Definition 2 (Legendre-type function [35]). (Θ, F) is of Legendre type if the function F : Θ → R is strictly convex and differentiable on a non-empty open convex domain Θ, with ∥∇F(θ)∥ → ∞ as θ approaches the boundary of Θ.

A Legendre-type function (Θ, F) admits a convex conjugate F*(η) via the Legendre–Fenchel transform F*(η) = sup_{θ∈Θ} ⟨θ, η⟩ − F(θ). A smooth and strictly convex function (Θ, F(θ)) of Legendre type [35] induces a dually flat space [4] M, i.e., a smooth Hessian manifold [37] with a single global chart (Θ, θ(·)) [4]. A canonical divergence D(p : q) between two points p and q of M is viewed as a single-parameter contrast function [16] D(r_{pq}) on the product manifold M × M. The canonical divergence and its dual canonical divergence D*(r_{qp}) = D(r_{pq}) can be expressed equivalently either as dual Bregman divergences or as dual Fenchel–Young divergences (Figure 2), where Y_{F,F*} is the Fenchel–Young divergence Y_{F,F*}(θ_1 : η_2) = F(θ_1) + F*(η_2) − ⟨θ_1, η_2⟩. Thus a log-convex Legendre-type function Z(θ) induces two dually flat spaces, namely the DFSs induced by Z(θ) and by F(θ) = log Z(θ). Let the gradient maps be η = ∇Z(θ) and η̄ = ∇F(θ) = η/Z(θ). When F(θ) is chosen as the cumulant function of an exponential family, the Bregman divergence B_F(θ_1 : θ_2) can be interpreted as a statistical divergence between the corresponding probability densities; namely, the Bregman divergence amounts to the reverse Kullback–Leibler divergence. Notice that deforming a convex function F(θ) into F(ρ(θ)) such that F ∘ ρ remains strictly convex has been considered by Yoshizawa and Tanabe [41] to build a two-parameter deformation ρ_{α,β} of the dually flat space induced by the cumulant function F(θ) of the multivariate normal family. See also the method of Hougaard [21] to obtain other exponential families from a given exponential family.
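The equality between the Bregman divergence in θ × θ-coordinates and the Fenchel–Young divergence in mixed θ × η-coordinates can be checked numerically (an illustrative sketch with our own names), again for the Legendre-type generator F(θ) = −log θ on Θ = (0, ∞), whose conjugate works out to F*(η) = −1 − log(−η) on H = (−∞, 0):

```python
import math

# Legendre-type generator: cumulant of the exponential distribution family.
F = lambda t: -math.log(t)
gradF = lambda t: -1.0 / t               # eta = grad F(theta), ranging in (-inf, 0)
F_conj = lambda e: -1.0 - math.log(-e)   # F*(eta) = sup_theta theta*eta - F(theta)

def fenchel_young(t1, e2):
    """Y_{F,F*}(theta1 : eta2) = F(theta1) + F*(eta2) - theta1 * eta2."""
    return F(t1) + F_conj(e2) - t1 * e2

def bregman(t1, t2):
    """B_F(theta1 : theta2) in the theta x theta chart."""
    return F(t1) - F(t2) - (t1 - t2) * gradF(t2)

t1, t2 = 2.0, 5.0
print(fenchel_young(t1, gradF(t2)), bregman(t1, t2))  # equal in mixed coordinates
```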
Thus in general there are many more dually flat spaces with corresponding divergences and statistical divergences than the usually considered exponential family manifold [36] induced by the cumulant function.It is interesting to consider their use in information sciences.
The canonical divergence D and the dual canonical divergence D* on a dually flat space M equipped with potential functions F and F* are viewed as single-parameter contrast functions on the product manifold M × M. The divergence D can be expressed either in the θ × θ-coordinate system as a Bregman divergence or in the mixed θ × η-coordinate system as a Fenchel–Young divergence. Similarly, the dual divergence D* can be expressed either in the η × η-coordinate system as a dual Bregman divergence or in the mixed η × θ-coordinate system as a dual Fenchel–Young divergence.

Conclusion and discussion
For the machine learning practitioner, it is well known that the Kullback–Leibler divergence (KLD) between two probability densities p_{θ_1} and p_{θ_2} of an exponential family with cumulant function F (free energy) amounts to a reverse Bregman divergence [5] induced by F, or equivalently to a reverse Fenchel–Young divergence [3]: D_KL(p_{θ_1} : p_{θ_2}) = B_F(θ_2 : θ_1) = Y_{F,F*}(θ_2 : η_1), where η = ∇F(θ) is the dual moment or expectation parameter.
In this paper, we showed that the KLD extended to positive unnormalized densities p̃_{θ_1} and p̃_{θ_2} of an exponential family with convex partition function Z(θ) (a Laplace transform) amounts to a reverse Bregman divergence induced by Z, or equivalently to a reverse Fenchel–Young divergence: D_KL(p̃_{θ_1} : p̃_{θ_2}) = B_Z(θ_2 : θ_1) = Y_{Z,Z*}(θ_2 : η_1), where η = ∇Z(θ).
More generally, we showed that the scaled α-skewed Jensen divergences induced by the cumulant and partition functions between natural parameters coincide with the scaled α-skewed Bhattacharyya distances between probability densities and with the α-divergences between unnormalized densities, respectively. We noticed that the partition functions Z of exponential families are both convex and log-convex, and that the corresponding cumulant functions are both convex and exponentially convex. Figure 3 summarizes the relationships between statistical divergences between normalized and unnormalized densities of an exponential family and the corresponding divergences between their natural parameters. Notice that Brekelmans and Nielsen [10] considered deformed uni-order likelihood ratio exponential families (LREFs) for annealing paths, and obtained an identity between the α-divergences between unnormalized densities and Bregman divergences induced by multiplicatively scaled partition functions.
Since the log-convex partition function is also convex, we generalized the principle of building pairs of convex generators using comparative convexity with respect to a pair of quasi-arithmetic means, and discussed the induced dually flat spaces and divergences. In particular, by considering the convexity-preserving deformations obtained by power mean generators, we showed how to obtain a family of convex generators and dually flat spaces. Notice that some parametric families of Bregman divergences, like the α-divergences [2] or the β-divergences [20], yield smooth families of dually flat spaces.
Banerjee et al. [6] proved a duality between regular exponential families and a subclass of Bregman divergences that they termed accordingly regular Bregman divergences.In particular, this duality allows one to view the Maximum Likelihood Estimator (MLE) of an exponential family with cumulant function F as a right-sided Bregman centroid with respect to the Legendre-Fenchel dual F * .The scope of that duality was further extended for arbitrary Bregman divergences by introducing a class of generalized exponential families in [17].
Concave deformations have also been recently studied in [22]: the authors introduce the log_φ-concavity induced by a positive continuous function φ generating a deformed logarithm log_φ as the (A, log_φ)-comparative concavity (Definition 1.2 in [22]), and the weaker notion of F-concavity which corresponds to (A, F)-concavity (Definition 2.1 in [22], requiring strictly increasing functions F). Our deformation framework Z = τ^{−1} ∘ F ∘ ρ is more general since it is double-sided: we deform both the function F by F_τ = τ^{−1} ∘ F and its argument θ by θ_ρ = ρ(θ).
Exponentially concave functions are considered as generators of L-divergences in [40], and α-exponentially concave functions G, such that exp(αG) is concave for α > 0, generalize the L-divergences to L_α-divergences which can be expressed equivalently using a generalization of the Fenchel–Young divergence based on c-transforms [40]. When α < 0, exponentially convex functions are considered instead of exponentially concave functions. The information geometry induced by L_α-divergences is dually projectively flat with constant curvature, and reciprocally, a dually projectively flat structure with constant curvature induces (locally) a canonical L_{−α}-divergence. Wong and Zhang [44] investigated a one-parameter deformation of convex duality called λ-duality by considering functions f such that (1/λ)(e^{λf} − 1) is convex for λ ≠ 0. They defined the λ-conjugate transform as a particular case of the c-transform [40] and studied the information geometry of the induced λ-logarithmic divergences. The λ-duality yields a generalization of exponential and mixture families to λ-exponential and λ-mixture families related to the Rényi divergence.
Finally, a class of statistical divergences called projective divergences are invariant under rescaling of their arguments: for example, the γ-divergence [18] D_γ satisfies D_γ(p̃ : q̃) = D_γ(p : q), and the γ-divergence tends to the KLD when γ → 0.

A Convexity of the cumulant functions of exponential families
Let us prove Proposition 1 and Proposition 2:

Proposition 1 ([7]). The natural parameter space Θ of an exponential family is convex.

Proposition 2 ([7]). The cumulant function F(θ) is strictly convex, and the partition function Z(θ) is positive and strictly log-convex.
In Proposition 3, we proved that strictly log-convex functions are strictly convex. Let us prove a weaker version of this proposition using the second-order differentiability of convex functions.

Proof. A C² function f(x) is convex iff f″(x) ≥ 0. A C² function f(x) with f″(x) > 0 is strictly convex, but not necessarily conversely: for example, f(x) = x⁴ is strictly convex but f″(x) = 12x² vanishes at x = 0. We can prove the convexity of log-convex functions by considering successively the univariate and multivariate cases:

• In the univariate case m = 1, consider the log-convex function Z(θ) = exp(F(θ)) for F(θ) a convex function with F″(θ) ≥ 0. Then we have Z″(θ) = (F″(θ) + (F′(θ))²) e^{F(θ)} ≥ 0.

• In the multivariate case, the Hessian of Z = exp(F) is ∇²Z(θ) = (∇²F(θ) + ∇F(θ)∇F(θ)^⊤) e^{F(θ)}, which is positive semi-definite as the sum of two positive semi-definite matrices scaled by e^{F(θ)} > 0.
We can give statistical interpretations of the potential functions and their gradients as follows:

Figure 1: Strictly log-convex functions form a proper subset of strictly convex functions.

Let us illustrate Proposition 5 with an example:

Example 1. Consider the family of exponential distributions E = {p_λ(x) = 1_{x≥0} λ exp(−λx)}. E is an exponential family with natural parameter θ = λ, natural parameter space Θ = R_{>0}, and sufficient statistic t(x) = −x. The partition function is Z(θ) = ∫_0^∞ exp(−θx) dx = 1/θ, with cumulant function F(θ) = log Z(θ) = −log θ.

We have the dual global coordinate system η = ∇F(θ) with domain H = {∇F(θ) : θ ∈ Θ}, which defines the dual Legendre-type potential function (H, F*(η)). Being of Legendre type ensures that F** = F. The manifold M is called dually flat because the torsion-free affine connections ∇ and ∇* induced by the potential functions F(θ) and F*(η) linked by the Legendre–Fenchel transformation are flat: that is, their Christoffel symbols vanish in the respective coordinate systems: Γ(θ) = 0 and Γ*(η) = 0. The Legendre-type function (Θ, F(θ)) is not defined uniquely: the function F̄(θ̄) = F(Aθ + b) + ⟨C, θ⟩ + d with θ̄ = Aθ + b, for an invertible matrix A, vectors C and b, and a scalar d, defines the same dually flat space with the same canonical divergence D(p, q).

Figure 3: Statistical divergences between normalized densities p_θ and unnormalized densities p̃_θ of an exponential family E, with the corresponding divergences between their natural parameters. Without loss of generality, we consider a natural exponential family (i.e., t(x) = x and k(x) = 0) with cumulant function F and partition function Z. J_F and B_F denote the Jensen and Bregman divergences induced by the generator F, respectively. The statistical divergences D_{R,α} and D_{B,α} denote the Rényi α-divergences and the skewed α-Bhattacharyya distances, respectively. The superscript "s" indicates a rescaling by the multiplicative factor 1/(α(1−α)), and the superscript "*" denotes the reverse divergence obtained by swapping the parameter order.