Communication

Equivalence of Informations Characterizes Bregman Divergences

by
Philip S. Chodrow
Department of Computer Science, Middlebury College, Middlebury, VT 05753, USA
Entropy 2025, 27(7), 766; https://doi.org/10.3390/e27070766
Submission received: 30 May 2025 / Revised: 16 July 2025 / Accepted: 16 July 2025 / Published: 19 July 2025
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

Bregman divergences form a class of distance-like comparison functions which plays fundamental roles in optimization, statistics, and information theory. One important property of Bregman divergences is that they generate agreement between two useful formulations of information content (in the sense of variability or non-uniformity) in weighted collections of vectors. The first of these is the Jensen gap information, which measures the difference between the mean value of a strictly convex function evaluated on a weighted set of vectors and the value of that function evaluated at the centroid of that collection. The second of these is the divergence information, which measures the mean divergence of the vectors in the collection from their centroid. In this brief note, we prove that the agreement between Jensen gap and divergence informations in fact characterizes the class of Bregman divergences; they are the only divergences that generate this agreement for arbitrary weighted sets of data vectors.

1. Introduction

For a convex set $C \subseteq \mathbb{R}^n$ with relative interior $C^*$ and a strictly convex function $\phi : C \to \mathbb{R}$ that is differentiable on $C^*$, the Bregman divergence induced by $\phi$ is the function $d_\phi : C \times C^* \to \mathbb{R}$ defined by
$$d_\phi(x_1, x_2) = \phi(x_1) - \phi(x_2) - \nabla \phi(x_2)^T (x_1 - x_2).$$
Two common examples of Bregman divergences are:
  • The squared Mahalanobis distance $d_\phi(x_1, x_2) = (x_1 - x_2)^T W (x_1 - x_2)$, where $W$ is a positive-definite matrix. The function $\phi$ is given by $\phi(x) = x^T W x$. The special case $W = I$ gives the squared Euclidean distance. This divergence may be defined on $C = \mathbb{R}^n$.
  • The Kullback–Leibler (KL) divergence $d_\phi(x, y) = \sum_{j=1}^n x_j \log \frac{x_j}{y_j}$, where $x$ and $y$ are probability vectors. Here, $C$ is the simplex $\Delta = \{ x \in \mathbb{R}^n \mid \sum_{j=1}^n x_j = 1, \; x_j \geq 0 \;\; \forall j \}$. The KL divergence is induced by the negative entropy function $\phi(x) = \sum_{j=1}^n x_j \log x_j$. This divergence can be extended to general convex subsets of $\mathbb{R}^n_+$ with the formula $d_\phi(x, y) = \sum_{j=1}^n \left( x_j \log \frac{x_j}{y_j} + y_j - x_j \right)$ for $x, y \in \mathbb{R}^n_+$. When computing the KL divergence, we use the convention $0 \log 0 = 0$. A brief numerical sketch of both examples appears after this list.
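The following minimal Python sketch (assuming NumPy; the helper function `bregman` and all variable names are illustrative, not from any library) evaluates the defining formula for $d_\phi$ directly and checks it against the two closed forms above.

```python
import numpy as np

def bregman(phi, grad_phi, x1, x2):
    """d_phi(x1, x2) = phi(x1) - phi(x2) - <grad phi(x2), x1 - x2>."""
    return phi(x1) - phi(x2) - grad_phi(x2) @ (x1 - x2)

rng = np.random.default_rng(0)

# Squared Mahalanobis distance, induced by phi(x) = x^T W x.
A = rng.normal(size=(3, 3))
W = A @ A.T + 3 * np.eye(3)                      # a positive-definite matrix
phi_mahal = lambda x: x @ W @ x
grad_mahal = lambda x: 2 * W @ x
x1, x2 = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(bregman(phi_mahal, grad_mahal, x1, x2),
                  (x1 - x2) @ W @ (x1 - x2))

# KL divergence, induced by the negative entropy phi(x) = sum_j x_j log x_j.
phi_negent = lambda x: np.sum(x * np.log(x))
grad_negent = lambda x: np.log(x) + 1.0
p, q = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
assert np.isclose(bregman(phi_negent, grad_negent, p, q),
                  np.sum(p * np.log(p / q)))
print("Both closed forms match the defining formula for d_phi.")
```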
While it is possible to extend the definition of Bregman divergences to Banach spaces [1], in this note we focus on divergences whose domains are convex subsets of $\mathbb{R}^n$. In this setting, it is possible to interpret the Bregman divergence as a comparison between the difference $\phi(x_1) - \phi(x_2)$ on the one hand and the linearized approximation of this difference about $x_2$, given by $\nabla \phi(x_2)^T (x_1 - x_2)$, on the other.
Like metrics, Bregman divergences are positive-definite: $d_\phi(x, y) \geq 0$, with equality if and only if $x = y$. Unlike metrics, Bregman divergences are not in general symmetric and do not in general satisfy a triangle inequality, though they do satisfy a “law of cosines” and a generalized Pythagorean theorem [2]. Bregman divergences are locally distance-like in that they induce a Riemannian metric on $C$, obtained from the small-$\delta$ expansion
$$d_\phi(x + \delta, x) = \tfrac{1}{2} \delta^T H_\phi(x) \delta + o(\|\delta\|^2),$$
where $\delta$ is a small perturbation vector and $H_\phi(x)$ is the Hessian of $\phi$ at $x$. Because $\phi$ is strictly convex, $H_\phi(x)$ is positive-definite and defines a Riemannian metric on $C$ [3]; much work in information geometry [4] pursues the geometry induced by this metric and its connections to statistical inference. Bregman divergences [5] also play fundamental roles in machine learning, optimization, and information theory. They are the unique class of distance-like losses for which iterative, centroid-based clustering algorithms (such as k-means) always reduce the global loss [2,6]. Bregman divergences are also central to the formulation of mirror-descent methods for convex optimization [7] and are connected, via convex duality, to Fenchel–Young loss functions [4,8]. See Reem et al. [9] for a more detailed review of Bregman divergences.
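As a quick sanity check on this expansion, the sketch below (Python with NumPy; a sketch only, using the generalized KL divergence and negative entropy from the examples above) compares $d_\phi(x + t\delta, x)$ with the quadratic form $\tfrac{1}{2} (t\delta)^T H_\phi(x) (t\delta)$ as $t \to 0$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generalized KL divergence on the positive orthant, induced by phi(x) = sum_j x_j log x_j.
def d_kl(x, y):
    return np.sum(x * np.log(x / y) + y - x)

def hessian_negative_entropy(x):
    return np.diag(1.0 / x)          # H_phi(x) for the negative entropy

x = rng.uniform(0.5, 2.0, size=4)
delta = rng.uniform(-1.0, 1.0, size=4)

for t in (1e-1, 1e-2, 1e-3):
    exact = d_kl(x + t * delta, x)
    quadratic = 0.5 * (t * delta) @ hessian_negative_entropy(x) @ (t * delta)
    print(f"t = {t:.0e}   d_phi = {exact:.3e}   quadratic term = {quadratic:.3e}")
# The two columns agree to leading order, and their ratio tends to 1 as t shrinks.
```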
Bregman divergences provide one natural route through which to generalize Shannon information theory, with the differentiable function $\phi$ taking on the role of the Shannon entropy. Indeed, generalized entropies play a role in describing the asymptotic performance of learning algorithms, and a number of inequalities relate Bregman divergences to these generalized entropies [10,11]. Characterization theorems exist for many information-theoretic quantities, including entropy [12,13,14], mutual information [15,16], and the Kullback–Leibler divergence [17,18]. This author, however, is aware of only one extant characterization of the class of Bregman divergences, due to Banerjee et al. [6]: Bregman divergences are the unique class of loss functions that render conditional expectations uniquely loss-minimizing in stochastic prediction problems. This characterization is the foundation of the connection between Bregman divergences and iterative centroid-based clustering algorithms noted above.
In this short note, we prove a new characterization of the class of Bregman divergences. This characterization is based on an equality of two common formulations of information content in weighted collections of finite-dimensional vectors.

2. Bregman Divergences Relate Two Informations

Let $\mu \in \Delta_n$ be a probability measure over $n$ points $x_1, \ldots, x_n \in C$. We collect these points into a matrix $X$ and, in a small abuse of notation, consider this matrix to be an element of $C^n$. We now define two standard formulations of information, each of which we consider as a function $\Delta_n \times C^n \to \mathbb{R}$. The first formulation compares a weighted sum of strictly convex loss-function evaluations on the data points to the same loss function evaluated at the data centroid.
Definition 1 
(Jensen Gap Information). Let $\phi : C \to \mathbb{R}$ be a strictly convex function on $C$. The Jensen gap information is the function $I_\phi : \Delta_n \times C^n \to \mathbb{R}$ given by
$$I_\phi(\mu, X) := \sum_{i=1}^n \mu_i \phi(x_i) - \phi(y),$$
where $y = \sum_{i=1}^n \mu_i x_i$.
If we define $X$ to be a random vector that takes the value $x_i$ with probability $\mu_i$, Jensen's inequality states that $\mathbb{E}[\phi(X)] \geq \phi(\mathbb{E}[X])$, with equality holding only if $X$ is constant (i.e., if there exists $i$ such that $\mu_i = 1$). The Jensen gap information measures the difference between the two sides of this inequality; indeed, $\mathbb{E}[\phi(X)] = \phi(\mathbb{E}[X]) + I_\phi(\mu, X)$ [2,6]. This formulation makes clear that $I_\phi$ is non-negative and that $I_\phi(\mu, X) = 0$ if and only if $x_i = x_j$ whenever $\mu_i > 0$ and $\mu_j > 0$.
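A direct translation of Definition 1 into code makes these properties easy to check numerically (a minimal Python sketch assuming NumPy; the function name `jensen_gap_information` is illustrative):

```python
import numpy as np

def jensen_gap_information(phi, mu, X):
    """I_phi(mu, X) = sum_i mu_i phi(x_i) - phi(y), where y is the weighted centroid of the rows of X."""
    y = mu @ X
    return np.sum(mu * np.array([phi(x) for x in X])) - phi(y)

rng = np.random.default_rng(2)
phi = lambda x: np.sum(x ** 2)                   # a strictly convex function
X = rng.normal(size=(5, 3))                      # five points in R^3
mu = rng.dirichlet(np.ones(5))                   # weights in the simplex Delta_5

print(jensen_gap_information(phi, mu, X))        # strictly positive for generic data
print(jensen_gap_information(phi, mu, np.tile(X[0], (5, 1))))   # ~0 when all points coincide
```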
Another standard formulation expresses information content as a weighted mean of divergences of data points from their centroid.
Definition 2 
(Divergence). A function $d : C \times C \to \mathbb{R}$ is a divergence if $d(x_1, x_2) \geq 0$ for any $x_1, x_2 \in C$, with equality if and only if $x_1 = x_2$.
Definition 3 
(Divergence Information). Let $d$ be a divergence. The divergence information is the function $I_d : \Delta_n \times C^n \to \mathbb{R}$ given by
$$I_d(\mu, X) := \sum_{i=1}^n \mu_i \, d(x_i, y), \qquad (1)$$
where $y = \sum_{i=1}^n \mu_i x_i$.
In this definition, we assume that $y \in C^*$; as noted by Banerjee et al. [2], this assumption is not restrictive, since the set $C$ can be replaced with the convex hull of the data $X$ without loss of generality. The divergence information measures the $\mu$-weighted average divergence of the $x_i$ from the centroid $y$. This divergence information is related to the characterization result for Bregman divergences by Banerjee et al. [6]: a divergence $d$ is a Bregman divergence if and only if the vector $y = \sum_{i=1}^n \mu_i x_i$ is the unique minimizer of the function $\sum_{i=1}^n \mu_i \, d(x_i, \cdot)$ appearing on the right-hand side of Equation (1), for any choice of $\mu$ and $X$.
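The sketch below offers a numerical illustration of that characterization (Python with NumPy; illustrative names, using the KL divergence on the probability simplex): the weighted divergence $\sum_i \mu_i d(x_i, z)$ is never smaller at a random candidate $z$ than at the centroid $y$.

```python
import numpy as np

rng = np.random.default_rng(3)

def kl(x, z):
    """KL divergence of x from z; a Bregman divergence on the simplex."""
    return np.sum(x * np.log(x / z))

X = rng.dirichlet(np.ones(4), size=6)            # six probability vectors in R^4
mu = rng.dirichlet(np.ones(6))                   # weights
y = mu @ X                                       # centroid

def weighted_divergence(z):
    return sum(m * kl(x, z) for m, x in zip(mu, X))

candidates = rng.dirichlet(np.ones(4), size=1000)
best_candidate = min(weighted_divergence(z) for z in candidates)

# Consistent with the characterization of Banerjee et al. [6]: no candidate beats the centroid.
print(weighted_divergence(y), best_candidate)
assert weighted_divergence(y) <= best_candidate + 1e-12
```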
There are several important cases in which the Jensen gap information and the divergence information coincide.
Definition 4 
(Information Equivalence). We say that a pair $(\phi, d)$ comprising a strictly convex function $\phi : C \to \mathbb{R}$ and a divergence $d : C \times C \to \mathbb{R}$ satisfies the information equivalence property if, for all $(\mu, X) \in \Delta_n \times C^n$, it holds that
$$I_\phi(\mu, X) = I_d(\mu, X). \qquad (2)$$
A graphical illustration of information equivalence is shown in Figure 1.
Lemma 1 
(Information Equivalence with Bregman Divergences [2,6]). The pair $(\phi, d_\phi)$ satisfies the information equivalence property.
The proof is a direct calculation and is provided by Banerjee et al. [2]. When $\phi(x) = \|x\|^2$ and $d = d_\phi$ is the squared Euclidean distance, the information equivalence property (2) is the identity
$$\sum_{i=1}^n \mu_i \|x_i\|^2 - \Big\| \sum_{i=1}^n \mu_i x_i \Big\|^2 = \sum_{i=1}^n \mu_i \Big\| x_i - \sum_{i=1}^n \mu_i x_i \Big\|^2. \qquad (3)$$
The right-hand side of (3) is the weighted sum-of-squares loss of the data points $x_i$ with respect to their centroid $\sum_{i=1}^n \mu_i x_i$, which is often used in statistical tests and clustering algorithms. Equation (3) asserts that this loss may also be computed from a weighted average of the norms of the data points.
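For concreteness, identity (3) can be verified numerically in a few lines (a Python sketch assuming NumPy; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(7, 3))                       # seven data points in R^3
mu = rng.dirichlet(np.ones(7))                    # weights
y = mu @ X                                        # centroid

lhs = np.sum(mu * np.sum(X ** 2, axis=1)) - np.sum(y ** 2)     # weighted mean of ||x_i||^2 minus ||y||^2
rhs = np.sum(mu * np.sum((X - y) ** 2, axis=1))                # weighted sum-of-squares about the centroid
print(lhs, rhs)
assert np.isclose(lhs, rhs)                       # identity (3)
```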
When $C$ is the probability simplex, $\phi(x) = \sum_j x_j \log x_j$ is the negative entropy, and $d = d_\phi$ is the KL divergence, the information equivalence property (2) expresses the equality of two equivalent formulations of the mutual information for discrete random variables. Let $A$ and $B$ be discrete random variables on alphabets $\mathcal{A}$ of size $n$ and $\mathcal{B}$ of size $\ell$, respectively. Suppose that their joint distribution is $p_{A,B}(a_i, b_j) = \mu_i x_{ij}$. Let $y$ be the vector with entries $y_j = \sum_{i=1}^n \mu_i x_{ij}$; then $y$ is the marginal distribution of $B$. The Jensen gap information $I_\phi(\mu, X)$ is
$$I_\phi(\mu, X) = \underbrace{\sum_{i=1}^n \mu_i \sum_{j=1}^{\ell} x_{ij} \log x_{ij}}_{-H(B \mid A)} - \underbrace{\sum_{j=1}^{\ell} y_j \log y_j}_{-H(B)},$$
which expresses the mutual information $I(A;B)$ between the random variables $A$ and $B$ in the entropy-reduction formulation $I(A;B) = H(B) - H(B \mid A)$ [19]. On the other hand, the divergence information $I_d(\mu, X)$ is
$$I_d(\mu, X) = \sum_{i=1}^n \mu_i \underbrace{\sum_{j=1}^{\ell} x_{ij} \log \frac{x_{ij}}{y_j}}_{d_\phi(x_i, y)},$$
which expresses the mutual information $I(A;B)$ instead as the weighted sum of KL divergences of the $x_i$ from $y$.
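Both formulations, together with the textbook definition of $I(A;B)$, can be compared numerically; the sketch below (Python with NumPy; a random joint distribution and illustrative variable names) confirms that all three agree.

```python
import numpy as np

rng = np.random.default_rng(5)
n, ell = 3, 4                                     # |A| = n, |B| = ell
P = rng.dirichlet(np.ones(n * ell)).reshape(n, ell)   # joint distribution p(a_i, b_j)
mu = P.sum(axis=1)                                # marginal of A, entries mu_i
X = P / mu[:, None]                               # rows x_i: conditional distributions of B given a_i
y = mu @ X                                        # marginal of B

negative_entropy = lambda p: np.sum(p * np.log(p))
kl = lambda p, q: np.sum(p * np.log(p / q))

I_phi = np.sum(mu * np.array([negative_entropy(x) for x in X])) - negative_entropy(y)
I_d = np.sum(mu * np.array([kl(x, y) for x in X]))
I_direct = np.sum(P * np.log(P / np.outer(mu, y)))    # textbook definition of I(A;B)

print(I_phi, I_d, I_direct)                       # all three values coincide
assert np.isclose(I_phi, I_d) and np.isclose(I_phi, I_direct)
```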
Our contribution in this paper is to prove a converse to Lemma 1: the Bregman divergence $d_\phi$ is the only divergence that satisfies information equivalence with $\phi$.

3. Main Result

Theorem 1. 
If the pair $(\phi, d)$ satisfies the information equivalence property (2), then $d$ is the Bregman divergence induced by $\phi$: $d(x, y) = d_\phi(x, y)$ for any $x \in C$ and $y \in C^*$.
Let $(\phi, d)$ satisfy information equivalence (2). For any $x \in C$ and $y \in C^*$, we can write
$$d(x, y) = \phi(x) - \phi(y) + f(x, y) \qquad (4)$$
for some unknown function $f : C \times C^* \to \mathbb{R}$. We aim to show that $f(x, y) = -\nabla \phi(y)^T (x - y)$ for all $x \in C$ and $y \in C^*$.
Our first step is to show that $f$ is an affine function of its first argument $x$ on $C$. To do so, we observe that if $\mu \in \Delta_n$ and $X \in C^n$ are such that $\sum_{i=1}^n \mu_i x_i = y$, then we have
$$\begin{aligned} \sum_{i=1}^n \mu_i \phi(x_i) - \phi(y) &= \sum_{i=1}^n \mu_i \, d(x_i, y) \\ &= \sum_{i=1}^n \mu_i \left[ \phi(x_i) - \phi(y) + f(x_i, y) \right] \\ &= \sum_{i=1}^n \mu_i \phi(x_i) - \phi(y) + \sum_{i=1}^n \mu_i f(x_i, y), \end{aligned}$$
where the first line follows from information equivalence. It follows that
$$\sum_{i=1}^n \mu_i f(x_i, y) = 0. \qquad (5)$$
Fix $y \in C^*$. Let $A_y = \{ v \in \mathbb{R}^n \mid y + v \in C \}$, and for any $\epsilon > 0$ let $B_y(\epsilon) = A_y \cap \{ v \in \mathbb{R}^n \mid \|v\| < \epsilon \}$. Pick $\epsilon > 0$ sufficiently small that, for all $v \in B_y(\epsilon)$, both $y + v \in C$ and $y - v \in C$; this is possible due to the relative openness of $C^*$. For notational compactness, let $B_y = B_y(\epsilon)$. Since $B_y$ is the intersection of a Euclidean ball with the convex set $A_y$, it is also convex.
Consider the function $g_y : A_y \to \mathbb{R}$ given by $g_y(v) = f(v + y, y)$. The condition (5) implies that
$$\sum_{i=1}^n \mu_i \, g_y(v_i) = 0 \qquad (6)$$
for any $v_1, \ldots, v_n \in A_y$ such that $\sum_{i=1}^n \mu_i v_i = 0$.
To show that $f(\cdot, y)$ is affine, it suffices to show that the function $g_y$ is linear on $A_y$. We do this through two short lemmas. In each, we characterize the behavior of $g_y$ on the relative ball $B_y$ before extending this characterization to the entire domain $A_y$.
Lemma 2. 
For any vector $v \in A_y$ and scalar $\alpha$ such that $\alpha v \in A_y$, we have $g_y(\alpha v) = \alpha g_y(v)$.
Proof. 
We will first prove the lemma in the restricted case that $\alpha = -1$ and $v \in B_y$. By Equation (6), we have that
$$\tfrac{1}{2} g_y(v) + \tfrac{1}{2} g_y(-v) = 0,$$
from which it follows that $g_y(-v) = -g_y(v)$. Let us now assume that $v \in B_y$ but that $\alpha$ is general; we will then use this to prove the more general setting $v \in A_y$. We proceed by cases.
  • $\alpha = 0$. The previous argument implies that $g_y(0) = 0$.
  • $\alpha > 0$. Since $\frac{\alpha}{1 + \alpha}(-v) + \frac{1}{1 + \alpha}(\alpha v) = 0$, an application of Equation (6) gives $\alpha g_y(-v) + g_y(\alpha v) = 0$; isolating $g_y(\alpha v)$ and applying the previous argument proves the case.
  • $\alpha < 0$. This case follows by applying the proof of the previous case, replacing $\alpha$ with $-\alpha$ and $v$ with $-v$.
Now, assume only that $v \in A_y$. Choose $\beta \in (0, 1]$ so that $\beta v \in B_y$ and $\beta \alpha v \in B_y$; $\beta = \min\left\{ \frac{\epsilon}{2\|v\|}, \frac{\epsilon}{2\|\alpha v\|}, 1 \right\}$ is one sufficient choice (membership in $A_y$ holds because $A_y$ is convex and contains $0$). Then, by our previous argument, we have $g_y(v) = g_y\!\left( \tfrac{1}{\beta} \, \beta v \right) = \tfrac{1}{\beta} g_y(\beta v)$, from which we infer $g_y(\beta v) = \beta g_y(v)$. Using this, we can compute $g_y(\alpha v) = g_y\!\left( \tfrac{1}{\beta} \, \beta \alpha v \right) = \tfrac{\alpha}{\beta} g_y(\beta v) = \alpha g_y(v)$, which proves the lemma. □
Lemma 3. 
The function $g_y$ is linear on $A_y$: for any $\alpha \in \mathbb{R}^n$ and vectors $v_1, \ldots, v_n \in A_y$ such that $\sum_{i=1}^n \alpha_i v_i \in A_y$, it holds that
$$g_y\!\left( \sum_{i=1}^n \alpha_i v_i \right) = \sum_{i=1}^n \alpha_i g_y(v_i).$$
Proof. 
Let us first assume that $\alpha \in \Delta_n$ and $v_1, \ldots, v_n \in B_y$. Applying Equation (6) gives
$$\tfrac{1}{2} g_y\!\left( \sum_{i=1}^n \alpha_i v_i \right) + \tfrac{1}{2} \sum_{i=1}^n \alpha_i g_y(-v_i) = 0,$$
from which applying Lemma 2 gives the result under these hypotheses.
We now consider the general case. For each $i$, choose $\beta_i \neq 0$ so that $\beta_i \alpha_i > 0$ and $\tilde{v}_i := \beta_i v_i \in B_y$ (terms with $\alpha_i = 0$ contribute to neither side of the claimed identity and may be dropped). Let $M = \sum_{i=1}^n \frac{\alpha_i}{\beta_i}$. Define the vector $\tilde{\alpha} \in \Delta_n$ with entries $\tilde{\alpha}_i = \frac{\alpha_i}{M \beta_i}$. Then, by construction, $\alpha_i v_i = M \tilde{\alpha}_i \tilde{v}_i$ for each $i$. Applying Lemma 2 and the restricted case above, we can then compute
$$g_y\!\left( \sum_{i=1}^n \alpha_i v_i \right) = g_y\!\left( \sum_{i=1}^n M \tilde{\alpha}_i \tilde{v}_i \right) = M \, g_y\!\left( \sum_{i=1}^n \tilde{\alpha}_i \tilde{v}_i \right) = M \sum_{i=1}^n \tilde{\alpha}_i g_y(\tilde{v}_i) = M \sum_{i=1}^n \tilde{\alpha}_i \beta_i g_y(v_i) = \sum_{i=1}^n \alpha_i g_y(v_i),$$
which completes the proof. □
Proof of Theorem 1. 
Fix $y \in C^*$. The preceding lemmas prove that the function $g_y$ is linear on $A_y$. Since, for fixed $y$, the function $f$ in (4) is a translation of $g_y$ in its first argument (explicitly, $f(x, y) = g_y(x - y)$), it follows that $f$ is affine as a function of its first argument $x$. We may therefore write, for all $x \in C$ and $y \in C^*$,
$$f(x, y) = h_1(y)^T x + h_2(y) \qquad (7)$$
for some functions $h_1 : C^* \to \mathbb{R}^n$ and $h_2 : C^* \to \mathbb{R}$.
We now determine these functions. First, since $\phi$ is differentiable on $C^*$ and $f(x, y)$ is affine in $x$, $d(x, y)$ is differentiable in its first argument on $C^*$. Since $d$ is a divergence, $y$ minimizes $d(\cdot, y)$ and is therefore a critical point of $d(\cdot, y)$ on $C^*$. It follows that $\nabla_1 d(y, y)$, the gradient of $d$ with respect to its first argument, is orthogonal to $C^*$ at $y$:
$$\nabla_1 d(y, y)^T (x - y) = 0 \qquad (8)$$
for any $x \in C$. We can compute $\nabla_1 d(y, y)$ explicitly; it is $\nabla_1 d(y, y) = \nabla \phi(y) + h_1(y)$, which combined with (8) gives
$$\left( \nabla \phi(y) + h_1(y) \right)^T (x - y) = 0 \qquad (9)$$
for any $x \in C$ and $y \in C^*$.
Now, the condition that $d(y, y) = 0$ implies that $h_2(y) = -h_1(y)^T y$. Using Equations (9) and (7), we then compute
$$-\nabla \phi(y)^T (x - y) = h_1(y)^T (x - y) = h_1(y)^T x + h_2(y) = f(x, y).$$
Recalling the definition of f in (4), we conclude that
$$d(x, y) = \phi(x) - \phi(y) - \nabla \phi(y)^T (x - y),$$
which is the Bregman divergence induced by $\phi$. This completes the proof. □
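As an illustration of the theorem, consider pairing $\phi(x) = \|x\|^2$ with a divergence other than $d_\phi$, say the unsquared Euclidean distance: by Theorem 1, information equivalence must fail. The sketch below (Python with NumPy; illustrative names, random data) shows this numerically.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(5, 2))
mu = rng.dirichlet(np.ones(5))
y = mu @ X

phi = lambda x: np.sum(x ** 2)
I_phi = np.sum(mu * np.array([phi(x) for x in X])) - phi(y)

d_bregman = lambda x, z: np.sum((x - z) ** 2)     # squared Euclidean distance: the Bregman divergence of phi
d_other = lambda x, z: np.linalg.norm(x - z)      # unsquared Euclidean distance: a divergence, but not Bregman

I_d_bregman = np.sum(mu * np.array([d_bregman(x, y) for x in X]))
I_d_other = np.sum(mu * np.array([d_other(x, y) for x in X]))

print(I_phi, I_d_bregman, I_d_other)
assert np.isclose(I_phi, I_d_bregman)             # information equivalence holds for the Bregman pair
assert not np.isclose(I_phi, I_d_other)           # and generically fails for the non-Bregman divergence
```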

4. Discussion

We have shown that the class of Bregman divergences is the unique class of divergences that induce agreement between the Jensen gap and divergence informations. This result offers some further perspective on the role of Bregman divergences in data clustering and quantization [2]. The Jensen gap information $I_\phi$ is a natural loss function for such tasks, with one motivation as follows. Suppose that we wish to measure the complexity of a set of data points $X$ with weights $\mu \in \Delta_n$ using a weighted per-observation loss function and a term that depends only on the centroid $y = \sum_{i=1}^n \mu_i x_i$ of the data:
$$L(\mu, X) = \sum_{i=1}^n \mu_i \psi(x_i) + \rho(y).$$
A natural stipulation for the loss function $L$ is that replacing two data points $x_1$ and $x_2$ with their weighted mean $x = \frac{\mu_1}{\mu_1 + \mu_2} x_1 + \frac{\mu_2}{\mu_1 + \mu_2} x_2$ should strictly decrease the loss when $x_1 \neq x_2$; this requirement is equivalent to strict convexity of the function $\psi$. If we further require that $L(\mu, X) = 0$ when all rows of $X$ are identical, we find that $\rho(y) = -\psi(y)$ and that our loss function is the Jensen gap information: $L(\mu, X) = I_\psi(\mu, X)$. The present result shows that this natural formulation fully determines how to perform pairwise comparisons between individual data points; only the corresponding Bregman divergence can serve as a positive-definite comparator that is consistent with the Jensen gap information. A small numerical illustration of the merging property appears below.
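The following sketch (Python with NumPy; $\psi(x) = \|x\|^2$ as the strictly convex loss, illustrative names) replaces two points with their weighted mean and confirms that the Jensen gap loss strictly decreases while the centroid, and hence the $\rho(y)$ term, is unchanged.

```python
import numpy as np

rng = np.random.default_rng(7)
psi = lambda x: np.sum(x ** 2)                    # a strictly convex per-observation loss

def jensen_gap_loss(mu, X):
    y = mu @ X
    return np.sum(mu * np.array([psi(x) for x in X])) - psi(y)

X = rng.normal(size=(4, 2))
mu = rng.dirichlet(np.ones(4))

# Replace x_1 and x_2 by their weighted mean; the weights (and hence the centroid) are unchanged.
xbar = (mu[0] * X[0] + mu[1] * X[1]) / (mu[0] + mu[1])
X_merged = np.vstack([xbar, xbar, X[2:]])

print(jensen_gap_loss(mu, X), jensen_gap_loss(mu, X_merged))
assert jensen_gap_loss(mu, X_merged) < jensen_gap_loss(mu, X)   # merging strictly decreases the loss
```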
An extension of this result to the setting of Bregman divergences defined on more general spaces, such as Banach spaces [1], would be of considerable interest for problems in functional data clustering [20].

Funding

This research was funded by the National Science Foundation (NSF) under award number DMS-2407058.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Bauschke, H.H.; Borwein, J.M.; Combettes, P.L. Bregman Monotone Optimization Algorithms. SIAM J. Control Optim. 2003, 42, 596–636.
  2. Banerjee, A.; Merugu, S.; Dhillon, I.S.; Ghosh, J. Clustering with Bregman divergences. J. Mach. Learn. Res. 2005, 6, 1705–1749.
  3. Amari, S.; Cichocki, A. Information Geometry of Divergence Functions. Bull. Pol. Acad. Sci. Tech. Sci. 2010, 58, 183–195.
  4. Amari, S. Information Geometry and Its Applications; Applied Mathematical Sciences Series; Springer: Tokyo, Japan, 2016; Volume 194.
  5. Bregman, L. The Relaxation Method of Finding the Common Point of Convex Sets and Its Application to the Solution of Problems in Convex Programming. USSR Comput. Math. Math. Phys. 1967, 7, 200–217.
  6. Banerjee, A.; Guo, X.; Wang, H. On the Optimality of Conditional Expectation as a Bregman Predictor. IEEE Trans. Inf. Theory 2005, 51, 2664–2669.
  7. Nemirovsky, A.; Yudin, D. Problem Complexity and Method Efficiency in Optimization; Wiley-Interscience Series in Discrete Mathematics; John Wiley & Sons: Hoboken, NJ, USA, 1983.
  8. Blondel, M.; Martins, A.F.T.; Niculae, V. Learning with Fenchel-Young Losses. J. Mach. Learn. Res. 2020, 21, 1–69.
  9. Reem, D.; Reich, S.; De Pierro, A. Re-Examination of Bregman Functions and New Properties of Their Divergences. Optimization 2019, 68, 279–348.
  10. Painsky, A.; Wornell, G.W. Bregman Divergence Bounds and Universality Properties of the Logarithmic Loss. IEEE Trans. Inf. Theory 2020, 66, 1658–1673.
  11. Xu, A. Continuity of Generalized Entropy and Statistical Learning. arXiv 2021, arXiv:2012.15829.
  12. Baez, J.C.; Fritz, T.; Leinster, T. A Characterization of Entropy in Terms of Information Loss. Entropy 2011, 13, 1945–1957.
  13. Faddeev, D.K. On the Concept of Entropy of a Finite Probabilistic Scheme. Uspekhi Mat. Nauk 1956, 11, 227–231.
  14. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  15. Fullwood, J. An Axiomatic Characterization of Mutual Information. Entropy 2023, 25, 663.
  16. Frankel, D.M.; Volij, O. Measuring School Segregation. J. Econ. Theory 2011, 146, 1–38.
  17. Jiao, J.; Courtade, T.; No, A.; Venkat, K.; Weissman, T. Information Measures: The Curious Case of the Binary Alphabet. IEEE Trans. Inf. Theory 2014, 60, 7616–7626.
  18. Hobson, A. A New Theorem of Information Theory. J. Stat. Phys. 1969, 1, 383–391.
  19. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
  20. Zhang, M.; Parnell, A. Review of Clustering Methods for Functional Data. ACM Trans. Knowl. Discov. Data 2023, 17, 1–34.
Figure 1. Illustration of the information equivalence property in one dimension. Several data points $x_i$ are shown (black) alongside their centroid $y = \sum_{i=1}^n \mu_i x_i$ (hollow). The dashed grey line gives the tangent to $\phi$ at $y$, with equation $t(x) = \phi(y) + \phi'(y)(x - y)$. Dashed yellow segments give the value of $\phi(x_i) - \phi(y)$; the signed weighted mean of the lengths of these segments is the Jensen gap information $I_\phi(\mu, X)$. Solid blue segments give the value of $d_\phi(x_i, y)$; the unsigned weighted mean of the lengths of these segments is the divergence information $I_d(\mu, X)$. Information equivalence (2) asserts that these two weighted means are equal. For the purposes of this visualization, $\mu$ is the uniform distribution $\mu_i = 1/n$. The function $\phi$ shown is $\phi(x) = x \log x$.