
# Strongly Convex Divergences

by
James Melbourne
Department of Electrical and Computer Engineering, University of Minnesota-Twin Cities, Minneapolis, MN 55455, USA
Entropy 2020, 22(11), 1327; https://doi.org/10.3390/e22111327
Submission received: 2 September 2020 / Revised: 6 November 2020 / Accepted: 9 November 2020 / Published: 21 November 2020

## Abstract

We consider a sub-class of the f-divergences satisfying a stronger convexity property, which we refer to as strongly convex, or $κ$-convex divergences. We derive new and old relationships, based on convexity arguments, between popular f-divergences.

## 1. Introduction

The concept of an f-divergence, introduced independently by Ali–Silvey [1], Morimoto [2], and Csiszár [3], unifies several important information measures between probability distributions, as integrals of a convex function f composed with the Radon–Nikodym derivative of the two probability distributions. (An additional assumption can be made that f is strictly convex at 1, to ensure that $D_f(\mu||\nu) > 0$ for $\mu \neq \nu$. This obviously holds when $f''(1) > 0$, and can hold for f-divergences without classical derivatives at 1; for instance, the total variation is strictly convex at 1. An example of an f-divergence not strictly convex at 1 is provided by the so-called "hockey-stick" divergence, where $f(x) = (x - \gamma)_+$; see [4,5,6].) For a convex function $f : (0,\infty) \to \mathbb{R}$ such that $f(1) = 0$, and measures P and Q such that $P \ll Q$, the f-divergence from P to Q is given by $D_f(P||Q) := \int f\left( \frac{dP}{dQ} \right) dQ.$ The canonical example of an f-divergence, realized by taking $f(x) = x \log x$, is the relative entropy (often called the KL-divergence), which we denote with the subscript f omitted. f-divergences inherit many properties enjoyed by this special case: non-negativity, joint convexity in the arguments, and a data processing inequality. Other important examples include the total variation, the $\chi^2$-divergence, and the squared Hellinger distance. The reader is directed to Chapters 6 and 7 of [7] for more background.
We are interested in how stronger convexity properties of f give improvements of classical f-divergence inequalities. More explicitly, we consider consequences of f being $\kappa$-convex, in the sense that the map $x \mapsto f(x) - \kappa x^2/2$ is convex. This is in part inspired by the work of Sason [8], who demonstrated that divergences that are $\kappa$-convex satisfy "stronger than $\chi^2$" data-processing inequalities.
Perhaps the best known example of an f-divergence inequality is Pinsker's inequality, which bounds the square of the total variation above by a constant multiple of the relative entropy; that is, for probability measures P and Q, $|P - Q|_{TV}^2 \leq c D(P||Q)$. The optimal constant is achieved for Bernoulli measures and, under our conventions for the total variation, $c = \frac{1}{2 \log e}$. Many extensions and sharpenings of Pinsker's inequality exist (for examples, see [9,10,11]). Building on the work of Guntuboyina [9] and Topsøe [11], we achieve a further sharpening of Pinsker's inequality in Theorem 9.
Aside from the total variation, most divergences of interest have stronger than affine convexity, at least when f is restricted to a sub-interval of the real line. This observation is especially relevant when one wishes to study $D_f(P||Q)$ in the presence of a bounded Radon–Nikodym derivative $\frac{dP}{dQ} \in (a,b) \subsetneq (0,\infty)$. One naturally obtains such bounds for skew divergences, that is, divergences of the form $(P,Q) \mapsto D_f((1-t)P + tQ \,||\, (1-s)P + sQ)$ for $t, s \in [0,1]$, since in this case $\frac{(1-t)P + tQ}{(1-s)P + sQ} \leq \max\left\{ \frac{1-t}{1-s}, \frac{t}{s} \right\}$. Important examples of skew divergences include the skew divergence [12] based on the relative entropy, and the Vincze–Le Cam divergence [13,14] (called the triangular discrimination in [11]) and its generalization due to Györfi and Vajda [15] based on the $\chi^2$-divergence. The Jensen–Shannon divergence [16] and its recent generalization [17] give examples of f-divergences realized as linear combinations of skewed divergences.
Let us outline the paper. In Section 2, we derive elementary results of $κ$-convex divergences and give a table of examples of $κ$-convex divergences. We demonstrate that $κ$-convex divergences can be lower bounded by the $χ 2$-divergence, and that the joint convexity of the map $( P , Q ) ↦ D f ( P | | Q )$ can be sharpened under $κ$-convexity conditions on f. As a consequence, we obtain bounds between the mean square total variation distance of a set of distributions from its barycenter, and the average f-divergence from the set to the barycenter.
In Section 3, we investigate general skewing of f-divergences. In particular, we introduce the skew-symmetrization of an f-divergence, which recovers the Jensen–Shannon divergence and the Vincze–Le Cam divergences as special cases. We also show that a scaling of the Vincze–Le Cam divergence is minimal among skew-symmetrizations of $κ$-convex divergences on $( 0 , 2 )$. We then consider linear combinations of skew divergences and show that a generalized Vincze–Le Cam divergence (based on skewing the $χ 2$-divergence) can be upper bounded by the generalized Jensen–Shannon divergence introduced recently by Nielsen [17] (based on skewing the relative entropy), reversing the classical convexity bounds $D ( P | | Q ) ≤ log ( 1 + χ 2 ( P | | Q ) ) ≤ log e χ 2 ( P | | Q )$. We also derive upper and lower total variation bounds for Nielsen’s generalized Jensen–Shannon divergence.
In Section 4, we consider a family of densities $\{p_i\}$ weighted by $\lambda_i$, and a density q. We use the Bayes estimator $T(x) = \arg\max_i \lambda_i p_i(x)$ to derive a convex decomposition of the barycenter $p = \sum_i \lambda_i p_i$ and of q, each into two auxiliary densities. (Recall, a Bayes estimator is one that minimizes the expected value of a loss function. By the assumptions of our model, that $P(\theta = i) = \lambda_i$ and $P(X \in A | \theta = i) = \int_A p_i(x) dx$, we have $E \ell(\theta, \hat\theta) = 1 - \int \lambda_{\hat\theta(x)} p_{\hat\theta(x)}(x) dx$ for the loss function $\ell(i,j) = 1 - \delta_i(j)$ and any estimator $\hat\theta$. It follows that $E \ell(\theta, \hat\theta) \geq E \ell(\theta, T)$ by $\lambda_{\hat\theta(x)} p_{\hat\theta(x)}(x) \leq \lambda_{T(x)} p_{T(x)}(x)$. Thus, T is a Bayes estimator associated to $\ell$.) We use this decomposition to sharpen, for $\kappa$-convex divergences, an elegant theorem of Guntuboyina [9] that generalizes Fano's and Pinsker's inequalities to f-divergences. We then demonstrate explicitly, using an argument of Topsøe, how our sharpening of Guntuboyina's inequality gives a new sharpening of Pinsker's inequality in terms of the convex decomposition induced by the Bayes estimator.

#### Notation

Throughout, f denotes a convex function $f : (0,\infty) \to \mathbb{R} \cup \{\infty\}$ such that $f(1) = 0$. For a convex function defined on $(0,\infty)$, we define $f(0) := \lim_{x \to 0} f(x)$. We denote by $f^*$ the convex function $f^* : (0,\infty) \to \mathbb{R} \cup \{\infty\}$ defined by $f^*(x) = x f(x^{-1})$. We consider Borel probability measures P and Q on a Polish space $\mathcal{X}$ and define the f-divergence from P to Q, via densities p for P and q for Q with respect to a common reference measure $\mu$, as
$D_f(p||q) = \int_{\mathcal{X}} f\left( \frac{p}{q} \right) q \, d\mu = \int_{\{pq > 0\}} q f\left( \frac{p}{q} \right) d\mu + f(0) Q(\{p = 0\}) + f^*(0) P(\{q = 0\}).$
We note that this representation is independent of $\mu$, and such a reference measure always exists; take $\mu = P + Q$, for example.
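As a quick illustration of the definition, the discrete case can be checked numerically. The sketch below is illustrative only (the distributions p and q are arbitrary choices, not taken from the text), and implements the three-term decomposition displayed above.

```python
import numpy as np

def f_divergence(p, q, f, f_at_0=0.0, fstar_at_0=0.0):
    """D_f(p||q) for finite distributions: the bulk sum over {pq > 0}
    plus the f(0) and f*(0) boundary contributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    both = (p > 0) & (q > 0)
    val = np.sum(q[both] * f(p[both] / q[both]))
    val += f_at_0 * q[p == 0].sum() + fstar_at_0 * p[q == 0].sum()
    return val

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
kl = f_divergence(p, q, lambda t: t * np.log(t))       # relative entropy (nats)
chi2 = f_divergence(p, q, lambda t: (t - 1) ** 2)      # Pearson chi-squared
tv = f_divergence(p, q, lambda t: np.abs(t - 1) / 2)   # total variation
```

With $f(x) = |x - 1|/2$, this recovers the normalization $|P - Q|_{TV} = \sup_A |P(A) - Q(A)|$ used below.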
For $t , s ∈ [ 0 , 1 ]$, define the binary f-divergence
$D_f(t||s) := s f\left( \frac{t}{s} \right) + (1-s) f\left( \frac{1-t}{1-s} \right)$
with the conventions $f(0) = \lim_{t \to 0^+} f(t)$, $0 f(0/0) = 0$, and $0 f(a/0) = a \lim_{t \to \infty} f(t)/t$. For a random variable X and a set A, we denote the probability that X takes a value in A by $P(X \in A)$, the expectation of the random variable by $EX$, and the variance by $\mathrm{Var}(X) := E|X - EX|^2$. For a probability measure $\mu$ satisfying $\mu(A) = P(X \in A)$ for all Borel A, we write $X \sim \mu$, and, when there exists a probability density function such that $P(X \in A) = \int_A f(x) d\gamma(x)$ for a reference measure $\gamma$, we write $X \sim f$. For a probability measure $\mu$ on $\mathcal{X}$ and an $L^2$ function $f : \mathcal{X} \to \mathbb{R}$, we denote $\mathrm{Var}_\mu(f) := \mathrm{Var}(f(X))$ for $X \sim \mu$.

## 2. Strongly Convex Divergences

Definition 1.
An $\mathbb{R} \cup \{\infty\}$-valued function f on a convex set $K \subseteq \mathbb{R}$ is κ-convex when $x, y \in K$ and $t \in [0,1]$ imply
$f((1-t)x + ty) \leq (1-t) f(x) + t f(y) - \kappa t (1-t)(x-y)^2 / 2.$
For example, when f is twice differentiable, the inequality above is equivalent to $f''(x) \geq \kappa$ for $x \in K$. Note that the case $\kappa = 0$ is just usual convexity.
Proposition 1.
For $f : K → R ∪ { ∞ }$ and $κ ∈ [ 0 , ∞ )$, the following are equivalent:
• f is κ-convex.
• The function $t \mapsto f(t) - \kappa (t - a)^2 / 2$ is convex for any $a \in \mathbb{R}$.
• The right-handed derivative, defined as $f'_+(t) := \lim_{h \downarrow 0} \frac{f(t+h) - f(t)}{h}$, satisfies
$f'_+(t) \geq f'_+(s) + \kappa (t - s)$
for $t \geq s$.
Proof.
Observe that it is enough to prove the result when $κ = 0$, where the proposition is reduced to the classical result for convex functions. □
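Definition 1 can be probed numerically: f is κ-convex exactly when $f(x) - \kappa x^2/2$ is convex, which a midpoint test over a grid can falsify. A minimal sketch, assuming an arbitrary grid and tolerance of our own choosing:

```python
import numpy as np

def is_kappa_convex(f, kappa, xs, tol=1e-9):
    # f is kappa-convex iff g(x) = f(x) - kappa * x**2 / 2 is convex;
    # test midpoint convexity of g over all pairs drawn from the grid xs
    g = lambda x: f(x) - kappa * x ** 2 / 2
    x, y = np.meshgrid(xs, xs)
    return bool(np.all(g((x + y) / 2) <= (g(x) + g(y)) / 2 + tol))

xs = np.linspace(0.1, 2.0, 200)
# f(x) = x ln x has f''(x) = 1/x >= 1/2 on (0, 2], so it is (1/2)-convex there
assert is_kappa_convex(lambda x: x * np.log(x), 0.5, xs)
# but it is not 2-convex on this interval, since f''(2) = 1/2 < 2
assert not is_kappa_convex(lambda x: x * np.log(x), 2.0, xs)
```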
Definition 2.
An f-divergence $D f$ is κ-convex on an interval K for $κ ≥ 0$ when the function f is κ-convex on K.
Table 1 lists some $κ$-convex f-divergences of interest to this article.
Observe that we have taken the normalization convention on the total variation (the total variation of a signed measure $\mu$ on a space X can be defined through the Hahn–Jordan decomposition of the measure into non-negative measures $\mu^+$ and $\mu^-$ such that $\mu = \mu^+ - \mu^-$, as $\|\mu\| = \mu^+(X) + \mu^-(X)$ (see [18]); in our notation, $|\mu|_{TV} = \|\mu\|/2$), which we denote by $|P - Q|_{TV}$, such that $|P - Q|_{TV} = \sup_A |P(A) - Q(A)| \leq 1$. In addition, note that the $\alpha$-divergence interpolates: it gives Pearson's $\chi^2$-divergence when $\alpha = 3$, one half Neyman's $\chi^2$-divergence when $\alpha = -3$, the squared Hellinger divergence when $\alpha = 0$, and has as limiting cases the relative entropy when $\alpha = 1$ and the reverse relative entropy when $\alpha = -1$. If f is $\kappa$-convex on $[a,b]$, then its dual divergence $f^*(x) := x f(x^{-1})$ is $\kappa a^3$-convex on $[\frac{1}{b}, \frac{1}{a}]$. Recall that $f^*$ satisfies the equality $D_{f^*}(P||Q) = D_f(Q||P)$. For brevity, we use $\chi^2$-divergence to refer to the Pearson $\chi^2$-divergence, and we refer to Neyman's $\chi^2$ explicitly when necessary.
The next lemma is a restatement of Jensen’s inequality.
Lemma 1.
If f is κ-convex on the range of X,
$E f(X) \geq f(EX) + \frac{\kappa}{2} \mathrm{Var}(X).$
Proof.
Apply Jensen’s inequality to $f ( x ) − κ x 2 / 2$. □
For a convex function f such that $f(1) = 0$ and $c \in \mathbb{R}$, the function $\tilde{f}(t) = f(t) + c(t-1)$ remains a convex function and, what is more, satisfies
$D_f(P||Q) = D_{\tilde{f}}(P||Q)$
since $\int c \left( p/q - 1 \right) q \, d\mu = 0$.
Definition 3
($χ 2$-divergence). For $f ( t ) = ( t − 1 ) 2$, we write
$χ 2 ( P | | Q ) : = D f ( P | | Q ) .$
We pursue a generalization of the following bound on the total variation by the $χ 2$-divergence [19,20,21].
Theorem 1
([19,20,21]). For measures P and Q,
$|P - Q|_{TV}^2 \leq \frac{\chi^2(P||Q)}{2}.$
We mention the work of Harremoës and Vajda [20], in which it is shown, through a characterization of the extreme points of the joint range associated to a pair of f-divergences (valid in general), that this inequality characterizes the "joint range", that is, the range of the function $(P,Q) \mapsto (|P - Q|_{TV}, \chi^2(P||Q))$. We use the following lemma, which shows that every strongly convex divergence can be lower bounded, up to its convexity constant $\kappa > 0$, by the $\chi^2$-divergence.
Lemma 2.
For a κ-convex f,
$D_f(P||Q) \geq \frac{\kappa}{2} \chi^2(P||Q).$
Proof.
Define $\tilde{f}(t) = f(t) - f'_+(1)(t-1)$ and note that $\tilde{f}$ defines the same $\kappa$-convex divergence as f. Thus, we may assume without loss of generality that $f'_+(1) = 0$. Since f is $\kappa$-convex, $\phi : t \mapsto f(t) - \kappa(t-1)^2/2$ is convex and, by $f'_+(1) = 0$, $\phi'_+(1) = 0$ as well. Thus, $\phi$ takes its minimum at $t = 1$, and hence $\phi \geq 0$, so that $f(t) \geq \kappa(t-1)^2/2$. Computing,
$D_f(P||Q) = \int f\left( \frac{dP}{dQ} \right) dQ \geq \frac{\kappa}{2} \int \left( \frac{dP}{dQ} - 1 \right)^2 dQ = \frac{\kappa}{2} \chi^2(P||Q).$
□
Based on a Taylor series expansion of f about 1, Nielsen and Nock ([22], Corollary 1) gave the estimate
$D_f(P||Q) \approx \frac{f''(1)}{2} \chi^2(P||Q)$
for divergences with a non-zero second derivative and P close to Q. Lemma 2 complements this estimate with a lower bound when f is $\kappa$-convex. In particular, if $f''(1) = \kappa$, it shows that the approximation above is an underestimate.
Theorem 2.
For measures P and Q, and a $\kappa$-convex divergence $D_f$,
$|P - Q|_{TV}^2 \leq \frac{D_f(P||Q)}{\kappa}.$
Proof.
By Lemma 2 and then Theorem 1,
$\frac{D_f(P||Q)}{\kappa} \geq \frac{\chi^2(P||Q)}{2} \geq |P - Q|_{TV}^2.$
□
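The chain of inequalities in the proof is easy to spot-check on a discrete example. In the sketch below (the distributions are arbitrary choices), κ is taken as the infimum of $f''(x) = 1/x$ over the realized range of $dP/dQ$, working in nats:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
ratios = p / q                      # dP/dQ takes values in (0, 2] here
kappa = 1 / ratios.max()            # inf of f''(x) = 1/x over that range: 1/2
kl = np.sum(p * np.log(p / q))      # D_f for f(x) = x ln x (nats)
chi2 = np.sum((p - q) ** 2 / q)
tv = 0.5 * np.sum(np.abs(p - q))
# Lemma 2 and Theorem 1 combined: D_f/kappa >= chi^2/2 >= TV^2
assert kl / kappa >= chi2 / 2 >= tv ** 2
```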
The proof of Lemma 2 uses a pointwise inequality between convex functions to derive an inequality between their respective divergences. This simple technique was shown to have useful implications by Sason and Verdú in [6], where it appears as Theorem 1 and is used to give sharp comparisons in several f-divergence inequalities.
Theorem 3
(Sason–Verdú [6]). For divergences defined by g and f, if $c f(t) \geq g(t)$ for all t, then
$D_g(P||Q) \leq c D_f(P||Q).$
Moreover, if $f ′ ( 1 ) = g ′ ( 1 ) = 0$, then
$\sup_{P \neq Q} \frac{D_g(P||Q)}{D_f(P||Q)} = \sup_{t \neq 1} \frac{g(t)}{f(t)}.$
Corollary 1.
For a smooth κ-convex divergence f with $f''(1) = \kappa$, the inequality
$D_f(P||Q) \geq \frac{\kappa}{2} \chi^2(P||Q)$
is sharp multiplicatively in the sense that
$\inf_{P \neq Q} \frac{D_f(P||Q)}{\chi^2(P||Q)} = \frac{\kappa}{2}.$
In information geometry, a standard f-divergence is defined as an f-divergence satisfying the normalization $f(1) = f'(1) = 0$, $f''(1) = 1$ (see [23]). Thus, Corollary 1 shows that $\frac{1}{2}\chi^2$ provides a sharp lower bound on every standard f-divergence that is 1-convex. In particular, the lower bound in Lemma 2 complementing the estimate of Nielsen and Nock is shown to be sharp.
Proof.
Without loss of generality, we assume that $f'(1) = 0$. Taking $g(t) = (t-1)^2$ and applying Theorem 3 and Lemma 2,
$\sup_{P \neq Q} \frac{D_g(P||Q)}{D_f(P||Q)} = \sup_{t \neq 1} \frac{g(t)}{f(t)} \leq \frac{2}{\kappa}.$
Observe that, after two applications of L'Hôpital's rule,
$\lim_{\varepsilon \to 0} \frac{g(1+\varepsilon)}{f(1+\varepsilon)} = \lim_{\varepsilon \to 0} \frac{g'(1+\varepsilon)}{f'(1+\varepsilon)} = \frac{g''(1)}{f''(1)} = \frac{2}{\kappa} \leq \sup_{t \neq 1} \frac{g(t)}{f(t)}.$
Thus, the claimed equality follows. □
Proposition 2.
Suppose $D_f$ is an f-divergence such that f is κ-convex on $[a,b]$, and that $P_\theta$ and $Q_\theta$ are probability measures indexed by a set Θ such that $a \leq \frac{dP_\theta}{dQ_\theta}(x) \leq b$ holds for all θ. If $P := \int_\Theta P_\theta d\mu(\theta)$ and $Q := \int_\Theta Q_\theta d\mu(\theta)$ for a probability measure μ on Θ, then
$D_f(P||Q) \leq \int_\Theta D_f(P_\theta || Q_\theta) d\mu(\theta) - \frac{\kappa}{2} \int_\Theta \int_{\mathcal{X}} \left( \frac{dP_\theta}{dQ_\theta} - \frac{dP}{dQ} \right)^2 dQ \, d\mu.$
In particular, when $Q_\theta = Q$ for all θ,
$D_f(P||Q) \leq \int_\Theta D_f(P_\theta||Q) d\mu(\theta) - \frac{\kappa}{2} \int_\Theta \int_{\mathcal{X}} \left( \frac{dP_\theta}{dQ} - \frac{dP}{dQ} \right)^2 dQ \, d\mu(\theta) \leq \int_\Theta D_f(P_\theta||Q) d\mu(\theta) - \kappa \int_\Theta |P_\theta - P|_{TV}^2 \, d\mu(\theta).$
Proof.
Let $d\theta$ denote a reference measure dominating μ, so that $d\mu = \varphi(\theta) d\theta$, and write $\nu_\theta = \nu(\theta, x) = \frac{dQ_\theta}{dQ}(x) \varphi(\theta)$. Then
$D_f(P||Q) = \int_{\mathcal{X}} f\left( \frac{dP}{dQ} \right) dQ = \int_{\mathcal{X}} f\left( \int_\Theta \frac{dP_\theta}{dQ} d\mu(\theta) \right) dQ = \int_{\mathcal{X}} f\left( \int_\Theta \frac{dP_\theta}{dQ_\theta} \nu(\theta, x) d\theta \right) dQ.$
By Jensen's inequality, as in Lemma 1,
$f\left( \int_\Theta \frac{dP_\theta}{dQ_\theta} \nu_\theta d\theta \right) \leq \int_\Theta f\left( \frac{dP_\theta}{dQ_\theta} \right) \nu_\theta d\theta - \frac{\kappa}{2} \int_\Theta \left( \frac{dP_\theta}{dQ_\theta} - \int_\Theta \frac{dP_{\theta_0}}{dQ_{\theta_0}} \nu_{\theta_0} d\theta_0 \right)^2 \nu_\theta d\theta.$
Integrating this inequality gives
$D_f(P||Q) \leq \int_{\mathcal{X}} \left[ \int_\Theta f\left( \frac{dP_\theta}{dQ_\theta} \right) \nu_\theta d\theta - \frac{\kappa}{2} \int_\Theta \left( \frac{dP_\theta}{dQ_\theta} - \int_\Theta \frac{dP_{\theta_0}}{dQ_{\theta_0}} \nu_{\theta_0} d\theta_0 \right)^2 \nu_\theta d\theta \right] dQ.$
Note that
$\int_{\mathcal{X}} \int_\Theta \left( \frac{dP_\theta}{dQ_\theta} - \int_\Theta \frac{dP_{\theta_0}}{dQ_{\theta_0}} \nu_{\theta_0} d\theta_0 \right)^2 \nu_\theta d\theta \, dQ = \int_\Theta \int_{\mathcal{X}} \left( \frac{dP_\theta}{dQ_\theta} - \frac{dP}{dQ} \right)^2 dQ \, d\mu,$
and
$\int_{\mathcal{X}} \int_\Theta f\left( \frac{dP_\theta}{dQ_\theta} \right) \nu(\theta, x) d\theta \, dQ = \int_\Theta \int_{\mathcal{X}} f\left( \frac{dP_\theta}{dQ_\theta} \right) dQ_\theta \, d\mu(\theta) = \int_\Theta D_f(P_\theta || Q_\theta) d\mu(\theta).$
Inserting these equalities into the preceding bound gives the result.
To obtain the total variation bound, one needs only to apply Jensen's inequality:
$\int_{\mathcal{X}} \left( \frac{dP_\theta}{dQ} - \frac{dP}{dQ} \right)^2 dQ \geq \left( \int_{\mathcal{X}} \left| \frac{dP_\theta}{dQ} - \frac{dP}{dQ} \right| dQ \right)^2 = 4 |P_\theta - P|_{TV}^2 \geq 2 |P_\theta - P|_{TV}^2.$
□
Observe that, taking $Q = P = \int_\Theta P_\theta d\mu(\theta)$ in Proposition 2, one obtains a lower bound for the average f-divergence from a set of distributions to their barycenter in terms of the mean square total variation of the set of distributions to the barycenter:
$\kappa \int_\Theta |P_\theta - P|_{TV}^2 \, d\mu(\theta) \leq \int_\Theta D_f(P_\theta || P) \, d\mu(\theta).$
An alternative proof of this can be obtained by applying $|P_\theta - P|_{TV}^2 \leq D_f(P_\theta||P)/\kappa$ from Theorem 2 pointwise.
The next result shows that, for f strongly convex, Pinsker-type inequalities can never be reversed.
Proposition 3.
Given f strongly convex and $M > 0$, there exist measures P, Q such that
$D_f(P||Q) \geq M |P - Q|_{TV}.$
Proof.
By $\kappa$-convexity, $\phi(t) = f(t) - \kappa t^2/2$ is a convex function. Thus, $\phi(t) \geq \phi(1) + \phi'_+(1)(t-1) = -\kappa/2 + (f'_+(1) - \kappa)(t-1)$, and hence $\lim_{t \to \infty} \frac{f(t)}{t} \geq \lim_{t \to \infty} \left[ \frac{\kappa t}{2} - \frac{\kappa}{2t} + (f'_+(1) - \kappa)\left(1 - \frac{1}{t}\right) \right] = \infty.$ Taking measures on the two point space, $P = \{1/2, 1/2\}$ and $Q = \{1/(2t), 1 - 1/(2t)\}$, gives $D_f(P||Q) = \frac{f(t)}{2t} + \left(1 - \frac{1}{2t}\right) f\left( \frac{1/2}{1 - 1/(2t)} \right)$, which tends to infinity as $t \to \infty$ since the second term remains bounded, while $|P - Q|_{TV} \leq 1$. □
In fact, building on the work of Basu, Shioya, and Park [24] and Vajda [25], Sason and Verdú proved [6] that, for any f-divergence, $\sup_{P \neq Q} \frac{D_f(P||Q)}{|P - Q|_{TV}} = f(0) + f^*(0)$. Thus, an f-divergence can be bounded above by a constant multiple of the total variation if and only if $f(0) + f^*(0) < \infty$. From this perspective, Proposition 3 is simply the obvious fact that strongly convex functions have super-linear (at least quadratic) growth at infinity.

## 3. Skew Divergences

If we denote by $Cvx(0,\infty)$ the quotient of the cone of convex functions f on $(0,\infty)$ such that $f(1) = 0$, under the equivalence relation $f_1 \sim f_2$ when $f_1 - f_2 = c(x-1)$ for some $c \in \mathbb{R}$, then the map $f \mapsto D_f$ gives a linear isomorphism between $Cvx(0,\infty)$ and the space of all f-divergences. The mapping $T : Cvx(0,\infty) \to Cvx(0,\infty)$ defined by $Tf = f^*$, where we recall $f^*(t) = t f(t^{-1})$, gives an involution of $Cvx(0,\infty)$. Indeed, $D_{Tf}(P||Q) = D_f(Q||P)$, so that $D_{T(T(f))}(P||Q) = D_f(P||Q)$. Mathematically, skew divergences give an interpolation of this involution, as
$(P,Q) \mapsto D_f((1-t)P + tQ \,||\, (1-s)P + sQ)$
gives $D_f(P||Q)$ by taking $s = 1$ and $t = 0$, or yields $D_{f^*}(P||Q)$ by taking $s = 0$ and $t = 1$.
Moreover, as mentioned in the Introduction, skewing imposes boundedness of the Radon–Nikodym derivative $d P d Q$, which allows us to constrain the domain of f-divergences and leverage $κ$-convexity to obtain f-divergence inequalities in this section.
The following appears as Theorem III.1 in the preprint [26]. It states that skewing an f-divergence preserves its status as such. This guarantees that the generalized skew divergences of this section are indeed f-divergences. A proof is given in the Appendix A for the convenience of the reader.
Theorem 4
(Melbourne et al. [26]). For $t, s \in [0,1]$ and a divergence $D_f$, the quantity
$S_f(P||Q) := D_f((1-t)P + tQ \,||\, (1-s)P + sQ)$
is an f-divergence as well.
Definition 4.
For an f-divergence, its skew symmetrization,
$\Delta_f(P||Q) := \frac{1}{2} D_f\left( P \,\Big|\Big|\, \frac{P+Q}{2} \right) + \frac{1}{2} D_f\left( Q \,\Big|\Big|\, \frac{P+Q}{2} \right).$
$\Delta_f$ is determined by the convex function
$x \mapsto \frac{1+x}{4} \left[ f\left( \frac{2x}{1+x} \right) + f\left( \frac{2}{1+x} \right) \right].$
Observe that $\Delta_f(P||Q) = \Delta_f(Q||P)$ and, when $f(0) < \infty$, $\Delta_f(P||Q) \leq \sup_{x \in [0,2]} f(x) < \infty$ for all P, Q, since $\frac{dP}{d(P+Q)/2}, \frac{dQ}{d(P+Q)/2} \leq 2$. When $f(x) = x \log x$, the skew symmetrization of the relative entropy is the Jensen–Shannon divergence. When $f(x) = (x-1)^2$, up to a normalization constant, the skew symmetrization of the $\chi^2$-divergence is the Vincze–Le Cam divergence, which we state below for emphasis. The work of Topsøe [11] provides more background on this divergence, where it is referred to as the triangular discrimination.
Definition 5.
When $f(t) = \frac{(t-1)^2}{t+1}$, denote the Vincze–Le Cam divergence by
$\Delta(P||Q) := D_f(P||Q).$
If one denotes the skew symmetrization of the $\chi^2$-divergence by $\Delta_{\chi^2}$, one can compute easily from the convex function above that $\Delta_{\chi^2}(P||Q) = \Delta(P||Q)/2$. We note that, although skewing preserves 0-convexity, by the above example it does not preserve $\kappa$-convexity in general: the $\chi^2$-divergence is a 2-convex divergence, while $f(t) = (t-1)^2/(t+1)$, corresponding to its skew symmetrization up to a constant, satisfies $f''(t) = \frac{8}{(t+1)^3}$, which cannot be bounded away from zero on $(0,\infty)$.
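The identity $\Delta_{\chi^2} = \Delta/2$ is easy to confirm numerically; the discrete p and q below are arbitrary choices for illustration:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
m = (p + q) / 2
chi2 = lambda a, b: np.sum((a - b) ** 2 / b)
delta_chi2 = 0.5 * chi2(p, m) + 0.5 * chi2(q, m)  # skew symmetrization of chi^2
delta = np.sum((p - q) ** 2 / (p + q))            # Vincze-Le Cam divergence
assert np.isclose(delta_chi2, delta / 2)
```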
Corollary 2.
For an f-divergence such that f is κ-convex on $(0,2)$,
$\Delta_f(P||Q) \geq \frac{\kappa}{4} \Delta(P||Q) = \frac{\kappa}{2} \Delta_{\chi^2}(P||Q),$
with equality when $f(t) = (t-1)^2$, corresponding to the $\chi^2$-divergence. Here, $\Delta_f$ denotes the skew symmetrized divergence associated to f, and Δ is the Vincze–Le Cam divergence.
Proof.
Applying Proposition 2,
$0 = D_f\left( \frac{P+Q}{2} \,\Big|\Big|\, \frac{Q+P}{2} \right) \leq \frac{1}{2} D_f\left( P \,\Big|\Big|\, \frac{Q+P}{2} \right) + \frac{1}{2} D_f\left( Q \,\Big|\Big|\, \frac{Q+P}{2} \right) - \frac{\kappa}{8} \int \left( \frac{2p}{p+q} - \frac{2q}{p+q} \right)^2 \frac{d(P+Q)}{2} = \Delta_f(P||Q) - \frac{\kappa}{4} \Delta(P||Q).$
□
When $f(x) = x \log x$, we have $f''(x) \geq \frac{\log e}{2}$ on $(0,2]$, which demonstrates that, up to a constant $\frac{\log e}{8}$, the Jensen–Shannon divergence bounds the Vincze–Le Cam divergence: $\mathrm{JSD}(P||Q) \geq \frac{\log e}{8} \Delta(P||Q)$ (see [11] for an improvement of this inequality, where the Jensen–Shannon divergence is called the "capacitory discrimination", by a factor of 2).
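In nats ($\log e = 1$), the resulting bound reads $\mathrm{JSD}(P||Q) \geq \Delta(P||Q)/8$; a numerical spot-check on arbitrarily chosen discrete distributions:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
m = (p + q) / 2
kl = lambda a, b: np.sum(a * np.log(a / b))
jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)    # Jensen-Shannon divergence (nats)
delta = np.sum((p - q) ** 2 / (p + q))   # Vincze-Le Cam divergence
assert jsd >= delta / 8                  # Corollary 2 with kappa = 1/2
```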
We now investigate more general, non-symmetric skewing in what follows.
Proposition 4.
For $\alpha, \beta \in [0,1]$, define
$C(\alpha) := \begin{cases} 1 - \alpha & \text{when } \alpha \leq \beta, \\ \alpha & \text{when } \alpha > \beta, \end{cases}$
and
$S_{\alpha,\beta}(P||Q) := D((1-\alpha)P + \alpha Q \,||\, (1-\beta)P + \beta Q).$
Then,
$S_{\alpha,\beta}(P||Q) \leq C(\alpha) D_\infty(\alpha||\beta) |P - Q|_{TV},$
where $D_\infty(\alpha||\beta) := \log \max\left\{ \frac{\alpha}{\beta}, \frac{1-\alpha}{1-\beta} \right\}$ is the binary ∞-Rényi divergence [27].
We need the following lemma, originally proved by Audenaert in the quantum setting [28]. It is based on a differential relationship between the skew divergence [12] and the divergence of Györfi and Vajda [15] (see [29,30]).
Lemma 3
([26], Theorem III.1). For P and Q probability measures and $t \in [0,1]$,
$S_{0,t}(P||Q) \leq -\log t \, |P - Q|_{TV}.$
Proof of Proposition 4.
If $\alpha \leq \beta$, then $D_\infty(\alpha||\beta) = \log \frac{1-\alpha}{1-\beta}$ and $C(\alpha) = 1 - \alpha$. In addition,
$(1-\beta)P + \beta Q = t\left[(1-\alpha)P + \alpha Q\right] + (1-t)Q$
with $t = \frac{1-\beta}{1-\alpha}$; thus,
$S_{\alpha,\beta}(P||Q) = S_{0,t}((1-\alpha)P + \alpha Q \,||\, Q) \leq (-\log t) \left| ((1-\alpha)P + \alpha Q) - Q \right|_{TV} = C(\alpha) D_\infty(\alpha||\beta) |P - Q|_{TV},$
where the inequality follows from Lemma 3. Following the same argument for $\alpha > \beta$, so that $C(\alpha) = \alpha$, $D_\infty(\alpha||\beta) = \log \frac{\alpha}{\beta}$, and
$(1-\beta)P + \beta Q = t\left[(1-\alpha)P + \alpha Q\right] + (1-t)P$
for $t = \frac{\beta}{\alpha}$, completes the proof. Indeed,
$S_{\alpha,\beta}(P||Q) = S_{0,t}((1-\alpha)P + \alpha Q \,||\, P) \leq (-\log t) \left| ((1-\alpha)P + \alpha Q) - P \right|_{TV} = C(\alpha) D_\infty(\alpha||\beta) |P - Q|_{TV}.$
□
We recover the classical bound [11,16] of the Jensen–Shannon divergence by the total variation.
Corollary 3.
For probability measures P and Q,
$\mathrm{JSD}(P||Q) \leq \log 2 \, |P - Q|_{TV}.$
Proof.
Since $\mathrm{JSD}(P||Q) = \frac{1}{2} S_{0,\frac{1}{2}}(P||Q) + \frac{1}{2} S_{1,\frac{1}{2}}(P||Q)$, the result follows by applying Proposition 4 to each term. □
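A numerical spot-check of Corollary 3, in nats and with arbitrarily chosen distributions:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.5])
m = (p + q) / 2
kl = lambda a, b: np.sum(a * np.log(a / b))
jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)    # Jensen-Shannon divergence (nats)
tv = 0.5 * np.sum(np.abs(p - q))         # total variation
assert jsd <= np.log(2) * tv             # Corollary 3
```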
Proposition 4 gives a sharpening of Lemma 1 of Nielsen [17], who proved $S α , β ( P | | Q ) ≤ D ∞ ( α | | β )$, and used the result to establish the boundedness of a generalization of the Jensen–Shannon Divergence.
Definition 6
(Nielsen [17]). For p and q densities with respect to a reference measure μ, $w_i > 0$ such that $\sum_{i=1}^n w_i = 1$, and $\alpha_i \in [0,1]$, define
$JS_{\alpha,w}(p : q) = \sum_{i=1}^n w_i D\big( (1-\alpha_i)p + \alpha_i q \,||\, (1-\bar\alpha)p + \bar\alpha q \big),$
where $\bar\alpha = \sum_{i=1}^n w_i \alpha_i$.
Note that, when $n = 2$, $\alpha_1 = 1$, $\alpha_2 = 0$, and $w_i = \frac{1}{2}$, $JS_{\alpha,w}(p:q) = \mathrm{JSD}(p||q)$, the usual Jensen–Shannon divergence. We now demonstrate that Nielsen's generalized Jensen–Shannon divergence can be bounded by the total variation distance, just as the ordinary Jensen–Shannon divergence.
Theorem 5.
For p and q densities with respect to a reference measure μ, $w_i > 0$ such that $\sum_{i=1}^n w_i = 1$, and $\alpha_i \in (0,1)$,
$\log e \, \mathrm{Var}_w(\alpha) \, |p - q|_{TV}^2 \leq JS_{\alpha,w}(p:q) \leq A H(w) |p - q|_{TV},$
where $H(w) := -\sum_i w_i \log w_i \geq 0$ and $A = \max_i |\alpha_i - \bar\alpha_i|$ with $\bar\alpha_i := \frac{\sum_{j \neq i} w_j \alpha_j}{1 - w_i}$.
Note that, since $\bar\alpha_i$ is the w-average of the $\alpha_j$ terms with $\alpha_i$ removed, $\bar\alpha_i \in [0,1]$ and thus $A \leq 1$. We need the following theorem from Melbourne et al. [26] for the upper bound.
Theorem 6
([26], Theorem 1.1). For $f_i$ densities with respect to a common reference measure γ and $\lambda_i > 0$ such that $\sum_{i=1}^n \lambda_i = 1$,
$h_\gamma\Big( \sum_i \lambda_i f_i \Big) - \sum_i \lambda_i h_\gamma(f_i) \leq T H(\lambda),$
where $h_\gamma(f_i) := -\int f_i(x) \log f_i(x) d\gamma(x)$ and $T := \sup_i |f_i - \tilde{f}_i|_{TV}$ with $\tilde{f}_i := \sum_{j \neq i} \frac{\lambda_j}{1 - \lambda_i} f_j$.
Proof of Theorem 5.
We apply Theorem 6 with $f_i = (1-\alpha_i)p + \alpha_i q$ and $\lambda_i = w_i$. Noticing that, in general,
$h_\gamma\Big( \sum_i \lambda_i f_i \Big) - \sum_i \lambda_i h_\gamma(f_i) = \sum_i \lambda_i D(f_i || f)$
for $f = \sum_i \lambda_i f_i$, we have
$JS_{\alpha,w}(p:q) = \sum_{i=1}^n w_i D\big( (1-\alpha_i)p + \alpha_i q \,||\, (1-\bar\alpha)p + \bar\alpha q \big) \leq T H(w).$
It remains to determine $T = \max_i |f_i - \tilde{f}_i|_{TV}$. Since
$\tilde{f}_i - f_i = \frac{f - f_i}{1 - \lambda_i} = \frac{\big( (1-\bar\alpha)p + \bar\alpha q \big) - \big( (1-\alpha_i)p + \alpha_i q \big)}{1 - w_i} = \frac{(\alpha_i - \bar\alpha)(p - q)}{1 - w_i} = (\alpha_i - \bar\alpha_i)(p - q),$
we have $T = \max_i |\alpha_i - \bar\alpha_i| \, |p - q|_{TV} = A |p - q|_{TV}$, and the proof of the upper bound is complete.
To prove the lower bound, we apply Pinsker's inequality, $2 \log e \, |P - Q|_{TV}^2 \leq D(P||Q)$:
$JS_{\alpha,w}(p:q) = \sum_{i=1}^n w_i D\big( (1-\alpha_i)p + \alpha_i q \,||\, (1-\bar\alpha)p + \bar\alpha q \big) \geq \frac{1}{2} \sum_{i=1}^n w_i \, 2 \log e \, \big| ((1-\alpha_i)p + \alpha_i q) - ((1-\bar\alpha)p + \bar\alpha q) \big|_{TV}^2 = \log e \sum_{i=1}^n w_i (\alpha_i - \bar\alpha)^2 |p - q|_{TV}^2 = \log e \, \mathrm{Var}_w(\alpha) |p - q|_{TV}^2.$
□
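Both bounds of Theorem 5 can be exercised numerically. The sketch below works in nats (so $\log e = 1$); the distributions, weights, and skewing parameters are arbitrary choices for illustration:

```python
import numpy as np

def kl(a, b):  # relative entropy in nats
    return np.sum(a * np.log(a / b))

p = np.array([0.5, 0.3, 0.2]); q = np.array([0.25, 0.25, 0.5])
w = np.array([0.2, 0.3, 0.5]); alpha = np.array([0.1, 0.5, 0.9])
abar = w @ alpha
mix = lambda a: (1 - a) * p + a * q
js = sum(wi * kl(mix(ai), mix(abar)) for wi, ai in zip(w, alpha))

tv = 0.5 * np.sum(np.abs(p - q))
var_w = w @ (alpha - abar) ** 2           # Var_w(alpha)
abar_i = (abar - w * alpha) / (1 - w)     # leave-one-out weighted means
A = np.max(np.abs(alpha - abar_i))
H = -np.sum(w * np.log(w))                # Shannon entropy of w (nats)
assert var_w * tv ** 2 <= js <= A * H * tv
```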
Definition 7.
Given an f-divergence, densities p and q with respect to a common reference measure, $\alpha \in [0,1]^n$, and $w \in (0,1)^n$ such that $\sum_i w_i = 1$, define its generalized skew divergence
$D_f^{\alpha,w}(p:q) = \sum_{i=1}^n w_i D_f\big( (1-\alpha_i)p + \alpha_i q \,||\, (1-\bar\alpha)p + \bar\alpha q \big),$
where $\bar\alpha = \sum_i w_i \alpha_i$.
Note that, by Theorem 4, $D_f^{\alpha,w}$ is an f-divergence. The generalized skew divergence of the relative entropy is the generalized Jensen–Shannon divergence $JS_{\alpha,w}$. We denote the generalized skew divergence of the $\chi^2$-divergence from p to q by
$\chi^2_{\alpha,w}(p:q) := \sum_i w_i \chi^2\big( (1-\alpha_i)p + \alpha_i q \,||\, (1-\bar\alpha)p + \bar\alpha q \big).$
Note that, when $n = 2$, $\alpha_1 = 0$, $\alpha_2 = 1$, and $w_i = \frac{1}{2}$, we recover the skew symmetrized divergence in Definition 4:
$D_f^{(0,1),(1/2,1/2)}(p:q) = \Delta_f(p||q).$
The following theorem shows that the usual upper bound for the relative entropy by the $χ 2$-divergence can be reversed up to a factor in the skewed case.
Theorem 7.
For p and q with a common dominating measure μ,
$\chi^2_{\alpha,w}(p:q) \leq 2 N_\infty(\alpha,w) \, JS_{\alpha,w}(p:q),$
where, for $\alpha \in [0,1]^n$ and $w \in (0,1)^n$ such that $\sum_i w_i = 1$, we use the notation $N_\infty(\alpha,w) := \max_i e^{D_\infty(\alpha_i||\bar\alpha)} = \max_i \max\left\{ \frac{1-\alpha_i}{1-\bar\alpha}, \frac{\alpha_i}{\bar\alpha} \right\}$ with $\bar\alpha = \sum_i w_i \alpha_i$.
Proof.
By definition,
$J S α , w ( p : q ) = ∑ i = 1 n w i D ( ( 1 − α i ) p + α i q | | ( 1 − α ¯ ) p + α ¯ q ) .$
Taking $P_i$ to be the measure associated to $(1-\alpha_i)p + \alpha_i q$ and Q the measure associated to $(1-\bar\alpha)p + \bar\alpha q$, we have
$\frac{dP_i}{dQ} = \frac{(1-\alpha_i)p + \alpha_i q}{(1-\bar\alpha)p + \bar\alpha q} \leq \max\left\{ \frac{1-\alpha_i}{1-\bar\alpha}, \frac{\alpha_i}{\bar\alpha} \right\} = e^{D_\infty(\alpha_i||\bar\alpha)} \leq N_\infty(\alpha,w).$
Since $f(x) = x \log x$, the convex function associated to the usual KL divergence (in nats), satisfies $f''(x) = \frac{1}{x}$, f is $N_\infty^{-1}(\alpha,w)$-convex on $\big( 0, N_\infty(\alpha,w) \big]$, an interval containing every value of $\frac{dP_i}{dQ}$. Applying Proposition 2, we obtain
$D\Big( \sum_i w_i P_i \,\Big|\Big|\, Q \Big) \leq \sum_i w_i D(P_i||Q) - \sum_i w_i \int_{\mathcal{X}} \left( \frac{dP_i}{dQ} - \frac{dP}{dQ} \right)^2 \frac{dQ}{2 N_\infty(\alpha,w)}.$
Since $Q = \sum_i w_i P_i$, the left hand side above is zero, while
$\sum_i w_i \int_{\mathcal{X}} \left( \frac{dP_i}{dQ} - \frac{dP}{dQ} \right)^2 dQ = \sum_i w_i \int_{\mathcal{X}} \left( \frac{dP_i}{dP} - 1 \right)^2 dP = \sum_i w_i \chi^2(P_i||P) = \chi^2_{\alpha,w}(p:q).$
Rearranging gives
$\frac{\chi^2_{\alpha,w}(p:q)}{2 N_\infty(\alpha,w)} \leq JS_{\alpha,w}(p:q),$
which is our conclusion. □
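The conclusion $\chi^2_{\alpha,w}(p:q)/(2 N_\infty(\alpha,w)) \leq JS_{\alpha,w}(p:q)$ can be checked directly on a discrete example; the choices of p, q, w, and α below are arbitrary:

```python
import numpy as np

def kl(a, b):  # relative entropy in nats
    return np.sum(a * np.log(a / b))

p = np.array([0.5, 0.3, 0.2]); q = np.array([0.25, 0.25, 0.5])
w = np.array([0.2, 0.3, 0.5]); alpha = np.array([0.1, 0.5, 0.9])
abar = w @ alpha
mix = lambda a: (1 - a) * p + a * q
js = sum(wi * kl(mix(ai), mix(abar)) for wi, ai in zip(w, alpha))
chi2_skew = sum(wi * np.sum((mix(ai) - mix(abar)) ** 2 / mix(abar))
                for wi, ai in zip(w, alpha))
N_inf = max(max((1 - ai) / (1 - abar), ai / abar) for ai in alpha)
assert chi2_skew / (2 * N_inf) <= js
```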

## 4. Total Variation Bounds and Bayes Risk

In this section, we derive bounds on the Bayes risk associated to a family of probability measures with a prior distribution λ. Let us state definitions and recall basic relationships. Given probability densities $\{p_i\}_{i=1}^n$ on a space $\mathcal{X}$ with respect to a reference measure μ, and $\lambda_i \geq 0$ such that $\sum_{i=1}^n \lambda_i = 1$, define the Bayes risk
$R := R_\lambda(p) := 1 - \int_{\mathcal{X}} \max_i \{\lambda_i p_i(x)\} \, d\mu(x).$
If $\ell(x,y) = 1 - \delta_x(y)$ and we define $T(x) := \arg\max_i \lambda_i p_i(x)$, then observe that this definition is consistent with the usual definition of the Bayes risk associated to the loss function $\ell$. Below, we consider θ to be a random variable on $\{1, 2, \ldots, n\}$ such that $P(\theta = i) = \lambda_i$, and X to be a random variable with conditional distribution $P(X \in A | \theta = i) = \int_A p_i(x) d\mu(x)$. The following result shows that the Bayes risk gives the probability of the categorization error under an optimal estimator.
Proposition 5.
The Bayes risk satisfies
$R = \min_{\hat\theta} E \ell(\theta, \hat\theta(X)) = E \ell(\theta, T(X)),$
where the minimum is defined over $θ ^ : X → { 1 , 2 , … , n }$.
Proof.
Observe that $R = 1 - \int_{\mathcal{X}} \lambda_{T(x)} p_{T(x)}(x) d\mu(x) = E \ell(\theta, T(X))$. Similarly,
$E \ell(\theta, \hat\theta(X)) = 1 - \int_{\mathcal{X}} \lambda_{\hat\theta(x)} p_{\hat\theta(x)}(x) d\mu(x) \geq 1 - \int_{\mathcal{X}} \lambda_{T(x)} p_{T(x)}(x) d\mu(x) = R,$
which gives our conclusion. □
It is known (see, for example, [9,31]) that the Bayes risk can also be tied directly to the total variation in the following special case, whose proof we include for completeness.
Proposition 6.
When $n = 2$ and $λ 1 = λ 2 = 1 2$, the Bayes risk associated to the densities $p 1$ and $p 2$ satisfies
$2R = 1 - |p_1 - p_2|_{TV}.$
Proof.
Since $p_T = \frac{|p_1 - p_2| + p_1 + p_2}{2}$, integrating gives $\int_{\mathcal{X}} p_T(x) d\mu(x) = |p_1 - p_2|_{TV} + 1$, from which the equality follows. □
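Proposition 6 is easy to verify numerically; the densities below are arbitrary choices:

```python
import numpy as np

p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.25, 0.25, 0.5])
R = 1 - np.sum(np.maximum(p1, p2)) / 2   # Bayes risk, lambda_1 = lambda_2 = 1/2
tv = 0.5 * np.sum(np.abs(p1 - p2))       # total variation
assert np.isclose(2 * R, 1 - tv)
```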
Information theoretic bounds to control the Bayes and minimax risk have an extensive literature (see, for example, [9,32,33,34,35]). Fano's inequality is the seminal result in this direction, and we direct the reader to a survey of such techniques in statistical estimation in [36]. What follows can be understood as a sharpening of the work of Guntuboyina [9] under the assumption of κ-convexity.
The function $T(x) = \arg\max_i \{\lambda_i p_i(x)\}$ induces the following convex decompositions of our densities. The density q can be realized as a convex combination of $q_1 := \frac{\lambda_T q}{1 - Q}$, where $Q := 1 - \int \lambda_T q \, d\mu$, and $q_2 := \frac{(1 - \lambda_T) q}{Q}$:
$q = (1 - Q) q_1 + Q q_2.$
If we take $p = \sum_i \lambda_i p_i$, then p can be decomposed via $\rho_1 := \frac{\lambda_T p_T}{1 - R}$ and $\rho_2 := \frac{p - \lambda_T p_T}{R}$, so that
$p = (1 - R) \rho_1 + R \rho_2.$
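The two convex decompositions can be formed and checked numerically. The sketch below (densities and weights are arbitrary choices) builds $q_1, q_2, \rho_1, \rho_2$ pointwise from the Bayes estimator T:

```python
import numpy as np

lam = np.array([0.5, 0.5])
p_i = np.array([[0.5, 0.3, 0.2],
                [0.25, 0.25, 0.5]])     # rows are the densities p_i
q = np.array([0.3, 0.3, 0.4])

weighted = lam[:, None] * p_i
T = np.argmax(weighted, axis=0)         # Bayes estimator, pointwise in x
lamT_pT = weighted.max(axis=0)          # lambda_T(x) p_T(x)
lamT = lam[T]                           # lambda_T(x)

R = 1 - lamT_pT.sum()                   # Bayes risk
Q = 1 - (lamT * q).sum()
q1 = lamT * q / (1 - Q)
q2 = (1 - lamT) * q / Q
p = lam @ p_i                           # barycenter sum_i lambda_i p_i
rho1 = lamT_pT / (1 - R)
rho2 = (p - lamT_pT) / R

assert np.allclose(q, (1 - Q) * q1 + Q * q2)
assert np.allclose(p, (1 - R) * rho1 + R * rho2)
assert np.isclose(rho1.sum(), 1) and np.isclose(rho2.sum(), 1)
```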
Theorem 8.
When f is κ-convex on $(a,b)$, with $a = \inf_{i,x} \frac{p_i(x)}{q(x)}$ and $b = \sup_{i,x} \frac{p_i(x)}{q(x)}$,
$\sum_i \lambda_i D_f(p_i || q) \geq D_f(R||Q) + \frac{\kappa W}{2},$
where
$W := W(\lambda_i, p_i, q) := \frac{(1-R)^2}{1-Q}\,\chi^2(\rho_1||q_1) + \frac{R^2}{Q}\,\chi^2(\rho_2||q_2) + W_0$
with $W 0 ≥ 0$, which can be expressed explicitly as
$W 0 = ∫ ( 1 − λ T ) V a r λ i ≠ T p i q d μ = ∫ ∑ i ≠ T λ i | p i − ∑ j ≠ T λ j 1 − λ T p j | 2 q d μ ,$
where for fixed x, we consider the variance $V a r λ i ≠ T p i q$ to be the variance of a random variable taking values $p i ( x ) / q ( x )$ with probability $λ i / ( 1 − λ T ( x ) )$ for $i ≠ T ( x )$. Note this term is non-zero only when $n > 2$.
Proof.
For a fixed x, we apply Lemma 1
$∑ i λ i f p i q = λ T f p T q + ( 1 − λ T ) ∑ i ≠ T λ i 1 − λ T f p i q ≥ λ T f p T q + ( 1 − λ T ) f p − λ T p T q ( 1 − λ T ) + κ 2 Var λ i ≠ T p i q$
Integrating,
$∑ i λ i D f ( p i | | q ) ≥ ∫ λ T f p T q q + ∫ ( 1 − λ T ) f − λ T p T + ∑ i λ i p i q ( 1 − λ T ) q + κ 2 W 0 ,$
where
$W_0 = \int \sum_{i \neq T(x)} \lambda_i \frac{\left| p_i - \sum_{j \neq T} \frac{\lambda_j}{1-\lambda_T} p_j \right|^2}{q} \, d\mu .$
Applying the $κ$-convexity of f,
$∫ λ T f p T q q = ( 1 − Q ) ∫ q 1 f p T q ≥ ( 1 − Q ) f ∫ λ T p T 1 − Q + κ 2 Var q 1 p T q = ( 1 − Q ) f ( ( 1 − R ) / ( 1 − Q ) ) + ( 1 − Q ) κ 2 W 1 ,$
with
$W_1 := \mathrm{Var}_{q_1}\left(\frac{p_T}{q}\right) = \left(\frac{1-R}{1-Q}\right)^2 \mathrm{Var}_{q_1}\left(\frac{\lambda_T p_T}{\lambda_T q} \cdot \frac{1-Q}{1-R}\right) = \left(\frac{1-R}{1-Q}\right)^2 \mathrm{Var}_{q_1}\left(\frac{\rho_1}{q_1}\right) = \left(\frac{1-R}{1-Q}\right)^2 \chi^2(\rho_1||q_1).$
Similarly,
$∫ ( 1 − λ T ) f p − λ T p T q ( 1 − λ T ) q = Q ∫ q 2 f p − λ T p T q ( 1 − λ T ) ≥ Q f ∫ q 2 p − λ T p T q ( 1 − λ T ) + Q κ 2 W 2 = Q f ( R / Q ) + Q κ 2 W 2$
where
$W_2 := \mathrm{Var}_{q_2}\left(\frac{p - \lambda_T p_T}{q(1-\lambda_T)}\right) = \left(\frac{R}{Q}\right)^2 \mathrm{Var}_{q_2}\left(\frac{p - \lambda_T p_T}{q(1-\lambda_T)} \cdot \frac{Q}{R}\right) = \left(\frac{R}{Q}\right)^2 \int q_2 \left(\frac{\rho_2}{q_2} - 1\right)^2 d\mu = \left(\frac{R}{Q}\right)^2 \chi^2(\rho_2||q_2).$
Writing the total correction as $\frac{\kappa}{2}\left(W_0 + (1-Q)W_1 + Q W_2\right) = \frac{\kappa W}{2}$, we have our result. □
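The inequality of Theorem 8 can be checked numerically on a discrete toy example. The sketch below uses $f(x) = x \log x$ with the natural logarithm, so that f is κ-convex with $κ = 1/b$ for $b = \sup_{i,x} p_i(x)/q(x)$; all weights and densities are made up for illustration:

```python
import numpy as np

# Illustrative weights lam_i, densities p_i, and reference density q.
lam = np.array([0.5, 0.3, 0.2])
p = np.array([[0.40, 0.30, 0.20, 0.10],
              [0.10, 0.20, 0.30, 0.40],
              [0.25, 0.25, 0.25, 0.25]])
q = np.array([0.25, 0.25, 0.25, 0.25])
n, m = p.shape

f = lambda x: x * np.log(x)                      # relative-entropy generator
chi2 = lambda a, b: np.sum((a - b) ** 2 / b)

# T(x) = argmax_i lam_i p_i(x), Bayes risk R, and Q
T = np.argmax(lam[:, None] * p, axis=0)
lamT, pT = lam[T], p[T, np.arange(m)]
R = 1 - np.sum(lamT * pT)
Q = 1 - np.sum(lamT * q)

# convex decompositions q = (1-Q) q1 + Q q2 and pbar = (1-R) rho1 + R rho2
q1 = lamT * q / (1 - Q)
q2 = (1 - lamT) * q / Q
pbar = lam @ p
rho1 = lamT * pT / (1 - R)
rho2 = (pbar - lamT * pT) / R

# W0: per-atom weighted variance over the non-selected hypotheses (n > 2 here)
W0 = 0.0
for x in range(m):
    rest = [i for i in range(n) if i != T[x]]
    mean = np.dot(lam[rest], p[rest, x]) / (1 - lamT[x])
    W0 += np.sum(lam[rest] * (p[rest, x] - mean) ** 2) / q[x]

W = (1 - R) ** 2 / (1 - Q) * chi2(rho1, q1) + R ** 2 / Q * chi2(rho2, q2) + W0

# f''(x) = 1/x, so f is kappa-convex with kappa = 1/b on the range of ratios
kappa = 1.0 / np.max(p / q)

lhs = np.sum(lam * np.sum(q * f(p / q), axis=1))       # sum_i lam_i D_f(p_i||q)
Df_RQ = Q * f(R / Q) + (1 - Q) * f((1 - R) / (1 - Q))  # binary D_f(R||Q)

assert lhs >= Df_RQ + kappa * W / 2
```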
Corollary 4.
When $λ i = 1 n$, and f is κ-convex on $( inf i , x p i / q , sup i , x p i / q )$
$1 n ∑ i D f ( p i | | q ) ≥ D f ( R | | ( n − 1 ) / n ) + κ 2 n 2 ( 1 − R ) 2 χ 2 ( ρ 1 | | q ) + n R n − 1 2 χ 2 ( ρ 2 | | q ) + W 0$
further when $n = 2$,
$D f ( p 1 | | q ) + D f ( p 2 | | q ) 2 ≥ D f 1 − | p 1 − p 2 | T V 2 | | 1 2 + κ 2 ( 1 + | p 1 − p 2 | T V ) 2 χ 2 ( ρ 1 | | q ) + ( 1 − | p 1 − p 2 | T V ) 2 χ 2 ( ρ 2 | | q ) .$
Proof.
Note that $q 1 = q 2 = q$, since $λ i = 1 n$ implies $λ T = 1 n$ as well. In addition, $Q = 1 − ∫ λ T q d μ = n − 1 n$ so that applying Theorem 8 gives
$∑ i = 1 n D f ( p i | | q ) ≥ n D f ( R | | ( n − 1 ) / n ) + κ n W ( λ i , p i , q ) 2 .$
The term W can be simplified as well. In the notation of the proof of Theorem 8,
$W 1 = n 2 ( 1 − R ) 2 χ 2 ( ρ 1 , q ) W 2 = n R n − 1 2 χ 2 ( ρ 2 | | q ) W 0 = ∫ 1 n − 1 ∑ i ≠ T ( p i − 1 n − 1 ∑ j ≠ T p j ) 2 q d μ .$
For the special case, one need only recall $R = ( 1 − | p 1 − p 2 | T V ) / 2$ while inserting 2 for n. □
Corollary 5.
When $p i ≤ q / t ∗$ for $t ∗ > 0$, and $f ( x ) = x log x$
$∑ i λ i D ( p i | | q ) ≥ D ( R | | Q ) + t ∗ W ( λ i , p i , q ) 2$
for $D ( p i | | q )$ the relative entropy. In particular,
$∑ i λ i D ( p i | | q ) ≥ D ( p | | q ) + D ( R | | P ) + t ∗ W ( λ i , p i , p ) 2$
where $P = 1 − ∫ λ T p d μ$ for $p = ∑ i λ i p i$ and $t ∗ = min λ i$.
Proof.
For the relative entropy, $f ( x ) = x log x$ is $1 M$-convex on $( 0 , M ]$ since $f ″ ( x ) = 1 / x$. When $p i ≤ q / t ∗$ holds for all i, we can apply Theorem 8 with $M = 1 / t ∗$. For the second inequality, recall the compensation identity, $∑ i λ i D ( p i | | q ) = ∑ i λ i D ( p i | | p ) + D ( p | | q )$, and apply the first inequality to $∑ i λ i D ( p i | | p )$ for the result. □
This gives a lower bound on the Jensen–Shannon divergence, defined as $JSD ( μ | | ν ) = 1 2 D ( μ | | μ / 2 + ν / 2 ) + 1 2 D ( ν | | μ / 2 + ν / 2 )$. Let us also note that through the compensation identity $∑ i λ i D ( p i | | q ) = ∑ i λ i D ( p i | | p ) + D ( p | | q )$, we have $∑ i λ i D ( p i | | q ) ≥ ∑ i λ i D ( p i | | p )$ where $p = ∑ i λ i p i$. In the case that $λ i = 1 n$,
$∑ i λ i D ( p i | | q ) ≥ ∑ i λ i D ( p i | | p ) ≥ Q f 1 − R Q + ( 1 − Q ) f R 1 − Q + t ∗ W 2$
Corollary 6.
For two densities $p 1$ and $p 2$, the Jensen–Shannon divergence satisfies the following,
$JSD ( p 1 | | p 2 ) ≥ D 1 − | p 1 − p 2 | T V 2 | | 1 / 2 + 1 4 ( 1 + | p 1 − p 2 | T V ) 2 χ 2 ( ρ 1 | | p ) + ( 1 − | p 1 − p 2 | T V ) 2 χ 2 ( ρ 2 | | p )$
with $ρ i$ defined above and $p = p 1 / 2 + p 2 / 2$.
Proof.
Since $p i / ( ( p 1 + p 2 ) / 2 ) ≤ 2$ and $f ( x ) = x log x$ satisfies $f ″ ( x ) ≥ 1 2$ on $( 0 , 2 )$, the function f is $1 2$-convex on the relevant domain. Taking $q = p 1 + p 2 2$ in the $n = 2$ case of Corollary 4 with $κ = 1 2$ yields the result. □
Noting that $2 D ( ( 1 + V ) / 2 | | 1 / 2 ) = ( 1 + V ) log ( 1 + V ) + ( 1 − V ) log ( 1 − V ) ≥ V 2 log e$, we see that a further bound,
$JSD ( p 1 | | p 2 ) ≥ log e 2 V 2 + ( 1 + V ) 2 χ 2 ( ρ 1 | | p ) + ( 1 − V ) 2 χ 2 ( ρ 2 | | p ) 4 ,$
can be obtained for $V = | p 1 − p 2 | T V$.
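The convexity part of these bounds, $JSD ( p 1 | | p 2 ) ≥ D ( ( 1 − V ) / 2 | | 1 / 2 ) ≥ ( log e / 2 ) V 2$, can be checked numerically on its own; a sketch with illustrative densities (natural logarithms, so $log e = 1$):

```python
import numpy as np

# Toy densities; V is the total variation distance between them.
p1 = np.array([0.5, 0.3, 0.1, 0.1])
p2 = np.array([0.1, 0.2, 0.3, 0.4])
p = (p1 + p2) / 2

kl = lambda a, b: np.sum(a * np.log(a / b))
jsd = 0.5 * kl(p1, p) + 0.5 * kl(p2, p)

V = 0.5 * np.abs(p1 - p2).sum()
R = (1 - V) / 2
# binary relative entropy D(R || 1/2) in nats
binary_kl = R * np.log(2 * R) + (1 - R) * np.log(2 * (1 - R))

assert jsd >= binary_kl >= V ** 2 / 2
```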

#### On Topsøe’s Sharpening of Pinsker’s Inequality

For probability measures $P i$ and Q with densities $p i$ and q with respect to a common reference measure, and weights $t i > 0$ with $∑ i = 1 n t i = 1$, denote by $P = ∑ i t i P i$ the mixture with density $p = ∑ i t i p i$. The compensation identity is
$∑ i = 1 n t i D ( P i | | Q ) = D ( P | | Q ) + ∑ i = 1 n t i D ( P i | | P ) .$
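The compensation identity is an exact identity and can be confirmed directly on a toy example (illustrative densities, natural logarithms):

```python
import numpy as np

# Made-up mixture weights t_i, component densities p_i, and reference q.
t = np.array([0.2, 0.5, 0.3])
p = np.array([[0.40, 0.30, 0.20, 0.10],
              [0.10, 0.20, 0.30, 0.40],
              [0.25, 0.25, 0.25, 0.25]])
q = np.array([0.10, 0.40, 0.30, 0.20])

kl = lambda a, b: np.sum(a * np.log(a / b))
pbar = t @ p                                  # mixture density

lhs = sum(ti * kl(pi, q) for ti, pi in zip(t, p))
rhs = kl(pbar, q) + sum(ti * kl(pi, pbar) for ti, pi in zip(t, p))

assert np.isclose(lhs, rhs)                   # compensation identity
```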
Theorem 9.
For $P 1$ and $P 2$, denote $M k = 2 − k P 1 + ( 1 − 2 − k ) P 2$, and define
$M_1^{(k)} = \frac{M_k \mathbf{1}_{\{P_1 > P_2\}} + P_2 \mathbf{1}_{\{P_1 \leq P_2\}}}{M_k\{P_1 > P_2\} + P_2\{P_1 \leq P_2\}}, \qquad M_2^{(k)} = \frac{M_k \mathbf{1}_{\{P_1 \leq P_2\}} + P_2 \mathbf{1}_{\{P_1 > P_2\}}}{M_k\{P_1 \leq P_2\} + P_2\{P_1 > P_2\}},$
then the following sharpening of Pinsker’s inequality can be derived,
$D ( P 1 | | P 2 ) ≥ ( 2 log e ) | P 1 − P 2 | T V 2 + ∑ k = 0 ∞ 2 k χ 2 ( M 1 ( k ) , M k + 1 ) 2 + χ 2 ( M 2 ( k ) , M k + 1 ) 2 .$
Proof.
When $n = 2$ and $t 1 = t 2 = 1 2$, if we denote $M = P 1 + P 2 2$, then the compensation identity above reads as
$1 2 D ( P 1 | | Q ) + 1 2 D ( P 2 | | Q ) = D ( M | | Q ) + JSD ( P 1 | | P 2 ) .$
Taking $Q = P 2$, we arrive at
$D ( P 1 | | P 2 ) = 2 D ( M | | P 2 ) + 2 JSD ( P 1 | | P 2 )$
Iterating and writing $M k = 2 − k P 1 + ( 1 − 2 − k ) P 2$, we have
$D(P_1||P_2) = 2^n D(M_n||P_2) + 2\sum_{k=0}^{n-1} 2^k\, \mathrm{JSD}(M_k||P_2)$
It can be shown (see [11]) that $2^n D(M_n||P_2) → 0$ as $n → ∞$, giving the following series representation,
$D ( P 1 | | P 2 ) = 2 ∑ k = 0 ∞ 2 k JSD ( M k | | P 2 ) .$
Note that the $ρ$-decomposition associated to the pair $( M k , P 2 )$ is exactly $ρ i = M i ( k )$; thus, by Corollary 6,
$D ( P 1 | | P 2 ) = 2 ∑ k = 0 ∞ 2 k JSD ( M k | | P 2 ) ≥ ∑ k = 0 ∞ 2 k | M k − P 2 | T V 2 log e + χ 2 ( M 1 ( k ) , M k + 1 ) 2 + χ 2 ( M 2 ( k ) , M k + 1 ) 2 = ( 2 log e ) | P 1 − P 2 | T V 2 + ∑ k = 0 ∞ 2 k χ 2 ( M 1 ( k ) , M k + 1 ) 2 + χ 2 ( M 2 ( k ) , M k + 1 ) 2 .$
Thus, we arrive at the desired sharpening of Pinsker’s inequality. □
Observe that the $k = 0$ term in the above series is equivalent to
$2^0\left(\frac{\chi^2(M_1^{(0)}, M_1)}{2} + \frac{\chi^2(M_2^{(0)}, M_1)}{2}\right) = \frac{\chi^2(\rho_1, p)}{2} + \frac{\chi^2(\rho_2, p)}{2},$
where $ρ i$ is the convex decomposition of $p = p 1 + p 2 2$ in terms of $T ( x ) = arg max { p 1 ( x ) , p 2 ( x ) }$.
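The finite iteration behind the series representation, $D(P_1||P_2) = 2^n D(M_n||P_2) + 2\sum_{k=0}^{n-1} 2^k\,\mathrm{JSD}(M_k||P_2)$, holds exactly for every n, and the remainder term visibly vanishes; a numerical sketch with illustrative densities:

```python
import numpy as np

# Toy densities standing in for P1 and P2.
p1 = np.array([0.5, 0.3, 0.1, 0.1])
p2 = np.array([0.1, 0.2, 0.3, 0.4])

kl = lambda a, b: np.sum(a * np.log(a / b))
def jsd(a, b):
    m = (a + b) / 2
    return 0.5 * kl(a, m) + 0.5 * kl(b, m)

# M_k = 2^{-k} P1 + (1 - 2^{-k}) P2, so M_0 = P1
Mk = lambda k: 2.0 ** (-k) * p1 + (1 - 2.0 ** (-k)) * p2

n = 20
total = 2 ** n * kl(Mk(n), p2) + 2 * sum(2 ** k * jsd(Mk(k), p2) for k in range(n))

assert np.isclose(total, kl(p1, p2))   # exact iterated identity
# the remainder 2^n D(M_n||P2) is already negligible at n = 20
assert 2 ** n * kl(Mk(n), p2) < 1e-3
```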

## 5. Conclusions

In this article, we begin a systematic study of strongly convex divergences, and how the strength of convexity of a divergence generator f, quantified by the parameter $κ$, influences the behavior of the divergence $D f$. We prove that every strongly convex divergence dominates the square of the total variation, extending the classical bound provided by the $χ 2$-divergence. We also study a general notion of a skew divergence, providing new bounds, in particular for the generalized skew divergence of Nielsen. Finally, we show how $κ$-convexity can be leveraged to yield improvements of Bayes risk f-divergence inequalities, and as a consequence achieve a sharpening of Pinsker’s inequality.

## Funding

This research was funded by NSF grant CNS 1809194.

## Conflicts of Interest

The author declares no conflict of interest.

## Appendix A

Theorem A1.
The class of f-divergences is stable under skewing. That is, if f is convex, satisfying $f ( 1 ) = 0$, then
$f ^ ( x ) : = ( t x + ( 1 − t ) ) f r x + ( 1 − r ) t x + ( 1 − t )$
is convex with $f ^ ( 1 ) = 0$ as well.
Proof.
If $μ$ and $ν$ have respective densities u and v with respect to a reference measure $γ$, then $r μ + ( 1 − r ) ν$ and $t μ + ( 1 − t ) ν$ have densities $r u + ( 1 − r ) v$ and $t u + ( 1 − t ) v$, so that
$S f , r , t ( μ | | ν ) = ∫ f r u + ( 1 − r ) v t u + ( 1 − t ) v ( t u + ( 1 − t ) v ) d γ$
$= ∫ f r u v + ( 1 − r ) t u v + ( 1 − t ) ( t u v + ( 1 − t ) ) v d γ$
$= ∫ f ^ u v v d γ .$
Since $f ^ ( 1 ) = f ( 1 ) = 0$, we need only prove $f ^$ convex. For this, recall that the conic transform g of a convex function f defined by $g ( x , y ) = y f ( x / y )$ for $y > 0$ is convex, since
$\frac{y_1 + y_2}{2}\, f\left(\frac{x_1 + x_2}{2} \Big/ \frac{y_1 + y_2}{2}\right) = \frac{y_1 + y_2}{2}\, f\left(\frac{y_1}{y_1 + y_2}\cdot\frac{x_1}{y_1} + \frac{y_2}{y_1 + y_2}\cdot\frac{x_2}{y_2}\right)$
$\leq \frac{y_1}{2} f(x_1/y_1) + \frac{y_2}{2} f(x_2/y_2).$
Our result follows since $f ^$ is the composition of the affine function $A ( x ) = ( r x + ( 1 − r ) , t x + ( 1 − t ) )$ with the conic transform of f,
$f ^ ( x ) = g ( A ( x ) ) .$
□
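Theorem A1 can be illustrated numerically: the skewed divergence computed from its definition agrees with the f-divergence generated by $\hat f$, and $\hat f$ passes a midpoint-convexity check on a grid (toy densities; the choices of r and t are arbitrary):

```python
import numpy as np

r, t = 0.3, 0.7
f = lambda x: x * np.log(x)                   # generator of relative entropy
f_hat = lambda x: (t * x + (1 - t)) * f((r * x + (1 - r)) / (t * x + (1 - t)))

u = np.array([0.5, 0.3, 0.1, 0.1])            # density of mu
v = np.array([0.1, 0.2, 0.3, 0.4])            # density of nu

# S_{f,r,t}(mu||nu) computed from its definition, and via f_hat
lhs = np.sum(f((r * u + (1 - r) * v) / (t * u + (1 - t) * v))
             * (t * u + (1 - t) * v))
rhs = np.sum(f_hat(u / v) * v)
assert np.isclose(lhs, rhs)

# f_hat(1) = 0 and midpoint convexity on a coarse grid
assert abs(f_hat(1.0)) < 1e-12
xs = np.linspace(0.05, 5.0, 200)
for a in xs[::20]:
    for b in xs[::20]:
        assert f_hat((a + b) / 2) <= (f_hat(a) + f_hat(b)) / 2 + 1e-12
```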

## References

1. Ali, S.M.; Silvey, S.D. A general class of coefficients of divergence of one distribution from another. J. Roy. Statist. Soc. Ser. B 1966, 28, 131–142. [Google Scholar] [CrossRef]
2. Morimoto, T. Markov processes and the H-theorem. J. Phys. Soc. Jpn. 1963, 18, 328–331. [Google Scholar] [CrossRef]
3. Csiszár, I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Magyar Tud. Akad. Mat. Kutató Int. Közl. 1963, 8, 85–108. [Google Scholar]
4. Csiszár, I. Information-type measures of difference of probability distributions and indirect observation. Stud. Sci. Math. Hung. 1967, 2, 229–318. [Google Scholar]
5. Polyanskiy, Y.; Poor, H.V.; Verdú, S. Channel coding rate in the finite blocklength regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359. [Google Scholar] [CrossRef]
6. Sason, I.; Verdú, S. f-divergence inequalities. IEEE Trans. Inf. Theory 2016, 62, 5973–6006. [Google Scholar] [CrossRef]
7. Polyanskiy, Y.; Wu, Y. Lecture Notes on Information Theory. Available online: http://people.lids.mit.edu/yp/homepage/data/itlectures_v5.pdf (accessed on 13 November 2019).
8. Sason, I. On data-processing and majorization inequalities for f-divergences with applications. Entropy 2019, 21, 1022. [Google Scholar] [CrossRef] [Green Version]
9. Guntuboyina, A. Lower bounds for the minimax risk using f-divergences, and applications. IEEE Trans. Inf. Theory 2011, 57, 2386–2399. [Google Scholar] [CrossRef]
10. Reid, M.; Williamson, R. Generalised Pinsker inequalities. arXiv 2009, arXiv:0906.1244. [Google Scholar]
11. Topsøe, F. Some inequalities for information divergence and related measures of discrimination. IEEE Trans. Inf. Theory 2000, 46, 1602–1609. [Google Scholar] [CrossRef] [Green Version]
12. Lee, L. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the Association For Computational Linguistics on Computational Linguistics; Association for Computational Linguistics: Stroudsburg, PA, USA, 1999; pp. 25–32. [Google Scholar]
13. Le Cam, L. Asymptotic Methods in Statistical Decision Theory; Springer Series in Statistics; Springer: New York, NY, USA, 1986. [Google Scholar]
14. Vincze, I. On the concept and measure of information contained in an observation. Contrib. Probab. 1981, 207–214. [Google Scholar] [CrossRef]
15. Györfi, L.; Vajda, I. A class of modified Pearson and Neyman statistics. Stat. Decis. 2001, 19, 239–251. [Google Scholar] [CrossRef]
16. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151. [Google Scholar] [CrossRef] [Green Version]
17. Nielsen, F. On a generalization of the Jensen–Shannon divergence and the Jensen–Shannon centroid. Entropy 2020, 22, 221. [Google Scholar] [CrossRef] [Green Version]
18. Folland, G. Real Analysis: Modern Techniques and Their Applications; John Wiley & Sons: Hoboken, NJ, USA, 1999. [Google Scholar]
19. Gibbs, A.L.; Su, F.E. On choosing and bounding probability metrics. Int. Stat. Rev. 2002, 70, 419–435. [Google Scholar] [CrossRef] [Green Version]
20. Harremoës, P.; Vajda, I. On pairs of f-divergences and their joint range. IEEE Trans. Inf. Theory 2011, 57, 3230–3235. [Google Scholar] [CrossRef]
21. Reiss, R. Approximate Distributions of Order Statistics: With Applications to Nonparametric Statistics; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
22. Nielsen, F.; Nock, R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2013, 21, 10–13. [Google Scholar] [CrossRef] [Green Version]
23. Amari, S. Information Geometry and Its Applications; Springer: Berlin/Heidelberg, Germany, 2016; p. 194. [Google Scholar]
24. Basu, A.; Shioya, H.; Park, C. Statistical Inference: The Minimum Distance Approach; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
25. Vajda, I. On the f-divergence and singularity of probability measures. Period. Math. Hung. 1972, 2, 223–234. [Google Scholar] [CrossRef]
26. Melbourne, J.; Talukdar, S.; Bhaban, S.; Madiman, M.; Salapaka, M.V. The differential entropy of mixtures: new bounds and applications. arXiv 2020, arXiv:1805.11257. [Google Scholar]
27. Erven, T.V.; Harremos, P. Rényi divergence and Kullback-Leibler divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820. [Google Scholar] [CrossRef] [Green Version]
28. Audenaert, K.M.R. Quantum skew divergence. J. Math. Phys. 2014, 55, 112202. [Google Scholar] [CrossRef] [Green Version]
29. Melbourne, J.; Madiman, M.; Salapaka, M.V. Relationships between certain f-divergences. In Proceedings of the 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 24–27 September 2019; pp. 1068–1073. [Google Scholar]
30. Nishiyama, T.; Sason, I. On relations between the relative entropy and χ2-divergence, generalizations and applications. Entropy 2020, 22, 563. [Google Scholar] [CrossRef]
31. Nielsen, F. Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means. Pattern Recognit. Lett. 2014, 42, 25–34. [Google Scholar] [CrossRef] [Green Version]
32. Birgé, L. A new lower bound for multiple hypothesis testing. IEEE Trans. Inf. Theory 2005, 51, 1611–1615. [Google Scholar] [CrossRef]
33. Chen, X.; Guntuboyina, A.; Zhang, Y. On Bayes risk lower bounds. J. Mach. Learn. Res. 2016, 17, 7687–7744. [Google Scholar]
34. Xu, A.; Raginsky, M. Information-theoretic lower bounds on Bayes risk in decentralized estimation. IEEE Trans. Inf. Theory 2016, 63, 1580–1600. [Google Scholar] [CrossRef]
35. Yang, Y.; Barron, A. Information-theoretic determination of minimax rates of convergence. Ann. Statist. 1999, 27, 1564–1599. [Google Scholar]
36. Scarlett, J.; Cevher, V. An introductory guide to Fano’s inequality with applications in statistical estimation. arXiv 2019, arXiv:1901.00555. [Google Scholar]
Table 1. Examples of Strongly Convex Divergences.
| Divergence | $f$ | $κ$ | Domain |
| --- | --- | --- | --- |
| relative entropy (KL) | $t \log t$ | $1/M$ | $(0, M]$ |
| total variation | $\lvert t - 1 \rvert / 2$ | $0$ | $(0, \infty)$ |
| Pearson’s $χ^2$ | $(t - 1)^2$ | $2$ | $(0, \infty)$ |
| squared Hellinger | $2(1 - \sqrt{t})$ | $M^{-3/2}/2$ | $(0, M]$ |
| reverse relative entropy | $-\log t$ | $1/M^2$ | $(0, M]$ |
| Vincze–Le Cam | $\frac{(t-1)^2}{t+1}$ | $\frac{8}{(M+1)^3}$ | $(0, M]$ |
| Jensen–Shannon | $(t+1)\log\frac{2}{t+1} + t\log t$ | $\frac{1}{M(M+1)}$ | $(0, M]$ |
| Neyman’s $χ^2$ | $\frac{1}{t} - 1$ | $2/M^3$ | $(0, M]$ |
| Sason’s $s$ | $(s+t)^2\log(s+t) - (s+1)^2\log(s+1)$ | $2\log(s+M) + 3$ | $[M, \infty)$, $s > e^{-3/2}$ |
| $α$-divergence | $\frac{4\left(1 - t^{\frac{1+α}{2}}\right)}{1-α^2},\ α \neq \pm 1$ | $M^{\frac{α-3}{2}}$ | $[M, \infty)$ for $α > 3$; $(0, M]$ for $α < 3$ |
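The $κ$ column of Table 1 records a lower bound for $f''$ over the stated domain; a few rows can be spot-checked numerically (here with the arbitrary choice $M = 2$, writing out the analytic second derivatives):

```python
import numpy as np

M = 2.0
xs = np.linspace(1e-3, M, 1000)   # grid over (0, M]

# (second derivative f'', stated kappa) for four rows of Table 1
rows = {
    "relative entropy":         (lambda x: 1 / x,             1 / M),
    "reverse relative entropy": (lambda x: 1 / x ** 2,        1 / M ** 2),
    "Jensen-Shannon":           (lambda x: 1 / (x * (x + 1)), 1 / (M * (M + 1))),
    "Neyman chi^2":             (lambda x: 2 / x ** 3,        2 / M ** 3),
}
for name, (fpp, kappa) in rows.items():
    # f'' >= kappa everywhere on the grid
    assert np.all(fpp(xs) >= kappa - 1e-12), name
```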
 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
