Article

Inequalities for Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal Type f–Divergences

Paweł A. Kluza

Department of Applied Mathematics and Computer Science, University of Life Sciences in Lublin, 28 Głęboka Street, 20-612 Lublin, Poland
Entropy 2021, 23(12), 1688; https://doi.org/10.3390/e23121688
Submission received: 21 October 2021 / Revised: 9 December 2021 / Accepted: 14 December 2021 / Published: 16 December 2021
(This article belongs to the Special Issue Distance in Information and Statistical Physics III)

Abstract: In this paper, we introduce new divergences, called the Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal divergences, in relation to convex functions. Theorems giving lower and upper bounds for the two newly introduced divergences are provided. The obtained results imply new inequalities for the corresponding known divergences. Examples showing that these are generalizations of the Rényi, Tsallis, and Kullback–Leibler types of divergences are given in order to illustrate a few applications of the new divergences.

1. Introduction

The Sharma–Mittal entropy was introduced as a new two-parameter measure of information [1]. It has previously been studied in the context of multi-dimensional harmonic oscillator systems [2]. This entropy can also be formulated in terms of exponential families, to which many common statistical distributions, including the Gaussians and the discrete multinomials (that is, normalized histograms), belong. In physical applications it plays a major role in the field of thermo-statistics [3].
The Sharma–Mittal entropy is also applied in the analysis of the results of machine learning methods [4,5]. Additionally, the divergence based on this entropy can serve as a cost function in the context of the so-called Twin Gaussian Processes [6].
It was originally shown in [7] that the Sharma–Mittal entropy generalizes both the Tsallis and Rényi entropies, which appear as its limiting cases. In [8], the authors suggested a physical meaning of the Sharma–Mittal entropy, namely the free energy difference between the equilibrium and the off-equilibrium distribution.
Recently, a manuscript was published showing, in opposition to [8], that the Sharma–Mittal entropy, beyond convenient thermodynamic systems, does not reduce only to the Kullback–Leibler entropy. In [9], Verma and Merigó present the use of the Sharma–Mittal entropy in an intuitionistic fuzzy environment. Additionally, in [5], Koltcov et al. demonstrate that the Sharma–Mittal entropy is a tool for selecting both the number of topics and the values of hyper-parameters while simultaneously controlling for semantic stability, which none of the existing metrics can do.
Other applications of the considered entropy include interesting results in the cosmological setting, such as black hole thermodynamics [10]. Namely, it helps to describe the currently accelerating universe by using the vacuum energy in a suitable manner [11]. In addition, in [12] the relation between anomalous diffusion processes and the Sharma–Mittal entropy was established.
This paper builds on our earlier publications, in which we introduced new types of f-divergences [13,14,15,16].
In this paper, we generalize Sharma–Mittal type divergences in order to obtain new types of divergences and, consequently, inequalities from which new results and generalizations for known divergences can be derived, estimating the lower and upper bounds that determine the level of the uncertainty measure.

2. Sharma–Mittal Type Divergences

Throughout, $\mathbb{R}_+$ and $\mathbb{R}_{++}$ denote the sets of non-negative and positive real numbers, respectively, i.e., $\mathbb{R}_+ = [0, \infty)$ and $\mathbb{R}_{++} = (0, \infty)$.
Let $p = (p_1, \dots, p_n)$ and $q = (q_1, \dots, q_n)$ with $p_i, q_i \ge 0$, $i = 1, \dots, n$. The relative entropy (also called the Kullback–Leibler divergence) is defined by (see [17])
$$H_1(p, q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}. \tag{1}$$
In the above definition, based on continuity arguments, we use the conventions $0 \log(0/q) = 0$, $p \log(p/0) = +\infty$, and $0 \log(0/0) = 0$.
Let $f : \mathbb{R}_+ \to \mathbb{R}$ be a convex function on $\mathbb{R}_+$, and let $p = (p_1, \dots, p_n) \in \mathbb{R}_{++}^n$, $q = (q_1, \dots, q_n) \in \mathbb{R}_+^n$. The Csiszár f-divergence is defined by (see [15])
$$C_f(p, q) = \sum_{i=1}^{n} p_i f\!\left(\frac{q_i}{p_i}\right), \tag{2}$$
with the conventions $0 f\!\left(\frac{0}{0}\right) = 0$ and $0 f\!\left(\frac{c}{0}\right) = c \lim_{t \to \infty} \frac{f(t)}{t}$ for $c > 0$ (see [18,19,20]).
The Tsallis divergence of order $\alpha$ is defined by (see [17])
$$T_\alpha(p, q) = \frac{1}{\alpha - 1} \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} - 1 \right).$$
The Rényi divergence of order $\alpha$ is defined by (see [17,21])
$$H_\alpha(p, q) = \frac{1}{\alpha - 1} \log \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha}.$$
The Sharma–Mittal divergence of order $\alpha$ and degree $\beta$ is defined by (see [4])
$$SM_{\alpha,\beta}(p, q) = \frac{1}{\beta - 1} \left[ \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}} - 1 \right], \tag{3}$$
for all $\alpha > 0$, $\alpha \ne 1$ and $\beta \ne 1$.
Let $g : I \to \mathbb{R}$ be a convex function on an interval $I \subseteq \mathbb{R}$. Let $x = (x_1, \dots, x_n) \in I^n$ and let $p_i \in [0, 1]$, $i = 1, \dots, n$, with $\sum_{i=1}^{n} p_i = 1$. Jensen's inequality is as follows (see [22]):
$$g\!\left( \sum_{i=1}^{n} p_i x_i \right) \le \sum_{i=1}^{n} p_i g(x_i). \tag{4}$$
When the function $k : \mathbb{R}_+ \to \mathbb{R}$ is convex and the function $l : \mathbb{R}_+ \to \mathbb{R}$ is convex and increasing, then the composition $l \circ k : \mathbb{R}_+ \to \mathbb{R}$ is convex. We assume that the probabilities satisfy $p_i \ge 0$ and $q_i > 0$ for $i = 1, \dots, n$.
It is known (see [4]) that:
if $\alpha = \beta \to 1$, then $SM_{\alpha,\beta}(p, q) \to H_1(p, q)$;
if $\beta \to 1$ and $\alpha \in \mathbb{R}$, then $SM_{\alpha,\beta}(p, q) \to H_\alpha(p, q)$;
if $\beta = \alpha$, then $SM_{\alpha,\beta}(p, q) = T_\alpha(p, q)$
(see the numerical sketch below).
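The following minimal numerical sketch (Python with NumPy; the distributions p, q, the parameter values, and all function names are illustrative assumptions rather than anything fixed by the paper) checks the three limiting cases above.

```python
import numpy as np

def kl(p, q):
    # Relative entropy H_1(p, q), Equation (1)
    return np.sum(p * np.log(p / q))

def tsallis(p, q, a):
    # Tsallis divergence of order a
    return (np.sum(p**a * q**(1 - a)) - 1) / (a - 1)

def renyi(p, q, a):
    # Renyi divergence of order a
    return np.log(np.sum(p**a * q**(1 - a))) / (a - 1)

def sharma_mittal(p, q, a, b):
    # Sharma-Mittal divergence of order a and degree b, Equation (3)
    s = np.sum(p**a * q**(1 - a))
    return (s**((1 - b) / (1 - a)) - 1) / (b - 1)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
a, eps = 1.5, 1e-6

print(sharma_mittal(p, q, 1 + eps, 1 + eps), kl(p, q))   # alpha = beta -> 1: KL
print(sharma_mittal(p, q, a, 1 + eps), renyi(p, q, a))   # beta -> 1: Renyi
print(sharma_mittal(p, q, a, a), tsallis(p, q, a))       # beta = alpha: Tsallis
```

Each printed pair should agree closely for the small step eps used here.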
Let $h : \mathbb{R} \to \mathbb{R}$ be a differentiable function. Then the Sharma–Mittal h-divergence is defined as follows:
$$SM_{h,\alpha,\beta}(p, q) = \frac{h\!\left( \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1}, \tag{5}$$
for all $\alpha > 0$, $\alpha \ne 1$ and $\beta \ne 1$.
If we assume that $h = \mathrm{id}$, then (5) becomes the Sharma–Mittal divergence.
When $h(t) = \log(te)$ for all $t > 0$, then (5) becomes the Rényi divergence of order $\alpha$. Indeed, substituting $t = \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}}$, we have
$$h(t) = \log\!\left( \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}} e \right) = \log \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}} + 1.$$
Hence, from (5),
$$SM_{h,\alpha,\beta}(p, q) = \frac{\log \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}}}{\beta - 1} = \frac{1}{\alpha - 1} \log \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} = H_\alpha(p, q).$$
Let $\Psi : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}$ be a function differentiable with respect to $\beta$ and let
$$\Psi(\alpha, \beta) = h\!\left( \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}} \right).$$
We assume that $h'(1) = 1$ and $\Psi(\alpha, 1) = 1$. Then,
$$\lim_{\beta \to 1} \frac{\Psi(\alpha, \beta) - \Psi(\alpha, 1)}{\beta - 1} = \frac{\partial \Psi}{\partial \beta}(\alpha, 1) = h'\!\left( \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}} \right) \Bigg|_{\beta=1} \cdot \frac{\partial}{\partial \beta}\Bigg|_{\beta=1} \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{\frac{1-\beta}{1-\alpha}}$$
$$= h'(1) \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right)^{0} \log \left( \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha} \right) \cdot \frac{-1}{1-\alpha} = \frac{1}{\alpha - 1} \log \sum_{i=1}^{n} p_i^{\alpha} q_i^{1-\alpha}.$$
Hence, as $\beta \to 1$, the Sharma–Mittal h-divergence tends to the Rényi divergence of order $\alpha$.
Remark 1.
If, additionally, $\alpha$ tends to 1, then, based on the proof of Equation (11) from [16], the Sharma–Mittal h-divergence tends to the relative entropy (the Kullback–Leibler divergence).
Now we define a new generalized $(h, \phi)$ Sharma–Mittal divergence as follows:
$$SM_{h,\phi,\alpha,\beta}(p, q) = \frac{h\!\left( \left( \sum_{i=1}^{n} q_i\, \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1}, \tag{6}$$
where $\phi : (0, +\infty) \times \mathbb{R}_+ \to \mathbb{R}_+$ is an increasing, non-negative and differentiable function, and $\beta > 1$.
We assume that $F = \{ f_\alpha : (0, \infty) \to \mathbb{R} : \alpha \in \mathbb{R} \}$ is a given family of functions such that $\sum_{i=1}^{n} q_i f_\alpha\big|_{\alpha=1}\!\left( \frac{p_i}{q_i} \right) = 1$ for $\alpha = 1$, which are increasing and non-negative for $\alpha > 1$, and such that for every $t \in (0, +\infty)$ the function $\alpha \mapsto f_\alpha(t)$ is differentiable.
According to [16], if we substitute the function $f_\alpha\!\left( \frac{p_i}{q_i} \right)$ from the family $F$ for $\phi\!\left( \frac{p_i}{q_i}, \alpha \right)$, then
$$\lim_{\beta \to 1} SM_{h,\phi,\alpha,\beta}(p, q) = R_{h,f_\alpha}(p, q).$$
Indeed, assuming $h(1) = 1$ and $h'(1) = 1$, we have
$$\lim_{\beta \to 1} SM_{h,\phi,\alpha,\beta}(p, q) = \lim_{\beta \to 1} \frac{h\!\left( \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1}$$
$$= h'\!\left( \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{0} \right) \cdot \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{0} \log \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right) \cdot \frac{-1}{1-\alpha}$$
$$= h'(1) \cdot \frac{1}{\alpha - 1} \log \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) = \frac{1}{\alpha - 1} \log \sum_{i=1}^{n} q_i f_\alpha\!\left( \frac{p_i}{q_i} \right) = R_{h,f_\alpha}(p, q). \tag{7}$$
Remark 2.
If in (6) $\beta \to 1$ and $\phi\!\left( \frac{p_i}{q_i}, \alpha \right) = f_\alpha\!\left( \frac{p_i}{q_i} \right)$, then the generalized $(h, \phi)$–Sharma–Mittal divergence tends to the generalized $(h, F)$–Rényi divergence.
The function $\phi$ generalizes the function $f_\alpha\!\left( \frac{p_i}{q_i} \right)$ used, for example, in the Csiszár f-divergence. The condition $\beta \to 1$ means that the limit of the generalized Sharma–Mittal divergence equals the generalized $(h, F)$–Rényi divergence. Hence, we obtain implications for generalized forms of entropies.
Remark 3.
Additionally, when in (6) $\alpha \to 1$ and $\phi\!\left( \frac{p_i}{q_i}, \alpha \right) = \left( \frac{p_i}{q_i} \right)^{\alpha}$, the generalized $(h, \phi)$–Sharma–Mittal divergence tends to the Kullback–Leibler divergence, because from Remark 2 we have
$$\lim_{(\alpha,\beta) \to (1,1)} SM_{h,\phi,\alpha,\beta}(p, q) = \lim_{\alpha \to 1} \frac{\log \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) - \log 1}{\alpha - 1} = \frac{\partial}{\partial \alpha}\Bigg|_{\alpha=1} \log \sum_{i=1}^{n} q_i \left( \frac{p_i}{q_i} \right)^{\alpha} = \frac{1}{\sum_{i=1}^{n} p_i} \sum_{i=1}^{n} q_i \frac{p_i}{q_i} \log \frac{p_i}{q_i} = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i} = H_1(p, q).$$
Remark 4.
In (6), when the parameter $\beta = \alpha$, the function $h = \mathrm{id}$ and $\phi\!\left( \frac{p_i}{q_i}, \alpha \right) = \left( \frac{p_i}{q_i} \right)^{\alpha}$, the generalized $(h, \phi)$–Sharma–Mittal divergence reduces to the Tsallis divergence of order $\alpha$.
This work is more theoretical than practical; accordingly, the implications are formulated in mathematical terms, i.e., we construct a general model from which the known special cases follow.
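As a sketch of definition (6), the divergence can be implemented with h and ϕ passed as callables; the concrete h, ϕ, p, q below are illustrative assumptions, and the β → 1 limit is approximated by a value slightly above 1.

```python
import numpy as np

def gen_sm(p, q, a, b, h, phi):
    # Generalized (h, phi) Sharma-Mittal divergence, Equation (6)
    s = np.sum(q * phi(p / q, a))               # sum_i q_i * phi(p_i / q_i, alpha)
    return (h(s ** ((1 - b) / (1 - a))) - 1) / (b - 1)

h_id = lambda t: t                              # h = id recovers the SM divergence (3)
phi_pow = lambda t, a: t ** a                   # phi(t, alpha) = t^alpha

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
a = 1.5

renyi = np.log(np.sum(p**a * q**(1 - a))) / (a - 1)
# beta -> 1 recovers the Renyi-type divergence, cf. Remark 2:
print(gen_sm(p, q, a, 1 + 1e-6, h_id, phi_pow), renyi)
```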

3. Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal Divergences

The Jensen–Shannon divergence (Jensen–Shannon entropy) is defined as follows (see [17]):
$$\mathrm{Jen}(p, q) = \frac{1}{2} H_1\!\left( p, \frac{p+q}{2} \right) + \frac{1}{2} H_1\!\left( q, \frac{p+q}{2} \right).$$
The Jeffreys divergence (Jeffreys entropy) is defined as follows (see [17]):
$$\mathrm{Jef}(p, q) = H_1(p, q) + H_1(q, p).$$
We introduce a new generalized $(h, \phi)$ Jensen–Sharma–Mittal divergence defined by
$$\mathrm{Jen}SM_{\alpha,\beta}^{h,\phi}(p, q) = \frac{1}{2} SM_{h,\phi,\alpha,\beta}\!\left( p, \frac{p+q}{2} \right) + \frac{1}{2} SM_{h,\phi,\alpha,\beta}\!\left( q, \frac{p+q}{2} \right), \tag{8}$$
with the same assumptions as before.
Similarly, we introduce a new generalized $(h, \phi)$ Jeffreys–Sharma–Mittal divergence as follows:
$$\mathrm{Jef}SM_{\alpha,\beta}^{h,\phi}(p, q) = SM_{h,\phi,\alpha,\beta}(p, q) + SM_{h,\phi,\alpha,\beta}(q, p). \tag{9}$$
Taking into account the inequality from [17]
$$0 \le \mathrm{Jen}(p, q) \le \frac{1}{2} \mathrm{Jef}(p, q),$$
describing the relation between the Jensen–Shannon and Jeffreys divergences, we can formulate the following:
$$0 \le \mathrm{Jen}SM_{\alpha,\beta}^{h,\phi}(p, q) \le \frac{1}{2} \mathrm{Jef}SM_{\alpha,\beta}^{h,\phi}(p, q). \tag{10}$$
We define the Jensen–Sharma–Mittal h-divergence by setting $\phi\!\left( \frac{p_i}{q_i}, \alpha \right) = \left( \frac{p_i}{q_i} \right)^{\alpha}$ in (8). Then it takes the form
$$\mathrm{Jen}SM_{\alpha,\beta}^{h}(p, q) = \frac{1}{2} SM_{h,\alpha,\beta}\!\left( p, \frac{p+q}{2} \right) + \frac{1}{2} SM_{h,\alpha,\beta}\!\left( q, \frac{p+q}{2} \right).$$
In the same way, we define the Jeffreys–Sharma–Mittal h-divergence:
$$\mathrm{Jef}SM_{\alpha,\beta}^{h}(p, q) = SM_{h,\alpha,\beta}(p, q) + SM_{h,\alpha,\beta}(q, p).$$
Additionally, if $h(t) = t$, then we obtain the Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal divergences of order $\alpha$ and degree $\beta$, respectively:
$$\mathrm{Jen}SM_{\alpha,\beta}(p, q) = \frac{1}{2} SM_{\alpha,\beta}\!\left( p, \frac{p+q}{2} \right) + \frac{1}{2} SM_{\alpha,\beta}\!\left( q, \frac{p+q}{2} \right),$$
$$\mathrm{Jef}SM_{\alpha,\beta}(p, q) = SM_{\alpha,\beta}(p, q) + SM_{\alpha,\beta}(q, p).$$
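A sketch of definitions (8) and (9) together with a numerical check of the sandwich inequality (10); h = id, ϕ(t, α) = t^α and the distributions are illustrative assumptions.

```python
import numpy as np

def gen_sm(p, q, a, b, h, phi):
    # Generalized (h, phi) Sharma-Mittal divergence, Equation (6)
    return (h(np.sum(q * phi(p / q, a)) ** ((1 - b) / (1 - a))) - 1) / (b - 1)

def jen_sm(p, q, a, b, h, phi):
    # Generalized (h, phi) Jensen-Sharma-Mittal divergence, Equation (8)
    m = (p + q) / 2
    return 0.5 * gen_sm(p, m, a, b, h, phi) + 0.5 * gen_sm(q, m, a, b, h, phi)

def jef_sm(p, q, a, b, h, phi):
    # Generalized (h, phi) Jeffreys-Sharma-Mittal divergence, Equation (9)
    return gen_sm(p, q, a, b, h, phi) + gen_sm(q, p, a, b, h, phi)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
h = lambda t: t
phi = lambda t, a: t ** a
a, b = 1.5, 1.2                                 # 1 < beta <= alpha

jen = jen_sm(p, q, a, b, h, phi)
jef = jef_sm(p, q, a, b, h, phi)
print(0.0 <= jen <= 0.5 * jef)                  # inequality (10); expected True
```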
When in (8) and (9) $\beta \to 1$ and we substitute $\phi\!\left( \frac{p_i}{q_i}, \alpha \right) = f_\alpha\!\left( \frac{p_i}{q_i} \right)$, we obtain the generalized $(h, F)$ Jensen–Rényi and Jeffreys–Rényi divergences defined in [16], respectively:
$$\mathrm{Jen}SM_{\alpha,1}^{h,f_\alpha}(p, q) = \mathrm{Jen}R_{h,f_\alpha}(p, q) = \frac{1}{2} R_{h,f_\alpha}\!\left( p, \frac{p+q}{2} \right) + \frac{1}{2} R_{h,f_\alpha}\!\left( q, \frac{p+q}{2} \right),$$
$$\mathrm{Jef}SM_{\alpha,1}^{h,f_\alpha}(p, q) = \mathrm{Jef}R_{h,f_\alpha}(p, q) = R_{h,f_\alpha}(p, q) + R_{h,f_\alpha}(q, p).$$
The following theorem generalizes and refines inequalities for some known divergences: it provides lower and upper bounds for the generalized $(h, \phi)$ Jeffreys–Sharma–Mittal divergence, allowing a more accurate estimation of its uncertainty measure.
Theorem 1.
Let $p = (p_1, \dots, p_n)$ and $q = (q_1, \dots, q_n)$ be two discrete probability distributions with $p_i > 0$, $q_i > 0$, $\frac{q_i}{p_i} \in I_0$, $\frac{p_i}{q_i} \in I_0$, $i = 1, \dots, n$, where $I_0 \subseteq \mathbb{R}$ is an interval such that $1 \in I_0$. Let $\phi : I_0 \times \mathbb{R}_+ \to \mathbb{R}_+$ be an increasing, non-negative and differentiable function for which $\sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \ge 1$ and $\sum_{i=1}^{n} p_i \phi\!\left( \frac{q_i}{p_i}, \alpha \right) \ge 1$, where $1 < \beta \le \alpha$, $\alpha \in \mathbb{R}_+ \setminus \{1\}$, and let $h : I_0 \to \mathbb{R}$ be a convex and increasing function on $I_0$ with $h(1) = 1$ and $h'(1) = 1$.
Then, the following inequalities are valid:
$$\frac{1}{\alpha - 1} \log \prod_{i=1}^{n} \phi\!\left( \frac{p_i}{q_i}, \alpha \right)^{q_i} \phi\!\left( \frac{q_i}{p_i}, \alpha \right)^{p_i} \le \mathrm{Jef}SM_{\alpha,\beta}^{h,\phi}(p, q) \le \frac{\mathrm{Jef}C_{h \circ \phi}(p, q) - 2}{\beta - 1}. \tag{16}$$
Proof. 
Taking into account the assumptions, i.e., that each of the above sums is at least 1 and that $0 < \frac{1-\beta}{1-\alpha} \le 1$ for $1 < \beta \le \alpha$, we can formulate the following inequality:
$$\left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \le \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right). \tag{17}$$
The function $h$ is increasing and convex; therefore, from (4) and (17) we obtain the inequalities
$$h\!\left( \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) \le h\!\left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right) \le \sum_{i=1}^{n} q_i (h \circ \phi)\!\left( \frac{p_i}{q_i}, \alpha \right). \tag{18}$$
In the same way, we obtain the inequalities
$$h\!\left( \left( \sum_{i=1}^{n} p_i \phi\!\left( \frac{q_i}{p_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) \le h\!\left( \sum_{i=1}^{n} p_i \phi\!\left( \frac{q_i}{p_i}, \alpha \right) \right) \le \sum_{i=1}^{n} p_i (h \circ \phi)\!\left( \frac{q_i}{p_i}, \alpha \right). \tag{19}$$
From (9) we have
$$\mathrm{Jef}SM_{\alpha,\beta}^{h,\phi}(p, q) = SM_{h,\phi,\alpha,\beta}(p, q) + SM_{h,\phi,\alpha,\beta}(q, p) = \frac{h\!\left( \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1} + \frac{h\!\left( \left( \sum_{i=1}^{n} p_i \phi\!\left( \frac{q_i}{p_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1}.$$
Taking into account (2), (18), (19) and the definition of the Jeffreys divergence, it follows that
$$\mathrm{Jef}SM_{\alpha,\beta}^{h,\phi}(p, q) \le \frac{\sum_{i=1}^{n} q_i (h \circ \phi)\!\left( \frac{p_i}{q_i}, \alpha \right) - 1}{\beta - 1} + \frac{\sum_{i=1}^{n} p_i (h \circ \phi)\!\left( \frac{q_i}{p_i}, \alpha \right) - 1}{\beta - 1} = \frac{\sum_{i=1}^{n} q_i (h \circ \phi)\!\left( \frac{p_i}{q_i}, \alpha \right) + \sum_{i=1}^{n} p_i (h \circ \phi)\!\left( \frac{q_i}{p_i}, \alpha \right) - 2}{\beta - 1} = \frac{\mathrm{Jef}C_{h \circ \phi}(p, q) - 2}{\beta - 1}. \tag{20}$$
The above inequality is the upper bound for the generalized $(h, \phi)$ Jeffreys–Sharma–Mittal divergence.
By using the convexity of the function $h$ with $h(1) = 1$, the following inequality is valid for $\beta > 1$:
$$\frac{h\!\left( \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1} \ge \frac{\partial}{\partial \beta}\Bigg|_{\beta=1} h\!\left( \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right). \tag{21}$$
From (7), the above derivative is equal to $\frac{1}{\alpha - 1} \log \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right)$.
The function $f(t) = \log t$ is concave and increasing. Hence,
$$\frac{1}{\alpha - 1} \log \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \ge \frac{1}{\alpha - 1} \sum_{i=1}^{n} q_i \log \phi\!\left( \frac{p_i}{q_i}, \alpha \right). \tag{22}$$
Hence, from (21) and (22) we have the inequality
$$\frac{h\!\left( \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1} \ge \frac{1}{\alpha - 1} \sum_{i=1}^{n} q_i \log \phi\!\left( \frac{p_i}{q_i}, \alpha \right). \tag{23}$$
Similarly, we obtain the second inequality:
$$\frac{h\!\left( \left( \sum_{i=1}^{n} p_i \phi\!\left( \frac{q_i}{p_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1} \ge \frac{1}{\alpha - 1} \sum_{i=1}^{n} p_i \log \phi\!\left( \frac{q_i}{p_i}, \alpha \right). \tag{24}$$
From (6), (23) and (24) it follows that
$$SM_{h,\phi,\alpha,\beta}(p, q) \ge \frac{1}{\alpha - 1} \sum_{i=1}^{n} q_i \log \phi\!\left( \frac{p_i}{q_i}, \alpha \right), \qquad SM_{h,\phi,\alpha,\beta}(q, p) \ge \frac{1}{\alpha - 1} \sum_{i=1}^{n} p_i \log \phi\!\left( \frac{q_i}{p_i}, \alpha \right).$$
Then, by using definition (9), we have
$$\mathrm{Jef}SM_{\alpha,\beta}^{h,\phi}(p, q) \ge \frac{1}{\alpha - 1} \left( \log \prod_{i=1}^{n} \phi\!\left( \frac{p_i}{q_i}, \alpha \right)^{q_i} + \log \prod_{i=1}^{n} \phi\!\left( \frac{q_i}{p_i}, \alpha \right)^{p_i} \right) = \frac{1}{\alpha - 1} \log \prod_{i=1}^{n} \phi\!\left( \frac{p_i}{q_i}, \alpha \right)^{q_i} \phi\!\left( \frac{q_i}{p_i}, \alpha \right)^{p_i}. \tag{25}$$
This result is the lower bound of the generalized $(h, \phi)$ Jeffreys–Sharma–Mittal divergence.
Combining (20) and (25), we obtain the expected inequalities (16). □
Corollary 1.
When we substitute $\phi\!\left( \frac{p_i}{q_i}, \alpha \right) = \left( \frac{p_i}{q_i} \right)^{\alpha}$, then from (16) we obtain the inequalities for the Jeffreys–Sharma–Mittal h-divergence:
$$\frac{1}{\alpha - 1} \log \prod_{i=1}^{n} \left( \frac{p_i}{q_i} \right)^{\alpha (q_i - p_i)} \le \mathrm{Jef}SM_{\alpha,\beta}^{h}(p, q) \le \frac{\sum_{i=1}^{n} q_i\, h\!\left( \left( \frac{p_i}{q_i} \right)^{\alpha} \right) + \sum_{i=1}^{n} p_i\, h\!\left( \left( \frac{q_i}{p_i} \right)^{\alpha} \right) - 2}{\beta - 1}.$$
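A numerical spot-check (sketch) of the Corollary 1 bounds; h(t) = t² is one admissible illustrative choice (convex, increasing on $\mathbb{R}_+$, with h(1) = 1), while p, q and 1 < β ≤ α are arbitrary illustrative values.

```python
import numpy as np

def sm_h(p, q, a, b, h):
    # Sharma-Mittal h-divergence, Equation (5), with phi(t, alpha) = t^alpha
    return (h(np.sum(p**a * q**(1 - a)) ** ((1 - b) / (1 - a))) - 1) / (b - 1)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
a, b = 1.5, 1.2                        # 1 < beta <= alpha
h = lambda t: t**2                     # convex, increasing, h(1) = 1

jef = sm_h(p, q, a, b, h) + sm_h(q, p, a, b, h)
lower = np.sum(a * (q - p) * np.log(p / q)) / (a - 1)
upper = (np.sum(q * h((p / q)**a)) + np.sum(p * h((q / p)**a)) - 2) / (b - 1)
print(lower <= jef <= upper)           # expected True
```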
We now formulate the theorem that makes the estimation of the generalized $(h, \phi)$ Jensen–Sharma–Mittal divergence possible.
Theorem 2.
Let $p = (p_1, \dots, p_n)$ and $q = (q_1, \dots, q_n)$ be two discrete probability distributions with $p_i > 0$, $q_i > 0$, $\frac{q_i}{p_i} \in I_0$, $\frac{p_i}{q_i} \in I_0$, $i = 1, \dots, n$, where $I_0 \subseteq \mathbb{R}$ is an interval such that $1 \in I_0$. Let $\phi : I_0 \times \mathbb{R}_+ \to \mathbb{R}_+$ be an increasing, non-negative and differentiable function for which $\sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \ge 1$ and $\sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \ge 1$, where $1 < \beta \le \alpha$, $\alpha \in \mathbb{R}_+ \setminus \{1\}$, and let $h : I_0 \to \mathbb{R}$ be a convex and increasing function on $I_0$ with $h(1) = 1$ and $h'(1) = 1$.
Then, the following inequalities are valid:
$$\frac{1}{2(\alpha - 1)} \log \prod_{i=1}^{n} \left( \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \right)^{\frac{p_i + q_i}{2}} \le \mathrm{Jen}SM_{\alpha,\beta}^{h,\phi}(p, q) \le \frac{\sum_{i=1}^{n} (p_i + q_i) \left[ (h \circ \phi)\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) + (h \circ \phi)\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \right] - 4}{4(\beta - 1)}. \tag{26}$$
Proof. 
Let us consider the function
$$\frac{h\!\left( \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1}. \tag{27}$$
Using the assumptions that the function $h$ is differentiable and convex with $h(1) = 1$, we can formulate the following inequality:
$$\frac{h\!\left( \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1} \ge \frac{\partial}{\partial \beta}\Bigg|_{\beta=1} h\!\left( \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right). \tag{28}$$
The derivative in (28) is equal to
$$\log \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \right) \cdot \frac{-1}{1-\alpha} = \frac{1}{\alpha - 1} \log \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right).$$
Taking into account the concavity of the function $\log$, we have
$$\frac{1}{\alpha - 1} \log \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \ge \frac{1}{\alpha - 1} \sum_{i=1}^{n} \frac{p_i + q_i}{2} \log \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right).$$
Then, we obtain that (27) is greater than or equal to
$$\frac{1}{\alpha - 1} \log \prod_{i=1}^{n} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right)^{\frac{p_i + q_i}{2}}. \tag{29}$$
We proceed in the same way with the function
$$\frac{h\!\left( \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1}. \tag{30}$$
Hence, we have that (30) is greater than or equal to
$$\frac{1}{\alpha - 1} \log \prod_{i=1}^{n} \phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right)^{\frac{p_i + q_i}{2}}. \tag{31}$$
Then, combining (27), (29)–(31) and using definition (8), the following inequality holds:
$$\mathrm{Jen}SM_{\alpha,\beta}^{h,\phi}(p, q) \ge \frac{1}{2(\alpha - 1)} \log \prod_{i=1}^{n} \left( \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \right)^{\frac{p_i + q_i}{2}}, \tag{32}$$
and this is the lower bound of the generalized $(h, \phi)$ Jensen–Sharma–Mittal divergence.
When we consider the function
$$h\!\left( \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) \tag{33}$$
with $1 < \beta \le \alpha$, then for the convex and increasing function $h$ we have from (4) that (33) is not greater than
$$h\!\left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \right) \le \sum_{i=1}^{n} \frac{p_i + q_i}{2} (h \circ \phi)\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right). \tag{34}$$
In a similar way, we conclude the analogous inequality for the function
$$h\!\left( \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right), \tag{35}$$
and we have
$$h\!\left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \right) \le \sum_{i=1}^{n} \frac{p_i + q_i}{2} (h \circ \phi)\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right). \tag{36}$$
Then, combining (33)–(36) and definition (8) with the proper transformations, we obtain the inequality
$$\mathrm{Jen}SM_{\alpha,\beta}^{h,\phi}(p, q) \le \frac{\sum_{i=1}^{n} (p_i + q_i) \left[ (h \circ \phi)\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) + (h \circ \phi)\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \right] - 4}{4(\beta - 1)}, \tag{37}$$
which is the upper bound of the generalized $(h, \phi)$ Jensen–Sharma–Mittal divergence.
Taking into account (32) and (37), we obtain (26). □
Corollary 2.
When we substitute $\phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) = \left( \frac{2p_i}{p_i + q_i} \right)^{\alpha}$ and $\phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) = \left( \frac{2q_i}{p_i + q_i} \right)^{\alpha}$, then from (26) we obtain the inequalities for the Jensen–Sharma–Mittal h-divergence:
$$\frac{1}{2(\alpha - 1)} \log \prod_{i=1}^{n} \left( \frac{4 p_i q_i}{(p_i + q_i)^2} \right)^{\frac{\alpha (p_i + q_i)}{2}} \le \mathrm{Jen}SM_{\alpha,\beta}^{h}(p, q) \le \frac{\sum_{i=1}^{n} (p_i + q_i) \left[ h\!\left( \left( \frac{2p_i}{p_i + q_i} \right)^{\alpha} \right) + h\!\left( \left( \frac{2q_i}{p_i + q_i} \right)^{\alpha} \right) \right] - 4}{4(\beta - 1)}.$$
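An analogous spot-check (sketch, same illustrative h, p, q, α, β as above) of the Corollary 2 bounds for the Jensen–Sharma–Mittal h-divergence.

```python
import numpy as np

def sm_h(p, q, a, b, h):
    # Sharma-Mittal h-divergence, Equation (5)
    return (h(np.sum(p**a * q**(1 - a)) ** ((1 - b) / (1 - a))) - 1) / (b - 1)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
a, b = 1.5, 1.2
h = lambda t: t**2
m = (p + q) / 2

jen = 0.5 * sm_h(p, m, a, b, h) + 0.5 * sm_h(q, m, a, b, h)
lower = np.sum(a * (p + q) / 2 * np.log(4 * p * q / (p + q)**2)) / (2 * (a - 1))
upper = (np.sum((p + q) * (h((2 * p / (p + q))**a)
                           + h((2 * q / (p + q))**a))) - 4) / (4 * (b - 1))
print(lower <= jen <= upper)           # expected True
```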
Remark 5.
It can be seen that the lower bounds for both the Jeffreys (25) and Jensen (32) Sharma–Mittal $(h, \phi)$ divergences are independent of the function $h$.
Remark 6.
Taking into account inequality (10), we obtain an alternative upper bound for the Jensen–Sharma–Mittal and an alternative lower bound for the Jeffreys–Sharma–Mittal generalized $(h, \phi)$ divergences, respectively:
$$\mathrm{Jen}SM_{\alpha,\beta}^{h,\phi}(p, q) \le \frac{h\!\left( \left( \sum_{i=1}^{n} q_i \phi\!\left( \frac{p_i}{q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) + h\!\left( \left( \sum_{i=1}^{n} p_i \phi\!\left( \frac{q_i}{p_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 2}{2(\beta - 1)},$$
$$\mathrm{Jef}SM_{\alpha,\beta}^{h,\phi}(p, q) \ge \frac{h\!\left( \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2p_i}{p_i + q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1} + \frac{h\!\left( \left( \sum_{i=1}^{n} \frac{p_i + q_i}{2} \phi\!\left( \frac{2q_i}{p_i + q_i}, \alpha \right) \right)^{\frac{1-\beta}{1-\alpha}} \right) - 1}{\beta - 1}.$$

4. Applications

In this section, we show how the obtained results work in concrete cases.

4.1. Bounds for Sharma–Mittal Divergences

For the functions $h(t) = t$, $\phi(t, \alpha) = t^{\alpha}$, based on Theorems 1 and 2 we obtain the lower and upper bounds for the Jeffreys–Sharma–Mittal and Jensen–Sharma–Mittal divergences, respectively, as follows:
$$\frac{1}{\alpha - 1} \log \prod_{i=1}^{n} \left( \frac{p_i}{q_i} \right)^{\alpha (q_i - p_i)} \le \mathrm{Jef}SM_{\alpha,\beta}(p, q) \le \frac{\sum_{i=1}^{n} (p_i q_i)^{\alpha} \left( q_i^{1-2\alpha} + p_i^{1-2\alpha} \right) - 2}{\beta - 1}, \tag{38}$$
$$\frac{1}{2(\alpha - 1)} \log \prod_{i=1}^{n} \left( \frac{2 \sqrt{p_i q_i}}{p_i + q_i} \right)^{\alpha (p_i + q_i)} \le \mathrm{Jen}SM_{\alpha,\beta}(p, q) \le \frac{\sum_{i=1}^{n} \left( \frac{p_i + q_i}{2} \right)^{1-\alpha} \left( p_i^{\alpha} + q_i^{\alpha} \right) - 2}{2(\beta - 1)}. \tag{39}$$
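A sketch evaluating the explicit bounds (38) and (39) directly, again for illustrative p, q and 1 < β ≤ α.

```python
import numpy as np

def sm(p, q, a, b):
    # Sharma-Mittal divergence, Equation (3)
    return (np.sum(p**a * q**(1 - a)) ** ((1 - b) / (1 - a)) - 1) / (b - 1)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
a, b = 1.5, 1.2
m = (p + q) / 2

jef = sm(p, q, a, b) + sm(q, p, a, b)
jen = 0.5 * sm(p, m, a, b) + 0.5 * sm(q, m, a, b)

low_jef = np.sum(a * (q - p) * np.log(p / q)) / (a - 1)
up_jef = (np.sum((p * q)**a * (q**(1 - 2 * a) + p**(1 - 2 * a))) - 2) / (b - 1)
low_jen = np.sum(a * (p + q) * np.log(2 * np.sqrt(p * q) / (p + q))) / (2 * (a - 1))
up_jen = (np.sum(((p + q) / 2)**(1 - a) * (p**a + q**a)) - 2) / (2 * (b - 1))

print(low_jef <= jef <= up_jef)        # bounds (38); expected True
print(low_jen <= jen <= up_jen)        # bounds (39); expected True
```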
Remark 7.
The above lower bounds in (38) and (39) are the same for the Rényi-type divergences because they are independent of the parameter $\beta$, which in that case approaches 1.
Remark 8.
Substituting different values of the parameters $\alpha$, $\beta$ such that $1 < \beta \le \alpha$, and taking into account the assumptions of Theorems 1 and 2 about the functions $h$ and $\phi$, we can formulate new types of divergences and related inequalities based on the generalized $(h, \phi)$ Sharma–Mittal divergence.

4.2. Bounds for Tsallis Divergences

When we make the same assumptions as for the Sharma–Mittal divergences, with the additional condition $\beta = \alpha$, we obtain the bounds for the Tsallis-type divergences as follows:
$$\frac{1}{\alpha - 1} \log \prod_{i=1}^{n} \left( \frac{p_i}{q_i} \right)^{\alpha (q_i - p_i)} \le \mathrm{Jef}T_{\alpha}(p, q) \le \frac{\sum_{i=1}^{n} (p_i q_i)^{\alpha} \left( q_i^{1-2\alpha} + p_i^{1-2\alpha} \right) - 2}{\alpha - 1}, \tag{40}$$
$$\frac{1}{2(\alpha - 1)} \log \prod_{i=1}^{n} \left( \frac{2 \sqrt{p_i q_i}}{p_i + q_i} \right)^{\alpha (p_i + q_i)} \le \mathrm{Jen}T_{\alpha}(p, q) \le \frac{\sum_{i=1}^{n} \left( \frac{p_i + q_i}{2} \right)^{1-\alpha} \left( p_i^{\alpha} + q_i^{\alpha} \right) - 2}{2(\alpha - 1)}. \tag{41}$$

4.3. Bounds for Kullback–Leibler Divergences

In the same situation as in the case of the Tsallis divergences, that is, $h(t) = t$, $\phi(t, \alpha) = t^{\alpha}$, $\alpha = \beta$, and with both $\alpha$ and $\beta$ additionally approaching 1, we obtain upper bounds for the Jeffreys and Jensen–Shannon divergences, respectively:
$$\mathrm{Jef}S(p, q) \le \sum_{i=1}^{n} (p_i - q_i) \log \frac{p_i}{q_i}, \tag{42}$$
$$\mathrm{Jen}S(p, q) \le \frac{1}{2} \sum_{i=1}^{n} \left( p_i \log p_i + q_i \log q_i - (p_i + q_i) \log \frac{p_i + q_i}{2} \right). \tag{43}$$
Since $\frac{2p_i}{p_i + q_i} \le 2$ and $\frac{2q_i}{p_i + q_i} \le 2$, the last inequality yields $\mathrm{Jen}S(p, q) \le \log 2$.
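A closing sketch (same illustrative p, q) of the limiting bounds (42) and (43); since (42) in fact holds with equality, the comparison uses a small numerical tolerance.

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
m = (p + q) / 2
tol = 1e-12

jef_s = np.sum((p - q) * np.log(p / q))                        # Jeffreys divergence
jen_s = 0.5 * np.sum(p * np.log(p / m) + q * np.log(q / m))    # Jensen-Shannon divergence

rhs_42 = np.sum((p - q) * np.log(p / q))
rhs_43 = 0.5 * np.sum(p * np.log(p) + q * np.log(q) - (p + q) * np.log(m))

print(jef_s <= rhs_42 + tol, jen_s <= rhs_43 + tol)   # bounds (42), (43); expected True True
print(jen_s <= np.log(2))                             # classical bound; expected True
```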

5. Summary

In this paper, new types of divergences have been defined, which are generalizations of others known and used so far in information theory.
The manuscript deals mainly with issues in the field of pure mathematics; therefore, the standard axioms of entropy used in thermodynamics could, in this case, be extended by other assumptions and properties.
These divergences have been introduced with a view to the new physical interpretations that could be generated.
The generalized Sharma–Mittal divergence and, consequently, the Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal divergences have been defined in order to obtain better estimates for known entropies, which allows a more accurate determination of the dispersion measure of different distributions.
The derived inequalities provide both upper and lower bounds for the considered f-divergences. As a consequence, we obtain specific estimates for some new order measures. Hence, they offer much wider interpretation possibilities when comparing probability distributions in the sense of mutual distances in different spaces.
In the era of advancing quantum mechanics, scientists are striving to build quantum computers with very high computing power. The obtained results, despite their mathematical and analytical complexity, can quickly generate specific numerical intervals estimating the newly introduced entropies. Therefore, results such as those in this paper will be useful in developing issues in information theory.
This work belongs to the area of pure mathematics; it is therefore more theoretical than practical and makes it possible to recover the existing known entropies by means of the newly defined generalizations. These generalizations can be used to interpret various physical phenomena. The aim of this manuscript was to provide new theoretical tools for physicists who, with their knowledge and experience, will be able to look for new applications.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author wishes to thank anonymous referees for their helpful suggestions improving the readability of the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Sharma, B.D.; Mittal, D.P. New nonadditive measures of inaccuracy. J. Math. Sci. 1975, 10, 122–133.
  2. Üzengi Aktürk, O.; Aktürk, E.; Tomak, M. Can Sobolev inequality be written for Sharma–Mittal entropy? Int. J. Theor. Phys. 2008, 47, 3310–3320.
  3. Naudts, J. Generalised Thermostatistics; Springer: Berlin, Germany, 2011.
  4. Elhoseiny, M.; Elgammal, A. Generalized Twin Gaussian Processes using Sharma–Mittal Divergence. arXiv 2015, arXiv:1409.7480v5.
  5. Koltcov, S.; Ignatenko, V.; Koltsova, O. Estimating Topic Modeling Performance with Sharma–Mittal Entropy. Entropy 2019, 21, 660.
  6. Bo, L.; Sminchisescu, C. Twin Gaussian processes for structured prediction. Int. J. Comput. Vis. 2010, 87, 28–52.
  7. Masi, M. A step beyond Tsallis and Rényi entropies. Phys. Lett. A 2005, 338, 217–224.
  8. Aktürk, E.; Bagci, G.; Sever, R. Is Sharma–Mittal entropy really a step beyond Tsallis and Rényi entropies? arXiv 2007, arXiv:cond-mat/0703277.
  9. Verma, R.; Merigó, J.M. On Sharma–Mittal's Entropy under Intuitionistic Fuzzy Environment. Cybern. Syst. 2021, 52, 498–521.
  10. Ghaffari, S.; Ziaie, A.H.; Moradpour, H.; Asghariyan, F.; Feleppa, F.; Tavayef, M. Black hole thermodynamics in Sharma–Mittal generalized entropy formalism. Gen. Relativ. Gravit. 2019, 51, 93.
  11. Demirel, E.C.G. Dark energy model in higher-dimensional FRW universe with respect to generalized entropy of Sharma and Mittal of flat FRW space-time. Can. J. Phys. 2019, 97, 1185–1186.
  12. Frank, T.; Daffertshofer, A. Exact time-dependent solutions of the Rényi Fokker–Planck equation and the Fokker–Planck equations related to the entropies proposed by Sharma and Mittal. Phys. A Stat. Mech. Appl. 2000, 285, 351–366.
  13. Kluza, P.A.; Niezgoda, M. Inequalities for relative operator entropies. Electron. J. Linear Algebra 2014, 27, 851–864.
  14. Kluza, P.A.; Niezgoda, M. Generalizations of Crooks and Lin's results on Jeffreys–Csiszár and Jensen–Csiszár f-divergences. Phys. A Stat. Mech. Appl. 2016, 463, 383–393.
  15. Kluza, P.A.; Niezgoda, M. On Csiszár and Tsallis type f-divergences induced by superquadratic and convex functions. Math. Inequal. Appl. 2018, 21, 455–467.
  16. Kluza, P.A. On Jensen–Rényi and Jeffreys–Rényi type f-divergences induced by convex functions. Phys. A Stat. Mech. Appl. 2020, 548, 1–10.
  17. Crooks, G.E. On Measures of Entropy and Information. Available online: http://threeplusone.com/info (accessed on 22 September 2018).
  18. Csiszár, I. Information-type measures of differences of probability distributions and indirect observations. Stud. Sci. Math. Hung. 1967, 2, 299–318.
  19. Csiszár, I.; Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems; Academic Press: New York, NY, USA, 1981.
  20. Dragomir, S.S. Inequalities for the Csiszár f-Divergence in Information Theory; RGMIA Monographs, Victoria University, 2000. Available online: http://rgmia.org/monographs/csiszar.htm (accessed on 1 February 2001).
  21. Baez, J.C. Rényi entropy and free energy. arXiv 2011, arXiv:1102.2098.
  22. Dragomir, S.S.; Pecaric, J.E.; Persson, L.E. Properties of some functionals related to Jensen's inequality. Acta Math. Hungar. 1996, 70, 129–143.