Refined Young Inequality and Its Application to Divergences

We give bounds on the difference between the weighted arithmetic mean and the weighted geometric mean. These imply refined Young inequalities and reverses of the Young inequality. We also study some properties of the difference between the weighted arithmetic mean and the weighted geometric mean. Applying the newly obtained inequalities, we show some results on the Tsallis divergence, the Rényi divergence, the Jeffreys–Tsallis divergence and the Jensen–Shannon–Tsallis divergence.


Introduction
The Young integral inequality is the source of many basic inequalities. Young [28] proved the following: suppose that $f : [0, \infty) \to [0, \infty)$ is an increasing continuous function such that $f(0) = 0$ and $\lim_{x\to\infty} f(x) = \infty$. Then, for all $a, b \ge 0$,
$$ab \le \int_0^a f(x)\,dx + \int_0^b f^{-1}(y)\,dy, \qquad (1)$$
with equality if and only if $b = f(a)$. Such a gap is often used to define the Fenchel–Legendre divergence in information geometry [3,25]. For $f(x) = x^{p-1}$ $(p > 1)$ in inequality (1), we deduce the classical Young inequality
$$ab \le \frac{a^p}{p} + \frac{b^q}{q} \qquad (2)$$
for all $a, b > 0$ and $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$. The equality occurs if and only if $a^p = b^q$. Minguzzi [18] proved a reverse Young inequality in the following way:
$$\frac{a^p}{p} + \frac{b^q}{q} - ab \le \left(a^{p-1} - b\right)\left(a - b^{q-1}\right) \qquad (3)$$
for all $a, b > 0$ and $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$.
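As a quick numerical sanity check of (2) and (3) as recalled above, the following minimal Python sketch evaluates both sides on random inputs; the helper names are ours, not from the paper.

```python
import random

def young_gap(a: float, b: float, p: float) -> float:
    """Gap a^p/p + b^q/q - a*b of the classical Young inequality (2)."""
    q = p / (p - 1.0)  # conjugate exponent: 1/p + 1/q = 1
    return a**p / p + b**q / q - a * b

def minguzzi_rhs(a: float, b: float, p: float) -> float:
    """Right-hand side (a^{p-1} - b)(a - b^{q-1}) of the reverse inequality (3)."""
    q = p / (p - 1.0)
    return (a**(p - 1) - b) * (a - b**(q - 1))

random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    p = random.uniform(1.1, 5.0)
    gap = young_gap(a, b, p)
    assert -1e-9 <= gap <= minguzzi_rhs(a, b, p) + 1e-9
print("Young (2) and the Minguzzi reverse (3) hold on all sampled triples.")
```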
The classical Young inequality (2) can be rewritten, by putting $1/p =: p$ (so that $1/q = 1 - p$), as
$$a^p b^{1-p} \le pa + (1-p)b \qquad (4)$$
for $a, b > 0$ and $0 \le p \le 1$. It is notable that the α-divergence is related to the difference between the weighted arithmetic mean and the weighted geometric mean [24]. For $p = 1/2$, we deduce the inequality between the geometric mean and the arithmetic mean, $G(a,b) := \sqrt{ab} \le \frac{a+b}{2} =: A(a,b)$. The Heinz mean [2, Eq. (3)] (see also [8]) is defined as
$$H_p(a,b) := \frac{a^p b^{1-p} + a^{1-p} b^p}{2}. \qquad (5)$$
In particular, when we discuss the Young inequality, we will refer to the last form (4). We consider the following expression:
$$d_p(a,b) := pa + (1-p)b - a^p b^{1-p}. \qquad (6)$$
The Cartwright–Field inequality (see, e.g., [5]) is often written as follows:
$$\frac{1}{2}p(1-p)\,\frac{(a-b)^2}{\max\{a,b\}} \le d_p(a,b) \le \frac{1}{2}p(1-p)\,\frac{(a-b)^2}{\min\{a,b\}} \qquad (7)$$
for $a, b > 0$ and $0 \le p \le 1$. This double inequality gives an improvement of the Young inequality and, at the same time, a reverse of the Young inequality. Kober proved in [15] a general result related to an improvement of the inequality between the arithmetic and geometric means, which for $n = 2$ implies the inequality
$$r\left(\sqrt{a} - \sqrt{b}\right)^2 \le d_p(a,b) \le (1-r)\left(\sqrt{a} - \sqrt{b}\right)^2, \qquad (8)$$
where $a, b > 0$, $0 \le p \le 1$ and $r = \min\{p, 1-p\}$. This inequality was rediscovered by Kittaneh and Manasrah in [14] (see also [4]). Finally, we found in [19] another improvement of the Young inequality together with a reverse inequality, given as the double inequality (9), valid for $a, b \ge 1$, $0 < p < 1$ and $r = \min\{p, 1-p\}$. It is remarkable that the inequalities (9) give a further refinement of (8), since $A(p) \ge 0$ and $B(p) \le 0$.
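Under the definitions (6)–(8) above, the same kind of numerical check applies to the Cartwright–Field and Kober bounds; again a minimal sketch with our own naming.

```python
import random

def d(p: float, a: float, b: float) -> float:
    """d_p(a, b) = p*a + (1-p)*b - a^p * b^(1-p), as in (6)."""
    return p * a + (1 - p) * b - a**p * b**(1 - p)

random.seed(1)
for _ in range(10_000):
    a, b = random.uniform(0.01, 10.0), random.uniform(0.01, 10.0)
    p = random.uniform(0.0, 1.0)
    gap = d(p, a, b)
    r = min(p, 1 - p)
    cf_lo = 0.5 * p * (1 - p) * (a - b)**2 / max(a, b)  # left side of (7)
    cf_hi = 0.5 * p * (1 - p) * (a - b)**2 / min(a, b)  # right side of (7)
    kob_lo = r * (a**0.5 - b**0.5)**2                   # left side of (8)
    kob_hi = (1 - r) * (a**0.5 - b**0.5)**2             # right side of (8)
    assert cf_lo - 1e-9 <= gap <= cf_hi + 1e-9
    assert kob_lo - 1e-9 <= gap <= kob_hi + 1e-9
print("Cartwright-Field (7) and Kober/Kittaneh-Manasrah (8) verified on all samples.")
```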
In [9], we also presented two inequalities which give two different reverses of the Young inequality, where $a, b > 0$ and $0 \le p \le 1$. See [12, Chapter 2] for recent advances on refinements and reverses of the Young inequality. The α-divergence is related to the difference of a weighted arithmetic mean with a geometric mean [24]. We mention that this gap is used in information geometry to define the Fenchel–Legendre divergence [3,25]. We give bounds on the difference between the weighted arithmetic mean and the weighted geometric mean. These imply refined Young inequalities and reverses of the Young inequality. We also study some properties of this difference. Applying the newly obtained inequalities, we show some results on the Tsallis divergence, the Rényi divergence, the Jeffreys–Tsallis divergence and the Jensen–Shannon–Tsallis divergence [16,26]. The parametric Jensen–Shannon divergence can be used to detect unusual data, and it can also serve as a means to perform relevant analysis of fire experiments [21].

Main results
We first give estimates on $d_p(a,b)$, and then study further properties of $d_p(a,b)$.
Theorem 2.1. For $0 < a, b \le 1$ and $0 \le p \le 1$, we have the double inequality (12), where $r = \min\{p, 1-p\}$ and $A(p)$, $B(p)$ are as in (9).

Proof. For $p = 0$ or $p = 1$ or $a = b$, we have equality. We assume $a \ne b$ and $0 < p < 1$. Because $0 < a, b \le 1$, we have $\frac{1}{a}, \frac{1}{b} \ge 1$, so, applying inequality (9) to $\frac{1}{a}$ and $\frac{1}{b}$, we deduce relation (13). If we replace $p$ by $1-p$ in relation (13), then, because $A(p) = A(1-p)$ and $B(1-p) = B(p)$, we prove the inequality of the statement. □

Theorem 2.2. For $a \ge b > 0$ and $0 < p \le 1$, we have the double inequality (14).

Proof. For $p = 1$ or $a = b$, we have equality. We assume $a > b$ and $0 < p < 1$. We take $x = a/b \ge 1$ in (14). For the lower bound, we consider the function $f : [1, a/b] \to \mathbb{R}$ defined by $f(t) := 1 - t^{p-1}$; by simple calculations, $f$ is concave, so we can apply the Hermite–Hadamard inequality [23], whose left-hand side yields the desired estimate. For the upper bound, since the function $f(t) := 1 - t^{1-p}$ is increasing, integrating the corresponding inequality in $t$ from $1$ to $x$ gives the claim. □
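For the reader's convenience, the Hermite–Hadamard inequality invoked in the proof above states that, for a concave function $f : [a, b] \to \mathbb{R}$,
$$f\!\left(\frac{a+b}{2}\right) \;\ge\; \frac{1}{b-a}\int_a^b f(t)\,dt \;\ge\; \frac{f(a)+f(b)}{2},$$
with both inequalities reversed when $f$ is convex.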
Theorem 2.3. For $a, b > 0$ and $0 \le p \le 1$, we have
$$p(1-p)\,\frac{(a-b)^2}{\max\{a,b\}} \;\le\; d_p(a,b) + d_{1-p}(a,b) \;\le\; p(1-p)\,\frac{(a-b)^2}{\min\{a,b\}}. \qquad (15)$$

Proof. We give two different proofs, (I) and (II).

(I) For $a = b$ or $p \in \{0, 1\}$, we obtain equality in the relation from the statement. Thus, we assume $a \ne b$ and $p \in (0, 1)$. It is easy to see that $d_p(a,b) + d_{1-p}(a,b) = (a^p - b^p)(a^{1-p} - b^{1-p})$. Using the Lagrange mean value theorem, there exist $c_1$ and $c_2$ between $a$ and $b$ such that $a^p - b^p = p(a-b)c_1^{p-1}$ and $a^{1-p} - b^{1-p} = (1-p)(a-b)c_2^{-p}$. But we have the inequality $\frac{1}{\max\{a,b\}} \le c_1^{p-1}c_2^{-p} \le \frac{1}{\min\{a,b\}}$, since $\min\{a,b\} \le c_1, c_2 \le \max\{a,b\}$. Therefore, we deduce the inequality of the statement.

(II) Using the Cartwright–Field inequality (7), we have bounds on $d_p(a,b)$, and if we replace $p$ by $1-p$, we deduce the same bounds on $d_{1-p}(a,b)$, for $a, b > 0$ and $0 \le p \le 1$. By summing up these inequalities, we prove the inequality of the statement. □

Remark 2.4. (i) From the proof of Theorem 2.3, we obtain $A(a,b) - H_p(a,b) = \frac{d_p(a,b) + d_{1-p}(a,b)}{2}$, so, dividing (15) by $2$, we deduce an estimate for the Heinz mean:
$$\frac{p(1-p)}{2}\,\frac{(a-b)^2}{\max\{a,b\}} \;\le\; A(a,b) - H_p(a,b) \;\le\; \frac{p(1-p)}{2}\,\frac{(a-b)^2}{\min\{a,b\}}.$$
(ii) Since $d_p(a,b) + d_{1-p}(a,b) = (a^p - b^p)(a^{1-p} - b^{1-p})$ and $d_{1-p}(a,b) \ge 0$, we obtain $d_p(a,b) \le (a^p - b^p)(a^{1-p} - b^{1-p})$, which is in fact the inequality given by Minguzzi (3), written in weighted form.
Theorem 2.5. For $a \ge b > 0$ and $0 \le p \le 1$, we have: (i) if $1/2 \le p \le 1$, then $d_{1-p}(a,b) \le d_p(a,b)$; (ii) if $0 \le p \le 1/2$, then $d_p(a,b) \le d_{1-p}(a,b)$.

Proof. For $a = b$ or $p \in \{0, 1/2, 1\}$, we obtain equality in the relations from the statement. Thus, we assume $a \ne b$ and $p \in (0, 1)$. We have $d_p(a,b) - d_{1-p}(a,b) = b\,f(a/b)$, where the function $f : (0, \infty) \to \mathbb{R}$ is defined by $f(t) = (2p-1)(t-1) - t^p + t^{1-p}$. We calculate the derivatives of $f$:
$$\frac{df(t)}{dt} = (2p-1) - p\,t^{p-1} + (1-p)\,t^{-p}, \qquad \frac{d^2f(t)}{dt^2} = p(1-p)\left(t^{p-2} - t^{-p-1}\right).$$
For $t > 1$ and $1/2 \le p < 1$, we have $\frac{d^2f(t)}{dt^2} > 0$, so the function $\frac{df}{dt}$ is increasing, and we obtain $\frac{df(t)}{dt} > \frac{df(1)}{dt} = 0$, which implies that the function $f$ is increasing, so we have $f(t) > f(1) = 0$ for $t = a/b > 1$; this proves (i). In the analogous way, for $t > 1$ and $0 < p \le 1/2$ we have $\frac{d^2f(t)}{dt^2} \le 0$, so $\frac{df}{dt}$ is decreasing and $\frac{df(t)}{dt} < \frac{df(1)}{dt} = 0$, which implies that the function $f$ is decreasing, so $f(t) < f(1) = 0$; this shows the inequality in (ii). □
Remark 2.6. From (i) in Theorem 2.5, for $1/2 \le p \le 1$ and $a \ge b$, we have $d_p(a,b) \ge \frac{1}{2}\left(d_p(a,b) + d_{1-p}(a,b)\right) \ge \frac{p(1-p)}{2}\,\frac{(a-b)^2}{\max\{a,b\}}$ by (15), which is just the left-hand side of the Cartwright–Field inequality (7). Therefore, it is quite natural to ask whether or not the following inequality, with the maximum replaced by the harmonic mean $\mathrm{HM}(a,b) := \frac{2ab}{a+b} \le \max\{a,b\}$, holds in the general case $a, b > 0$ and $0 \le p \le 1$:
$$d_p(a,b) \ge \frac{p(1-p)}{2}\,\frac{(a-b)^2}{\mathrm{HM}(a,b)}.$$
However, this inequality does not hold in general. We set the function $h_p(t) := d_p(t,1) - \frac{p(1-p)}{2}\,\frac{(t-1)^2}{\mathrm{HM}(t,1)}$. Then we have $h_{0.1}(0.3) \simeq -0.00434315$, $h_{0.1}(0.6) \simeq 0.000199783$ and also $h_{0.9}(1.8) \simeq 0.000352199$, $h_{0.9}(2.6) \simeq -0.00282073$, so $h_p(t)$ takes both signs and neither direction of the inequality can hold in general.
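The four values above can be reproduced in a few lines of Python, assuming the form of $h_p$ given here:

```python
def d(p: float, a: float, b: float) -> float:
    """d_p(a, b) = p*a + (1-p)*b - a^p * b^(1-p)."""
    return p * a + (1 - p) * b - a**p * b**(1 - p)

def h(p: float, t: float) -> float:
    """h_p(t): d_p(t, 1) minus the conjectured harmonic-mean lower bound."""
    hm = 2.0 * t / (t + 1.0)  # harmonic mean HM(t, 1)
    return d(p, t, 1.0) - 0.5 * p * (1 - p) * (t - 1.0)**2 / hm

for p, t in [(0.1, 0.3), (0.1, 0.6), (0.9, 1.8), (0.9, 2.6)]:
    print(f"h_{p}({t}) = {h(p, t):+.6g}")  # the signs alternate, so no general inequality
```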
Theorem 2.7. For $a, b \ge 1$ and $0 \le p \le 1$, we have the double inequality (18).

Proof. For $p = 0$ or $p = 1$ or $a = b$, we have equality. We assume $a \ne b$ and $0 < p < 1$. If $b < a$, then, using Theorem 2.2, we obtain the corresponding bounds. Using the Lagrange mean value theorem, we obtain $a^p - b^p = p(a-b)\theta^{p-1}$, where $b < \theta < a$; since $a, b \ge 1$, we have $\theta \ge 1$, which allows us to estimate $\theta^{p-1}$. If $b > a$ and we replace $p$ by $1-p$, then Theorem 2.2 implies the analogous bounds. Using the Lagrange theorem, we obtain $b^p - a^p = p(b-a)\theta^{p-1}$, where $a < \theta < b$; for $a \ge 1$, we estimate $\theta^{p-1}$ in the same way. Taking into account the above considerations, we prove the statement. □
Corollary 2.8. For $0 < a, b \le 1$ and $0 \le p \le 1$, we have the double inequality (20).

Proof. For $p = 0$ or $p = 1$ or $a = b$, we have the equality. We assume $a \ne b$ and $0 < p < 1$. If, in inequality (18), we replace $a, b \ge 1$ by $\frac{1}{a}, \frac{1}{b} \ge 1$ (which is allowed since $0 < a, b \le 1$), we deduce relation (19). Consequently, we prove the inequalities of the statement. □
Theorem 2.9. For $a, b > 0$ and $0 \le p \le 1$, we have
$$d_p(a,b) \le \frac{p(1-p)}{2}\cdot\frac{(a-b)^2(a+b)}{ab}. \qquad (21)$$

Proof. For $p = 0$ or $p = 1$ or $a = b$, we have the equality in the relation from the statement.
We assume $a \ne b$ and $0 < p < 1$. We consider a function $f : (0, \infty) \to \mathbb{R}$ built from the difference of the two sides; one checks that $\frac{df(t)}{dt} \le 0$, which implies that $f$ is decreasing, so we obtain $f(t) \le f(1) = 0$. Therefore, we find an inequality which, after multiplying by $t > 0$, is equivalent to
$$pt + (1-p) - t^p \le \frac{p(1-p)}{2}\cdot\frac{(t-1)^2(t+1)}{t}$$
for all $t > 0$ and $p \in [0, 1]$. Therefore, if we take $t = \frac{a}{b}$ in the above inequality and multiply through by $b$, after some calculations we deduce the inequality of the statement. □
Corollary 2.10. For $a, b > 0$ and $0 \le p \le 1$, we have
$$d_p(a,b) + d_{1-p}(a,b) \le p(1-p)\cdot\frac{(a-b)^2(a+b)}{ab}. \qquad (22)$$

Proof. For $p = 0$ or $p = 1$ or $a = b$, we have the equality. We assume $a \ne b$ and $0 < p < 1$. If, in inequality (21), we exchange $a$ with $b$, we deduce $d_p(b,a) = d_{1-p}(a,b) \le \frac{p(1-p)}{2}\cdot\frac{(a-b)^2(a+b)}{ab}$, so, adding this to (21), we prove the inequality of the statement. □
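A quick numerical check of (21) and (22) as stated above, on random inputs (a minimal sketch; the tolerances guard against rounding):

```python
import random

def d(p: float, a: float, b: float) -> float:
    return p * a + (1 - p) * b - a**p * b**(1 - p)

random.seed(2)
for _ in range(100_000):
    a, b = random.uniform(1e-3, 10.0), random.uniform(1e-3, 10.0)
    p = random.uniform(0.0, 1.0)
    rhs = p * (1 - p) * (a - b)**2 * (a + b) / (a * b)   # right side of (22)
    assert d(p, a, b) <= 0.5 * rhs + 1e-9                # inequality (21)
    assert d(p, a, b) + d(1 - p, a, b) <= rhs + 1e-9     # inequality (22)
print("inequalities (21) and (22) hold on all samples")
```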
Applications to divergences

For two probability distributions $p := \{p_1, \ldots, p_n\}$ and $r := \{r_1, \ldots, r_n\}$ with $p_j, r_j > 0$, the Tsallis divergence is given by
$$D_q^T(p|r) := \sum_{j=1}^n \frac{p_j - p_j^q r_j^{1-q}}{1-q}, \qquad (q > 0,\ q \ne 1),$$
and the Rényi divergence (e.g., [1]) by
$$D_q^R(p|r) := \frac{1}{q-1}\log\sum_{j=1}^n p_j^q r_j^{1-q}, \qquad (q > 0,\ q \ne 1).$$
We see (e.g., in [10]) that
$$D_q^R(p|r) = \frac{1}{q-1}\log\left(1 + (q-1)D_q^T(p|r)\right). \qquad (23)$$
It is also known that $\lim_{q\to1} D_q^T(p|r) = \lim_{q\to1} D_q^R(p|r) = D(p|r)$, where $D(p|r) := \sum_{j=1}^n p_j\log\frac{p_j}{r_j}$ is the standard divergence (KL information, relative entropy). The Jeffreys divergence (see [10,11]) is defined by $J_1(p|r) := D(p|r) + D(r|p)$ and the Jensen–Shannon divergence [16,26] is defined by
$$JS_1(p|r) := \frac{1}{2}D\!\left(p\Big|\frac{p+r}{2}\right) + \frac{1}{2}D\!\left(r\Big|\frac{p+r}{2}\right).$$
In [20], the Jeffreys and the Jensen–Shannon divergences are extended to biparametric forms. In [11], Furuichi and Mitroi generalized these divergences to the Jeffreys–Tsallis divergence, given by $J_q(p|r) := D_q^T(p|r) + D_q^T(r|p)$, and to the Jensen–Shannon–Tsallis divergence, defined as
$$JS_q(p|r) := \frac{1}{2}D_q^T\!\left(p\Big|\frac{p+r}{2}\right) + \frac{1}{2}D_q^T\!\left(r\Big|\frac{p+r}{2}\right).$$
Several properties of divergences can be extended to the setting of operator theory [22].
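To fix the notation computationally, here is a minimal Python sketch of these divergences under the normalizations used above; the function names are ours.

```python
import math
from typing import Sequence

def tsallis_div(p: Sequence[float], r: Sequence[float], q: float) -> float:
    """Tsallis divergence D_q^T(p|r) = sum_j (p_j - p_j^q r_j^{1-q}) / (1 - q)."""
    return sum(pj - pj**q * rj**(1 - q) for pj, rj in zip(p, r)) / (1 - q)

def renyi_div(p: Sequence[float], r: Sequence[float], q: float) -> float:
    """Rényi divergence D_q^R(p|r) = log(sum_j p_j^q r_j^{1-q}) / (q - 1)."""
    return math.log(sum(pj**q * rj**(1 - q) for pj, rj in zip(p, r))) / (q - 1)

def jeffreys_tsallis(p, r, q):
    """Jeffreys-Tsallis divergence J_q(p|r) = D_q^T(p|r) + D_q^T(r|p)."""
    return tsallis_div(p, r, q) + tsallis_div(r, p, q)

def js_tsallis(p, r, q):
    """Jensen-Shannon-Tsallis divergence JS_q(p|r)."""
    m = [(pj + rj) / 2 for pj, rj in zip(p, r)]  # mixture (p + r)/2
    return (tsallis_div(p, m, q) + tsallis_div(r, m, q)) / 2

p, r = [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]
for q in (0.5, 2.0):
    print(q, jeffreys_tsallis(p, r, q), js_tsallis(p, r, q))
```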
For the Tsallis divergence, we have the following relations.

Theorem 3.1. For two probability distributions $p$ and $r$ as above and $0 < q < 1$, we have
$$q\sum_{j=1}^n \frac{(p_j - r_j)^2}{\max\{p_j, r_j\}} \le J_q(p|r) \le q\sum_{j=1}^n \frac{(p_j - r_j)^2}{\min\{p_j, r_j\}}. \qquad (24)$$

Proof. From the definition of the Tsallis divergence and the notation (6), we deduce the equality
$$J_q(p|r) = \frac{1}{1-q}\sum_{j=1}^n \left(d_q(p_j, r_j) + d_{1-q}(p_j, r_j)\right).$$
Applying Theorem 2.3, we obtain $q(1-q)\frac{(p_j - r_j)^2}{\max\{p_j, r_j\}} \le d_q(p_j, r_j) + d_{1-q}(p_j, r_j) \le q(1-q)\frac{(p_j - r_j)^2}{\min\{p_j, r_j\}}$, and combining with the above equality, we deduce the inequalities (24). □
Remark 3.2. (i) In the limit of $q \to 1$ in (24), we then obtain
$$\sum_{j=1}^n \frac{(p_j - r_j)^2}{\max\{p_j, r_j\}} \le D(p|r) + D(r|p) \le \sum_{j=1}^n \frac{(p_j - r_j)^2}{\min\{p_j, r_j\}}$$
for the standard divergence.
(ii) From (23), we have $D_q^T(p|r) = \frac{1}{q-1}\left(e^{(q-1)D_q^R(p|r)} - 1\right) \ge D_q^R(p|r)$ for $q > 1$, and $D_q^T(p|r) \le D_q^R(p|r)$ for $0 < q < 1$, where we used the inequality $e^x \ge x + 1$ for all $x \in \mathbb{R}$. Thus, we deduce the inequalities
$$D_q^T(p|r) + D_q^T(r|p) \le D_q^R(p|r) + D_q^R(r|p), \qquad (0 < q < 1) \qquad (25)$$
and $D_q^T(p|r) + D_q^T(r|p) \ge D_q^R(p|r) + D_q^R(r|p)$, $(q > 1)$. Combining (25) with Theorem 3.1, we therefore have the following result for the Rényi divergence:
$$D_q^R(p|r) + D_q^R(r|p) \ge q\sum_{j=1}^n \frac{(p_j - r_j)^2}{\max\{p_j, r_j\}}, \qquad (0 < q < 1).$$
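Continuing the sketch above (reusing tsallis_div, renyi_div and math from it), relation (23) and the comparison in (ii) can be confirmed numerically:

```python
# Continues the previous sketch: tsallis_div, renyi_div and math are defined there.
p, r = [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]
for q in (0.3, 0.7, 1.5, 3.0):
    DT, DR = tsallis_div(p, r, q), renyi_div(p, r, q)
    assert math.isclose(DR, math.log(1 + (q - 1) * DT) / (q - 1))  # relation (23)
    assert (DT >= DR) == (q > 1)  # e^x >= x + 1: D_q^T >= D_q^R exactly when q > 1
print("relation (23) and the Tsallis/Renyi comparison confirmed")
```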
We next give the relation between the Jeffreys–Tsallis divergence and the Jensen–Shannon–Tsallis divergence.

Theorem 3.3. For two probability distributions $p$ and $r$ as above and $q > 0$, $q \ne 1$, we have $JS_q(p|r) \le \frac{1}{4}J_q(p|r)$.

Proof. We consider the function $g : (0, \infty) \to \mathbb{R}$ defined by $g(t) = t^{1-q}$, which is concave for $q \in [0, 1)$. Therefore, we have $\left(\frac{p_j + r_j}{2}\right)^{1-q} \ge \frac{p_j^{1-q} + r_j^{1-q}}{2}$, which implies the inequality $p_j^q\left(\frac{p_j + r_j}{2}\right)^{1-q} \ge \frac{p_j + p_j^q r_j^{1-q}}{2}$. From the definition of the Tsallis divergence, we deduce the inequality $D_q^T\!\left(p\Big|\frac{p+r}{2}\right) \le \frac{1}{2}D_q^T(p|r)$, and analogously $D_q^T\!\left(r\Big|\frac{p+r}{2}\right) \le \frac{1}{2}D_q^T(r|p)$; averaging these is equivalent to the relation of the statement. For the case of $q > 1$, the function $g(t) = t^{1-q}$ is convex in $t > 0$. Similarly, we have the statement, taking into account that $1 - q < 0$. □
Theorem 3.5. For two probability distributions $p$ and $r$ as above and $0 \le q < 1$, we have the double inequality (27).

Proof. For $q = 0$, we obtain the equality. Now, we consider $0 < q < 1$. Using Theorem 2.1 for $a = p_j < 1$ and $b = r_j < 1$, $j \in \{1, 2, \ldots, n\}$, we deduce the bounds of (12) for $d_q(p_j, r_j)$, where $r = \min\{q, 1-q\}$. If we replace $q$ by $1 - q$, then, taking into account that $A(q) = A(1-q)$ and $B(q) = B(1-q)$, we obtain the corresponding bounds for $d_{1-q}(p_j, r_j)$. Taking the sum over $j = 1, 2, \ldots, n$, we find the inequalities which are equivalent to the inequalities in the statement. □
Remark 3.6. In the limit of $q \to 1$ in (27), we then obtain the corresponding bounds for the standard divergence. We give further bounds on the Jeffreys–Tsallis divergence by the use of Theorem 2.7 and Corollary 2.10.

Theorem 3.7. For two probability distributions $p := \{p_1, \ldots, p_n\}$ and $r := \{r_1, \ldots, r_n\}$ with $p_j > 0$ and $r_j > 0$ for all $j = 1, \ldots, n$, and $0 \le q < 1$, we have the double inequality (28); in particular, the upper bound reads $J_q(p|r) \le q\sum_{j=1}^n \frac{(p_j - r_j)^2(p_j + r_j)}{p_j r_j}$.

Proof. Putting $a := p_j$, $b := r_j$ and $p := q$ in (20), we deduce the lower bound for $d_q(p_j, r_j)$. Taking into account that $J_q(p|r) = \frac{1}{1-q}\sum_{j=1}^n \left(d_q(p_j, r_j) + d_{1-q}(p_j, r_j)\right)$ and taking the sum over $j = 1, 2, \ldots, n$, we prove the lower bound of $J_q(p|r)$. To prove the upper bound of $J_q(p|r)$, we put $a := p_j$, $b := r_j$ and $p := q$ in inequality (22). Then we deduce $d_q(p_j, r_j) + d_{1-q}(p_j, r_j) \le q(1-q)\frac{(p_j - r_j)^2(p_j + r_j)}{p_j r_j}$. By taking the sum over $j = 1, 2, \ldots, n$, we find
$$\sum_{j=1}^n \left(d_q(p_j, r_j) + d_{1-q}(p_j, r_j)\right) \le q(1-q)\sum_{j=1}^n \frac{(p_j - r_j)^2(p_j + r_j)}{p_j r_j}.$$
Consequently, we prove the inequalities of the statement.
From the above inequalities, we have the statement of Theorem 3.8 by summing over $j = 1, 2, \ldots, n$.
It is quite natural to extend the Jensen–Shannon–Tsallis divergence to the following form:
$$JS_q^v(p|r) := vD_q^T(p|vp + (1-v)r) + (1-v)D_q^T(r|vp + (1-v)r), \qquad (0 \le v \le 1).$$
We call this the $v$-weighted Jensen–Shannon–Tsallis divergence. For $v = 1/2$, we find that $JS_q^{1/2}(p|r) = JS_q(p|r)$, which is the Jensen–Shannon–Tsallis divergence. For this quantity $JS_q^v(p|r)$, we can obtain the following result in a way similar to the proof of Theorem 3.8.
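Continuing the divergence sketch above (reusing tsallis_div, js_tsallis and math), the $v$-weighted quantity is immediate, and $v = 1/2$ recovers $JS_q$:

```python
# Continues the earlier sketch: the v-weighted Jensen-Shannon-Tsallis divergence.
def js_tsallis_v(p, r, q: float, v: float) -> float:
    m = [v * pj + (1 - v) * rj for pj, rj in zip(p, r)]  # mixture v*p + (1-v)*r
    return v * tsallis_div(p, m, q) + (1 - v) * tsallis_div(r, m, q)

p, r = [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]
assert math.isclose(js_tsallis_v(p, r, 0.5, 0.5), js_tsallis(p, r, 0.5))
```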
Proposition 3.9. For two probability distributions $p$ and $r$ as above, $0 \le v \le 1$ and $0 < q < 1$, we have the corresponding Cartwright–Field-type bounds on $JS_q^v(p|r)$.

Proof. We calculate
$$JS_q^v(p|r) = \frac{1}{1-q}\sum_{j=1}^n \left(v\,d_q(p_j, m_j) + (1-v)\,d_q(r_j, m_j)\right), \qquad m_j := vp_j + (1-v)r_j.$$
Using inequality (7), we deduce bounds for $d_q(p_j, m_j)$ and $d_q(r_j, m_j)$. Multiplying these inequalities by $v$ and $1-v$, respectively, and then taking the sum over $j = 1, 2, \ldots, n$, we obtain the statement. □

Conclusion
We obtained new inequalities which improve the classical Young inequality, by analytical calculations with known inequalities. We also obtained some bounds on the Jeffreys–Tsallis divergence and the Jensen–Shannon–Tsallis divergence. At this point, we do not know whether the obtained bounds will play any role in information theory. However, if one seeks the meaning of the parameter $q$ in divergences based on the Tsallis divergence, then we may state that almost all theorems (except for Theorem 3.3) hold for $0 \le q < 1$. In the first author's previous studies [6,7], some results related to the Tsallis divergence (relative entropy) remain true for $0 \le q < 1$, while some results related to the Tsallis entropy remain true for $q > 1$. In this paper, we treated Tsallis-type divergences, and it turned out that almost all results are true for $0 \le q < 1$. This insight may give a rough meaning of the parameter $q$.
Since our results in Section 3 are based on the inequalities in Section 2, we summarize the tightness of our obtained inequalities in Section 2. The double inequality (12) is a counterpart of the double inequality (9) for $a, b \in (0, 1]$. Therefore, they cannot be compared with each other from the point of view of tightness, since the conditions are different. The double inequality (12) was used to obtain Theorem 3.5. The double inequality (15) is essentially the Cartwright–Field inequality itself, and it was used to obtain Theorem 3.1 as the first result in Section 3. The results in Theorem 2.5 are mathematical properties of $d_p(a,b)$. The inequalities given in (18) give an improvement of the left-hand side of the inequality (7) for the case $a, b \ge 1$, and we obtained Theorem 3.7 by (18). We obtained the upper bound of $d_p(a,b)$ as a counterpart of (18) for general $a, b > 0$; this was used to prove Corollary 2.10, which in turn was used to prove Theorem 3.7. However, we find that the upper bound of $d_p(a,b) + d_{1-p}(a,b)$ given in (22) is not tighter than the one in (15).
Finally, Theorem 3.3 can be obtained from the convexity/concavity of the function $t^{1-q}$. The studies to obtain much sharper bounds will be continued. We extended the Jensen–Shannon–Tsallis divergence to $JS_q^v(p|r) := vD_q^T(p|vp + (1-v)r) + (1-v)D_q^T(r|vp + (1-v)r)$, $(0 \le v \le 1,\ q > 0,\ q \ne 1)$, which we call the $v$-weighted Jensen–Shannon–Tsallis divergence. For $v = 1/2$, we find that $JS_q^{1/2}(p|r) = JS_q(p|r)$, the Jensen–Shannon–Tsallis divergence. For this information-theoretic divergence measure $JS_q^v(p|r)$, we obtained several characterizations.
