Article

Bounds for the Differences between Arithmetic and Geometric Means and Their Applications to Inequalities

by
Shigeru Furuichi
1 and
Nicuşor Minculete
2,*
1
Department of Information Science, College of Humanities and Sciences, Nihon University, Setagaya-ku, Tokyo 156-8550, Japan
2
Department of Mathematics and Computer Science, Transilvania University of Braşov, 500091 Braşov, Romania
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(12), 2398; https://doi.org/10.3390/sym13122398
Submission received: 7 November 2021 / Revised: 24 November 2021 / Accepted: 10 December 2021 / Published: 12 December 2021
(This article belongs to the Special Issue Symmetry in the Mathematical Inequalities)

Abstract: Refining and reversing weighted arithmetic–geometric mean inequalities have been studied in many papers. In this paper, we provide some bounds for the differences between the weighted arithmetic and geometric means, using known inequalities. We improve the results given by Furuichi–Ghaemi–Gharakhanlu and Sababheh–Choi. We also give some bounds on entropies, applying these results in a different approach. We explore certain convex or concave functions, which are symmetric about the axis $t = 1/2$.

1. Introduction

We denote the set of all probability distributions by
$$\Delta_n := \left\{ p = \{p_1, p_2, \dots, p_n\} \;\middle|\; p_j > 0 \ (j = 1, 2, \dots, n), \ \sum_{j=1}^n p_j = 1 \right\}.$$
In this manuscript, for mathematical simplicity, we exclude the case $p_j = 0$ for $j = 1, 2, \dots, n$. For any $p \in \Delta_n$, the Shannon entropy $H(p)$, the Rényi entropy $R_q(p)$ and the Tsallis entropy $H_q(p)$ are defined as [1,2,3]
$$H(p) := -\sum_{j=1}^n p_j \log p_j, \qquad R_q(p) := \frac{1}{1-q} \log \sum_{j=1}^n p_j^q, \qquad H_q(p) := -\sum_{j=1}^n p_j^q \ln_q p_j,$$
where $\ln_q(x) := \frac{x^{1-q} - 1}{1-q}$ is the $q$-logarithmic function, defined for $x > 0$ and $q > 0$ with $q \neq 1$. It is known that $\lim_{q \to 1} R_q(p) = \lim_{q \to 1} H_q(p) = H(p)$. An interesting differential relation for the Rényi entropy [4] is
$$\frac{d R_q(p)}{d q} = -\frac{1}{(1-q)^2} \sum_{j=1}^n v_j \log \frac{v_j}{p_j},$$
which is proportional to the Kullback–Leibler divergence, where $v_j = p_j^q / \sum_{j=1}^n p_j^q$.
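As a quick numerical sanity check (ours, not from the paper), the three entropies and the limit $q \to 1$ can be computed directly from the definitions above; all function names below are our own choices.

```python
import math

def shannon(p):
    # H(p) = -sum_j p_j log p_j (natural logarithm)
    return -sum(x * math.log(x) for x in p)

def renyi(p, q):
    # R_q(p) = (1/(1-q)) log sum_j p_j^q, for q > 0, q != 1
    return math.log(sum(x**q for x in p)) / (1 - q)

def ln_q(x, q):
    # q-logarithm: (x^(1-q) - 1)/(1-q)
    return (x**(1 - q) - 1) / (1 - q)

def tsallis(p, q):
    # H_q(p) = -sum_j p_j^q ln_q(p_j)
    return -sum(x**q * ln_q(x, q) for x in p)

p = [0.5, 0.3, 0.2]
# Both R_q and H_q approach the Shannon entropy as q -> 1.
for q in (0.999, 1.001):
    assert abs(renyi(p, q) - shannon(p)) < 1e-3
    assert abs(tsallis(p, q) - shannon(p)) < 1e-3
```

The tolerance reflects that $q$ is only close to 1, not equal to it.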
In [5], the Fermi–Dirac–Tsallis entropy was introduced as
$$I_q^{FD}(p) := \sum_{j=1}^n p_j \ln_q \frac{1}{p_j} + \sum_{j=1}^n (1-p_j) \ln_q \frac{1}{1-p_j}$$
for $p \in \Delta_n$, and the Bose–Einstein–Tsallis entropy was given in [6] as
$$I_q^{BE}(p) := \sum_{j=1}^n p_j \ln_q \frac{1}{p_j} - \sum_{j=1}^n (1+p_j) \ln_q \frac{1}{1+p_j}.$$
In the limit $q \to 1$, we have
$$\lim_{q \to 1} I_q^{FD}(p) = I_1^{FD}(p) := -\sum_{j=1}^n p_j \log p_j - \sum_{j=1}^n (1-p_j) \log (1-p_j)$$
and
$$\lim_{q \to 1} I_q^{BE}(p) = I_1^{BE}(p) := -\sum_{j=1}^n p_j \log p_j + \sum_{j=1}^n (1+p_j) \log (1+p_j),$$
where I 1 F D ( p ) and I 1 B E ( p ) are the Fermi–Dirac entropy and the Bose–Einstein entropy, respectively. See [6] and references therein for their details.
In [7], we used the expression describing the difference between the weighted arithmetic mean and the weighted geometric mean:
$$d_p(a,b) := p a + (1-p) b - a^p b^{1-p}, \qquad a, b > 0, \ p \in [0,1].$$
It is well known that $d_p(a,b) \geq 0$, which is the Young inequality, or the weighted arithmetic–geometric mean inequality.
Next, we consider $d_p(a,b)$ for $p \in \mathbb{R}$. We easily find the following properties:
$$d_p(a,b) \geq 0 \quad (\text{when } p \in [0,1]),$$
$$d_p(a,a) = d_0(a,b) = d_1(a,b) = 0, \qquad d_p(a,b) = d_{1-p}(b,a)$$
and
$$d_p\!\left(\frac{1}{a}, \frac{1}{b}\right) = \frac{1}{ab}\, d_p(b,a), \qquad d_p(a,1) + d_p(b,1) = d_p(a+b,\, 2) + 2\left[\left(\frac{a+b}{2}\right)^p - \frac{a^p + b^p}{2}\right].$$
In [8], Sababheh and Choi proved that if $a$ and $b$ are positive numbers with $p \notin [0,1]$, then $d_p(a,b) \leq 0$.
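These properties, the Young inequality on $[0,1]$, and the reversed sign outside $[0,1]$ can all be checked numerically; the following sketch (ours) verifies them at a few sample points.

```python
def d(p, a, b):
    # d_p(a,b) = p*a + (1-p)*b - a^p * b^(1-p)
    return p * a + (1 - p) * b - a**p * b**(1 - p)

a, b = 2.0, 5.0
for p in [0.0, 0.1, 0.5, 0.9, 1.0]:
    assert d(p, a, b) >= -1e-12                              # Young: d_p >= 0 on [0,1]
    assert abs(d(p, a, b) - d(1 - p, b, a)) < 1e-10          # d_p(a,b) = d_{1-p}(b,a)
    assert abs(d(p, 1/a, 1/b) - d(p, b, a) / (a * b)) < 1e-10  # inversion identity
for p in [-0.5, 1.5]:
    assert d(p, a, b) <= 1e-12                               # reversed sign outside [0,1]
```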
Recently, some important results [9,10,11] estimating bounds on several entropies have been established via mathematical inequalities. In this paper, we provide some results on several entropies by applying new and improved inequalities.

2. Bounds of $d_\cdot(\cdot,\cdot)$ and Inequalities for Entropies

We first rewrite the Tsallis entropy, the Rényi entropy, the Fermi–Dirac–Tsallis entropy, and the Bose–Einstein–Tsallis entropy using the notation $d_\cdot(\cdot,\cdot)$.
Lemma 1.
For $p \in \Delta_n$ and $q > 0$ with $q \neq 1$, we have
(i)
$$H_q(p) = n - 1 - \frac{1}{1-q} \sum_{j=1}^n d_q(p_j, 1),$$
(ii)
$$R_q(p) = \frac{1}{1-q} \log\left[ n(1-q) + q - \sum_{j=1}^n d_q(p_j, 1) \right],$$
(iii)
$$I_q^{FD}(p) = n - \frac{1}{1-q} \sum_{j=1}^n \left[ d_q(p_j, 1) + d_q(1-p_j, 1) \right],$$
(iv)
$$I_q^{BE}(p) = n - \frac{1}{1-q} \sum_{j=1}^n \left[ d_q(p_j, 1) - d_q(1+p_j, 1) \right].$$
Proof. 
The proof follows by direct calculation.
(i)
Since $d_q(p_j, 1) = q p_j + (1-q) - p_j^q$, we have $\sum_{j=1}^n p_j^q = q + n(1-q) - \sum_{j=1}^n d_q(p_j, 1)$. Simple calculations
$$H_q(p) = \frac{\sum_{j=1}^n p_j^q - 1}{1-q} = \frac{q + n(1-q) - 1}{1-q} - \frac{1}{1-q} \sum_{j=1}^n d_q(p_j, 1) = n - 1 - \frac{1}{1-q} \sum_{j=1}^n d_q(p_j, 1)$$
show the statement in (i).
(ii)
Since we have the relation
$$\exp\left[ (1-q) R_q(p) \right] = 1 + (1-q) H_q(p),$$
we have
$$\exp\left[ (1-q) R_q(p) \right] = n(1-q) + q - \sum_{j=1}^n d_q(p_j, 1),$$
which implies the statement in (ii).
(iii)
We can calculate
$$\sum_{j=1}^n (1-p_j) \ln_q \frac{1}{1-p_j} = \frac{1}{1-q} \sum_{j=1}^n \left[ (1-p_j)^q - (1-p_j) \right] = \frac{1}{1-q}\left[ \sum_{j=1}^n (1-p_j)^q - (n-1) \right] = 1 - \frac{1}{1-q} \sum_{j=1}^n d_q(1-p_j, 1),$$
where we used $\sum_{j=1}^n (1-p_j)^q = n - q - \sum_{j=1}^n d_q(1-p_j, 1)$. Thus, since $\sum_{j=1}^n p_j \ln_q \frac{1}{p_j} = H_q(p)$, we have with the result of (i),
$$I_q^{FD}(p) = \left( n - 1 - \frac{1}{1-q} \sum_{j=1}^n d_q(p_j, 1) \right) + \left( 1 - \frac{1}{1-q} \sum_{j=1}^n d_q(1-p_j, 1) \right) = n - \frac{1}{1-q} \sum_{j=1}^n \left[ d_q(p_j, 1) + d_q(1-p_j, 1) \right].$$
(iv)
We can calculate
$$\sum_{j=1}^n (1+p_j) \ln_q \frac{1}{1+p_j} = \frac{1}{1-q} \sum_{j=1}^n \left[ (1+p_j)^q - (1+p_j) \right] = \frac{1}{1-q}\left[ \sum_{j=1}^n (1+p_j)^q - (n+1) \right] = -1 - \frac{1}{1-q} \sum_{j=1}^n d_q(1+p_j, 1),$$
where we used $\sum_{j=1}^n (1+p_j)^q = n + q - \sum_{j=1}^n d_q(1+p_j, 1)$. Thus, we have
$$I_q^{BE}(p) = \left( n - 1 - \frac{1}{1-q} \sum_{j=1}^n d_q(p_j, 1) \right) + 1 + \frac{1}{1-q} \sum_{j=1}^n d_q(1+p_j, 1) = n - \frac{1}{1-q} \sum_{j=1}^n \left[ d_q(p_j, 1) - d_q(1+p_j, 1) \right].$$
 □
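All four identities of Lemma 1 can be confirmed numerically; the following sketch (ours) checks them for one distribution and one value of $q$.

```python
import math

def ln_q(x, q):
    # q-logarithm (x^(1-q) - 1)/(1-q), for x > 0, q != 1
    return (x**(1 - q) - 1) / (1 - q)

def d(p, a, b):
    # d_p(a,b) = p a + (1-p) b - a^p b^(1-p)
    return p * a + (1 - p) * b - a**p * b**(1 - p)

p, q = [0.5, 0.3, 0.2], 0.7
n = len(p)
S1 = sum(d(q, x, 1.0) for x in p)
S2 = sum(d(q, 1 - x, 1.0) for x in p)
S3 = sum(d(q, 1 + x, 1.0) for x in p)

H_q  = -sum(x**q * ln_q(x, q) for x in p)                        # Tsallis
R_q  = math.log(sum(x**q for x in p)) / (1 - q)                  # Renyi
I_FD = sum(x*ln_q(1/x, q) + (1 - x)*ln_q(1/(1 - x), q) for x in p)
I_BE = sum(x*ln_q(1/x, q) - (1 + x)*ln_q(1/(1 + x), q) for x in p)

assert abs(H_q  - (n - 1 - S1/(1 - q))) < 1e-10                  # (i)
assert abs(R_q  - math.log(n*(1 - q) + q - S1)/(1 - q)) < 1e-10  # (ii)
assert abs(I_FD - (n - (S1 + S2)/(1 - q))) < 1e-10               # (iii)
assert abs(I_BE - (n - (S1 - S3)/(1 - q))) < 1e-10               # (iv)
```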
We give relations on $d_\cdot(\cdot,\cdot)$.
Lemma 2.
Let $a, b > 0$. If $p \in \mathbb{R}$, then the following equalities hold:
$$d_p(a,b) = p\left( \sqrt{a} - \sqrt{b} \right)^2 + d_{2p}\!\left( \sqrt{ab},\; b \right)$$
and
$$d_p(a,b) = (1-p)\left( \sqrt{a} - \sqrt{b} \right)^2 + d_{2p-1}\!\left( a,\; \sqrt{ab} \right).$$
Proof. 
We note that $a^p b^{1-p} = (ab)^{1-p} a^{2p-1} = \left( \sqrt{ab} \right)^{2-2p} a^{2p-1} = \left( \sqrt{ab} \right)^{2p} b^{1-2p}$.
(i)
Then,
$$d_p(a,b) = p a + (1-p) b - \left( \sqrt{ab} \right)^{2p} b^{1-2p} = \left[ p a + (1-p) b - 2p\sqrt{ab} - (1-2p) b \right] + \left[ 2p\sqrt{ab} + (1-2p) b - \left( \sqrt{ab} \right)^{2p} b^{1-2p} \right] = p\left( \sqrt{a} - \sqrt{b} \right)^2 + d_{2p}\!\left( \sqrt{ab},\, b \right).$$
(ii)
We also have
$$d_p(a,b) = p a + (1-p) b - \left( \sqrt{ab} \right)^{2-2p} a^{2p-1} = \left[ p a + (1-p) b - 2(1-p)\sqrt{ab} - (2p-1) a \right] + \left[ (2p-1) a + 2(1-p)\sqrt{ab} - \left( \sqrt{ab} \right)^{2-2p} a^{2p-1} \right] = (1-p)\left( \sqrt{a} - \sqrt{b} \right)^2 + d_{2p-1}\!\left( a,\, \sqrt{ab} \right).$$
 □
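Both decompositions of Lemma 2 hold for every real $p$; a quick numerical check (ours):

```python
import math

def d(p, a, b):
    # d_p(a,b) = p a + (1-p) b - a^p b^(1-p)
    return p * a + (1 - p) * b - a**p * b**(1 - p)

a, b = 3.0, 7.0
g = math.sqrt(a * b)
s = (math.sqrt(a) - math.sqrt(b))**2
for p in [-0.4, 0.2, 0.5, 0.8, 1.3]:   # identities hold for all real p
    assert abs(d(p, a, b) - (p * s + d(2*p, g, b))) < 1e-10
    assert abs(d(p, a, b) - ((1 - p) * s + d(2*p - 1, a, g))) < 1e-10
```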
In several papers [7,12,13,14], we find estimates for the bounds of $d_p(a,b)$. For this purpose, we use the following inequalities (a)–(c).
(a)
Kittaneh and Manasrah gave in [12]:
$$r(p)\left( \sqrt{a} - \sqrt{b} \right)^2 \leq d_p(a,b) \leq R(p)\left( \sqrt{a} - \sqrt{b} \right)^2, \tag{3}$$
where $a, b > 0$, $0 \leq p \leq 1$ and $r(p) = \min\{p, 1-p\}$, $R(p) = \max\{p, 1-p\}$; these notations are used throughout this paper without further mention.
(b)
Cartwright and Field proved the inequality (see, e.g., [14]):
$$\frac{1}{2}\, p(1-p)\, \frac{(a-b)^2}{\max\{a,b\}} \leq d_p(a,b) \leq \frac{1}{2}\, p(1-p)\, \frac{(a-b)^2}{\min\{a,b\}} \tag{4}$$
for $a, b > 0$ and $0 \leq p \leq 1$.
(c)
Alzer, da Fonseca, and Kovačec obtained the following inequalities (see, e.g., [13]):
$$\frac{1}{2}\, p(1-p)\, \min\{a,b\} \log^2 \frac{a}{b} \leq d_p(a,b) \leq \frac{1}{2}\, p(1-p)\, \max\{a,b\} \log^2 \frac{a}{b} \tag{5}$$
and
$$\min\left\{ \frac{p}{q}, \frac{1-p}{1-q} \right\} d_q(a,b) \leq d_p(a,b) \leq \max\left\{ \frac{p}{q}, \frac{1-p}{1-q} \right\} d_q(a,b), \tag{6}$$
for $a, b > 0$ and $0 < p, q < 1$.
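The three families of bounds (a)–(c) can be compared numerically; the following sketch (ours) verifies them at a few sample points (avoiding $p = 1/2$, where (a) holds with equality):

```python
import math

def d(p, a, b):
    return p * a + (1 - p) * b - a**p * b**(1 - p)

a, b = 2.0, 9.0
s = (math.sqrt(a) - math.sqrt(b))**2
for p in [0.1, 0.25, 0.75, 0.9]:
    r, R = min(p, 1 - p), max(p, 1 - p)
    # (a) Kittaneh-Manasrah
    assert r * s <= d(p, a, b) <= R * s
    # (b) Cartwright-Field
    cf = 0.5 * p * (1 - p) * (a - b)**2
    assert cf / max(a, b) <= d(p, a, b) <= cf / min(a, b)
    # (c) Alzer-da Fonseca-Kovacec, first pair
    al = 0.5 * p * (1 - p) * math.log(a / b)**2
    assert al * min(a, b) <= d(p, a, b) <= al * max(a, b)
    # (c) second pair, ratio bounds with another parameter q
    q = 0.3
    assert min(p/q, (1-p)/(1-q)) * d(q, a, b) <= d(p, a, b) + 1e-10
    assert d(p, a, b) <= max(p/q, (1-p)/(1-q)) * d(q, a, b) + 1e-10
```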
Taking into account (1) and (2), setting $b = 1$ and replacing $p$ by $q$ in the inequalities (a)–(c) above, we obtain the following.
(a1)
$$r(q)\left( \sqrt{a} - 1 \right)^2 \leq d_q(a,1) \leq R(q)\left( \sqrt{a} - 1 \right)^2, \tag{7}$$
where $a > 0$ and $0 \leq q \leq 1$.
(b1)
$$\frac{1}{2}\, q(1-q)\, (a-1)^2 \leq d_q(a,1) \leq \frac{1}{2}\, q(1-q)\, \frac{(a-1)^2}{a} \tag{8}$$
for $0 < a \leq 1$ and $0 \leq q \leq 1$.
(c1)
$$\frac{1}{2}\, q(1-q)\, a \log^2 a \leq d_q(a,1) \leq \frac{1}{2}\, q(1-q) \log^2 a \tag{9}$$
and
$$\min\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} d_p(a,1) \leq d_q(a,1) \leq \max\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} d_p(a,1)$$
for $0 < a \leq 1$ and $0 < p, q < 1$.
If we take $a = p_j < 1$ for all $j \in \{1, \dots, n\}$ in the above inequalities (a1)–(c1) and sum from $j = 1$ to $n$, we deduce the following inequalities (a2)–(c2) on $d_\cdot(\cdot,\cdot)$.
(a2)
$$r(q) \sum_{j=1}^n \left( \sqrt{p_j} - 1 \right)^2 \leq \sum_{j=1}^n d_q(p_j, 1) \leq R(q) \sum_{j=1}^n \left( \sqrt{p_j} - 1 \right)^2,$$
where $0 \leq q \leq 1$.
(b2)
$$\frac{q(1-q)}{2} \sum_{j=1}^n (p_j - 1)^2 \leq \sum_{j=1}^n d_q(p_j, 1) \leq \frac{q(1-q)}{2} \sum_{j=1}^n \frac{(p_j - 1)^2}{p_j}$$
for $0 \leq q \leq 1$.
(c2)
$$\frac{q(1-q)}{2} \sum_{j=1}^n p_j \log^2 p_j \leq \sum_{j=1}^n d_q(p_j, 1) \leq \frac{q(1-q)}{2} \sum_{j=1}^n \log^2 p_j$$
and
$$\min\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} \sum_{j=1}^n d_p(p_j, 1) \leq \sum_{j=1}^n d_q(p_j, 1) \leq \max\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} \sum_{j=1}^n d_p(p_j, 1)$$
for $0 < p, q < 1$.
Using point (i) of Lemma 1 and the inequalities (a2)–(c2), we deduce a series of inequalities for the Tsallis entropy $H_q(p)$, given as (A)–(C) in the following theorem.
Theorem 1.
Let 0 < p , q < 1 . Then we have the following (A)–(C).
(A)
$$n - 1 - \frac{R(q)}{1-q} \sum_{j=1}^n \left( \sqrt{p_j} - 1 \right)^2 \leq H_q(p) \leq n - 1 - \frac{r(q)}{1-q} \sum_{j=1}^n \left( \sqrt{p_j} - 1 \right)^2.$$
(B)
$$n - 1 - \frac{q}{2} \sum_{j=1}^n \frac{(p_j - 1)^2}{p_j} \leq H_q(p) \leq n - 1 - \frac{q}{2} \sum_{j=1}^n (p_j - 1)^2.$$
(C)
$$n - 1 - \frac{q}{2} \sum_{j=1}^n \log^2 p_j \leq H_q(p) \leq n - 1 - \frac{q}{2} \sum_{j=1}^n p_j \log^2 p_j$$
and
$$(n-1)\left[ 1 - \frac{1-p}{1-q} \max\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} \right] + \frac{1-p}{1-q} \max\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} H_p(p) \leq H_q(p) \leq (n-1)\left[ 1 - \frac{1-p}{1-q} \min\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} \right] + \frac{1-p}{1-q} \min\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} H_p(p).$$
If $p \leq q$, then $\min\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} = \frac{1-q}{1-p}$ and $\max\left\{ \frac{q}{p}, \frac{1-q}{1-p} \right\} = \frac{q}{p}$, so we obtain
$$(n-1)\, \frac{p-q}{p(1-q)} + \frac{q(1-p)}{p(1-q)}\, H_p(p) \leq H_q(p) \leq H_p(p),$$
which implies that $H_q(p)$ is decreasing in $q$.
In the limit of q 1 , we find some bounds for Shannon entropy as a corollary of the above theorem.
Corollary 1.
We have the following inequalities for the Shannon entropy $H(p)$:
$$H(p) \leq n - 1 - \sum_{j=1}^n \left( \sqrt{p_j} - 1 \right)^2 = 2\left( \sum_{j=1}^n \sqrt{p_j} - 1 \right),$$
$$n - 1 - \frac{1}{2} \sum_{j=1}^n \frac{(p_j - 1)^2}{p_j} \leq H(p) \leq n - 1 - \frac{1}{2} \sum_{j=1}^n (p_j - 1)^2,$$
$$n - 1 - \frac{1}{2} \sum_{j=1}^n \log^2 p_j \leq H(p) \leq n - 1 - \frac{1}{2} \sum_{j=1}^n p_j \log^2 p_j$$
and
$$H(p) \leq H_p(p), \qquad (0 < p < 1).$$
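The four bounds of Corollary 1 can be verified numerically; a short sketch (ours):

```python
import math

p = [0.5, 0.3, 0.2]
n = len(p)
H = -sum(x * math.log(x) for x in p)

assert H <= 2 * (sum(math.sqrt(x) for x in p) - 1)
assert n - 1 - 0.5*sum((x - 1)**2 / x for x in p) <= H <= n - 1 - 0.5*sum((x - 1)**2 for x in p)
assert n - 1 - 0.5*sum(math.log(x)**2 for x in p) <= H <= n - 1 - 0.5*sum(x * math.log(x)**2 for x in p)

def tsallis(p, q):
    # H_q(p) = (sum p_j^q - 1)/(1 - q)
    return (sum(x**q for x in p) - 1) / (1 - q)

for q in [0.2, 0.5, 0.8]:
    assert H <= tsallis(p, q)   # H(p) <= H_q(p) for 0 < q < 1
```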
Using points (ii) and (iii) of Lemma 1 and the inequalities (a2)–(c2), we deduce several inequalities for the Rényi entropy $R_q(p)$ and for the Fermi–Dirac–Tsallis entropy $I_q^{FD}(p)$ in the following:
Theorem 2.
Let 0 < q < 1 . Then we have
(A1)
$$\frac{1}{1-q} \log\left[ n(1-q) + q - R(q)\left( n + 1 - 2\sum_{j=1}^n \sqrt{p_j} \right) \right] \leq R_q(p) \leq \frac{1}{1-q} \log\left[ n(1-q) + q - r(q)\left( n + 1 - 2\sum_{j=1}^n \sqrt{p_j} \right) \right],$$
(B1)
$$\frac{1}{1-q} \log\left[ n(1-q) + q - \frac{q(1-q)}{2}\left( 1 - 2n + \sum_{j=1}^n \frac{1}{p_j} \right) \right] \leq R_q(p) \leq \frac{1}{1-q} \log\left[ n(1-q) + q - \frac{q(1-q)}{2}\left( n - 2 + \sum_{j=1}^n p_j^2 \right) \right],$$
(C1)
$$\frac{1}{1-q} \log\left[ n(1-q) + q - \frac{q(1-q)}{2} \sum_{j=1}^n \log^2 p_j \right] \leq R_q(p) \leq \frac{1}{1-q} \log\left[ n(1-q) + q - \frac{q(1-q)}{2} \sum_{j=1}^n p_j \log^2 p_j \right],$$
(A2)
$$n - \frac{R(q)}{1-q}\left[ 3n - 2\sum_{j=1}^n \left( \sqrt{p_j} + \sqrt{1-p_j} \right) \right] \leq I_q^{FD}(p) \leq n - \frac{r(q)}{1-q}\left[ 3n - 2\sum_{j=1}^n \left( \sqrt{p_j} + \sqrt{1-p_j} \right) \right],$$
(B2)
$$n - \frac{q}{2}\left[ \sum_{j=1}^n \frac{1}{p_j(1-p_j)} - 3n \right] \leq I_q^{FD}(p) \leq n - \frac{q}{2}\left[ n - 2 + 2\sum_{j=1}^n p_j^2 \right],$$
(C2)
$$n - \frac{q}{2} \sum_{j=1}^n \left[ \log^2 p_j + \log^2(1-p_j) \right] \leq I_q^{FD}(p) \leq n - \frac{q}{2} \sum_{j=1}^n \left[ p_j \log^2 p_j + (1-p_j)\log^2(1-p_j) \right].$$
In the limit $q \to 1$, we find some bounds for the Fermi–Dirac entropy as a corollary of the above theorem.
Corollary 2.
We have the following inequalities for the Fermi–Dirac entropy $I_1^{FD}(p)$:
$$I_1^{FD}(p) \leq 2\sum_{j=1}^n \left( \sqrt{p_j} + \sqrt{1-p_j} - 1 \right), \qquad \frac{1}{2}\left[ 5n - \sum_{j=1}^n \frac{1}{p_j(1-p_j)} \right] \leq I_1^{FD}(p) \leq \frac{1}{2}\left[ n + 2 - 2\sum_{j=1}^n p_j^2 \right]$$
and
$$n - \frac{1}{2} \sum_{j=1}^n \left[ \log^2 p_j + \log^2(1-p_j) \right] \leq I_1^{FD}(p) \leq n - \frac{1}{2} \sum_{j=1}^n \left[ p_j \log^2 p_j + (1-p_j)\log^2(1-p_j) \right].$$
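Likewise for Corollary 2; a short numerical check (ours) with the natural-logarithm Fermi–Dirac entropy:

```python
import math

p = [0.5, 0.3, 0.2]
n = len(p)
I_FD = -sum(x*math.log(x) + (1 - x)*math.log(1 - x) for x in p)

assert I_FD <= 2 * sum(math.sqrt(x) + math.sqrt(1 - x) - 1 for x in p)
lo = 0.5 * (5*n - sum(1/(x*(1 - x)) for x in p))
hi = 0.5 * (n + 2 - 2*sum(x**2 for x in p))
assert lo <= I_FD <= hi
lo2 = n - 0.5*sum(math.log(x)**2 + math.log(1 - x)**2 for x in p)
hi2 = n - 0.5*sum(x*math.log(x)**2 + (1 - x)*math.log(1 - x)**2 for x in p)
assert lo2 <= I_FD <= hi2
```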
Theorem 3.
Let $0 < q < 1$. Then,
(A3)
$$n + \frac{(2n+1)\, r(q) - (n+1)\, R(q) + 2R(q) \sum_{j=1}^n \sqrt{p_j} - 2r(q) \sum_{j=1}^n \sqrt{1+p_j}}{1-q} \leq I_q^{BE}(p) \leq n + \frac{(2n+1)\, R(q) - (n+1)\, r(q) + 2r(q) \sum_{j=1}^n \sqrt{p_j} - 2R(q) \sum_{j=1}^n \sqrt{1+p_j}}{1-q}, \tag{16}$$
(B3)
$$n + \frac{q}{2}\left[ n - \sum_{j=1}^n \frac{1}{p_j(p_j+1)} \right] \leq I_q^{BE}(p) \leq n + \frac{q}{2}\,(2 - n), \tag{17}$$
(C3)
$$n + \frac{q}{2} \sum_{j=1}^n \left[ \log^2(p_j+1) - \log^2 p_j \right] \leq I_q^{BE}(p) \leq n + \frac{q}{2} \sum_{j=1}^n \left[ (p_j+1)\log^2(p_j+1) - p_j \log^2 p_j \right]. \tag{18}$$
Proof. 
From inequality (7), we find
$$r(q)\left( \sqrt{p_j} - 1 \right)^2 \leq d_q(p_j, 1) \leq R(q)\left( \sqrt{p_j} - 1 \right)^2 \tag{19}$$
and
$$r(q)\left( \sqrt{p_j+1} - 1 \right)^2 \leq d_q(p_j+1, 1) \leq R(q)\left( \sqrt{p_j+1} - 1 \right)^2. \tag{20}$$
Using inequalities (19) and (20) and the expression of the Bose–Einstein–Tsallis entropy $I_q^{BE}(p)$ given above, we find
$$n + \frac{1}{1-q}\left[ r(q) \sum_{j=1}^n \left( \sqrt{p_j+1} - 1 \right)^2 - R(q) \sum_{j=1}^n \left( \sqrt{p_j} - 1 \right)^2 \right] \leq I_q^{BE}(p)$$
$$\leq n + \frac{1}{1-q}\left[ R(q) \sum_{j=1}^n \left( \sqrt{p_j+1} - 1 \right)^2 - r(q) \sum_{j=1}^n \left( \sqrt{p_j} - 1 \right)^2 \right],$$
which implies inequality (16). From inequality (8), we have:
$$\frac{1}{2}\, q(1-q)\, \frac{p_j^2}{p_j+1} \leq d_q(p_j+1, 1) \leq \frac{1}{2}\, q(1-q)\, p_j^2$$
and
$$\frac{1}{2}\, q(1-q)\, (p_j - 1)^2 \leq d_q(p_j, 1) \leq \frac{1}{2}\, q(1-q)\, \frac{(p_j - 1)^2}{p_j}.$$
Summing from $j = 1$ to $n$, we deduce inequality (17).
We apply inequality (9) in the following way:
$$\frac{1}{2}\, q(1-q) \log^2(p_j+1) \leq d_q(p_j+1, 1) \leq \frac{1}{2}\, q(1-q)\, (p_j+1) \log^2(p_j+1)$$
and
$$\frac{1}{2}\, q(1-q)\, p_j \log^2 p_j \leq d_q(p_j, 1) \leq \frac{1}{2}\, q(1-q) \log^2 p_j.$$
Summing from $j = 1$ to $n$, we deduce inequality (18). □
Corollary 3.
We have the following inequalities for the Bose–Einstein entropy $I_1^{BE}(p)$:
$$\frac{1}{2}\left[ 3n - \sum_{j=1}^n \frac{1}{p_j(1+p_j)} \right] \leq I_1^{BE}(p) \leq \frac{n}{2} + 1$$
and
$$n + \frac{1}{2} \sum_{j=1}^n \left[ \log^2(p_j+1) - \log^2 p_j \right] \leq I_1^{BE}(p) \leq n + \frac{1}{2} \sum_{j=1}^n \left[ (p_j+1)\log^2(p_j+1) - p_j \log^2 p_j \right].$$

3. New Characterizations of Young’s Inequality

The Young inequality is given by:
$$p a + (1-p) b \geq a^p b^{1-p}, \qquad (a, b > 0, \ p \in [0,1]),$$
which means $d_p(a,b) \geq 0$.
In this section, we give further bounds on $d_\cdot(\cdot,\cdot)$.
Lemma 3.
Let $a$ and $b$ be positive real numbers, and let $p \in \mathbb{R}$. Then,
$$d_p(a,b) = p \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2 + d_{2^n p}\!\left( \sqrt[2^n]{a\, b^{2^n - 1}},\; b \right) \tag{21}$$
and
$$d_p(a,b) = (1-p) \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a^{2^{k-1}-1}\, b} - \sqrt{a} \right)^2 + d_{2^n (p-1)+1}\!\left( a,\; \sqrt[2^n]{a^{2^n - 1}\, b} \right). \tag{22}$$
Proof. 
Using Lemma 2, for $p \in \mathbb{R}$ we have
$$d_p(a,b) = p\left( \sqrt{a} - \sqrt{b} \right)^2 + d_{2p}\!\left( \sqrt{ab},\, b \right).$$
Replacing $p$ by $2p$ and $a$ by $\sqrt{ab}$, we get
$$d_{2p}\!\left( \sqrt{ab},\, b \right) = 2p\left( \sqrt[4]{ab} - \sqrt{b} \right)^2 + d_{4p}\!\left( \sqrt[4]{a b^3},\, b \right).$$
If we inductively repeat the above substitutions, then for $k \geq 1$ we have
$$d_{2^{k-1} p}\!\left( \sqrt[2^{k-1}]{a\, b^{2^{k-1}-1}},\; b \right) = 2^{k-1} p \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2 + d_{2^k p}\!\left( \sqrt[2^k]{a\, b^{2^k - 1}},\; b \right).$$
Therefore, summing the above relations for $k \in \{1, \dots, n\}$, we obtain the relation (21) of the statement. Applying equality (21) and taking into account that $d_p(a,b) = d_{1-p}(b,a)$, we deduce equality (22). □
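Both telescoping identities of Lemma 3 can be verified numerically for any real $p$; the following sketch (ours) checks them with $n = 4$:

```python
import math

def d(p, a, b):
    # d_p(a,b) = p a + (1-p) b - a^p b^(1-p)
    return p * a + (1 - p) * b - a**p * b**(1 - p)

a, b, N = 3.0, 7.0, 4
for p in [-0.3, 0.1, 0.6, 1.4]:
    s1 = sum(2**(k-1) * ((a * b**(2**(k-1) - 1))**(1 / 2**k) - math.sqrt(b))**2
             for k in range(1, N + 1))
    rem1 = d(2**N * p, (a * b**(2**N - 1))**(1 / 2**N), b)
    assert abs(d(p, a, b) - (p * s1 + rem1)) < 1e-9          # identity (21)

    s2 = sum(2**(k-1) * ((b * a**(2**(k-1) - 1))**(1 / 2**k) - math.sqrt(a))**2
             for k in range(1, N + 1))
    rem2 = d(2**N * (p - 1) + 1, a, (a**(2**N - 1) * b)**(1 / 2**N))
    assert abs(d(p, a, b) - ((1 - p) * s2 + rem2)) < 1e-9    # identity (22)
```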
Remark 1.
From [8], if $a, b > 0$ and $p \notin [0,1]$, we have $d_p(a,b) \leq 0$; hence, we deduce $d_{2^n p}\!\left( \sqrt[2^n]{a b^{2^n-1}},\, b \right) \leq 0$ for $p \notin \left[0, \frac{1}{2^n}\right]$ and $d_{2^n(p-1)+1}\!\left( a,\, \sqrt[2^n]{a^{2^n-1} b} \right) \leq 0$ for $p \notin \left[1 - \frac{1}{2^n},\, 1\right]$. Using the above equalities, we deduce the inequalities:
$$d_p(a,b) \leq p \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2 \tag{23}$$
when $p \notin \left[0, \frac{1}{2^n}\right]$, and
$$d_p(a,b) \leq (1-p) \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a^{2^{k-1}-1}\, b} - \sqrt{a} \right)^2 \tag{24}$$
when $p \notin \left[1 - \frac{1}{2^n},\, 1\right]$. These inequalities were given by Furuichi et al. in ([15], Theorem 3). We also find that inequality (23) for $p \leq 0$ and inequality (24) for $p \geq 1$ were given by Sababheh–Choi in ([8], Theorem 2.9) and by Sababheh–Moslehian in ([16], Theorem 2.2).
Proposition 1.
Let $a$ and $b$ be positive real numbers. We then have the following bounds on $d_\cdot(\cdot,\cdot)$.
(i)
For $p \in \left[0, \frac{1}{2^n}\right]$, we have
$$r(2^n p) \left( \sqrt[2^{n+1}]{a\, b^{2^n-1}} - \sqrt{b} \right)^2 + p \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2 \leq d_p(a,b) \leq R(2^n p) \left( \sqrt[2^{n+1}]{a\, b^{2^n-1}} - \sqrt{b} \right)^2 + p \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2,$$
where $r(\cdot)$ and $R(\cdot)$ are defined above.
(ii)
For $p \in \left[0, \frac{1}{2^n}\right]$, we have
$$2^{n-1} p\, (1 - 2^n p)\, \frac{\left( \sqrt[2^n]{a\, b^{2^n-1}} - b \right)^2}{\max\left\{ \sqrt[2^n]{a\, b^{2^n-1}},\, b \right\}} + p \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2 \leq d_p(a,b)$$
$$\leq 2^{n-1} p\, (1 - 2^n p)\, \frac{\left( \sqrt[2^n]{a\, b^{2^n-1}} - b \right)^2}{\min\left\{ \sqrt[2^n]{a\, b^{2^n-1}},\, b \right\}} + p \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2.$$
(iii)
For $p \in \left[0, \frac{1}{2^n}\right]$, we have
$$\frac{p\,(1 - 2^n p)}{2^{n+1}} \min\left\{ \sqrt[2^n]{a\, b^{2^n-1}},\, b \right\} \log^2 \frac{a}{b} + p \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2 \leq d_p(a,b) \leq \frac{p\,(1 - 2^n p)}{2^{n+1}} \max\left\{ \sqrt[2^n]{a\, b^{2^n-1}},\, b \right\} \log^2 \frac{a}{b} + p \sum_{k=1}^{n} 2^{k-1} \left( \sqrt[2^k]{a\, b^{2^{k-1}-1}} - \sqrt{b} \right)^2.$$
Proof. 
We use the inequalities (a)–(c), where we replace $p$ by $2^n p$ and $a$ by $\sqrt[2^n]{a\, b^{2^n-1}}$. For $a, b > 0$ and $p \in \left[0, \frac{1}{2^n}\right]$, we have the following (a3)–(c3).
(a3)
$$r(2^n p) \left( \sqrt[2^{n+1}]{a\, b^{2^n-1}} - \sqrt{b} \right)^2 \leq d_{2^n p}\!\left( \sqrt[2^n]{a\, b^{2^n-1}},\, b \right) \leq R(2^n p) \left( \sqrt[2^{n+1}]{a\, b^{2^n-1}} - \sqrt{b} \right)^2, \tag{26}$$
(b3)
$$2^{n-1} p\,(1 - 2^n p)\, \frac{\left( \sqrt[2^n]{a\, b^{2^n-1}} - b \right)^2}{\max\left\{ \sqrt[2^n]{a\, b^{2^n-1}},\, b \right\}} \leq d_{2^n p}\!\left( \sqrt[2^n]{a\, b^{2^n-1}},\, b \right) \leq 2^{n-1} p\,(1 - 2^n p)\, \frac{\left( \sqrt[2^n]{a\, b^{2^n-1}} - b \right)^2}{\min\left\{ \sqrt[2^n]{a\, b^{2^n-1}},\, b \right\}}, \tag{27}$$
(c3)
$$\frac{p\,(1 - 2^n p)}{2^{n+1}} \min\left\{ \sqrt[2^n]{a\, b^{2^n-1}},\, b \right\} \log^2 \frac{a}{b} \leq d_{2^n p}\!\left( \sqrt[2^n]{a\, b^{2^n-1}},\, b \right) \leq \frac{p\,(1 - 2^n p)}{2^{n+1}} \max\left\{ \sqrt[2^n]{a\, b^{2^n-1}},\, b \right\} \log^2 \frac{a}{b}. \tag{28}$$
Using equality (21) and inequalities (26)–(28), we deduce the inequalities from the statement. □

4. The Connection between $d_\cdot(\cdot,\cdot)$ and Different Types of Convexity

In the following, we use the inequality by Kittaneh–Manasrah as noted in (3). We prepare some lemmas to state our results.
Lemma 4.
If $f : J \to \mathbb{R}$, where $J$ is an interval of $\mathbb{R}$, is a concave function, then
$$f\big( (1+r)x - ry \big) \leq (1+r)\, f(x) - r\, f(y) \tag{29}$$
for all $x, y \in J$ and all $r > 0$ such that $(1+r)x - ry \in J$. If $f$ is a convex function, then the reversed inequality above holds.
Proof. 
If $f$ is concave, then we have
$$\frac{1}{1+r}\, f\big( (1+r)x - ry \big) + \frac{r}{1+r}\, f(y) \leq f\left( \frac{(1+r)x - ry}{1+r} + \frac{r}{1+r}\, y \right) = f(x).$$
 □
The following result is given in ([15], Corollary 1). It is supplemental to the first inequality of (3).
Lemma 5.
Let $a$ and $b$ be positive real numbers and let $p \in (0,1)$. Then,
$$d_p(a,b) \geq r(p)\left( \sqrt{a} - \sqrt{b} \right)^2, \tag{30}$$
where $r(p) := \min\{p, 1-p\}$.
Proof. 
We set the function $f(t) := t^p - 2p\, t^{1/2} - (1-2p)$ for $t > 0$ and $p \in (0, 1/2)$. From $f'(t) = p\, t^{-1/2}\left( t^{p-1/2} - 1 \right)$, we find that $f'(t) = 0 \iff t = 1$, $f'(t) > 0$ for $0 < t < 1$ and $f'(t) < 0$ for $t > 1$. Thus, we have $f(t) \leq f(1) = 0$. Putting $t := a/b$ and multiplying both sides of the inequality $f(t) \leq 0$ by $b > 0$, we have
$$a^p b^{1-p} \leq 2p\sqrt{ab} + (1-2p)\, b,$$
which is equivalent to
$$p a + (1-p) b - p\left( \sqrt{a} - \sqrt{b} \right)^2 \geq a^p b^{1-p}, \qquad p \in (0, 1/2). \tag{31}$$
We similarly have
$$p a + (1-p) b - (1-p)\left( \sqrt{a} - \sqrt{b} \right)^2 \geq a^p b^{1-p}, \qquad p \in (1/2, 1). \tag{32}$$
From (31) and (32), we have (30). □
Note that the supplement to the second inequality of (3) never holds in general:
$$R(p)\left( \sqrt{a} - \sqrt{b} \right)^2 \leq d_p(a,b), \qquad a, b > 0, \ p \in (0,1).$$
To state the following result, we review log-convexity/log-concavity. For a function $f : I \to (0, \infty)$, where $I \subseteq \mathbb{R}$, if $f\big( (1-\lambda)x + \lambda y \big) \leq f^{1-\lambda}(x)\, f^{\lambda}(y)$ for all $x, y \in I$ and $\lambda \in [0,1]$, then $f$ is called a log-convex function. If the reversed inequality holds, then $f$ is called a log-concave function.
In the following two lemmas, we deal with functions symmetric about $\frac{1}{2}$ (i.e., $f(t) = f(1-t)$ for every $t \in [0,1]$). The results are applied to a concrete symmetric function related to entropy at the end of this section.
Lemma 6.
Let $f : [0,1] \to (0, \infty)$ be a convex function such that $f(t) = f(1-t)$ for every $t \in [0,1]$. Then
$$2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0) \leq f(t) \leq 2r(t)\, f(1/2) + \big(1 - 2r(t)\big) f(0), \tag{33}$$
where $r(t) := \min\{t, 1-t\}$ and $R(t) := \max\{t, 1-t\}$. If, in addition, $f$ is log-convex, then
$$2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0) \leq 2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0) - \big(1 - 2R(t)\big)\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2 \leq f(1/2)^{2R(t)}\, f(0)^{1-2R(t)} \leq f(t) \leq f(1/2)^{2r(t)}\, f(0)^{1-2r(t)} \leq 2r(t)\, f(1/2) + \big(1 - 2r(t)\big) f(0) - r\big(2r(t)\big)\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2 \leq 2r(t)\, f(1/2) + \big(1 - 2r(t)\big) f(0). \tag{34}$$
Proof. 
By convexity of $f$, we have for $t \in [0, 1/2]$,
$$f(t) = f\left( 2t \cdot \tfrac{1}{2} + (1-2t) \cdot 0 \right) \leq 2t\, f(1/2) + (1-2t)\, f(0).$$
Thus, we have
$$f(t) - f(0) \leq 2t\left( f(1/2) - f(0) \right).$$
For $t \in [1/2, 1]$, by exchanging $t$ with $1-t$ in the above inequality and using the symmetry of $f$, we have
$$f(t) - f(0) \leq 2(1-t)\left( f(1/2) - f(0) \right).$$
Therefore, we have
$$f(t) - f(0) \leq 2r(t)\left( f(1/2) - f(0) \right),$$
which implies the second inequality in (33). By Lemma 4 (reversed, since $f$ is convex) with $r := 2t - 1 > 0$ (i.e., $t \in [1/2, 1]$), we have
$$f(t) = f\left( 2t \cdot \tfrac{1}{2} + (1-2t) \cdot 0 \right) = f\left( (1+r) \cdot \tfrac{1}{2} - r \cdot 0 \right) \geq (1+r)\, f(1/2) - r\, f(0) = 2t\, f(1/2) + (1-2t)\, f(0).$$
Thus, we have for $t \in [1/2, 1]$
$$f(t) - f(0) \geq 2t\left( f(1/2) - f(0) \right).$$
For $t \in [0, 1/2]$, by exchanging $t$ with $1-t$ in the above inequality, we have
$$f(t) - f(0) \geq 2(1-t)\left( f(1/2) - f(0) \right).$$
Therefore, we have
$$f(t) - f(0) \geq 2R(t)\left( f(1/2) - f(0) \right),$$
which implies the first inequality in (33).
By log-convexity of $f$, $\log f$ is convex, so that applying (33) to $\log f$ we have $f(1/2)^{2R(t)} f(0)^{1-2R(t)} \leq f(t) \leq f(1/2)^{2r(t)} f(0)^{1-2r(t)}$, which gives the third and fourth inequalities of (34). The first inequality of (34) is trivial, since $1 - 2R(t) \leq 0$; the last inequality of (34) is also trivial. Since $0 \leq r(t) \leq \frac{1}{2}$, we have $0 \leq 2r(t) \leq 1$, so we can use the first inequality of (3) as
$$f(1/2)^{2r(t)}\, f(0)^{1-2r(t)} \leq 2r(t)\, f(1/2) + \big(1 - 2r(t)\big) f(0) - r\big(2r(t)\big)\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2,$$
which is the fifth inequality of (34). Finally, we prove the second inequality of (34). Since $\tfrac{1}{2} \leq R(t) \leq 1$, we have $1 \leq 2R(t) \leq 2$. By using (30), we have
$$f(1/2)^{2R(t)}\, f(0)^{1-2R(t)} \geq 2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0) - \big(1 - 2R(t)\big)\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2.$$
 □
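As a numerical illustration (our own choice of test function, not from the paper), take $f(t) = e^{(t-1/2)^2}$, which is positive, log-convex (hence convex) and symmetric about $t = 1/2$:

```python
import math

# Test function (our choice): positive, log-convex, symmetric about t = 1/2.
f = lambda t: math.exp((t - 0.5)**2)

for t in [0.0, 0.1, 0.25, 0.4, 0.5, 0.7, 0.95]:
    r, R = min(t, 1 - t), max(t, 1 - t)
    # inequality (33)
    assert 2*R*f(0.5) + (1 - 2*R)*f(0) <= f(t) + 1e-9
    assert f(t) <= 2*r*f(0.5) + (1 - 2*r)*f(0) + 1e-9
    # middle part of chain (34): geometric-mean bounds from log-convexity
    assert f(0.5)**(2*R) * f(0)**(1 - 2*R) <= f(t) + 1e-9
    assert f(t) <= f(0.5)**(2*r) * f(0)**(1 - 2*r) + 1e-9
```

The tolerances only guard against floating-point ties at $t \in \{0, 1/2, 1\}$, where several bounds hold with equality.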
It is notable that the right-hand inequalities in (33) and (34) are also found in ([17], Lemma 1.1). The following lemma is a counterpart by concavity. However, it does not completely correspond to the above lemma (see Remark 2 below).
Lemma 7.
Let $f : [0,1] \to (0, \infty)$ be a concave function with $f(t) = f(1-t)$ for every $t \in [0,1]$. Then
$$2r(t)\, f(1/2) + \big(1 - 2r(t)\big) f(0) \leq f(t) \leq 2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0). \tag{35}$$
If, in addition, $f$ is log-concave, then
$$2r(t)\, f(1/2) + \big(1 - 2r(t)\big) f(0) - R\big(2r(t)\big)\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2 \leq f(1/2)^{2r(t)}\, f(0)^{1-2r(t)} \leq 2r(t)\, f(1/2) + \big(1 - 2r(t)\big) f(0) \leq f(t) \leq 2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0) \leq f(1/2)^{2R(t)}\, f(0)^{1-2R(t)}. \tag{36}$$
Proof. 
By concavity of $f$, we have for $t \in [0, 1/2]$,
$$f(t) = f\left( 2t \cdot \tfrac{1}{2} + (1-2t) \cdot 0 \right) \geq 2t\, f(1/2) + (1-2t)\, f(0),$$
which implies
$$2t\left( f(1/2) - f(0) \right) \leq f(t) - f(0).$$
For the case $t \in [1/2, 1]$, by exchanging $t$ with $1-t$, we have from the above inequality
$$2(1-t)\left( f(1/2) - f(0) \right) \leq f(1-t) - f(0) = f(t) - f(0).$$
Thus, we have for $t \in [0,1]$ and $r(t) := \min\{t, 1-t\}$,
$$2r(t)\left( f(1/2) - f(0) \right) \leq f(t) - f(0),$$
which implies the first inequality of (35). For the proof of the second inequality of (35), we use Lemma 4. Putting $r := 2t - 1 > 0$ in (29), we have
$$f(t) = f\left( 2t \cdot \tfrac{1}{2} + (1-2t) \cdot 0 \right) = f\left( (1+r) \cdot \tfrac{1}{2} - r \cdot 0 \right) \leq (1+r)\, f(1/2) - r\, f(0) = 2t\, f(1/2) + (1-2t)\, f(0),$$
which means
$$f(t) - f(0) \leq 2t\left( f(1/2) - f(0) \right), \qquad t \in [1/2, 1].$$
For the case $t \in [0, 1/2]$, by exchanging $t$ with $1-t$, we have from the above inequality
$$f(1-t) - f(0) \leq 2(1-t)\left( f(1/2) - f(0) \right), \qquad t \in [0, 1/2].$$
By the symmetry of $f$ about $t = 1/2$, we obtain
$$f(t) - f(0) \leq 2R(t)\left( f(1/2) - f(0) \right),$$
which gives the right-hand side of the inequalities (35).
If $f$ is log-concave, then $\log f$ is concave, and applying (35) to $\log f$ we have $f(t) \leq f(1/2)^{2R(t)} f(0)^{1-2R(t)}$. The third and fourth inequalities in (36) are just from (35). The second and last inequalities in (36) are obtained by the Young inequality and its reverse, respectively.
Since we generally have $0 \leq r(t) \leq \frac{1}{2} \leq R(t) \leq 1$, we have $0 \leq 2r(t) \leq 1$ for $t \in [0,1]$. Then, applying the second inequality of (3), we have
$$2r(t)\, f(1/2) + \big(1 - 2r(t)\big) f(0) - R\big(2r(t)\big)\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2 \leq f(1/2)^{2r(t)}\, f(0)^{1-2r(t)},$$
which shows the first inequality in (36). □
Remark 2.
In general, we have the following supplement to the Young inequality:
$$a^v b^{1-v} \geq v a + (1-v) b, \qquad v \notin (0,1), \ a, b > 0.$$
Thus, we have
$$f(1/2)^{2R(t)}\, f(0)^{1-2R(t)} \geq 2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0).$$
Therefore, it seems difficult to bound $f(1/2)^{2R(t)} f(0)^{1-2R(t)}$ in (36) from above by the use of the two terms $2R(t) f(1/2) + (1 - 2R(t)) f(0)$ and $\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2$ in a simple form.
We can obtain some bounds on $f(1/2)^{2R(t)} f(0)^{1-2R(t)}$ by applying (3)–(6). Here we show one result obtained by the use of (3); we omit the other cases.
Lemma 8.
Let $a$ and $b$ be positive real numbers and let $p \in [1,2]$. Then,
$$p a + (1-p) b + \min\{A_p, B_p\}\left( \sqrt{a} - \sqrt{b} \right)^2 \leq a^p b^{1-p} \leq p a + (1-p) b + \max\{A_p, B_p\}\left( \sqrt{a} - \sqrt{b} \right)^2, \tag{37}$$
where $A_p := (p-1)\left( 1 + 2\sqrt{\frac{a}{b}} \right)$ and $B_p := (2p-3)\, \frac{a}{b} + (p-1)\left( 1 + 2\sqrt{\frac{a}{b}} \right)$.
Proof. 
Since $p - 1 \in [0,1]$, we can use (3) as
$$r(p-1)\left( \sqrt{a} - \sqrt{b} \right)^2 \leq d_{p-1}(a,b) \leq R(p-1)\left( \sqrt{a} - \sqrt{b} \right)^2. \tag{38}$$
Here we have the relation:
$$b \cdot d_p(a,b) - a \cdot d_{p-1}(a,b) = (1-p)(a-b)^2, \qquad (a, b > 0, \ p \in \mathbb{R}). \tag{39}$$
Combining (39) with (38), we obtain
$$a\, r(p-1)\left( \sqrt{a} - \sqrt{b} \right)^2 + (1-p)(a-b)^2 \leq b \cdot d_p(a,b) \leq a\, R(p-1)\left( \sqrt{a} - \sqrt{b} \right)^2 + (1-p)(a-b)^2.$$
Elementary calculations imply
$$p a + (1-p) b + \frac{1}{b}\left[ (p-1)\left( \sqrt{a} + \sqrt{b} \right)^2 - R(p-1)\, a \right]\left( \sqrt{a} - \sqrt{b} \right)^2 \leq a^p b^{1-p} \leq p a + (1-p) b + \frac{1}{b}\left[ (p-1)\left( \sqrt{a} + \sqrt{b} \right)^2 - r(p-1)\, a \right]\left( \sqrt{a} - \sqrt{b} \right)^2.$$
Considering the cases $r(p-1) = \min\{p-1, 2-p\}$ and $R(p-1) = \max\{p-1, 2-p\}$, we obtain the inequalities (37). □
As for the bounds on $f(1/2)^{2R(t)} f(0)^{1-2R(t)}$, we have the following result.
Proposition 2.
Let $t \in [0,1]$ and let $f : [0,1] \to (0, \infty)$ be a function. Then we have
$$2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0) + \min\{A_t, B_t\}\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2 \leq f(1/2)^{2R(t)}\, f(0)^{1-2R(t)} \leq 2R(t)\, f(1/2) + \big(1 - 2R(t)\big) f(0) + \max\{A_t, B_t\}\left( \sqrt{f(1/2)} - \sqrt{f(0)} \right)^2,$$
where
$$A_t := \big(2R(t) - 1\big)\left( 1 + 2\sqrt{\frac{f(1/2)}{f(0)}} \right), \qquad B_t := \big(4R(t) - 3\big)\, \frac{f(1/2)}{f(0)} + \big(2R(t) - 1\big)\left( 1 + 2\sqrt{\frac{f(1/2)}{f(0)}} \right).$$
Proof. 
Since $1 \leq 2R(t) \leq 2$, we can set $p := 2R(t)$, $a := f(1/2)$ and $b := f(0)$ in Lemma 8. □
Example 1.
The so-called binary entropy (e.g., ([18], Example 2.1.1)), defined by
$$h_{\mathrm{bin}}(t) := -t \log t - (1-t) \log(1-t) > 0, \qquad (0 < t < 1),$$
with the convention $0 \log 0 := 0$, satisfies the conditions in Lemma 7, since
$$\frac{d^2 h_{\mathrm{bin}}(t)}{dt^2} = -\frac{1}{t(1-t)} < 0$$
and
$$\frac{d^2}{dt^2} \log h_{\mathrm{bin}}(t) = -\frac{h_{\mathrm{bin}}(t) + t(1-t)\left( \log t - \log(1-t) \right)^2}{t(1-t)\, h_{\mathrm{bin}}(t)^2} < 0.$$
The convention $0 \log 0 := 0$ is standard in information theory, since $\lim_{x \to 0} x \log x = 0$ and $\log x$ is undefined for $x \leq 0$. In information theory, 2 is usually used as the base of the logarithm, but we use $e$ here for mathematical simplicity; this choice is not essential. Applying (35) to the function $h_{\mathrm{bin}}(t)$ with the convention $h_{\mathrm{bin}}(0) := 0$, we have $2(\log_e 2)\, r(t) \leq h_{\mathrm{bin}}(t) \leq 2(\log_e 2)\, R(t)$, which is equivalent to
$$2\min\{t, 1-t\} \leq -t \log_2 t - (1-t) \log_2(1-t) \leq 2\max\{t, 1-t\}. \tag{40}$$
The above inequalities are equivalent to
$$1 - |1 - 2p| \leq H_b(p) \leq 1 + |1 - 2p|, \qquad (0 \leq p \leq 1), \tag{41}$$
where $H_b(p) := -p \log_2 p - (1-p) \log_2(1-p)$ is the usual binary entropy, whose base is 2.
If we do not adopt the standard convention $0 \log 0 := 0$, then we set $f(0) := \lim_{t \to 0} f(t) =: \varepsilon$. Applying the inequalities in (36):
$$f(1/2)^{2r(t)}\, f(0)^{1-2r(t)} \leq f(t) \leq f(1/2)^{2R(t)}\, f(0)^{1-2R(t)},$$
we obtain
$$(\log_e 2)^{2r(t)}\, \varepsilon^{1-2r(t)} \leq h_{\mathrm{bin}}(t) \leq (\log_e 2)^{2R(t)}\, \varepsilon^{1-2R(t)},$$
which implies the following result:
$$\left( \frac{\varepsilon}{\log_e 2} \right)^{1-2r(p)} \leq H_b(p) \leq \left( \frac{\varepsilon}{\log_e 2} \right)^{1-2R(p)}, \qquad (0 \leq p \leq 1).$$
The Fermi–Dirac entropy is defined above by
$$I_1^{FD}(p) := -\sum_{j=1}^n p_j \log p_j - \sum_{j=1}^n (1-p_j) \log(1-p_j).$$
From the bounds on the binary entropy given in (40) and (41), we obtain the interesting bounds on the Fermi–Dirac entropy:
$$2\sum_{j=1}^n \min\{p_j, 1-p_j\} \leq I_1^{FD}(p) \leq 2\sum_{j=1}^n \max\{p_j, 1-p_j\}$$
or, equivalently,
$$n - \sum_{j=1}^n |1 - 2p_j| \leq I_1^{FD}(p) \leq n + \sum_{j=1}^n |1 - 2p_j|.$$
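The binary-entropy bounds above can be checked numerically. Note that (40) and (41) are stated for base-2 logarithms, so the sketch below (ours) uses the base-2 Fermi–Dirac entropy:

```python
import math

def h2(t):
    # binary entropy with base-2 logarithm, convention h2(0) = h2(1) = 0
    return 0.0 if t in (0.0, 1.0) else -t*math.log2(t) - (1 - t)*math.log2(1 - t)

for t in [0.01, 0.2, 0.5, 0.8, 0.99]:
    assert 2*min(t, 1 - t) - 1e-12 <= h2(t) <= 2*max(t, 1 - t) + 1e-12  # (40)

p = [0.5, 0.3, 0.2]
n = len(p)
I_FD2 = sum(h2(x) for x in p)   # base-2 Fermi-Dirac entropy
assert 2*sum(min(x, 1 - x) for x in p) <= I_FD2 + 1e-12
assert I_FD2 <= 2*sum(max(x, 1 - x) for x in p) + 1e-12
assert n - sum(abs(1 - 2*x) for x in p) <= I_FD2 + 1e-12
assert I_FD2 <= n + sum(abs(1 - 2*x) for x in p) + 1e-12
```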

5. Concluding Remarks

We close this paper by providing some remarks on the log-convex function.
Lemma 9.
For $a, b, c, d > 0$ and $\lambda \in [0,1]$, we have
$$a^\lambda b^{1-\lambda} + c^\lambda d^{1-\lambda} \leq (a+c)^\lambda (b+d)^{1-\lambda}. \tag{42}$$
Proof. 
Since the function $f(t) = t^\lambda$ is concave for $\lambda \in [0,1]$, we use the Jensen inequality for positive real numbers $x$ and $y$:
$$\frac{b f(x) + d f(y)}{b + d} \leq f\left( \frac{b x + d y}{b + d} \right).$$
If we take $x := \frac{a}{b}$ and $y := \frac{c}{d}$, then we obtain
$$\frac{b}{b+d}\left( \frac{a}{b} \right)^\lambda + \frac{d}{b+d}\left( \frac{c}{d} \right)^\lambda \leq \left( \frac{a+c}{b+d} \right)^\lambda,$$
which implies (42). □
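Lemma 9 is easy to test numerically; in the sketch below (ours), the fourth positive number is named `e` only to avoid clashing with the function argument `d` used elsewhere:

```python
a, b, c, e = 2.0, 5.0, 3.0, 0.5   # arbitrary positive reals
for lam in [0.0, 0.2, 0.5, 0.8, 1.0]:
    lhs = a**lam * b**(1 - lam) + c**lam * e**(1 - lam)
    rhs = (a + c)**lam * (b + e)**(1 - lam)
    assert lhs <= rhs + 1e-12     # inequality (42)
```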
Theorem 4.
If $f, g : I \to (0, \infty)$ are log-convex functions, then the function $\mu f + \nu g$ is log-convex, where $I \subseteq \mathbb{R}$ and $\mu, \nu > 0$.
Proof. 
Since $f, g$ are log-convex functions, we have for $\lambda \in [0,1]$,
$$\big( \mu f + \nu g \big)\big( \lambda x + (1-\lambda) y \big) = \mu f\big( \lambda x + (1-\lambda) y \big) + \nu g\big( \lambda x + (1-\lambda) y \big) \leq \mu f^\lambda(x) f^{1-\lambda}(y) + \nu g^\lambda(x) g^{1-\lambda}(y) = \big( \mu f(x) \big)^\lambda \big( \mu f(y) \big)^{1-\lambda} + \big( \nu g(x) \big)^\lambda \big( \nu g(y) \big)^{1-\lambda} \leq \big( \mu f(x) + \nu g(x) \big)^\lambda \big( \mu f(y) + \nu g(y) \big)^{1-\lambda},$$
where we used Lemma 9 in the last inequality. Therefore, μ f + ν g is log-convex. □
Let M n be the set of all n × n complex matrices, and let M n + be the set of all positive semi-definite matrices in M n .
Corollary 4.
For $A, B \in M_n^+$, $X \in M_n$, $t \in [0,1]$ and a unitarily invariant norm $\|\cdot\|$, the following functions are log-convex:
$$g_1(t) := \left\| A^t X B^t + A^{1-t} X B^{1-t} \right\|, \qquad g_2(t) := \left\| A^t X B^{1-t} + A^{1-t} X B^t \right\|, \qquad g_3(t) := \left\| A^t + A^{1-t} \right\|, \qquad g_4(t) := \mathrm{tr}\left( A^t X B^{1-t} X^* + A^{1-t} X B^t X^* \right).$$
Proof. 
In [19], it was shown that the functions $f_1(t) := \left\| A^t X B^t \right\|$, $f_2(t) := \left\| A^t X B^{1-t} \right\|$, $f_3(t) := \left\| A^t \right\|$ and $f_4(t) := \mathrm{tr}\left( A^t X B^{1-t} X^* \right)$ are log-convex on $[0,1]$. Thus, the corollary follows from Theorem 4. □
Since the functions $g_i$ are log-convex and satisfy $g_i(t) = g_i(1-t)$, we can apply Lemma 6 to the symmetric functions $g_i$ about the axis $t = \frac{1}{2}$. Therefore, we obtain, for example, the following chain of inequalities for the function $g_1$; similar inequalities hold for the other functions $g_2$, $g_3$ and $g_4$, which we omit. For $A, B \in M_n^+$, $X \in M_n$ and $t \in [0,1]$, we have
$$4R(t)\left\| A^{1/2} X B^{1/2} \right\| + \big(1 - 2R(t)\big)\left\| X + AXB \right\| - \big(1 - 2R(t)\big)\left( \sqrt{2\left\| A^{1/2} X B^{1/2} \right\|} - \sqrt{\left\| X + AXB \right\|} \right)^2 \leq \left( 2\left\| A^{1/2} X B^{1/2} \right\| \right)^{2R(t)} \left\| X + AXB \right\|^{1-2R(t)} \leq \left\| A^t X B^t + A^{1-t} X B^{1-t} \right\| \leq \left( 2\left\| A^{1/2} X B^{1/2} \right\| \right)^{2r(t)} \left\| X + AXB \right\|^{1-2r(t)} \leq 4r(t)\left\| A^{1/2} X B^{1/2} \right\| + \big(1 - 2r(t)\big)\left\| X + AXB \right\| - r\big(2r(t)\big)\left( \sqrt{2\left\| A^{1/2} X B^{1/2} \right\|} - \sqrt{\left\| X + AXB \right\|} \right)^2.$$

Author Contributions

This work was carried out in collaboration among all authors. All authors contributed equally and significantly in writing this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The first author was supported in part by JSPS KAKENHI grant number 21K03341.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers for their important suggestions and careful reading of our manuscript. The authors would like to thank M. Kian, who let us know the essential estimation for the symmetric function in Lemma 6.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; Volume 1, p. 547. [Google Scholar]
  2. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–656. [Google Scholar] [CrossRef] [Green Version]
  3. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  4. Beck, C.; Schlögl, F. Thermodynamics of Chaotic Systems: An Introduction; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  5. Conroy, J.M.; Miller, H.G.; Plastino, A.R. Thermodynamic consistency of the q–deformed Fermi–Dirac distribution in nonextensive thermostatics. Phys. Lett. A 2010, 374, 4581–4584. [Google Scholar] [CrossRef] [Green Version]
  6. Furuichi, S.; Mitroi, F.-C. Mathematical inequalities for some divergences. Phys. A 2012, 391, 388–400. [Google Scholar] [CrossRef] [Green Version]
  7. Furuichi, S.; Minculete, N. Refined Young inequality and its application to divergences. Entropy 2021, 23, 514. [Google Scholar] [CrossRef] [PubMed]
  8. Sababheh, M.; Choi, D. A complete refinement of Young’s inequality. J. Math. Anal. Appl. 2016, 440, 379–393. [Google Scholar] [CrossRef]
  9. Butt, S.I.; Mehmood, N.; Pečarić, D.P.; Pečarić, J.P. New bounds for Shannon, relative and Mandelbrot entropies via Abel-Gontscharoff interpolating polynomial. Math. Inequal. Appl. 2019, 22, 1283–1301. [Google Scholar] [CrossRef] [Green Version]
  10. Tohyama, H.; Kamei, E.; Watanabe, M. The n-th residual relative operator entropy R x , y [ n ] ( A B ) . Adv. Oper. Theory 2021, 6, 18. [Google Scholar] [CrossRef]
  11. Isa, H.; Kamei, E.; Tohyama, H.; Watanabe, M. The n-th relative operator entropies and the n-th operator divergences. Ann. Funct. Anal. 2020, 11, 298–313. [Google Scholar] [CrossRef]
  12. Kittaneh, F.; Manasrah, Y. Improved Young and Heinz inequalities for matrices. J. Math. Anal. Appl. 2010, 361, 262–269. [Google Scholar] [CrossRef] [Green Version]
  13. Alzer, H.; da Fonseca, C.M.; Kovačec, A. Young-type inequalities and their matrix analogues. Linear Multilinear Algebra 2014, 63, 622–635. [Google Scholar] [CrossRef]
  14. Cartwright, D.I.; Field, M.J. A refinement of the arithmetic mean-geometric mean inequality. Proc. Am. Math. Soc. 1978, 71, 36–38. [Google Scholar] [CrossRef]
  15. Furuichi, S.; Ghaemi, M.B.; Gharakhanlu, N. Generalized reverse Young and Heinz inequalities. Bull. Malays. Math. Sci. Soc. 2019, 42, 267–284. [Google Scholar] [CrossRef] [Green Version]
  16. Sababheh, M.; Moslehian, M.S. Advanced refinements of Young and Heinz inequalities. J. Number Theory 2017, 172, 178–199. [Google Scholar] [CrossRef] [Green Version]
  17. Alakhrass, M.; Sababheh, M. Matrix mixed mean inequalities. Results Math. 2019, 74, 2. [Google Scholar] [CrossRef]
  18. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley–Interscience: Hoboken, NJ, USA, 2006. [Google Scholar]
  19. Sababheh, M. Log and harmonically log–convex functions related to matrix norms. Oper. Matrices 2016, 10, 453–465. [Google Scholar] [CrossRef] [Green Version]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
