Bounds for the Differences between Arithmetic and Geometric Means and Their Applications to Inequalities

Abstract: Refining and reversing weighted arithmetic-geometric mean inequalities have been studied in many papers. In this paper, we provide some bounds for the differences between the weighted arithmetic and geometric means, using known inequalities. We improve the results given by Furuichi-Ghaemi-Gharakhanlu and Sababheh-Choi. We also give some bounds on entropies, applying the results in a different approach. We explore certain convex or concave functions that are symmetric about the axis t = 1/2.

It is known that lim_{q→1} R_q(p) = lim_{q→1} H_q(p) = H(p). An interesting differential relation of the Rényi entropy [4] is dR_q(p)/dq = −(1/(1−q)²) ∑_{j=1}^n v_j log(v_j/p_j), which is proportional to the Kullback-Leibler divergence, where v_j = p_j^q / ∑_{k=1}^n p_k^q. In [5], the Fermi-Dirac-Tsallis entropy was introduced as I_q^{FD}(p) = ∑_{j=1}^n (1 − p_j) ln_q(1/(1 − p_j)) for p ∈ ∆_n, and the Bose-Einstein-Tsallis entropy was given in [6] as I_q^{BE}(p) = ∑_{j=1}^n (1 + p_j) ln_q(1/(1 + p_j)).
In the limit q → 1, we have I_1^{FD}(p) = lim_{q→1} I_q^{FD}(p) and I_1^{BE}(p) = lim_{q→1} I_q^{BE}(p), obtained by replacing ln_q with log in the definitions above, where I_1^{FD}(p) and I_1^{BE}(p) are the Fermi-Dirac entropy and the Bose-Einstein entropy, respectively. See [6] and the references therein for their details.
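As a numerical illustration, the following sketch (assuming the standard q-logarithm ln_q x = (x^{1−q} − 1)/(1 − q) and the definitions above; the function names are ours) checks that both the Tsallis and Rényi entropies approach the Shannon entropy as q → 1:

```python
import math

def ln_q(x, q):
    """q-deformed logarithm: ln_q(x) = (x^(1-q) - 1)/(1 - q); ln_1 = log."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    # H_q(p) = sum_j p_j ln_q(1/p_j)
    return sum(pj * ln_q(1.0 / pj, q) for pj in p)

def renyi_entropy(p, q):
    # R_q(p) = log(sum_j p_j^q) / (1 - q), for q != 1
    return math.log(sum(pj ** q for pj in p)) / (1.0 - q)

def shannon_entropy(p):
    return -sum(pj * math.log(pj) for pj in p)

p = [0.5, 0.3, 0.2]
H = shannon_entropy(p)
# As q -> 1, both H_q(p) and R_q(p) approach the Shannon entropy H(p).
for q in (0.999, 1.001):
    assert abs(tsallis_entropy(p, q) - H) < 1e-2
    assert abs(renyi_entropy(p, q) - H) < 1e-2
```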
In [7], we used the expression d_p(a, b) that describes the difference between the weighted arithmetic mean and the weighted geometric mean. It is well known that d_p(a, b) ≥ 0 for p ∈ [0, 1]; this is Young's inequality, also called the weighted arithmetic-geometric mean inequality.
Next, we consider d_p(a, b) for p ∈ R. We easily find the following properties: and In [8], Sababheh and Choi proved that if a and b are positive numbers with p ∉ [0, 1], then d_{1−p}(a, b) ≤ 0.
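These sign properties can be checked numerically. The sketch below assumes the convention d_p(a, b) = (1 − p)a + pb − a^{1−p} b^p for the difference in question (a hypothetical naming on our part); if the paper's convention swaps the weights, the conclusions are unchanged by the symmetry d_p(a, b) = d_{1−p}(b, a):

```python
def d(p, a, b):
    # Assumed convention (hypothetical on our part):
    #   d_p(a, b) = (1 - p)*a + p*b - a**(1 - p) * b**p
    return (1 - p) * a + p * b - a ** (1 - p) * b ** p

a, b = 2.0, 5.0
# Young's inequality: d_p(a, b) >= 0 for p in [0, 1] ...
assert all(d(p, a, b) >= -1e-12 for p in (0.0, 0.25, 0.5, 0.75, 1.0))
# ... and the reverse for p outside [0, 1] (Sababheh-Choi):
assert all(d(p, a, b) <= 1e-12 for p in (-1.0, -0.5, 1.5, 2.0))
```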
Some important results [9-11] estimating bounds on several entropies have recently been established via mathematical inequalities. In this paper, we provide results on several entropies by applying new and improved inequalities.

Bounds of d · (·, ·) and Inequalities for Entropies
We first rewrite the Tsallis entropy, the Rényi entropy, the Fermi-Dirac-Tsallis entropy, and the Bose-Einstein-Tsallis entropy using the notation d_·(·, ·). Lemma 1. For p ∈ ∆_n and q ≥ 0 with q ≠ 1, we have Proof. The proof follows by direct calculation.
(i) Simple calculations show the statement in (i).
(ii) Since we have the relation: we obtain, with the result of (i), (iv) We can calculate: Thus, we have We now give relations on d_·(·, ·). Lemma 2. Let a, b > 0. If p ∈ R, then the following equalities hold: In several papers [7,12-14], we find estimates of the bounds of d_p(a, b). For this purpose, we use the following inequalities (a)-(d).
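The symmetry and homogeneity relations of Lemma 2 that survive in the text can be verified on random inputs; again we assume the convention d_p(a, b) = (1 − p)a + pb − a^{1−p} b^p:

```python
import random

def d(p, a, b):
    # Assumed convention: d_p(a, b) = (1 - p)*a + p*b - a**(1 - p) * b**p
    return (1 - p) * a + p * b - a ** (1 - p) * b ** p

rng = random.Random(0)
for _ in range(100):
    a, b = rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0)
    p, c = rng.uniform(-2.0, 3.0), rng.uniform(0.5, 2.0)
    # Symmetry: d_p(a, b) = d_{1-p}(b, a).
    assert abs(d(p, a, b) - d(1 - p, b, a)) < 1e-9
    # Positive homogeneity: d_p(c*a, c*b) = c * d_p(a, b).
    assert abs(d(p, c * a, c * b) - c * d(p, a, b)) < 1e-9
```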
Taking into account (1) and (2), setting b = 1, and replacing p by q in inequalities (a)-(c) above, we obtain the following.
and min{q, p}, for 0 < a ≤ 1 and 0 < p, q < 1. If we take a = p_j < 1 for all j ∈ {1, ..., n} in inequalities (a_1)-(c_1) above and sum from 1 to n, we deduce the following inequalities (a_2)-(c_2) on d_·(·, ·). (a_2) Using point (i) of Lemma 2 and inequalities (a_2)-(c_2), we deduce a series of inequalities for the Tsallis entropy H_q(p) in the following (A)-(C) as a theorem: and which implies that H_q(p) is decreasing in q.
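The claimed monotonicity of the Tsallis entropy in q is easy to confirm numerically; a sketch with the standard closed form H_q(p) = (1 − ∑_j p_j^q)/(q − 1):

```python
import math

def tsallis_entropy(p, q):
    # H_q(p) = (1 - sum_j p_j**q)/(q - 1); the q -> 1 limit is Shannon entropy.
    if abs(q - 1.0) < 1e-12:
        return -sum(pj * math.log(pj) for pj in p)
    return (1.0 - sum(pj ** q for pj in p)) / (q - 1.0)

p = [0.6, 0.3, 0.1]
qs = [0.2, 0.5, 0.8, 1.0, 1.5, 2.0, 3.0]
values = [tsallis_entropy(p, q) for q in qs]
# H_q(p) is decreasing in q.
assert all(x >= y for x, y in zip(values, values[1:]))
```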
In the limit q → 1, we find some bounds for the Shannon entropy as a corollary of the above theorem.

Corollary 1. We have the following inequalities for the Shannon entropy H(p): and
Using points (ii) and (iii) of Lemma 2 and inequalities (a_2)-(c_2), we deduce several inequalities for the Rényi entropy R_q(p) and for the Fermi-Dirac-Tsallis entropy I_q^{FD}(p) in the following. In the limit q → 1, we find some bounds for the Fermi-Dirac entropy as a corollary of the above theorem.

Corollary 2.
We have the following inequalities for the Fermi-Dirac entropy I_1^{FD}(p): Proof. From inequality (7), we find and Using inequalities (19) and (20) and the definition of the Bose-Einstein-Tsallis entropy I_q^{BE}(p) given above, we find which implies inequality (16). From inequality (8), we have: Summing from 1 to n, we deduce inequality (17). We apply inequality (9) in the following way: Summing from 1 to n, we deduce inequality (18).

Corollary 3.
We have the following inequalities for the Bose-Einstein entropy I_1^{BE}(p): (p_j + 1) log_2(p_j + 1) − p_j log_2 p_j.
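The Fermi-Dirac-Tsallis and Bose-Einstein-Tsallis entropies appearing in Corollaries 2 and 3 can be evaluated directly; the sketch below (using the formulas as stated in the introduction, with function names of our choosing) also confirms the continuity at q = 1:

```python
import math

def ln_q(x, q):
    return math.log(x) if abs(q - 1.0) < 1e-12 else (x ** (1 - q) - 1) / (1 - q)

def fermi_dirac_tsallis(p, q):
    # I^FD_q(p) = sum_j (1 - p_j) ln_q(1/(1 - p_j))
    return sum((1 - pj) * ln_q(1 / (1 - pj), q) for pj in p)

def bose_einstein_tsallis(p, q):
    # I^BE_q(p) = sum_j (1 + p_j) ln_q(1/(1 + p_j))
    return sum((1 + pj) * ln_q(1 / (1 + pj), q) for pj in p)

p = [0.5, 0.3, 0.2]
# Continuity at q = 1: the q-entropies approach their q = 1 counterparts.
assert abs(fermi_dirac_tsallis(p, 0.999) - fermi_dirac_tsallis(p, 1.0)) < 1e-2
assert abs(bose_einstein_tsallis(p, 0.999) - bose_einstein_tsallis(p, 1.0)) < 1e-2
```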

New Characterizations of Young's Inequality
Young's inequality is given by: In this section, we give further bounds on d_·(·, ·).
Lemma 3. Let a and b be positive real numbers, and let p ∈ R. Then and Proof. Using Lemma 2 for p ∈ R, we have Replacing p by 2p and a by √(ab), we get Repeating the above substitutions inductively, for k ≥ 1, we have Therefore, summing the above relations over k ∈ {1, ..., n}, we obtain the relation of the statement. Applying equality (21) and taking into account that d_p(a, b) = d_{1−p}(b, a), we deduce equality (22).
(i) For p ∈ (0, 1/2^n), we have where r(·) and R(·) are defined above. (ii) For p ∈ (0, 1/2^n), we have (25) (iii) For p ∈ (0, 1/2^n), we have Proof. We use inequalities (a)-(c), where we replace p by 2^n p and a by (a b^{2^n − 1})^{1/2^n}.
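Since r(·) and R(·) play a central role here, the following sketch checks the Kittaneh-Manasrah bounds r(p)(√a − √b)² ≤ d_p(a, b) ≤ R(p)(√a − √b)² on random inputs, again under the assumed convention d_p(a, b) = (1 − p)a + pb − a^{1−p} b^p:

```python
import random

def d(p, a, b):
    # Assumed convention: d_p(a, b) = (1 - p)*a + p*b - a**(1 - p) * b**p
    return (1 - p) * a + p * b - a ** (1 - p) * b ** p

rng = random.Random(1)
for _ in range(200):
    a, b = rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0)
    p = rng.uniform(0.0, 1.0)
    r, R = min(p, 1 - p), max(p, 1 - p)
    gap = (a ** 0.5 - b ** 0.5) ** 2
    # Kittaneh-Manasrah refinement and reverse of Young's inequality:
    #   r*(sqrt(a)-sqrt(b))^2 <= d_p(a, b) <= R*(sqrt(a)-sqrt(b))^2
    assert r * gap - 1e-10 <= d(p, a, b) <= R * gap + 1e-10
```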

The Connection between d · (·, ·) and Different Types of Convexity
In the following, we use the Kittaneh-Manasrah inequality noted in (3). We prepare some lemmas to state our results.
for all x, y ∈ J and all r > 0 with (1 + r)x − ry ∈ J. If f is a convex function, then the reversed inequality above holds.
Proof. If f is concave, then we have The following result is given in ([15], Corollary 1). It is a supplement to the first inequality of (3).
Note that the supplement to the second inequality of (3) does not hold in general: To state the following result, we review log-convexity and log-concavity. A function f : I → (0, ∞), where I ⊂ R, is called log-convex if f((1 − λ)x + λy) ≤ f^{1−λ}(x) f^λ(y) for all x, y ∈ I and λ ∈ [0, 1]. If the reversed inequality holds, then f is called log-concave.
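The definition of log-convexity can be probed numerically; the sketch below (a sampled test of our own, not from the paper) checks the defining inequality for e^{x²} (log-convex, since x² is convex) and e^{−x²} (log-concave):

```python
import math, random

def log_convex_on_samples(f, lo, hi, trials=200):
    # Samples the defining inequality
    #   f((1-l)x + l*y) <= f(x)**(1-l) * f(y)**l
    # and reports whether it held on every sample.
    rng = random.Random(2)
    for _ in range(trials):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        l = rng.uniform(0.0, 1.0)
        if f((1 - l) * x + l * y) > f(x) ** (1 - l) * f(y) ** l + 1e-9:
            return False
    return True

# exp(x^2) is log-convex (its logarithm, x^2, is convex) ...
assert log_convex_on_samples(lambda x: math.exp(x * x), -1.0, 1.0)
# ... while exp(-x^2) is log-concave, so the sampled test fails for it.
assert not log_convex_on_samples(lambda x: math.exp(-x * x), -1.0, 1.0)
```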
In the following two lemmas, we deal with functions symmetric about 1/2 (i.e., f(t) = f(1 − t) for every t ∈ [0, 1]). The results are applied to a concrete symmetric function related to entropy at the end of this section.

Proof.
By convexity of f, we have, for t ∈ [0, 1/2], For t ∈ [1/2, 1], exchanging t with 1 − t in the above inequality, we have Therefore, we have which implies the second inequality in (33). By Lemma 4 with r := 2t − 1 > 0 (i.e., t ∈ (1/2, 1]), we have Thus, we have, for t ∈ [1/2, 1], For t ∈ [0, 1/2], exchanging t with 1 − t in the above inequality, we have which implies the first inequality in (33). By log-convexity of f, log f is convex, so we have f(t) ≤ f(1/2)^{2r(t)} f(0)^{1−2r(t)}, which is the fourth inequality of (34). The third inequality follows from (33), and the second is obtained by Young's inequality. The last inequality of (34) is trivial. Since 0 ≤ r(t) ≤ 1/2, we have 0 ≤ 2r(t) ≤ 1, so we can use the first inequality of (3) as which is the fifth inequality of (34). Finally, we prove the first inequality of (34). By using (30), we have It is notable that the right-hand inequalities in (33) and (34) are also found in ([17], Lemma 1.1). The following lemma is a counterpart by concavity. However, it does not completely correspond to the above lemma (see Remark 2 below).
If, in addition, f is log-concave, then Proof. By concavity of f, we have, for t ∈ [0, 1/2], For the case t ∈ [1/2, 1], exchanging t with 1 − t, we have from the above inequality Thus, we have, for t ∈ [0, 1] and r(t) := min{t, 1 − t}, which implies the first inequality of (35). For the proof of the second inequality of (35), we use Lemma 4. Putting r := 2t − 1 > 0 in (29), we have For the case t ∈ [0, 1/2], exchanging t with 1 − t, we have from the above inequality By the symmetry of f about t = 1/2, we obtain which gives the right-hand side of the inequalities in (35). If f is log-concave, then from the first inequality of (35) applied to the concave function log f, we have f(1/2)^{2R(t)} f(0)^{1−2R(t)} ≥ f(t), which shows the fourth inequality in (36). The third inequality follows directly from (35). The second and last inequalities in (36) are obtained by Young's inequality.
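The affine bounds for symmetric convex and concave functions proved above can be tested on two standard examples: the binary entropy (concave and symmetric about t = 1/2) and (t − 1/2)² (convex and symmetric about t = 1/2). This sketch checks the inequalities 2r(t)f(1/2) + (1 − 2r(t))f(0) ≤ f(t) ≤ 2R(t)f(1/2) + (1 − 2R(t))f(0) in the concave case, and their reversals in the convex case (the exact equation numbering of (33) and (35) is not recoverable here, so this is our reading of the proofs):

```python
import math

def r(t): return min(t, 1 - t)
def R(t): return max(t, 1 - t)

def bound(w, f0, f_half):
    # The affine expression 2*w*f(1/2) + (1 - 2*w)*f(0) from the lemmas.
    return 2 * w * f_half + (1 - 2 * w) * f0

def h(t):
    # Binary entropy: concave on [0, 1] and symmetric about t = 1/2.
    return 0.0 if t in (0.0, 1.0) else -t * math.log(t) - (1 - t) * math.log(1 - t)

def g(t):
    # (t - 1/2)^2: convex on [0, 1] and symmetric about t = 1/2.
    return (t - 0.5) ** 2

for k in range(1, 100):
    t = k / 100
    # Concave case: lower bound with r(t), upper bound with R(t).
    assert bound(r(t), h(0.0), h(0.5)) - 1e-12 <= h(t) <= bound(R(t), h(0.0), h(0.5)) + 1e-12
    # Convex case: the inequalities reverse.
    assert bound(R(t), g(0.0), g(0.5)) - 1e-12 <= g(t) <= bound(r(t), g(0.0), g(0.5)) + 1e-12
```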