
*Information* **2013**, *4*(2), 198–239; doi:10.3390/info4020198


## Abstract

Jensen-Shannon, J-divergence and arithmetic-geometric mean divergences are three classical divergence measures known in the information theory and statistics literature. These three divergence measures bear interesting inequality relations with three non-logarithmic measures known as triangular discrimination, Hellinger's divergence and symmetric chi-square divergence. In 2003, Eves studied seven means from a geometrical point of view: harmonic, geometric, arithmetic, Heronian, contra-harmonic, root-mean-square and centroidal. In this paper, we obtain new inequalities among the non-negative differences arising from these seven means. Connections with generalized triangular discrimination, and some new generating measures with their exponential representations, are also presented.

## 1. Introduction

Let

$$\Gamma_n = \Big\{ P = (p_1, p_2, \ldots, p_n) \;\Big|\; p_i > 0,\ \sum_{i=1}^n p_i = 1 \Big\}, \quad n \geq 2,$$

be the set of all complete finite discrete probability distributions. For all $P, Q \in \Gamma_n$, let us consider two groups of measures:

**• Logarithmic Measures**

$$I(P\|Q) = \frac{1}{2}\sum_{i=1}^n \Big[ p_i \ln\Big(\frac{2p_i}{p_i+q_i}\Big) + q_i \ln\Big(\frac{2q_i}{p_i+q_i}\Big) \Big],$$

$$J(P\|Q) = \sum_{i=1}^n (p_i - q_i) \ln\Big(\frac{p_i}{q_i}\Big),$$

$$T(P\|Q) = \sum_{i=1}^n \Big(\frac{p_i+q_i}{2}\Big) \ln\Big(\frac{p_i+q_i}{2\sqrt{p_i q_i}}\Big).$$

The above three measures are classical divergence measures in the literature on information theory and statistics, known as the Jensen-Shannon divergence, the J-divergence and the arithmetic-geometric mean divergence, respectively. These three measures bear the following two relations:

- (i)
$J(P\|Q) = 4\big[I(P\|Q) + T(P\|Q)\big]$;

- (ii)
$T(P\|Q) = \tfrac{1}{4}J(P\|Q) - I(P\|Q)$.
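The identity $J(P\|Q) = 4[I(P\|Q) + T(P\|Q)]$ follows by expanding the logarithms, and it can also be checked numerically. The following is a minimal Python sketch; the two-point distributions are illustrative, not taken from the paper:

```python
import math

def jensen_shannon(P, Q):
    # I(P||Q): Jensen-Shannon divergence
    return sum(p / 2 * math.log(2 * p / (p + q)) + q / 2 * math.log(2 * q / (p + q))
               for p, q in zip(P, Q))

def j_divergence(P, Q):
    # J(P||Q): J-divergence (Jeffreys)
    return sum((p - q) * math.log(p / q) for p, q in zip(P, Q))

def ag_divergence(P, Q):
    # T(P||Q): arithmetic-geometric mean divergence
    return sum((p + q) / 2 * math.log((p + q) / (2 * math.sqrt(p * q)))
               for p, q in zip(P, Q))

# illustrative pair of binary distributions
P, Q = (0.7, 0.3), (0.4, 0.6)
I, J, T = jensen_shannon(P, Q), j_divergence(P, Q), ag_divergence(P, Q)
assert abs(J - 4 * (I + T)) < 1e-12  # J = 4 (I + T)
```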

**• Non-logarithmic Measures**

$$\Delta(P\|Q) = \sum_{i=1}^n \frac{(p_i - q_i)^2}{p_i + q_i},$$

$$h(P\|Q) = \frac{1}{2}\sum_{i=1}^n \big(\sqrt{p_i} - \sqrt{q_i}\big)^2$$

and

$$\Psi(P\|Q) = \sum_{i=1}^n \frac{(p_i - q_i)^2 (p_i + q_i)}{p_i q_i}.$$

The above three measures $\Delta$, $h$ and $\Psi$ are respectively known as triangular discrimination, Hellinger's divergence and symmetric chi-square divergence. These measures satisfy the following inequalities:

$$\tfrac{1}{4}\Delta(P\|Q) \leq h(P\|Q) \leq \tfrac{1}{16}\Psi(P\|Q).$$

If we consider all the six measures, they satisfy [1] the following inequalities:

$$\tfrac{1}{4}\Delta(P\|Q) \leq I(P\|Q) \leq h(P\|Q) \leq \tfrac{1}{8}J(P\|Q) \leq T(P\|Q) \leq \tfrac{1}{16}\Psi(P\|Q).$$
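A refinement chain of this type from [1], $\tfrac{1}{4}\Delta \leq I \leq h \leq \tfrac{1}{8}J \leq T \leq \tfrac{1}{16}\Psi$, can be probed numerically. The sketch below is ours; the random sampling scheme is an arbitrary choice:

```python
import math
import random

def js_div(P, Q):   # I: Jensen-Shannon divergence
    return sum(p / 2 * math.log(2 * p / (p + q)) + q / 2 * math.log(2 * q / (p + q))
               for p, q in zip(P, Q))

def j_div(P, Q):    # J: J-divergence
    return sum((p - q) * math.log(p / q) for p, q in zip(P, Q))

def ag_div(P, Q):   # T: arithmetic-geometric mean divergence
    return sum((p + q) / 2 * math.log((p + q) / (2 * math.sqrt(p * q)))
               for p, q in zip(P, Q))

def tri(P, Q):      # Delta: triangular discrimination
    return sum((p - q) ** 2 / (p + q) for p, q in zip(P, Q))

def hel(P, Q):      # h: Hellinger's divergence
    return sum((math.sqrt(p) - math.sqrt(q)) ** 2 for p, q in zip(P, Q)) / 2

def psi(P, Q):      # Psi: symmetric chi-square divergence
    return sum((p - q) ** 2 * (p + q) / (p * q) for p, q in zip(P, Q))

def chain(P, Q):
    # the six scaled measures, in the claimed increasing order
    return [tri(P, Q) / 4, js_div(P, Q), hel(P, Q),
            j_div(P, Q) / 8, ag_div(P, Q), psi(P, Q) / 16]

random.seed(0)
for _ in range(200):
    p_raw = [random.random() + 0.01 for _ in range(4)]
    q_raw = [random.random() + 0.01 for _ in range(4)]
    P = [x / sum(p_raw) for x in p_raw]
    Q = [x / sum(q_raw) for x in q_raw]
    v = chain(P, Q)
    assert all(v[i] <= v[i + 1] + 1e-12 for i in range(len(v) - 1))
```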

**• Generalized Symmetric Divergence Measures**

Let us consider the measure

$$J_s(P\|Q) = [s(s-1)]^{-1} \sum_{i=1}^n \big[ p_i^s q_i^{1-s} + p_i^{1-s} q_i^s - (p_i + q_i) \big], \quad s \neq 0, 1,$$

for all $P, Q \in \Gamma_n$. The measure $J_s$ is the generalized J-divergence extensively studied in Taneja [2,3]. It admits the following particular cases:

- (i)
$J_{1/2}(P\|Q) = 8h(P\|Q)$;

- (ii)
$J_1(P\|Q) = J_0(P\|Q) = J(P\|Q)$, in the limiting sense;

- (iii)
$J_2(P\|Q) = \tfrac{1}{2}\Psi(P\|Q)$.
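Assuming $J_s$ takes the form $[s(s-1)]^{-1}\sum_i [p_i^s q_i^{1-s} + p_i^{1-s} q_i^s - (p_i+q_i)]$ studied in Taneja's earlier work (an assumption on our part for this illustration), the particular cases can be checked numerically:

```python
import math

def J_s(P, Q, s):
    # assumed form of the generalized J-divergence, valid for s != 0, 1
    return sum(p ** s * q ** (1 - s) + p ** (1 - s) * q ** s - p - q
               for p, q in zip(P, Q)) / (s * (s - 1))

def hellinger(P, Q):
    return sum((math.sqrt(p) - math.sqrt(q)) ** 2 for p, q in zip(P, Q)) / 2

def psi(P, Q):
    return sum((p - q) ** 2 * (p + q) / (p * q) for p, q in zip(P, Q))

def j_div(P, Q):
    return sum((p - q) * math.log(p / q) for p, q in zip(P, Q))

# illustrative pair of binary distributions
P, Q = (0.7, 0.3), (0.4, 0.6)
assert abs(J_s(P, Q, 0.5) - 8 * hellinger(P, Q)) < 1e-12   # J_{1/2} = 8 h
assert abs(J_s(P, Q, 2.0) - psi(P, Q) / 2) < 1e-12         # J_2 = Psi / 2
assert abs(J_s(P, Q, 1 + 1e-7) - j_div(P, Q)) < 1e-5       # s -> 1 limit gives J
```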

Again consider another generalized measure

for all . The measure is known as the generalized arithmetic-geometric mean divergence. It also admits the following particular cases:

- (i)
;

- (ii)
;

- (iii)
;

- (iv)
.

We observe that the six measures given in the above inequality appear as particular cases of the above two generalized measures. These two generalizations are mainly generalizations of the logarithmic measures $I$, $J$ and $T$. The non-negativity of the arithmetic-geometric mean divergence is based on the well-known arithmetic and geometric means, i.e., we can write it as

$$T(P\|Q) = \sum_{i=1}^n A(p_i, q_i) \ln\Big(\frac{A(p_i, q_i)}{G(p_i, q_i)}\Big),$$

where $A(a,b) = (a+b)/2$ and $G(a,b) = \sqrt{ab}$ are the arithmetic and geometric means, respectively. Moreover, the measure $I(P\|Q)$ can also be written in terms of the arithmetic mean:

$$I(P\|Q) = \frac{1}{2}\sum_{i=1}^n \Big[ p_i \ln\Big(\frac{p_i}{A(p_i, q_i)}\Big) + q_i \ln\Big(\frac{q_i}{A(p_i, q_i)}\Big) \Big].$$

On the other hand, these means play important roles in different areas, especially in information theory and statistics. Eves [4] studied interesting geometrical interpretations of several means, famous as Eves' seven means.

Our aim here is to present generalizations of the non-logarithmic measures, starting from the triangular discrimination. Connections between Eves' seven means and the non-logarithmic measures are also given. We do this through inequalities, where some new generalized means are also presented.

## 2. Seven Means

Let $a, b > 0$ be two positive numbers. Eves [4] studied the geometrical interpretation of the following seven means:

Arithmetic mean: $A(a,b) = \dfrac{a+b}{2}$;

Geometric mean: $G(a,b) = \sqrt{ab}$;

Harmonic mean: $H(a,b) = \dfrac{2ab}{a+b}$;

Heronian mean: $N(a,b) = \dfrac{a + \sqrt{ab} + b}{3}$;

Contra-harmonic mean: $C(a,b) = \dfrac{a^2 + b^2}{a+b}$;

Root-mean-square: $S(a,b) = \sqrt{\dfrac{a^2 + b^2}{2}}$;

Centroidal mean: $R(a,b) = \dfrac{2(a^2 + ab + b^2)}{3(a+b)}$.

We can easily verify that the above seven means satisfy the following inequality:

$$H(a,b) \leq G(a,b) \leq N(a,b) \leq A(a,b) \leq R(a,b) \leq S(a,b) \leq C(a,b).$$

Let us write $M(P\|Q) = \sum_{i=1}^n M(p_i, q_i)$, where $M$ stands for any of the above seven means; then we have

$$H(P\|Q) \leq G(P\|Q) \leq N(P\|Q) \leq A(P\|Q) \leq R(P\|Q) \leq S(P\|Q) \leq C(P\|Q).$$

Since $G(a,b) = \sqrt{A(a,b)\,H(a,b)}$ and $C(a,b) = 2A(a,b) - H(a,b)$, the means $G$, $N$, $C$, $R$ and $S$ may also be written in terms of the means $A$ and $H$.

#### 2.1. Inequalities among Differences of Means

For simplicity, let us write

- (i)
;

- (ii)
;

- (iii)
.

The measures $\Delta$ and $h$ are the well-known triangular discrimination [5] and Hellinger's distance [6], given by $\Delta(P\|Q) = 2\sum_{i=1}^n [A(p_i,q_i) - H(p_i,q_i)]$ and $h(P\|Q) = \sum_{i=1}^n [A(p_i,q_i) - G(p_i,q_i)]$, respectively. Not all the measures appearing in the above pyramid (4) are convex in the pair $(P, Q)$. Recently, the author [7] proved the following theorem for the convex measures.
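Componentwise, $A(a,b) - H(a,b) = \frac{(a-b)^2}{2(a+b)}$ and $A(a,b) - G(a,b) = \frac{(\sqrt{a}-\sqrt{b})^2}{2}$, so the summed mean differences reproduce $\tfrac{1}{2}\Delta(P\|Q)$ and $h(P\|Q)$. A short check (the distributions are illustrative):

```python
import math

A = lambda a, b: (a + b) / 2          # arithmetic mean
G = lambda a, b: math.sqrt(a * b)     # geometric mean
H = lambda a, b: 2 * a * b / (a + b)  # harmonic mean

def tri(P, Q):        # Delta: triangular discrimination
    return sum((p - q) ** 2 / (p + q) for p, q in zip(P, Q))

def hellinger(P, Q):  # h: Hellinger's distance
    return sum((math.sqrt(p) - math.sqrt(q)) ** 2 for p, q in zip(P, Q)) / 2

P, Q = (0.7, 0.3), (0.4, 0.6)
sum_AH = sum(A(p, q) - H(p, q) for p, q in zip(P, Q))
sum_AG = sum(A(p, q) - G(p, q) for p, q in zip(P, Q))
assert abs(sum_AH - tri(P, Q) / 2) < 1e-12    # sum(A - H) = Delta / 2
assert abs(sum_AG - hellinger(P, Q)) < 1e-12  # sum(A - G) = h
```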

**Theorem 2.1.** The following inequalities hold:

The proof of the above theorem is based on the following two lemmas [8,9].

**Lemma 2.1.** Let $f : I \subset \mathbb{R}_+ \to \mathbb{R}$ be a convex and differentiable function satisfying $f(1) = 0$. Consider the function

$$\varphi_f(a,b) = a\, f\!\Big(\frac{b}{a}\Big), \quad a, b > 0;$$

then the function $\varphi_f(a,b)$ is convex in $\mathbb{R}_+^2$. Additionally, if $f'(1) = 0$, then the following inequality holds:

$$\varphi_f(a,b) \geq 0.$$
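As an illustration of Lemma 2.1, the triangular discrimination arises from the generator $f(x) = (x-1)^2/(x+1)$ (our choice of example), which satisfies $f(1) = 0$, $f'(1) = 0$ and $f''(x) = 8/(x+1)^3 > 0$. Applying the construction with the roles of the arguments exchanged, $\varphi_f(a,b) = b f(a/b) = (a-b)^2/(a+b)$, which should be jointly convex and nonnegative; a numerical midpoint-convexity probe:

```python
import random

def f(x):
    # generator of the triangular discrimination: f(1) = 0, f''(x) = 8/(x+1)^3 > 0
    return (x - 1) ** 2 / (x + 1)

def phi(a, b):
    # phi_f(a, b) = b f(a/b); here this equals (a - b)^2 / (a + b)
    return b * f(a / b)

random.seed(1)
for _ in range(1000):
    a1, b1 = random.uniform(0.01, 2), random.uniform(0.01, 2)
    a2, b2 = random.uniform(0.01, 2), random.uniform(0.01, 2)
    mid = phi((a1 + a2) / 2, (b1 + b2) / 2)
    # midpoint convexity: phi at the midpoint never exceeds the average
    assert mid <= (phi(a1, b1) + phi(a2, b2)) / 2 + 1e-12
```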

**Lemma 2.2.** Let $f_1, f_2 : I \subset \mathbb{R}_+ \to \mathbb{R}$ be two convex functions satisfying the assumptions:

- (i)
$f_1(1) = 0$ and $f_2(1) = 0$;

- (ii)
$f_1$ and $f_2$ are twice differentiable in $\mathbb{R}_+$;

- (iii)
there exist real constants $\alpha, \beta$ with $\alpha < \beta$ such that $f_2''(x) > 0$ and

$$\alpha \leq \frac{f_1''(x)}{f_2''(x)} \leq \beta, \quad x > 0,$$

for all $x > 0$; then we have the inequalities:

$$\alpha\, \varphi_{f_2}(a,b) \leq \varphi_{f_1}(a,b) \leq \beta\, \varphi_{f_2}(a,b),$$

for all $a, b > 0$, where the function $\varphi_{(\cdot)}(a,b)$ is as defined in Lemma 2.1.

#### 2.2. Generalized Triangular Discrimination

For all consider the following measure generalizing triangular discrimination

In particular, we have

,

,

,

and

From the above, we observe that the expression (6) contains some well-known measures, such as those of Jain and Srivastava [10] and of Kumar and Johnson [11], the latter being symmetric to the measure of [12]. One of the cases is considered here for the first time. The generalization (6) considered above is a little different from the one considered by Topsoe [13]:

.

Furthermore, it yields further known measures as particular cases.

**Convexity:** Let us now prove the convexity of the measure (6). We can write , , where

.

The second order derivative of the function is given by

,

where

From (7) we observe that we are unable to find a unique value of for which the function is positive. But for at least , , , we have . Also, we have . Thus, according to Lemma 2.1, the measure is convex for all , . Testing individually, we can check the convexity of other measures as well; for example, is convex.

**Monotonicity:** Calculating the first-order derivative of the function with respect to , we have

.

We can easily check that for all , . This proves that the function is monotonically increasing with respect to . This gives

Also we know that and . Thus combining Equations (5) and (8), we have

As a part of (9), let us consider the following inequalities:

Our purpose is to study further inequalities by considering possible nonnegative differences from (10).
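The number of nonnegative differences arising from a chain of ordered measures grows quadratically with the chain's length; for instance, ten ordered quantities give $\binom{10}{2} = 45$ pairwise differences, the count met in the first stage. A minimal sketch (the values are illustrative placeholders):

```python
from itertools import combinations

# ten ordered quantities u_1 <= ... <= u_10 stand in for an ordered chain of
# divergence measures; the numeric values here are illustrative only
u = sorted(0.1 * k for k in range(1, 11))

# every pair (u_i, u_j) with i < j yields a nonnegative difference u_j - u_i
diffs = [b - a for a, b in combinations(u, 2)]
assert len(diffs) == 45       # C(10, 2) = 45 nonnegative differences
assert all(d >= 0 for d in diffs)
```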

## 3. New Inequalities

In this section we derive inequalities in several stages. In the first stage, the measures considered are the nonnegative differences arising from (10). The process is repeated until a single final measure remains.

#### 3.1. First Stage

For simplicity, let us write the expression (10) as

,

,

,

,

,

,

,

,

and

.

Calculating the second-order derivatives of the above functions, we have

,

,

,

,

,

;

,

,

and

.

The Inequalities (11) again admit 45 nonnegative differences. These differences satisfy some natural inequalities given in a **pyramid** below:

**pyramid:**

In view of the above equalities, we are left with only 27 nonnegative convex measures, and these are connected with each other by the inequalities given in the theorem below.

**Theorem 3.1.** The following sequences of inequalities hold:

**Proof.** We will prove the above theorem by parts.

**1.** For : We shall apply two approaches to prove this result.

**1st Approach:** Let us consider a function

,

After simplifications, we have

,

and

.

By the application of Lemma 2.2, we get the required result.

**2nd Approach:** We shall use an alternative approach to prove the above result. We know that . In order to prove the result, we need to show that . By considering the difference , we have

,

where

Since , we get the required result.

For simplicity, from now onward we shall use only the second approach.

**2.** For : Let us consider a function . After simplifications, we have

,

and

**3.** For : Let us consider a function . After simplifications, we have

,

and

.

**4.** For : Let us consider a function . After simplifications, we have

,

and

.

**5.** For : Let us consider a function . After simplifications, we have

,

and

.

**6.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**7.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**8.** For : Let us consider a function . After simplifications, we have

,

and

.

**9.** For : Let us consider a function . After simplifications, we have

,

and

.

**10.** For : Let us consider a function . After simplifications, we have

,

and

.

**11.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**12.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**13.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**14.** For : Let us consider a function . After simplifications, we have

,

and

.

**15.** For : Let us consider a function . After simplifications, we have

,

and

.

**16.** For : Let us consider a function . After simplifications, we have

,

and

.

**17.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**18.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**19.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**20.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**21.** For : Let us consider a function . After simplifications, we have

,

and

.

**22.** For : Let us consider a function . After simplifications, we have

,

and

.

**23.** For : Let us consider a function . After simplifications, we have

,

and

.

**24.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**25.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**26.** For : Let us consider a function . After simplifications, we have

,

and

,

with

**27.** For : Let us consider a function . After simplifications, we have

,

and

,

where

Combining the results 1–27, we get the proof of (15).

**Remarks.** Based on the equalities given in (14), we have the following proportionality relations:

- (i)
;

- (ii)
;

- (iii)
;

- (iv)
;

- (v)
;

- (vi)
;

- (vii)
;

- (viii)
;

- (ix)
;

- (x)
;

- (xi)
;

- (xii)
.

#### 3.1.1. Reverse Inequalities

We observe from the above results that the first four inequalities appearing in the pyramid hold with equality up to multiplicative constants. The other four satisfy reverse inequalities, given by

- (i)
- (ii)
- (iii)
;

- (iv)
.

#### 3.2. Second Stage

In this stage we shall derive inequalities based on the measures arising from the first stage. The above 27 parts generate some new measures, given by

**Theorem 3.2.** The following inequalities hold:

**Proof.** We shall prove the above theorem following similar lines to Theorem 3.1. Since we need the second derivatives of the functions given by (16)–(29) to prove the theorem, their values are as follows:

,

,

,

,

,

,

,

,

,

,

,

and

.

We will prove the above theorem by parts. In view of the procedure used in Theorem 3.1, we shall write the proof of each part in a summarized way.

**1.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**2.** For : Let us consider a function . After simplifications, we have

,

and

.

**3.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**4.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**5.** For : Let us consider a function . After simplifications, we have

.

This gives . Let us consider now, and

,

where

**6.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**7.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**8.** For : Let us consider a function . After simplifications, we have

,

and

.

**9.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**10.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**11.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**12.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**13.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**14.** For : Let us consider a function . After simplifications, we have

,

and

.

#### 3.3. Third Stage

The proofs of the above 14 parts give us some new measures. These are given by

,

,

,

,

,

,

,

,

,

and

.

The theorem below connects only the first nine measures. The other two will be given later.

**Theorem 3.3.** The following inequalities hold:

**Proof.** We will prove the inequalities (45) by parts, using the same approach applied in the above theorems. Without specifying, we will frequently use the second derivatives , .

**1.** For : Let us consider a function . After simplifications, we have

,

and

.

**2.** For : Let us consider a function . After simplifications, we have

,

and

.

**3.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**4.** For : Let us consider a function . After simplifications, we have

,

and

.

**5.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**6.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**7.** For : Let us consider a function . After simplifications, we have

,

and

.

**8.** For : Let us consider a function . After simplifications, we have

,

and

.

Combining the parts 1–8, we get the proof of the Inequalities (45).

#### 3.4. Fourth Stage

We still have more measures to compare, i.e., to . This comparison is given in the theorem below. The second derivatives of the functions given by (46)–(58) are as follows:

,

,

and

.

**Theorem 3.4.** The following inequalities hold:

**Proof.** We shall prove the above theorem by parts.

**1.** For : Let us consider a function . After simplifications, we have

,

and

,

where

**2.** For : Let us consider a function . After simplifications, we have

,

and

.

**3.** For : Let us consider a function . After simplifications, we have

,

and

.

**4.** For : Let us consider a function . After simplifications, we have

,

and

.

**Remark:** Interestingly, in all four cases only a single measure is left, i.e., given by

#### 3.5. Equivalent Expressions

The measures appearing in the proofs of Theorems 3.2–3.4 can be written in terms of the measures appearing in the Inequalities (9). Equivalent versions of these measures follow.

**Measures appearing in Theorem 3.2.** We can write

,

,

,

,

,

,

,

,

,

.

**Measures appearing in Theorems 3.3 and 3.4.** We can write

,

,

,

,

,

,

,

,

,

,

,

,

,

.

## 4. Generating Divergence Measures and Exponential Representations

Some of the measures given in Section 2 can be written in generating forms. Below are the generating measures.

#### 4.1. First Generalization of Triangular Discrimination

For all , let us consider the following measures

In particular, we have

,

,

and

.

The Expression (52) gives the first generalization of the measure . Now we will prove its convexity. We can write , , where

The second order derivative of the function is given by

,

where

.

For all , , , we have . Also we have . In view of Lemma 2.1, the measure is convex for all , .

Now, we shall present the exponential representation of the measure (52), based on the function given by (53). Let us consider a linear combination of convex functions,

i.e.,

,

where are constants. For simplicity, we choose

Thus, we have

which gives us

As a consequence of (54), we have the following exponential triangular discrimination:

#### 4.2. Second Generalization of Triangular Discrimination

For all , let us consider the following measures

In particular, we have

and

.

The Expression (56) gives the second generalization of the measure . Now we will prove its convexity. We can write , , where

.

The second order derivative of the function is given by

,

where

.

For all , , , we have . Also we have . In view of Lemma 2.1, the measure is convex for all , .

Following similar lines to (54) and (55), the exponential representation of the measure is given by

.

#### 4.3. First Generalization of the Measure

For all , let us consider the following measures

In particular, we have

,

,

and

.

The expression (57) gives the first parametric generalization of the measure given by (3). We will now prove its convexity. We can write , , where

.

The second order derivative of the function is given by

,

where

.

For all , , , we have . Also we have . In view of Lemma 2.1, the measure is convex for all , .

Following similar lines to (54) and (55), the exponential representation of the measure is given by

.

#### 4.4. Second Generalization of the Measure

For all , let us consider the following measures

In particular, we will have

and

.

The Expression (58) gives the second generalization of the measure given by (3). We will now prove its convexity. We can write , , where

.

The second order derivative of the function is given by

,

where

.

For all , , , we have . Also we have . In view of Lemma 2.1, the measure is convex for all , .

Following similar lines to (54) and (55), the exponential representation of the measure is given by

.

#### 4.5. Generalization of Hellinger's Discrimination

For all , let us consider the following measures

In particular, we have

,

,

,

and

.

The measure (59) gives the generalized Hellinger's discrimination. Let us now prove its convexity. We can write , , where

.

The second order derivative of the function is given by

,

where

.

For all , , , we have . Also . In view of Lemma 2.1, the measure is convex for all , .

.

#### 4.6. New Measure

For all , let us consider the following measures

In particular, we will have

,

,

,

and

.

We will now prove the convexity of the measure (60). We can write , , where

.

The second order derivative of the function is given by

,

where

.

For all , , , we have . Also we have . In view of Lemma 2.1, the measure is convex for all , .

.

**Remarks:**

- (i)
The first 10 measures appearing in the second pyramid (13) represent the same measure (14) and are the same as . The last measure, given by (51), is the same as . The measure (51) is the only one that appears in all four parts of Theorem 3.4. Both these measures generate the interesting measure shown in (60).

- (ii)
The measure appears in the work of Dragomir et al. [14]. An improvement over their work can be seen in Taneja [9].

- (iii)
Following similar lines to (54) and (55), the exponential representation of the principal measure appearing in (6) is given by

We observe that the expression (61) is different from the ones obtained above. Applications of the generating measures (6), (52), (56), (57), (58), (59) and (60), along with their exponential representations, deserve further study.

## Acknowledgements

The author is thankful to the anonymous reviewers for their valuable comments and suggestions on an earlier version of the paper. The author also thanks Atul Kumar Taneja for reviewing the English.

## References and notes

1. Taneja, I.J. Refinement inequalities among symmetric divergence measures. Austr. J. Math. Anal. Appl. **2005**, 2, 1–23.
2. Taneja, I.J. New developments in generalized information measures. In Advances in Imaging and Electron Physics; Hawkes, P.W., Ed.; Elsevier: New York, NY, USA, 1995; Volume 91, pp. 37–135.
3. Taneja, I.J. On symmetric and non-symmetric divergence measures and their generalizations. In Advances in Imaging and Electron Physics; Hawkes, P.W., Ed.; Elsevier: New York, NY, USA, 2005; Volume 138, pp. 177–250.
4. Eves, H. Means appearing in geometrical figures. Math. Mag. **2003**, 76, 292–294.
5. LeCam, L. Asymptotic Methods in Statistical Decision Theory; Springer: New York, NY, USA, 1986.
6. Hellinger, E. Neue Begründung der Theorie der quadratischen Formen von unendlichen vielen Veränderlichen. J. Reine Angew. Math. **1909**, 136, 210–271.
7. Taneja, I.J. Inequalities having seven means and proportionality relations. 2012. Available online: http://arxiv.org/abs/1203.2288/ (accessed on 7 April 2013).
8. Taneja, I.J.; Kumar, P. Relative information of type s, Csiszar's f-divergence, and information inequalities. Inf. Sci. **2004**, 166, 105–125.
9. Taneja, I.J. Refinement of inequalities among means. J. Combin. Inf. Syst. Sci. **2006**, 31, 357–378.
10. Jain, K.C.; Srivastava, A. On symmetric information divergence measures of Csiszar's f-divergence class. J. Appl. Math. Stat. Inf. **2007**, 3, 85–102.
11. Kumar, P.; Johnson, A. On a symmetric divergence measure and information inequalities. J. Inequal. Pure Appl. Math. **2005**, 6, 1–13.
12. Taneja, I.J. Bounds on triangular discrimination, harmonic mean and symmetric chi-square divergences. J. Concr. Appl. Math. **2006**, 4, 91–111.
13. Topsoe, F. Some inequalities for information divergence and related measures of discrimination. IEEE Trans. Inf. Theory **2000**, 46, 1602–1609.
14. Dragomir, S.S.; Sunde, J.; Buse, C. New inequalities for Jeffreys divergence measure. Tamsui Oxf. J. Math. Sci. **2000**, 16, 295–309.

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).