Article

Jensen–Inaccuracy Information Measure

Omid Kharazmi, Faezeh Shirazinia, Francesco Buono and Maria Longobardi
1 Department of Statistics, Faculty of Mathematical Sciences, Vali-e-Asr University of Rafsanjan, Rafsanjan 7718897111, Iran
2 Dipartimento di Matematica e Applicazioni “Renato Caccioppoli”, Università degli Studi di Napoli Federico II, 80138 Naples, Italy
3 Dipartimento di Biologia, Università degli Studi di Napoli Federico II, 80138 Naples, Italy
* Author to whom correspondence should be addressed.
Current address: Institute of Statistics, RWTH Aachen University, 52056 Aachen, Germany.
Entropy 2023, 25(3), 483; https://doi.org/10.3390/e25030483
Submission received: 17 February 2023 / Revised: 7 March 2023 / Accepted: 8 March 2023 / Published: 10 March 2023
(This article belongs to the Special Issue Measures of Information III)

Abstract

The purpose of the paper is to introduce the Jensen–inaccuracy measure and examine its properties. Furthermore, some results on the connections between the inaccuracy and Jensen–inaccuracy measures and some other well-known information measures are provided. Moreover, in three different optimization problems, the arithmetic mixture distribution provides optimal information based on the inaccuracy information measure. Finally, two real examples from image processing are studied and some numerical results in terms of the inaccuracy and Jensen–inaccuracy information measures are obtained.

1. Introduction

In recent decades, several researchers have studied information theory and its applications in various fields such as statistics, physics, economics, and engineering. Shannon entropy [1] is a fundamental quantity in information theory and it is defined for a continuous random variable X having probability density function (PDF) f on support X as
$$H(X) = -\int_{\mathcal{X}} f(x)\log f(x)\, dx,$$
where log denotes the natural logarithm. Throughout the paper, the support will be omitted in all the integrals. Several extensions of Shannon entropy have been considered by many researchers (see, for instance, the Rényi and Tsallis entropies [2,3]), which provide one-parameter families of entropy measures.
Moreover, several divergence measures based on entropy have been defined in order to measure similarity and dissimilarity between two density functions. Among them, chi-square, Kullback–Leibler, Rényi divergences, and their extensions have been introduced; for further details, see Nielsen and Nock [4], Di Crescenzo and Longobardi [5], and Van Erven and Harremos [6].
Let X and Y be two random variables with density functions f and g, respectively. Then, the Kullback–Leibler divergence [7] is defined as
$$KL(f,g) = \int f(x)\log\frac{f(x)}{g(x)}\, dx,$$
provided the integral exists. Because of some limitations of Shannon entropy, Kerridge [8] proposed a measure, known as the inaccuracy measure (or the Kerridge measure). Consider f and g as two probability density functions. Then, the inaccuracy measure between f and g is given by
$$K(f,g) = H(f) + KL(f,g) = -\int f(x)\log g(x)\, dx.$$
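As a minimal numerical sketch (assuming two normal densities and simple Riemann-sum integration on a finite grid), the following Python snippet evaluates $H(f)$, $KL(f,g)$, and $K(f,g)$ and checks the identity $K(f,g) = H(f) + KL(f,g)$.

```python
# Sketch: Shannon entropy, Kullback-Leibler divergence, and the Kerridge inaccuracy
# measure for two assumed normal densities, computed by Riemann-sum integration.
import numpy as np
from scipy.stats import norm

x = np.linspace(-15, 15, 100_001)          # integration grid
dx = x[1] - x[0]
integral = lambda y: np.sum(y) * dx        # simple Riemann-sum quadrature

f = norm.pdf(x, loc=0.0, scale=1.0)        # density f
g = norm.pdf(x, loc=1.0, scale=2.0)        # density g

H_f   = -integral(f * np.log(f))           # Shannon entropy H(f)
KL_fg = integral(f * np.log(f / g))        # Kullback-Leibler divergence KL(f, g)
K_fg  = -integral(f * np.log(g))           # inaccuracy measure K(f, g)

print(H_f, 0.5 * np.log(2 * np.pi * np.e))   # H of N(0, 1) is 0.5*log(2*pi*e) ~ 1.4189
print(K_fg, H_f + KL_fg)                     # identity K(f, g) = H(f) + KL(f, g)
```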
As with Shannon entropy, several extensions of the inaccuracy measure have been developed. For more details, see Kayal and Sunoj [9] and the references therein.
In the literature, the class of Jensen divergences has been studied in an extensive way as a general technique in developing information measures. Recently, other information measures such as the Jensen–Shannon, Jensen–Fisher, and Jensen–Gini measures have been studied as generalizations of well-known quantities. For further details, see Lin [10], Sánchez-Moreno et al. [11], and Mehrali et al. [12].
However, the link between the inaccuracy measure and the Jensen concept has remained unexplored so far. Therefore, the main motivation of this paper is to present the Jensen–inaccuracy information (JII) measure and its properties. Let us remark that the introduction of the JII measure is motivated by the fact that it can be expressed as a mixture of well-known divergence measures and that it is closely related to the arithmetic–geometric divergence measure. Moreover, our new measure obtains better results when studying the similarity between elements in the field of image quality assessment. Furthermore, we establish some results on the connection between the inaccuracy and Jensen–inaccuracy information measures and other measures of discrimination, such as Rényi entropy, average entropy, and Rényi divergence. Next, we show that the arithmetic mixture distribution provides optimal information under three different optimization problems based on the inaccuracy information measure. In the following, some well-known and useful information measures are recalled.
An extended version of the Shannon entropy measure, for $\alpha > 0$ and $\alpha \neq 1$, is defined by Rényi [2] as
$$R_{\alpha}(f) = \frac{\log \int_{\mathcal{X}} f^{\alpha}(x)\, dx}{1-\alpha}. \tag{3}$$
Several applications of Rényi entropy have been discussed in the literature.
Furthermore, the Rényi divergence of order $\alpha > 0$, $\alpha \neq 1$, between density functions f and g is defined by
$$D_{\alpha}(f,g) = \frac{\log \int f^{\alpha}(x)\, g^{1-\alpha}(x)\, dx}{\alpha-1}. \tag{4}$$
The information measures in (3) and (4) become Shannon entropy and Kullback–Leibler divergence measures, respectively, when α tends to 1.
Another important dissimilarity measure between two continuous density functions f and g is the chi-square divergence, defined as
$$\chi^{2}(f,g) = \int \frac{\left(f(x)-g(x)\right)^{2}}{f(x)}\, dx.$$
In a similar manner, we can define χ 2 ( g , f ) .
The rest of this paper is organized as follows. In Section 2, we first introduce the Jensen–inaccuracy information (JII) measure. Then, we show that JII can be expressed as a mixture of Kullback–Leibler divergence measures. We show that the Jensen–inaccuracy information measure has a close connection with the arithmetic–geometric divergence measure. Furthermore, we present an upper bound for the JII measure in terms of chi-square divergence measures. The $(w,\alpha)$-Jensen–inaccuracy measure is also introduced in this section as an extended version of JII. We study the inaccuracy information measure for the escort and generalized escort distributions in Section 3. In Section 4, we consider the average entropy, define the average inaccuracy measure, and give some results in this regard. In Section 5, we show that the arithmetic mixture distribution provides optimal information under three different optimization problems in terms of the inaccuracy measure. Then, in Section 6, two real examples are presented in order to study a problem in image processing, and some numerical results are given in terms of the inaccuracy and Jensen–inaccuracy information measures. More precisely, our measure is useful for detecting similarity between images. Finally, in Section 7, concluding remarks are provided.

2. The Jensen–Inaccuracy Measure

In this section, we introduce the Jensen–inaccuracy measure and then provide a representation for this information measure in terms of Kullback–Leibler divergence measure. Furthermore, we explore the possible connection between Jensen–inaccuracy and arithmetic–geometric divergence measures. We also provide an upper bound for the Jensen–inaccuracy measure based on chi-square divergence measure. At the end of this section, we introduce ( w , α ) -Jensen–inaccuracy and establish a result for this extended measure.
Definition 1. 
Let f , f 0 , and f 1 be three density functions. Then, the Jensen–inaccuracy measure between f 0 and f 1 with respect to f is defined by
$$JK(f,f_0,f_1) = \frac{1}{2}K(f,f_0) + \frac{1}{2}K(f,f_1) - K\!\left(f,\frac{f_0+f_1}{2}\right). \tag{6}$$
Theorem 1. 
The JK ( f , f 0 , f 1 ) inaccuracy measure in (6) is non-negative.
Proof. 
From the convexity of the function $-\log x$, $x > 0$, we have
$$-\log\frac{f_0(x)+f_1(x)}{2} \leq -\frac{1}{2}\log f_0(x) - \frac{1}{2}\log f_1(x). \tag{7}$$
Now, by multiplying both sides of (7) by f ( x ) and then integrating with respect to x, we obtain
$$-\int f(x)\log\frac{f_0(x)+f_1(x)}{2}\, dx \leq -\int \frac{f(x)}{2}\log f_0(x)\, dx - \int \frac{f(x)}{2}\log f_1(x)\, dx,$$
as required. □

2.1. The Jensen–Inaccuracy Measure and its Connection to Kullback–Leibler Divergence

Here, we provide a representation for the Jensen–inaccuracy measure in terms of Kullback–Leibler divergence measure.
Theorem 2. 
A representation for the Jensen–inaccuracy measure in (6), based on a mixture of Kullback–Leibler divergence measures, is given by
$$JK(f,f_0,f_1) = \frac{1}{2}KL(f,f_0) + \frac{1}{2}KL(f,f_1) - KL\!\left(f,\frac{f_0+f_1}{2}\right).$$
Proof. 
According to the definition of the Jensen–inaccuracy measure and the relations
$$K(f,f_0) = KL(f,f_0) + H(f),$$
$$K(f,f_1) = KL(f,f_1) + H(f),$$
$$K\!\left(f,\frac{f_0+f_1}{2}\right) = KL\!\left(f,\frac{f_0+f_1}{2}\right) + H(f),$$
we have
$$JK(f,f_0,f_1) = \frac{1}{2}K(f,f_0) + \frac{1}{2}K(f,f_1) - K\!\left(f,\frac{f_0+f_1}{2}\right) = \frac{KL(f,f_0)+H(f)}{2} + \frac{KL(f,f_1)+H(f)}{2} - KL\!\left(f,\frac{f_0+f_1}{2}\right) - H(f) = \frac{1}{2}KL(f,f_0) + \frac{1}{2}KL(f,f_1) - KL\!\left(f,\frac{f_0+f_1}{2}\right),$$
as required. □
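A minimal numerical sketch, assuming three normal densities, computes $JK(f,f_0,f_1)$ both from Definition 1 and from the Kullback–Leibler representation of Theorem 2; the two values agree and are non-negative, in line with Theorem 1.

```python
# Sketch: Jensen-inaccuracy measure via Definition 1 and via Theorem 2 for assumed densities.
import numpy as np
from scipy.stats import norm

x = np.linspace(-20, 20, 100_001)
dx = x[1] - x[0]
integral = lambda y: np.sum(y) * dx

f  = norm.pdf(x, 0.0, 1.5)     # reference density f
f0 = norm.pdf(x, -1.0, 1.0)    # density f0
f1 = norm.pdf(x, 2.0, 2.0)     # density f1
fm = 0.5 * (f0 + f1)           # equal-weight arithmetic mixture

K  = lambda p, q: -integral(p * np.log(q))      # inaccuracy K(p, q)
KL = lambda p, q: integral(p * np.log(p / q))   # Kullback-Leibler divergence KL(p, q)

JK_def = 0.5 * K(f, f0) + 0.5 * K(f, f1) - K(f, fm)      # Definition 1
JK_kl  = 0.5 * KL(f, f0) + 0.5 * KL(f, f1) - KL(f, fm)   # Theorem 2
print(JK_def, JK_kl)   # equal up to numerical error and non-negative (Theorem 1)
```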
Next, we extend the definition of the Jensen–inaccuracy measure to the case of n + 1 density functions.
Definition 2. 
Let $X_1,\ldots,X_n$, and Y be random variables with density functions $f_1,\ldots,f_n$, and f, respectively, and let $\alpha_1,\ldots,\alpha_n$ be non-negative real numbers such that $\sum_{i=1}^{n}\alpha_i = 1$. Then, the extended Jensen–inaccuracy measure is defined as
$$JK_{\alpha}(f,f_1,\ldots,f_n) = \sum_{i=1}^{n}\alpha_i K(f,f_i) - K\!\left(f,\sum_{i=1}^{n}\alpha_i f_i\right). \tag{9}$$
Theorem 3. 
The $JK_{\alpha}(f,f_1,\ldots,f_n)$ information measure in (9) can be written in terms of the Kullback–Leibler divergence as
$$JK_{\alpha}(f,f_1,\ldots,f_n) = \sum_{i=1}^{n}\alpha_i KL(f,f_i) - KL\!\left(f,\sum_{i=1}^{n}\alpha_i f_i\right).$$
Proof. 
From the definition of $JK_{\alpha}(f,f_1,\ldots,f_n)$ in (9), we have
$$JK_{\alpha}(f,f_1,\ldots,f_n) = \sum_{i=1}^{n}\alpha_i K(f,f_i) - K\!\left(f,\sum_{i=1}^{n}\alpha_i f_i\right) = \sum_{i=1}^{n}\alpha_i\big[KL(f,f_i)+H(f)\big] - KL\!\left(f,\sum_{i=1}^{n}\alpha_i f_i\right) - H(f) = \sum_{i=1}^{n}\alpha_i KL(f,f_i) - KL\!\left(f,\sum_{i=1}^{n}\alpha_i f_i\right),$$
as required. □

2.2. Connection between Jensen–Inaccuracy and Arithmetic–Geometric Divergence Measures

Now, we explore the connection between the Jensen–inaccuracy and arithmetic–geometric divergence measures. Then, we provide an upper bound for the Jensen–inaccuracy measure based on chi-square divergence measure.
Let f 0 and f 1 be two density functions. Then, the arithmetic–geometric divergence measure is defined as
$$T(f_0,f_1) = \int \frac{f_0(x)+f_1(x)}{2}\log\frac{f_0(x)+f_1(x)}{2\sqrt{f_0(x)\, f_1(x)}}\, dx.$$
For more details, see Taneja [13].
In the following definition, we provide an extension of the arithmetic–geometric divergence measure to n density functions.
Definition 3. 
Let $X_1,\ldots,X_n$ be random variables with density functions $f_1,\ldots,f_n$, respectively, and let $\alpha_1,\ldots,\alpha_n$ be non-negative real numbers such that $\sum_{i=1}^{n}\alpha_i = 1$. Then, the extended arithmetic–geometric divergence measure is defined as
$$T(f_1,\ldots,f_n;\alpha) = \int f_T(x)\log\frac{f_T(x)}{\prod_{i=1}^{n} f_i^{\alpha_i}(x)}\, dx, \tag{12}$$
where $f_T(x) = \sum_{i=1}^{n}\alpha_i f_i(x)$ and $\alpha$ is a brief notation to denote $\alpha_1,\ldots,\alpha_n$.
In the following, we explore the connection between the Jensen–inaccuracy measure in (9) and the arithmetic–geometric divergence measure in (12).
Theorem 4. 
If $f = f_T$, then we have
$$JK_{\alpha}(f,f_1,\ldots,f_n) = T(f_1,\ldots,f_n;\alpha),$$
where $T(f_1,\ldots,f_n;\alpha)$ is the extended arithmetic–geometric divergence measure defined in (12) and $\alpha$ is a brief notation to denote $\alpha_1,\ldots,\alpha_n$.
Proof. 
From the assumption f = f T and Theorem 3, we have
$$JK_{\alpha}(f_T,f_1,\ldots,f_n) = \sum_{i=1}^{n}\alpha_i KL(f_T,f_i) = \sum_{i=1}^{n}\alpha_i \int f_T(x)\log\frac{f_T(x)}{f_i(x)}\, dx = \int f_T(x)\log\prod_{i=1}^{n}\left(\frac{f_T(x)}{f_i(x)}\right)^{\!\alpha_i} dx = \int f_T(x)\log\frac{f_T(x)}{\prod_{i=1}^{n} f_i^{\alpha_i}(x)}\, dx = T(f_1,\ldots,f_n;\alpha),$$
as required. □
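A minimal numerical sketch of Theorem 4, assuming three normal components and weights $(0.2, 0.3, 0.5)$: with the reference density equal to the arithmetic mixture $f_T$, the extended Jensen–inaccuracy measure coincides with the extended arithmetic–geometric divergence in (12).

```python
# Sketch: JK_alpha(f_T, f_1, ..., f_n) versus the extended arithmetic-geometric divergence.
import numpy as np
from scipy.stats import norm

x = np.linspace(-25, 25, 100_001)
dx = x[1] - x[0]
integral = lambda y: np.sum(y) * dx

alphas = np.array([0.2, 0.3, 0.5])                          # mixing weights, sum to 1
fs = [norm.pdf(x, m, s) for m, s in [(-2.0, 1.0), (0.0, 1.5), (3.0, 2.0)]]
fT = sum(a * fi for a, fi in zip(alphas, fs))               # arithmetic mixture f_T

K = lambda p, q: -integral(p * np.log(q))                   # inaccuracy measure

JK_alpha = sum(a * K(fT, fi) for a, fi in zip(alphas, fs)) - K(fT, fT)
geom = np.prod([fi**a for a, fi in zip(alphas, fs)], axis=0)   # weighted geometric mean
T_div = integral(fT * np.log(fT / geom))                       # divergence (12)
print(JK_alpha, T_div)                                         # agree up to numerical error
```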
Remark 1. 
From the inequality $\log x \leq x - 1$ for $x > 0$, it is easy to obtain an upper bound for $JK_{\alpha}(f_T,f_1,\ldots,f_n)$ based on the chi-square divergence measure:
$$JK_{\alpha}(f_T,f_1,\ldots,f_n) = \sum_{i=1}^{n}\alpha_i KL(f_T,f_i) \leq \sum_{i=1}^{n}\alpha_i \chi^{2}(f_i,f_T).$$

2.3. The ( w , α ) -Jensen–Inaccuracy Measure

Here, the $(w,\alpha)$-Jensen–inaccuracy measure is defined. Moreover, we establish a result for this extended measure.
Definition 4. 
Let f, f 0 and f 1 be three density functions. Then, the ( w , α ) -Jensen–inaccuracy measure between f 0 and f 1 with respect to f is defined by
$$JK_{w,\alpha}(f,f_0,f_1) = w\, K\big(f,(1-p)f_0+p f_1\big) + (1-w)\, K\big(f,p f_0+(1-p)f_1\big) - K\big(f,(1-\bar{p})f_0+\bar{p} f_1\big),$$
where $\bar{p} = wp + (1-w)(1-p)$.
Note that
$$(1-\bar{p})f_0(x) + \bar{p} f_1(x) = w\big[(1-p)f_0(x)+p f_1(x)\big] + (1-w)\big[p f_0(x)+(1-p)f_1(x)\big].$$
Theorem 5. 
A representation for ( w , α ) -Jensen–inaccuracy measure based on Kullback–Leibler divergence is given by
$$JK_{w,\alpha}(f,f_0,f_1) = w\, KL\big(f,(1-p)f_0+p f_1\big) + (1-w)\, KL\big(f,p f_0+(1-p)f_1\big) - KL\big(f,(1-\bar{p})f_0+\bar{p} f_1\big).$$
Proof. 
It can be proven in the same manner as Theorem 2. □

3. Inaccuracy Information Measure of the Escort and Generalized Escort Distributions

The escort distribution is a basic notion in non-extensive statistical mechanics and coding theory; it is closely related to the Tsallis and Rényi entropies. For more details, see Bercher [14]. We show that the inaccuracy measure between an arbitrary density and its corresponding escort density can be expressed as a mixture of Shannon and Rényi entropies. Furthermore, another finding, associated with the inaccuracy measure between a generalized escort distribution and each of its components, reveals some interesting connections in terms of Kullback–Leibler and Rényi divergences.
Let f be a density function. Then, the escort density with order α > 0 associated with f is defined as
$$f_{\alpha}(x) = \frac{f^{\alpha}(x)}{\int f^{\alpha}(x)\, dx}.$$
Theorem 6. 
Let f be a density function and f α be an escort density corresponding to f. Then, for α > 0 , we obtain:
(i)
$$K(f,f_{\alpha}) = \alpha H(f) + (1-\alpha)R_{\alpha}(f);$$
(ii)
$$K(f_{\alpha},f) = \frac{1}{\alpha}H(f_{\alpha}) + \frac{\alpha-1}{\alpha}R_{\alpha}(f),$$
where R α ( f ) is Rényi entropy in (3).
Proof. 
From the definition of inaccuracy measure between f and f α , we have
$$K(f,f_{\alpha}) = -\int f(x)\log f_{\alpha}(x)\, dx = -\int f(x)\log\frac{f^{\alpha}(x)}{\int f^{\alpha}(x)\, dx}\, dx = -\alpha\int f(x)\log f(x)\, dx + \int f(x)\log\!\left(\int f^{\alpha}(x)\, dx\right) dx = \alpha H(f) + \log\int f^{\alpha}(x)\, dx = \alpha H(f) + (1-\alpha)R_{\alpha}(f),$$
which proves ( i ) . Next
$$K(f_{\alpha},f) = -\int f_{\alpha}(x)\log f(x)\, dx = -\int \frac{f^{\alpha}(x)}{\int f^{\alpha}(x)\, dx}\log f(x)\, dx = -\frac{1}{\alpha}\int \frac{f^{\alpha}(x)}{\int f^{\alpha}(x)\, dx}\log f^{\alpha}(x)\, dx = -\frac{1}{\alpha}\int \frac{f^{\alpha}(x)}{\int f^{\alpha}(x)\, dx}\log\frac{f^{\alpha}(x)}{\int f^{\alpha}(x)\, dx}\, dx - \frac{1}{\alpha}\int \frac{f^{\alpha}(x)}{\int f^{\alpha}(x)\, dx}\log\!\left(\int f^{\alpha}(x)\, dx\right) dx = \frac{1}{\alpha}H(f_{\alpha}) + \frac{\alpha-1}{\alpha}R_{\alpha}(f),$$
which proves ( i i ) . □
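A minimal numerical sketch of Theorem 6, assuming a normal density f and $\alpha = 2$, comparing both sides of (i) and (ii) under grid-based integration.

```python
# Sketch: inaccuracy between a density and its escort density of order alpha (Theorem 6).
import numpy as np
from scipy.stats import norm

x = np.linspace(-20, 20, 100_001)
dx = x[1] - x[0]
integral = lambda y: np.sum(y) * dx

alpha = 2.0
f = norm.pdf(x, 0.0, 1.3)
f_alpha = f**alpha / integral(f**alpha)                    # escort density of order alpha

H = lambda p: -integral(p * np.log(p))                     # Shannon entropy
K = lambda p, q: -integral(p * np.log(q))                  # inaccuracy measure
R_alpha = np.log(integral(f**alpha)) / (1.0 - alpha)       # Renyi entropy of f, see (3)

print(K(f, f_alpha), alpha * H(f) + (1 - alpha) * R_alpha)                 # part (i)
print(K(f_alpha, f), H(f_alpha) / alpha + (alpha - 1) / alpha * R_alpha)   # part (ii)
```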
Let f and g be two probability density functions. Then, the generalized escort density for $\alpha \in (0,1)$ is defined as
$$h_{\alpha}(x) = \frac{f^{\alpha}(x)\, g^{1-\alpha}(x)}{\int f^{\alpha}(x)\, g^{1-\alpha}(x)\, dx}.$$
Theorem 7. 
The inaccuracy information measure between f and the generalized escort density h α is given by
$$K(f,h_{\alpha}) = (\alpha-1)D_{\alpha}(f,g) - \alpha\, KL(f,g) + K(f,g),$$
where D α ( f , g ) is the relative Rényi entropy defined by
$$D_{\alpha}(f,g) = \frac{\log\int f^{\alpha}(x)\, g^{1-\alpha}(x)\, dx}{\alpha-1}.$$
Proof. 
From the definition of K ( f , h α ) , we derive
$$K(f,h_{\alpha}) = -\int f(x)\log h_{\alpha}(x)\, dx = -\int f(x)\log\frac{f^{\alpha}(x)\, g^{1-\alpha}(x)}{\int f^{\alpha}(x)\, g^{1-\alpha}(x)\, dx}\, dx = -\int f(x)\log\!\left(\frac{f(x)}{g(x)}\right)^{\!\alpha} dx + \int f(x)\log\!\left(\int f^{\alpha}(x)\, g^{1-\alpha}(x)\, dx\right) dx + K(f,g) = -\alpha\int f(x)\log\frac{f(x)}{g(x)}\, dx + \log\int f^{\alpha}(x)\, g^{1-\alpha}(x)\, dx + K(f,g) = (\alpha-1)D_{\alpha}(f,g) - \alpha\, KL(f,g) + K(f,g),$$
as required. □
Theorem 8. 
Let $f_0$ and $f_1$ be two density functions and consider the arithmetic and geometric mixture densities, respectively, as $f_a(x) = p f_0(x) + (1-p) f_1(x)$ and $f_g(x) = \frac{f_0^{p}(x)\, f_1^{1-p}(x)}{\int f_0^{p}(x)\, f_1^{1-p}(x)\, dx}$. Then, a lower bound for $K(f_a,f_g)$ is given by
$$K(f_a,f_g) \geq H(f_a) - (1-p)\, D_p(f_0,f_1).$$
Proof. 
From the definition of $K(f_a,f_g)$, by using the arithmetic mean–geometric mean inequality, we have
$$K(f_a,f_g) = -\int f_a(x)\log f_g(x)\, dx = -\int\big[p f_0(x)+(1-p)f_1(x)\big]\log\frac{f_0^{p}(x)\, f_1^{1-p}(x)}{\int f_0^{p}(x)\, f_1^{1-p}(x)\, dx}\, dx = -\int\big[p f_0(x)+(1-p)f_1(x)\big]\log\big(f_0^{p}(x)\, f_1^{1-p}(x)\big)\, dx + \log\int f_0^{p}(x)\, f_1^{1-p}(x)\, dx \geq -\int\big[p f_0(x)+(1-p)f_1(x)\big]\log\big(p f_0(x)+(1-p)f_1(x)\big)\, dx + \log\int f_0^{p}(x)\, f_1^{1-p}(x)\, dx = H\big(p f_0+(1-p)f_1\big) - (1-p)\, D_p(f_0,f_1) = H(f_a) - (1-p)\, D_p(f_0,f_1),$$
as required. □
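A minimal numerical sketch of Theorem 7 and of the lower bound in Theorem 8, assuming two normal densities and $\alpha = p = 0.4$; each side is evaluated directly by grid-based integration.

```python
# Sketch: Theorem 7 (generalized escort density) and the Theorem 8 lower bound.
import numpy as np
from scipy.stats import norm

x = np.linspace(-25, 25, 100_001)
dx = x[1] - x[0]
integral = lambda y: np.sum(y) * dx

f = norm.pdf(x, 0.0, 1.0)
g = norm.pdf(x, 1.5, 2.0)
alpha = p = 0.4

K  = lambda a, b: -integral(a * np.log(b))                          # inaccuracy
KL = lambda a, b: integral(a * np.log(a / b))                       # Kullback-Leibler
H  = lambda a: -integral(a * np.log(a))                             # Shannon entropy
D  = lambda a, b, t: np.log(integral(a**t * b**(1 - t))) / (t - 1)  # Renyi divergence (4)

# Theorem 7: inaccuracy between f and the generalized escort density h_alpha
h = f**alpha * g**(1 - alpha) / integral(f**alpha * g**(1 - alpha))
print(K(f, h), (alpha - 1) * D(f, g, alpha) - alpha * KL(f, g) + K(f, g))

# Theorem 8: arithmetic mixture f_a and geometric mixture f_g of f0 = f and f1 = g
fa = p * f + (1 - p) * g
fg = f**p * g**(1 - p) / integral(f**p * g**(1 - p))
print(K(fa, fg), H(fa) - (1 - p) * D(f, g, p))   # left-hand side is at least the right-hand side
```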

4. Inaccuracy Measure Based on Average Entropy

Let X be a random variable with PDF f. Then, the average entropy associated with f is defined as
$$AE(f) = -\int f(x)\log\frac{f(x)}{\int f^{2}(x)\, dx}\, dx.$$
For pertinent details, see Kittaneh et al. [15].
Theorem 9. 
Let f be a density function. Then, an upper bound for the Shannon entropy based on inaccuracy measure is given by
$$H(f) \leq K(f,f_2),$$
where f 2 is the corresponding escort distribution of the density f with order 2.
Proof. 
From the definition of average entropy, we have
$$AE(f) = -\int f(x)\log\frac{f(x)}{\int f^{2}(x)\, dx}\, dx = -\int f(x)\log\frac{f^{2}(x)}{\int f^{2}(x)\, dx}\, dx + \int f(x)\log f(x)\, dx = K(f,f_2) - H(f).$$
Now, because A E ( f ) is non-negative (see Theorem 1 of Kittaneh et al. [15]), we have
$$K(f,f_2) \geq H(f),$$
as required. □
Definition 5. 
Let X and Y be random variables with density functions f and g, respectively. Then, the average inaccuracy measure between f and g is defined as
$$AK(f,g) = -\int f(x)\log\frac{g(x)}{\int g^{2}(x)\, dx}\, dx.$$
Theorem 10. 
Let f and g be two PDFs. Then, the average inaccuracy measure between f and g, AK ( f , g ) , can be expressed as
$$AK(f,g) = K(f,g) - R_{2}(g).$$
Proof. 
From the definition of the average inaccuracy measure, we have
$$AK(f,g) = -\int f(x)\log\frac{g(x)}{\int g^{2}(x)\, dx}\, dx = -\int f(x)\log g(x)\, dx + \log\int g^{2}(x)\, dx = K(f,g) - R_{2}(g),$$
as required. □
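A minimal numerical sketch of Theorems 9 and 10, assuming two normal densities.

```python
# Sketch: H(f) <= K(f, f_2) (Theorem 9) and AK(f, g) = K(f, g) - R_2(g) (Theorem 10).
import numpy as np
from scipy.stats import norm

x = np.linspace(-20, 20, 100_001)
dx = x[1] - x[0]
integral = lambda y: np.sum(y) * dx

f = norm.pdf(x, 0.0, 1.0)
g = norm.pdf(x, 1.0, 1.5)

K = lambda a, b: -integral(a * np.log(b))        # inaccuracy
H = lambda a: -integral(a * np.log(a))           # Shannon entropy

f2 = f**2 / integral(f**2)                       # escort density of f with order 2
print(H(f), K(f, f2))                            # H(f) <= K(f, f_2)

R2_g = np.log(integral(g**2)) / (1 - 2)          # Renyi entropy of g of order 2
AK = -integral(f * np.log(g / integral(g**2)))   # average inaccuracy measure AK(f, g)
print(AK, K(f, g) - R2_g)                        # Theorem 10
```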

5. Optimal Information Model under Inaccuracy Information Measure

In this section, we prove that the arithmetic mixture distribution provides optimal information under three different optimization problems associated with inaccuracy information measures. For more details on optimal information properties of some statistical distributions, one may refer to Kharazmi et al. [16] and the references therein.
Theorem 11. 
Let f, f 0 , and f 1 be three density functions. Then, the solution to the optimization problem
$$\min_{f} K(f_0,f) \quad \text{subject to} \quad K(f_1,f) = \eta, \quad \int f(x)\, dx = 1, \tag{24}$$
is the arithmetic mixture density with mixing parameter $p = \frac{1}{1+\lambda_0}$, where $\lambda_0 > 0$ is the Lagrangian multiplier and $\eta$ is the constraint value associated with the optimization problem.
Proof. 
We use the Lagrangian multiplier technique in order to solve the optimization problem in (24). Thus, we have
$$L(f,\lambda_0,\lambda_1) = -\int f_0(x)\log f(x)\, dx - \lambda_0\int f_1(x)\log f(x)\, dx + \lambda_1\int f(x)\, dx.$$
Now, differentiating with respect to f, we obtain
$$\frac{\partial}{\partial f}L(f,\lambda_0,\lambda_1) = -\frac{f_0(x)}{f(x)} - \lambda_0\frac{f_1(x)}{f(x)} + \lambda_1. \tag{25}$$
By setting (25) to zero, we derive the optimal density function as
$$f(x) = p f_0(x) + (1-p) f_1(x),$$
where $p = \frac{1}{1+\lambda_0}$, as required. Indeed, the solution of (25) can be written as
$$f(x) = \frac{f_0(x)+\lambda_0 f_1(x)}{\lambda_1}.$$
From the normalization condition, we have
$$\int f(x)\, dx = \int \frac{f_0(x)+\lambda_0 f_1(x)}{\lambda_1}\, dx = 1,$$
and then λ 1 = 1 + λ 0 . □
Theorem 12. 
Let f, f 0 , and f 1 be three density functions. Then, the solution to the optimization problem,
$$\min_{f}\,\big\{w\, K(f_0,f) + (1-w)\, K(f_1,f)\big\} \quad \text{subject to} \quad \int f(x)\, dx = 1, \quad 0 \leq w \leq 1,$$
is the arithmetic mixture density with mixing parameter p = w .
Proof. 
Making use of the Lagrangian multiplier technique as in Theorem 11, the result follows. □
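A minimal numerical sketch of Theorem 12, assuming two normal densities and $w = 0.3$: evaluating the objective over candidate mixtures $q f_0 + (1-q) f_1$ shows that the minimum is attained at $q = w$, i.e., at the arithmetic mixture with mixing parameter $p = w$.

```python
# Sketch: the objective of Theorem 12 over candidate arithmetic mixtures is minimized at q = w.
import numpy as np
from scipy.stats import norm

x = np.linspace(-20, 20, 100_001)
dx = x[1] - x[0]
integral = lambda y: np.sum(y) * dx

f0 = norm.pdf(x, -1.0, 1.0)
f1 = norm.pdf(x, 2.0, 1.5)
w = 0.3

K = lambda a, b: -integral(a * np.log(b))        # inaccuracy measure

qs = np.linspace(0.01, 0.99, 99)
objective = [w * K(f0, q * f0 + (1 - q) * f1) + (1 - w) * K(f1, q * f0 + (1 - q) * f1)
             for q in qs]
print(qs[int(np.argmin(objective))])             # approximately w = 0.3
```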
Theorem 13. 
Let f, $f_0$, and $f_1$ be three density functions and $T_{\alpha}(X) = f_0(X)/f_2(X)$. Then, the solution to the optimization problem,
$$\min_{f} K(f_0,f) \quad \text{subject to} \quad E_f\big(T_{\alpha}(X)\big) = \eta, \quad \int f(x)\, dx = 1,$$
is the arithmetic mixture density with mixing parameter $p = \frac{1}{1+\lambda_0}$, where $\lambda_0 > 0$ is the Lagrangian multiplier, $E_f(\cdot)$ denotes the expectation with respect to f, and $\eta$ is the constraint value associated with the optimization problem.
Proof. 
The result follows analogously with the proof of Theorem 11. □

6. Application

In this section, we first consider the definition of histogram for a given image in the context of image quality assessment. Then, we illustrate two applications by using the inaccuracy and Jensen–inaccuracy measures.

6.1. Image and Histogram

A digital image is defined as a discrete set of small surface elements (pixels). One such digital image is a grayscale image, in which each pixel contains only one value (its intensity). These values lie in the set $\{0, \ldots, L-1\}$, where L represents the number of intensity levels. Suppose that $n_k$ is the number of times the kth intensity appears in the image. The corresponding histogram of a digital image is then the histogram of the pixel intensity values in the set $\{0, \ldots, L-1\}$ (one may refer to Gonzalez [17]).
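A minimal sketch of this construction, assuming a synthetic $512 \times 512$ array with $L = 256$ intensity levels in place of an actual image: the counts $n_k$ and the corresponding empirical density are obtained as follows.

```python
# Sketch: histogram (counts n_k) and empirical density of a synthetic grayscale image.
import numpy as np

rng = np.random.default_rng(0)
L = 256
image = rng.integers(0, L, size=(512, 512))        # synthetic image with values in {0, ..., L-1}

counts = np.bincount(image.ravel(), minlength=L)   # n_k: number of pixels with intensity k
empirical_density = counts / counts.sum()          # normalized histogram (relative frequencies)
print(counts[:5], empirical_density[:5])
```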

6.2. Non-Parametric Jensen–Inaccuracy Estimation

We now show an application of the inaccuracy and Jensen–inaccuracy measures defined in (6) to image processing. Let $X_1, \ldots, X_n$ be a random sample with probability density function f. Then, the kernel estimate of the density f, based on a kernel function K with bandwidth $h_X > 0$, at a fixed point x is given by
$$\hat{f}(x) = \frac{1}{n h_X}\sum_{i=1}^{n} K\!\left(\frac{x-X_i}{h_X}\right). \tag{28}$$
Similarly, the non-parametric estimate of the density g with bandwidth $h_Y > 0$, based on a random sample $Y_1, \ldots, Y_n$, is expressed as
$$\hat{g}(x) = \frac{1}{n h_Y}\sum_{i=1}^{n} K\!\left(\frac{x-Y_i}{h_Y}\right). \tag{29}$$
For more details, see Duong [18]. Upon making use of (28) and (29), the integrated non-parametric estimates of the inaccuracy and Jensen–inaccuracy measures are given, respectively, by
$$\hat{K}(f,g) = -\int \hat{f}(x)\log\hat{g}(x)\, dx = -\int \frac{1}{n h_X}\sum_{i=1}^{n} K\!\left(\frac{x-X_i}{h_X}\right)\log\!\left[\frac{1}{n h_Y}\sum_{i=1}^{n} K\!\left(\frac{x-Y_i}{h_Y}\right)\right] dx$$
and
$$\widehat{JK}(f,f_0,f_1) = \frac{1}{2}K(\hat{f},\hat{f}_0) + \frac{1}{2}K(\hat{f},\hat{f}_1) - K\!\left(\hat{f},\frac{\hat{f}_0+\hat{f}_1}{2}\right) = -\frac{1}{2nh}\int\sum_{i=1}^{n} K\!\left(\frac{x-X_i}{h}\right)\log\!\left[\frac{1}{n h_0}\sum_{i=1}^{n} K\!\left(\frac{x-Y_i}{h_0}\right)\right] dx - \frac{1}{2nh}\int\sum_{i=1}^{n} K\!\left(\frac{x-X_i}{h}\right)\log\!\left[\frac{1}{n h_1}\sum_{i=1}^{n} K\!\left(\frac{x-Y_i}{h_1}\right)\right] dx + \frac{1}{nh}\int\sum_{i=1}^{n} K\!\left(\frac{x-X_i}{h}\right)\log\!\left[\frac{1}{2}\left(\frac{1}{n h_0}\sum_{i=1}^{n} K\!\left(\frac{x-Y_i}{h_0}\right)+\frac{1}{n h_1}\sum_{i=1}^{n} K\!\left(\frac{x-Y_i}{h_1}\right)\right)\right] dx,$$
where $h_0$, $h_1$, and h are the corresponding bandwidths for the kernel estimates of the densities $f_0$, $f_1$, and f, respectively. Here, we use the Gaussian kernel $K(u) = \frac{1}{\sqrt{2\pi}} e^{-u^{2}/2}$.
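A minimal sketch of the plug-in estimates built from the Gaussian-kernel estimators (28) and (29), assuming simulated normal samples, Silverman rule-of-thumb bandwidths, and a fixed integration grid (the analysis in this section relies on the ks package in R [18]).

```python
# Sketch: plug-in estimates of K(f, f0) and JK(f, f0, f1) from Gaussian kernel density estimates.
import numpy as np

rng = np.random.default_rng(1)

def kde(sample, grid, h):
    """Gaussian-kernel density estimate of the sample, evaluated on the grid."""
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

def silverman(sample):
    """Rule-of-thumb bandwidth (an assumed choice, not prescribed in the paper)."""
    return 1.06 * sample.std(ddof=1) * len(sample) ** (-0.2)

X  = rng.normal(0.0, 1.0, 500)     # sample with density f
Y0 = rng.normal(0.5, 1.2, 500)     # sample with density f0
Y1 = rng.normal(-1.0, 0.8, 500)    # sample with density f1

grid = np.linspace(-8, 8, 4001)
dx = grid[1] - grid[0]
f_hat  = kde(X,  grid, silverman(X))
f0_hat = kde(Y0, grid, silverman(Y0))
f1_hat = kde(Y1, grid, silverman(Y1))

K_hat = lambda p, q: -np.sum(p * np.log(q)) * dx        # plug-in inaccuracy estimate
JK_hat = (0.5 * K_hat(f_hat, f0_hat) + 0.5 * K_hat(f_hat, f1_hat)
          - K_hat(f_hat, 0.5 * (f0_hat + f1_hat)))      # plug-in Jensen-inaccuracy estimate
print(K_hat(f_hat, f0_hat), JK_hat)
```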
Next, we present two examples of image processing (two reference images including grayscale cameraman and lake images) and compute the inaccuracy and Jensen–inaccuracy information measures between the original picture and each of its adjusted versions for both cases.
  • Cameraman image
Figure 1 shows the original cameraman picture denoted by X and three adjusted versions of this original picture considered as Y ( = X + 0.3 ) (increasing brightness), Z ( = 2 × X ) (increasing contrast), and W ( = 0.5 × X + 0.5 ) (increasing brightness and decreasing contrast). The cameraman image includes 512 × 512 cells and the gray level of each cell has a value between 0 (black) and 1 (white).
  • Lake image
Figure 2 shows the original lake image, which includes 512 × 512 cells; the gray level of each cell takes a value in the interval [0, 1] (0 for black and 1 for white). This image is labeled as X, and its three adjusted versions are labeled as Y (= X + 0.3) (increasing brightness), Z (= 2 × X) (increasing contrast), and W (= √X) (gamma corrected).
Now, we first compute the inaccuracy between the original image and each of its adjusted versions. Then, we obtain the amount of dissimilarity between each pair of adjusted images with respect to the original image based on the Jensen–inaccuracy information measure. For both cases, we consider three adjusted versions of the original image, as described above. For more details, see the EBImage package in R [19].
The extracted histograms are plotted in Figure 3 and Figure 4, with the corresponding empirical densities for pictures X, Y, Z, and W for the cameraman and lake images, respectively.
We can see from Figure 1 and Figure 3 that, among the adjusted versions, W is the most similar to the original picture X, followed by Y, whereas Z diverges the most from X. Moreover, from Figure 2 and Figure 4, the same pattern is observed for the lake image and its three adjusted versions. The inaccuracy and Jensen–inaccuracy information measures for all pictures (for both the cameraman and lake images) are reported in Table 1. Therefore, the inaccuracy and Jensen–inaccuracy information measures can be considered efficient criteria for comparing the similarity between an original picture and its adjusted versions.
Following Fan et al. [20], we have tried to carefully follow axioms 1 and 2 of axiomatic design theory, which has been developed in recent decades; axiom 1 concerns verification of the validity of designs, and, according to axiom 2, one must choose the best design among several options.

7. Conclusions

In this paper, by considering the inaccuracy measure, we have proposed the Jensen–inaccuracy and $(w,\alpha)$-Jensen–inaccuracy information measures. We have specifically shown that the Jensen–inaccuracy measure is connected to the arithmetic–geometric divergence measure. Then, we have studied the inaccuracy measure between the escort distribution and its underlying density. Furthermore, we have examined the inaccuracy measure between the generalized escort distribution and its components. It has been shown that these inaccuracy measures are closely connected with Rényi entropy, average entropy, and Rényi divergence. Interestingly, we have shown that the arithmetic mixture distribution provides optimal information under three different optimization problems associated with the inaccuracy measure. Finally, we have described two applications of the inaccuracy and Jensen–inaccuracy measures to image processing. We have considered three adjusted versions of the original cameraman and lake images and then have examined the dissimilarity between the original image and each of its adjusted versions for both cases.

Author Contributions

O.K., F.S., F.B. and M.L. have contributed equally to this work in terms of conceptualization, formal analysis, visualization, software, and writing—original draft. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Francesco Buono and Maria Longobardi are members of the research group GNAMPA of INdAM (Istituto Nazionale di Alta Matematica) and are partially supported by MIUR-PRIN 2017, project “Stochastic Models for Complex Systems”, no. 2017 JFFHSH. The present work was developed within the activities of the project 000009_ALTRI_CDA_75_2021_FRA_LINEA_B_SIMONELLI funded by “Programma per il finanziamento della ricerca di Ateneo—Linea B” of the University of Naples Federico II. We would like to thank the two anonymous referees for useful remarks and comments which led to a significant improvement in the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
$KL(f,g)$: Kullback–Leibler divergence between f and g
$K(f,g)$: inaccuracy measure between f and g
PDF: probability density function
JII: Jensen–inaccuracy information measure
$JK(f,f_0,f_1)$: Jensen–inaccuracy measure between $f_0$ and $f_1$ with respect to f
$JK_{\alpha}(f,f_1,\ldots,f_n)$: extended Jensen–inaccuracy measure based on n + 1 density functions
$T(f_0,f_1)$: arithmetic–geometric divergence measure
$T(f_1,\ldots,f_n;\alpha)$: extended arithmetic–geometric divergence measure
$JK_{w,\alpha}(f,f_0,f_1)$: $(w,\alpha)$-Jensen–inaccuracy measure between $f_0$ and $f_1$ with respect to f
$R_{\alpha}(f)$: Rényi entropy of order $\alpha$
$D_{\alpha}(f,g)$: relative Rényi entropy of order $\alpha$
$AE(f)$: average entropy associated with f
$AK(f,g)$: average inaccuracy measure between f and g

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  2. Rényi, A. On measures of information and entropy. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 1 January 1961; pp. 547–561.
  3. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  4. Nielsen, F.; Nock, R. On the chi square and higher-order chi distances for approximating f-divergences. IEEE Signal Process. Lett. 2013, 21, 10–13.
  5. Di Crescenzo, A.; Longobardi, M. Some properties and applications of cumulative Kullback-Leibler information. Appl. Stoch. Model. Bus. Ind. 2015, 31, 875–891.
  6. Van Erven, T.; Harremos, P. Rényi divergence and Kullback-Leibler divergence. IEEE Trans. Inf. Theory 2014, 60, 3797–3820.
  7. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2006.
  8. Kerridge, D.F. Inaccuracy and inference. J. R. Stat. Soc. Ser. B (Methodol.) 1961, 23, 184–194.
  9. Kayal, S.; Sunoj, S.M. Generalized Kerridge’s inaccuracy measure for conditionally specified models. Commun. Stat. Theory Methods 2017, 46, 8257–8268.
  10. Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 1991, 37, 145–151.
  11. Sánchez-Moreno, P.; Zarzo, A.; Dehesa, J.S. Jensen divergence based on Fisher’s information. J. Phys. A Math. Theor. 2012, 45, 125305.
  12. Mehrali, Y.; Asadi, M.; Kharazmi, O. A Jensen-Gini measure of divergence with application in parameter estimation. Metron 2018, 76, 115–131.
  13. Taneja, I.J. Generalized symmetric divergence measures and the probability of error. Commun. Stat. Theory Methods 2013, 42, 1654–1672.
  14. Bercher, J.F. Source coding with escort distributions and Rényi entropy bounds. Phys. Lett. A 2009, 373, 3235–3238.
  15. Kittaneh, O.A.; Khan, M.A.; Akbar, M.; Bayoud, H.A. Average entropy: A new uncertainty measure with application to image segmentation. Am. Stat. 2016, 70, 18–24.
  16. Kharazmi, O.; Contreras-Reyes, J.E.; Balakrishnan, N. Optimal information, Jensen-RIG function and α-Onicescu’s correlation coefficient in terms of information generating functions. Phys. A Stat. Mech. Its Appl. 2023, 609, 128362.
  17. Gonzalez, R.C. Digital Image Processing; Prentice Hall: New York, NY, USA, 2009.
  18. Duong, T. Package ‘ks’. R Package Version 1.5, 2022. Available online: https://cran.r-project.org/web/packages/ks/index.html (accessed on 15 January 2023).
  19. Pau, G.; Fuchs, F.; Sklyar, O.; Boutros, M.; Huber, W. EBImage—An R package for image processing with applications to cellular phenotypes. Bioinformatics 2010, 26, 979–981.
  20. Fan, L.X.; Cai, M.Y.; Lin, Y.; Zhang, W.J. Axiomatic design theory: Further notes and its guideline to applications. Int. J. Mater. Prod. Technol. 2015, 51, 359–374.
Figure 1. The cameraman image with its three adjusted versions. First row (left panel) original X; first row (right panel) Y (increasing brightness); second row (left panel) Z (increasing contrast); second row (right panel) W (increasing brightness and decreasing contrast).
Figure 2. The lake image with its three adjusted versions. First row (left panel) original X; first row (right panel) Y (increasing brightness); second row (left panel) Z (increasing contrast); second row (right panel) W (gamma corrected).
Figure 3. The histograms and the corresponding empirical densities for cameraman image and its three adjusted versions.
Figure 4. The histograms and the corresponding empirical densities for lake image and its three adjusted versions.
Table 1. Inaccuracy measures for the cameraman and lake images.
                 Cameraman Image                               Lake Image
    Inaccuracy          Jensen–Inaccuracy         Inaccuracy          Jensen–Inaccuracy
    X, Y   8.6142       (Y, Z | X)   0.9674       X, Y   10.7143      (Y, Z | X)   1.9235
    X, Z   7.3654       (Y, W | X)   0.5707       X, Z    6.5744      (Y, W | X)   1.2558
    X, W   9.2097       (Z, W | X)   1.4315       X, W    7.4086      (Z, W | X)   0.5131
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
