Article

Rényi Entropy and Rényi Divergence in Product MV-Algebras

by Dagmar Markechová 1,* and Beloslav Riečan 2,3
1 Department of Mathematics, Faculty of Natural Sciences, Constantine the Philosopher University in Nitra, A. Hlinku 1, SK-949 01 Nitra, Slovakia
2 Department of Mathematics, Faculty of Natural Sciences, Matej Bel University, Tajovského 40, SK-974 01 Banská Bystrica, Slovakia
3 Mathematical Institute, Slovak Academy of Sciences, Štefánikova 49, SK-814 73 Bratislava, Slovakia
* Author to whom correspondence should be addressed.
Entropy 2018, 20(8), 587; https://doi.org/10.3390/e20080587
Submission received: 5 July 2018 / Revised: 2 August 2018 / Accepted: 3 August 2018 / Published: 8 August 2018
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
This article deals with new concepts in a product MV-algebra, namely, with the concepts of Rényi entropy and Rényi divergence. We define the Rényi entropy of order q of a partition in a product MV-algebra and its conditional version and we study their properties. It is shown that the proposed concepts are consistent, in the case of the limit of q going to 1, with the Shannon entropy of partitions in a product MV-algebra defined and studied by Petrovičová (Soft Comput. 2000, 4, 41–44). Moreover, we introduce and study the notion of Rényi divergence in a product MV-algebra. It is proven that the Kullback–Leibler divergence of states on a given product MV-algebra introduced by Markechová and Riečan in (Entropy 2017, 19, 267) can be obtained as the limit of their Rényi divergence. In addition, the relationship between the Rényi entropy and the Rényi divergence as well as the relationship between the Rényi divergence and Kullback–Leibler divergence in a product MV-algebra are examined.

1. Introduction

The Shannon entropy [1] and the Kullback–Leibler divergence [2] are the most significant and most widely used quantities in information theory [3]. Due to their successful use, many attempts have been made to generalize them; the Rényi entropy and the Rényi divergence [4] are their best-known generalizations. These quantities have many significant applications, for example, in statistics, in ecology, and in quantum information.
Shannon's entropy is defined in the context of a probabilistic model in the following way: if we consider a probability space $(\Omega, S, P)$ and a measurable partition $\mathcal{E} = \{E_1, E_2, \dots, E_n\}$ of $(\Omega, S, P)$, then the Shannon entropy of $\mathcal{E}$ is defined as the number $H(\mathcal{E}) = -\sum_{i=1}^{n} P(E_i) \cdot \log P(E_i)$ (with the usual convention that $P(E_i) \cdot \log P(E_i) = 0$ if $P(E_i) = 0$). If $\mathcal{E} = \{E_1, E_2, \dots, E_n\}$ and $\mathcal{F} = \{F_1, F_2, \dots, F_m\}$ are two measurable partitions of $(\Omega, S, P)$, then the conditional Shannon entropy of $\mathcal{E}$ assuming a realization of $\mathcal{F}$ is defined as the number $H(\mathcal{E}/\mathcal{F}) = -\sum_{i=1}^{n}\sum_{j=1}^{m} P(E_i \cap F_j)\log\frac{P(E_i \cap F_j)}{P(F_j)}$ (with the usual convention that $0\log\frac{0}{x} = 0$ if $x \geq 0$). If $\mathcal{E} = \{E_1, E_2, \dots, E_n\}$ is a measurable partition of $(\Omega, S, P)$ with probabilities $p_i = P(E_i)$, $i = 1, 2, \dots, n$, then its Rényi entropy of order $q$, where $q \in (0,1) \cup (1,\infty)$, is defined as the number $H_q(\mathcal{E}) = \frac{1}{1-q}\log\sum_{i=1}^{n} p_i^q$. It can be shown that $\lim_{q \to 1} H_q(\mathcal{E}) = \sum_{i=1}^{n} p_i\log\frac{1}{p_i}$; thus, the Shannon entropy is a limiting case of the Rényi entropy for $q \to 1$. It is known that there is no universally accepted definition of conditional Rényi entropy. The paper [5] describes three definitions of conditional Rényi entropy that can be found in the literature. In [6], a brief overview of various approaches to defining the conditional Rényi entropy is also given and, in addition, a new definition of conditional Rényi entropy was proposed. In [7], the authors introduced a general type of conditional Rényi entropy which contains some of the previously defined conditional Rényi entropies as special cases. The proposed concepts have successfully been used in information theory [8], time series analysis [9], and cryptographic applications [10]. However, none of the proposed generalizations satisfies all basic properties of the Shannon conditional entropy. The selection of the definition therefore depends on the purpose of application.
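As a quick numerical illustration of the limit above, the following short Python sketch (the helper names are ours, not taken from the cited sources) evaluates the Shannon and Rényi entropies of a finite probability vector and shows $H_q$ approaching the Shannon value as $q \to 1$.

```python
import math

def shannon_entropy(p):
    # Shannon entropy in bits; terms with p_i = 0 contribute 0 by convention
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def renyi_entropy(p, q):
    # Renyi entropy of order q in bits, for q in (0, 1) or (1, infinity)
    return math.log2(sum(pi ** q for pi in p if pi > 0)) / (1 - q)

p = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p))        # 1.75
print(renyi_entropy(p, 1.0001))  # ~1.75, close to the Shannon value
print(renyi_entropy(p, 2))       # ~1.54, the Renyi entropy decreases with q
```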
The present article is devoted to the study of Rényi entropy and Rényi divergence in product MV-algebras. An MV-algebra [11] is a most useful instrument for describing multivalued processes, especially after Mundici's characterization of MV-algebras as intervals in lattice-ordered groups (cf. [12,13]). At present, this algebraic structure is studied by many researchers, and it is natural that there are also results regarding entropy in this structure; we refer, for instance, to [14,15]. A measure theory (cf. [16]) and a probability theory (cf. [17]) have also been developed on MV-algebras. Of course, in some problems of probability it is necessary to introduce a product on an MV-algebra, an operation outside the corresponding group addition. The operation of a product on an MV-algebra was suggested independently in [18] from the viewpoint of mathematical logic and in [19] from the viewpoint of probability. The approach from the viewpoint of general algebra suggested in [20] is also of interest. We note that the notion of a product MV-algebra generalizes some classes of fuzzy sets; a full tribe of fuzzy sets (see, e.g., [21]) is an example of a product MV-algebra.
A suitable entropy theory of Shannon and Kolmogorov-Sinai type for the case of a product MV-algebra has been provided by Petrovičová in [22,23]. We remark that in our article [24], based on the results of Petrovičová, we proposed the notions of Kullback–Leibler divergence and mutual information of partitions in a product MV-algebra. In the present article, we continue studying entropy and divergence in a product MV-algebra, by defining and studying the Rényi entropy and the Rényi divergence.
The rest of the paper is structured as follows. In the following section, preliminaries and related works are given. Our main results are presented in Section 3, Section 4 and Section 5. In Section 3, we define the Rényi entropy of a partition in a product MV-algebra and examine its properties. It is shown that for $q \to 1$ the Rényi entropy of order $q$ converges to the Shannon entropy of a partition in a product MV-algebra introduced in [22]. In Section 4, we introduce the concept of conditional Rényi entropy of partitions in a product MV-algebra and study its properties. It is shown that the proposed definition of conditional Rényi entropy is consistent, in the case of the limit of $q$ going to 1, with the conditional Shannon entropy of partitions studied in [22], and that it satisfies the property of monotonicity and a weak chain rule. In the final part of this section, we define the Rényi information about a partition $X$ in a partition $Y$ as an example of further usage of the proposed concept of conditional Rényi entropy. Section 5 is devoted to the study of Rényi divergence in a product MV-algebra. It is shown that the Kullback–Leibler divergence in a product MV-algebra introduced by the authors in [24] can be obtained as the limit of the Rényi divergence. We illustrate the results with numerical examples. The last section contains a brief summary.

2. Preliminaries and Related Works

We start by recalling the definitions of the basic terms and some of the known results that will be used in the article. We also mention some works related to the subject of this article, of course, without claiming completeness.
Several different (but equivalent) axiom systems have been used to define the notion of an MV-algebra (cf., e.g., [19,25,26]). In this paper, we use the definition of an MV-algebra given by Riečan in [27], which is based on the Mundici representation theorem. Based on Mundici's theorem [12] (see also [13]), MV-algebras can be considered as intervals of an abelian lattice-ordered group (briefly, an l-group). We recall that by an l-group (cf. [28]) we understand a triple $(G, +, \leq)$, where $(G, +)$ is an abelian group, $(G, \leq)$ is a partially ordered set that is a lattice, and $x \leq y$ implies $x + z \leq y + z$.
Definition 1
([27]). An MV-algebra is an algebraic structure $A = (A, \oplus, *, 0, u)$ satisfying the following conditions:
(i) 
there exists an l-group $(G, +, \leq)$ such that $A = [0, u] = \{x \in G;\ 0 \leq x \leq u\}$, where $0$ is the neutral element of $(G, +)$ and $u$ is a strong unit of $G$ (i.e., $u \in G$ such that $u > 0$ and to every $x \in G$ there exists a positive integer $n$ with the property $x \leq nu$);
(ii) 
$\oplus$ and $*$ are binary operations on $A$ satisfying the following identities: $x \oplus y = (x + y) \wedge u$, $x * y = (x + y - u) \vee 0$.
We note that MV-algebras provide a generalization of Boolean algebras in the sense that every Boolean algebra is an MV-algebra satisfying the condition $x \oplus x = x$. For this reason, in order to generalize the concept of probability on Boolean algebras, Mundici introduced in [29] the notion of a state on an MV-algebra in the following way. Let $A = (A, \oplus, *, 0, u)$ be an MV-algebra. A mapping $s : A \to [0,1]$ is a state on $A$ whenever $s(u) = 1$ and, for every $x, y \in A$, the following condition is satisfied: if $x * y = 0$, then $s(x \oplus y) = s(x) + s(y)$. Since the definitions of a product MV-algebra and of a partition in a product MV-algebra are based on the Mundici representation theorem (i.e., the MV-algebra operation $\oplus$ is substituted by the group operation $+$ in the abelian l-group corresponding to the considered MV-algebra), in this contribution we shall use the following (equivalent) definition of a state, which is also based on the Mundici representation theorem. This means that the sum in the following definition of a state, and subsequently in what follows, denotes the sum in the abelian l-group that corresponds to the given MV-algebra.
Definition 2
([27]). A state on an MV-algebra $A = (A, \oplus, *, 0, u)$ is a mapping $s : A \to [0,1]$ with the following two properties:
(i) 
$s(u) = 1$;
(ii) 
if $x, y \in A$ such that $x + y \leq u$, then $s(x + y) = s(x) + s(y)$.
Definition 3
([19]). A product MV-algebra is an algebraic structure $(A, \oplus, *, \cdot, 0, u)$, where $(A, \oplus, *, 0, u)$ is an MV-algebra and $\cdot$ is an associative and abelian binary operation on $A$ with the following properties:
(i) 
for every $x \in A$, $u \cdot x = x$;
(ii) 
if $x, y, z \in A$ such that $x + y \leq u$, then $z \cdot x + z \cdot y \leq u$ and $z \cdot (x + y) = z \cdot x + z \cdot y$.
For brevity, we will write $(A, \cdot)$ instead of $(A, \oplus, *, \cdot, 0, u)$. A relevant probability theory for product MV-algebras was developed by Riečan in [30], see also [31,32]; the entropy theory of Shannon and Kolmogorov–Sinai type for product MV-algebras was proposed in [22,23]. We present the main idea and some results of these theories that will be used in the following text.
As in [22], by a partition in a product MV-algebra $(A, \cdot)$ we understand any $n$-tuple $X = (x_1, x_2, \dots, x_n)$ of elements of $A$ with the property $x_1 + x_2 + \dots + x_n = u$. In the system of all partitions in a given product MV-algebra $(A, \cdot)$, we define the refinement partial order in the standard way (cf. [33]). If $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ are two partitions in $(A, \cdot)$, then we write $Y \succ X$ (and we say that $Y$ is a refinement of $X$) if there exists a partition $\{J(1), J(2), \dots, J(n)\}$ of the set $\{1, 2, \dots, m\}$ such that $x_i = \sum_{j \in J(i)} y_j$, for $i = 1, 2, \dots, n$. Further, for two finite sequences $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ of elements of $A$, we put $X \vee Y = (x_i \cdot y_j;\ i = 1, 2, \dots, n,\ j = 1, 2, \dots, m)$. If $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ are partitions in $(A, \cdot)$, then the system $X \vee Y$ is also a partition in $(A, \cdot)$, since $\sum_{i=1}^{n}\sum_{j=1}^{m} x_i \cdot y_j = \left(\sum_{i=1}^{n} x_i\right) \cdot \left(\sum_{j=1}^{m} y_j\right) = u \cdot u = u$. Later we shall need the following assertion:
Proposition 1.
If $X, Y, Z$ are partitions in a product MV-algebra $(A, \cdot)$, then it holds that $X \vee Y \succ X$, and $Y \succ X$ implies $Y \vee Z \succ X \vee Z$.
Proof. 
The proof can be found in [33]. ☐
The following example shows that the model studied in this article generalizes the classical case.
Example 1.
Let us consider a probability space $(\Omega, S, P)$ and put $A = \{I_E;\ E \in S\}$, where $I_E : \Omega \to \{0,1\}$ is the indicator function of the set $E \in S$. The class $A$ is closed with respect to the product of indicator functions, and it represents a special case of product MV-algebras. The mapping $s : A \to [0,1]$ defined, for every $I_E \in A$, by $s(I_E) = P(E)$, is a state on the considered product MV-algebra $(A, \cdot)$. A measurable partition $\{E_1, E_2, \dots, E_n\}$ of $(\Omega, S, P)$ can be viewed as a partition in the product MV-algebra $(A, \cdot)$ if we consider the $n$-tuple $(I_{E_1}, I_{E_2}, \dots, I_{E_n})$ instead of $\{E_1, E_2, \dots, E_n\}$.
Example 2.
Let $(\Omega, S, P)$ be a probability space, and let $A$ be the class of all $S$-measurable functions $f : \Omega \to [0,1]$, the so-called full tribe of fuzzy sets (cf., e.g., [21,34]). The class $A$ is closed with respect to the natural product of fuzzy sets, and it represents a significant case of product MV-algebras. The map $s : A \to [0,1]$ defined, for every $f \in A$, by $s(f) = \int_\Omega f\,dP$, is a state on the product MV-algebra $(A, \cdot)$. The notion of a partition in the product MV-algebra $(A, \cdot)$ coincides with the notion of a fuzzy partition (cf. [34]).
Definition 4.
Let $s$ be a state on a product MV-algebra $(A, \cdot)$. We say that partitions $X, Y$ in $(A, \cdot)$ are statistically independent with respect to $s$ if $s(x \cdot y) = s(x) \cdot s(y)$, for every $x \in X$ and $y \in Y$.
The following definition of entropy of Shannon type was introduced in [22].
Definition 5.
Let $X = (x_1, x_2, \dots, x_n)$ be a partition in a product MV-algebra $(A, \cdot)$ and let $s : A \to [0,1]$ be a state. Then the entropy of $X$ with respect to $s$ is defined by Shannon's formula
$$H_s(X) = -\sum_{i=1}^{n} s(x_i)\log s(x_i). \quad (1)$$
If $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ are two partitions in $(A, \cdot)$, then the conditional entropy of $X$ given $y_j \in Y$ is defined by
$$H_s(X/y_j) = -\sum_{i=1}^{n} s(x_i/y_j)\log s(x_i/y_j),$$
where
$$s(x_i/y_j) = \begin{cases} \dfrac{s(x_i \cdot y_j)}{s(y_j)}, & \text{if } s(y_j) > 0; \\ 0, & \text{if } s(y_j) = 0. \end{cases}$$
The conditional entropy of $X$ given $Y$ is defined by
$$H_s(X/Y) = \sum_{j=1}^{m} s(y_j)\,H_s(X/y_j) = -\sum_{i=1}^{n}\sum_{j=1}^{m} s(x_i \cdot y_j)\log\frac{s(x_i \cdot y_j)}{s(y_j)}. \quad (3)$$
Here, the usual convention that $0\log\frac{0}{x} = 0$ if $x \geq 0$ is used. The base of the logarithm can be any positive real number, but as a rule one takes logarithms to the base 2. The entropy is then expressed in bits. The entropy and the conditional entropy of partitions in a product MV-algebra satisfy all the properties that correspond to the properties of Shannon's entropy of measurable partitions in the classical case; for more details, see [22].
In [24], the concepts of mutual information and Kullback–Leibler divergence in a product MV-algebra were introduced in the following way.
Definition 6.
Let $X, Y$ be partitions in a product MV-algebra $(A, \cdot)$, and let $s : A \to [0,1]$ be a state. We define the mutual information between $X$ and $Y$ by the formula
$$I_s(X, Y) = H_s(X) - H_s(X/Y). \quad (4)$$
Definition 7.
Let $s, t$ be states defined on a given product MV-algebra $(A, \cdot)$, and let $X = (x_1, x_2, \dots, x_n)$ be a partition in $(A, \cdot)$. Then we define the Kullback–Leibler divergence $D_X(s\|t)$ by the formula
$$D_X(s\|t) = \sum_{i=1}^{n} s(x_i)\log\frac{s(x_i)}{t(x_i)}. \quad (5)$$
The logarithm in this formula is taken to the base 2 and information is measured in units of bits. We use the convention that $x\log\frac{x}{0} = \infty$ if $x > 0$, and $0\log\frac{0}{x} = 0$ if $x \geq 0$.
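For concreteness, here is a minimal Python sketch (the helper name is ours) that evaluates $D_X(s\|t)$ from the state values $s(x_i)$ and $t(x_i)$, using the conventions stated above.

```python
import math

def kl_divergence(s_vals, t_vals):
    # D_X(s||t) in bits, with 0*log(0/x) = 0 and x*log(x/0) = infinity for x > 0
    d = 0.0
    for si, ti in zip(s_vals, t_vals):
        if si == 0:
            continue
        if ti == 0:
            return math.inf
        d += si * math.log2(si / ti)
    return d

print(kl_divergence([1/3, 2/3], [1/2, 1/2]))  # ~0.0817 bit
```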
In the proofs, we shall use the Jensen inequality, which states that for a real concave function $\varphi$, real numbers $a_1, a_2, \dots, a_n$ in its domain, and nonnegative real numbers $c_1, c_2, \dots, c_n$ such that $\sum_{i=1}^{n} c_i = 1$, it holds that
$$\varphi\left(\sum_{i=1}^{n} c_i a_i\right) \geq \sum_{i=1}^{n} c_i\,\varphi(a_i), \quad (6)$$
and the inequality is reversed if $\varphi$ is a real convex function. The equality holds if and only if $a_1 = a_2 = \dots = a_n$ or $\varphi$ is linear.
Further, we recall the following notions.
Definition 8.
Let $D$ be a non-empty set and let $f : D \to \mathbb{R}$ be a real function defined on it. Then the support of $f$ is defined by $\mathrm{supp}(f) = \{x \in D;\ f(x) \neq 0\}$.
Definition 9.
Let $f : D \to \mathbb{R}$ be a real function defined on a non-empty set $D$. Define the $q$-norm, for $1 \leq q < \infty$, or the $q$-quasinorm, for $0 < q < 1$, of $f$ as
$$\|f\|_q = \left(\sum_{x \in D}|f(x)|^q\right)^{\frac{1}{q}}.$$

3. The Rényi Entropy of a Partition in a Product MV-Algebra

In this section, we define the Rényi entropy of a partition in a product MV-algebra $(A, \cdot)$ and examine its properties. In the following, we assume that $s : A \to [0,1]$ is a state.
Definition 10.
Let $X = (x_1, x_2, \dots, x_n)$ be a partition in a product MV-algebra $(A, \cdot)$. Then we define the Rényi entropy of order $q$, where $q \in (0,1) \cup (1,\infty)$, of the partition $X$ with respect to $s$ as the number:
$$H_q^s(X) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q. \quad (7)$$
Remark 1.
In accordance with the classical theory, the logarithm is taken to the base 2 and the Rényi entropy is expressed in bits. For simplicity, we write $s(x_i)^q$ instead of $(s(x_i))^q$ and $\log\sum_{i=1}^{n} s(x_i)^q$ instead of $\log\left(\sum_{i=1}^{n} s(x_i)^q\right)$.
Let $X = (x_1, x_2, \dots, x_n)$ be a partition in $(A, \cdot)$. If we consider the function $s_X : X \to \mathbb{R}$ defined, for every $x_i \in X$, by $s_X(x_i) = s(x_i)$, then we have
$$\|s_X\|_q = \left(\sum_{i=1}^{n} s(x_i)^q\right)^{\frac{1}{q}},$$
and Formula (7) can be expressed in the following equivalent form:
$$H_q^s(X) = \frac{q}{1-q}\log\left(\|s_X\|_q\right).$$
Example 3.
Let $X = (x_1, x_2, \dots, x_n)$ be any partition in a product MV-algebra $(A, \cdot)$, and let $s : A \to [0,1]$ be a state uniform over $X$, i.e., $s(x_i) = \frac{1}{n}$, for $i = 1, 2, \dots, n$. Then
$$H_q^s(X) = \frac{1}{1-q}\log\sum_{i=1}^{n}\left(\frac{1}{n}\right)^q = \frac{1}{1-q}\log n^{1-q} = \log n.$$
Example 4.
Let us consider any product MV-algebra $(A, \cdot)$ and the partition $E = (u)$ in $(A, \cdot)$, which represents an experiment resulting in a certain event. It is easy to see that $H_q^s(E) = 0$.
Remark 2.
It is possible to verify that the Rényi entropy $H_q^s(X)$ is always nonnegative. Namely, for $q \in (0,1)$ and $i = 1, 2, \dots, n$, we have $s(x_i)^q \geq s(x_i)$, hence $\sum_{i=1}^{n} s(x_i)^q \geq \sum_{i=1}^{n} s(x_i) = s\left(\sum_{i=1}^{n} x_i\right) = s(u) = 1$. It follows that $H_q^s(X) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q \geq 0$. On the other hand, for $q \in (1,\infty)$ and $i = 1, 2, \dots, n$, it holds that $s(x_i)^q \leq s(x_i)$, hence $\sum_{i=1}^{n} s(x_i)^q \leq \sum_{i=1}^{n} s(x_i) = 1$. The assumption $q \in (1,\infty)$ implies $\frac{1}{1-q} < 0$; therefore, we get $H_q^s(X) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q \geq 0$.
At $q = 1$, the value of the quantity $H_q^s(X)$ is undefined, since its defining expression takes the indeterminate form $\frac{0}{0}$. In the following theorem, it is shown that for $q \to 1$ the Rényi entropy $H_q^s(X)$ converges to the Shannon entropy of a partition $X$ in $(A, \cdot)$ defined by Formula (1).
Theorem 1.
Let $X = (x_1, x_2, \dots, x_n)$ be any partition in a product MV-algebra $(A, \cdot)$. Then
$$\lim_{q \to 1} H_q^s(X) = -\sum_{i=1}^{n} s(x_i)\log s(x_i).$$
Proof. 
Put $f(q) = \log\sum_{i=1}^{n} s(x_i)^q$ and $g(q) = 1 - q$, for every $q \in (0,\infty)$. The functions $f, g$ are differentiable and, for every $q \in (0,1) \cup (1,\infty)$, we have $H_q^s(X) = \frac{f(q)}{g(q)}$. Evidently, $\lim_{q \to 1} g(q) = 0$ and $\lim_{q \to 1} f(q) = \lim_{q \to 1}\log\sum_{i=1}^{n} s(x_i)^q = \log\sum_{i=1}^{n} s(x_i) = \log 1 = 0$. Using L'Hôpital's rule, this yields that $\lim_{q \to 1} H_q^s(X) = \lim_{q \to 1}\frac{f'(q)}{g'(q)}$, under the assumption that the right-hand side exists. It holds that $\frac{d}{dq}g(q) = -1$ and
$$\frac{d}{dq}f(q) = \frac{1}{(\ln 2)\sum_{i=1}^{n} s(x_i)^q}\sum_{i=1}^{n}\frac{d}{dq}s(x_i)^q = \frac{1}{(\ln 2)\sum_{i=1}^{n} s(x_i)^q}\sum_{i=1}^{n} s(x_i)^q\ln s(x_i) = \frac{1}{\sum_{i=1}^{n} s(x_i)^q}\sum_{i=1}^{n} s(x_i)^q\log s(x_i).$$
Note that the derivative of the function $f$ is easily calculated using the identity $b^\alpha = e^{\alpha\ln b}$. We get
$$\lim_{q \to 1} H_q^s(X) = -\lim_{q \to 1}\frac{1}{\sum_{i=1}^{n} s(x_i)^q}\sum_{i=1}^{n} s(x_i)^q\log s(x_i) = -\sum_{i=1}^{n} s(x_i)\log s(x_i),$$
which is the Shannon entropy of $X$ defined by Formula (1). ☐
In the following theorem, it is proved that the function $H_q^s(X)$ is monotonically decreasing in $q$.
Theorem 2.
Let $X$ be any partition in a given product MV-algebra $(A, \cdot)$, and let $q_1, q_2 \in (0,1) \cup (1,\infty)$. Then $q_1 \geq q_2$ implies $H_{q_1}^s(X) \leq H_{q_2}^s(X)$.
Proof. 
Suppose that $X = (x_1, x_2, \dots, x_n)$ and $q_1, q_2 \in (1,\infty)$ such that $q_1 \geq q_2$. Then the claim is equivalent to the inequality
$$\left(\sum_{i=1}^{n} s(x_i)^{q_1}\right)^{\frac{1}{q_1-1}} \geq \left(\sum_{i=1}^{n} s(x_i)^{q_2}\right)^{\frac{1}{q_2-1}}.$$
The above inequality follows by applying the Jensen inequality to the function $\varphi$ defined, for every $x \in [0,\infty)$, by $\varphi(x) = x^{\frac{q_2-1}{q_1-1}}$. The assumption $q_1 \geq q_2$ implies $\frac{q_2-1}{q_1-1} \leq 1$, hence the function $\varphi$ is concave. Putting $c_i = s(x_i)$ and $a_i = s(x_i)^{q_1-1}$, $i = 1, 2, \dots, n$, in the inequality (6), we get
$$\left(\sum_{i=1}^{n} s(x_i)^{q_1}\right)^{\frac{1}{q_1-1}} = \left(\sum_{i=1}^{n} s(x_i)\,s(x_i)^{q_1-1}\right)^{\frac{q_2-1}{(q_1-1)(q_2-1)}} = \left(\left[\sum_{i=1}^{n} s(x_i)\,s(x_i)^{q_1-1}\right]^{\frac{q_2-1}{q_1-1}}\right)^{\frac{1}{q_2-1}} \geq \left(\sum_{i=1}^{n} s(x_i)\,s(x_i)^{q_2-1}\right)^{\frac{1}{q_2-1}} = \left(\sum_{i=1}^{n} s(x_i)^{q_2}\right)^{\frac{1}{q_2-1}}.$$
The case where $q_1, q_2 \in (0,1)$ is obtained in an analogous way. Finally, the case where $q_1 \in (1,\infty)$ and $q_2 \in (0,1)$ is obtained by transitivity. ☐
Example 5.
Consider any product MV-algebra $(A, \cdot)$ and a state $s : A \to [0,1]$. Let $x \in A$ with $s(x) = p$, where $p \in (0,1)$. Then $s(u - x) = 1 - p$, and the pair $X = (x, u - x)$ is a partition in $(A, \cdot)$. If we put $p = \frac{1}{2}$, then we have $H_q^s(X) = 1$ bit, for every $q \in (0,1) \cup (1,\infty)$. Put $p = \frac{1}{3}$. By simple calculations we get $H_{1/2}^s(X) \doteq 0.958$ bit, $H_2^s(X) \doteq 0.848$ bit, and $H_{1/3}^s(X) \doteq 0.972$ bit. So it holds that $H_2^s(X) < H_{1/2}^s(X) < H_{1/3}^s(X)$, which is consistent with the property proven in the previous theorem.
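The values in Example 5 can be re-checked numerically; the sketch below simply evaluates Formula (7) on the state values $\frac{1}{3}, \frac{2}{3}$ (the helper name is ours).

```python
import math

def renyi_entropy(p, q):
    # Formula (7) in bits, for q in (0, 1) or (1, infinity)
    return math.log2(sum(pi ** q for pi in p)) / (1 - q)

for q in (1/3, 1/2, 2):
    print(q, round(renyi_entropy([1/3, 2/3], q), 3))
# prints ~0.972, ~0.958 and ~0.848 bit, so H_2 < H_{1/2} < H_{1/3}
```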
Theorem 3.
Let $X, Y$ be partitions in a product MV-algebra $(A, \cdot)$ such that $Y \succ X$. Then $H_q^s(X) \leq H_q^s(Y)$.
Proof. 
Suppose that $X = (x_1, x_2, \dots, x_n)$, $Y = (y_1, y_2, \dots, y_m)$, $Y \succ X$. Then there exists a partition $\{J(1), J(2), \dots, J(n)\}$ of the set $\{1, 2, \dots, m\}$ such that $x_i = \sum_{j \in J(i)} y_j$, for $i = 1, 2, \dots, n$. Hence $s(x_i) = s\left(\sum_{j \in J(i)} y_j\right) = \sum_{j \in J(i)} s(y_j)$, for $i = 1, 2, \dots, n$.
(i)
Consider the case where $q \in (1,\infty)$. Then $s(x_i)^q = \left(\sum_{j \in J(i)} s(y_j)\right)^q \geq \sum_{j \in J(i)} s(y_j)^q$, for $i = 1, 2, \dots, n$, and consequently
$$\sum_{i=1}^{n} s(x_i)^q \geq \sum_{i=1}^{n}\sum_{j \in J(i)} s(y_j)^q = \sum_{j=1}^{m} s(y_j)^q.$$
Therefore, we get
$$\log\sum_{i=1}^{n} s(x_i)^q \geq \log\sum_{j=1}^{m} s(y_j)^q.$$
In this case we have $\frac{1}{1-q} < 0$, hence
$$H_q^s(X) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q \leq \frac{1}{1-q}\log\sum_{j=1}^{m} s(y_j)^q = H_q^s(Y).$$
(ii)
Consider the case where $q \in (0,1)$. Then $s(x_i)^q = \left(\sum_{j \in J(i)} s(y_j)\right)^q \leq \sum_{j \in J(i)} s(y_j)^q$, for $i = 1, 2, \dots, n$, and consequently
$$\sum_{i=1}^{n} s(x_i)^q \leq \sum_{j=1}^{m} s(y_j)^q.$$
Therefore, we get
$$\log\sum_{i=1}^{n} s(x_i)^q \leq \log\sum_{j=1}^{m} s(y_j)^q.$$
In this case we have $\frac{1}{1-q} > 0$, hence
$$H_q^s(X) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q \leq \frac{1}{1-q}\log\sum_{j=1}^{m} s(y_j)^q = H_q^s(Y).$$
As an immediate consequence of the previous theorem and Proposition 1, we obtain the following result.
Corollary 1.
For all partitions $X, Y$ in a product MV-algebra $(A, \cdot)$, it holds that
$$H_q^s(X \vee Y) \geq \max\left[H_q^s(X), H_q^s(Y)\right]. \quad (9)$$
Example 6.
Consider the measurable space $([0,1], \mathcal{B})$, where $\mathcal{B}$ is the $\sigma$-algebra of all Borel subsets of the unit interval $[0,1]$. Let $A$ be the family of all Borel measurable functions $f : [0,1] \to [0,1]$. If we define in the family $A$ the operation $\cdot$ as the natural product of fuzzy sets, then the system $(A, \cdot)$ is a product MV-algebra. We define a state $s : A \to [0,1]$ by the equality $s(f) = \int_0^1 f(x)\,dx$, for any element $f$ of $A$. It is easy to see that the pairs $X = (f_1, f_2)$ and $Y = (g_1, g_2)$, where $f_1(x) = x$, $f_2(x) = 1 - x$, $g_1(x) = x^2$, $g_2(x) = 1 - x^2$, $x \in [0,1]$, are partitions in $(A, \cdot)$ with the state values $\frac{1}{2}, \frac{1}{2}$ and $\frac{1}{3}, \frac{2}{3}$ of the corresponding elements, respectively. The join of the partitions $X$ and $Y$ is the system $X \vee Y = (f_1 \cdot g_1, f_1 \cdot g_2, f_2 \cdot g_1, f_2 \cdot g_2)$ with the state values $\frac{1}{4}, \frac{1}{4}, \frac{1}{12}, \frac{5}{12}$ of the corresponding elements. Using Formula (7), it can be computed that $H_{1/2}^s(X \vee Y) \doteq 1.903$ bit and $H_2^s(X \vee Y) \doteq 1.710$ bit. We have $H_{1/2}^s(X) = H_2^s(X) = 1$ bit, $H_{1/2}^s(Y) \doteq 0.958$ bit, and $H_2^s(Y) \doteq 0.848$ bit. It can be seen that inequality (9) holds.
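A numerical re-check of Example 6 and of inequality (9) is straightforward; the sketch below evaluates Formula (7) on the state values computed in the example (the helper name is ours).

```python
import math

def renyi_entropy(p, q):
    # Formula (7) in bits, for q in (0, 1) or (1, infinity)
    return math.log2(sum(pi ** q for pi in p)) / (1 - q)

joint = [1/4, 1/4, 1/12, 5/12]   # state values of X v Y
for q in (1/2, 2):
    lhs = renyi_entropy(joint, q)
    rhs = max(renyi_entropy([1/2, 1/2], q), renyi_entropy([1/3, 2/3], q))
    print(q, round(lhs, 3), lhs >= rhs)   # ~1.903 and ~1.710 bit; True in both cases
```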
Theorem 4.
If partitions $X, Y$ in a product MV-algebra $(A, \cdot)$ are statistically independent with respect to $s$, then $H_q^s(X \vee Y) = H_q^s(X) + H_q^s(Y)$.
Proof. 
Suppose that $X = (x_1, x_2, \dots, x_n)$, $Y = (y_1, y_2, \dots, y_m)$. Let us calculate:
$$H_q^s(X \vee Y) = \frac{1}{1-q}\log\sum_{i=1}^{n}\sum_{j=1}^{m} s(x_i \cdot y_j)^q = \frac{1}{1-q}\log\left(\sum_{i=1}^{n} s(x_i)^q\sum_{j=1}^{m} s(y_j)^q\right) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q + \frac{1}{1-q}\log\sum_{j=1}^{m} s(y_j)^q = H_q^s(X) + H_q^s(Y).$$

4. The Conditional Rényi Entropy in a Product MV-Algebra

In this section, we introduce the concept of the conditional Rényi entropy $H_q^s(X/Y)$ of partitions in a product MV-algebra $(A, \cdot)$, analogously to [6]. It is shown that the proposed definition is consistent, in the case of the limit of $q$ going to 1, with the conditional Shannon entropy defined by Equation (3). Subsequently, using the proposed notion of conditional Rényi entropy, we define the Rényi information about a partition $X$ in a partition $Y$.
Let $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ be two partitions in a product MV-algebra $(A, \cdot)$, and let $y_j \in Y$ be fixed. If we consider the function $s_{X/y_j} : X \to \mathbb{R}$ defined, for every $x_i \in X$, by $s_{X/y_j}(x_i) = s(x_i/y_j)$, then we have
$$\|s_{X/y_j}\|_q = \left(\sum_{i=1}^{n} s(x_i/y_j)^q\right)^{\frac{1}{q}}.$$
Definition 11.
Let $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ be partitions in $(A, \cdot)$. We define the conditional Rényi entropy of order $q$, where $q \in (0,1) \cup (1,\infty)$, of $X$ given $Y$ by the formula
$$H_q^s(X/Y) = \frac{q}{1-q}\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right).$$
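A small Python sketch of Definition 11 (the input conventions and the helper name are ours): it takes the joint state values $s(x_i \cdot y_j)$ and the values $s(y_j)$ and returns $H_q^s(X/Y)$ in bits.

```python
import math

def conditional_renyi_entropy(s_joint, s_y, q):
    # s_joint[i][j] = s(x_i . y_j), s_y[j] = s(y_j), q in (0, 1) or (1, infinity)
    total = 0.0
    for j, syj in enumerate(s_y):
        if syj == 0:
            continue
        # q-norm of the conditional state values s(x_i / y_j) = s(x_i . y_j) / s(y_j)
        norm_q = sum((row[j] / syj) ** q for row in s_joint) ** (1 / q)
        total += syj * norm_q
    return (q / (1 - q)) * math.log2(total)

# data of Example 6: X = (f1, f2), Y = (g1, g2)
print(conditional_renyi_entropy([[1/4, 1/4], [1/12, 5/12]], [1/3, 2/3], 2))
# ~0.832 bit, which is <= H_2^s(X) = 1 bit, in line with the monotonicity proved below (Theorem 6)
```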
Remark 3.
In the same way as in the unconditional case, it can be verified that the conditional Rényi entropy $H_q^s(X/Y)$ is always nonnegative. Let $X = (x_1, x_2, \dots, x_n)$ be any partition in $(A, \cdot)$, and let $E = (u)$. Since $s_{X/u}(x_i) = s(x_i/u) = s(x_i) = s_X(x_i)$, for $i = 1, 2, \dots, n$, it holds that $\|s_{X/u}\|_q = \|s_X\|_q$, and consequently
$$H_q^s(X/E) = \frac{q}{1-q}\log\left(s(u)\,\|s_{X/u}\|_q\right) = \frac{q}{1-q}\log\left(\|s_X\|_q\right) = H_q^s(X).$$
Proposition 2.
Let $X = (x_1, x_2, \dots, x_n)$ be a partition in a product MV-algebra $(A, \cdot)$ and let $s : A \to [0,1]$ be a state. Then:
(i) 
$\sum_{i=1}^{n} s(x_i \cdot y) = s(y)$, for any element $y \in A$;
(ii) 
$\sum_{i=1}^{n} s(x_i/y) = 1$, for any element $y \in A$ such that $s(y) > 0$.
Proof. 
The proof of claim (i) can be found in [33]. If $y \in A$ such that $s(y) > 0$, then using the previous equality, we get
$$\sum_{i=1}^{n} s(x_i/y) = \frac{1}{s(y)}\sum_{i=1}^{n} s(x_i \cdot y) = \frac{s(y)}{s(y)} = 1.$$
Theorem 5.
Let $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ be partitions in $(A, \cdot)$. Then
$$\lim_{q \to 1} H_q^s(X/Y) = -\sum_{i=1}^{n}\sum_{j=1}^{m} s(x_i \cdot y_j)\log\frac{s(x_i \cdot y_j)}{s(y_j)}.$$
Proof. 
For every $q \in (0,1) \cup (1,\infty)$, we have
$$H_q^s(X/Y) = \frac{q}{1-q}\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right) = \frac{1}{\frac{1}{q}-1}\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right) = \frac{f(q)}{g(q)},$$
where $f$ and $g$ are continuous functions defined, for every $q \in (0,\infty)$, by the equalities
$$f(q) = \log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right), \qquad g(q) = \frac{1}{q} - 1.$$
The functions $f$ and $g$ are differentiable and, evidently, $\lim_{q \to 1} g(q) = 0$. It can also easily be verified that $\lim_{q \to 1} f(q) = 0$. Indeed, if we put $\delta = \{j;\ s(y_j) > 0\}$, then using Proposition 2, we get
$$\lim_{q \to 1} f(q) = \log\left(\sum_{j=1}^{m} s(y_j)\sum_{i=1}^{n} s(x_i/y_j)\right) = \log\left(\sum_{j \in \delta} s(y_j)\sum_{i=1}^{n}\frac{s(x_i \cdot y_j)}{s(y_j)}\right) = \log\left(\sum_{j \in \delta}\sum_{i=1}^{n} s(x_i \cdot y_j)\right) = \log\left(\sum_{j \in \delta} s(y_j)\right) = \log\left(\sum_{j=1}^{m} s(y_j)\right) = \log 1 = 0.$$
Using L'Hôpital's rule, it follows that $\lim_{q \to 1} H_q^s(X/Y) = \lim_{q \to 1}\frac{f'(q)}{g'(q)}$, under the assumption that the right-hand side exists. Let us calculate the derivatives of the functions $f$ and $g$. We have $\frac{d}{dq}g(q) = -\frac{1}{q^2}$ and $\frac{d}{dq}f(q) = \frac{h'(q)}{h(q)\ln 2}$, where $h$ is the continuous function defined, for every $q \in (0,\infty)$, by the formula $h(q) = \sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q$, with the continuous derivative $h'$ for which it holds that
$$h'(q) = \sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\left[-\frac{1}{q^2}\ln\sum_{i=1}^{n} s(x_i/y_j)^q + \frac{1}{q}\cdot\frac{1}{\sum_{i=1}^{n} s(x_i/y_j)^q}\sum_{i=1}^{n} s(x_i/y_j)^q\ln s(x_i/y_j)\right].$$
Analogously as in the proof of Theorem 1, we used the identity $b^\alpha = e^{\alpha\ln b}$ to calculate the derivative of the function $h$.
We get
$$\lim_{q \to 1} f'(q) = \frac{1}{\ln 2}\sum_{j=1}^{m} s(y_j)\sum_{i=1}^{n} s(x_i/y_j)\ln s(x_i/y_j) = \sum_{j=1}^{m}\sum_{i=1}^{n} s(x_i \cdot y_j)\log\frac{s(x_i \cdot y_j)}{s(y_j)}.$$
It follows that
$$\lim_{q \to 1} H_q^s(X/Y) = \lim_{q \to 1}\frac{f'(q)}{g'(q)} = -\sum_{i=1}^{n}\sum_{j=1}^{m} s(x_i \cdot y_j)\log\frac{s(x_i \cdot y_j)}{s(y_j)},$$
which is the conditional Shannon entropy of X given Y defined by Equation (3). ☐
Theorem 6 (monotonicity).
Let $X$ and $Y$ be partitions in a product MV-algebra $(A, \cdot)$. Then
$$H_q^s(X/Y) \leq H_q^s(X).$$
Proof. 
Let $X = (x_1, x_2, \dots, x_n)$, $Y = (y_1, y_2, \dots, y_m)$. Then, by Proposition 2, it holds that $s(x_i) = \sum_{j=1}^{m} s(x_i \cdot y_j)$, for $i = 1, 2, \dots, n$. Suppose that $q \in (1,\infty)$. Then, using the triangle inequality for the $q$-norm, we get
$$\sum_{i=1}^{n} s(x_i)^q = \sum_{i=1}^{n}\left(\sum_{j=1}^{m} s(x_i \cdot y_j)\right)^q = \left(\left(\sum_{i=1}^{n}\left(\sum_{j=1}^{m} s(x_i \cdot y_j)\right)^q\right)^{\frac{1}{q}}\right)^q \leq \left(\sum_{j=1}^{m}\left(\sum_{i=1}^{n} s(x_i \cdot y_j)^q\right)^{\frac{1}{q}}\right)^q = \left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right)^q.$$
It follows that
$$\log\sum_{i=1}^{n} s(x_i)^q \leq \log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right)^q,$$
and consequently
$$H_q^s(X) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q \geq \frac{1}{1-q}\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right)^q = \frac{q}{1-q}\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right) = H_q^s(X/Y).$$
For the case where $q \in (0,1)$, we put $r = \frac{1}{q} > 1$. By writing the Rényi entropy in terms of the $\frac{1}{q}$-norm and using the triangle inequality for the $\frac{1}{q}$-norm, we get
$$H_q^s(X) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q = \frac{r}{r-1}\log\sum_{i=1}^{n} s(x_i)^{\frac{1}{r}} = \frac{r}{r-1}\log\sum_{i=1}^{n}\left(\sum_{j=1}^{m} s(x_i \cdot y_j)\right)^{\frac{1}{r}} \geq \frac{r}{r-1}\log\left(\sum_{j=1}^{m}\left(\sum_{i=1}^{n} s(x_i \cdot y_j)^{\frac{1}{r}}\right)^r\right)^{\frac{1}{r}} = \frac{r}{r-1}\log\left(\sum_{j=1}^{m} s(y_j)\left(\sum_{i=1}^{n} s(x_i/y_j)^{\frac{1}{r}}\right)^r\right)^{\frac{1}{r}} = \frac{1}{r-1}\log\left(\sum_{j=1}^{m} s(y_j)\left(\sum_{i=1}^{n} s(x_i/y_j)^{\frac{1}{r}}\right)^r\right) = \frac{q}{1-q}\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right) = H_q^s(X/Y).$$
Theorem 7.
If partitions $X, Y$ in a product MV-algebra $(A, \cdot)$ are statistically independent with respect to $s$, then $H_q^s(X/Y) = H_q^s(X)$.
Proof. 
Suppose that $X = (x_1, x_2, \dots, x_n)$, $Y = (y_1, y_2, \dots, y_m)$, and put $\delta = \{j;\ s(y_j) > 0\}$. Since it holds that $\sum_{j \in \delta} s(y_j) = \sum_{j=1}^{m} s(y_j) = 1$, we get
$$H_q^s(X/Y) = \frac{q}{1-q}\log\left(\sum_{j=1}^{m} s(y_j)\left(\sum_{i=1}^{n} s(x_i/y_j)^q\right)^{\frac{1}{q}}\right) = \frac{q}{1-q}\log\left(\sum_{j \in \delta} s(y_j)\left(\sum_{i=1}^{n}\frac{s(x_i)^q\,s(y_j)^q}{s(y_j)^q}\right)^{\frac{1}{q}}\right) = \frac{q}{1-q}\log\left(\sum_{j \in \delta} s(y_j)\left(\sum_{i=1}^{n} s(x_i)^q\right)^{\frac{1}{q}}\right) = \frac{1}{1-q}\log\sum_{i=1}^{n} s(x_i)^q = H_q^s(X).$$
Theorem 8.
Let $X, Y$ be partitions in a product MV-algebra $(A, \cdot)$, and let $q_1, q_2 \in (0,1) \cup (1,\infty)$. Then $q_1 \geq q_2$ implies $H_{q_1}^s(X/Y) \leq H_{q_2}^s(X/Y)$.
Proof. 
Suppose that $X = (x_1, x_2, \dots, x_n)$, $Y = (y_1, y_2, \dots, y_m)$, and $q_1, q_2 \in (1,\infty)$ with $q_1 \geq q_2$. Then the claim is equivalent to the inequality
$$\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_{q_1}\right)^{\frac{q_1}{q_1-1}} \geq \left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_{q_2}\right)^{\frac{q_2}{q_2-1}}. \quad (11)$$
We prove this inequality by applying the Jensen inequality twice. First, we apply the Jensen inequality to the function $\varphi_1$ defined, for every $x \in [0,\infty)$, by $\varphi_1(x) = x^{\frac{q_1(q_2-1)}{q_2(q_1-1)}}$. The assumption $q_1 \geq q_2$ implies that $\frac{q_1(q_2-1)}{q_2(q_1-1)} \leq 1$, hence the function $\varphi_1$ is concave. Therefore, if we put $c_j = s(y_j)$ and $a_j = \|s_{X/y_j}\|_{q_1}$, $j = 1, 2, \dots, m$, in the inequality (6), we obtain
$$\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_{q_1}\right)^{\frac{q_1}{q_1-1}} = \left[\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_{q_1}\right)^{\frac{q_1(q_2-1)}{q_2(q_1-1)}}\right]^{\frac{q_2}{q_2-1}} \geq \left[\sum_{j=1}^{m} s(y_j)\left(\|s_{X/y_j}\|_{q_1}\right)^{\frac{q_1(q_2-1)}{q_2(q_1-1)}}\right]^{\frac{q_2}{q_2-1}} = \left[\sum_{j=1}^{m} s(y_j)\left(\sum_{i=1}^{n} s(x_i/y_j)\,s(x_i/y_j)^{q_1-1}\right)^{\frac{q_2-1}{q_2(q_1-1)}}\right]^{\frac{q_2}{q_2-1}}.$$
Next, we apply the Jensen inequality to the function $\varphi_2$ defined, for every $x \in [0,\infty)$, by $\varphi_2(x) = x^{\frac{q_2-1}{q_1-1}}$. The assumption $q_1 \geq q_2$ implies $\frac{q_2-1}{q_1-1} \leq 1$, hence the function $\varphi_2$ is concave. Put $c_i = s(x_i/y_j)$ and $a_i = s(x_i/y_j)^{q_1-1}$, $i = 1, 2, \dots, n$, in the inequality (6). Note that, according to Proposition 2, it holds that $\sum_{i=1}^{n} c_i = 1$. By the Jensen inequality we get
$$\left[\sum_{j=1}^{m} s(y_j)\left(\sum_{i=1}^{n} s(x_i/y_j)\,s(x_i/y_j)^{q_1-1}\right)^{\frac{q_2-1}{q_2(q_1-1)}}\right]^{\frac{q_2}{q_2-1}} \geq \left[\sum_{j=1}^{m} s(y_j)\left(\sum_{i=1}^{n} s(x_i/y_j)^{q_2}\right)^{\frac{1}{q_2}}\right]^{\frac{q_2}{q_2-1}} = \left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_{q_2}\right)^{\frac{q_2}{q_2-1}}.$$
By combining the previous results, we obtain the inequality (11). Analogously, we can prove the inequality for the case where $q_1, q_2 \in (0,1)$. Finally, the case where $q_1 \in (1,\infty)$ and $q_2 \in (0,1)$ follows by transitivity. ☐
In the following theorem, a weak chain rule for the Rényi entropy of partitions in a product MV-algebra $(A, \cdot)$ is given.
Theorem 9.
Let $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ be partitions in a product MV-algebra $(A, \cdot)$. Then
$$H_q^s(X \vee Y) \leq H_q^s(X/Y) + \log\beta,$$
where $\beta = \max\left\{\frac{1}{s(y_j)};\ y_j \in \mathrm{supp}(s),\ j = 1, 2, \dots, m\right\}$.
Proof. 
Put $\delta = \{j;\ s(y_j) > 0\}$. The assertion follows by applying the Jensen inequality to the function $\varphi$ defined by $\varphi(x) = x^q$, $x \in [0,\infty)$, and putting $c_j = s(y_j)$, $a_j = \|s_{X/y_j}\|_q$, $j = 1, 2, \dots, m$.
Let $q \in (0,1)$. Then the function $\varphi$ is concave, and therefore we get
$$\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right)^q \geq \sum_{j=1}^{m} s(y_j)\left(\|s_{X/y_j}\|_q\right)^q = \sum_{j \in \delta} s(y_j)\frac{\sum_{i=1}^{n} s(x_i \cdot y_j)^q}{s(y_j)^q} = \sum_{j \in \delta}\left(\frac{1}{s(y_j)}\right)^{q-1}\sum_{i=1}^{n} s(x_i \cdot y_j)^q \geq \sum_{j \in \delta}\beta^{q-1}\sum_{i=1}^{n} s(x_i \cdot y_j)^q.$$
It follows that
$$H_q^s(X/Y) = \frac{q}{1-q}\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right) \geq \frac{1}{1-q}\log\left(\beta^{q-1}\sum_{j \in \delta}\sum_{i=1}^{n} s(x_i \cdot y_j)^q\right) = -\log\beta + \frac{1}{1-q}\log\sum_{j=1}^{m}\sum_{i=1}^{n} s(x_i \cdot y_j)^q = -\log\beta + H_q^s(X \vee Y).$$
Consider now the case where $q \in (1,\infty)$. Then the function $\varphi$ is convex, and therefore we have
$$\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right)^q \leq \sum_{j=1}^{m} s(y_j)\left(\|s_{X/y_j}\|_q\right)^q = \sum_{j \in \delta}\left(\frac{1}{s(y_j)}\right)^{q-1}\sum_{i=1}^{n} s(x_i \cdot y_j)^q \leq \sum_{j \in \delta}\beta^{q-1}\sum_{i=1}^{n} s(x_i \cdot y_j)^q.$$
Thus
$$q\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right) \leq (q-1)\log\beta + \log\sum_{j=1}^{m}\sum_{i=1}^{n} s(x_i \cdot y_j)^q.$$
Since $1 - q < 0$, for $q \in (1,\infty)$, we get
$$H_q^s(X/Y) = \frac{q}{1-q}\log\left(\sum_{j=1}^{m} s(y_j)\,\|s_{X/y_j}\|_q\right) \geq \frac{q-1}{1-q}\log\beta + \frac{1}{1-q}\log\sum_{j=1}^{m}\sum_{i=1}^{n} s(x_i \cdot y_j)^q = -\log\beta + H_q^s(X \vee Y).$$
Remark 4.
Let $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ be partitions in $(A, \cdot)$. Since $X \vee Y = Y \vee X$, the inequality $H_q^s(X \vee Y) \leq H_q^s(Y/X) + \log\alpha$ also holds, where $\alpha = \max\left\{\frac{1}{s(x_i)};\ x_i \in \mathrm{supp}(s),\ i = 1, 2, \dots, n\right\}$.
Corollary 2.
Let $X = (x_1, x_2, \dots, x_n)$ and $Y = (y_1, y_2, \dots, y_m)$ be partitions in $(A, \cdot)$. Then:
(i) 
$H_q^s(X \vee Y) \leq H_q^s(X) + \log\beta$;
(ii) 
$H_q^s(X \vee Y) \leq H_q^s(Y) + \log\alpha$,
where $\beta = \max\left\{\frac{1}{s(y_j)};\ y_j \in \mathrm{supp}(s),\ j = 1, 2, \dots, m\right\}$ and $\alpha = \max\left\{\frac{1}{s(x_i)};\ x_i \in \mathrm{supp}(s),\ i = 1, 2, \dots, n\right\}$.
Proof. 
The claim is a direct consequence of Theorems 6 and 9. ☐
Definition 12.
Let $X, Y$ be partitions in $(A, \cdot)$. We define the Rényi information of order $q$, where $q \in (0,1) \cup (1,\infty)$, about $X$ in $Y$ by the formula
$$I_q^s(X, Y) = H_q^s(X) - H_q^s(X/Y).$$
Theorem 10.
Let $X, Y$ be partitions in $(A, \cdot)$. Then $\lim_{q \to 1} I_q^s(X, Y) = I_s(X, Y)$, where $I_s(X, Y)$ is the mutual information of the partitions $X, Y$ defined by Equation (4).
Proof. 
The claim is obtained as a direct consequence of Theorems 1 and 5. ☐
Theorem 11.
For arbitrary partitions $X, Y$ in $(A, \cdot)$, it holds that $I_q^s(X, Y) \geq 0$. Moreover, if $X, Y$ are statistically independent with respect to $s$, then $I_q^s(X, Y) = 0$.
Proof. 
The claim is a direct consequence of Theorems 6 and 7. ☐

5. The Rényi Divergence in a Product MV-Algebra

In this section, we introduce the concept of the Rényi divergence in a product MV-algebra $(A, \cdot)$. We prove basic properties of this quantity and, for illustration, we provide some numerical examples.
Definition 13.
Let $s, t$ be states on a given product MV-algebra $(A, \cdot)$, and let $X = (x_1, x_2, \dots, x_n)$ be a partition in $(A, \cdot)$ such that $t(x_i) > 0$, for $i = 1, 2, \dots, n$. Then we define the Rényi divergence $D_q^X(s\|t)$ of order $q$, where $q \in (0,1) \cup (1,\infty)$, with respect to $X$ as the number
$$D_q^X(s\|t) = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q}. \quad (13)$$
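A minimal Python sketch of Formula (13) (the helper name is ours); it assumes $t(x_i) > 0$ for all $i$ and $q \in (0,1) \cup (1,\infty)$.

```python
import math

def renyi_divergence(s_vals, t_vals, q):
    # Formula (13) in bits; requires t_i > 0 and q != 1
    return math.log2(sum(s ** q * t ** (1 - q)
                         for s, t in zip(s_vals, t_vals))) / (q - 1)

print(renyi_divergence([1/3, 2/3], [1/3, 2/3], 2))  # 0.0, since D_q^X(s||s) = 0
```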
Remark 5.
The logarithm in Formula (13) is taken to the base 2 and information is measured in bits. It is easy to see that $D_q^X(s\|s) = 0$. Namely,
$$D_q^X(s\|s) = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\,s(x_i)^{1-q} = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i) = \frac{1}{q-1}\log 1 = 0.$$
The following theorem states that for $q \to 1$ the Rényi divergence $D_q^X(s\|t)$ converges to the Kullback–Leibler divergence $D_X(s\|t)$ defined by Formula (5).
Theorem 12.
Let $s, t$ be states on a given product MV-algebra $(A, \cdot)$, and let $X = (x_1, x_2, \dots, x_n)$ be a partition in $(A, \cdot)$ such that $t(x_i) > 0$, for $i = 1, 2, \dots, n$. Then
$$\lim_{q \to 1} D_q^X(s\|t) = \sum_{i=1}^{n} s(x_i)\log\frac{s(x_i)}{t(x_i)}.$$
Proof. 
For every $q \in (0,1) \cup (1,\infty)$, we have
$$D_q^X(s\|t) = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} = \frac{f(q)}{g(q)},$$
where $f, g$ are continuous functions defined, for every $q \in (0,\infty)$, in the following way:
$$f(q) = \log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q}, \qquad g(q) = q - 1.$$
By the continuity of the functions $f, g$, we have $\lim_{q \to 1} f(q) = f(1) = \log\sum_{i=1}^{n} s(x_i)\,t(x_i)^0 = \log 1 = 0$ and $\lim_{q \to 1} g(q) = g(1) = 0$. Using L'Hôpital's rule, we get that $\lim_{q \to 1} D_q^X(s\|t) = \lim_{q \to 1}\frac{f'(q)}{g'(q)}$, under the assumption that the right-hand side exists. Since $g'(q) = 1$ and $f'(q) = \frac{h'(q)}{h(q)\ln 2}$, where
$$h(q) = \sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q}, \qquad h'(q) = \sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q}\ln\frac{s(x_i)}{t(x_i)},$$
we obtain
$$\lim_{q \to 1} D_q^X(s\|t) = \lim_{q \to 1} f'(q) = \frac{1}{\ln 2}\sum_{i=1}^{n} s(x_i)\ln\frac{s(x_i)}{t(x_i)} = \sum_{i=1}^{n} s(x_i)\log\frac{s(x_i)}{t(x_i)}.$$
The following theorem states that the Rényi entropy $H_q^s(X)$ can be expressed in terms of the Rényi divergence $D_q^X(s\|t)$ of the state $s$ from a state $t$ that is uniform over $X$.
Theorem 13.
Let $s$ be a state on a product MV-algebra $(A, \cdot)$, and let $X = (x_1, x_2, \dots, x_n)$ be a partition in $(A, \cdot)$. If a state $t : A \to [0,1]$ is uniform over $X$, i.e., $t(x_i) = \frac{1}{n}$, for $i = 1, 2, \dots, n$, then
$$H_q^s(X) = H_q^t(X) - D_q^X(s\|t) = \log n - D_q^X(s\|t).$$
Proof. 
Let us calculate:
$$D_q^X(s\|t) = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\left(\frac{1}{n}\right)^{1-q} = \frac{1}{q-1}\log\left(\frac{1}{n}\right)^{1-q} + \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q = \log n - H_q^s(X).$$
On the other hand, as shown in Example 3, it holds that $H_q^t(X) = \log n$.
Example 7.
Consider any product MV-algebra $(A, \cdot)$ and a state $s : A \to [0,1]$. In Example 5, we dealt with the partition $X = (x, u - x)$ with $s(x) = \frac{1}{3}$, and we calculated the Rényi entropy $H_{1/2}^s(X) \doteq 0.958$ bit. Let $t : A \to [0,1]$ be a state uniform over $X$, i.e., $t(x) = t(u - x) = \frac{1}{2}$. Then the Rényi divergence $D_q^X(s\|t)$ of order $q = \frac{1}{2}$ is $D_{1/2}^X(s\|t) = -2\log\left(\left(\tfrac{1}{3}\right)^{\frac{1}{2}}\left(\tfrac{1}{2}\right)^{\frac{1}{2}} + \left(\tfrac{2}{3}\right)^{\frac{1}{2}}\left(\tfrac{1}{2}\right)^{\frac{1}{2}}\right) \doteq 0.04186$ bit, and $H_{1/2}^t(X) - D_{1/2}^X(s\|t) \doteq 1 - 0.04186 \doteq 0.958$ bit. So the equality $H_{1/2}^s(X) = H_{1/2}^t(X) - D_{1/2}^X(s\|t)$ holds.
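The identity of Theorem 13 can be verified on the data of Example 7 with a few lines of Python (a sketch; the helper re-implements Formula (13)):

```python
import math

def renyi_divergence(s_vals, t_vals, q):
    # Formula (13) in bits
    return math.log2(sum(s ** q * t ** (1 - q)
                         for s, t in zip(s_vals, t_vals))) / (q - 1)

s_vals, t_vals, q = [1/3, 2/3], [1/2, 1/2], 1/2
d = renyi_divergence(s_vals, t_vals, q)
print(round(d, 5))                   # ~0.04186 bit
print(round(math.log2(2) - d, 3))    # ~0.958 bit = H_{1/2}^s(X), as Theorem 13 states
```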
Theorem 14.
Let $s, t$ be states on a given product MV-algebra $(A, \cdot)$, and let $X = (x_1, x_2, \dots, x_n)$ be a partition in $(A, \cdot)$ such that $s(x_i) > 0$ and $t(x_i) > 0$, for $i = 1, 2, \dots, n$. Then $D_q^X(s\|t) \geq 0$, with equality if and only if $s(x_i) = t(x_i)$, for $i = 1, 2, \dots, n$.
Proof. 
The inequality follows by applying the Jensen inequality to the function $\varphi$ defined by $\varphi(x) = x^{1-q}$, $x \in [0,\infty)$, and putting $c_i = s(x_i)$, $a_i = \frac{t(x_i)}{s(x_i)}$, for $i = 1, 2, \dots, n$.
Let us consider the case of $q \in (1,\infty)$. Then $1 - q < 0$, therefore the function $\varphi$ is convex. By the Jensen inequality we obtain
$$1 = \left(\sum_{i=1}^{n} t(x_i)\right)^{1-q} = \left(\sum_{i=1}^{n} s(x_i)\frac{t(x_i)}{s(x_i)}\right)^{1-q} \leq \sum_{i=1}^{n} s(x_i)\left(\frac{t(x_i)}{s(x_i)}\right)^{1-q} = \sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q}, \quad (14)$$
and consequently
$$\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} \geq \log 1 = 0.$$
Since $\frac{1}{q-1} > 0$, for $q \in (1,\infty)$, it follows that
$$D_q^X(s\|t) = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} \geq 0.$$
Let $q \in (0,1)$. Then the function $\varphi$ is concave, and therefore we get $\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} \leq 1$, and consequently
$$\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} \leq \log 1 = 0.$$
Since $\frac{1}{q-1} < 0$, for $q \in (0,1)$, it follows that
$$D_q^X(s\|t) = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} \geq 0.$$
The equality in (14) holds if and only if $\frac{t(x_i)}{s(x_i)}$ is constant, for $i = 1, 2, \dots, n$, i.e., if and only if $t(x_i) = k\,s(x_i)$, for $i = 1, 2, \dots, n$. By summing over all $i = 1, 2, \dots, n$, we get $\sum_{i=1}^{n} t(x_i) = k\sum_{i=1}^{n} s(x_i)$, which implies that $k = 1$. Hence $s(x_i) = t(x_i)$, for $i = 1, 2, \dots, n$. Therefore, we conclude that $D_q^X(s\|t) = 0$ if and only if $s(x_i) = t(x_i)$, for $i = 1, 2, \dots, n$.
Corollary 3.
Let $s$ be a state on a product MV-algebra $(A, \cdot)$, and let $X = (x_1, x_2, \dots, x_n)$ be a partition in $(A, \cdot)$ such that $s(x_i) > 0$, for $i = 1, 2, \dots, n$. Then $H_q^s(X) \leq \log n$, with equality if and only if the state $s$ is uniform over $X$.
Proof. 
Let $t : A \to [0,1]$ be a state uniform over $X$, i.e., $t(x_i) = \frac{1}{n}$, for $i = 1, 2, \dots, n$. Then, according to Theorems 14 and 13, it holds that
$$0 \leq D_q^X(s\|t) = \log n - H_q^s(X),$$
which implies that $H_q^s(X) \leq \log n$. Since the equality $D_q^X(s\|t) = 0$ holds if and only if $s(x_i) = t(x_i)$, for $i = 1, 2, \dots, n$, the equality $H_q^s(X) = \log n$ holds if and only if the state $s$ is uniform over $X$. ☐
Example 8.
Let us consider any product MV-algebra $(A, \cdot)$ and states $s_1, s_2, s_3$ defined on it. Let $x \in A$ with $s_i(x) = p_i$, where $p_i \in (0,1)$, for $i = 1, 2, 3$. Then $s_i(u - x) = 1 - p_i$, for $i = 1, 2, 3$. Further, we consider the partition $X = (x, u - x)$ in $(A, \cdot)$. Putting $p_1 = \frac{1}{2}$, $p_2 = \frac{1}{3}$, $p_3 = \frac{1}{4}$, and $q = 2$, we obtain
$$D_2^X(s_1\|s_2) = \log\left[\left(\tfrac{1}{2}\right)^2\left(\tfrac{1}{3}\right)^{-1} + \left(\tfrac{1}{2}\right)^2\left(\tfrac{2}{3}\right)^{-1}\right] \doteq 0.1699\ \mathrm{bit};$$
$$D_2^X(s_1\|s_3) = \log\left[\left(\tfrac{1}{2}\right)^2\left(\tfrac{1}{4}\right)^{-1} + \left(\tfrac{1}{2}\right)^2\left(\tfrac{3}{4}\right)^{-1}\right] \doteq 0.4150\ \mathrm{bit};$$
$$D_2^X(s_2\|s_3) = \log\left[\left(\tfrac{1}{3}\right)^2\left(\tfrac{1}{4}\right)^{-1} + \left(\tfrac{2}{3}\right)^2\left(\tfrac{3}{4}\right)^{-1}\right] \doteq 0.0525\ \mathrm{bit}.$$
For $q = \frac{1}{2}$, we get $D_q^X(s_1\|s_2) \doteq 0.04186$ bit, $D_q^X(s_1\|s_3) \doteq 0.1$ bit, and $D_q^X(s_2\|s_3) \doteq 0.0122$ bit. Evidently, in both of the cases mentioned above,
$$D_q^X(s_1\|s_3) > D_q^X(s_1\|s_2) + D_q^X(s_2\|s_3).$$
This means that the triangle inequality for the Rényi divergence does not hold in general; consequently, the Rényi divergence is not a metric in the true sense of the word.
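The failure of the triangle inequality in Example 8 can be confirmed numerically with the following sketch (the helper re-implements Formula (13)):

```python
import math

def renyi_divergence(s_vals, t_vals, q):
    # Formula (13) in bits
    return math.log2(sum(s ** q * t ** (1 - q)
                         for s, t in zip(s_vals, t_vals))) / (q - 1)

s1, s2, s3 = [1/2, 1/2], [1/3, 2/3], [1/4, 3/4]
for q in (2, 1/2):
    d12 = renyi_divergence(s1, s2, q)
    d13 = renyi_divergence(s1, s3, q)
    d23 = renyi_divergence(s2, s3, q)
    print(q, d13 > d12 + d23)   # True for q = 2 and for q = 1/2
```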
Theorem 15.
Let $s, t$ be states on a given product MV-algebra $(A, \cdot)$, and let $X = (x_1, x_2, \dots, x_n)$ be a partition in $(A, \cdot)$ such that $s(x_i) > 0$ and $t(x_i) > 0$, for $i = 1, 2, \dots, n$. Then:
(i) 
$0 < q < 1$ implies $D_q^X(s\|t) \leq D_X(s\|t)$;
(ii) 
$q > 1$ implies $D_q^X(s\|t) \geq D_X(s\|t)$,
where
$$D_X(s\|t) = \sum_{i=1}^{n} s(x_i)\log\frac{s(x_i)}{t(x_i)}.$$
Proof. 
We prove the claims by applying the Jensen inequality to the concave function $\varphi$ defined, for every $x \in (0,\infty)$, by $\varphi(x) = \log x$. If we put $c_i = s(x_i)$ and $a_i = \left(\frac{s(x_i)}{t(x_i)}\right)^{q-1}$, for $i = 1, 2, \dots, n$, in the inequality (6), we get
$$\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} = \log\sum_{i=1}^{n} s(x_i)\left(\frac{s(x_i)}{t(x_i)}\right)^{q-1} \geq \sum_{i=1}^{n} s(x_i)\log\left(\frac{s(x_i)}{t(x_i)}\right)^{q-1} = (q-1)\sum_{i=1}^{n} s(x_i)\log\frac{s(x_i)}{t(x_i)}.$$
(i)
Suppose that $0 < q < 1$. Then $\frac{1}{q-1} < 0$, and therefore we obtain
$$D_q^X(s\|t) = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} \leq \sum_{i=1}^{n} s(x_i)\log\frac{s(x_i)}{t(x_i)} = D_X(s\|t).$$
(ii)
Suppose that $q > 1$. Then $\frac{1}{q-1} > 0$, and we get
$$D_q^X(s\|t) = \frac{1}{q-1}\log\sum_{i=1}^{n} s(x_i)^q\,t(x_i)^{1-q} \geq \sum_{i=1}^{n} s(x_i)\log\frac{s(x_i)}{t(x_i)} = D_X(s\|t).$$
To illustrate the result of the previous theorem, let us consider the following example, which is a continuation of Example 6.
Example 9.
Consider the product MV-algebra $(A, \cdot)$ from Example 6 and the real functions $F_1, F_2$ defined by $F_1(x) = x$, $F_2(x) = x^2$, for every $x \in [0,1]$. On the product MV-algebra $(A, \cdot)$ we define two states $s_1, s_2$ by the formulas $s_i(f) = \int_0^1 f(x)\,dF_i(x)$, $i = 1, 2$, for any element $f$ of $A$. In addition, we consider the partition $X = (I_{[0,\,1/3)}, I_{[1/3,\,1]})$ in $(A, \cdot)$. It can easily be calculated that it has the $s_1$-state values $\frac{1}{3}, \frac{2}{3}$ of the corresponding elements, and the $s_2$-state values $\frac{1}{9}, \frac{8}{9}$ of the corresponding elements. Using Formula (5), the Kullback–Leibler divergences can be calculated as $D_X(s_1\|s_2) \doteq 0.25163$ bit and $D_X(s_2\|s_1) \doteq 0.19281$ bit. Further, by simple calculations we obtain: $D_2^X(s_1\|s_2) \doteq 0.58496$ bit, $D_2^X(s_2\|s_1) \doteq 0.28951$ bit, $D_{1/3}^X(s_1\|s_2) \doteq 0.0707$ bit, and $D_{1/3}^X(s_2\|s_1) \doteq 0.07736$ bit. As can be seen, for $q = \frac{1}{3}$, we have $D_q^X(s_1\|s_2) < D_X(s_1\|s_2)$ and $D_q^X(s_2\|s_1) < D_X(s_2\|s_1)$, and for $q = 2$, we have $D_q^X(s_1\|s_2) > D_X(s_1\|s_2)$ and $D_q^X(s_2\|s_1) > D_X(s_2\|s_1)$. The obtained results correspond to the claim of Theorem 15. Based on these results, we can also see that the Rényi divergence $D_q^X(s\|t)$, as well as the Kullback–Leibler divergence $D_X(s\|t)$, is not symmetric.
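The relations observed in Example 9 can be re-checked with the following short sketch (helper names are ours; the state values are those computed in the example):

```python
import math

def kl_divergence(s_vals, t_vals):
    # Formula (5) in bits
    return sum(s * math.log2(s / t) for s, t in zip(s_vals, t_vals) if s > 0)

def renyi_divergence(s_vals, t_vals, q):
    # Formula (13) in bits
    return math.log2(sum(s ** q * t ** (1 - q)
                         for s, t in zip(s_vals, t_vals))) / (q - 1)

s1_vals, s2_vals = [1/3, 2/3], [1/9, 8/9]
print(round(kl_divergence(s1_vals, s2_vals), 5))          # ~0.25163 bit
print(round(renyi_divergence(s1_vals, s2_vals, 1/3), 4))  # ~0.0707 bit, below the KL value
print(round(renyi_divergence(s1_vals, s2_vals, 2), 5))    # ~0.58496 bit, above the KL value
```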

6. Conclusions

The aim of this paper was to generalize the results concerning the Shannon entropy and the Kullback–Leibler divergence in a product MV-algebra given in [22] and [24] to the case of Rényi entropy and Rényi divergence. The results are contained in Section 3, Section 4 and Section 5. In Section 3, we introduced the concept of the Rényi entropy $H_q^s(X)$ of a partition $X$ in a product MV-algebra $(A, \cdot)$, and we examined the properties of this entropy measure. In Section 4, we defined the conditional Rényi entropy of partitions in the studied algebraic structure. It was shown that the proposed concepts are consistent, in the case of the limit $q \to 1$, with the Shannon entropy of partitions defined and studied in [22]. Moreover, it was shown that the Rényi entropy $H_q^s(X)$ as well as the conditional Rényi entropy $H_q^s(X/Y)$ are monotonically decreasing functions of the parameter $q$. In the final part of Section 4, we defined the Rényi information about a partition $X$ in a partition $Y$ as an example of further usage of the proposed concept of conditional Rényi entropy. Section 5 was devoted to the study of Rényi divergence in $(A, \cdot)$. We proved that the Kullback–Leibler divergence of states defined on a product MV-algebra can be derived from their Rényi divergence as the limiting case for $q$ going to 1. Theorem 14 allows interpreting the Rényi divergence as a distance measure between two states (over the same partition) defined on a given product MV-algebra. In addition, we investigated the relationship between the Rényi entropy and the Rényi divergence (Theorem 13), as well as the relationship between the Rényi divergence and the Kullback–Leibler divergence (Theorem 15), in a product MV-algebra.
In the proofs we used L’Hôpital’s rule, the triangle inequality of q -norm, and the Jensen inequality. To illustrate the results, we have provided several numerical examples.
As has been shown in Example 1, the model studied in this article generalizes the classical case; that is, the Rényi entropy and the Rényi divergence defined in this paper are generalizations of the classical concepts of Rényi entropy and Rényi divergence. On the other hand, MV-algebras make it possible to study more general situations. We note that MV-algebras can, for example, be interpreted by means of Ulam games (see, e.g., [35,36,37]). The obtained results could therefore be useful for research on this subject.
In Example 2, we mentioned that the full tribe of fuzzy sets represents a special case of product MV-algebras; therefore, the results of the article can be applied immediately to this important class of fuzzy sets. We recall that by a fuzzy subset of a non-empty set $\Omega$ (cf. [38]) we understand any map $f : \Omega \to [0,1]$. The value $f(\omega)$ is interpreted as the degree of belongingness of $\omega \in \Omega$ to the considered fuzzy set $f$. In [39], Atanassov generalized fuzzy sets by introducing the idea of an intuitionistic fuzzy set, a set having with each member a degree of belongingness as well as a degree of non-belongingness. From the application point of view, it is interesting that for a given class $F$ of intuitionistic fuzzy sets, an MV-algebra $A$ can be constructed such that $F$ can be embedded into $A$. Moreover, an operation of product on $F$ can be introduced in such a way that the corresponding MV-algebra is a product MV-algebra. Therefore, all the results of this article can also be applied to the intuitionistic fuzzy case.

Author Contributions

Conceptualization, D.M.; Formal analysis, D.M. and B.R.; Investigation, D.M. and B.R.; Writing—original draft, D.M.; Writing—review & editing, B.R. Both authors have read and approved the final manuscript.
Funding: This research received no external funding.

Acknowledgments

The authors thank the editor and the referees for their valuable comments and suggestions. The authors are grateful to Constantine the Philosopher University in Nitra for covering the costs to publish in open access.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  3. Gray, R.M. Entropy and Information Theory; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  4. Rényi, A. On measures of information and entropy. In Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability, Oakland, CA, USA, 20 June–30 July 1960; pp. 547–561. [Google Scholar]
  5. Teixeira, A.; Matos, A.; Antunes, L. Conditional Rényi entropies. IEEE Trans. Inf. Theory 2012, 58, 4273–4277. [Google Scholar] [CrossRef]
  6. Fehr, S.; Berens, S. On the conditional Rényi entropy. IEEE Trans. Inf. Theory 2014, 60, 6801–6810. [Google Scholar] [CrossRef]
  7. Ilič, V.M.; Djordjevič, I.B.; Stankovič, M. On a General Definition of Conditional Rényi Entropies. Proceedings 2018, 2, 166. [Google Scholar] [CrossRef]
  8. Arimoto, S. Information Measures and Capacity of Order α for Discrete Memoryless Channels. In Topics in Information Theory; Colloquia Mathematica Societatis János Bolyai; Csiszár, I., Elias, P., Eds.; János Bolyai Mathematical Society and North-Holland: Budapest, Hungary, 1977; Volume 16, pp. 493–519. [Google Scholar]
  9. Jizba, P.; Arimitsu, T. The world according to Rényi: thermodynamics of multifractal systems. Ann. Phys. 2004, 312, 17–59. [Google Scholar] [CrossRef] [Green Version]
  10. Renner, R.; Wolf, S. Advances in Cryptology—ASIACRYPT 2005. In Proceedings of the 11th International Conference on the Theory and Application of Cryptology and Information Security, Chennai, India, 4–8 December 2005; Chapter Simple and Tight Bounds for Information Reconciliation and Privacy Amplification; Springer: Berlin/Heidelberg, Germany, 2005; pp. 199–216. [Google Scholar]
  11. Chang, C.C. Algebraic analysis of many valued logics. Trans. Am. Math. Soc. 1958, 88, 467–490. [Google Scholar] [CrossRef]
  12. Mundici, D. Interpretation of AFC*-algebras in Łukasiewicz sentential calculus. J. Funct. Anal. 1986, 56, 889–894. [Google Scholar]
  13. Mundici, D. Advanced Łukasiewicz Calculus and MV-Algebras; Springer: Dordrecht, The Netherlands, 2011. [Google Scholar]
  14. Di Nola, A.; Dvurečenskij, A.; Hyčko, M.; Manara, C. Entropy on Effect Algebras with the Riesz Decomposition Property II: MV-Algebras. Kybernetika 2005, 41, 161–176. [Google Scholar]
  15. Riečan, B. Kolmogorov–Sinaj entropy on MV-algebras. Int. J. Theory Phys. 2005, 44, 1041–1052. [Google Scholar] [CrossRef]
  16. Barbieri, G.; Weber, H. Measures on clans and on MV-algebras. In Handbook of Measure Theory; Pap, E., Ed.; Elsevier: Amsterdam, The Netherlands, 2002; pp. 911–945. [Google Scholar]
  17. Riečan, B.; Mundici, D. Probability on MV-algebras. In Handbook of Measure Theory; Pap, E., Ed.; Elsevier: Amsterdam, The Netherlands, 2002; pp. 869–910. [Google Scholar]
  18. Montagna, F. An algebraic approach to propositional fuzzy logic. J. Log. Lang. Inf. 2000, 9, 91–124. [Google Scholar] [CrossRef]
  19. Riečan, B. On the product MV-algebras. Tatra Mt. Math. 1999, 16, 143–149. [Google Scholar]
  20. Jakubík, J. On product MV-algebras. Czech. Math. J. 2002, 52, 797–810. [Google Scholar] [CrossRef]
  21. Riečan, B.; Neubrunn, T. Integral, Measure and Ordering; Springer: Dordrecht, The Netherlands, 1997. [Google Scholar]
  22. Petrovičová, J. On the entropy of partitions in product MV-algebras. Soft Comput. 2000, 4, 41–44. [Google Scholar] [CrossRef]
  23. Petrovičová, J. On the entropy of dynamical systems in product MV-algebras. Fuzzy Sets Syst. 2001, 121, 347–351. [Google Scholar] [CrossRef]
  24. Markechová, D.; Riečan, B. Kullback-Leibler Divergence and Mutual Information of Partitions in Product MV Algebras. Entropy 2017, 19, 267. [Google Scholar] [CrossRef]
  25. Gluschankof, D. Cyclic ordered groups and MV-algebras. Czechoslovak Math. J. 1993, 43, 249–263. [Google Scholar]
  26. Cattaneo, G.; Lombardo, F. Independent axiomatization for MV-algebras. Tatra Mt. Math. 1998, 15, 227–232. [Google Scholar]
  27. Riečan, B. Analysis of Fuzzy Logic Models. In Intelligent Systems; Koleshko, V.M., Ed.; InTech Janeza Trdine 9: Rijeka, Croatia, 2012; pp. 219–244. [Google Scholar]
  28. Anderson, M.; Feil, T. Lattice Ordered Groups; Kluwer: Dordrecht, The Netherlands, 1988. [Google Scholar]
  29. Mundici, D. Averaging the truth-value in Łukasiewicz logic. Stud. Logica 1995, 55, 113–127. [Google Scholar] [CrossRef]
  30. Riečan, B. On the probability theory on product MV-algebras. Soft Comput. 2000, 4, 49–57. [Google Scholar] [CrossRef]
  31. Kroupa, T. Conditional probability on MV-algebras. Fuzzy Sets Syst. 2005, 149, 369–381. [Google Scholar] [CrossRef]
  32. Vrábelová, M. A note on the conditional probability on product MV algebras. Soft Comput. 2000, 4, 58–61. [Google Scholar] [CrossRef]
  33. Markechová, D.; Mosapour, B.; Ebrahimzadeh, A. Logical Divergence, Logical Entropy, and Logical Mutual Information in Product MV-Algebras. Entropy 2018, 20, 129. [Google Scholar] [CrossRef]
  34. Markechová, D.; Riečan, B. Entropy of fuzzy partitions and entropy of fuzzy dynamical systems. Entropy 2016, 18, 19. [Google Scholar] [CrossRef]
  35. Mundici, D. Logic of infinite quantum systems. Int. J. Theory Phys. 1993, 32, 1941–1955. [Google Scholar] [CrossRef]
  36. Cignoli, R.; D’Ottaviano, I.M.L.; Mundici, D. Algebraic Foundations of Many-Valued Reasoning; Trends in Logic; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000; Volume 7. [Google Scholar]
  37. Gerla, V. Conditioning a state by a Łukasiewicz event: A probabilistic approach to Ulam games. Theory Comput. Sci. 2000, 230, 149–166. [Google Scholar] [CrossRef]
  38. Zadeh, L.A. Fuzzy Sets. Inf. Control. 1965, 8, 338–358. [Google Scholar] [CrossRef]
  39. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
