Logical Divergence, Logical Entropy, and Logical Mutual Information in Product MV-Algebras

Using the logical entropy function, we propose in this paper a new kind of entropy in product MV-algebras, namely the logical entropy and its conditional version. Fundamental characteristics of these quantities are shown, and the results regarding the logical entropy are subsequently used to define the logical mutual information of experiments in the studied case. In addition, we define the logical cross entropy and logical divergence for the examined situation and prove basic properties of the suggested quantities. To illustrate the results, we provide several numerical examples.


Introduction
In all areas of empirical research, it is very important to know how much information we gain by the realization of experiments. As is known, the measure of information is entropy; the standard approach is based on Shannon entropy [1]. The standard mathematical model of an experiment in information theory [2] is a measurable partition of a probability space. Recall that a measurable partition of a probability space (X, S, P) is a sequence A = {A_1, . . . , A_n} of measurable subsets of X such that ∪_{i=1}^n A_i = X and A_i ∩ A_j = ∅ whenever i ≠ j. The Shannon entropy of the measurable partition A = {A_1, . . . , A_n}, with probabilities p_i = P(A_i), i = 1, . . . , n, of the corresponding elements, is the number h_S(A) = ∑_{i=1}^n S(p_i), where S : [0, 1] → [0, ∞) is the Shannon entropy function defined by the formula:

S(x) = −x · log x, if x > 0, and S(0) = 0.    (1)

In classical theory, partitions are defined within Cantor set theory. However, it has turned out that, in many cases, partitions defined in the context of fuzzy set theory [3] are more suitable for solving real problems. Hence, numerous suggestions have been put forward to generalize classical partitions to fuzzy partitions [4–10]. Fuzzy partitions provide a mathematical model of random experiments whose outcomes are unclear, inaccurately defined events. The Shannon entropy of fuzzy partitions has been studied by many authors; we refer the reader to, e.g., [11–21].
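As a quick numerical illustration of the definition above (our addition, not part of the original text), the Shannon entropy of a finite probability vector can be computed directly; the choice of logarithm base 2 is an assumption here, since the formula is also commonly stated with the natural logarithm.

```python
import math

def shannon_entropy(probs):
    """h_S(A) = sum_i S(p_i), where S(x) = -x * log2(x) for x > 0 and S(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A partition into four equally likely blocks carries 2 bits of information.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # → 2.0
```

A degenerate partition with a single block of probability 1 yields entropy 0, reflecting an experiment whose outcome is certain.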
The notion of an MV-algebra, originally proposed by Chang in [22] in order to give an algebraic counterpart of the Łukasiewicz many-valued logic [23] (MV = many valued), generalizes some classes of fuzzy sets. MV-algebras have been investigated by numerous international research groups [24][25][26][27][28]. A Shannon entropy theory for MV-algebras was created in [29,30]. The fuzzy set theory is a rapidly evolving field of theoretical and applied mathematical research. At present the subjects of intensive study are also other algebraic structures based on the fuzzy set theory, such as D-posets [31][32][33], effect algebras [34], and A-posets [35,36]. Some results concerning Shannon's entropy on these structures have been provided, e.g., in [37][38][39].
An important case of MV-algebras is the so-called product MV-algebra (see, e.g., [40][41][42][43][44][45]). This notion was proposed independently by two authors: Riečan [40] and Montagna [41]. A Shannon entropy theory for product MV-algebras was provided in [30,46,47]. We note that in the recently published paper [48], the results regarding the Shannon entropy of partitions in product MV-algebras were exploited to define the notions of Kullback-Leibler divergence and mutual information of partitions in product MV-algebras. The Kullback-Leibler divergence (often shortened to K-L divergence) was proposed in [49] as the distance between two probability distributions and it is currently one of the most basic quantities in information theory.
When addressing some special issues, instead of Shannon entropy it is preferable to use an approach based on the concept of logical entropy [50–56]. If A = {A_1, . . . , A_n} is a measurable partition with probabilities p_1, . . . , p_n of the corresponding elements, then the logical entropy of A is defined by the formula:

h^l(A) = ∑_{i=1}^n p_i (1 − p_i) = 1 − ∑_{i=1}^n p_i².    (2)

In [50], the author gives a history of the logical entropy formula h^l(A) = 1 − ∑_{i=1}^n p_i². It is interesting that Alan Turing, who worked during the Second World War at the Bletchley Park facility in England, used the formula ∑_{i=1}^n p_i² in his famous cryptanalysis work. The same formula was independently used by Polish cryptanalysts in their work [57] on the Enigma. The relationship between the Shannon entropy and the logical entropy is examined in [50]. In addition, the notions of logical cross entropy and logical divergence were proposed in the cited paper. For some recent works related to the concept of logical entropy on algebraic structures based on fuzzy set theory, we refer the reader to, for example, [58–65].
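The logical entropy formula admits a direct probabilistic reading: ∑ p_i² is the chance that two independent draws agree, so 1 − ∑ p_i² is the chance they differ. A minimal sketch (our illustration, not from the paper):

```python
def repeat_probability(probs):
    """Sum of squared probabilities: the chance two independent draws agree."""
    return sum(p * p for p in probs)

def logical_entropy(probs):
    """h^l(A) = 1 - sum_i p_i**2 = sum_i p_i * (1 - p_i): chance two draws differ."""
    return 1.0 - repeat_probability(probs)

print(logical_entropy([0.5, 0.5]))  # → 0.5
```

Unlike Shannon entropy, logical entropy is bounded above by 1 − 1/n for an n-block partition, with the maximum attained at the uniform distribution.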
The purpose of this article is to extend the study of logical entropy provided in [50] to the case of product MV-algebras. The remainder of the article is structured as follows. In Section 2 we present the basic concepts, terminology and known results used in the article. The results of the paper are given in the succeeding three sections. In Section 3, we define the logical entropy of partitions in product MV-algebras and its conditional version and examine their properties. In the following section, the results of Section 3 are exploited to define the concept of logical mutual information for the studied situation. Using the notion of logical conditional mutual information, we present chain rules for logical mutual information in product MV-algebras. In Section 5, we define the logical cross entropy and the logical divergence of states defined on product MV-algebras and examine properties of these quantities. The results are explained with several examples to illustrate the developed theory. The final section contains a brief summary. It is shown that by replacing the Shannon entropy function (Equation (1)) by the logical entropy function (Equation (2)) we obtain results analogous to those given in [48].

Preliminaries
The aim of the section is to provide basic concepts, terminology and the known results used in the paper.
Example 2. Let (L, +, ≤) be a commutative lattice ordered group (shortly, l-group), i.e., (L, +) is a commutative group, (L, ≤) is a partially ordered set that is a lattice, and a ≤ b implies a + c ≤ b + c. Let u ∈ L be a strong unit of L (i.e., for each a ∈ L there exists a positive integer n satisfying the condition a ≤ nu) such that u > 0, where 0 is the neutral element of (L, +). On the interval [0, u] = {a ∈ L; 0 ≤ a ≤ u}, put a ⊕ b = (a + b) ∧ u, a ⊗ b = (a + b − u) ∨ 0, and a⊥ = u − a. Then M_0(L, u) = ([0, u], ⊕, ⊗, ⊥, 0, u) is an MV-algebra.
Evidently, if a, b ∈ L are such that a + b ≤ u, then a ⊕ b = a + b. Moreover, it can be seen that the condition a ⊗ b = 0 is equivalent to the condition a + b ≤ u.
By the following Mundici representation theorem, every MV-algebra M can be identified with the unit interval [0, u] of a unique (up to isomorphism) commutative lattice ordered group L with a strong unit u. We say that L is the l-group corresponding to M.

Theorem 1 [66]. Let M be an MV-algebra. Then there exists a commutative lattice ordered group L with a strong unit u such that M = M_0(L, u), and (L, u) is unique up to isomorphism.

Definition 2 [47]. Let M = (M, ⊕, ⊗, ⊥, 0, 1) be an MV-algebra. A partition in M is an n-tuple α = (a_1, . . . , a_n) of elements of M with the property a_1 + . . . + a_n = u, where + is the addition in the l-group L corresponding to M and u is a strong unit of L.
In the paper we shall deal with product MV-algebras. The definition of product MV-algebra (cf. [40,41]), as well as the previous definition of partition in MV-algebra, is based on Mundici's theorem, i.e., the MV-algebra operation ⊕ in the following definition, and in what follows, is substituted by the group operation + in the commutative lattice ordered group L that corresponds to the considered MV-algebra M. Analogously, the element u is a strong unit of L and ≤ is the partial-ordering relation in L.
Definition 3 [40,41]. A product MV-algebra is an algebraic structure (M, ⊕, ⊗, ·, ⊥, 0, 1), where (M, ⊕, ⊗, ⊥, 0, 1) is an MV-algebra and · is a commutative and associative binary operation on M with the following properties:

(i) u · a = a for every a ∈ M;
(ii) if a, b, c ∈ M such that a + b ≤ u, then c · a + c · b ≤ u and c · (a + b) = c · a + c · b.

For brevity, we will write (M, ·) instead of (M, ⊕, ⊗, ·, ⊥, 0, 1). Further, we consider a state defined on (M, ·), which plays the role of a probability measure on M. We note that a relevant probability theory for product MV-algebras was developed in [44]; see also [27,45].
Definition 4 [44]. A state on a product MV-algebra (M, ·) is a map s : M → [0, 1] with the following properties:

(i) s(u) = 1;
(ii) if a, b ∈ M such that a + b ≤ u, then s(a + b) = s(a) + s(b).

Notice that the disjointness of the elements a, b ∈ M is expressed in the previous definition by the condition a + b ≤ u (or, equivalently, by a ≤ u − b). According to the Mundici theorem, this condition can be formulated equivalently as a + b ∈ M, or also as a ⊗ b = 0. As is customary, we will write ∑_{i=1}^n a_i instead of a_1 + . . . + a_n. Let s : M → [0, 1] be a state. By induction we get that, for any elements a_1, . . . , a_n ∈ M such that ∑_{i=1}^n a_i ≤ u, it holds that s(∑_{i=1}^n a_i) = ∑_{i=1}^n s(a_i).

In the system of all partitions of (M, ·), we define the refinement partial order in the standard way (cf. [23]). If α = (a_1, . . . , a_n) and β = (b_1, . . . , b_m) are two partitions of (M, ·), then we write β ≻ α (and we say that β is a refinement of α) if there exists a partition {I(1), I(2), . . . , I(n)} of the set {1, 2, . . . , m} such that a_i = ∑_{j∈I(i)} b_j, for i = 1, . . . , n. Further, we define α ∨ β = (a_i · b_j; i = 1, . . . , n, j = 1, . . . , m).
The partition α ∨ β represents a combined experiment consisting of a realization of the considered experiments α and β. If α_1, α_2, . . . , α_n are partitions in a product MV-algebra (M, ·), then we put ∨_{i=1}^n α_i = α_1 ∨ α_2 ∨ . . . ∨ α_n.
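The refinement condition can be checked numerically at the level of state values. In this hedged sketch (our own illustration), a partition is represented only by the vector (s(a_1), . . . , s(a_n)) of values a state assigns to its elements, and `grouping` plays the role of the index partition {I(1), . . . , I(n)}:

```python
def refines(beta_vals, alpha_vals, grouping, tol=1e-12):
    """Check s(a_i) = sum_{j in I(i)} s(b_j) for each block I(i) in `grouping`,
    the state-value consequence of beta being a refinement of alpha."""
    return all(
        abs(alpha_vals[i] - sum(beta_vals[j] for j in block)) <= tol
        for i, block in enumerate(grouping)
    )

# beta splits each element of alpha = (a_1, a_2) into two parts.
beta = [0.1, 0.2, 0.3, 0.4]
alpha = [0.3, 0.7]
print(refines(beta, alpha, [[0, 1], [2, 3]]))  # → True
```

Note this checks only the necessary numerical condition on state values; the refinement relation itself is defined on the algebra elements.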

Logical Entropy of Partitions in Product MV-Algebras
In this section we define the logical entropy and the logical conditional entropy of partitions in a product MV-algebra and derive their properties.

Definition 5.
Let α = (a_1, . . . , a_n) be a partition in a product MV-algebra (M, ·), and let s : M → [0, 1] be a state. Then we define the logical entropy of α with respect to the state s by the formula:

h^l_s(α) = ∑_{i=1}^n s(a_i)(1 − s(a_i)).    (3)

Remark 1. Evidently, the logical entropy h^l_s(α) is always nonnegative, and it attains the maximum value 1 − 1/n for the state s uniform over α = (a_1, . . . , a_n), i.e., when s(a_i) = 1/n for i = 1, . . . , n. Equation (3) can also be written in the following form:

h^l_s(α) = 1 − ∑_{i=1}^n s(a_i)².    (4)

Example 3. Let (M, ·) be a product MV-algebra and s : M → [0, 1] be a state. If we put ε = (u), then ε is a partition of (M, ·) with the property α ≻ ε for every partition α of (M, ·). Its logical entropy is h^l_s(ε) = 0. Let a ∈ M with s(a) = p, where p ∈ (0, 1). Obviously, the pair α = (a, u − a) is a partition of (M, ·). Since s(u − a) = 1 − p, the logical entropy is h^l_s(α) = p(1 − p) + (1 − p)p = 2p(1 − p). If we put p = 1/2, then we have h^l_s(α) = 1/2.
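The computations in Example 3 and Remark 1 can be checked numerically. The sketch below is our illustration, with a state again represented by the vector of its values over α; it verifies h^l_s((a, u − a)) = 2p(1 − p) and the maximum value 1 − 1/n in the uniform case:

```python
def logical_entropy(state_vals):
    """Logical entropy as 1 minus the sum of squared state values."""
    return 1.0 - sum(v * v for v in state_vals)

p = 0.3
# two-element partition (a, u - a) with s(a) = p
assert abs(logical_entropy([p, 1 - p]) - 2 * p * (1 - p)) < 1e-12

# Uniform state over n elements attains the maximum 1 - 1/n.
n = 4
print(logical_entropy([1 / n] * n))  # → 0.75
```

For p = 1/2 the two-element partition gives 2 · (1/2) · (1/2) = 1/2, matching the example.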
In the proofs we shall use the following propositions.
(i) Since, by Proposition 1, it holds that ∑_{j=1}^m s(a_i · b_j) = s(a_i) for i = 1, . . . , n, we obtain: Therefore: (ii) Combining Equation (7) with the previous property, we obtain claim (ii).

Logical Mutual Information in Product MV-Algebras
In this section, the previous results are exploited to introduce the concept of logical mutual information of partitions in product MV-algebras and its conditional version and to derive their properties. In particular, using the concept of logical conditional mutual information we formulate chain rules for the examined situation.

Remark 7.
Recall that the product MV-algebra presented in the previous example represents an important class of fuzzy sets; it is called the full tribe of fuzzy sets (cf. [21,24,25]).

Theorem 7.
If partitions α and β of (M, ·) are statistically independent, i.e., s(a · b) = s(a) · s(b) for every a ∈ α, b ∈ β, then:

I^l_s(α, β) = h^l_s(α) · h^l_s(β).

Proof. Let α = (a_1, . . . , a_k), β = (b_1, . . . , b_l). Using Equations (12) and (4) we obtain:

I^l_s(α, β) = h^l_s(α) + h^l_s(β) − h^l_s(α ∨ β) = h^l_s(α) + h^l_s(β) − (1 − ∑_{i=1}^k ∑_{j=1}^l s(a_i)² s(b_j)²)
= h^l_s(α) + h^l_s(β) − (1 − (1 − h^l_s(α))(1 − h^l_s(β))) = h^l_s(α) · h^l_s(β).

As is known, one of the most significant properties of Shannon entropy is additivity: if partitions A, B are statistically independent, then h_S(A ∨ B) = h_S(A) + h_S(B), where A ∨ B = {A ∩ B; A ∈ A, B ∈ B}. In the case of logical entropy, the following property applies.

Theorem 8. If partitions α and β of (M, ·) are statistically independent, then:

h^l_s(α ∨ β) = h^l_s(α) + h^l_s(β) − h^l_s(α) · h^l_s(β).

Proof. As a consequence of Theorem 7 and Equation (12), we obtain:

h^l_s(α ∨ β) = h^l_s(α) + h^l_s(β) − I^l_s(α, β) = h^l_s(α) + h^l_s(β) − h^l_s(α) · h^l_s(β).

In the following two theorems, using the concept of logical conditional mutual information, chain rules for logical mutual information in product MV-algebras are established.

Definition 8. Let α, β, γ be partitions of (M, ·). The logical conditional mutual information of α and β assuming a realization of γ is defined by:

Remark 8. It is easy to show that I^l_s(α, β/γ) = I^l_s(β, α/γ).
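The behavior of logical entropy under independence can be confirmed numerically. In this sketch (our illustration; partitions are represented by the value vectors of the state over their elements, and independence is modeled by taking the joint values as products), we check that the logical entropy of the join satisfies h(α ∨ β) = h(α) + h(β) − h(α) · h(β), and that the mutual information h(α) + h(β) − h(α ∨ β) reduces to the product h(α) · h(β):

```python
def logical_entropy(vals):
    """Logical entropy as 1 minus the sum of squared state values."""
    return 1.0 - sum(v * v for v in vals)

def joint_independent(a_vals, b_vals):
    """Joint state values s(a_i . b_j) = s(a_i) * s(b_j) under independence."""
    return [x * y for x in a_vals for y in b_vals]

a, b = [0.2, 0.8], [0.5, 0.3, 0.2]
ha, hb = logical_entropy(a), logical_entropy(b)
hab = logical_entropy(joint_independent(a, b))

# Subadditivity-like law for independent partitions:
print(abs(hab - (ha + hb - ha * hb)) < 1e-12)  # → True
# Mutual information reduces to the product of the entropies:
print(abs((ha + hb - hab) - ha * hb) < 1e-12)  # → True
```

The key step is that the sum of squared joint values factorizes, ∑∑ s(a_i)² s(b_j)² = (∑ s(a_i)²)(∑ s(b_j)²), which is exactly why the correction term h(α) · h(β) appears instead of Shannon's clean additivity.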

Logical Cross Entropy and Logical Divergence in Product MV-Algebras
In this section, we define the notions of logical cross entropy and logical divergence in product MV-algebras. The proposed notions are analogies of the concepts of logical cross entropy and logical divergence introduced by Ellerman in [50]. For illustration, we provide some numerical examples.

Definition 10.
Let α = (a_1, . . . , a_n) be a partition in a product MV-algebra (M, ·), and let s, t ∈ S(M). We define the logical cross entropy of states s, t with respect to α by the formula:

h^l_α(s, t) = ∑_{i=1}^n s(a_i)(1 − t(a_i)).

Remark 9.
Since ∑_{i=1}^n s(a_i) = 1, we can also write:

h^l_α(s, t) = 1 − ∑_{i=1}^n s(a_i) t(a_i).
Evidently, the logical cross entropy h^l_α(s, t) is symmetric and always nonnegative. If the states s, t are identical over α (i.e., s(a_i) = t(a_i) for i = 1, 2, . . . , n), then h^l_α(s, t) = h^l_s(α).
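A small sketch (ours, with states again given by their value vectors over α) confirming the symmetry of the logical cross entropy and its reduction to the logical entropy for identical states:

```python
def logical_cross_entropy(s_vals, t_vals):
    """Logical cross entropy: 1 minus the sum of pairwise products of state values."""
    return 1.0 - sum(x * y for x, y in zip(s_vals, t_vals))

s = [0.5, 0.3, 0.2]
t = [0.25, 0.25, 0.5]

# Symmetric in the two states:
print(logical_cross_entropy(s, t) == logical_cross_entropy(t, s))  # → True
# Identical states over alpha: cross entropy equals the logical entropy of s.
print(abs(logical_cross_entropy(s, s) - (1 - sum(x * x for x in s))) < 1e-12)  # → True
```

The symmetry is immediate from the product form, in contrast to Shannon cross entropy, which is not symmetric.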
Definition 11. Let α = (a_1, . . . , a_n) be a partition in a product MV-algebra (M, ·), and s, t ∈ S(M). We define the logical divergence of states s, t with respect to α by the formula:

d^l_α(s, t) = (1/2) ∑_{i=1}^n (s(a_i) − t(a_i))².

Remark 10. It is evident that d^l_α(s, t) = d^l_α(t, s), and d^l_α(s, t) ≥ 0, with equality if and only if the states s, t are identical over α. As in the case of K-L divergence, the logical divergence is not a distance metric because it does not satisfy the triangle inequality (as shown in the example that follows). Notice, however, that its square root (with or without the 1/2 factor) is a natural distance metric.

Theorem 12.
Let α be a partition of a product MV-algebra (M, ·). Then, for all states s, t defined on (M, ·), it holds that:

d^l_α(s, t) = h^l_α(s, t) − (1/2)(h^l_s(α) + h^l_t(α)).

Proof. Assume that α = (a_1, . . . , a_n). Let us calculate:

d^l_α(s, t) = (1/2) ∑_{i=1}^n (s(a_i) − t(a_i))² = (1/2) ∑_{i=1}^n s(a_i)² − ∑_{i=1}^n s(a_i)t(a_i) + (1/2) ∑_{i=1}^n t(a_i)²
= (1 − ∑_{i=1}^n s(a_i)t(a_i)) − (1/2)(1 − ∑_{i=1}^n s(a_i)²) − (1/2)(1 − ∑_{i=1}^n t(a_i)²)
= h^l_α(s, t) − (1/2)(h^l_s(α) + h^l_t(α)).

Remark 11. As a simple consequence of the previous theorem and the logical information inequality d^l_α(s, t) ≥ 0 (with equality if and only if the states s, t are identical over α), we get that h^l_α(s, t) ≥ (1/2)(h^l_s(α) + h^l_t(α)), with equality if and only if the states s, t are identical over α. In particular, for states s_1, s_2 it holds that d^l_α(s_1, s_2) = h^l_α(s_1, s_2) − (1/2)(h^l_{s_1}(α) + h^l_{s_2}(α)) and h^l_α(s_1, s_2) ≥ (1/2)(h^l_{s_1}(α) + h^l_{s_2}(α)).
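The identity in Theorem 12 relating divergence, cross entropy, and the average of the two logical entropies can be checked numerically. In this sketch (our illustration; states are value vectors over α, and the divergence carries the 1/2 factor as in Remark 10):

```python
def logical_entropy(vals):
    """Logical entropy: 1 minus the sum of squared state values."""
    return 1.0 - sum(v * v for v in vals)

def logical_cross_entropy(s_vals, t_vals):
    """Logical cross entropy: 1 minus the sum of pairwise products."""
    return 1.0 - sum(x * y for x, y in zip(s_vals, t_vals))

def logical_divergence(s_vals, t_vals):
    """Logical divergence: half the sum of squared differences of state values."""
    return 0.5 * sum((x - y) ** 2 for x, y in zip(s_vals, t_vals))

s = [0.5, 0.3, 0.2]
t = [0.25, 0.25, 0.5]

lhs = logical_divergence(s, t)
rhs = logical_cross_entropy(s, t) - 0.5 * (logical_entropy(s) + logical_entropy(t))
print(abs(lhs - rhs) < 1e-12)  # → True
```

Since the divergence is a nonnegative quadratic in the value differences, the identity immediately yields the logical information inequality of Remark 11: the cross entropy is at least the average of the two logical entropies.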

Conclusions
In [48], the authors introduced the concepts of mutual information and K-L divergence in product MV-algebras and derived the fundamental properties of these quantities. Naturally, the presented theory is based on the Shannon entropy function (Equation (1)). The aim of this paper was to construct a relevant theory on product MV-algebras for the case when the Shannon entropy function is replaced by the logical entropy function (Equation (2)). The main results of the paper are contained in Sections 3-5.
In Section 3, we have proposed the concepts of logical entropy and logical conditional entropy of partitions in product MV-algebras and examined their properties. Among other results, the concavity of logical entropy has been proved. In Section 4, the notions of logical entropy and logical conditional entropy have been exploited to define the logical mutual information for the examined case of product MV-algebras. We have shown basic properties of these quantities. Moreover, chain rules for logical entropy and logical mutual information for the studied case of product MV-algebras were derived. In the final section, the notions of logical cross entropy and logical divergence in product MV-algebras were proposed. To illustrate the developed theory, several numerical examples are included in the paper.
As already mentioned in Section 4 (see Example 5), an important case of product MV-algebras is the full tribe M of fuzzy sets. We note that in [21] (see also [24,25]) the entropy of Shannon type on the full tribe M of fuzzy sets was examined. In a natural way, all results, based on the logical entropy function (2), provided by the theory developed in the paper may be applied also to the case of a full tribe of fuzzy sets.