Generalized Mutual Information

Mutual information is one of the essential building blocks of information theory. Yet it is finitely defined only for distributions with fast-decaying tails on a countable joint alphabet of two random elements. The unboundedness of mutual information over the general class of all distributions on a joint alphabet prevents its potential utility from being fully realized. This is in fact a void in the foundation of information theory that needs to be filled. This article proposes a family of generalized mutual information whose members, with the sole exception of Shannon's mutual information itself, 1) are finitely defined for each and every distribution of two random elements on a joint countable alphabet, and 2) enjoy all the utilities of a finite Shannon mutual information.


Introduction and Summary
Let Z be a random element on a countable alphabet Z = {z_k; k ≥ 1} with an associated distribution p = {p_k; k ≥ 1}. Let the cardinality of the support of p be denoted K = Σ_{k≥1} 1[p_k > 0], where 1[·] is the indicator function; K is possibly finite or infinite. Let P denote the family of all distributions on Z. Let (X, Y) be a pair of random elements on a joint countable alphabet X × Y = {(x_i, y_j); i ≥ 1, j ≥ 1} with an associated joint probability distribution p_{X,Y} = {p_{i,j}; i ≥ 1, j ≥ 1}, and let the two marginal distributions be respectively denoted p_X = {p_{i,·} = Σ_{j≥1} p_{i,j}; i ≥ 1} and p_Y = {p_{·,j} = Σ_{i≥1} p_{i,j}; j ≥ 1}. Let P_{X,Y} denote the family of all distributions on X × Y. Shannon (1948) offers two fundamental building blocks of information theory: Shannon's entropy,

    H = H(Z) = −Σ_{k≥1} p_k ln p_k,

and mutual information,

    MI = MI(X, Y) = H(X) + H(Y) − H(X, Y),

where H(X), H(Y) and H(X, Y) are entropies respectively defined with the distributions p_X, p_Y and p_{X,Y}.

* AMS 2000 Subject Classifications. Primary 60E10; secondary 94A15, 82B30. Keywords and phrases. Mutual information, Shannon's entropy, conditional distribution of total collision, generalized entropy, generalized mutual information.
Mutual information plays a central role in the theory and practice of modern data science for three basic reasons. First, the definition of MI does not rely on any metrization of the alphabet, nor does it require the letters of the alphabet to be ordinal. This generality allows it to be defined and used in data spaces beyond the real coordinate space ℝ^n, where random variables (as opposed to random elements) reside. Second, when X and Y are random variables assuming real values, that is, when the joint alphabet is metrized, MI(X, Y) captures linear as well as any non-linear stochastic association between X and Y; see Chapter 5 of Zhang (2017) for examples. Third, it offers a single-valued index of the stochastic association between two random elements: MI(X, Y) ≥ 0 for any probability distribution of X and Y on a joint alphabet, and MI(X, Y) = 0 if and only if X and Y are independent, under a wide class of general probability distributions.
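The quantities above are straightforward to compute for finite distributions. The following is a minimal sketch (the function names are mine, not the article's), with the usual convention 0 ln 0 = 0:

```python
import math

def entropy(p):
    # H = -sum_k p_k ln p_k (natural log), with the convention 0 ln 0 = 0.
    return -sum(q * math.log(q) for q in p if q > 0)

def mutual_information(joint):
    # MI(X, Y) = H(X) + H(Y) - H(X, Y) for a joint pmf given as a 2-D list.
    p_x = [sum(row) for row in joint]
    p_y = [sum(col) for col in zip(*joint)]
    h_xy = entropy([q for row in joint for q in row])
    return entropy(p_x) + entropy(p_y) - h_xy
```

For an independent joint distribution MI is 0; for a diagonal (one-to-one) distribution it equals the common marginal entropy.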
However, mutual information MI, in its current form, may not be finitely defined for joint distributions in a subclass of P_{X,Y}, partially due to the fact that any or all of the three Shannon entropies in the linear combination may be unbounded. The said unboundedness prevents the potential utility of mutual information from being fully realized, and hence is a deficiency of MI which leaves a void in P_{X,Y}. (More detailed arguments are provided in Section 2 below.) This article introduces a family of generalized mutual information indexed by a positive integer n ∈ ℕ, denoted I = {MI_n; n ≥ 1}, each of whose members, MI_n, is referred to as the n-th order mutual information.
All members of I, except MI_1 = MI, are finitely defined for each and every p_{X,Y} ∈ P_{X,Y}, and all of them preserve the utilities of Shannon's mutual information when it is finite.
The said deficiency of MI is due to the fact that Shannon's entropy may not be finite for "thick-tailed" distributions (with p_k decaying slowly in k) in P. To repair the deficiency of MI, the unboundedness of Shannon's entropy on a subset of P must be addressed, through some generalization in one way or another. The effort to generalize Shannon's entropy has been long and extensive in the existing literature. The main perspective of generalization in the existing literature is based on axiomatic characterization of Shannon's entropy. Interested readers may refer to Csiszár (2008) and Amigó, Balogh and Hernández (2018) for details and references therein. In a nutshell, with respect to the functional form H = Σ_{k≥1} h(p_k), under certain desirable axioms, for example those of Khinchin (1957) and Chakrabarti and Chakrabarty (2005), h(p) = −p ln p is uniquely determined up to a multiplicative constant; if the strong additivity axiom is relaxed to one of the weaker versions, say α-additivity or composability, then h(p) may take other forms, which give rise to Rényi's entropy, by Rényi (1961), and the Tsallis entropy, by Tsallis (1988). However, none of these generalization efforts seems to lead to an information measure on a joint alphabet possessing all the desirable properties of MI, in particular MI(X, Y) = 0 if and only if X and Y are independent, a property supported by an argument via the Kullback-Leibler divergence of Kullback and Leibler (1951).
Toward repairing the said deficiency of MI, a new perspective on generalizing Shannon's entropy is introduced in this article. In the new perspective, instead of searching for alternative forms of h(p) in H = Σ_{k≥1} h(p_k) under weaker axiomatic conditions, Shannon's entropy is applied not to the original underlying distribution p but to distributions induced by p. One particular set of such induced distributions is a family, each of whose members is referred to as a conditional distribution of total collision (CDOTC), indexed by n ∈ ℕ. It is shown that Shannon's entropy defined with every CDOTC induced by any p ∈ P is bounded above, provided that n ≥ 2. The boundedness of the generalized entropy allows mutual information to be defined with the CDOTCs of any degree n ≥ 2 for any p_{X,Y} ∈ P_{X,Y}. The resulting mutual information is referred to as the n-th order mutual information index and is denoted MI_n, which is shown to possess all the desired properties of MI but with boundedness guaranteed. The main results are given and established in Section 3, after several motivating arguments for the generalization of mutual information in Section 2.

Generalization Motivated
To further motivate the generalization of mutual information in this article, let the definition of mutual information be considered in a broader perspective. Inherited from the Kullback-Leibler divergence, mutual information on a joint alphabet,

    MI(X, Y) = Σ_{i≥1} Σ_{j≥1} p_{i,j} ln( p_{i,j} / (p_{i,·} × p_{·,j}) ),

is unbounded for a large subclass of distributions in P_{X,Y}. Example 1 below demonstrates the existence of such a subclass of joint distributions.
Example 1. Let p = {p_k; k ≥ 1} be a probability distribution with p_k > 0 for every k but unbounded entropy. Let p_{X,Y} = {p_{i,j}; i ≥ 1 and j ≥ 1} be such that p_{i,j} = p_i for all i = j and p_{i,j} = 0 for all i ≠ j, hence p_X = p_Y = p. In this case, MI(X, Y) = Σ_{k≥1} p_k ln( p_k / p_k^2 ) = −Σ_{k≥1} p_k ln p_k = H(Z) = ∞.

One of the most attractive properties of mutual information is that MI(X, Y) is finitely defined for all joint distributions such that p_{i,j} = p_{i,·} × p_{·,j} for all i ≥ 1 and j ≥ 1, and MI(X, Y) = 0 if and only if the two random elements X and Y are independent. However, the utility of mutual information goes beyond a mere indication of whether it is zero or not. The magnitude of mutual information is also of essential importance, although Shannon did not elaborate on that in his landmark paper, Shannon (1948). The said importance is perhaps best illustrated by the notion of the standardized mutual information, defined as κ(X, Y) = MI(X, Y)/H(X, Y), and Theorem 1 below. Before stating Theorem 1, however, Definition 1 below is needed.
Definition 1. Random elements X ∈ X and Y ∈ Y are said to have a one-to-one correspondence, or to be one-to-one corresponded, under a joint probability distribution p_{X,Y} on X × Y, if 1) for every i satisfying P(X = x_i) > 0, there exists a unique j such that P(Y = y_j | X = x_i) = 1, and 2) for every j satisfying P(Y = y_j) > 0, there exists a unique i such that P(X = x_i | Y = y_j) = 1.
Theorem 1. Let (X, Y) be a pair of random elements on alphabet X × Y with joint distribution p_{X,Y} such that H(X, Y) < ∞. Then 1) 0 ≤ κ(X, Y) ≤ 1, 2) κ(X, Y) = 0 if and only if X and Y are independent, and 3) κ(X, Y) = 1 if and only if X and Y are one-to-one corresponded.
A proof of Theorem 1 can be found on page 159 of Zhang (2017). Theorem 1 essentially maps independence of X and Y (the strongest form of unrelatedness) to κ = 0, one-to-one correspondence (the strongest form of relatedness) to κ = 1, and everything else in between. In so doing, the magnitude of mutual information is utilized in measuring the degree of dependence between pairs of random elements, which could lead to all sorts of practical tools for evaluating, ranking, and selecting variables in a data space.
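The two endpoints of Theorem 1 are easy to check numerically for finite joint distributions, where H(X, Y) < ∞ holds automatically. A minimal sketch (function names are mine):

```python
import math

def entropy(p):
    # Shannon's entropy with natural log, skipping zero masses.
    return -sum(q * math.log(q) for q in p if q > 0)

def kappa(joint):
    # Standardized mutual information kappa = MI(X, Y) / H(X, Y).
    p_x = [sum(row) for row in joint]
    p_y = [sum(col) for col in zip(*joint)]
    h_xy = entropy([q for row in joint for q in row])
    mi = entropy(p_x) + entropy(p_y) - h_xy
    return mi / h_xy

print(kappa([[0.25, 0.25], [0.25, 0.25]]))  # independent: kappa ~ 0
print(kappa([[0.5, 0.0], [0.0, 0.5]]))      # one-to-one:  kappa ~ 1
```

Independence lands at the lower endpoint and one-to-one correspondence at the upper endpoint, as the theorem asserts.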
It is important to note that the condition H(X, Y) < ∞ is essential in Theorem 1 since, obviously, without it κ may not be well defined. In fact, if H(X, Y) < ∞ is not imposed, even observing reasonable conventions such as 1/∞ = 0 and 0/∞ = 0, the statements of Theorem 1 may not be true. To see this, consider the following constructed example.
Example 2. Let p = {p_k; k ≥ 1} be a probability distribution with p_k > 0 for every k but unbounded entropy. Let p_{X,Y} = {p_{i,j}; i = 1 or 2 and j ≥ 1} be such that

    p_{i,j} = p_j   if i = 1 and j is odd,
    p_{i,j} = p_j   if i = 2 and j is even,
    p_{i,j} = 0     otherwise,

hence p_X = {p_{1,·}, p_{2,·}} = {Σ_{k odd} p_k, Σ_{k even} p_k} and p_Y = {p_{·,j} = p_j; j ≥ 1}. X and Y are obviously not independent, yet MI(X, Y) = H(X) < ∞ while H(X, Y) = H(Y) = ∞, so κ(X, Y) = MI(X, Y)/H(X, Y) = 0. Therefore Part 2 of Theorem 1 fails.
Example 2 indicates that mutual information in its current form is deprived of the potential utility of Theorem 1 for a large class of joint distributions and therefore leaves much to be desired.
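Example 2's failure mode can be seen numerically on truncations of the construction. The sketch below uses the thick-tailed choice p_j ∝ 1/(j (ln j)^2), j ≥ 2 (an illustrative choice of mine; the example only requires unbounded entropy): as the truncation point K grows, MI(X, Y) stays at H(X) ≤ ln 2 while H(X, Y) grows without bound, driving κ toward 0 even though X and Y are dependent.

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def example2_kappa(K):
    # Truncated thick-tailed pmf p_j, j = 2..K-1; row i = 1 carries the odd j,
    # row i = 2 the even j, exactly as in Example 2.
    w = [1.0 / (j * math.log(j) ** 2) for j in range(2, K)]
    s = sum(w)
    p = [wj / s for wj in w]
    odd = sum(q for j, q in zip(range(2, K), p) if j % 2 == 1)
    h_x = entropy([odd, 1 - odd])
    h_xy = entropy(p)  # the joint masses are exactly the p_j
    # MI = H(X) + H(Y) - H(X, Y) = H(X) here, since H(Y) = H(X, Y).
    return h_x / h_xy

# kappa shrinks toward 0 as the truncation K grows.
print([round(example2_kappa(K), 4) for K in (10**2, 10**4, 10**6)])
```

On the untruncated distribution the denominator is infinite, which is precisely how the conclusion of Theorem 1 is lost.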
Another argument for the generalization of mutual information can be made from a statistical perspective. In practice, mutual information often needs to be estimated from sample data. For statistical inference to be meaningful, the estimand MI(X, Y) needs to exist, i.e., MI(X, Y) < ∞.
More specifically, in testing the hypothesis of independence between X and Y, H_0: p_{X,Y} ∈ P_0, where P_0 ⊂ P_{X,Y} is the subclass of all joint distributions for independent X and Y on X × Y, MI(X, Y) needs to be finitely defined in an open neighborhood of P_0 in P_{X,Y}, or else the logical framework of statistical inference is not well supported. Let P_∞ be the subclass of P_{X,Y} such that MI(X, Y) = ∞. In general, it can be shown that P_∞ is dense in P_{X,Y} with respect to the p-norm for p ≥ 1. Specifically, for any p_{X,Y} ∈ P_0, there exists a sequence of distributions {p_{m,X,Y}; m ≥ 1} ⊂ P_∞ converging to p_{X,Y}, as Example 3 below illustrates.

Example 3. Let p_{X,Y} = {p_{i,j}; i = 1, 2 and j = 1, 2}, where p_{i,j} = 0.25 for all (i, j) such that 1 ≤ i ≤ 2 and 1 ≤ j ≤ 2. Obviously X and Y are independent under p_{X,Y}, that is, p_{X,Y} ∈ P_0.
Let p m,X,Y be constructed based on p X,Y as follows.
Remove an arbitrarily small quantity ε/4 > 0, where ε = 1/m, from each of the four positive probabilities in p_{X,Y}, so that each becomes p_{m,i,j} = 0.25 − ε/4 for all (i, j) such that 1 ≤ i ≤ 2 and 1 ≤ j ≤ 2. Extend the range of (i, j) to i ≥ 3 and j ≥ 3, and allocate the mass ε over the extended range according to

    p_{m,i,j} = c/[k(ln k)^2]   if i = j = k ≥ 3,
    p_{m,i,j} = 0               otherwise,

where c is such that Σ_{k≥3} c/[k(ln k)^2] = ε. Under the constructed {p_{m,i,j}}, for any ε = 1/m, X and Y are not independent, and the corresponding mutual information is infinite, since each cell on the extended diagonal contributes −p_{m,k,k} ln p_{m,k,k} to MI(X, Y), and Σ_{k≥3} ( c/[k(ln k)^2] ) ln( k(ln k)^2/c ) = ∞. However, noting that, as m → ∞, ε → 0 and hence c → 0, p_{m,X,Y} converges to p_{X,Y}. All things considered, it is therefore desirable to have a mutual information measure, say MI_n(X, Y), or for that matter a family of mutual information measures indexed by a positive integer n, such that MI_n(X, Y) < ∞ for all distributions in P_{X,Y}, and an accordingly defined standardized mutual information measure κ_n = κ_n(X, Y) such that the utility of Theorem 1 is preserved with κ_n in place of κ, for all distributions in P_{X,Y}.
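Example 3's construction can be tabulated numerically on a truncation of the extended diagonal. In the sketch below (function names mine; the diagonal support is cut off at K, so the infinite MI contribution appears as a sum that keeps growing with K), the ℓ1 distance between p_{m,X,Y} and p_{X,Y} is exactly 2ε = 2/m, while the diagonal cells alone contribute Σ −q_k ln q_k to MI(X, Y):

```python
import math

def example3(m, K):
    # Perturbed distribution p_m: the four corner cells lose eps/4 each, and
    # mass eps goes to diagonal cells (k, k), k = 3..K-1, proportionally to
    # 1/(k (ln k)^2). Returns (l1 distance from p, diagonal MI contribution).
    eps = 1.0 / m
    w = [1.0 / (k * math.log(k) ** 2) for k in range(3, K)]
    c = eps / sum(w)
    q = [c * wk for wk in w]                       # diagonal masses, summing to eps
    l1 = 4 * (eps / 4) + eps                       # ||p_m - p||_1 = 2 eps
    mi_diag = sum(-qk * math.log(qk) for qk in q)  # each cell adds -q ln q to MI
    return l1, mi_diag
```

Larger m pulls p_m arbitrarily close to the independent p, while for any fixed m the diagonal MI contribution grows without bound as the truncation K increases.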

Main Results
Given Z = {z_k; k ≥ 1} and p = {p_k}, consider the experiment of drawing an independent and identically distributed (iid) sample of size n. Let C_n denote the event that all n observations of the sample take on a same letter in Z, and let C_n be referred to as the event of total collision. The conditional probability, given C_n, that the total collision occurs at letter z_k is

    p_{n,k} = p_k^n / Σ_{j≥1} p_j^n.    (1)

It is clear that p_n = {p_{n,k}; k ≥ 1} is a probability distribution on Z; it is referred to as the conditional distribution of total collision (CDOTC) of order n, and in particular p_1 = p.

Lemma 1. For each n, n ≥ 1, p and p_n uniquely determine each other.

Proof. Given p = {p_k; k ≥ 1}, by (1), p_n = {p_{n,k}; k ≥ 1} is uniquely determined. Conversely, given p_n = {p_{n,k}; k ≥ 1}, for each n and all k ≥ 1, p_k^n / p_1^n = p_{n,k} / p_{n,1}, and therefore, by the normalization Σ_{k≥1} p_k = 1,

    p_k = p_{n,k}^{1/n} / Σ_{j≥1} p_{n,j}^{1/n}.    (2)

The lemma follows. ✷

Lemma 2. For each n, n ≥ 2, and any p ∈ P, H_n(Z) = −Σ_{k≥1} p_{n,k} ln p_{n,k} < ∞.

Proof. Write η_n = Σ_{k≥1} p_k^n. Noting 0 < η_n ≤ 1 and 0 ≤ −p ln p ≤ 1/e for all p ∈ [0, 1],

    −p_k^n ln p_k^n = 2 p_k^{n/2} ( −p_k^{n/2} ln p_k^{n/2} ) ≤ (2/e) p_k^{n/2} ≤ (2/e) p_k

for every k, provided n ≥ 2, and since ln η_n ≤ 0,

    H_n(Z) = (1/η_n) Σ_{k≥1} ( −p_k^n ln p_k^n ) + ln η_n ≤ 2/(e η_n) < ∞.

The lemma follows. ✷

On the joint alphabet X × Y = {(x_i, y_j)} with distribution p_{X,Y} = {p_{i,j}}, consider the associated CDOTC for an n and all pairs (i, j) such that i ≥ 1 and j ≥ 1,

    p_{n,i,j} = p_{i,j}^n / Σ_{u≥1, v≥1} p_{u,v}^n.    (3)

Let p_{n,X,Y} = {p_{n,i,j}; i ≥ 1, j ≥ 1}. It is to be noted that p_{n,X,Y} ∈ P_{X,Y}. The two marginal distributions of (3) are p_{n,X} = {p_{n,i,·}} and p_{n,Y} = {p_{n,·,j}} respectively, where

    p_{n,i,·} = Σ_{j≥1} p_{n,i,j},    (4)
    p_{n,·,j} = Σ_{i≥1} p_{n,i,j}.    (5)

Let (X*, Y*) be a pair of random elements on X × Y distributed according to the induced joint distribution p_{n,X,Y} with index value n ≥ 1.

Lemma 3. For each n ≥ 1, X and Y are independent under p_{X,Y} if and only if X* and Y* are independent under p_{n,X,Y}.

Proof. For each positive integer n, if p_{i,j} = p_{i,·} × p_{·,j} for all pairs (i, j), i ≥ 1 and j ≥ 1, then

    p_{n,i,j} = p_{i,·}^n p_{·,j}^n / ( Σ_{u≥1} p_{u,·}^n × Σ_{v≥1} p_{·,v}^n ) = [ p_{i,·}^n / Σ_{u≥1} p_{u,·}^n ] × [ p_{·,j}^n / Σ_{v≥1} p_{·,v}^n ],

and the two factors of the last expression above are respectively P(X* = x_i) and P(Y* = y_j). The lemma immediately follows the factorization theorem. ✷

For each n ∈ ℕ, let H_n(X, Y), H_n(X) and H_n(Y) be Shannon's entropies defined with the joint CDOTC {p_{n,i,j}; i ≥ 1, j ≥ 1} as in (3), and with the marginal distributions {p_{n,i,·}; i ≥ 1} and {p_{n,·,j}; j ≥ 1} as in (4) and (5) respectively. Let

    MI_n = MI_n(X, Y) = H_n(X) + H_n(Y) − H_n(X, Y).    (6)

Theorem 2. For every n ≥ 2 and any p_{X,Y} ∈ P_{X,Y},
1. 0 ≤ MI_n(X, Y) < ∞, and
2. MI_n(X, Y) = 0 if and only if X and Y are independent.

Proof. In Part 1, MI_n ≥ 0 since MI_n is a mutual information, and MI_n < ∞ by Lemma 2. Part 2 follows from Lemma 3 and the fact that MI_n is a mutual information. ✷

Let

    κ_n = κ_n(X, Y) = MI_n(X, Y) / H_n(X, Y)    (7)

be referred to as the n-th order standardized mutual information, and write I_S = {κ_n; n ≥ 1}.

Lemma 4. X and Y have a one-to-one correspondence if and only if X* and Y* have one.

Proof. If X and Y have a one-to-one correspondence, then for each i, there is a unique j_i such that p_{i,j_i} > 0 and p_{i,j} = 0 for all other j, j ≠ j_i. By (3), p_{n,i,j_i} > 0 and p_{n,i,j} = 0 for all other j, j ≠ j_i. That is, X* and Y* have a one-to-one correspondence. Conversely, if X* and Y* have a one-to-one correspondence, then for each i, there is a unique j_i such that p_{n,i,j_i} > 0 and p_{n,i,j} = 0 for all other j, j ≠ j_i. On the other hand, by (2) applied to the joint distribution, p_{i,j} = p_{n,i,j}^{1/n} / Σ_{u≥1, v≥1} p_{n,u,v}^{1/n}; it follows that p_{i,j_i} > 0 and p_{i,j} = 0 for all other j, j ≠ j_i. That is, X and Y have a one-to-one correspondence. ✷

Corollary 1. For every n ≥ 2 and any p_{X,Y} ∈ P_{X,Y},
1. 0 ≤ κ_n(X, Y) ≤ 1,
2. κ_n(X, Y) = 0 if and only if X and Y are independent, and
3. κ_n(X, Y) = 1 if and only if X and Y are one-to-one corresponded.
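The constructions of this section are easy to prototype for finite joint distributions. The sketch below (function names mine) implements the CDOTC p_{n,k} ∝ p_k^n, the inversion behind Lemma 1, and MI_n = H_n(X) + H_n(Y) − H_n(X, Y) together with κ_n = MI_n / H_n(X, Y); the checks mirror Theorem 2 and Corollary 1:

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def cdotc(p, n):
    # CDOTC of order n: p_{n,k} = p_k^n / sum_j p_j^n.
    eta = sum(q ** n for q in p)
    return [q ** n / eta for q in p]

def invert_cdotc(pn, n):
    # Lemma 1's inversion: p_k = p_{n,k}^(1/n) / sum_j p_{n,j}^(1/n).
    roots = [q ** (1.0 / n) for q in pn]
    s = sum(roots)
    return [r / s for r in roots]

def mi_n(joint, n):
    # MI_n = H_n(X) + H_n(Y) - H_n(X, Y), with entropies taken on the joint
    # CDOTC and its two marginals.
    rows, cols = len(joint), len(joint[0])
    pn = cdotc([q for row in joint for q in row], n)
    pn_x = [sum(pn[i * cols + j] for j in range(cols)) for i in range(rows)]
    pn_y = [sum(pn[i * cols + j] for i in range(rows)) for j in range(cols)]
    return entropy(pn_x) + entropy(pn_y) - entropy(pn)

def kappa_n(joint, n):
    # Standardized version: kappa_n = MI_n / H_n(X, Y).
    pn = cdotc([q for row in joint for q in row], n)
    return mi_n(joint, n) / entropy(pn)
```

An independent joint distribution gives MI_n ≈ 0 for every n (Lemma 3 with Theorem 2), and a one-to-one (diagonal) distribution gives κ_n = 1 (Lemma 4 with Corollary 1).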