# Intersection Information Based on Common Randomness


## Abstract

The introduction of the partial information decomposition generated a flurry of proposals for defining an intersection information that quantifies how much of “the same information” two or more random variables specify about a target random variable. As of yet, none is wholly satisfactory. A palatable measure of intersection information would provide a principled way to quantify slippery concepts, such as synergy. Here, we introduce an intersection information measure based on the Gács–Körner common random variable that is the first to satisfy the coveted target monotonicity property. Our measure is imperfect, too, and we suggest directions for improvement.

## 1. Introduction

Partial information decomposition (PID) [1] is an immensely suggestive framework for deepening our understanding of multivariate interactions, particularly our understanding of informational redundancy and synergy. In general, one seeks a decomposition of the mutual information that n predictors X_{1}, . . ., X_{n} convey about a target random variable, Y. The intersection information is a function that calculates the information that every predictor conveys about the target random variable; the name draws an analogy with intersections in set theory. An anti-chain lattice of redundant, unique and synergistic partial information is then built from the intersection information.

As an intersection information measure, [1] proposes the quantity:

I_{min}(X_{1}, . . ., X_{n}:Y) ≡ ∑_{y} Pr(y) min_{i} I(X_{i}:Y=y), with specific information I(X_{i}:Y=y) ≡ D_{KL}[Pr(X_{i}|Y=y) ‖ Pr(X_{i})],

where D_{KL} is the Kullback–Leibler divergence. Although I_{min} is a plausible choice for the intersection information, it has several counterintuitive properties that make it unappealing [2]. In particular, I_{min} is not sensitive to the possibility that differing predictors, X_{i} and X_{j}, can reduce uncertainty about Y in nonequivalent ways. Moreover, the min operator effectively treats all uncertainty reductions as the same, causing it to overestimate the ideal intersection information. The search for an improved intersection information measure ensued and continued through [3–5], and today, a widely accepted intersection information measure remains undiscovered.

Here, we do not definitively solve this problem, but explore a candidate intersection information based on the so-called common random variable [6]. Whereas Shannon mutual information is relevant to communication channels with arbitrarily small error, the entropy of the common random variable (also known as the zero-error information) is relevant to communication channels without error [7]. We begin by proposing a measure of intersection information for the simpler zero-error information case. This is useful in and of itself, because it provides a template for exploring intersection information measures. Then, we modify our proposal, adapting it to the Shannon mutual information case.

The next section introduces several definitions, some notation and a necessary lemma. We extend and clarify the desired properties for intersection information. Section 3 introduces zero-error information and its intersection information measure. Section 4 uses the same methodology to produce a novel candidate for the Shannon intersection information. Section 5 shows the successes and shortcoming of our candidate intersection information measure using example circuits and diagnoses the shortcoming’s origin. Section 6 discusses the negative values of the resulting synergy measure and identifies its origin. Section 7 summarizes our progress towards the ideal intersection information measure and suggests directions for improvement. Appendices are devoted to technical lemmas and their proofs, to which we refer in the main text.

## 2. Preliminaries

#### 2.1. Informational Partial Order and Equivalence

We assume an underlying probability space on which we define random variables denoted by capital letters (e.g., X, Y and Z). In this paper, we consider only random variables taking values on finite spaces. Given random variables X and Y, we write X ≼ Y to signify that there exists a measurable function, f, such that X = f(Y ) almost surely (i.e., with probability one). In this case, following the terminology in [8], we say that X is informationally poorer than Y ; this induces a partial order on the set of random variables. Similarly, we write X ≽ Y if Y ≼ X, in which case we say X is informationally richer than Y.

If X and Y are such that X ≼ Y and X ≽ Y, then we write X ≅ Y. In this case, again following [8], we say that X and Y are informationally equivalent. In other words, X ≅ Y if and only if one can relabel the values of X to obtain a random variable that is equal to Y almost surely and vice versa.

This “information-equivalence” can easily be shown to be an equivalence relation, and it partitions the set of all random variables into disjoint equivalence classes. The ≼ ordering is invariant within these equivalence classes in the following sense. If X ≼ Y and Y ≅ Z, then X ≼ Z. Similarly, if X ≼ Y and X ≅ Z, then Z ≼ Y. Moreover, within each equivalence class, the entropy is invariant, as shown in Section 2.2.

#### 2.2. Information Lattice

Next, we follow [8] and consider the join and meet operators. These operators were defined for information elements, which are σ-algebras or, equivalently, equivalence classes of random variables. We deviate from [8] slightly and define the join and meet operators for random variables.

Given random variables X and Y, we define X ⋎ Y (called the join of X and Y) to be an informationally poorest (“smallest” in the sense of the partial order ≼) random variable such that X ≼ X ⋎ Y and Y ≼ X ⋎ Y. In other words, if Z is such that X ≼ Z and Y ≼ Z, then X ⋎ Y ≼ Z. Note that X ⋎ Y is unique only up to equivalence with respect to ≅; in other words, X ⋎ Y does not define a specific, unique random variable. Nonetheless, standard information-theoretic quantities are invariant over the set of random variables satisfying the condition specified above. For example, the entropy of X ⋎ Y is invariant over the entire equivalence class of random variables satisfying the condition above. Similarly, the inequality Z ≼ X ⋎ Y does not depend on the specific random variable chosen, as long as it satisfies the condition above. Note that the pair (X, Y) is an instance of X ⋎ Y.

In a similar vein, given random variables X and Y, we define X ⋏ Y (called the meet of X and Y ) to be an informationally richest random variable (“largest” in the sense of ≽ ), such that X ⋏ Y ≼ X and X ⋏ Y ≼ Y. In other words, if Z is such that Z ≼ X and Z ≼ Y, then Z ≼ X ⋏ Y. Following [6], we also call X ⋏ Y the common random variable of X and Y.

An algorithm for computing an instance of the common random variable between two random variables is provided in [7]; it generalizes straightforwardly to n random variables. One can also take intersections of the σ-algebras generated by the random variables that define the meet.
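The construction can be sketched concretely. The following Python fragment (names are ours, for illustration; this is not code from [7]) links each value of X to each value of Y that co-occurs with positive probability; the connected components of this bipartite graph label an instance of X ⋏ Y, which is by construction a function of X alone and of Y alone:

```python
import itertools

def common_rv(pxy):
    """Given a joint pmf {(x, y): p}, return dicts mapping each x and each y
    to a component label; the shared label is an instance of X ⋏ Y."""
    # Union-find over the values of X and Y, merged whenever Pr(x, y) > 0.
    parent = {}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    def union(u, v):
        parent.setdefault(u, u)
        parent.setdefault(v, v)
        parent[find(u)] = find(v)
    for (x, y), p in pxy.items():
        if p > 0:
            union(('x', x), ('y', y))
    fx = {x: find(('x', x)) for (x, y), p in pxy.items() if p > 0}
    fy = {y: find(('y', y)) for (x, y), p in pxy.items() if p > 0}
    return fx, fy

# Example: X = (a, b) and Y = (a, c) share the bit a; the common random
# variable has one component per value of a.
pxy = {((a, b), (a, c)): 1 / 8 for a, b, c in itertools.product([0, 1], repeat=3)}
fx, fy = common_rv(pxy)
assert len(set(fx.values())) == 2  # two components: a = 0 and a = 1
```

Relabeling X through fx (or Y through fy) yields the same random variable almost surely, which is exactly the defining property of the meet.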

The ⋎ and ⋏ operators satisfy the algebraic properties of a lattice [8]. In particular, the following hold:

- commutative laws: X ⋎ Y ≅ Y ⋎ X and X ⋏ Y ≅ Y ⋏ X;
- associative laws: X ⋎ (Y ⋎ Z) ≅ (X ⋎ Y) ⋎ Z and X ⋏ (Y ⋏ Z) ≅ (X ⋏ Y) ⋏ Z;
- absorption laws: X ⋎ (X ⋏ Y) ≅ X and X ⋏ (X ⋎ Y) ≅ X;
- idempotent laws: X ⋎ X ≅ X and X ⋏ X ≅ X;
- generalized absorption laws: if X ≼ Y, then X ⋎ Y ≅ Y and X ⋏ Y ≅ X.

Finally, the partial order ≼ is preserved under ⋎ and ⋏, i.e., if X ≼ Y, then X ⋎ Z ≼ Y ⋎ Z and X ⋏ Z ≼ Y ⋏ Z.

Let H(·) represent the entropy function and H (·|·) the conditional entropy. We denote the Shannon mutual information between X and Y by I(X:Y ). The following results highlight the invariance and monotonicity of the entropy and conditional entropy functions with respect to ≅= and ≼ [8]. Given that X ≼ Y if and only if X = f(Y ), these results are familiar in information theory, but are restated here using the current notation:

- (a)
If X ≅ Y, then H(X) = H(Y), H(X|Z) = H(Y|Z), and H(Z|X) = H(Z|Y).

- (b)
If X ≼ Y, then H(X) ≤ H(Y ), H(X|Z) ≤ H(Y |Z), and H(Z|X) ≥ H(Z|Y ).

- (c)
X ≼ Y if and only if H(X|Y ) = 0.
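Property (c) turns the abstract order ≼ into something directly computable: X ≼ Y exactly when H(X|Y) = 0. A minimal check (helper name is ours):

```python
import math

def cond_entropy(pxy):
    """H(X|Y) in bits for a joint pmf {(x, y): p}."""
    py = {}
    for (x, y), p in pxy.items():
        py[y] = py.get(y, 0.0) + p
    h = 0.0
    for (x, y), p in pxy.items():
        if p > 0:
            h -= p * math.log2(p / py[y])
    return h

# X is the parity of Y, hence a function of Y: H(X|Y) = 0, so X ≼ Y.
pxy = {(y % 2, y): 1 / 4 for y in range(4)}
assert abs(cond_entropy(pxy)) < 1e-12

# X is a fair coin independent of Y: H(X|Y) = 1 bit, so X is not ≼ Y.
pxy2 = {(x, y): 1 / 8 for x in range(2) for y in range(4)}
assert abs(cond_entropy(pxy2) - 1.0) < 1e-12
```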

#### 2.3. Desired Properties of Intersection Information

We denote $\mathcal{I}$ (X:Y ) as a nonnegative measure of information between X and Y. For example, $\mathcal{I}$ could be the Shannon mutual information; i.e., $\mathcal{I}$ (X:Y ) ≡ I(X:Y ). Alternatively, we could take $\mathcal{I}$ to be the zero-error information. Yet, other possibilities include the Wyner common information [9] or the quantum mutual information [10]. Generally, though, we require that $\mathcal{I}$ (X:Y ) = 0 if Y is a constant, which is satisfied by both the zero-error and Shannon information.

For a given choice of $\mathcal{I}$, we seek a function that captures the amount of information about Y that is captured by each of the predictors X_{1}, . . ., X_{n}. We say that I_{∩} is an intersection information for $\mathcal{I}$ if it satisfies self-redundancy (**SR**): I_{∩}(X:Y) = $\mathcal{I}$(X:Y). There are currently 11 intuitive properties that we wish the ideal intersection information measure, I_{∩}, to satisfy. Some are new (e.g., lower bound (**LB**), strong monotonicity (**M_{1}**) and equivalence-class invariance (**Eq**)), but most were introduced earlier, in various forms, in [1–5]. They are as follows:

(**GP**) Global positivity: I_{∩}(X_{1}, . . ., X_{n}:Y) ≥ 0.

(**Eq**) Equivalence-class invariance: I_{∩}(X_{1}, . . ., X_{n}:Y) is invariant under substitution of X_{i} (for any i = 1, . . ., n) or Y by an informationally equivalent random variable.

(**TM**) Target monotonicity: If Y ≼ Z, then I_{∩}(X_{1}, . . ., X_{n}:Y) ≤ I_{∩}(X_{1}, . . ., X_{n}:Z).

(**M_{0}**) Weak monotonicity: I_{∩}(X_{1}, . . ., X_{n}, W:Y) ≤ I_{∩}(X_{1}, . . ., X_{n}:Y), with equality if there exists a Z ∈ {X_{1}, . . ., X_{n}} such that Z ≼ W.

(**S_{0}**) Weak symmetry: I_{∩}(X_{1}, . . ., X_{n}:Y) is invariant under reordering of X_{1}, . . ., X_{n}.

The next set of properties relate the intersection information to the chosen measure of information between X and Y.

(**LB**) Lower bound: If Q ≼ X_{i} for all i = 1, . . ., n, then I_{∩}(X_{1}, . . ., X_{n}:Y) ≥ $\mathcal{I}$(Q:Y). Note that X_{1} ⋏ · · · ⋏ X_{n} is a valid choice for Q. Furthermore, given that we require I_{∩}(X:Y) = $\mathcal{I}$(X:Y), it follows that (**M_{0}**) implies (**LB**).

(**Id**) Identity: I_{∩}(X, Y:X ⋎ Y) = $\mathcal{I}$(X:Y).

(**LP_{0}**) Weak local positivity: For n = 2 predictors, the derived “partial informations” defined in [1] and described in Section 5 are nonnegative. If both (**GP**) and (**M_{0}**) are satisfied, as well as I_{∩}(X_{1}, X_{2}:Y) ≥ $\mathcal{I}$(X_{1}:Y) + $\mathcal{I}$(X_{2}:Y) − $\mathcal{I}$(X_{1} ⋎ X_{2}:Y), then (**LP_{0}**) is satisfied.

Finally, we have the “strong” properties:

(**M_{1}**) Strong monotonicity: I_{∩}(X_{1}, . . ., X_{n}, W:Y) ≤ I_{∩}(X_{1}, . . ., X_{n}:Y), with equality if there exists a Z ∈ {X_{1}, . . ., X_{n}, Y} such that Z ≼ W.

(**S_{1}**) Strong symmetry: I_{∩}(X_{1}, . . ., X_{n}:Y) is invariant under reordering of X_{1}, . . ., X_{n}, Y.

(**LP_{1}**) Strong local positivity: For all n, the derived “partial informations” defined in [1] are nonnegative.

Properties (**LB**), (**M_{1}**) and (**Eq**) are introduced for the first time here. However, (**Eq**) is satisfied by most information-theoretic quantities and is implicitly assumed by others. Though absent from our list, it is worthwhile to also consider continuity and chain rule properties, in analogy with the mutual information [4,11].

## 3. Candidate Intersection Information for Zero-Error Information

#### 3.1. Zero-Error Information

Introduced in [7], the zero-error information, or Gács–Körner common information, is a stricter variant of Shannon mutual information. Whereas the mutual information, I(A:B), quantifies the magnitude of information A conveys about B with an arbitrarily small error ε > 0, the zero-error information, denoted I^{0}(A:B), quantifies the magnitude of information A conveys about B with exactly zero error, i.e., ε = 0. The zero-error information between A and B equals the entropy of the common random variable A ⋏ B,

I^{0}(A:B) ≡ H(A ⋏ B).

Zero-error information has several notable properties, but the most salient is that it is nonnegative and bounded by the mutual information,

0 ≤ I^{0}(A:B) ≤ I(A:B).
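Numerically, I^{0}(A:B) is the entropy of the connected components of the support graph of Pr(a, b), so the bound against the Shannon mutual information can be checked directly. A sketch (helper names are ours):

```python
import itertools
import math
from collections import defaultdict

def entropy(pmf):
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def mutual_info(pab):
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in pab.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in pab.items() if p > 0)

def zero_error_info(pab):
    """I0(A:B) = H(A ⋏ B): entropy of the connected components of the
    bipartite graph linking a and b whenever Pr(a, b) > 0."""
    comp = {}
    def find(u):
        while comp[u] != u:
            comp[u] = comp[comp[u]]
            u = comp[u]
        return u
    for (a, b), p in pab.items():
        if p > 0:
            comp.setdefault(('a', a), ('a', a))
            comp.setdefault(('b', b), ('b', b))
            comp[find(('a', a))] = find(('b', b))
    mass = defaultdict(float)
    for (a, b), p in pab.items():
        if p > 0:
            mass[find(('a', a))] += p
    return entropy(mass)

# A = (s, u), B = (s, v): one perfectly shared bit plus private bits.
pab = {((s, u), (s, v)): 1 / 8 for s, u, v in itertools.product([0, 1], repeat=3)}
assert abs(zero_error_info(pab) - 1.0) < 1e-12      # H(A ⋏ B) = 1 bit
assert zero_error_info(pab) <= mutual_info(pab) + 1e-12

# A binary symmetric channel has full support, so I0 = 0 while I(A:B) > 0.
eps = 0.1
pab2 = {(0, 0): (1 - eps) / 2, (0, 1): eps / 2,
        (1, 0): eps / 2, (1, 1): (1 - eps) / 2}
assert abs(zero_error_info(pab2)) < 1e-12
```

The second example shows the strictness of the zero-error notion: any full-support channel has no common randomness at all, no matter how strong the correlation.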

#### 3.2. Intersection Information for Zero-Error Information

For the zero-error information case (i.e., $\mathcal{I}$ = I^{0}), we propose the zero-error intersection information
${\text{I}}_{\u22cf}^{0}({X}_{1},\dots ,{X}_{n}:Y)$ as the maximum zero-error information, I^{0}(Q:Y), that a random variable, Q, conveys about Y, subject to Q being a function of each predictor X_{1}, . . ., X_{n}:

${\text{I}}_{\u22cf}^{0}({X}_{1},\dots ,{X}_{n}:Y)$ ≡ max_{Q} I^{0}(Q:Y) subject to Q ≼ X_{i} for i = 1, . . ., n. (2)

In Lemma 7 of Appendix C, it is shown that the common random variable across all predictors is the maximizing Q. This simplifies Equation (2) to:

${\text{I}}_{\u22cf}^{0}({X}_{1},\dots ,{X}_{n}:Y)$ = H(X_{1} ⋏ · · · ⋏ X_{n} ⋏ Y). (3)

Most importantly, the zero-error intersection information
${\text{I}}_{\u22cf}^{0}({X}_{1},\dots ,{X}_{n}:Y)$ satisfies nine of the 11 desired properties from Section 2.3, leaving only (**LP_{0}**) and (**LP_{1}**) unsatisfied. See Lemmas 1, 2 and 3 in Appendix A for details.

## 4. Candidate Intersection Information for Shannon Information

In the last section, we defined an intersection information for zero-error information that satisfies the vast majority of the desired properties. This is a solid start, but an intersection information for Shannon mutual information remains the goal. Towards this end, we use the same method as in Equation (2), leading to I_{⋏}, our candidate intersection information measure for Shannon mutual information:

I_{⋏}(X_{1}, . . ., X_{n}:Y) ≡ max_{Q} I(Q:Y) subject to Q ≼ X_{i} for i = 1, . . ., n. (4)

In Lemma 8 of Appendix C, it is shown that Equation (4) simplifies to:

I_{⋏}(X_{1}, . . ., X_{n}:Y) = I(X_{1} ⋏ · · · ⋏ X_{n}:Y). (5)

Unfortunately, I_{⋏} does not satisfy as many of the desired properties as
${\text{I}}_{\u22cf}^{0}$. However, our candidate, I_{⋏}, still satisfies seven of the 11 properties; most importantly, the coveted (**TM**) that, until now, had not been satisfied by any proposed measure. See Lemmas 4, 5 and 6 in Appendix B for details. Table 1 lists the desired properties satisfied by I_{min}, I_{⋏} and
${\text{I}}_{\u22cf}^{0}$. For reference, we also include I_{red}, the proposed measure from [3].

Lemma 9 in Appendix C allows a comparison of the three subject intersection information measures:

${\text{I}}_{\u22cf}^{0}({X}_{1},\dots ,{X}_{n}:Y)$ ≤ I_{⋏}(X_{1}, . . ., X_{n}:Y) ≤ I_{min}(X_{1}, . . ., X_{n}:Y).

Despite not satisfying (**LP_{0}**), I_{⋏} remains an important stepping-stone towards the ideal Shannon I_{∩}. First, I_{⋏} captures what is inarguably redundant information (the common random variable); this makes I_{⋏} necessarily a lower bound on any reasonable redundancy measure. Second, it is the first proposal to satisfy target monotonicity. Lastly, I_{⋏} is the first measure to reach intuitive answers in many canonical situations, while also being generalizable to an arbitrary number of inputs.

## 5. Three Examples Comparing I_{min} and I_{⋏}

Example Unq illustrates how I_{min} gives undesirable (some claim fatally so [2]) decompositions of redundant and synergistic information. Examples Unq and RdnXor illustrate I_{⋏}’s successes, and example ImperfectRdn illustrates I_{⋏}’s paramount deficiency. For each, we give the joint distribution Pr(x_{1}, x_{2}, y), a diagram and the decomposition derived from setting I_{min} or I_{⋏} as the I_{∩} measure. At each lattice junction, the left number is the I_{∩} value of that node, and the number in parentheses is the I_{∂} value (this is the same notation used in [4]). Readers unfamiliar with the n = 2 partial information lattice should consult [1], but in short, I_{∂} measures the magnitude of “new” information at this node in the lattice beyond the nodes lower in the lattice. Specifically, the mutual information between the pair, X_{1} ⋎ X_{2}, and Y decomposes into four terms:

I(X_{1} ⋎ X_{2}:Y) = I_{∂}(X_{1}, X_{2}:Y) + I_{∂}(X_{1}:Y) + I_{∂}(X_{2}:Y) + I_{∂}(X_{1} ⋎ X_{2}:Y).

In order, the terms are given by the redundant information that X_{1} and X_{2} both provide to Y, the unique information that X_{1} provides to Y, the unique information that X_{2} provides to Y and finally, the synergistic information that X_{1} and X_{2} jointly convey about Y. Each of these quantities can be written in terms of standard mutual information and the intersection information, I_{∩}, as follows:

I_{∂}(X_{1}, X_{2}:Y) = I_{∩}(X_{1}, X_{2}:Y),
I_{∂}(X_{1}:Y) = I(X_{1}:Y) − I_{∩}(X_{1}, X_{2}:Y),
I_{∂}(X_{2}:Y) = I(X_{2}:Y) − I_{∩}(X_{1}, X_{2}:Y),
I_{∂}(X_{1} ⋎ X_{2}:Y) = I(X_{1} ⋎ X_{2}:Y) − I(X_{1}:Y) − I(X_{2}:Y) + I_{∩}(X_{1}, X_{2}:Y). (7)

These quantities occupy the bottom, left, right and top nodes in the lattice diagrams, respectively. Except for ImperfectRdn, measures I_{⋏} and
${\text{I}}_{\u22cf}^{0}$ reach the same decomposition for all presented examples.
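Once an I_{∩} is fixed, the four partial informations for n = 2 follow mechanically from the mutual informations. The sketch below (helper names are ours) uses I_{⋏} as the I_{∩} measure and checks it on the Unq distribution of Section 5.1:

```python
import math
from collections import defaultdict

def _marginal(p, idx):
    out = defaultdict(float)
    for k, v in p.items():
        out[tuple(k[i] for i in idx)] += v
    return out

def _mi(p, ia, ib):
    """I between coordinate groups ia and ib of a joint pmf {tuple: p}."""
    pa, pb = _marginal(p, ia), _marginal(p, ib)
    out = 0.0
    for k, v in p.items():
        if v > 0:
            a = tuple(k[i] for i in ia)
            b = tuple(k[i] for i in ib)
            out += v * math.log2(v / (pa[a] * pb[b]))
    return out

def meet_label(p):
    """Map each x1 to its connected component: an instance of X1 ⋏ X2."""
    comp = {}
    def find(u):
        while comp[u] != u:
            comp[u] = comp[comp[u]]
            u = comp[u]
        return u
    for (x1, x2, y), v in p.items():
        if v > 0:
            comp.setdefault(('1', x1), ('1', x1))
            comp.setdefault(('2', x2), ('2', x2))
            comp[find(('1', x1))] = find(('2', x2))
    return lambda x1: find(('1', x1))

def pid_meet(p):
    """(redundancy, unique1, unique2, synergy) using I∧ as the I∩ measure."""
    q = meet_label(p)
    pq = defaultdict(float)
    for (x1, x2, y), v in p.items():
        if v > 0:
            pq[(q(x1), y)] += v
    rdn = _mi(pq, [0], [1])                      # I(X1 ⋏ X2 : Y)
    i1, i2 = _mi(p, [0], [2]), _mi(p, [1], [2])
    i12 = _mi(p, [0, 1], [2])
    return rdn, i1 - rdn, i2 - rdn, i12 - i1 - i2 + rdn

# Unq: Y = (X1, X2) with independent fair input bits.
p_unq = {(a, b, (a, b)): 1 / 4 for a in (0, 1) for b in (0, 1)}
print(pid_meet(p_unq))  # (0.0, 1.0, 1.0, 0.0): two bits of unique info
```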

#### 5.1. Example Unq (Figure 1)

The desired decomposition for example Unq is two bits of unique information; X_{1} uniquely specifies one bit of Y, and X_{2} uniquely specifies the other bit of Y. The chief criticism of I_{min} in [2] was that I_{min} calculated one bit of redundancy and one bit of synergy for Unq (Figure 1c). Unlike I_{min}, I_{⋏} satisfyingly arrives at two bits of unique information. This is easily seen by the inequality,

I_{⋏}(X_{1}, X_{2}:Y) ≤ H(X_{1} ⋏ X_{2}) ≤ I(X_{1}:X_{2}).

Therefore, as I(X_{1}:X_{2}) = 0, we have I_{⋏}(X_{1}, X_{2}:Y) = 0 bits, leading to I_{∂}(X_{1}:Y) = 1 bit and I_{∂}(X_{2}:Y) = 1 bit (Figure 1d).

#### 5.2. Example RdnXor (Figure 2)

In [2], RdnXor was an example where I_{min} shined by reaching the desired decomposition of one bit of redundancy and one bit of synergy. We see that I_{⋏} finds this same answer. I_{⋏} extracts the common random variable within X_{1} and X_{2}—the r/R bit—and calculates the mutual information between the common random variable and Y to arrive at I_{⋏}(X_{1}, X_{2} :Y ) = 1 bit.
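The one-bit answer can be checked numerically. In the sketch below (the distribution layout is ours, following the description above), X_{1} = (r, a), X_{2} = (r, b) and Y = (r, a ⊕ b); the shared bit r is the common random variable, and its mutual information with Y is one bit:

```python
import itertools
import math
from collections import defaultdict

# RdnXor: X1 = (r, a), X2 = (r, b), Y = (r, a ^ b), with r, a, b fair bits.
p = {((r, a), (r, b), (r, a ^ b)): 1 / 8
     for r, a, b in itertools.product([0, 1], repeat=3)}

# The private bits a and b connect all values sharing the same r, so an
# instance of X1 ⋏ X2 is the first coordinate r itself.
pq_y = defaultdict(float)
for (x1, x2, y), v in p.items():
    pq_y[(x1[0], y)] += v

pq, py = defaultdict(float), defaultdict(float)
for (q, y), v in pq_y.items():
    pq[q] += v
    py[y] += v
i_meet = sum(v * math.log2(v / (pq[q] * py[y]))
             for (q, y), v in pq_y.items() if v > 0)
print(i_meet)  # 1.0
```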

#### 5.3. Example ImperfectRdn (Figure 3)

ImperfectRdn highlights the foremost shortcoming of I_{⋏}: it does not detect “imperfect” or “lossy” correlations between X_{1} and X_{2}. Given (**LP_{0}**), we can determine the desired decomposition analytically. First, I(X_{1} ⋎ X_{2}:Y) = I(X_{1}:Y) = 1 bit, and thus, I(X_{2}:Y|X_{1}) = 0 bits. Since this conditional mutual information is the sum of the synergy I_{∂}(X_{1} ⋎ X_{2}:Y) and the unique information I_{∂}(X_{2}:Y), both quantities must also be zero. Then, the redundant information I_{∂}(X_{1}, X_{2}:Y) = I(X_{2}:Y) − I_{∂}(X_{2}:Y) = I(X_{2}:Y) = 0.99 bits. Having determined three of the partial informations, we compute the final unique information: I_{∂}(X_{1}:Y) = I(X_{1}:Y) − 0.99 = 0.01 bits.

How well do I_{min} and I_{⋏} match the desired decomposition of ImperfectRdn? We see that I_{min} calculates the desired decomposition (Figure 3c); however, I_{⋏} does not (Figure 3d). Instead, I_{⋏} calculates zero redundant information, i.e., I_{∩}(X_{1}, X_{2}:Y) = 0 bits. This unpleasant answer arises from Pr(X_{1} = 0, X_{2} = 1) > 0. If this probability were zero, then both I_{⋏} and I_{min} would reach the desired one bit of redundant information. Due to the nature of the common random variable, I_{⋏} only sees the “deterministic” correlations between X_{1} and X_{2}; add even an iota of noise between X_{1} and X_{2}, and I_{⋏} plummets to zero. This highlights the fact that I_{⋏} is not continuous: an arbitrarily small change in the probability distribution can result in a discontinuous jump in the value of I_{⋏}. As with traditional information measures, such as the entropy and the mutual information, it may be desirable to have an I_{∩} measure that is continuous over the simplex.
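The discontinuity is easy to exhibit numerically: the moment the off-diagonal cell receives any mass, the support graph of (X_{1}, X_{2}) becomes connected and the common random variable collapses to a constant. A sketch (cell masses are illustrative, not the exact ImperfectRdn values):

```python
import math

def meet_entropy(pairs):
    """H(X1 ⋏ X2) in bits: entropy of the connected components of the
    bipartite support graph of a joint pmf {(x1, x2): p}."""
    comp = {}
    def find(u):
        while comp[u] != u:
            comp[u] = comp[comp[u]]
            u = comp[u]
        return u
    for (a, b), p in pairs.items():
        if p > 0:
            comp.setdefault(('1', a), ('1', a))
            comp.setdefault(('2', b), ('2', b))
            comp[find(('1', a))] = find(('2', b))
    mass = {}
    for (a, b), p in pairs.items():
        if p > 0:
            r = find(('1', a))
            mass[r] = mass.get(r, 0.0) + p
    return -sum(p * math.log2(p) for p in mass.values()) + 0.0

# Perfectly correlated bits: one full bit of common randomness.
print(meet_entropy({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
# An iota of mass in an off-diagonal cell merges the two components.
print(meet_entropy({(0, 0): 0.49, (1, 1): 0.5, (1, 0): 0.01}))  # 0.0
```

Since I_{⋏}(X_{1}, X_{2}:Y) ≤ H(X_{1} ⋏ X_{2}), the redundancy reported by I_{⋏} drops discontinuously from one bit to zero with the added noise.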

To summarize, ImperfectRdn shows that when there are additional “imperfect” correlations between A and B, i.e., I(A:B|A ⋏ B) > 0, I_{⋏} sometimes underestimates the ideal I_{∩}(A, B:Y ).

## 6. Negative Synergy

In ImperfectRdn, we saw I_{⋏} calculate a synergy of −0.99 bits (Figure 3d). What does this mean? Could negative synergy be a “real” property of Shannon information? When n = 2, it is fairly easy to diagnose the cause of negative synergy from the equation for I_{∂}(X_{1} ⋎ X_{2}:Y) in Equation (7). Given (**GP**), negative synergy occurs if and only if,

I(X_{1} ⋎ X_{2}:Y) < I_{∪}(X_{1}, X_{2}:Y), (8)

where I_{∪} is dual to I_{∩} and related by the inclusion–exclusion principle. For arbitrary n, this is I_{∪}(X_{1}, . . ., X_{n}:Y) ≡ ∑_{∅≠S⊆{X_{1}, . . ., X_{n}}} (−1)^{|S|+1} I_{∩}(S_{1}, . . ., S_{|S|}:Y). The intuition behind I_{∪} is that it represents the aggregate information contributed by the sources, X_{1}, . . ., X_{n}, without considering synergies or double-counting redundancies.
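For n = 2, the inclusion–exclusion sum reduces to two singleton terms minus the pair term; combined with self-redundancy, I_{∩}(X_{i}:Y) = I(X_{i}:Y), this makes the negative-synergy condition explicit:

```latex
\begin{aligned}
\mathrm{I}_{\cup}(X_1, X_2 : Y)
  &= \mathrm{I}_{\cap}(X_1 : Y) + \mathrm{I}_{\cap}(X_2 : Y)
     - \mathrm{I}_{\cap}(X_1, X_2 : Y) \\
  &= \mathrm{I}(X_1 : Y) + \mathrm{I}(X_2 : Y)
     - \mathrm{I}_{\cap}(X_1, X_2 : Y).
\end{aligned}
```

By Equation (7), the synergy equals I(X_{1} ⋎ X_{2}:Y) − I_{∪}(X_{1}, X_{2}:Y), so it is negative exactly when the joint variable conveys less about Y than this aggregate.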

From Equation (8), we see that negative synergy occurs when I_{∩} is small, probably too small. Equivalently, negative synergy occurs when the joint random variable conveys less about Y than the sources, X_{1} and X_{2}, convey separately; mathematically, when I(X_{1} ⋎ X_{2}:Y) < I_{∪}(X_{1}, X_{2}:Y). On the face of it, this sounds strange. No structure “disappears” after X_{1} and X_{2} are combined by the ⋎ operator: setting Z ≅ X_{1} ⋎ X_{2}, there are always functions f_{1} and f_{2} such that X_{1} ≅ f_{1}(Z) and X_{2} ≅ f_{2}(Z). Therefore, if your favorite I_{∩} measure does not satisfy (**LP_{0}**), it is too strict.

This means that our measure,
${\text{I}}_{\u22cf}^{0}$, does not account for the full zero-error information overlap between I^{0}(X_{1}:Y) and I^{0}(X_{2}:Y). This is shown in the example Subtle (Figure 4), where
${\text{I}}_{\u22cf}^{0}$ calculates a synergy of −0.252 bits. Defining a zero-error I_{∩} that satisfies (**LP_{0}**) is a matter of ongoing research.

## 7. Conclusions and Path Forward

We made incremental progress on several fronts towards the ideal Shannon I_{∩}.

#### 7.1. Desired Properties

We have expanded, tightened and grounded the desired properties for I_{∩}. Particularly:

- (**LB**) highlights an uncontentious, yet tighter, lower bound on I_{∩} than (**GP**).
- Inspired by I_{∩}(X_{1}:Y) = $\mathcal{I}$(X_{1}:Y) and by (**M_{0}**) synergistically implying (**LB**), we introduced (**M_{1}**) as a desired property.
- We introduced (**Eq**), previously an implicit assumption, to better ground one’s thinking.

#### 7.2. A New Measure

Based on the Gács–Körner common random variable, we introduced a new Shannon I_{∩} measure. Our measure, I_{⋏}, is theoretically principled and the first to satisfy (**TM**). A point to keep in mind is that our intersection information is zero whenever the distribution Pr(x_{1}, x_{2}, y) has full support; this dependence on structural zeros is inherited from the common random variable.

#### 7.3. How to Improve

We identified where I_{⋏} fails: it does not detect “imperfect” correlations between X_{1} and X_{2}. One next step is to develop a less stringent I_{∩} measure that satisfies (**LP_{0}**) for ImperfectRdn, while still satisfying (**TM**). Satisfying continuity would also be a good next step.

Contrary to our initial expectation, Subtle showed that
${\text{I}}_{\u22cf}^{0}$ does not satisfy (**LP_{0}**). This matches a result from [4], which shows that (**LP_{0}**), (**S_{1}**), (**M_{0}**) and (**Id**) cannot all be simultaneously satisfied, and it suggests that ${\text{I}}_{\u22cf}^{0}$ is too strict. Therefore, what kind of zero-error informational overlap is ${\text{I}}_{\u22cf}^{0}$ not capturing? The answer is of paramount importance. The next step is to formalize what exactly is required for a zero-error I_{∩} to satisfy (**LP_{0}**). From Subtle, we can likewise see that within zero-error information, (**Id**) and (**LP_{0}**) are incompatible.

## Acknowledgments

Virgil Griffith thanks Tracey Ho, and Edwin K. P. Chong thanks Hua Li for valuable discussions. While intrepidly pulling back the veil of ignorance, Virgil Griffith was funded by a Department of Energy Computational Science Graduate Fellowship; Edwin K. P. Chong was funded by Colorado State University’s Information Science & Technology Center; Ryan G. James and James P. Crutchfield were funded by Army Research Office grant W911NF-12-1-0234; Christopher J. Ellison was funded by a subaward from the Santa Fe Institute under a grant from the John Templeton Foundation.

## Appendix

By and large, most of these proofs follow directly from the lattice properties and also from the invariance and monotonicity properties with respect to ≅= and ≼.

#### A. Properties of ${\text{I}}_{\u22cf}^{0}$

#### Lemma 1

${\text{I}}_{\u22cf}^{0}({X}_{1},\dots ,{X}_{n}:Y)$ satisfies (**GP**), (**Eq**), (**TM**), (**M_{0}**) and (**S_{0}**), but not (**LP_{0}**).

#### Proof

(**GP**) follows immediately from the nonnegativity of the entropy. (**Eq**) follows from the invariance of entropy within the equivalence classes induced by ≅. (**TM**) follows from the monotonicity of the entropy with respect to ≼. (**M_{0}**) also follows from the monotonicity of the entropy, but now applied to ⋏_{i}X_{i} ⋏ W ⋏ Y ≼ ⋏_{i}X_{i} ⋏ Y. If there exists some j such that X_{j} ≼ W, then generalized absorption says that ⋏_{i}X_{i} ⋏ W ⋏ Y ≅ ⋏_{i}X_{i} ⋏ Y, and thus, we have the equality condition. (**S_{0}**) is a consequence of the commutativity of the ⋏ operator. To see that (**LP_{0}**) is not satisfied by ${\text{I}}_{\u22cf}^{0}$, we point to the example Subtle (Figure 4), which has negative synergy. One can also rewrite (**LP_{0}**) as the supermodularity law for common information, which is known to be false in general (see [8], Section 5.4).

#### Lemma 2

${\text{I}}_{\u22cf}^{0}({X}_{1},\dots ,{X}_{n}:Y)$ satisfies (**LB**), (**SR**), and (**Id**).

#### Proof

For (**LB**), note that Q ≼ X_{1} ⋏ · · · ⋏ X_{n} for any Q obeying Q ≼ X_{i} for i = 1, . . ., n. Then, apply the monotonicity of the entropy. (**SR**) is trivially true given Lemma 7 and the definition of zero-error information. Finally, (**Id**) follows from the absorption law and the invariance of the entropy.

#### Lemma 3

${\text{I}}_{\u22cf}^{0}({X}_{1},\dots ,{X}_{n}:Y)$ satisfies (**M_{1}**) and (**S_{1}**), but not (**LP_{1}**).

#### Proof

(**M_{1}**) follows using absorption and the monotonicity of the entropy in nearly the same way that (**M_{0}**) does. (**S_{1}**) follows from commutativity, and (**LP_{1}**) is false because (**LP_{0}**) is false.

#### B. Properties of I_{⋏}

The proofs here are nearly identical to those used for ${\text{I}}_{\u22cf}^{0}$.

#### Lemma 4

I_{⋏}(X_{1}, . . ., X_{n}:Y) satisfies (**GP**), (**Eq**), (**TM**), (**M_{0}**) and (**S_{0}**), but not (**LP_{0}**).

#### Proof

(**GP**) follows from the nonnegativity of mutual information. (**Eq**) follows from the invariance of entropy. (**TM**) follows from the data processing inequality. (**M_{0}**) follows from applying the monotonicity of the mutual information I(Y:·) to ⋏_{i}X_{i} ⋏ W ≼ ⋏_{i}X_{i}. If there exists some j such that X_{j} ≼ W, then generalized absorption says that ⋏_{i}X_{i} ⋏ W ≅ ⋏_{i}X_{i}, and thus, we have the equality condition. (**S_{0}**) follows from commutativity, and a counterexample for (**LP_{0}**) is given by ImperfectRdn (Figure 3).

#### Lemma 5

I_{⋏} (X_{1}, . . ., X_{n} :Y ) satisfies (**LB**) and (**SR**), but not (**Id**).

#### Proof

For (**LB**), note that Q ≼ X_{1} ⋏ · · · ⋏ X_{n} for any Q obeying Q ≼ X_{i} for i = 1, . . ., n. Then, apply the monotonicity of the mutual information to I(Y:·). (**SR**) is trivially true given Lemma 8. Finally, (**Id**) does not hold: since X ⋏ Y ≼ X ⋎ Y, we have I_{⋏}(X, Y:X ⋎ Y) = I(X ⋏ Y:X ⋎ Y) = H(X ⋏ Y), which in general differs from I(X:Y).

#### Lemma 6

I_{⋏}(X_{1}, . . ., X_{n}:Y) does not satisfy (**M_{1}**), (**S_{1}**) or (**LP_{1}**).

#### Proof

(**M_{1}**) is false due to a counterexample provided by ImperfectRdn (Figure 3), where I_{⋏}(X_{2}:Y) = 0.99 bits, yet I_{⋏}(X_{2}, Y:Y) = 0 bits. (**S_{1}**) is false, since I_{⋏}(X, X:Y) ≠ I_{⋏}(X, Y:X). Finally, (**LP_{1}**) is false, due to (**LP_{0}**) being false.

#### C. Miscellaneous Results

#### Lemma 7

Simplification of ${\text{I}}_{\u22cf}^{0}$.

#### Proof

Recall that I^{0}(Q:Y ) ≡ H(Q ⋏ Y ), and note that ⋏_{i}X_{i} is a valid choice for Q. By definition, ⋏_{i}X_{i} is the richest possible Q, and so, monotonicity with respect to ≼ then guarantees that H( ⋏_{i}X_{i} ⋏ Y ) ≥ H(Q ⋏ Y ).

#### Lemma 8

Simplification of I_{⋏}.

#### Proof

Note that ⋏_{i}X_{i} is a valid choice for Q. By definition, ⋏_{i}X_{i} is the richest possible Q, and so, monotonicity with respect to ≼ then guarantees that I(Q:Y ) ≤ I(⋏_{i}X_{i} :Y ).

#### Lemma 9

I_{⋏} (X_{1}, . . ., X_{n} :Y ) ≤ I_{min} (X_{1}, . . ., X_{n} : Y )

#### Proof

We need only show that I( ⋏_{i}X_{i} : Y ) ≤ I_{min} (X_{1}, . . ., X_{n} : Y ). This can be restated in terms of the specific information: I( ⋏_{i}X_{i} : y) ≤ min_{i} I (X_{i} : y) for each y. Since the specific information increases monotonically on the lattice (cf. Section 2.2 or [8]), it follows that I( ⋏_{i}X_{i} : y) ≤ I(X_{j} : y) for any j.

## Conflicts of Interest

The authors declare no conflicts of interest.

**Author Contributions:** Each of the authors contributed to the design, analysis, and writing of the study.

## References

1. Williams, P.L.; Beer, R.D. Nonnegative Decomposition of Multivariate Information. **2010**, arXiv:1004.2515 [cs.IT].
2. Griffith, V.; Koch, C. Quantifying Synergistic Mutual Information. In Guided Self-Organization: Inception; Prokopenko, M., Ed.; Springer: Berlin, Germany, 2014.
3. Harder, M.; Salge, C.; Polani, D. Bivariate measure of redundant information. Phys. Rev. E **2013**, 87, 012130.
4. Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J. Shared Information—New Insights and Problems in Decomposing Information in Complex Systems. In Proceedings of the European Conference on Complex Systems 2012, Brussels, Belgium, 3–7 September 2012; Gilbert, T., Kirkilionis, M., Nicolis, G., Eds.; Springer Proceedings in Complexity; Springer: Berlin, Germany, 2013; pp. 251–269.
5. Lizier, J.; Flecker, B.; Williams, P. Towards a synergy-based approach to measuring information modification. In Proceedings of the 2013 IEEE Symposium on Artificial Life (ALIFE), Singapore, 16–17 April 2013; pp. 43–51.
6. Gács, P.; Körner, J. Common information is far less than mutual information. Probl. Control Inform. Theory **1973**, 2, 149–162.
7. Wolf, S.; Wullschleger, J. Zero-error information and applications in cryptography. Proc. IEEE Inform. Theory Workshop **2004**, 4, 1–6.
8. Li, H.; Chong, E.K.P. On a Connection between Information and Group Lattices. Entropy **2011**, 13, 683–708.
9. Wyner, A.D. The common information of two dependent random variables. IEEE Trans. Inform. Theory **1975**, 21, 163–179.
10. Cerf, N.J.; Adami, C. Negative Entropy and Information in Quantum Mechanics. Phys. Rev. Lett. **1997**, 79, 5194–5197.
11. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley: New York, NY, USA, 1991.

**Figure 1.** Example Unq. This is the canonical example of unique information. X_{1} and X_{2} each uniquely specify a single bit of Y. This is the simplest example where I_{min} calculates an undesirable decomposition (c) of one bit of redundancy and one bit of synergy. I_{⋏} and ${\text{I}}_{\u22cf}^{0}$ each calculate the desired decomposition (d). (**a**) Distribution and information quantities; (**b**) circuit diagram; (**c**) I_{min}; (**d**) I_{⋏} and ${\text{I}}_{\u22cf}^{0}$.

**Figure 2.** Example RdnXor. This is the canonical example of redundancy and synergy coexisting. I_{min} and I_{⋏} each reach the desired decomposition of one bit of redundancy and one bit of synergy. This is the simplest example demonstrating I_{⋏} and ${\text{I}}_{\u22cf}^{0}$ correctly extracting the embedded redundant bit within X_{1} and X_{2}. (**a**) Distribution and information quantities; (**b**) circuit diagram; (**c**) I_{min}; (**d**) I_{⋏} and ${\text{I}}_{\u22cf}^{0}$.

**Figure 3.** Example ImperfectRdn. I_{⋏} is blind to the noisy correlation between X_{1} and X_{2} and calculates zero redundant information. An ideal I_{∩} measure would detect that all of the information X_{2} specifies about Y is also specified by X_{1}, and would calculate I_{∩}(X_{1}, X_{2}:Y) = 0.99 bits. (**a**) Distribution and information quantities; (**b**) circuit diagram; (**c**) I_{min}; (**d**) I_{⋏}; (**e**) ${\text{I}}_{\u22cf}^{0}$.

**Figure 4.** Example Subtle. In this example, both I_{⋏} and ${\text{I}}_{\u22cf}^{0}$ calculate a synergy of −0.252 bits. What kind of redundancy must be captured for a nonnegative decomposition of this example? (**a**) Distribution and information quantities; (**b**) circuit diagram; (**c**) I_{min}; (**d**) I_{⋏} and ${\text{I}}_{\u22cf}^{0}$.

**Table 1.** The I_{∩} desired properties that each measure satisfies. (The appendices provide proofs for I_{⋏} and ${\text{I}}_{\u22cf}^{0}$.)

| Property | I_{min} | I_{red} | I_{⋏} | ${\text{I}}_{\u22cf}^{0}$ |
|---|---|---|---|---|
| (GP) Global Positivity | ✓ | ✓ | ✓ | ✓ |
| (Eq) Equivalence-Class Invariance | ✓ | ✓ | ✓ | ✓ |
| (TM) Target Monotonicity | | | ✓ | ✓ |
| (M_{0}) Weak Monotonicity | ✓ | | ✓ | ✓ |
| (S_{0}) Weak Symmetry | ✓ | ✓ | ✓ | ✓ |
| (LB) Lower Bound | ✓ | ✓ | ✓ | ✓ |
| (Id) Identity | | ✓ | | ✓ |
| (LP_{0}) Weak Local Positivity | ✓ | ✓ | | |
| (M_{1}) Strong Monotonicity | | | | ✓ |
| (S_{1}) Strong Symmetry | | | | ✓ |
| (LP_{1}) Strong Local Positivity | ✓ | | | |

© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

Griffith, V.; Chong, E.K.P.; James, R.G.; Ellison, C.J.; Crutchfield, J.P. Intersection Information Based on Common Randomness. *Entropy* **2014**, *16*, 1985–2000. https://doi.org/10.3390/e16041985