# Measuring Multivariate Redundant Information with Pointwise Common Change in Surprisal

## Abstract


## 1. Introduction

## 2. Interaction Information (Co-Information)

#### 2.1. Definitions

#### 2.2. Interpretation

## 3. The Partial Information Decomposition

**Symmetry:**

**Self Redundancy:**

**Subset Equality:**

**Monotonicity:**

**Identity Property (Harder et al.):**

**Independent Identity Property:**

#### 3.1. An Example PID: RdnUnqXor

#### 3.2. Measuring Redundancy With Minimal Specific Information: ${I}_{min}$

#### 3.3. Measuring Redundancy With Maximised Co-Information: ${I}_{broja}$

#### 3.4. Other Redundancy Measures

## 4. Measuring Redundancy With Pointwise Common Change in Surprisal: ${\mathit{I}}_{\mathbf{ccs}}$

#### 4.1. Derivation

**Definition 1.**

**Definition 2.**

**Definition 3.**

#### 4.2. Calculating ${I}_{\mathrm{ccs}}$

#### 4.3. Operational Motivation for Choice of Joint Distribution

#### 4.3.1. A Game-Theoretic Operational Definition of Unique Information

#### 4.3.2. Maximum Entropy Optimisation

#### 4.4. Properties

#### 4.5. Implementation

`dit` package [48,49,50]

## 5. Two Variable Examples

#### 5.1. Examples from Williams and Beer (2010) [6]

#### 5.2. Binary Logical Operators

#### 5.2.1. And/Or

#### 5.2.2. Sum

#### 5.3. Griffith and Koch (2014) Examples

#### 5.4. Dependence on Predictor-Predictor Correlation

## 6. Three Variable Examples

#### 6.1. A Problem With the Three Variable Lattice?

#### 6.2. Other Three Variable Example Systems

#### 6.2.1. Giant Bit and Parity

#### 6.2.2. XorCopy

#### 6.2.3. Other Examples

`examples_3d.m` in accompanying code [47]). ${I}_{\mathrm{ccs}}$ also gives the correct PID for ParityRdnRdn (which appeared in an earlier version of their manuscript).

## 7. Continuous Gaussian Variables

## 8. Discussion

`dit` toolbox [50]. The repository includes all the examples described herein, and it is straightforward for users to apply the method to any other systems or examples they would like.

## Acknowledgments

## Conflicts of Interest

## References

- Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. **1948**, 27, 379–423.
- Cover, T.; Thomas, J. Elements of Information Theory; Wiley: New York, NY, USA, 1991.
- Ince, R.A.; Giordano, B.L.; Kayser, C.; Rousselet, G.A.; Gross, J.; Schyns, P.G. A statistical framework for neuroimaging data analysis based on mutual information estimated via a Gaussian copula. Hum. Brain Mapp. **2017**, 38, 1541–1573.
- Sokal, R.R.; Rohlf, F.J. Biometry; WH Freeman and Company: New York, NY, USA, 1981.
- Timme, N.; Alford, W.; Flecker, B.; Beggs, J.M. Synergy, redundancy, and multivariate information measures: An experimentalist's perspective. J. Comput. Neurosci. **2013**, 36, 119–140.
- Williams, P.L.; Beer, R.D. Nonnegative Decomposition of Multivariate Information. arXiv **2010**, arXiv:1004.2515.
- Wibral, M.; Priesemann, V.; Kay, J.W.; Lizier, J.T.; Phillips, W.A. Partial information decomposition as a unified approach to the specification of neural goal functions. Brain Cogn. **2017**, 112, 25–38.
- Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. A Framework for the Local Information Dynamics of Distributed Computation in Complex Systems. In Guided Self-Organization: Inception; Prokopenko, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 115–158.
- Reza, F.M. An Introduction to Information Theory; McGraw-Hill: New York, NY, USA, 1961.
- Griffith, V.; Koch, C. Quantifying Synergistic Mutual Information. In Guided Self-Organization: Inception; Prokopenko, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 159–190.
- Harder, M.; Salge, C.; Polani, D. Bivariate measure of redundant information. Phys. Rev. E **2013**, 87, 012130.
- Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J.; Ay, N. Quantifying Unique Information. Entropy **2014**, 16, 2161–2183.
- Griffith, V.; Chong, E.K.P.; James, R.G.; Ellison, C.J.; Crutchfield, J.P. Intersection Information Based on Common Randomness. Entropy **2014**, 16, 1985–2000.
- Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J. Shared Information—New Insights and Problems in Decomposing Information in Complex Systems. In Proceedings of the European Conference on Complex Systems 2012; Gilbert, T., Kirkilionis, M., Nicolis, G., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2013; pp. 251–269.
- Olbrich, E.; Bertschinger, N.; Rauh, J. Information Decomposition and Synergy. Entropy **2015**, 17, 3501–3517.
- Griffith, V.; Ho, T. Quantifying Redundant Information in Predicting a Target Random Variable. Entropy **2015**, 17, 4644–4653.
- McGill, W.J. Multivariate information transmission. Psychometrika **1954**, 19, 97–116.
- Jakulin, A.; Bratko, I. Quantifying and Visualizing Attribute Interactions. arXiv **2003**, arXiv:cs/0308002.
- Bell, A.J. The co-information lattice. In Proceedings of the 4th International Symposium on Independent Component Analysis and Blind Signal Separation (ICA2003), Nara, Japan, 1–4 April 2003; pp. 921–926.
- Matsuda, H. Physical nature of higher-order mutual information: Intrinsic correlations and frustration. Phys. Rev. E **2000**, 62, 3096–3102.
- Wibral, M.; Lizier, J.; Vögler, S.; Priesemann, V.; Galuske, R. Local active information storage as a tool to understand distributed neural information processing. Front. Neuroinf. **2014**, 8, 1.
- Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. Local information transfer as a spatiotemporal filter for complex systems. Phys. Rev. E **2008**, 77, 026110.
- Wibral, M.; Lizier, J.T.; Priesemann, V. Bits from Biology for Computational Intelligence. Quant. Biol. **2014**, 185, 1115–1117.
- Van de Cruys, T. Two Multivariate Generalizations of Pointwise Mutual Information. In Proceedings of the Workshop on Distributional Semantics and Compositionality; Association for Computational Linguistics: Stroudsburg, PA, USA, 2011; pp. 16–20.
- Church, K.W.; Hanks, P. Word Association Norms, Mutual Information, and Lexicography. Comput. Linguist. **1990**, 16, 22–29.
- Barrett, A.B. Exploration of synergistic and redundant information sharing in static and dynamical Gaussian systems. Phys. Rev. E **2015**, 91, 052802.
- Han, T.S. Multiple mutual informations and multiple interactions in frequency data. Inf. Control **1980**, 46, 26–45.
- Gawne, T.; Richmond, B. How independent are the messages carried by adjacent inferior temporal cortical neurons? J. Neurosci. **1993**, 13, 2758–2771.
- Panzeri, S.; Schultz, S.; Treves, A.; Rolls, E. Correlations and the encoding of information in the nervous system. Proc. Biol. Sci. **1999**, 266, 1001–1012.
- Brenner, N.; Strong, S.; Koberle, R.; Bialek, W.; Steveninck, R. Synergy in a neural code. Neural Comput. **2000**, 12, 1531–1552.
- Schneidman, E.; Bialek, W.; Berry, M. Synergy, Redundancy, and Independence in Population Codes. J. Neurosci. **2003**, 23, 11539–11553.
- Ting, H. On the Amount of Information. Theory Probab. Appl. **1962**, 7, 439–447.
- Quian Quiroga, R.; Panzeri, S. Extracting information from neuronal populations: Information theory and decoding approaches. Nat. Rev. Neurosci. **2009**, 10, 173–185.
- Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer Series in Statistics: Berlin/Heidelberg, Germany, 2001; Volume 1.
- Crampton, J.; Loizou, G. The completion of a poset in a lattice of antichains. Int. Math. J. **2001**, 1, 223–238.
- Ince, R.A.A. The Partial Entropy Decomposition: Decomposing multivariate entropy and mutual information via pointwise common surprisal. arXiv **2017**, arXiv:1702.01591.
- James, R.G.; Crutchfield, J.P. Multivariate Dependence Beyond Shannon Information. arXiv **2016**, arXiv:1609.01233.
- DeWeese, M.R.; Meister, M. How to measure the information gained from one symbol. Netw. Comput. Neural Syst. **1999**, 10, 325–340.
- Butts, D.A. How much information is associated with a particular stimulus? Netw. Comput. Neural Syst. **2003**, 14, 177–187.
- Osborne, M.J.; Rubinstein, A. A Course in Game Theory; MIT Press: Cambridge, MA, USA, 1994.
- Jaynes, E. Information Theory and Statistical Mechanics. Phys. Rev. **1957**, 106, 620–630.
- Amari, S. Information Geometry of Multiple Spike Trains. In Analysis of Parallel Spike Trains; Grün, S., Rotter, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 221–252.
- Schneidman, E.; Still, S.; Berry, M., II; Bialek, W. Network Information and Connected Correlations. Phys. Rev. Lett. **2003**, 91, 238701.
- Ince, R.; Montani, F.; Arabzadeh, E.; Diamond, M.; Panzeri, S. On the presence of high-order interactions among somatosensory neurons and their effect on information transmission. J. Phys. Conf. Ser. **2009**, 197, 012013.
- Roudi, Y.; Nirenberg, S.; Latham, P. Pairwise Maximum Entropy Models for Studying Large Biological Systems: When They Can Work and When They Can't. PLoS Comput. Biol. **2009**, 5, e1000380.
- Lizier, J.T.; Flecker, B.; Williams, P.L. Towards a synergy-based approach to measuring information modification. In Proceedings of the 2013 IEEE Symposium on Artificial Life (ALIFE), Singapore, 16–19 April 2013; pp. 43–51.
- Robince/partial-info-decomp. Available online: https://github.com/robince/partial-info-decomp (accessed on 29 June 2017).
- Dit. Available online: https://github.com/dit/dit (accessed on 29 June 2017).
- Dit: Discrete Information Theory. Available online: http://docs.dit.io/ (accessed on 29 June 2017).
- James, R.G.; cheebee7i. dit/dit v1.0.0.dev0 [Data set]; Zenodo. Available online: https://zenodo.org/record/235071#.WVMJ9nuVmpo (accessed on 28 June 2017).
- Kay, J.W. On finding trivariate binary distributions given bivariate marginal distributions. Personal Communication, 2017.
- Abdallah, S.A.; Plumbley, M.D. A measure of statistical complexity based on predictive information with application to finite spin systems. Phys. Lett. A **2012**, 376, 275–281.
- Rauh, J.; Bertschinger, N.; Olbrich, E.; Jost, J. Reconsidering unique information: Towards a multivariate information decomposition. In Proceedings of the 2014 IEEE International Symposium on Information Theory (ISIT), Honolulu, HI, USA, 29 June–4 July 2014.
- Chicharro, D.; Panzeri, S. Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss. Entropy **2017**, 19, 71.
- Rauh, J. Secret Sharing and Shared Information. arXiv **2017**, arXiv:1706.06998.
- Panzeri, S.; Senatore, R.; Montemurro, M.A.; Petersen, R.S. Correcting for the Sampling Bias Problem in Spike Train Information Measures. J. Neurophysiol. **2007**, 96, 1064–1072.
- Ince, R.A.A.; Mazzoni, A.; Bartels, A.; Logothetis, N.K.; Panzeri, S. A novel test to determine the significance of neural selectivity to single and multiple potentially correlated stimulus features. J. Neurosci. Methods **2012**, 210, 49–65.
- Kriegeskorte, N.; Mur, M.; Bandettini, P. Representational Similarity Analysis—Connecting the Branches of Systems Neuroscience. Front. Syst. Neurosci. **2008**, 2, 4.
- King, J.R.; Dehaene, S. Characterizing the dynamics of mental representations: The temporal generalization method. Trends Cogn. Sci. **2014**, 18, 203–210.

**Figure 1.** Venn diagrams of mutual information and interaction information. (**A**) Illustration of how mutual information is calculated as the overlap of two entropies; (**B**) The overlapping part of two mutual information values (negative interaction information) can be calculated in the same way—see dashed box in (**A**); (**C**) The full structure of mutual information conveyed by two variables about a third should separate redundant and synergistic regions.

**Figure 3.** Partial Information Decomposition for RdnUnqXor. (**A**) The structure of the RdnUnqXor system, borrowing the graphical representation from [37]. S is a variable containing 4 bits (labelled $a,b,c,d$). ${X}_{1}$ and ${X}_{2}$ each contain 3 bits. ∼ indicates bits which are coupled (distributed identically) and ⊕ indicates the enclosed variables form the Xor relation; (**B**) Redundant information values on the lattice (black); (**C**) Partial information values on the lattice (green).

**Figure 4.** ReducedOr. (**A**) Probability distribution of the ReducedOr system; (**B**) Distribution resulting from the ${I}_{\mathrm{broja}}$ optimisation. Black tiles represent outcomes with $p=0.5$. Grey tiles represent outcomes with $p=0.25$. White tiles are zero-probability outcomes.

**Figure 5.** Probability distributions for three example systems (**A**–**C**). Black tiles represent equiprobable outcomes. White tiles are zero-probability outcomes. (**A**,**B**) modified from [6].

**Figure 6.** Binary logical operators. Probability distributions for (**A**) AND; (**B**) OR. Black tiles represent equiprobable outcomes. White tiles are zero-probability outcomes.

**Figure 7.** PIDs for binary systems with fixed target-predictor marginals as a function of predictor-predictor correlation. ${I}_{\mathrm{broja}}$ (**A**) and ${I}_{\mathrm{ccs}}$ (**B**) PIDs are shown for the system defined in Equation (46) as a function of the predictor-predictor correlation c.

**Figure 8.** The DblXor example. (**A**) Pairwise variable joint distributions. Black tiles represent equiprobable outcomes. White tiles are zero-probability outcomes; (**B**) Non-zero nodes of the three variable redundancy lattice. Mutual information values for each node are shown in red; (**C**) PID. ${I}_{\partial}$ values for each node are shown in green.

**Figure 9.** The AndDuplicate example. (**A**) ${I}_{\mathrm{ccs}}$ values for And; (**B**) Partial information values from the ${I}_{\mathrm{ccs}}$ PID for And; (**C**) ${I}_{\mathrm{ccs}}$ values for AndDuplicate; (**D**) Partial information values from the ${I}_{\mathrm{ccs}}$ PID for AndDuplicate.

**Figure 10.** PIDs for Gaussian systems. (**A**) PID with ${I}_{\mathrm{mmi}}$ for $a=c=0.5$ as a function of predictor-predictor correlation b; (**B**) PID with ${I}_{\mathrm{ccs}}$ for $a=c=0.5$; (**C**) PID with ${I}_{\mathrm{mmi}}$ for $a=0.4,c=0.6$; (**D**) PID with ${I}_{\mathrm{ccs}}$ for $a=0.4,c=0.6$.

**Table 1.** Full Partial Information Decomposition (PID) in the two-variable case. The four terms here correspond to the four regions in Figure 1C.

| Node Label | Redundancy Function | Partial Information | Represented Atom |
|---|---|---|---|
| {12} | ${I}_{\cap}(S;\{{X}_{1},{X}_{2}\})$ | ${I}_{\cap}(S;\{{X}_{1},{X}_{2}\})-{I}_{\cap}(S;\{{X}_{1}\})-{I}_{\cap}(S;\{{X}_{2}\})+{I}_{\cap}(S;\{{X}_{1}\}\{{X}_{2}\})$ | unique information in ${X}_{1}$ and ${X}_{2}$ together (synergy) |
| {1} | ${I}_{\cap}(S;\{{X}_{1}\})$ | ${I}_{\cap}(S;\{{X}_{1}\})-{I}_{\cap}(S;\{{X}_{1}\}\{{X}_{2}\})$ | unique information in ${X}_{1}$ only |
| {2} | ${I}_{\cap}(S;\{{X}_{2}\})$ | ${I}_{\cap}(S;\{{X}_{2}\})-{I}_{\cap}(S;\{{X}_{1}\}\{{X}_{2}\})$ | unique information in ${X}_{2}$ only |
| {1}{2} | ${I}_{\cap}(S;\{{X}_{1}\}\{{X}_{2}\})$ | ${I}_{\cap}(S;\{{X}_{1}\}\{{X}_{2}\})$ | redundant information between ${X}_{1}$ and ${X}_{2}$ |
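The Partial Information column of Table 1 is a Möbius inversion over the two-variable redundancy lattice. A minimal sketch of that inversion (the helper name `pid_two_variables` is illustrative, not code from the paper's repository):

```python
# Sketch: Moebius inversion on the two-variable redundancy lattice,
# recovering partial information I_partial at each node from the
# cumulative redundancy I_cap values, per the formulas in Table 1.
def pid_two_variables(i_cap):
    """i_cap: dict keyed by node label ('{1}{2}', '{1}', '{2}', '{12}')
    giving I_cap(S; node) in bits; returns I_partial per node."""
    i_part = {}
    i_part['{1}{2}'] = i_cap['{1}{2}']                       # redundancy atom
    i_part['{1}'] = i_cap['{1}'] - i_cap['{1}{2}']           # unique in X1
    i_part['{2}'] = i_cap['{2}'] - i_cap['{1}{2}']           # unique in X2
    i_part['{12}'] = (i_cap['{12}'] - i_cap['{1}']           # synergy
                      - i_cap['{2}'] + i_cap['{1}{2}'])
    return i_part

# RdnUnqXor redundancy values from the text: 1, 2, 2, 4 bits
print(pid_two_variables({'{1}{2}': 1, '{1}': 2, '{2}': 2, '{12}': 4}))
# → {'{1}{2}': 1, '{1}': 1, '{2}': 1, '{12}': 1}
```

Applied to the RdnUnqXor values, each atom receives exactly one bit, matching the decomposition described in Section 3.1.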

| Node | ${I}_{\cap}$ | ${I}_{\partial}$ |
|---|---|---|
| $\{1\}\{2\}$ | 1 | 1 |
| $\{1\}$ | 2 | 1 |
| $\{2\}$ | 2 | 1 |
| $\{12\}$ | 4 | 1 |

**Table 3.** Different interpretations of local interaction information terms. ? indicates that combination of terms does not admit a clear interpretation in terms of redundancy or synergy.

| ${\Delta}_{s}h(x)$ | ${\Delta}_{s}h(y)$ | ${\Delta}_{s}h(x,y)$ | $-i(x;y;s)$ | Interpretation |
|---|---|---|---|---|
| + | + | + | + | redundant information |
| + | + | + | − | synergistic information |
| − | − | − | − | redundant misinformation |
| − | − | − | + | synergistic misinformation |
| + | + | − | ⋯ | ? |
| − | − | + | ⋯ | ? |
| $+/-$ | $-/+$ | ⋯ | ⋯ | ? |

| $({x}_{1},{x}_{2},s)$ | ${\Delta}_{s}h({x}_{1})$ | ${\Delta}_{s}h({x}_{2})$ | ${\Delta}_{s}h({x}_{1},{x}_{2})$ | $c({x}_{1};{x}_{2};s)$ | ${\Delta}_{s}{h}^{\mathrm{com}}({x}_{1},{x}_{2})$ |
|---|---|---|---|---|---|
| $(0,0,0)$ | 1 | 1 | 1 | 1 | 1 |
| $(1,1,1)$ | 1 | 1 | 1 | 1 | 1 |

| $({x}_{1},{x}_{2},s)$ | ${\Delta}_{s}h({x}_{1})$ | ${\Delta}_{s}h({x}_{2})$ | ${\Delta}_{s}h({x}_{1},{x}_{2})$ | $c({x}_{1};{x}_{2};s)$ | ${\Delta}_{s}{h}^{\mathrm{com}}({x}_{1},{x}_{2})$ |
|---|---|---|---|---|---|
| $(0,0,0)$ | 1 | 1 | 2 | 0 | 0 |
| $(0,1,1)$ | 0 | 0 | 1 | $-1$ | 0 |
| $(1,0,1)$ | 0 | 0 | 1 | $-1$ | 0 |
| $(1,1,2)$ | 1 | 1 | 2 | 0 | 0 |
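The pointwise terms tabulated above can be recomputed directly from the joint distribution. A minimal sketch, with illustrative helper names (not the paper's accompanying code), run here on the binary summation system $s = x_1 + x_2$ with uniform inputs:

```python
from math import log2

# Sketch: pointwise change-in-surprisal terms and local co-information
# for one outcome of a discrete joint distribution p[(x1, x2, s)].
def pointwise_terms(p, x1, x2, s):
    def marg(pred):
        return sum(v for k, v in p.items() if pred(k))
    px1 = marg(lambda k: k[0] == x1)
    px2 = marg(lambda k: k[1] == x2)
    ps = marg(lambda k: k[2] == s)
    px1s = marg(lambda k: k[0] == x1 and k[2] == s)
    px2s = marg(lambda k: k[1] == x2 and k[2] == s)
    px1x2 = marg(lambda k: k[0] == x1 and k[1] == x2)
    dh_x1 = log2(px1s / (px1 * ps))                 # Delta_s h(x1) = i(x1; s)
    dh_x2 = log2(px2s / (px2 * ps))                 # Delta_s h(x2) = i(x2; s)
    dh_x1x2 = log2(p[(x1, x2, s)] / (px1x2 * ps))   # Delta_s h(x1, x2)
    c = dh_x1 + dh_x2 - dh_x1x2                     # local co-information
    return dh_x1, dh_x2, dh_x1x2, c

# Sum system: s = x1 + x2 with uniform binary inputs
p_sum = {(0, 0, 0): 0.25, (0, 1, 1): 0.25, (1, 0, 1): 0.25, (1, 1, 2): 0.25}
for outcome in p_sum:
    print(outcome, pointwise_terms(p_sum, *outcome))
```

The printed rows reproduce the first four columns of the table: for example, $(0,0,0)$ gives $(1, 1, 2, 0)$ and $(0,1,1)$ gives $(0, 0, 1, -1)$.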

| Node | ${I}_{\partial}[{I}_{\mathrm{min}}]$ | ${I}_{\partial}[{I}_{\mathrm{broja}}]$ | ${I}_{\partial}[{I}_{\mathrm{ccs}}]$ |
|---|---|---|---|
| $\{1\}\{2\}$ | $0.31$ | $0.31$ | 0 |
| $\{1\}$ | 0 | 0 | $0.31$ |
| $\{2\}$ | 0 | 0 | $0.31$ |
| $\{12\}$ | $0.69$ | $0.69$ | $0.38$ |

| ${x}_{1}$ | ${x}_{2}$ | $s$ | $p({x}_{1},{x}_{2},s)$ |
|---|---|---|---|
| 0 | 0 | 0 | $0.4$ |
| 0 | 1 | 0 | $0.1$ |
| 1 | 1 | 1 | $0.5$ |
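Since the target is fully determined by the predictors in the distribution tabulated above, the joint mutual information is simply $H(S) = 1$ bit. A small check (illustrative code, not from the accompanying toolbox):

```python
from math import log2

# Sketch: joint mutual information I(X1, X2; S) in bits for a discrete
# joint distribution p[(x1, x2, s)], from the standard identity
# I = sum p(x1, x2, s) log2[ p(x1, x2, s) / (p(x1, x2) p(s)) ].
def mutual_information(p):
    ps = {}
    pxy = {}
    for (x1, x2, s), v in p.items():
        ps[s] = ps.get(s, 0) + v                    # marginal p(s)
        pxy[(x1, x2)] = pxy.get((x1, x2), 0) + v    # marginal p(x1, x2)
    return sum(v * log2(v / (pxy[(x1, x2)] * ps[s]))
               for (x1, x2, s), v in p.items())

# The distribution tabulated above
p = {(0, 0, 0): 0.4, (0, 1, 0): 0.1, (1, 1, 1): 0.5}
print(mutual_information(p))  # → 1.0
```

This total (1 bit) is the quantity that the competing PIDs in the surrounding tables distribute over the four lattice atoms.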

**Table 8.** PIDs for example Figure 5A.

| Node | ${I}_{\partial}[{I}_{\mathrm{min}}]$ | ${I}_{\partial}[{I}_{\mathrm{broja}}]$ | ${I}_{\partial}[{I}_{\mathrm{ccs}}]$ |
|---|---|---|---|
| $\{1\}\{2\}$ | $0.5850$ | $0.2516$ | $0.3900$ |
| $\{1\}$ | $0.3333$ | $0.6667$ | $0.5283$ |
| $\{2\}$ | $0.3333$ | $0.6667$ | $0.5283$ |
| $\{12\}$ | $0.3333$ | 0 | $0.1383$ |

**Table 9.** PIDs for example Figure 5B.

| Node | ${I}_{\partial}[{I}_{\mathrm{min}}]$ | ${I}_{\partial}[{I}_{\mathrm{broja}}]$ | ${I}_{\partial}[{I}_{\mathrm{ccs}}]$ |
|---|---|---|---|
| $\{1\}\{2\}$ | $0.5$ | 0 | 0 |
| $\{1\}$ | 0 | $0.5$ | $0.5$ |
| $\{2\}$ | $0.5$ | 1 | 1 |
| $\{12\}$ | $0.5$ | 0 | 0 |

**Table 10.** PIDs for example Figure 5C.

| Node | ${I}_{\partial}[{I}_{\mathrm{min}}]$ | ${I}_{\partial}[{I}_{\mathrm{broja}}]$ | ${I}_{\partial}[{I}_{\mathrm{ccs}}]$ |
|---|---|---|---|
| $\{1\}\{2\}$ | 0 | 0 | 0 |
| $\{1\}$ | 0 | 0 | 0 |
| $\{2\}$ | $0.25$ | $0.25$ | $0.25$ |
| $\{12\}$ | $0.67$ | $0.67$ | $0.67$ |

| Node | ${I}_{\partial}[{I}_{\mathrm{min}}]$ | ${I}_{\partial}[{I}_{\mathrm{broja}}]$ | ${I}_{\partial}[{I}_{\mathrm{ccs}}]$ |
|---|---|---|---|
| $\{1\}\{2\}$ | $0.31$ | $0.31$ | $0.10$ |
| $\{1\}$ | 0 | 0 | $0.21$ |
| $\{2\}$ | 0 | 0 | $0.21$ |
| $\{12\}$ | $0.5$ | $0.5$ | $0.29$ |

| $({x}_{1},{x}_{2},s)$ | ${\Delta}_{s}h({x}_{1})$ | ${\Delta}_{s}h({x}_{2})$ | ${\Delta}_{s}h({x}_{1},{x}_{2})$ | $c({x}_{1};{x}_{2};s)$ | ${\Delta}_{s}{h}^{\mathrm{com}}({x}_{1},{x}_{2})$ |
|---|---|---|---|---|---|
| $(0,0,0)$ | $0.415$ | $0.415$ | $0.415$ | $0.415$ | $0.415$ |
| $(0,1,0)$ | $0.415$ | $-0.585$ | $0.415$ | $-0.585$ | 0 |
| $(1,0,0)$ | $-0.585$ | $0.415$ | $0.415$ | $-0.585$ | 0 |
| $(1,1,1)$ | 1 | 1 | 2 | 0 | 0 |
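The last column above follows a simple sign rule: the pointwise common change in surprisal is taken as the local co-information $c(x_1;x_2;s)$ when all four pointwise terms share its sign, and zero otherwise. A sketch of that rule as we read it from the table (an illustration, not the paper's reference implementation):

```python
# Sketch of the sign rule suggested by the table above (our reading, not
# the paper's reference implementation): the common change in surprisal
# equals the local co-information c when all four pointwise terms share
# the same sign, and is zero otherwise.
def common_change(dh_x1, dh_x2, dh_x12, c):
    terms = (dh_x1, dh_x2, dh_x12, c)
    if all(t > 0 for t in terms) or all(t < 0 for t in terms):
        return c
    return 0.0

# Rows of the And table above (bits): only (0,0,0) yields a common change
print(common_change(0.415, 0.415, 0.415, 0.415))    # → 0.415
print(common_change(0.415, -0.585, 0.415, -0.585))  # → 0.0
print(common_change(1, 1, 2, 0))                    # → 0.0
```

This reproduces the final column row by row: only the all-positive outcome $(0,0,0)$ contributes to ${I}_{\mathrm{ccs}}$.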

| Node | ${I}_{\partial}[{I}_{\mathrm{min}}]$ | ${I}_{\partial}[{I}_{\mathrm{broja}}]$ | ${I}_{\partial}[{I}_{\mathrm{ccs}}]$ |
|---|---|---|---|
| $\{1\}\{2\}$ | $0.5$ | $0.5$ | 0 |
| $\{1\}$ | 0 | 0 | $0.5$ |
| $\{2\}$ | 0 | 0 | $0.5$ |
| $\{12\}$ | 1 | 1 | $0.5$ |

**Table 14.** Order-structure terms for the three variable lattice. Resulting values for the example systems of a giant bit, even parity and DblXor (Section 6) are shown.

| Level | Order-Structure Terms | Giant Bit | Parity | DblXor |
|---|---|---|---|---|
| 7 | $(3)$ | 0 | 1 | $-1$ |
| 6 | $(2)$ | 0 | 0 | 3 |
| 5 | $(2,2)$ | 0 | 0 | 0 |
| 4 | $(1)$, $(2,2,2)$ | 0, 0 | 0, 0 | 0, 0 |
| 3 | $(1,2)$ | 0 | 0 | 0 |
| 2 | $(1,1)$ | 0 | 0 | 0 |
| 1 | $(1,1,1)$ | 1 | 0 | 0 |

© 2017 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ince, R.A.A.
Measuring Multivariate Redundant Information with Pointwise Common Change in Surprisal. *Entropy* **2017**, *19*, 318.
https://doi.org/10.3390/e19070318
