# Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices


## Abstract


## 1. Introduction

#### 1.1. Notation

- $T$, $\mathcal{T}$, $t$, ${t}^{\mathsf{c}}$: the target variable, its event space, an event and the complementary event, respectively;
- $S$, $\mathcal{S}$, $s$, ${s}^{\mathsf{c}}$: the predictor variable, its event space, an event and the complementary event, respectively;
- $\mathit{S}$, $\mathit{s}$: the set of $n$ predictor variables $\{{S}_{1},\dots ,{S}_{n}\}$ and events $\{{s}_{1},\dots ,{s}_{n}\}$, respectively;
- ${\mathcal{T}}^{t}$, ${\mathcal{S}}^{s}$: the two-event partitions of the event spaces, i.e., ${\mathcal{T}}^{t}=\{t,{t}^{\mathsf{c}}\}$ and ${\mathcal{S}}^{s}=\{s,{s}^{\mathsf{c}}\}$;
- $H(T)$, $I(S;T)$: uppercase function names are used for average information-theoretic measures;
- $h(t)$, $i(s;t)$: lowercase function names are used for pointwise information-theoretic measures.

- ${s}^{1}$, ${s}^{2}$, ${t}^{1}$, ${t}^{2}$: superscripts distinguish between different events in a variable;
- ${S}_{1}$, ${S}_{2}$, ${T}_{1}$, ${T}_{2}$: subscripts distinguish between different variables;
- ${S}_{1,2}$, ${s}_{1,2}$: multiple subscripts represent joint variables and joint events.

- ${\mathit{A}}_{1},\dots ,{\mathit{A}}_{k}$: sources are sets of predictor variables, i.e., ${\mathit{A}}_{i}\in {\mathcal{P}}_{1}(\mathit{S})$, where ${\mathcal{P}}_{1}$ is the power set without ∅;
- ${\mathit{a}}_{1},\dots ,{\mathit{a}}_{k}$: source events are sets of predictor events, i.e., ${\mathit{a}}_{i}\in {\mathcal{P}}_{1}(\mathit{s})$.
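As a concrete illustration of this notation, the pointwise quantities $h(t)$ and $i(s;t)$ average to the familiar $H(T)$ and $I(S;T)$. A minimal numeric sketch (the joint distribution below is an illustrative toy example, not one from the paper):

```python
from math import log2

# Toy joint distribution p(s, t) over a binary predictor/target pair.
p_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

p_s = {s: sum(v for (si, _), v in p_joint.items() if si == s) for s in (0, 1)}
p_t = {t: sum(v for (_, ti), v in p_joint.items() if ti == t) for t in (0, 1)}

def h(t):
    """Pointwise target surprisal h(t) = -log2 p(t)."""
    return -log2(p_t[t])

def i(s, t):
    """Pointwise mutual information i(s;t) = log2 [p(s,t) / (p(s) p(t))]."""
    return log2(p_joint[(s, t)] / (p_s[s] * p_t[t]))

# The average (uppercase) measures are expectations of the pointwise ones.
H_T = sum(p_t[t] * h(t) for t in p_t)
I_ST = sum(v * i(s, t) for (s, t), v in p_joint.items())
```

Note that while $H(T)$ and $I(S;T)$ are non-negative, the pointwise $i(s;t)$ can be negative for individual realisations, which is the starting point for the decomposition considered in this paper.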

#### 1.2. Partial Information Decomposition

**W&B Axiom 1** (Commutativity)**.** ${I}_{\cap}({\mathit{A}}_{1},\dots ,{\mathit{A}}_{k}\to T)$ is invariant under any permutation of the sources ${\mathit{A}}_{1},\dots ,{\mathit{A}}_{k}$.

**W&B Axiom 2** (Monotonicity)**.** ${I}_{\cap}({\mathit{A}}_{1},\dots ,{\mathit{A}}_{k},{\mathit{A}}_{k+1}\to T)\le {I}_{\cap}({\mathit{A}}_{1},\dots ,{\mathit{A}}_{k}\to T)$, i.e., including an additional source cannot increase the redundant information.

**W&B Axiom 3** (Self-redundancy)**.** ${I}_{\cap}(\mathit{A}\to T)=I(\mathit{A};T)$, i.e., the redundant information in a single source equals the mutual information between that source and the target.

## 2. Pointwise Information Theory

#### 2.1. Pointwise Information Decomposition

#### 2.2. Pointwise Unique

#### 2.3. Pointwise Partial Information Decomposition

**PPID Axiom 1** (Symmetry)**.** The pointwise analogue of W&B Axiom 1: ${i}_{\cap}({\mathit{a}}_{1},\dots ,{\mathit{a}}_{k}\to t)$ is invariant under any permutation of the source events ${\mathit{a}}_{1},\dots ,{\mathit{a}}_{k}$.

**PPID Axiom 2** (Monotonicity)**.** The pointwise analogue of W&B Axiom 2: ${i}_{\cap}({\mathit{a}}_{1},\dots ,{\mathit{a}}_{k},{\mathit{a}}_{k+1}\to t)\le {i}_{\cap}({\mathit{a}}_{1},\dots ,{\mathit{a}}_{k}\to t)$.

**PPID Axiom 3** (Self-redundancy)**.** The pointwise analogue of W&B Axiom 3: for a single source event, ${i}_{\cap}(\mathit{a}\to t)=i(\mathit{a};t)$.

## 3. Probability Mass Exclusions and the Directed Components of Pointwise Mutual Information

#### 3.1. Two Distinct Types of Probability Mass Exclusions

**Remark 1.**

#### 3.2. The Directed Components of Pointwise Information: Specificity and Ambiguity

**Postulate 1** (Decomposition)**.** The pointwise mutual information decomposes into two non-negative components, $i(s;t)={i}^{+}(s\to t)-{i}^{-}(s\to t)$.

**Postulate 2** (Monotonicity)**.**

**Postulate 3** (Self-Information)**.**

**Postulate 4** (Chain Rule)**.** ${i}^{\pm}({s}_{1,2}\to t)={i}^{\pm}({s}_{1}\to t)+{i}^{\pm}({s}_{2}\to t|{s}_{1})$.

- The positive informational component ${i}^{+}(s\to t)$ does not depend on $t$ but only on $s$. This can be interpreted as follows: the less likely $s$ is to occur, the more specific it is when it does occur, the greater the total amount of probability mass excluded $p({s}^{\mathsf{c}})$, and the greater the potential for $s$ to inform about $t$ (or indeed any other target realisation).
- The negative informational component ${i}^{-}(s\to t)$ depends on both $s$ and $t$, and can be interpreted as follows: the less likely $s$ is to coincide with the event $t$, the more uncertainty in $s$ given $t$, the greater the size of the misinformative probability mass exclusion $p({s}^{\mathsf{c}},t)$, and therefore the greater the potential for $s$ to misinform about $t$.
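In line with these two interpretations, the specificity can be identified with the predictor-event surprisal and the ambiguity with its conditional surprisal, so that $i(s;t)={i}^{+}(s\to t)-{i}^{-}(s\to t)$. A numeric sketch under that identification, i.e., assuming ${i}^{+}(s\to t)=h(s)$ and ${i}^{-}(s\to t)=h(s|t)$ (the joint distribution is an illustrative toy, not from the paper):

```python
from math import log2

# Toy joint distribution p(s, t) with uniform marginals.
p_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_s = {0: 0.5, 1: 0.5}
p_t = {0: 0.5, 1: 0.5}

def specificity(s):
    """i+(s -> t) = h(s): depends only on s (total mass excluded is p(s^c))."""
    return -log2(p_s[s])

def ambiguity(s, t):
    """i-(s -> t) = h(s|t): grows with the misinformative exclusion p(s^c, t)."""
    return -log2(p_joint[(s, t)] / p_t[t])

def pmi(s, t):
    return log2(p_joint[(s, t)] / (p_s[s] * p_t[t]))

# The two directed components recover the pointwise mutual information.
for (s, t) in p_joint:
    assert abs(pmi(s, t) - (specificity(s) - ambiguity(s, t))) < 1e-12
```

The decomposition makes the sign structure of $i(s;t)$ transparent: the pointwise value is negative exactly when the ambiguity $h(s|t)$ exceeds the specificity $h(s)$.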

#### 3.3. Operational Interpretation of Redundant Information

**Operational Interpretation** (Redundant Specificity)**.**

**Operational Interpretation** (Redundant Ambiguity)**.**

#### 3.4. Motivational Example

## 4. Pointwise Partial Information Decomposition Using Specificity and Ambiguity

**Axiom 1** (Symmetry)**.** ${i}_{\cap}^{\pm}({\mathit{a}}_{1},\dots ,{\mathit{a}}_{k}\to t)$ is invariant under any permutation of the source events ${\mathit{a}}_{1},\dots ,{\mathit{a}}_{k}$.

**Axiom 2** (Monotonicity)**.** ${i}_{\cap}^{\pm}({\mathit{a}}_{1},\dots ,{\mathit{a}}_{k},{\mathit{a}}_{k+1}\to t)\le {i}_{\cap}^{\pm}({\mathit{a}}_{1},\dots ,{\mathit{a}}_{k}\to t)$.

**Axiom 3** (Self-redundancy)**.** For a single source event, ${i}_{\cap}^{+}(\mathit{a}\to t)={i}^{+}(\mathit{a}\to t)$ and ${i}_{\cap}^{-}(\mathit{a}\to t)={i}^{-}(\mathit{a}\to t)$.

#### 4.1. Bivariate PPID Using the Specificity and Ambiguity

#### 4.2. Redundancy Measures on the Specificity and Ambiguity Lattices

**Axiom 4** (Two-event Partition)**.**

**Definition 1.**

**Definition 2.**

**Theorem 1.**

**Theorem 2.**

**Theorem 3.**

**Theorem 4.**

**Theorem 5** (Pointwise Target Chain Rule)**.**

## 5. Discussion

#### 5.1. Comparison to Existing Measures

#### 5.2. Probability Distribution Xor

#### 5.3. Probability Distribution PwUnq

#### 5.4. Probability Distribution RdnErr

#### 5.5. Probability Distribution Tbc

**Theorem 6.**

#### 5.6. Summary of Key Properties

**Property 1.**

**Property 2.**

**Property 3.**

**Property 4.**

**Property 5.**

## 6. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## Appendix A. Kelly Gambling, Axiom 4, and Tbc

#### Appendix A.1. Pointwise Side Information and the Kelly Criterion
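The classical connection here is Kelly's result [40] (see also Cover & Thomas [43]): at fair odds $o(t)=1/p(t)$, a gambler who bets the fractions $b(t|s)=p(t|s)$ on each outcome improves the expected log-wealth growth rate by exactly $I(S;T)$, and on a single race by the pointwise $i(s;t)$. A numeric sketch under those standard assumptions (toy distribution, not from the paper):

```python
from math import log2

# Horse race with side information S about the winner T: fair odds
# o(t) = 1/p(t), and the gambler bets proportionally, b(t|s) = p(t|s).
p_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_s = {0: 0.5, 1: 0.5}
p_t = {0: 0.5, 1: 0.5}

def p_t_given_s(t, s):
    return p_joint[(s, t)] / p_s[s]

# Expected log2 wealth growth with side information:
#   sum over (s, t) of p(s,t) log2[ b(t|s) o(t) ].
growth_side = sum(v * log2(p_t_given_s(t, s) / p_t[t])
                  for (s, t), v in p_joint.items())

# Without side information the optimal bet b(t) = p(t) gives zero growth at
# fair odds, so the increase in growth rate equals the mutual information.
I_ST = sum(v * log2(v / (p_s[s] * p_t[t])) for (s, t), v in p_joint.items())
```

On any single race the log-wealth change is $\log_2 [b(t|s)\,o(t)] = i(s;t)$, which is the pointwise side information; it can be negative for an individual race even though its expectation $I(S;T)$ cannot.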

#### Appendix A.2. Justification of Axiom 4 and Redundant Information in Tbc

- **Horse 0** is a black horse ${T}_{1}=0$, ridden by a female jockey ${T}_{2}=0$, who is wearing a red jersey ${T}_{3}=0$.
- **Horse 1** is a black horse ${T}_{1}=0$, ridden by a male jockey ${T}_{2}=1$, who is wearing a green jersey ${T}_{3}=1$.
- **Horse 2** is a white horse ${T}_{1}=1$, ridden by a female jockey ${T}_{2}=0$, who is wearing a green jersey ${T}_{3}=1$.
- **Horse 3** is a white horse ${T}_{1}=1$, ridden by a male jockey ${T}_{2}=1$, who is wearing a red jersey ${T}_{3}=0$.

#### Appendix A.3. Accumulator Betting and the Target Chain Rule
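An accumulator bet multiplies the odds of its legs, so in log-wealth terms the pointwise information about a joint target event adds across the targets: $i(s;{t}_{1},{t}_{2})=i(s;{t}_{1})+i(s;{t}_{2}|{t}_{1})$. This identity follows directly from the definitions and can be checked on any joint distribution (the one below is an illustrative toy, not from the paper):

```python
from math import log2

# Toy joint distribution p(s, t1, t2).
p = {(0, 0, 0): 0.3, (0, 1, 1): 0.2, (1, 0, 1): 0.1, (1, 1, 0): 0.4}

def marg(keep):
    """Marginal over the kept coordinate indices of (s, t1, t2)."""
    out = {}
    for ev, v in p.items():
        key = tuple(ev[i] for i in keep)
        out[key] = out.get(key, 0.0) + v
    return out

p_s, p_t1 = marg([0]), marg([1])
p_st1, p_t12, p_st12 = marg([0, 1]), marg([1, 2]), marg([0, 1, 2])

for (s, t1, t2) in p:
    i_joint = log2(p_st12[(s, t1, t2)] / (p_s[(s,)] * p_t12[(t1, t2)]))
    i_t1 = log2(p_st1[(s, t1)] / (p_s[(s,)] * p_t1[(t1,)]))
    # i(s; t2 | t1) = log2 [ p(s, t2 | t1) / (p(s|t1) p(t2|t1)) ]
    i_t2_given_t1 = log2(
        (p_st12[(s, t1, t2)] / p_t1[(t1,)]) /
        ((p_st1[(s, t1)] / p_t1[(t1,)]) * (p_t12[(t1, t2)] / p_t1[(t1,)])))
    # Pointwise target chain rule.
    assert abs(i_joint - (i_t1 + i_t2_given_t1)) < 1e-9
```

In the betting picture, the left-hand side is the log-wealth change of the accumulator bet on the joint event $({t}_{1},{t}_{2})$, while the right-hand side is the total log-wealth change of the two sequential single bets.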

## Appendix B. Supporting Proofs and Further Details

#### Appendix B.1. Deriving the Specificity and Ambiguity Lattices from Axioms 1–4

**Proposition A1.**

**Proof.**

**Proposition A2.**

**Proof.**

**Theorem A1.**

**Proof.**

#### Appendix B.2. Redundancy Measures on the Lattices

**Theorem 1.**

**Proof.**

**Theorem 2.**

**Lemma A1.**

**Proof.**

**Proof of Theorem 2.**

**Theorem A2.**

**Proof.**

**Theorem 3.**

**Proof.**

**Theorem 4.**

**Proof.**

#### Appendix B.3. Target Chain Rule

**Proposition A3.**

**Proof.**

**Proposition A4.**

**Proof.**

**Proposition A5.**

**Proof.**

**Theorem 5** (Pointwise Target Chain Rule)**.**

**Proof.**

**Theorem 6.**

**Proof.**

## Appendix C. Additional Example Probability Distributions

#### Appendix C.1. Probability Distribution Tbep

**Figure A1.** Example Tbep. (**Top**) probability mass diagram for the realisation $({S}_{1}=0,{S}_{2}=0,{S}_{3}=0,T=000)$; (**Bottom left**) with three predictors, it is convenient to represent the decomposition diagrammatically. This is especially true for Tbep, as one only needs to consider the specificity lattice for one realisation; (**Bottom right**) the specificity lattice for the realisation $({S}_{1}=0,{S}_{2}=0,{S}_{3}=0,T=000)$. For each source event, the left value corresponds to the value of ${i}_{\cap}^{+}$, evaluated using ${r}_{\mathrm{min}}^{+}$, while the right value (in parentheses) corresponds to the partial information ${\pi}^{+}$.

#### Appendix C.2. Probability Distribution Unq

**Figure A2.** Example Unq. (**Top**) the probability mass diagrams for every possible realisation; (**Middle**) for each realisation, the PPID using specificity and ambiguity is evaluated (see Figure 4); (**Bottom**) the atoms of (average) partial information obtained through recombination of the averages.

#### Appendix C.3. Probability Distribution And

**Figure A3.** Example And. (**Top**) the probability mass diagrams for every possible realisation; (**Middle**) for each realisation, the PPID using specificity and ambiguity is evaluated (see Figure 4); (**Bottom**) the atoms of (average) partial information obtained through recombination of the averages.

## References and Note

1. Williams, P.L.; Beer, R.D. Nonnegative decomposition of multivariate information. arXiv 2010, arXiv:1004.2515.
2. Williams, P.L.; Beer, R.D. Decomposing Multivariate Information. Indiana University, privately communicated, 2010. This unpublished paper is highly similar to [1]. Crucially, however, this paper derives the redundancy lattice from the W&B Axioms 1–3 of Section 1. In contrast, [1] derives the redundancy lattice as a property of the particular measure ${I}_{\mathrm{min}}$.
3. Olbrich, E.; Bertschinger, N.; Rauh, J. Information decomposition and synergy. Entropy 2015, 17, 3501–3517.
4. Lizier, J.T.; Flecker, B.; Williams, P.L. Towards a synergy-based approach to measuring information modification. In Proceedings of the IEEE Symposium on Artificial Life (ALife), Singapore, 16–19 April 2013; pp. 43–51.
5. Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J. Shared information—New insights and problems in decomposing information in complex systems. In Proceedings of the European Conference on Complex Systems, Brussels, Belgium, 3–7 September 2012; Springer: Cham, Switzerland, 2013; pp. 251–269.
6. Harder, M.; Salge, C.; Polani, D. Bivariate measure of redundant information. Phys. Rev. E 2013, 87, 012130.
7. Griffith, V.; Chong, E.K.; James, R.G.; Ellison, C.J.; Crutchfield, J.P. Intersection information based on common randomness. Entropy 2014, 16, 1985–2000.
8. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
9. Fano, R. Transmission of Information; The MIT Press: Cambridge, MA, USA, 1961.
10. Harder, M. Information driven self-organization of agents and agent collectives. Ph.D. Thesis, University of Hertfordshire, Hertfordshire, UK, 2013.
11. Bertschinger, N.; Rauh, J.; Olbrich, E.; Jost, J.; Ay, N. Quantifying unique information. Entropy 2014, 16, 2161–2183.
12. Griffith, V.; Koch, C. Quantifying Synergistic Mutual Information. In Guided Self-Organization: Inception; Prokopenko, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2014; Volume 9, pp. 159–190.
13. Rauh, J.; Bertschinger, N.; Olbrich, E.; Jost, J. Reconsidering unique information: Towards a multivariate information decomposition. In Proceedings of the 2014 IEEE International Symposium on Information Theory, Honolulu, HI, USA, 29 June–4 July 2014; pp. 2232–2236.
14. Perrone, P.; Ay, N. Hierarchical Quantification of Synergy in Channels. Front. Robot. AI 2016, 2, 35.
15. Griffith, V.; Ho, T. Quantifying redundant information in predicting a target random variable. Entropy 2015, 17, 4644–4653.
16. Rosas, F.; Ntranos, V.; Ellison, C.J.; Pollin, S.; Verhelst, M. Understanding interdependency through complex information sharing. Entropy 2016, 18, 38.
17. Barrett, A.B. Exploration of synergistic and redundant information sharing in static and dynamical Gaussian systems. Phys. Rev. E 2015, 91, 052802.
18. Ince, R. Measuring Multivariate Redundant Information with Pointwise Common Change in Surprisal. Entropy 2017, 19, 318.
19. Ince, R.A. The Partial Entropy Decomposition: Decomposing multivariate entropy and mutual information via pointwise common surprisal. arXiv 2017, arXiv:1702.01591.
20. Chicharro, D.; Panzeri, S. Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss. Entropy 2017, 19, 71.
21. Rauh, J.; Banerjee, P.K.; Olbrich, E.; Jost, J.; Bertschinger, N. On Extractable Shared Information. Entropy 2017, 19, 328.
22. Rauh, J.; Banerjee, P.K.; Olbrich, E.; Jost, J.; Bertschinger, N.; Wolpert, D. Coarse-Graining and the Blackwell Order. Entropy 2017, 19, 527.
23. Rauh, J. Secret sharing and shared information. Entropy 2017, 19, 601.
24. Faes, L.; Marinazzo, D.; Stramaglia, S. Multiscale information decomposition: Exact computation for multivariate Gaussian processes. Entropy 2017, 19, 408.
25. Pica, G.; Piasini, E.; Chicharro, D.; Panzeri, S. Invariant components of synergy, redundancy, and unique information among three variables. Entropy 2017, 19, 451.
26. James, R.G.; Crutchfield, J.P. Multivariate dependence beyond Shannon information. Entropy 2017, 19, 531.
27. Makkeh, A.; Theis, D.O.; Vicente, R. Bivariate Partial Information Decomposition: The Optimization Perspective. Entropy 2017, 19, 530.
28. Kay, J.W.; Ince, R.A.; Dering, B.; Phillips, W.A. Partial and Entropic Information Decompositions of a Neuronal Modulatory Interaction. Entropy 2017, 19, 560.
29. Angelini, L.; de Tommaso, M.; Marinazzo, D.; Nitti, L.; Pellicoro, M.; Stramaglia, S. Redundant variables and Granger causality. Phys. Rev. E 2010, 81, 037201.
30. Stramaglia, S.; Angelini, L.; Wu, G.; Cortes, J.M.; Faes, L.; Marinazzo, D. Synergetic and redundant information flow detected by unnormalized Granger causality: Application to resting state fMRI. IEEE Trans. Biomed. Eng. 2016, 63, 2518–2524.
31. Ghazi-Zahedi, K.; Langer, C.; Ay, N. Morphological computation: Synergy of body and brain. Entropy 2017, 19, 456.
32. Maity, A.K.; Chaudhury, P.; Banik, S.K. Information theoretical study of cross-talk mediated signal transduction in MAPK pathways. Entropy 2017, 19, 469.
33. Tax, T.; Mediano, P.A.; Shanahan, M. The partial information decomposition of generative neural network models. Entropy 2017, 19, 474.
34. Wibral, M.; Finn, C.; Wollstadt, P.; Lizier, J.T.; Priesemann, V. Quantifying Information Modification in Developing Neural Networks via Partial Information Decomposition. Entropy 2017, 19, 494.
35. Woodward, P.M. Probability and Information Theory: With Applications to Radar; Pergamon Press: Oxford, UK, 1953.
36. Woodward, P.M.; Davies, I.L. Information theory and inverse probability in telecommunication. Proc. IEE-Part III Radio Commun. Eng. 1952, 99, 37–44.
37. Gray, R.M. Probability, Random Processes, and Ergodic Properties; Springer: New York, NY, USA, 1988.
38. Martin, N.F.; England, J.W. Mathematical Theory of Entropy; Cambridge University Press: Cambridge, UK, 1984.
39. Finn, C.; Lizier, J.T. Probability Mass Exclusions and the Directed Components of Pointwise Mutual Information. arXiv 2018, arXiv:1801.09223.
40. Kelly, J.L. A new interpretation of information rate. Bell Labs Tech. J. 1956, 35, 917–926.
41. Ash, R. Information Theory; Interscience Tracts in Pure and Applied Mathematics; Interscience Publishers: Geneva, Switzerland, 1965.
42. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Champaign, IL, USA, 1998.
43. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
44. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1988.
45. Rota, G.C. On the foundations of combinatorial theory I. Theory of Möbius functions. Probab. Theory Relat. Field 1964, 2, 340–368.
46. Stanley, R.P. Enumerative Combinatorics, 2nd ed.; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 2012; Volume 1.
47. Davey, B.A.; Priestley, H.A. Introduction to Lattices and Order, 2nd ed.; Cambridge University Press: Cambridge, UK, 2002.
48. Ross, S.M. A First Course in Probability, 8th ed.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2009.

**Figure 1.** Sample probability mass diagrams, which use length to represent the probability mass of each joint event from $\mathcal{T}\times \mathcal{S}$. (**Left**) the joint distribution $P(S,T)$; (**Middle**) the occurrence of the event ${s}^{1}$ leads to the exclusion of the complementary event ${{s}^{1}}^{\mathsf{c}}$, which consists of two elementary events, i.e., ${{s}^{1}}^{\mathsf{c}}=\{{s}^{2},{s}^{3}\}$. This leaves the probability mass $P({s}^{1},T)$ remaining. The exclusion of the probability mass $p({{s}^{1}}^{\mathsf{c}},{t}^{1})$ was misinformative since the event ${t}^{1}$ did occur. By convention, misinformative exclusions are indicated with diagonal hatching. On the other hand, the exclusion of the probability mass $p({{t}^{1}}^{\mathsf{c}},{{s}^{1}}^{\mathsf{c}})$ was informative since the complementary event ${{t}^{1}}^{\mathsf{c}}$ did not occur. By convention, informative exclusions are indicated with horizontal or vertical hatching; (**Right**) the remaining probability mass can be normalised, yielding the conditional distribution $P(T|{s}^{1})$.

**Figure 2.** Sample probability mass diagrams for two predictors ${S}_{1}$ and ${S}_{2}$ of a given target $T$. Here, events in the two different predictor spaces provide the same amount of pointwise information about the target event, ${\log}_{2}(4/3)$ bits, since $P(T|{s}_{1}^{1})=P(T|{s}_{2}^{1})$, although each excludes different sections of the target distribution $P(T)$. Since both provide the same amount of information, is there a way to characterise what information the additional unique exclusions from the event ${s}_{2}^{1}$ are providing?

**Figure 3.** The lattice induced by the partial order ⪯ (A15) over the sources $\mathcal{A}(\mathit{s})$ (A14). (**Left**) the lattice for $\mathit{s}=\{{s}_{1},{s}_{2}\}$; (**Right**) the lattice for $\mathit{s}=\{{s}_{1},{s}_{2},{s}_{3}\}$. See Appendix B for further details. Each node corresponds to the self-redundancy (Axiom 3) of a source event, e.g., $\{1\}$ corresponds to the source event $\{{s}_{1}\}$, while $\{12,13\}$ corresponds to the source events $\{{s}_{1,2},{s}_{1,3}\}$. Note that the specificity and ambiguity lattices share the same structure as the redundancy lattice of partial information decomposition (PID) (cf. Figure 2 in [1]).

**Figure 4.** Example Xor. (**Top**) probability mass diagrams for the realisation $({S}_{1}=0,{S}_{2}=0,T=0)$; (**Middle**) for each realisation, the pointwise specificity and pointwise ambiguity have been evaluated using (5) and (8) respectively. The pointwise redundant specificity and pointwise redundant ambiguity are then determined using (23) and (24). The decomposition is calculated using (18) and (19). The expected specificity and ambiguity are calculated with (20); (**Bottom**) the average information is given by (22). As expected, Xor yields 1 bit of complementary information.
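The computation summarised in this caption can be sketched in a few lines, assuming the minimum-based redundancy measures (redundant specificity ${r}_{\mathrm{min}}^{+}$ as the minimum specificity over the singleton source events, and likewise ${r}_{\mathrm{min}}^{-}$ for the ambiguity) together with Möbius inversion over the bivariate lattice; the caption's equation-number pipeline is paraphrased rather than reproduced:

```python
from math import log2

# Bivariate PPID of Xor for the realisation (s1, s2, t) = (0, 0, 0), with
# T = S1 xor S2 and S1, S2 uniform bits; by symmetry every realisation of
# Xor yields the same atoms.
p_s1, p_s2, p_s12 = 0.5, 0.5, 0.25          # p(s1), p(s2), p(s1, s2)
p_s1_t, p_s2_t, p_s12_t = 0.5, 0.5, 0.5     # p(s1|t), p(s2|t), p(s1, s2|t)

h = lambda q: -log2(q)  # surprisal

def atoms(h1, h2, h12):
    """Partial information atoms (r, u1, u2, c) on one bivariate lattice:
    redundancy is the minimum over the two singleton source events, and the
    unique and complementary atoms follow by Moebius inversion."""
    r = min(h1, h2)
    u1, u2 = h1 - r, h2 - r
    c = h12 - r - u1 - u2
    return r, u1, u2, c

plus = atoms(h(p_s1), h(p_s2), h(p_s12))          # specificity lattice
minus = atoms(h(p_s1_t), h(p_s2_t), h(p_s12_t))   # ambiguity lattice

# Recombine the two lattices: pi = pi+ - pi-.
r, u1, u2, c = (a - b for a, b in zip(plus, minus))
```

Recombining the lattices gives $r=0$, ${u}_{1}={u}_{2}=0$ and $c=1$ bit of complementary information, matching the caption.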

**Figure 5.** Example PwUnq. (**Top**) probability mass diagrams for the realisation $({S}_{1}=0,{S}_{2}=1,T=1)$; (**Middle**) for each realisation, the pointwise partial information decomposition (PPID) using specificity and ambiguity is evaluated (see Figure 4 for details). Upon recombination as per (21), the PPI decomposition from Table 1 is attained; (**Bottom**) the same holds for the average information: the decomposition does not suffer from the pointwise unique problem.

**Figure 6.** Example RdnErr. (**Top**) probability mass diagrams for the realisations $({S}_{1}=0,{S}_{2}=0,T=0)$ and $({S}_{1}=0,{S}_{2}=1,T=0)$; (**Middle**) for each realisation, the PPID using specificity and ambiguity is evaluated (see Figure 4 for details); (**Bottom**) the average PI atoms may be negative, as the decomposition does not satisfy local positivity.

**Figure 7.** Example Tbc. (**Top**) the probability mass diagrams for the realisation $({S}_{1}=0,{S}_{2}=0,T=00)$; (**Middle**) for each realisation, the PPID using specificity and ambiguity is evaluated (see Figure 4); (**Bottom**) the decomposition of Xor yields the same result as ${I}_{\mathrm{min}}$.

**Table 1.** Example PwUnq. For each realisation, the pointwise mutual information provided by the individual and joint predictor events about the target event has been evaluated. Note that one predictor event always provides full information about the target while the other provides zero information. Based on this, it is assumed that there must be zero redundant information. The pointwise partial information (PPI) atoms are then calculated via (3).

| p | ${s}_{1}$ | ${s}_{2}$ | $t$ | $i({s}_{1};t)$ | $i({s}_{2};t)$ | $i({s}_{1,2};t)$ | $u({s}_{1}\backslash {s}_{2}\to t)$ | $u({s}_{2}\backslash {s}_{1}\to t)$ | $r({s}_{1},{s}_{2}\to t)$ | $c({s}_{1},{s}_{2}\to t)$ |
|---|---|---|---|---|---|---|---|---|---|---|
| $1/4$ | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 |
| $1/4$ | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 |
| $1/4$ | 0 | 2 | 2 | 0 | 1 | 1 | 0 | 1 | 0 | 0 |
| $1/4$ | 2 | 0 | 2 | 1 | 0 | 1 | 1 | 0 | 0 | 0 |
| Expected values | | | | $1/2$ | $1/2$ | 1 | $1/2$ | $1/2$ | 0 | 0 |

**Table 2.** The decomposition of the quantities in the first row induced by the measures in the first column. For consistency, the decomposition of $I({S}_{1,2};{T}_{1,3})$ should equal both the sum of the decompositions of $I({S}_{1,2};{T}_{1})$ and $I({S}_{1,2};{T}_{3}|{T}_{1})$, and the sum of the decompositions of $I({S}_{1,2};{T}_{3})$ and $I({S}_{1,2};{T}_{1}|{T}_{3})$. Note that the decompositions induced by $\tilde{UI}$, ${I}_{\mathrm{red}}$ and ${\mathcal{S}}_{\mathrm{VK}}$ are not consistent. In contrast, ${R}_{\mathrm{min}}$ is consistent due to Theorem 5.

| | $I({S}_{1,2};{T}_{1,3})$ | $I({S}_{1,2};{T}_{1})$ | $I({S}_{1,2};{T}_{3}\|{T}_{1})$ | $I({S}_{1,2};{T}_{3})$ | $I({S}_{1,2};{T}_{1}\|{T}_{3})$ |
|---|---|---|---|---|---|
| $\tilde{UI}$, ${I}_{\mathrm{red}}$, ${\mathcal{S}}_{\mathrm{VK}}$ | $U({S}_{1}\backslash {S}_{2}\to {T}_{1,3})=1$; $U({S}_{2}\backslash {S}_{1}\to {T}_{1,3})=1$ | $U({S}_{1}\backslash {S}_{2}\to {T}_{1})=1$ | $U({S}_{2}\backslash {S}_{1}\to {T}_{3}\|{T}_{1})=1$ | $C({S}_{1},{S}_{2}\to {T}_{3})=1$ | $R({S}_{1},{S}_{2}\to {T}_{1}\|{T}_{3})=1$ |
| ${R}_{\mathrm{min}}$ | $R({S}_{1},{S}_{2}\to {T}_{1,3})=1$; $C({S}_{1},{S}_{2}\to {T}_{1,3})=1$ | $U({S}_{2}\backslash {S}_{1}\to {T}_{1})=-1$; $R({S}_{1},{S}_{2}\to {T}_{1})=1$; $C({S}_{1},{S}_{2}\to {T}_{1})=1$ | $U({S}_{2}\backslash {S}_{1}\to {T}_{3}\|{T}_{1})=1$ | $C({S}_{1},{S}_{2}\to {T}_{3})=1$ | $R({S}_{1},{S}_{2}\to {T}_{1}\|{T}_{3})=1$ |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Finn, C.; Lizier, J.T. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices. *Entropy* **2018**, *20*, 297. https://doi.org/10.3390/e20040297