# Higher-Order Interactions and Their Duals Reveal Synergy and Logical Dependence beyond Shannon-Information


## Abstract


## 1. Introduction

#### 1.1. Higher-Order Interactions

#### 1.2. Model-Free Interactions and the Inverse Ising Problem

#### 1.3. Outline

## 2. Background

#### 2.1. Model-Free Interactions

**Definition 1** (n-point interaction with respect to outcome Y)**.**

**Definition 2** (model-free n-point interaction between binary variables)**.**

**Definition 3** (derivative of a function with respect to a binary variable)**.**

- It is symmetric in its variables, since $I_S = I_{\pi(S)}$ for any set of variables $S$ and any permutation $\pi$.
- Conditionally independent variables do not interact: $X_i \perp\!\!\!\perp X_j \mid \underline{X} \Rightarrow I_{ij} = 0$.
- The interactions are model-free: no knowledge of the functional form of $p(X)$ is required, and the probabilities can be estimated directly from i.i.d. samples.
- The MFIs are exactly the Ising interactions in the maximum entropy model constrained by the observed moments of the data. This can be readily verified by setting $p(s) = \mathcal{Z}^{-1} \exp\big( \sum_n \sum_{i_1, \dots, i_n} J_{i_1 \dots i_n} s_{i_1} \cdots s_{i_n} \big)$.

- An n-point interaction can only be non-zero if all n variables are in each other’s minimal Markov blanket.
- If $\underline{X}$ does not include the full complement of the interacting variables, the bias this induces in the estimate of the interaction is proportional to the pointwise mutual information of states where the omitted variables are 0.
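Since the probabilities can be estimated directly from i.i.d. samples, a pairwise MFI can be computed as an empirical log odds ratio. A minimal sketch (my own illustration, not code from the paper; the function name is hypothetical), which should return a value near zero for independent variables:

```python
import numpy as np

def mfi_from_samples(x, y):
    # Empirical pairwise MFI: the log odds ratio of the joint counts.
    # Counts are cast to float to avoid integer overflow in the products.
    counts = {(a, b): float(np.sum((x == a) & (y == b)))
              for a in (0, 1) for b in (0, 1)}
    return np.log(counts[1, 1] * counts[0, 0] / (counts[1, 0] * counts[0, 1]))

rng = np.random.default_rng(0)
n = 200_000
x = rng.integers(0, 2, n)
y = rng.integers(0, 2, n)  # independent of x, so the MFI should be near 0
```

For higher-order interactions the same estimator applies with the remaining variables conditioned on 0, in line with Definition 2.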

#### 2.2. Mutual Information as a Möbius Inversion

**Definition 4.** Let $P$ be a poset $(S, \le)$, let $\mu : P \times P \to \mathbb{R}$ be the Möbius function from Equation (14), and let $g : P \to \mathbb{R}$ be a function on $P$. Then, the function
$$\hat{g}(x) = \sum_{y \le x} \mu(y, x)\, g(y)$$
is called the Möbius inversion of $g$.
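On the lattice of subsets ordered by inclusion, the Möbius function is $\mu(T, S) = (-1)^{|S| - |T|}$, so the inversion reduces to an inclusion–exclusion sum. A minimal sketch (my own illustration; names are hypothetical):

```python
from itertools import chain, combinations

def subsets(s):
    # all subsets of the iterable s, as tuples
    s = tuple(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def mobius_invert(g, s):
    # On the Boolean lattice, mu(T, S) = (-1)^{|S| - |T|}, so the Mobius
    # inversion of g at S is an inclusion-exclusion sum over all T <= S.
    s = frozenset(s)
    return sum((-1) ** (len(s) - len(t)) * g(frozenset(t)) for t in subsets(s))

# sanity check: g(S) = 2^{|S|} is the sum of the constant function 1 over
# all T <= S, so its Mobius inversion recovers that constant function
g = lambda t: 2 ** len(t)
```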

**Summary**

- Mutual information is the Möbius inversion of marginal entropy.
- Pointwise mutual information is the Möbius inversion of marginal surprisal.
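A numerical check of these statements (my own sketch; base-2 entropies assumed): for the noiseless XOR distribution, the pairwise Möbius inversion of marginal entropy gives $I(X;Y) = H(X) + H(Y) - H(X,Y) = 0$, while the full three-variable inversion gives the co-information $-1$.

```python
import math
from itertools import product

# joint distribution of (X, Y, Z) with Z = X XOR Y, X and Y fair coins
p = {(x, y, x ^ y): 0.25 for x, y in product([0, 1], repeat=2)}

def H(idx):
    # marginal entropy (in bits) of the variables at positions idx
    marg = {}
    for state, q in p.items():
        key = tuple(state[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + q
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

# pairwise MI as the Mobius inversion of marginal entropy
mi_xy = H((0,)) + H((1,)) - H((0, 1))        # = 0: the inputs are independent
# co-information: inclusion-exclusion over all non-empty subsets
co = (H((0,)) + H((1,)) + H((2,))
      - H((0, 1)) - H((0, 2)) - H((1, 2))
      + H((0, 1, 2)))                         # = -1: purely synergistic
```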

## 3. Interactions and Their Duals

#### 3.1. MFIs as Möbius Inversions

**Definition 5** (interactions as Möbius inversions)**.**

**Theorem 1** (equivalence of interactions)**.**

**Proof.**

#### 3.2. Categorical Interactions

#### 3.3. Information and Interactions on Dual Lattices

**Summary**

- Mutual information is the Möbius inversion of marginal entropy on the lattice of subsets ordered by inclusion.
- Differential (or conditional) mutual information is the Möbius inversion of marginal entropy on the dual lattice.
- Model-free interactions are the Möbius inversion of surprisal on the lattice of subsets ordered by inclusion.
- Model-free dual interactions are the Möbius inversion of surprisal on the dual lattice.
- Dual interactions of a variable X are interactions between the other variables where X is set to 1 instead of 0.
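The last point can be made concrete with a sketch (my own illustration, not the paper's code): for a noisy XNOR gate, the ordinary pairwise interaction between the inputs clamps the output at 0, the dual clamps it at 1, and the two differ in sign.

```python
import math
from itertools import product

def pairwise_mfi(p, background):
    # I_AB with the third variable C clamped to `background`:
    # background = 0 gives the ordinary interaction, background = 1 the dual
    z = background
    return math.log(p[1, 1, z] * p[0, 0, z] / (p[1, 0, z] * p[0, 1, z]))

# noisy XNOR: the four consistent rows share probability mass p_true each,
# the four inconsistent rows get probability eps each
eps = 0.01
p_true = (1 - 4 * eps) / 4
p = {(a, b, c): (p_true if c == 1 - (a ^ b) else eps)
     for a, b, c in product([0, 1], repeat=3)}

ordinary = pairwise_mfi(p, 0)  # = -2 log(p_true / eps)
dual = pairwise_mfi(p, 1)      # = +2 log(p_true / eps)
```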

## 4. Results and Examples

#### 4.1. Interactions and Their Duals Quantify and Distinguish Synergy in Logic Gates

A | B | C
---|---|---
0 | 0 | 1
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

A | B | C | D
---|---|---|---
0 | 0 | 0 | 0
0 | 0 | 1 | 1
0 | 1 | 0 | 1
1 | 0 | 0 | 1
0 | 1 | 1 | 0
1 | 0 | 1 | 0
1 | 1 | 0 | 0
1 | 1 | 1 | 1
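The synergy in these gates appears as a 3-point interaction. As a check of the relation $I = 4\log\frac{p}{\epsilon}$ reported in Table 1 (my own sketch, not the paper's code): for a noisy XNOR gate in which each of the four consistent rows has probability $p$ and each inconsistent row has probability $\epsilon$:

```python
import math
from itertools import product

def three_point_mfi(p):
    # I_ABC as an alternating log-odds ratio over the 8 corners of the cube
    num = p[1, 1, 1] * p[1, 0, 0] * p[0, 1, 0] * p[0, 0, 1]
    den = p[1, 1, 0] * p[1, 0, 1] * p[0, 1, 1] * p[0, 0, 0]
    return math.log(num / den)

eps = 0.01
p_true = (1 - 4 * eps) / 4  # the four XNOR-consistent rows share the rest
p = {(a, b, c): (p_true if c == 1 - (a ^ b) else eps)
     for a, b, c in product([0, 1], repeat=3)}

# the four numerator states are exactly the XNOR-consistent rows,
# so I_ABC = 4 log(p_true / eps), as in Table 1
```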

#### 4.2. Interactions Distinguish Dynamics and Causal Structures

#### 4.3. Higher-Order Categorical Interactions Distinguish Dyadic and Triadic Distributions

**Figure 4.**Different causal dynamics lead to different association metrics. Green edges denote positive values, red edges denote negative values, circles denote a three-point quantity, and dashed lines show edges with marginal significance (depending on ${\sigma}^{2}$). Correlations and mutual information cannot distinguish between most dynamics, and while partial correlation can identify the correct pairwise relationships for certain noise levels, it falls short of distinguishing additive from multiplicative dynamics. Only MFIs can distinguish between all six scenarios and reveal the combinatorial effect of the multiplicative dynamics as a 3-point interaction. See Appendix A.3 for the simulation parameters and raw numbers. This figure is reproduced with permission from the author of [48].

## 5. Discussion

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Abbreviations

Abbreviation | Meaning
---|---
MI | Mutual Information
MFI | Model-Free Interaction
DAG | Directed Acyclic Graph
MB | Markov Blanket
PID | Partial Information Decomposition
i.i.d. | independent and identically distributed

## Appendix A

#### Appendix A.1. Markov Blankets

#### Appendix A.2. Proofs

**Proposition A1** (symmetry of Markov blankets)**.**

**Proof.**

**Theorem A1** (only Markov-connected variables can interact)**.**

**Proof.**

**Proposition A2** (underconditioning bias)**.**

**Proof.**

#### Appendix A.3. Numerics of Causal Structures

 | Genes | Interaction | p | Pearson cor. | Pearson cor. p | Partial cor. | Partial cor. p | MI
---|---|---|---|---|---|---|---|---
0 | [0, 1] | 4.281 | 0.000 | 0.790 | 0.0 | 0.635 | 0.000 | 0.515
1 | [0, 2] | 0.056 | 0.117 | 0.622 | 0.0 | 0.031 | $2.261 \times 10^{-23}$ | 0.301
2 | [1, 2] | 4.249 | 0.000 | 0.786 | 0.0 | 0.628 | 0.000 | 0.510
3 | [0, 1, 2] | −0.052 | 0.217 | NaN | NaN | NaN | NaN | 0.300

 | Genes | Interaction | p | Pearson cor. | Pearson cor. p | Partial cor. | Partial cor. p | MI
---|---|---|---|---|---|---|---|---
0 | [0, 1] | 4.268 | 0.000 | 0.789 | 0.0 | 0.634 | 0.000 | 0.514
1 | [0, 2] | 4.257 | 0.000 | 0.788 | 0.0 | 0.632 | 0.000 | 0.512
2 | [1, 2] | −0.014 | 0.376 | 0.622 | 0.0 | 0.028 | $6.518 \times 10^{-19}$ | 0.300
3 | [0, 1, 2] | 0.020 | 0.376 | NaN | NaN | NaN | NaN | 0.300

 | Genes | Interaction | p | Pearson cor. | Pearson cor. p | Partial cor. | Partial cor. p | MI
---|---|---|---|---|---|---|---|---
0 | [0, 1] | 2.144 | 0.000 | 0.395 | 0.000 | 0.505 | 0.000 | $1.154 \times 10^{-1}$
1 | [0, 2] | −0.989 | 0.000 | −0.002 | 0.593 | −0.070 | $5.172 \times 10^{-109}$ | $2.059 \times 10^{-6}$
2 | [1, 2] | 2.144 | 0.000 | 0.395 | 0.000 | 0.505 | 0.000 | $1.154 \times 10^{-1}$
3 | [0, 1, 2] | 0.003 | 0.438 | NaN | NaN | NaN | NaN | $-2.678 \times 10^{-2}$

 | Genes | Interaction | p | Pearson cor. | Pearson cor. p | Partial cor. | Partial cor. p | MI
---|---|---|---|---|---|---|---|---
0 | [0, 1] | 0.032 | 0.140 | 0.427 | 0.000 | 0.478 | 0.000 | $1.403 \times 10^{-1}$
1 | [0, 2] | −2.156 | 0.000 | −0.005 | 0.145 | −0.087 | $1.463 \times 10^{-166}$ | $1.529 \times 10^{-5}$
2 | [1, 2] | 0.036 | 0.109 | 0.429 | 0.000 | 0.480 | 0.000 | $1.415 \times 10^{-1}$
3 | [0, 1, 2] | 4.237 | 0.000 | NaN | NaN | NaN | NaN | $-1.150 \times 10^{-1}$

 | Genes | Interaction | p | Pearson cor. | Pearson cor. p | Partial cor. | Partial cor. p | MI
---|---|---|---|---|---|---|---|---
0 | [0, 1] | 2.103 | 0.000 | 0.705 | 0.0 | 0.362 | 0.0 | 0.396
1 | [0, 2] | 3.288 | 0.000 | 0.790 | 0.0 | 0.599 | 0.0 | 0.515
2 | [1, 2] | 2.113 | 0.000 | 0.706 | 0.0 | 0.364 | 0.0 | 0.397
3 | [0, 1, 2] | 0.050 | 0.162 | NaN | NaN | NaN | NaN | 0.335

 | Genes | Interaction | p | Pearson cor. | Pearson cor. p | Partial cor. | Partial cor. p | MI
---|---|---|---|---|---|---|---|---
0 | [0, 1] | −0.017 | 0.342 | 0.709 | 0.0 | 0.365 | 0.0 | 0.403
1 | [0, 2] | 2.094 | 0.000 | 0.786 | 0.0 | 0.596 | 0.0 | 0.510
2 | [1, 2] | −0.057 | 0.092 | 0.707 | 0.0 | 0.361 | 0.0 | 0.401
3 | [0, 1, 2] | 4.359 | 0.000 | NaN | NaN | NaN | NaN | 0.293

#### Appendix A.4. Python Code for Calculating Categorical Dyadic and Triadic Interactions

```python
# Symbolic computation of the categorical dyadic and triadic interactions:
# each factor is 'p' if the state lies in the support of the distribution
# (probability 1/8) and 'e' (for epsilon) otherwise.
dyadicStates = [['a', 'a', 'a'], ['a', 'c', 'b'], ['b', 'a', 'c'], ['b', 'c', 'd'],
                ['c', 'b', 'a'], ['c', 'd', 'b'], ['d', 'b', 'c'], ['d', 'd', 'd']]
triadicStates = [['a', 'a', 'a'], ['a', 'c', 'c'], ['b', 'b', 'b'], ['b', 'd', 'd'],
                 ['c', 'a', 'c'], ['c', 'c', 'a'], ['d', 'b', 'd'], ['d', 'd', 'b']]
stateDict = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}

def catIntSymb(x0, x1, y0, y1, z0, z1, states):
    # symbolic numerator and denominator of the 3-point interaction between
    # the level pairs (x0, x1), (y0, y1), (z0, z1)
    prob = lambda x, y, z: 'p' if [x, y, z] in states else 'e'
    num = prob(x1, y1, z1) + prob(x1, y0, z0) + prob(x0, y1, z0) + prob(x0, y0, z1)
    denom = prob(x1, y1, z0) + prob(x1, y0, z1) + prob(x0, y1, z1) + prob(x0, y0, z0)
    return (num, denom)

numDy, denomDy = '', ''
numTri, denomTri = '', ''
# accumulate the symbolic factors over all ordered pairs of levels per variable
for x0 in range(4):
    for x1 in range(x0 + 1, 4):
        for y0 in range(4):
            for y1 in range(y0 + 1, 4):
                for z0 in range(4):
                    for z1 in range(z0 + 1, 4):
                        levels = [stateDict[i] for i in [x0, x1, y0, y1, z0, z1]]
                        nDy, dDy = catIntSymb(*levels, dyadicStates)
                        numDy += nDy
                        denomDy += dDy
                        nTri, dTri = catIntSymb(*levels, triadicStates)
                        numTri += nTri
                        denomTri += dTri
```

## References

1. Ghazanfar, S.; Lin, Y.; Su, X.; Lin, D.M.; Patrick, E.; Han, Z.G.; Marioni, J.C.; Yang, J.Y.H. Investigating higher-order interactions in single-cell data with scHOT. Nat. Methods 2020, 17, 799–806.
2. Lezon, T.R.; Banavar, J.R.; Cieplak, M.; Maritan, A.; Fedoroff, N.V. Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns. Proc. Natl. Acad. Sci. USA 2006, 103, 19033–19038.
3. Watkinson, J.; Liang, K.C.; Wang, X.; Zheng, T.; Anastassiou, D. Inference of regulatory gene interactions from expression data using three-way mutual information. Ann. N. Y. Acad. Sci. 2009, 1158, 302–313.
4. Kuzmin, E.; VanderSluis, B.; Wang, W.; Tan, G.; Deshpande, R.; Chen, Y.; Usaj, M.; Balint, A.; Usaj, M.M.; Van Leeuwen, J.; et al. Systematic analysis of complex genetic interactions. Science 2018, 360, eaao1729.
5. Weinreich, D.M.; Lan, Y.; Wylie, C.S.; Heckendorn, R.B. Should evolutionary geneticists worry about higher-order epistasis? Curr. Opin. Genet. Dev. 2013, 23, 700–707.
6. Panas, D.; Maccione, A.; Berdondini, L.; Hennig, M.H. Homeostasis in large networks of neurons through the Ising model—Do higher order interactions matter? BMC Neurosci. 2013, 14, P166.
7. Tkačik, G.; Marre, O.; Amodei, D.; Schneidman, E.; Bialek, W.; Berry, M.J. Searching for Collective Behavior in a Large Network of Sensory Neurons. PLoS Comput. Biol. 2014, 10, e1003408.
8. Ganmor, E.; Segev, R.; Schneidman, E. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proc. Natl. Acad. Sci. USA 2011, 108, 9679–9684.
9. Yu, S.; Yang, H.; Nakahara, H.; Santos, G.S.; Nikolić, D.; Plenz, D. Higher-order interactions characterized in cortical activity. J. Neurosci. 2011, 31, 17514–17526.
10. Gatica, M.; Cofré, R.; Mediano, P.A.; Rosas, F.E.; Orio, P.; Diez, I.; Swinnen, S.P.; Cortes, J.M. High-order interdependencies in the aging brain. Brain Connect. 2021, 11, 734–744.
11. Sanchez, A. Defining Higher-Order Interactions in Synthetic Ecology: Lessons from Physics and Quantitative Genetics. Cell Syst. 2019, 9, 519–520.
12. Grilli, J.; Barabás, G.; Michalska-Smith, M.J.; Allesina, S. Higher-order interactions stabilize dynamics in competitive network models. Nature 2017, 548, 210–213.
13. Li, Y.; Mayfield, M.M.; Wang, B.; Xiao, J.; Kral, K.; Janik, D.; Holik, J.; Chu, C. Beyond direct neighbourhood effects: Higher-order interactions improve modelling and predicting tree survival and growth. Natl. Sci. Rev. 2021, 8, nwaa244.
14. Tekin, E.; White, C.; Kang, T.M.; Singh, N.; Cruz-Loya, M.; Damoiseaux, R.; Savage, V.M.; Yeh, P.J. Prevalence and patterns of higher-order drug interactions in Escherichia coli. NPJ Syst. Biol. Appl. 2018, 4, 1–10.
15. Alvarez-Rodriguez, U.; Battiston, F.; de Arruda, G.F.; Moreno, Y.; Perc, M.; Latora, V. Evolutionary dynamics of higher-order interactions in social networks. Nat. Hum. Behav. 2021, 5, 586–595.
16. Cencetti, G.; Battiston, F.; Lepri, B.; Karsai, M. Temporal properties of higher-order interactions in social networks. Sci. Rep. 2021, 11, 1–10.
17. Grabisch, M.; Roubens, M. An axiomatic approach to the concept of interaction among players in cooperative games. Int. J. Game Theory 1999, 28, 547–565.
18. Matsuda, H. Physical nature of higher-order mutual information: Intrinsic correlations and frustration. Phys. Rev. E 2000, 62, 3096.
19. Cerf, N.J.; Adami, C. Entropic Bell inequalities. Phys. Rev. A 1997, 55, 3371.
20. Battiston, F.; Amico, E.; Barrat, A.; Bianconi, G.; Ferraz de Arruda, G.; Franceschiello, B.; Iacopini, I.; Kéfi, S.; Latora, V.; Moreno, Y.; et al. The physics of higher-order interactions in complex systems. Nat. Phys. 2021, 17, 1093–1098.
21. Skardal, P.S.; Arenas, A. Higher order interactions in complex networks of phase oscillators promote abrupt synchronization switching. Commun. Phys. 2020, 3, 1–6.
22. Merchan, L.; Nemenman, I. On the Sufficiency of Pairwise Interactions in Maximum Entropy Models of Networks. J. Stat. Phys. 2016, 162, 1294–1308.
23. Tkacik, G.; Schneidman, E.; Berry, M.J., II; Bialek, W. Ising models for networks of real neurons. arXiv 2006, arXiv:q-bio/0611072.
24. Margolin, A.A.; Nemenman, I.; Basso, K.; Wiggins, C.; Stolovitzky, G.; Favera, R.D.; Califano, A. ARACNE: An algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinform. 2006, 7, S7.
25. Nemenman, I. Information theory, multivariate dependence, and genetic network inference. arXiv 2004, arXiv:q-bio/0406015.
26. Watanabe, S. Information theoretical analysis of multivariate correlation. IBM J. Res. Dev. 1960, 4, 66–82.
27. Rosas, F.E.; Mediano, P.A.; Luppi, A.I.; Varley, T.F.; Lizier, J.T.; Stramaglia, S.; Jensen, H.J.; Marinazzo, D. Disentangling high-order mechanisms and high-order behaviours in complex systems. Nat. Phys. 2022, 18, 476–477.
28. Williams, P.L.; Beer, R.D. Nonnegative decomposition of multivariate information. arXiv 2010, arXiv:1004.2515.
29. Wibral, M.; Priesemann, V.; Kay, J.W.; Lizier, J.T.; Phillips, W.A. Partial information decomposition as a unified approach to the specification of neural goal functions. Brain Cogn. 2017, 112, 25–38.
30. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620.
31. Nguyen, H.C.; Zecchina, R.; Berg, J. Inverse statistical problems: From the inverse Ising problem to data science. Adv. Phys. 2017, 66, 197–261.
32. Beentjes, S.V.; Khamseh, A. Higher-order interactions in statistical physics and machine learning: A model-independent solution to the inverse problem at equilibrium. Phys. Rev. E 2020, 102, 053314.
33. Glonek, G.F.; McCullagh, P. Multivariate logistic models. J. R. Stat. Soc. Ser. B (Methodol.) 1995, 57, 533–546.
34. Bartolucci, F.; Colombi, R.; Forcina, A. An extended class of marginal link functions for modelling contingency tables by equality and inequality constraints. Stat. Sin. 2007, 17, 691–711.
35. Bateson, G. Steps to an Ecology of Mind; Chandler Publishing Company: San Francisco, CA, USA, 1972.
36. Stanley, R.P. Enumerative Combinatorics, 2nd ed.; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 2011; Volume 1.
37. Rota, G.C. On the foundations of combinatorial theory I. Theory of Möbius functions. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1964, 2, 340–368.
38. Bell, A.J. The co-information lattice. In Proceedings of the Fifth International Workshop on Independent Component Analysis and Blind Signal Separation (ICA 2003), Nara, Japan, 1–4 April 2003.
39. Galas, D.J.; Sakhanenko, N.A. Symmetries among multivariate information measures explored using Möbius operators. Entropy 2019, 21, 88.
40. Galas, D.J.; Sakhanenko, N.A.; Skupin, A.; Ignac, T. Describing the complexity of systems: Multivariable "set complexity" and the information basis of systems biology. J. Comput. Biol. 2014, 21, 118–140.
41. Galas, D.J.; Kunert-Graf, J.; Uechi, L.; Sakhanenko, N.A. Towards an information theory of quantitative genetics. J. Comput. Biol. 2019, 28, 527–559.
42. Freund, Y.; Haussler, D. Unsupervised learning of distributions on binary vectors using two layer networks. Adv. Neural Inf. Process. Syst. 1991, 4, 912–919.
43. Le Roux, N.; Bengio, Y. Representational power of restricted Boltzmann machines and deep belief networks. Neural Comput. 2008, 20, 1631–1649.
44. Montufar, G.; Ay, N. Refinements of universal approximation results for deep belief networks and restricted Boltzmann machines. Neural Comput. 2011, 23, 1306–1319.
45. Cossu, G.; Del Debbio, L.; Giani, T.; Khamseh, A.; Wilson, M. Machine learning determination of dynamical parameters: The Ising model case. Phys. Rev. B 2019, 100, 064304.
46. Krumsiek, J.; Suhre, K.; Illig, T.; Adamski, J.; Theis, F.J. Gaussian graphical modeling reconstructs pathway reactions from high-throughput metabolomics data. BMC Syst. Biol. 2011, 5, 21.
47. James, R.G.; Crutchfield, J.P. Multivariate dependence beyond Shannon information. Entropy 2017, 19, 531.
48. Jansma, A. Higher-Order Interactions in Single-Cell Gene Expression. Ph.D. Thesis, University of Edinburgh, Edinburgh, UK, 2023.
49. Pearl, J. Causality: Models, Reasoning and Inference; Cambridge University Press: Cambridge, UK, 2000.
50. Imbens, G.W.; Rubin, D.B. Causal Inference in Statistics, Social, and Biomedical Sciences; Cambridge University Press: Cambridge, UK, 2015.
51. Leinster, T. Notions of Möbius inversion. Bull. Belg. Math. Soc. Simon Stevin 2012, 19, 909–933.
52. Bruineberg, J.; Dołęga, K.; Dewhurst, J.; Baltieri, M. The emperor's new Markov blankets. Behav. Brain Sci. 2022, 45, e183.

**Figure 1.** The lattices associated with $\mathcal{P}(\{X, Y\})$ (**left**) and $\mathcal{P}(\{X, Y, Z\})$ (**right**), ordered by inclusion. An arrow $b \to a$ indicates $a < b$.

**Figure 2.** (**Left**) The lattice associated with $\mathcal{P}(\{X, Y, Z\})$ ordered by inclusion, written as binary strings. Equivalently, the lattice of binary strings where for any two strings $a$ and $b$, $a \le b \iff a \wedge b = a$. (**Right**) The two shaded regions correspond to the decomposition of the 3-point interaction into two 2-point interactions.

**Figure 3.** The lattice of two variables that can take three values, ordered by $a \le b \iff \forall i : a_i \le b_i$.

**Table 1.** The 3-point interactions for all two-input logic gates at equal noise level are related through $I = 4\log\frac{p}{\epsilon}$ and are degenerate in AND ∼ NOR and OR ∼ NAND.

$\mathcal{G}$ | $I_{ABC}^{\mathcal{G}}$
---|---
XNOR | $I$
XOR | $-I$
AND | $\frac{1}{2}I$
OR | $-\frac{1}{2}I$
NAND | $-\frac{1}{2}I$
NOR | $\frac{1}{2}I$

**Table 2.**The marginal entropies of variables in a logic gate are degenerate in XOR∼XNOR and AND∼OR∼NAND∼NOR.

$\mathcal{G}$ | $H(A) = H(B)$ | $H(C)$ | $H(A,B)$ | $H(A,C) = H(B,C)$ | $H(A,B,C)$
---|---|---|---|---|---
XNOR | 1 | 1 | 2 | 2 | 2
XOR | 1 | 1 | 2 | 2 | 2
AND | 1 | $\log\frac{4}{3^{3/4}}$ | 2 | $\frac{3}{2}$ | 2
OR | 1 | $\log\frac{4}{3^{3/4}}$ | 2 | $\frac{3}{2}$ | 2
NAND | 1 | $\log\frac{4}{3^{3/4}}$ | 2 | $\frac{3}{2}$ | 2
NOR | 1 | $\log\frac{4}{3^{3/4}}$ | 2 | $\frac{3}{2}$ | 2

**Table 3.** While the interactions leave certain gates indistinguishable, the dual J-interactions of the inputs are unique to each gate. The reported decimal values are rounded to three digits; as before, $I = 4\log\frac{p}{\epsilon}$.

$\mathcal{G}$ | $MI_{ABC}$ | $MI_{BC}$ | $MI_A^*$ | $I_{ABC}^{\mathcal{G}}$ | $I_{AB}^{\mathcal{G}}$ | $I_{BC}^{\mathcal{G}}$ | $I_A^{*\mathcal{G}}$ | $I_C^{*\mathcal{G}}$ | $J_A^{*\mathcal{G}}$ | $J_C^{*\mathcal{G}}$ | $\overline{J}^{*\mathcal{G}}$
---|---|---|---|---|---|---|---|---|---|---|---
XNOR | $-1$ | 0 | $-1$ | $I$ | $-\frac{1}{2}I$ | $-\frac{1}{2}I$ | $\frac{1}{2}I$ | $\frac{1}{2}I$ | $\frac{3}{2}I$ | $\frac{3}{2}I$ | $\frac{27}{8}I^3$
XOR | $-1$ | 0 | $-1$ | $-I$ | $\frac{1}{2}I$ | $\frac{1}{2}I$ | $-\frac{1}{2}I$ | $-\frac{1}{2}I$ | $-\frac{3}{2}I$ | $-\frac{3}{2}I$ | $-\frac{27}{8}I^3$
AND | $-0.189$ | 0.311 | $-\frac{1}{2}$ | $\frac{1}{2}I$ | $-\frac{1}{4}I$ | 0 | $\frac{1}{2}I$ | $\frac{1}{4}I$ | $\frac{1}{2}I$ | $\frac{3}{4}I$ | $\frac{3}{16}I^3$
OR | $-0.189$ | 0.311 | $-\frac{1}{2}$ | $-\frac{1}{2}I$ | $\frac{1}{4}I$ | $\frac{1}{2}I$ | 0 | $-\frac{1}{4}I$ | $-I$ | $-\frac{3}{4}I$ | $-\frac{3}{4}I^3$
NAND | $-0.189$ | 0.311 | $-\frac{1}{2}$ | $-\frac{1}{2}I$ | $\frac{1}{4}I$ | 0 | $-\frac{1}{2}I$ | $-\frac{1}{4}I$ | $-\frac{1}{2}I$ | $-\frac{3}{4}I$ | $-\frac{3}{16}I^3$
NOR | $-0.189$ | 0.311 | $-\frac{1}{2}$ | $\frac{1}{2}I$ | $-\frac{1}{4}I$ | $-\frac{1}{2}I$ | 0 | $\frac{1}{4}I$ | $I$ | $\frac{3}{4}I$ | $\frac{3}{4}I^3$

**Table 4.**The joint probability of the dyadic and triadic distributions [47]. All other states have a probability of zero.

**Dyadic**

X | Y | Z | P
---|---|---|---
0 | 0 | 0 | 1/8
0 | 2 | 1 | 1/8
1 | 0 | 2 | 1/8
1 | 2 | 3 | 1/8
2 | 1 | 0 | 1/8
2 | 3 | 1 | 1/8
3 | 1 | 2 | 1/8
3 | 3 | 3 | 1/8

**Triadic**

X | Y | Z | P
---|---|---|---
0 | 0 | 0 | 1/8
1 | 1 | 1 | 1/8
0 | 2 | 2 | 1/8
1 | 3 | 3 | 1/8
2 | 0 | 2 | 1/8
3 | 1 | 3 | 1/8
2 | 2 | 0 | 1/8
3 | 3 | 1 | 1/8


© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Jansma, A.
Higher-Order Interactions and Their Duals Reveal Synergy and Logical Dependence beyond Shannon-Information. *Entropy* **2023**, *25*, 648.
https://doi.org/10.3390/e25040648
