# Computing Integrated Information (Φ) in Discrete Dynamical Systems with Multi-Valued Elements

## Abstract


## 1. Introduction

## 2. Theory and PyPhi Implementation

- **Existence:** the system must have cause-effect power; it must be able to take and make a difference.
- **Intrinsicality:** the system must have cause-effect power upon itself.
- **Composition:** the system must be composed of parts that have cause-effect power within the whole.
- **Information:** the system’s cause-effect power must be specific.
- **Integration:** the system’s cause-effect power must not be reducible to that of its parts.
- **Exclusion:** the system must specify a maximum of intrinsic cause-effect power.

**Input:** Let us begin with the most fundamental item, the Transitional Probability Matrix (TPM). The TPM is a matrix (either deterministic or probabilistic) that specifies the probability with which any state of a system transitions to any other system state, as described in the Methods, Section 5.1. The TPM is determined by the update functions of the system elements and obtained by perturbing the system into all its possible states. It is a matrix of size $S\times S$, where S is the total number of possible system states. Moreover, $S={S}_{1}{S}_{2}\cdots {S}_{n}$, where ${S}_{i}$ is the number of states of element i.

In addition, the number of states per element must be specified (`num_states_per_node`, in our implementation), as systems with different numbers of elements may still have the same total number of states S (see below). Moreover, an adjacency matrix may be specified that, as the name indicates, describes which elements are connected to which within the network (i.e., a binary matrix). If provided, the adjacency matrix may speed up PyPhi computations. However, because it can also be inferred from the TPM (and the `num_states_per_node` input), it is not essential to the computation.
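As an aside, the bookkeeping implied by $S={S}_{1}{S}_{2}\cdots {S}_{n}$ amounts to a mixed-radix encoding of element states. The following is a minimal sketch of that encoding (the function names are ours, and the little-endian ordering, in which the first element varies fastest, is an assumption; PyPhi's actual indexing conventions may differ):

```python
from itertools import product
from math import prod

def total_states(num_states_per_node):
    """Total number of system states: S = S1 * S2 * ... * Sn."""
    return prod(num_states_per_node)

def state_to_index(state, num_states_per_node):
    """Mixed-radix index of a state tuple into the S x S TPM
    (little-endian: the first element is the fastest-varying digit)."""
    index, place = 0, 1
    for s, n in zip(state, num_states_per_node):
        index += s * place
        place *= n
    return index

def all_states(num_states_per_node):
    """All system states, ordered so that all_states(...)[i] has index i."""
    ranges = [range(n) for n in reversed(num_states_per_node)]
    return [tuple(reversed(s)) for s in product(*ranges)]
```

For example, a network of class **2235** has $2\cdot 2\cdot 3\cdot 5=60$ states, matching Table 1.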

**Output:** To obtain the $\mathsf{\Phi}$ value of a system, as well as its cause-effect structure (CES), which includes the causal distinctions and $\phi $ values of the system’s mechanisms, a “system irreducibility analysis” (SIA) is performed. While the full IIT analysis includes a search for the subsystem that specifies a maximum of $\mathsf{\Phi}$, our goal here is to compare systems with different types of multi-valued elements. For this reason, we are mainly interested in the properties of the system as a whole. In PyPhi, we thus select the complete set of nodes as the subsystem to be evaluated.

## 3. Results

#### 3.1. Comparison of Random Systems with Varying Numbers of Elements and States

Classes **44(2222)** and **88(444)** required no additional matrices, as they share their TPMs with another class, denoted in parentheses. Nonetheless, we also included comparison sets with different TPMs for those types of networks (classes **444** and **88**). We found no significant differences between **44** and **44(2222)**, or between **88** and **88(444)** (a two-sample Kolmogorov–Smirnov test was performed in order to confirm that we cannot reject the hypothesis that the distributions of the two sample pairs are the same; **44***: $p=0.65/0.15$ and **88***: $p=0.91/0.94$ for $\mathsf{\Phi}$/$\langle \phi \rangle $, respectively). Thus, these classes were pooled together in Figure 1 and Table 2.

Across classes, the average $\mathsf{\Phi}$ increased with the number of system elements (compare **222** to **2222**, **33** to **333**, and **44** to **444**). In contrast, the average $\langle \phi \rangle $ is lower for the systems with more elements, possibly due to the larger number of available higher-order mechanisms. Even within the same network, higher-order mechanisms typically have lower $\phi $ values than first-order mechanisms, because they only specify information to the extent that they are irreducible to their parts (see [14,16]). Indeed, the average number of causal distinctions is already high for class **222** and close to maximal for all other classes in our data set of deterministic random networks (Table 2). When comparing classes with the same number of system elements ($\{\mathbf{222},\mathbf{333},\mathbf{444}\}$, $\{\mathbf{33},\mathbf{44},\mathbf{88}\}$, and $\{\mathbf{2222},\mathbf{2235}\}$), a higher number of states per element resulted in a higher average $\mathsf{\Phi}$ and also a higher average $\langle \phi \rangle $.

The total number of system states alone, however, does not determine $\mathsf{\Phi}$ (compare classes **2222** and **333**, as well as **2235** and **444**). Nevertheless, the total number of system states determines the upper bound of the system’s capacity for information. The TPM of a system, in particular, determines its effective information [10,34], which corresponds to the mutual information across a system update from time t to $t+1$ while assuming a uniform distribution of system states at t. A system’s effective information is correlated with its intrinsic information ($\sum \phi $) for binary random networks, as shown in [16].

As shown in Figure 2, the $\mathsf{\Phi}$ values of the class pairs {**2222**, **44(2222)**} and {**444**, **88(444)**} are significantly correlated, while {**2222**, **44**} and {**444**, **88**} are completely unrelated, as expected, since they are independent data samples. Networks with the same underlying TPM necessarily specify the same effective information and the same global dynamics [16]. However, their causal composition and integration depend on the number of interacting elements and their respective number of states. For this reason, the pairs of networks with shared TPMs can be said to have disparate composition and integration (different numbers of elements and different connectivity), but analogous information (TPMs). Thus, the fact that networks of class **444** only have one more element than networks of class **88(444)** may explain why their correlation is stronger than that of {**2222**, **44(2222)**}, which differ by two elements.

#### 3.2. Model of Biological Example Systems with Non-Binary Elements

The p53–Mdm2 network consists of the three elements **P**, **Mn**, and **Mc**, which stand for proteins p53, nuclear Mdm2, and cytoplasmic Mdm2, respectively [35]. **P** takes three values $\{0,1,2\}$, while **Mn** and **Mc** are binary variables. In brief, **Mn** down-regulates the level of active **P**, which, in turn, up-regulates the level of **Mc** and also inhibits **Mn**. **P** is modeled as ternary, as it may act on **Mn** and **Mc** above different threshold levels [5,35]. Figure 3A depicts the p53–Mdm2 network as discussed in [5].

Under the Fauré–Kaji binarization, it is possible to coarse-grain the binary elements **P1** and **P2** into the macro element **P** with the following state mapping: $\{P1,P2\}=(0,0)\to P=0$; $\{P1,P2\}\in \{(1,0),(0,1)\}\to P=1$; and $\{P1,P2\}=(1,1)\to P=2$. This is not possible with the Tonello method, as **P1** and **P2** have different causal roles within the network.
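The coarse-graining above is small enough to write out explicitly. A toy sketch (our own illustration, not code from the paper):

```python
# State mapping for coarse-graining the binary micro elements P1, P2
# into the ternary macro element P (illustrative sketch).
MACRO_MAP = {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 2}

def macro_state(p1, p2):
    """Macro state of P; equivalent to counting the active micro elements."""
    return MACRO_MAP[(p1, p2)]
```

Since both micro states with one active element map onto P = 1, the mapping is simply P = P1 + P2, which is why it requires **P1** and **P2** to be causally interchangeable.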

We also compared random networks with multi-valued elements against their binarized counterparts (classes **32**, **43**, and **332**; labels indicate the number of elements and states per element, as described in Table 1). In each case, we evaluated and compared the first state in the system’s TPM. Correlations were consistently stronger between the original system and its Fauré–Kaji Boolean implementation than between the original system and its binarization according to the Tonello method, for which only one condition (**32**) was significantly correlated ($p<0.05$, Table 4).

## 4. Discussion

## 5. Methods

#### 5.1. Non-Binary Implementation

Extending PyPhi to non-binary elements required modifying the `Network` class and implementing new routines for marginalization, which we describe briefly below. For those interested in the details of these changes, the source code is publicly available in the 'nonbinary' branch of the PyPhi repository (https://github.com/wmayner/pyphi/tree/nonbinary).

First, we extended the `Network` class (which represents the dynamical system under analysis) in order to store the number of possible states for each element in the `num_states_per_node` attribute. This information has to be provided by the user and is necessary to keep track of which rows and columns in the TPM correspond to which system states, as the system state is determined by the states of the individual system elements (Equation (1)).

Second, the TPM is represented as a pandas `DataFrame`, with the rows and columns indexed using a hierarchical `MultiIndex`. In each index, there is one level per element, and the level values correspond to the element’s states. This allows for indexing into the TPM using state tuples, as in the original implementation (multidimensional state-by-node format). Representing the TPM as a `DataFrame` also allows the marginalization to be implemented easily with the `groupby()` method, where `purview` is a list of node labels (i.e., names in the row `MultiIndex`).
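A toy illustration of this `groupby()` pattern (not the actual PyPhi routine; the node names, state counts, and the choice of summing out the non-purview columns are our assumptions):

```python
import numpy as np
import pandas as pd

# A random state-by-state TPM for a system with node A (2 states)
# and node B (3 states); rows = current state, columns = next state.
num_states_per_node = {"A": 2, "B": 3}
idx = pd.MultiIndex.from_product(
    [range(n) for n in num_states_per_node.values()],
    names=list(num_states_per_node),
)
rng = np.random.default_rng(0)
tpm = pd.DataFrame(rng.random((6, 6)), index=idx, columns=idx)
tpm = tpm.div(tpm.sum(axis=1), axis=0)  # normalize rows to distributions

# Marginalize the next-state distribution onto the purview {A}:
# group the columns by the purview levels and sum out the rest.
purview = ["A"]
marginal = tpm.T.groupby(level=purview).sum().T
```

Here `marginal` is a 6 × 2 `DataFrame` whose rows still sum to one: the probabilities over the states of B have been summed out.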

#### 5.2. Settings

`PARTITION_TYPE = 'TRI'`

`MEASURE = 'AID'`

`USE_SMALL_PHI_DIFFERENCE_FOR_CES_DISTANCE = True`

`ASSUME_CUTS_CANNOT_CREATE_NEW_CONCEPTS = True`

`'AID'` stands for “absolute intrinsic difference” and implements the intrinsic difference (ID) measure that was introduced in [42]. However, here we evaluate the maximum over the absolute difference between the unpartitioned and partitioned repertoire (see Barbosa et al., forthcoming). In addition to the $\phi $ value, the ID also identifies the specific state within the cause and effect repertoire for which the measure is maximal, which corresponds to the specific cause and effect of the mechanism in its current state.
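To make this description concrete, here is one possible reading of such a measure (an illustrative sketch only; the exact definition follows Barbosa et al. [42] and PyPhi's implementation, and the function name is ours):

```python
import math

def absolute_intrinsic_difference(p, q):
    """Sketch of an 'absolute intrinsic difference': the maximum, over
    repertoire states, of |p(s) * log2(p(s) / q(s))|, where p is the
    unpartitioned and q the partitioned repertoire. Returns the value
    and the state index at which the maximum is attained."""
    best, best_state = 0.0, None
    for s, (ps, qs) in enumerate(zip(p, q)):
        if ps > 0 and qs > 0:
            d = abs(ps * math.log2(ps / qs))
            if d > best:
                best, best_state = d, s
    return best, best_state
```

Unlike a distance that sums over the whole repertoire, taking a maximum also singles out the state that acts as the mechanism's specific cause or effect.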

Other difference measures remain available in PyPhi, such as the earth mover’s distance used in IIT 3.0 (`MEASURE = 'EMD'`) (see https://pyphi.readthedocs.io/en/latest/configuration.html for the current list of options).

#### 5.3. Overview of the Algorithm in Pseudocode

Algorithm 1. Python-like pseudocode describing the functions used in the extended non-binary PyPhi.

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

1. Thomas, R.; D’Ari, R. Biological Feedback; CRC Press: Boca Raton, FL, USA, 1990.
2. Abou-Jaoudé, W.; Traynard, P.; Monteiro, P.T.; Saez-Rodriguez, J.; Helikar, T.; Thieffry, D.; Chaouiya, C. Logical Modeling and Dynamical Analysis of Cellular Networks. Front. Genet. 2016, 7, 94.
3. Thomas, R. Boolean formalization of genetic control circuits. J. Theor. Biol. 1973, 42, 563–585.
4. Thomas, R. Regulatory networks seen as asynchronous automata: A logical description. J. Theor. Biol. 1991, 153, 1–23.
5. Didier, G.; Remy, E.; Chaouiya, C. Mapping multivalued onto Boolean dynamics. J. Theor. Biol. 2011, 270, 177–184.
6. Dayan, P.; Abbott, L.F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems; MIT Press: Cambridge, MA, USA, 2000; pp. 1689–1699.
7. Hindmarsh, J.L.; Rose, R.M. A model of neuronal bursting using three coupled first order differential equations. Proc. R. Soc. Lond. B 1984, 221, 87–102.
8. Aizenberg, I.N.; Naum, N.; Aizenberg, J.V. Multiple-Valued Threshold Logic and Multi-Valued Neurons. In Multi-Valued and Universal Binary Neurons; Springer: Berlin/Heidelberg, Germany, 2000; pp. 25–80.
9. Prados, D.; Kak, S. Non-binary neural networks. In Advances in Computing and Control; Springer: Berlin/Heidelberg, Germany, 2006; pp. 97–104.
10. Hoel, E.P.; Albantakis, L.; Tononi, G. Quantifying causal emergence shows that macro can beat micro. Proc. Natl. Acad. Sci. USA 2013, 110, 19790–19795.
11. Van Ham, P. How to Deal with Variables with More Than Two Levels; Springer: Berlin/Heidelberg, Germany, 1979; pp. 326–343.
12. Fauré, A.; Kaji, S. A circuit-preserving mapping from multilevel to Boolean dynamics. J. Theor. Biol. 2018, 440, 71–79.
13. Tonello, E. On the conversion of multivalued to Boolean dynamics. Discret. Appl. Math. 2019, 259, 193–204.
14. Oizumi, M.; Albantakis, L.; Tononi, G. From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Comput. Biol. 2014, 10, e1003588.
15. Albantakis, L.; Marshall, W.; Hoel, E.; Tononi, G. What caused what? A quantitative account of actual causation using dynamical causal networks. Entropy 2019, 21, 459.
16. Albantakis, L.; Tononi, G. Causal Composition: Structural Differences among Dynamically Equivalent Systems. Entropy 2019, 21, 989.
17. Tononi, G. An information integration theory of consciousness. BMC Neurosci. 2004, 5, 42.
18. Tononi, G. Integrated information theory. Scholarpedia 2015, 10, 4164.
19. Tononi, G.; Boly, M.; Massimini, M.; Koch, C. Integrated information theory: From consciousness to its physical substrate. Nat. Rev. Neurosci. 2016, 17, 450–461.
20. Albantakis, L. Integrated information theory. In Beyond Neural Correlates of Consciousness; Overgaard, M., Mogensen, J., Kirkeby-Hinrup, A., Eds.; Routledge: Oxford, UK, 2020; pp. 87–103.
21. Albantakis, L. A Tale of Two Animats: What Does It Take to Have Goals? Springer: Berlin/Heidelberg, Germany, 2018; pp. 5–15.
22. Marshall, W.; Kim, H.; Walker, S.I.; Tononi, G.; Albantakis, L. How causal analysis can reveal autonomy in models of biological systems. Philos. Trans. R. Soc. A 2017, 375, 20160358.
23. Marshall, W.; Gomez-Ramirez, J.; Tononi, G. Integrated Information and State Differentiation. Front. Psychol. 2016, 7, 926.
24. Albantakis, L.; Tononi, G. The Intrinsic Cause-Effect Power of Discrete Dynamical Systems—From Elementary Cellular Automata to Adapting Animats. Entropy 2015, 17, 5472–5502.
25. Aguilera, M. Scaling Behaviour and Critical Phase Transitions in Integrated Information Theory. Entropy 2019, 21, 1198.
26. Popiel, N.J.; Khajehabdollahi, S.; Abeyasinghe, P.M.; Riganello, F.; Nichols, E.S.; Owen, A.M.; Soddu, A. The Emergence of Integrated Information, Complexity, and ‘Consciousness’ at Criticality. Entropy 2020, 22, 339.
27. Albantakis, L.; Hintze, A.; Koch, C.; Adami, C.; Tononi, G. Evolution of Integrated Causal Structures in Animats Exposed to Environments of Increasing Complexity. PLoS Comput. Biol. 2014, 10, e1003966.
28. Oizumi, M.; Amari, S.i.; Yanagawa, T.; Fujii, N.; Tsuchiya, N. Measuring Integrated Information from the Decoding Perspective. PLoS Comput. Biol. 2016, 12, e1004654.
29. Hoel, E.P.; Albantakis, L.; Marshall, W.; Tononi, G. Can the macro beat the micro? Integrated information across spatiotemporal scales. Neurosci. Conscious. 2016.
30. Marshall, W.; Albantakis, L.; Tononi, G. Black-boxing and cause-effect power. PLoS Comput. Biol. 2018, 14, e1006114.
31. Mayner, W.G.; Marshall, W.; Albantakis, L.; Findlay, G.; Marchman, R.; Tononi, G. PyPhi: A toolbox for integrated information theory. PLoS Comput. Biol. 2018, 14, e1006343.
32. Haun, A.; Tononi, G. Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience. Entropy 2019, 21, 1160.
33. Nilsen, A.S.; Juel, B.E.; Marshall, W.; Storm, J.F. Evaluating Approximations and Heuristic Measures of Integrated Information. Entropy 2019, 21, 525.
34. Tononi, G.; Sporns, O. Measuring information integration. BMC Neurosci. 2003, 4, 1–20.
35. Abou-Jaoudé, W.; Ouattara, D.A.; Kaufman, M. From structure to dynamics: Frequency tuning in the p53–Mdm2 network: I. Logical approach. J. Theor. Biol. 2009, 258, 561–577.
36. Langton, C. Studying artificial life with cellular automata. Phys. Nonlinear Phenom. 1986, 22, 120–149.
37. Ermentrout, G.B.; Edelstein-Keshet, L. Cellular automata approaches to biological modeling. J. Theor. Biol. 1993, 160, 97–133.
38. Gottwald, S. Many-Valued Logic and Fuzzy Set Theory; Springer: Berlin/Heidelberg, Germany, 1999; pp. 5–89.
39. Cintula, P.; Hájek, P.; Noguera, C. Handbook of Mathematical Fuzzy Logic (in 2 Volumes); College Publications: London, UK, 2011.
40. Israeli, N.; Goldenfeld, N. Coarse-graining of cellular automata, emergence, and the predictability of complex systems. Phys. Rev. E 2006, 73, 026203.
41. Hanson, J.R.; Walker, S.I. Integrated Information Theory and Isomorphic Feed-Forward Philosophical Zombies. Entropy 2019, 21, 1073.
42. Barbosa, L.S.; Marshall, W.; Streipert, S.; Albantakis, L.; Tononi, G. A measure for intrinsic information. Sci. Rep. 2020, 10, 18803.
43. Krohn, S.; Ostwald, D. Computing integrated information. Neurosci. Conscious. 2017.

**Figure 1.** Integrated information across data classes. Plotted are the distributions of integrated information ($\mathsf{\Phi}$, top) and the average integrated information across a system’s causal distinctions ($\langle \phi \rangle $, bottom) for the various classes in our data set, ordered by the total number of system states from the lowest (leftmost) to the highest (rightmost) (see Table 1). Plots were created with the Python package `seaborn`, using the `violinplot` function with parameters `cut=0, scale="count"`. Dashed lines indicate quartiles. The mean values are overlaid as stars. Equivalent classes {**44**, **44(2222)**} and {**88**, **88(444)**} were pooled together in this figure (see text). The mean $\mathsf{\Phi}$ increases with the number of system elements and the number of states per element, while the mean $\langle \phi \rangle $ decreases with the number of elements, but increases with the number of states per element.

**Figure 2.** Correlation of $\mathsf{\Phi}$ values between classes with and without shared TPMs. In this figure, we plotted the relationship between the $\mathsf{\Phi}$ values of several pairs of classes: (**A**) **44(2222)** vs. **2222**; (**B**) **44** vs. **2222**; (**C**) **88(444)** vs. **444**; and (**D**) **88** vs. **444**. The null correlation between classes **44** vs. **2222** and **88** vs. **444** is expected, as these data samples have different TPMs and are, thus, completely independent. Because the other two pairs share their TPMs, they have the same effective information, but they differ in their causal composition (due to the different number of nodes) and their integration (due to differences in how their nodes are connected). Thus, despite the shared TPM, such network pairs will typically differ in their number of causal distinctions, the corresponding causes, effects, and $\phi $ values, and their total amount of integrated information $\mathsf{\Phi}$.

**Figure 3.** The p53–Mdm2 regulatory network. The arrows indicate causal dependencies, which can be excitatory, inhibitory, or nonlinear. (**A**) Original version with a multi-valued element. Circular elements signify Boolean variables, while the square element is multi-valued. (**B**) Binarized version using the Van Ham method [5,11]. Note that this graph is based on an incomplete TPM and thus does not correspond to a complete causal model (see Table 3). (**C**) Under the Fauré–Kaji binarization method [12], **P1** and **P2** become causally equivalent and act jointly on **Mc**. (**D**) The Tonello binarization method [13] introduces additional dependencies between **P1** and **P2**. In all cases, the ternary node P is split into two binary nodes P1 and P2. The $\mathsf{\Phi}$ values are provided for the fixed point $\{P,Mc,Mn\}=(0,0,1)$, corresponding to $\{P1,P2,Mc,Mn\}=(0,0,0,1)$ in the binarized versions. While the Fauré–Kaji method largely maintains the causal structure of the original system, the Tonello method introduces many higher-order mechanisms (see text for details).

**Figure 4.** Decreased integrated information in a fine-grained system of three interacting neurons. Each neuron fires if it receives excitatory input from at least one of the other two neurons and is not currently in a refractory period after firing during the last update (self-loops are thus inhibitory, denoted by round arrowheads). At each firing, there is a 20% chance that a neuron will emit two action potentials instead of just one (see Ternary Logic). The system’s dynamics are evaluated in 10 ms time bins (top, spike raster plot). Assuming that every neuron has three states (firing 0, 1, or 2 action potentials in one time window) leads to lower values of integrated information than a binary analysis that only distinguishes two states, firing (1) or not (0), both when evaluated for the specific state $(1,1,0)$ and, on average, across all possible states.

**Table 1.** List of data classes. Classes are labeled such that the number of digits represents the number of nodes of the network, and each digit in turn stands for the number of states of a node. Numbers accompanied by parentheses denote that the respective network class shares its TPMs with the one indicated in parentheses. Therefore, **33** corresponds to two-ternary-node networks; **2235** to networks of four nodes with two, two, three, and five states, respectively; and **88(444)** to two-octal-node networks that share their TPMs with the class of three-quaternary-node networks.

| Class | 222 | 33 | 2222 | 44(2222) | 44 | 333 | 2235 | 444 | 88(444) | 88 |
|---|---|---|---|---|---|---|---|---|---|---|
| #Nodes | 3 | 2 | 4 | 2 | 2 | 3 | 4 | 3 | 2 | 2 |
| #States (total) | 8 | 9 | 16 | 16 | 16 | 27 | 60 | 64 | 64 | 64 |

**Table 2.** Number of causal distinctions per class. The maximum number of distinctions is determined by the number of system elements (N) as ${2}^{N}-1$. Asterisks denote the pooled classes {**44**, **44(2222)**} and {**88**, **88(444)**} (see text).

| Class | 222 | 33 | 2222 | 44* | 333 | 2235 | 444 | 88* |
|---|---|---|---|---|---|---|---|---|
| (max. #distinctions) | (7) | (3) | (15) | (3) | (7) | (15) | (7) | (3) |
| 〈#distinctions〉 | 5.35 | 2.71 | 13.81 | 2.91 | 7.00 | 14.95 | 7.00 | 3.00 |
| % of max | 76% | 90% | 92% | 97% | 100% | 100% | 100% | 100% |

**Table 3.** Evolution function of the p53–Mdm2 network model. The asymptotic evolution function of the original network model with a multi-valued element [5] determines the system’s TPM from state t to $t+1$ (left). The associated Boolean evolution functions generated according to three different binarization methods are shown on the right. The equal sign (“=”) indicates that the state mapping of the Van Ham binarization is maintained for that particular state; a dash (“–”) marks binary states without a multi-valued counterpart, for which the Van Ham binarization leaves the transition undefined.

| Multi-valued t (P,Mc,Mn) | Multi-valued t+1 (P,Mc,Mn) | Binary t (P1,P2,Mc,Mn) | Van Ham t+1 | Fauré & Kaji t+1 | Tonello t+1 |
|---|---|---|---|---|---|
| (0,0,0) | (2,0,1) | (0,0,0,0) | (1,1,0,1) | = | (1,0,0,1) |
| (1,0,0) | (2,0,0) | (1,0,0,0) | (1,1,0,0) | = | = |
| – | – | (0,1,0,0) | – | (1,1,0,0) | (1,1,0,0) |
| (2,0,0) | (2,1,0) | (1,1,0,0) | (1,1,1,0) | = | = |
| (0,1,0) | (2,0,1) | (0,0,1,0) | (1,1,0,1) | = | (1,0,0,1) |
| (1,1,0) | (2,0,1) | (1,0,1,0) | (1,1,0,1) | = | = |
| – | – | (0,1,1,0) | – | (1,1,0,1) | (1,1,0,1) |
| (2,1,0) | (2,1,1) | (1,1,1,0) | (1,1,1,1) | = | = |
| (0,0,1) | (0,0,1) | (0,0,0,1) | (0,0,0,1) | = | = |
| (1,0,1) | (0,0,0) | (1,0,0,1) | (0,0,0,0) | = | = |
| – | – | (0,1,0,1) | – | (0,0,0,0) | (0,0,0,0) |
| (2,0,1) | (0,1,0) | (1,1,0,1) | (0,0,1,0) | = | (1,0,1,0) |
| (0,1,1) | (0,0,1) | (0,0,1,1) | (0,0,0,1) | = | = |
| (1,1,1) | (0,0,1) | (1,0,1,1) | (0,0,0,1) | = | = |
| – | – | (0,1,1,1) | – | (0,0,0,1) | (0,0,0,1) |
| (2,1,1) | (0,1,1) | (1,1,1,1) | (0,0,1,1) | = | (1,0,1,1) |

**Table 4.** Correlation of $\mathsf{\Phi}$ values between the original multi-valued systems and their binarized counterparts (correlation coefficient r and corresponding p-value per class and binarization method).

| Class | 32 | 43 | 332 |
|---|---|---|---|
| Fauré–Kaji method, r | 0.56 | 0.35 | 0.29 |
| Fauré–Kaji method, p-value | ≈0 | <0.001 | <0.005 |
| Tonello method, r | 0.24 | 0.18 | 0.15 |
| Tonello method, p-value | 0.015 | 0.08 | 0.14 |



## Share and Cite

**MDPI and ACS Style**

Gomez, J.D.; Mayner, W.G.P.; Beheler-Amass, M.; Tononi, G.; Albantakis, L.
Computing Integrated Information (Φ) in Discrete Dynamical Systems with Multi-Valued Elements. *Entropy* **2021**, *23*, 6.
https://doi.org/10.3390/e23010006
