# The Intrinsic Cause-Effect Power of Discrete Dynamical Systems—From Elementary Cellular Automata to Adapting Animats

## Abstract

## 1. Introduction

**Figure 1.** Elementary cellular automata (ECA). (**A**) ECA consist of a 1-dimensional chain of cells in either state 0 or 1, which are updated according to a time-evolution rule dependent on a cell’s previous state and that of its two nearest neighbors. Given the 2^{3} = 8 possible past configurations, 256 different update rules exist, which are labeled by the decimal number of their rule in binary. A “0”-to-“1” and/or left-right transformation of a rule (past and current state) leads to rules with equivalent behavior; (**B**) Example evolutions of four ECA (number of cells N = 50) with distinct long-term behavior over 50 time-steps for a random initial state and periodic boundary conditions; (**C**) ECA with N = 6 cells and periodic boundary conditions illustrated as a network of interacting elements. Edges denote connections between cells. Each cell in the ECA has a self-loop and connections to its nearest neighbors. Note, however, that depending on the cell’s update rule some of the edges may not be causally effective, e.g., if the state of the left neighbor is irrelevant as in rule 136 (10001000).
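
The rule-numbering convention described above can be sketched in code. The following is an illustrative sketch (the function name `eca_step` is ours, not from the text), assuming the standard Wolfram convention that bit i of the rule number gives the output for the neighborhood whose (left, center, right) states spell i in binary:

```python
def eca_step(state, rule):
    """One synchronous update of an elementary cellular automaton.

    `state` is a list of 0/1 cells with periodic boundary conditions;
    `rule` is the Wolfram rule number (0-255). Bit i of `rule` is the
    output for the neighborhood whose (left, center, right) bits
    spell i in binary -- one of the 2^3 = 8 past configurations.
    """
    n = len(state)
    new_state = []
    for i in range(n):
        left, center, right = state[i - 1], state[i], state[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right
        new_state.append((rule >> neighborhood) & 1)
    return new_state

# Rule 204 (11001100) is the Identity rule: each cell copies its own state.
assert eca_step([0, 1, 1, 0, 1], 204) == [0, 1, 1, 0, 1]
```

For rule 136 (10001000), for example, the table yields output 1 only for neighborhoods 111 and 011, so the output never depends on the left neighbor, matching the caption's remark about causally ineffective edges.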

The cause-effect repertoire (CER) of a mechanism M_{t} in its current state m_{t} is a set of two conditional probability distributions: the possible past states of a set of elements within the system (Z_{t−1}) and the possible future states of a set of elements within the system (Z_{t+1}), conditioned on m_{t}:

$CER({m}_{t},{Z}_{t\pm 1})=\left\{{p}_{cause}\left({z}_{t-1}\mid {m}_{t}\right),{p}_{effect}({z}_{t+1}\mid {m}_{t})\right\}$

To assess how irreducible the CER of a mechanism M_{t} = m_{t} is for a given ${Z}_{t\pm 1}$, its CER is partitioned into two independent parts by injecting independent noise into the connections between $\left\{{M}_{1,t},{Z}_{1,t\pm 1}\right\}$ and $\left\{{M}_{2,t},{Z}_{2,t\pm 1}\right\}$. φ is then measured as the distance between the intact CER and the product CER under the partition P:

$\phi \left({m}_{t},{Z}_{t\pm 1},P\right)=D\left(CER\left({m}_{t},{Z}_{t\pm 1}\right),CER({m}_{1,t},{Z}_{1,t\pm 1})\times CER({m}_{2,t},{Z}_{2,t\pm 1})\right)$

The maximally irreducible CER of a mechanism M_{t} = m_{t} over the particular set of system elements ${Z}_{t\pm 1}^{*}$ that maximizes $\phi \left({m}_{t},{Z}_{t\pm 1},MIP\right)$ is called its “concept” (see Table 1). A concept determines the mechanism’s causal role within the system from the intrinsic perspective of the system itself. Elementary mechanisms specify 1st-order concepts; mechanisms composed of several system elements specify higher-order concepts.

The cause-effect structure C(s_{t}) is the set of concepts specified by all mechanisms of a system S_{t} in state s_{t}. Integrated conceptual information Φ quantifies the irreducible cause-effect power of a system of mechanisms taken as a whole. It measures how irreducible a system’s cause-effect structure C(s_{t}) is compared to the cause-effect structure of the system when it is causally partitioned (unidirectionally) across its weakest link $C({s}_{t},MIP)$:

$\Phi \left({s}_{t},MIP\right)=D\left(C\left({s}_{t}\right)\mid C({s}_{t},MIP)\right)$

Sets of elements whose cause-effect structure is maximally irreducible (Φ = Φ^{Max}) form “complexes” with self-defined causal boundaries. Only sets of elements with many specialized, but integrated concepts can achieve high values of Φ. Φ can thus be viewed as a measure of the intrinsic causal complexity of a system.

Systems that (i) have low Φ^{Max}; (ii) lack composition, meaning their cause-effect structures lack higher-order concepts; or (iii) are reducible (Φ = 0), should not be able to produce interesting global dynamics. In isolated systems, IIT measures characterizing intrinsic cause-effect power, such as Φ^{Max}, the number of concepts, and their Σφ^{Max}, should thus correlate with general dynamical properties of the system, such as the maximal transient length to reach fixed points or periodic cycles. By contrast, non-isolated systems can exhibit complex reactive behavior driven by the environment. Analyzing the intrinsic cause-effect structure of a behaving agent can elucidate to what extent the agent itself has a complex structure, and to what extent it is merely reactive. Moreover, integrated systems with a rich cause-effect structure have adaptive advantages in environments that require context-sensitivity and memory [19]. Finally, the examples discussed in this article also reveal conceptual dissociations between the observable behaviors of discrete dynamical systems, which describe what a system “happens to be doing”, and their intrinsic cause-effect structures, which describe what a system “is”.

**Table 1.** Overview of the IIT quantities used in this article, at the level of individual mechanisms and of the system as a whole.

| | Term | Definition |
|---|---|---|
| **MECHANISM** | cause-effect repertoire $CER({m}_{t},{Z}_{t\pm 1})$ | A set of two conditional probability distributions: $CER({m}_{t},{Z}_{t\pm 1})=\left\{{p}_{cause}\left({z}_{t-1}\mid {m}_{t}\right),{p}_{effect}({z}_{t+1}\mid {m}_{t})\right\}$, describing how the mechanism M_{t} in its current state m_{t} constrains the past and future states of the sets of system elements Z_{t−1} and Z_{t+1}, respectively. |
| | partition P | $P=\left\{{M}_{1,t},{Z}_{1,t\pm 1};{M}_{2,t},{Z}_{2,t\pm 1}\right\}$, where the connections between the parts $\left\{{M}_{1,t},{Z}_{1,t\pm 1}\right\}$ and $\left\{{M}_{2,t},{Z}_{2,t\pm 1}\right\}$ are injected with independent noise. |
| | integrated information φ (“small phi”) | φ measures the irreducibility of a CER w.r.t. a partition P: $\phi \left({m}_{t},{Z}_{t\pm 1},P\right)=D\left(CER\left({m}_{t},{Z}_{t\pm 1}\right),CER({m}_{1,t},{Z}_{1,t\pm 1})\times CER({m}_{2,t},{Z}_{2,t\pm 1})\right)$ |
| | MIP | The partition that makes the least difference to a CER: $MIP=\underset{P}{\text{argmin}}\left(\phi \left({m}_{t},{Z}_{t\pm 1},P\right)\right)$ |
| | ${Z}_{t\pm 1}^{*}$ | The set of system elements ${Z}_{t\pm 1}^{*}=\left\{{Z}_{t-1}^{*},{Z}_{t+1}^{*}\right\}$, where ${Z}_{t-1}^{*}=\left\{\underset{{Z}_{t-1}}{\text{argmax}}\left({\phi}_{cause}\left({m}_{t},{Z}_{t-1},MI{P}_{cause}\right)\right)\right\}$ and ${Z}_{t+1}^{*}=\left\{\underset{{Z}_{t+1}}{\text{argmax}}\left({\phi}_{effect}\left({m}_{t},{Z}_{t+1},MI{P}_{effect}\right)\right)\right\}$ |
| | φ^{Max}(m_{t}) | The intrinsic cause-effect power of a mechanism M_{t}: ${\phi}^{Max}\left({m}_{t}\right)=\phi \left({m}_{t},{Z}_{t\pm 1}^{*},MIP\right)=\mathrm{min}\left({\phi}_{cause}^{Max}({m}_{t}),{\phi}_{effect}^{Max}({m}_{t})\right)=\text{min}\left({\phi}_{cause}\left({m}_{t},{Z}_{t-1}^{*},MI{P}_{cause}\right),{\phi}_{effect}\left({m}_{t},{Z}_{t+1}^{*},MI{P}_{effect}\right)\right)$ |
| | concept | The maximally irreducible CER(m_{t}) with φ^{Max}(m_{t}) over ${Z}_{t\pm 1}^{*}$: $CER({m}_{t},{Z}_{t\pm 1}^{*})=\left\{{p}_{cause}\left({z}_{t-1}^{*}\mid {m}_{t}\right),{p}_{effect}({z}_{t+1}^{*}\mid {m}_{t})\right\}$, describing the causal role of mechanism M_{t} within the system. |
| **SYSTEM** | cause-effect structure C(s_{t}) | The set of concepts specified by all mechanisms with φ^{Max}(m_{t}) > 0 within the system S_{t} in its current state s_{t}. |
| | Σφ^{Max} | The sum of all φ^{Max}(m_{t}) of C(s_{t}). |
| | unidirectional partition ${P}_{\to}$ | ${P}_{\to}=\left\{{S}_{1},{S}_{2}\right\}$, where the connections from the set of elements S_{1} to S_{2} are injected with independent noise (for t−1 → t and t → t+1). |
| | integrated conceptual information Φ (“big phi”) | Φ measures the irreducibility of a cause-effect structure w.r.t. a partition ${P}_{\to}$: $\Phi \left({s}_{t},{P}_{\to}\right)=D\left(C\left({s}_{t}\right)\mid C({s}_{t},{P}_{\to})\right)$. Φ captures how much the CERs of the system’s mechanisms are altered and how much φ^{Max} is lost by partitioning the system. |
| | MIP | The unidirectional system partition that makes the least difference to C(s_{t}): $MIP=\underset{{P}_{\to}}{\text{argmin}}\left(\Phi \left({s}_{t},{P}_{\to}\right)\right)$ |
| | Φ^{Max} | The intrinsic cause-effect power of a system of elements ${S}_{t}^{*}$: ${\Phi}^{Max}=\Phi \left({s}_{t}^{*}\right)>0$ such that for any other S_{t} with $({S}_{t}\cap {S}_{t}^{*})\ne \varnothing$, $\Phi \left({s}_{t}\right)\le \Phi \left({s}_{t}^{*}\right)$. |
| | complex | A set of elements ${S}_{t}^{*}$ with ${\Phi}^{Max}=\Phi \left({s}_{t}^{*}\right)>0$. A complex thus specifies a maximally irreducible cause-effect structure. |

## 2. Results and Discussion

#### 2.1. Behavior and Cause-Effect Power of Elementary Cellular Automata

For each ECA rule, the IIT measures were evaluated for N + 1 of the 2^{N} possible states, with different numbers of cells in states “0” and “1”, from which we obtained an average value (<Φ^{Max}>). Since all elements in an ECA specify the same rule and symmetric states have redundant cause-effect structures, this sample of states is representative of a large number of the 2^{N} possible states (see Figure S1 for distributions of measured IIT quantities across all ECA with N = 5).

#### 2.1.1. Cause-Effect Structure of ECA vs. Wolfram Classes

In Figure 2, the <Φ^{Max}> values of all 88 ECA rule classes are plotted against their respective number of concepts and Σφ^{Max} for systems with five and six cells. Φ^{Max} is the integrated conceptual information of the main complex in the system, the set of elements ${S}_{t}^{*}$ that is most irreducible. φ^{Max} measures the cause-effect power of individual concepts within the main complex (see Table 1 and Methods). Each equivalency class is color-coded according to its Wolfram classification I–IV for random initial states [9], according to whether almost all initial conditions lead to (I) the same uniform stable state (black); (II) a non-uniform stable state or periodic behavior (blue); (III) pseudo-random or chaotic behavior (green); or (IV) complex behavior with a mixture of randomness and order (red). Figure 1B shows the time evolution of four example rules from classes I–IV, which are also highlighted in Figure 2. For ECA with five cells, simple rules from class I all have Φ = 0 or low <Φ^{Max}>, only a limited number of concepts, and a low value of <Σφ^{Max}>. Rules from class II have low to intermediate values of <Φ^{Max}>, but can have a high number of concepts, albeit typically with lower <Σφ^{Max}> than class III rules (Figure 2B). Class III rules show the highest values of <Φ^{Max}> and <Σφ^{Max}>, with an intermediate number of concepts. For class III systems to have fewer concepts but higher <Σφ^{Max}> compared to class I and II systems, their concepts must typically be more selective and/or more irreducible (have more cause-effect power) than those of class I or II systems. Finally, class IV rules have high numbers of concepts with intermediate to high <Φ^{Max}> and <Σφ^{Max}> values.

**Figure 2.** Relation of <Φ^{Max}>, <#concepts>, and <Σφ^{Max}> to Wolfram ECA classes I–IV. (**A**) Mean number of concepts <#concepts> plotted against <Φ^{Max}> for all 88 ECA equivalence classes for ECA implementations with N = 5 cells. Each rule is color-coded by its Wolfram class: (I) uniform fixed points in black; (II) non-uniform fixed points and periodic behavior in blue; (III) random, chaotic behavior in green; and (IV) complex behavior in red; (**B**) <Σφ^{Max}> of all concepts in the system plotted against <Φ^{Max}> for N = 5 cells; (**C**,**D**) Same as (**A**,**B**), for N = 6 cells. Rules shown in Figure 1B and below in Figure 3 are labeled by their rule number.

The <#concepts>, <Σφ^{Max}>, and <Φ^{Max}> of ECA tend to grow with the number of cells N (see below). Nevertheless, the distribution of rules on the <Φ^{Max}>/<#concepts> and <Φ^{Max}>/<Σφ^{Max}> planes for ECA with six cells stays the same for classes I, II, and IV, and most rules of class III. Exceptions are class III rules 18, 90, 105, and 150, which are strongly dependent on N being even or odd, with lower <#concepts> for even N.

Rules with <Φ^{Max}> ≤ 1 implement linearly separable rules (defined, e.g., in [21]). Nevertheless, some linear rules still show a high number of concepts with low to medium φ^{Max} and moderate <Φ^{Max}> values, such as, for example, the Majority rule 232 (Figure 3C).

Notably, the <Φ^{Max}>, <#concepts>, and <Σφ^{Max}> of rule 74 are substantially higher than those of rule 232 (Figure 2, Figure 3 and Figure 4).

**Figure 3.** Examples of N = 5 cell ECA cause-effect structures. (**A**) Systems that implement reducible rules can be partitioned without loss of cause-effect power, as indicated by the system’s network diagram (left). Note that the three reducible rules 0, 51, and 204 produce different patterns: class I, class II with periodic behavior, and class II with steady-state behavior, respectively; (**B**–**F**) For each example system, the effective network diagram in its current state is displayed on the left, together with the cause-effect structure in cause-effect space (middle) and a time evolution of the rule for 50 cells (right). The cause-effect structures are projected onto three dimensions of cause-effect space (see Methods). Blue axes indicate past system states, green axes future system states. The φ^{Max} value of each concept is indicated by the size of its yellow circle. The coordinates of every concept in cause-effect space are the probabilities its cause-effect repertoire specifies for each system state; (**B**) The AND rule 128 system is a class I example of a system that does form a complex, but has only 1st-order concepts. Its cause-effect structure lacks composition, leading to a low Φ^{Max} value; (**C**) Linear class II rules can have many concepts, albeit with rather low φ^{Max}; (**D**) Example of a non-linear class II rule with all concepts; (**E**) Example of the nonlinear class III rule 90, which has no 1st-order concepts, but highly irreducible concepts of higher orders and high Φ^{Max} for odd numbers of cells; (**F**) Example of a complex class IV rule with all possible concepts and high Φ^{Max}.

**Figure 4.** Distribution of <#concepts> and <φ^{Max}> across concept orders for N = 5 ECA. The 88 rule equivalency classes are ordered by their <Φ^{Max}>. For each rule, the bar plot shows the <#concepts> of each concept order (1st–5th) as a fraction of the maximum number of possible concepts (2^{N} − 1 = 31 for N = 5). The blue line depicts <φ^{Max}> of all concepts at the respective concept order (same y-axis as the fraction of concepts). Both <#concepts> and <φ^{Max}> are averages across N + 1 system states. Bar colors indicate a rule’s Wolfram class, as in Figure 2: (I) black; (II) blue; (III) green; and (IV) red. Note that there are at most 5 possible 1st- and 4th-order concepts, 10 possible 2nd- and 3rd-order concepts, and only one possible 5th-order concept in an N = 5 system.

For rule 90, <Φ^{Max}> and <#concepts> are strongly dependent on whether the number of cells N of the system is even or odd. In the case of even N, only N 2nd-order concepts exist, which are composed of element pairs separated by one cell, such as AC, BD, etc. For odd N, as in Figure 3E, there can be additional concepts of higher order than 2.

The class IV rule 106 (Figure 3F) is shown in the state with the highest Φ^{Max} value. In this state the system specifies all possible 2^{N} − 1 concepts. In general, for larger ECA systems certain concepts necessarily become reducible, due to the limited range of nearest-neighbor interaction. Nevertheless, we found that all class IV rules have states with concepts of all possible orders, at least up to N = 7 cell systems. Based on this finding, we speculate that the capacity to have irreducible concepts at all orders might be a necessary condition for complex, universal class IV rules.

The distribution of concepts across orders and their φ^{Max} values reflect general properties of a rule’s cause-effect structure and its integrated conceptual information <Φ^{Max}>. Figure 4 provides an overview of the conceptual profiles of all ECA rules for N = 5 systems, ordered by increasing <Φ^{Max}>. Note that considering only causes and effects of individual cells (elementary, 1st-order mechanisms), or of the ECA system as a whole (highest order), would not expose these differences in complexity across ECA. Our findings thus highlight the importance of assessing the causal composition of a system across all orders.

Rules with <Φ^{Max}> = 0 do not have any concepts. As discussed above for the example rule 128 (Figure 3B), several class I and II rules generally specify only 1st-order concepts (e.g., rules 32, 34, 10, 15, etc.). Of these, rules 15 and 170 have the highest <Φ^{Max}> = 1, since their 1st-order concepts are very selective, with <φ^{Max}> = 0.5. As shown for rule 90 (Figure 3E), some nonlinear class II and III rules (e.g., 36, 126, 105, 90, etc.) do not specify any 1st-order concepts. In these systems only sets of elements can have cause-effect power. By contrast, all class IV rules have concepts of all orders. Overall, rules that specify cause-effect structures with higher <Φ^{Max}> tend to have more higher-order concepts and more selective concepts with high <φ^{Max}> at all orders.

High numbers of concepts, <Σφ^{Max}>, and <Φ^{Max}> may be necessary for rules to have the capacity for intricate class III and IV patterns. Nevertheless, certain rules that behaviorally lie in class II have only 1st-order concepts and <Φ^{Max}> ≤ 1, and are thus intrinsically not more complex than class I rules. Other class II rules, however, have similar numbers of concepts and <Φ^{Max}> as rules with more complex or random behavior. This may indicate an evolutionary potential of these class II rules for complex behavior, meaning that only small changes are necessary to transform them into class III or IV rules. Class II rule 74, for example, has a relatively rich causal structure and differs in only one bit from class III rule 90 and class IV rule 106 (Figure 3D–F); and rule 154, the class II rule with the highest <Φ^{Max}> value, has indeed been classified as (locally) chaotic by other authors [13,22].

#### 2.1.2. Additional Causal Equivalencies

In terms of <#concepts>, <Σφ^{Max}>, and <Φ^{Max}>, there is no inherent causal difference between simple periodic rules and rules with non-uniform stable states. The periodic rule 51, for example, is causally equivalent to the stationary rule 204 (Figure 3A and Figure 4). In the same way, the Majority rule 232 (11101000), which evolves to a non-uniform stable state, is causally equivalent to rule 23 (00010111), which is periodic with cycle length 2. Rule 23 is the negation and reversion of rule 232; the same holds for the rule pair 51 and 204. Analyzing the cause-effect structures of ECA thus reveals additional equivalences between rules: all rules that under negation or reversion transform into the same rule are causally equivalent to their transformation (e.g., in class III, 105 to 150, or in class II, the equivalency classes of rules 12 and 34, etc.; see Figure 4, rules with identical <Φ^{Max}> and concept profiles). Such additional symmetries between rules have recently been proposed in a numerical study [23], which equates ECA rules if they show equivalent compressed state-to-state transition networks for finite ECA across different numbers of cells N. Since compressing the transition network is based on grouping states with equivalent causes and effects, the approach is related to IIT measures of cause-effect information, but lacks the notion of integration. Intrinsic causal equivalencies between rules that converge to fixed points and rules that show simple periodic behavior challenge the usefulness of a distinction based on these dynamical aspects: at least for the system itself, the difference does not matter.
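
These transformations are easy to verify computationally. In the sketch below (ours, not the authors' code), "negation" is read as complementing every output bit of the rule table and "reversion" as the left-right mirror of the neighborhood; under these readings, the rule pairs named in the text check out:

```python
def negate(rule):
    """Complement every output bit of the 8-entry rule table ("0" <-> "1")."""
    return rule ^ 0xFF

def reflect(rule):
    """Left-right mirror: the output for (l, c, r) becomes the output for (r, c, l)."""
    out = 0
    for n in range(8):
        l, c, r = (n >> 2) & 1, (n >> 1) & 1, n & 1
        mirrored = (r << 2) | (c << 1) | l
        out |= ((rule >> mirrored) & 1) << n
    return out

# The rule pairs from the text: 232 <-> 23 and 51 <-> 204 under negation.
assert negate(232) == 23 and negate(51) == 204
```

For reference, the mirror of the asymmetric rule 110 is rule 124, while the mirror-symmetric Majority rule 232 maps to itself under `reflect`.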

#### 2.1.3. Comparison to Several Rule-Based Indicators of Dynamical Complexity

We compared the <Φ^{Max}>, <#concepts>, and <Σφ^{Max}> obtained from N = 5 ECA systems to three prominent rule-based measures [24]: Langton’s λ parameter [25], the sensitivity measure µ [26], and the Z-parameter [27]. For ECA, the λ parameter simply corresponds to the fraction of “0” or “1” outputs in the ECA rule. Rule 232 (11101000), for example, has λ = ½. Class III and IV rules typically have high λ, and the rules with the highest <Φ^{Max}> values also have the maximum value of λ = ½ (Figure 5A, first panel). The parameter µ measures the sensitivity of a rule’s output bit to a 1-bit change in the state of the neighborhood, counted across all possible neighborhood states [26]. Nonlinear rules that depend on the state of every cell in the neighborhood, such as the Parity rule 150 (10010110), have the highest values of µ. Finally, the Z parameter assesses the probability with which a partially known past neighborhood can be completed with certainty to the left or right side [27]. Sensitive rules with high µ also have high Z. However, Z can also be high for some simple rules, such as the Identity rule 204 (11001100), which moreover has the maximal λ = ½.
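
The parameters λ and µ can both be computed directly from the rule table. This sketch is our own illustration: it takes λ as the fraction of “1” outputs and normalizes µ by the 8 × 3 possible single-bit neighborhood perturbations (other normalizations appear in the literature):

```python
def langton_lambda(rule):
    """Fraction of "1" outputs in the 8-entry ECA rule table."""
    return bin(rule & 0xFF).count("1") / 8

def sensitivity_mu(rule):
    """Fraction of single-bit neighborhood perturbations (8 neighborhoods
    x 3 bit flips) that change the rule's output bit."""
    out = [(rule >> n) & 1 for n in range(8)]
    flips = sum(out[n] != out[n ^ (1 << b)] for n in range(8) for b in range(3))
    return flips / 24

# Majority rule 232 has lambda = 1/2; the Parity rule 150 flips its output
# under every single-bit perturbation, giving maximal mu.
assert langton_lambda(232) == 0.5
assert sensitivity_mu(150) == 1.0
```

Consistent with the text, the Identity rule 204 scores low but nonzero on µ (only flips of a cell's own past state change the output), while its λ is maximal.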

**Figure 5.** Relation of IIT measures to the rule-based parameters λ, µ, and Z. Correlation of <Φ^{Max}>, <#concepts>, and <Σφ^{Max}> with (**A**) Langton’s λ parameter; (**B**) sensitivity µ; and (**C**) Wuensche’s Z parameter (see text for details). ρ is Spearman’s rank correlation coefficient, with p < 0.001 for all correlations, except for the λ and Z parameters vs. <#concepts>, which were not significantly correlated when corrected for multiple comparisons. Colors indicate Wolfram classes I–IV as in Figure 2 and Figure 4: (I) uniform fixed points in black; (II) non-uniform fixed points and periodic behavior in blue; (III) random, chaotic behavior in green; and (IV) complex behavior in red.

Fixing M_{t} = m_{t} in this case may still leave the remaining inputs to the elements in Z_{t+1} undetermined (“noised”, i.e., equally likely to be “0” or “1”). A single cell in an ECA that implements the Majority rule 232, for example, cannot completely determine its next state and that of its neighbors by itself (see Methods). Low values of λ, µ, and Z all indicate high degeneracy in the system. This means that, on average, the system’s mechanisms and current states do not constrain the past and future states of the system much. Unselective cause-effect repertoires lead to concepts with low φ^{Max}, fewer higher-order concepts in the system, and less integrated conceptual information Φ.

Of the three parameters, µ correlates most consistently with <Φ^{Max}>, <#concepts>, and <Σφ^{Max}>. This is because, similar to the IIT measures, µ assesses the causal power of each cell in the rule neighborhood by testing whether perturbing it makes a difference to the output. Unlike the λ and Z parameters, µ thus assigns low (but still non-zero) values to rules with selective but trivially reducible causes and effects, such as the Identity rule 204 or its negation rule 51, which depend only on a cell’s own past value but not that of its neighbors (Figure 3A). However, compared to the IIT measures, the sensitivity parameter µ lacks the notion of causal composition, according to which higher-order mechanisms can have irreducible cause-effect power. Generally, while λ, µ, and Z are largely based on empirical considerations, measures of information integration are derived from one underlying principle: intrinsic, irreducible cause-effect power.

#### 2.1.4. Other Types of Classifications

<Φ^{Max}>, <#concepts>, and <Σφ^{Max}> correlate in a necessary, but not sufficient, manner with morphological complexity, which can be low while the IIT measures are high (Spearman’s ρ = 0.60/0.35/0.53, p < 0.001/0.05/0.001 for <Φ^{Max}>/<#concepts>/<Σφ^{Max}> for N = 5 ECA systems) (see below, Figure 6). Finally, there is also a significant correlation between the IIT measures and the Kolmogorov complexity of an ECA rule, approximated by its Block Shannon entropy (ρ = 0.65/0.40/0.59, p < 0.001/0.005/0.001) or compressibility (ρ = 0.61/0.37/0.53, p < 0.001/0.05/0.001), averaged over several different initial conditions [30].

#### 2.1.5. Being vs. Happening

The dynamical repertoire of an ECA is bounded by its size (an N-cell system has only 2^{N} states). Small systems with few cells thus cannot unfold the full dynamical potential of their rule, although the local interactions of each cell are the same as in a larger system (Figure 6C). In order to predict the actual complexity of a CA’s dynamical evolution accurately, the initial state and the size of the system must be known in addition to the rule. This is why purely rule-based measures can, overall, only provide necessary but not sufficient conditions for complex behavior.

**Figure 6.** “Being” vs. “Happening”. (**A**) Different rules show the same behavior under certain initial conditions. Rules 0, 128, and 232 all immediately converge to the state “all 0” for the standard initial condition of a single cell in state “1”. Perturbing the system into random initial conditions, however, reveals that rules 0 and 128 belong to class I, while rule 232 belongs to class II; (**B**) Rules 2, 74, and 106 likewise show the same behavior for the standard initial condition, but belong to different classes under random initial conditions: rules 2 and 74 belong to class II, while rule 106 belongs to class IV; (**C**) Rules from the four different Wolfram classes all quickly converge to a steady state or periodic behavior for small ECA with N = 6 cells, as indicated by their maximal transient length max(transient) for N = 6 cells (compare to Figure 1B and Figure 3). The IIT measures reflect the classification (“being”), i.e., the potential dynamical complexity, rather than the actual behavior (“happening”). The indicated IIT measures were obtained from the respective N = 6 ECA of each rule.

ECA rules reveal characteristic values of <Φ^{Max}> already for small systems with 3–7 nodes. The way <Φ^{Max}>, <#concepts>, and <Σφ^{Max}> increase with increasing system size is also characteristic of a rule’s cause-effect structure.

For rules with only 1st-order concepts, <Φ^{Max}> values stay low, since no matter where the system is partitioned, only one concept is lost. By contrast, rules with higher-order concepts at each subset size, such as rules 30, 106, or 110, show an almost exponential increase of <#concepts>, <Σφ^{Max}>, and <Φ^{Max}>. The maximum number of potential concepts in any system is given by the 2^{N} − 1 candidate mechanisms (non-empty combinations of elements) in the system (indicated by the thin black dashed line in Figure 7B). While <#concepts> and <Φ^{Max}> will continue to grow for rules like 30, 106, and 110, the number of impossible concepts also increases, because ECA are limited to nearest-neighbor interactions. In an N = 6 ECA system A–F, for example, the concepts AD, BE, and CF are impossible, since they are reducible by default, with φ = 0: because their elements do not share any inputs or outputs, partitioning between them can never make any difference. Finally, as described above, for certain rules the IIT measures depend on the parity of N, as for the XOR rule 90 (Figure 3E).
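
Which element pairs are reducible by default can be checked mechanically. The sketch below (ours) tests whether two cells of a nearest-neighbor ECA share any inputs or outputs:

```python
def shares_inputs_or_outputs(i, j, n):
    """In an N-cell ECA, cell k's inputs (and, symmetrically, the cells it
    feeds) are {k-1, k, k+1} mod N. A pair that shares neither can be
    partitioned without effect, so its joint concept is reducible (phi = 0)."""
    nbhd = lambda k: {(k - 1) % n, k % n, (k + 1) % n}
    return bool(nbhd(i) & nbhd(j))

# In an N = 6 system A-F, exactly the opposite pairs AD, BE, CF share nothing.
impossible = [(i, j) for i in range(6) for j in range(i + 1, 6)
              if not shares_inputs_or_outputs(i, j, 6)]
assert impossible == [(0, 3), (1, 4), (2, 5)]
```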

**Figure 7.** (**A**) <Φ^{Max}>; (**B**) <#concepts>; and (**C**) <Σφ^{Max}> across N = 3–7 cells. Already small systems reveal typical features of an ECA’s cause-effect structure. How the IIT measures of an ECA rule depend on N is further indicative of the general properties of the rule’s cause-effect structure (see main text); (**D**) Maximal transient length of the respective rules evaluated across all system states, plotted against the number of cells N. Class I rules and class II rules with lower <Φ^{Max}> values (32, 108, 128, and 232) tend to have shorter transients as well; (**E**) Maximal transient length of all ECA equivalency classes plotted against <Φ^{Max}>. The top panel shows the correlation of N_{T} = 14 ECA transients to the <Φ^{Max}> values obtained from the respective N_{I} = 6 ECA systems. The lower panel shows the correlation of N_{T} = 15 ECA transients to the <Φ^{Max}> values obtained from the respective N_{I} = 5 ECA systems. The labels indicate some of the example rules discussed above. (**A**–**E**) Note the logarithmic scale of the y-axis in all panels. Colors denote Wolfram classes as in Figure 2, Figure 4, and Figure 5.

As finite, deterministic systems, ECA must settle into a fixed point or periodic cycle after at most 2^{N} time steps. In Figure 7D,E we show the maximal transient length of ECA rules across all initial states of small N = 5–15 cell systems. While Wolfram’s ECA classification is based on the long-term behavior of a rule for most initial states, the maximal transient length of some class I and II rules in these small systems can be of the same order as the maximal transient length of other class III or IV rules (compare, e.g., rule 74 to rule 90 in Figure 7D), indicating a potential for complex dynamics under some initial conditions. In Figure 7E the maximal transient lengths of N = 14 and N = 15 cell systems are plotted against <Φ^{Max}> of their respective rules in N = 6 and N = 5 systems. Since the maximal transient length as well as <Φ^{Max}> of certain rules depends on the parity of N (see Figure 7A,D), we evaluated the correlation coefficients between the largest even/odd-numbered systems for which computing maximal transient lengths (N_{T} = 14/15) and IIT measures (N_{I} = 6/5) was feasible for all rules. Similar results were obtained comparing all other pairs of system sizes. While the maximal transient length also correlated with <#concepts> and <Σφ^{Max}> (ρ = 0.49/0.56, p < 0.001 for N_{T} = 14/N_{I} = 6 and ρ = 0.40/0.51, p < 0.01/0.001 for N_{T} = 15/N_{I} = 5), it was most strongly correlated with the integrated conceptual information <Φ^{Max}> (Figure 7E, ρ = 0.59/0.53, p < 0.001 for both N_{T} = 14/N_{I} = 6 and N_{T} = 15/N_{I} = 5), indicating that irreducibility is a relevant factor for the dynamical complexity of a system.
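
Because an N-cell ECA is deterministic and finite, the maximal transient length can be found by brute force over all 2^{N} initial states, which is feasible for small N. The sketch below is our own illustration, not the authors' code:

```python
import itertools

def max_transient(rule, n):
    """Longest number of steps, over all 2**n initial states, before an
    n-cell ECA first enters a fixed point or periodic cycle."""
    def step(s):
        return tuple((rule >> ((s[i - 1] << 2) | (s[i] << 1) | s[(i + 1) % n])) & 1
                     for i in range(n))
    longest = 0
    for init in itertools.product((0, 1), repeat=n):
        first_seen = {}
        s, t = init, 0
        while s not in first_seen:
            first_seen[s] = t
            s, t = step(s), t + 1
        # The revisited state marks the cycle entry; its first-visit time
        # is the number of transient steps for this initial state.
        longest = max(longest, first_seen[s])
    return longest

# The Identity rule 204 is already at a fixed point in every state,
# while rule 0 collapses any state to "all 0" in one step.
assert max_transient(204, 5) == 0
assert max_transient(0, 5) == 1
```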

High values of <Φ^{Max}> are not sufficient conditions for a system to actually show complex behavior under every initial condition. Nevertheless, employing IIT measures of causal complexity can significantly reduce the search space for complex rules, since high values appear to be necessary for class III and IV behavior. Finally, class II rules with high <Φ^{Max}> and many concepts tend to exhibit long transients for some initial conditions and typically share certain rule-based features with more complex class III and IV rules, which can be interpreted as a high potential for complex behavior under small adaptive changes in an evolutionary context.

#### 2.2. Behavior and Cause-Effect Power of Adapting Animats

In the following, we examine whether the animats’ fitness and dynamical complexity are reflected in their <#concepts> and <Φ^{Max}>, as observed in the isolated ECA systems. Finally, we discuss how the adaptive advantages of integrated animats, such as their higher economy in terms of mechanisms per element, and larger degeneracy in architecture and function, are related to the animats’ behavioral and dynamical repertoire.

**Figure 8.** Fitness, <#concepts>, <Φ^{Max}>, and state entropy H of animats adapting to four task environments with varying requirements for internal memory. (**A**) Schematic of an animat in an example environment. On each trial, the animat has to recognize the size of the downward-moving block and either catch or avoid it. Blocks continuously move downward, and either to the right or left, at a speed of one unit per time step (periodic boundary conditions). The animat has two sensors with a space of one unit between them, and thus a total width of three units. Its two motors can move it one unit to the left or right, respectively; (**B**) Animat evolution. Each animat is initialized at generation 0 without connections between elements. Through mutation and fitness selection, the animats develop complex network structures with mechanisms that enable them to solve their task. Animats were allowed to evolve for 60,000 generations; (**C**) Illustration of the four task environments with increasing difficulty and requirements for internal memory from left to right; (**D**) The final fitness achieved by the animats after 60,000 generations corresponds to the task difficulty. The two red traces show data from a subset of Task 3 and Task 4 trials with the same high average fitness as Tasks 1 and 2. Animats that evolved in the more difficult tasks, particularly Task 4, developed significantly more concepts, higher <Φ^{Max}> values, and higher state entropy H than those animats that evolved in Task 1. Shaded areas around curves denote SEM across 50 independent evolutions (LODs); (**E**) Scatter plots of all evaluated generations of animats from all 50 LODs of Task 4, illustrating the relation of <#concepts>, <Φ^{Max}>, and H to fitness, and of H to <Φ^{Max}>. The circle size is proportional to the number of animats with the same pair of values. Red dots denote the final generation of animats from all 50 independent evolutions. Panels A, C, and D were adapted with permission from [19].

Animats that adapted to the more difficult task environments (Tasks 3 and 4) evolved more concepts and higher <Φ^{Max}> than those in the simpler task environments (Tasks 1 and 2). This is even more evident when the tasks are compared at the same level of fitness (red lines in Figure 8D). Note that the <#concepts> shown in Figure 8 was evaluated across all of the animat’s elements, including sensors and motors, in order to capture all fitness-relevant causal functions, while Φ^{Max} is the integrated conceptual information of the set of elements that forms the major complex (MC) in an animat’s “brain” (values for the number of MC concepts behave similarly and can be found in [19]).

^{Max}>, and the state entropy H are plotted against fitness for every animat of all 50 LODs of Task 4. All three measures are positively correlated with fitness (Spearman's rank correlation coefficient ρ = 0.80/0.79/0.54 for H/<#concepts>/<Φ^{Max}>, with p < 0.001). Note that animats from the same LOD are related. The red dots in Figure 8E highlight the final generation of each LOD, which are independent of each other. Taking only the final generation into account, H and <#concepts> still correlate significantly with fitness (ρ = 0.63/0.56 for H/<#concepts>, with p < 0.001). However, the correlation for <Φ^{Max}> is not significant after correcting for multiple comparisons, since having more <Φ^{Max}> even at lower fitness levels has no cost for the animats.

_{Sen} is mostly task dependent: during adaptation, H_{Sen} increases only slightly for Tasks 3 and 4 and decreases slightly for Tasks 1 and 2 (see Figure S4 of [19]). The entropy of the motor states H_{Mot} represents the behavioral repertoire (behavioral complexity) of the animats and is included in H. H_{Mot} increases during adaptation, but asymptotes at similar values (~1.6) for all tasks. This reflects the fact that the behavioral requirements ("catch" and "avoid") are similar in all task environments (see Figure S4 of [19]).
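Such state entropies can be estimated directly from observed state frequencies. A minimal sketch (the motor trace below is a hypothetical toy example, not the animats' actual data):

```python
from collections import Counter
from math import log2

def state_entropy(states):
    """Shannon entropy (bits) of the empirical distribution over observed states."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Toy trace of two binary motors (hypothetical data):
motor_trace = [(0, 0), (0, 1), (1, 0), (0, 1), (1, 1), (0, 0), (0, 1), (1, 0)]
H_mot = state_entropy(motor_trace)  # bounded by 2 bits for 2 binary motors
```

With two binary motors the maximum is 2 bits, so an asymptote of ~1.6 bits indicates a broad but not perfectly uniform use of the four motor states.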

^{Max}>. Nevertheless, H is also directly related to <Φ^{Max}>, since the highest level of entropy for a fixed number of elements is achieved if, for each element, the probability of being in state "0" or "1" is balanced. As we saw above for elementary cellular automata, balanced rules that output "0" or "1" with equal probability are more likely to achieve high values of <Φ^{Max}> (Figure 5B, λ parameter). This is because mechanisms with balanced cause-effect repertoires have on average higher φ values and lead to more higher-order concepts, and thus to cause-effect structures with higher <Φ^{Max}>. Likewise, as shown for Task 4 in Figure 8E, right panel, animats with high <Φ^{Max}> also have high entropy H (ρ = 0.66, p < 0.001; taking only the final generation into account, the correlation is still almost significant after correcting for multiple comparisons, with ρ = 0.44, p = 0.053).

^{Max}> and <#concepts> is necessary in order to have the potential for complex dynamics, and thus high state entropy. The animats, however, receive sensory inputs that can drive their internal dynamics. Consequently, even animats with modular, mainly feedforward structures (Φ = 0) can have high state entropy H while they are behaving in their world. Keeping the sensory inputs constant, animats converge to steady states or periodic dynamics of small cycle length within at most seven time-steps. The average length of these transients, measured for the final generation of animats of all 50 LODs, tends to correlate with the average <Φ^{Max}> calculated from all states experienced during the 128 test trials, especially in the simpler Tasks 1 and 2 (Spearman's rank correlation coefficient ρ = 0.45/0.46/0.43/0.39 for Tasks 1–4, with p = 0.04/0.03/0.067/0.19 after correcting for multiple comparisons). Interestingly, there is no correlation between transient length and the animats' fitness. This is because, in general, high fitness only requires a rich behavioral repertoire while interacting with the world, not in isolation.

_{SMMI}) [34] and predictive information (I_{Pred}) [35] of the animats, as defined in [19,36], evolved during adaptation. I_{SMMI} measures the differentiation of the observed input-output behavior of the animats' sensors and motors. I_{Pred}, the mutual information between observed past and future system states, measures the differentiation of the observed internal states of the animats' hidden and motor elements. Both high I_{SMMI} and high I_{Pred} should be advantageous during adaptation to a complex environment, since they reflect the animats' behavioral and dynamical repertoire, in particular how deterministically one state leads to another. I_{Pred} in the animats is indeed closely tied to the state entropy: it increases during adaptation with increasing fitness and a higher number of internal elements. I_{SMMI}, however, may actually decrease during adaptation, since an increase in internal memory may reduce the correlation between sensors and motors, which are restricted to two each (see Figure S4 of [19]). Both I_{SMMI} and I_{Pred} are correlational measures, which depend on the observed distributions of system states. By contrast, analyzing the cause-effect structure of a system requires system perturbations that reveal the causal properties of the system's mechanisms under all possible initial states. The cause-effect structure thus takes into account the entire set of possible circumstances the animat might be exposed to, not just those observed in a given setting. As for cellular automata, an animat's cause-effect structure, evaluated by its <#concepts> and <Φ^{Max}>, quantifies its intrinsic causal complexity and its dynamical potential.
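As a mutual information between successive observed states, I_{Pred} can be estimated with a simple plug-in estimator from a single recorded state sequence. A minimal sketch under that assumption (the function name is ours, not from [19,36]):

```python
from collections import Counter
from math import log2

def predictive_information(states):
    """Plug-in estimate of I(S_t; S_{t+1}) in bits from one observed state sequence."""
    pairs = list(zip(states[:-1], states[1:]))  # consecutive (past, future) pairs
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    future = Counter(f for _, f in pairs)
    # I = sum over (p, f) of p(p, f) * log2( p(p, f) / (p(p) * p(f)) )
    return sum((c / n) * log2((c / n) / ((past[p] / n) * (future[f] / n)))
               for (p, f), c in joint.items())

# A perfectly alternating binary sequence: the past fully determines the future.
i_pred = predictive_information([0, 1, 0, 1, 0, 1, 0, 1, 0])  # 1 bit
```

Note that, unlike the perturbational cause-effect measures, this estimate only sees states that actually occur in the recorded trace.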

^{Max}> > 0, high degeneracy) than among animats with the same perfect fitness but <Φ^{Max}> = 0 (low degeneracy). In Task 1, for example, integrated solutions were encountered in six out of 50 lines of descent (LODs), modular solutions in seven out of 50. Nevertheless, analyzing all animats with perfect fitness across all generations and LODs, animats with <Φ^{Max}> > 0 showed 332 different behavioral strategies, while animats with <Φ^{Max}> = 0 only produced 44. The reason is that integrated networks are more flexible and allow for neutral mutations that do not decrease fitness. By contrast, modular networks showed very little variability once a solution was encountered. Having more potential solutions should give integrated networks a probabilistic selective advantage, and should also lead to more heterogeneous populations, which provide an additional advantage in the face of environmental change.

## 3. Conclusions

## 4. Methods

**Figure 9.** Information, integration, and exclusion postulates at the level of mechanisms. (**A**) ECA 232 with N = 6 cells A–F, illustrated as a network of interacting elements in the current state A-F_{t} = 111000. All cells implement rule 232, the majority rule; (**B**) Information: Element A_{t} in state 1 constrains the past and future states of ABF_{t±1} compared to their unconstrained (flat) distribution (overlaid in gray); (**C**) Integration: Elements A and D do not form a higher-order mechanism, since AD_{t} is reducible to its component mechanisms A_{t} and D_{t}. AB_{t} in state 11, however, does form a higher-order mechanism, since AB_{t} specifies both irreducible causes and irreducible effects, as evaluated by integrated information φ (see text); (**D**) Exclusion: Cause and effect repertoires are evaluated over all sets of system elements. The cause and effect repertoires that are maximally irreducible, together with their φ^{Max} value, form the concept of a mechanism. In the case of AB, the maximally irreducible cause repertoire is over elements BCF_{t−1}; the maximally irreducible effect repertoire is over elements AB_{t+1}.

#### 4.1. Mechanisms and Concepts

_{t} in state m_{t} forms a mechanism with cause-effect power on the system, we first need to evaluate whether (i) the past state of the system makes a difference to the set M_{t}, and (ii) the state of the set M_{t} = m_{t} makes a difference to the future state of the system. Condition (i) is satisfied if M_{t}, by being in its current state m_{t}, constrains the probability distribution of possible past states of a set of system elements Z_{t−1}, specified by the cause repertoire ${p}_{cause}\left({z}_{t-1}\text{|}{m}_{t}\right)$. The cause repertoire of element A_{t} in state "1" of the ECA shown in Figure 9A over its inputs ABF_{t−1}, for example, reflects A's update rule: only states in which two or more of A's inputs are "1" are possible past states of A_{t} = 1 (see Supplementary Methods for details on the calculation of the cause repertoire). The cause information (ci) of A_{t} = 1 quantifies the distance between its cause repertoire and the unconstrained (uc) cause repertoire, in which each past state of ABF_{t−1} is assumed equally likely (see Figure 9B).
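The cause repertoire described here can be reproduced in a few lines. The sketch below (function names are ours, not the paper's PyPhi implementation) applies Bayes' rule with a uniform, unconstrained prior over the 2^{3} possible past states of A's three inputs under the majority rule 232: given A_{t} = 1, the four past states with at least two "1"s each receive probability 1/4, and the rest are impossible.

```python
from itertools import product

def majority(inputs):
    """ECA rule 232: output 1 iff at least two of the three inputs are 1."""
    return 1 if sum(inputs) >= 2 else 0

def cause_repertoire(output_state, rule):
    """p(past | present) by Bayes' rule, assuming a uniform (unconstrained)
    prior over the 8 possible past states of the three inputs."""
    likelihood = {past: 1.0 if rule(past) == output_state else 0.0
                  for past in product((0, 1), repeat=3)}
    z = sum(likelihood.values())
    return {past: l / z for past, l in likelihood.items()}

rep = cause_repertoire(1, majority)
# Four past states have two or more 1s, so each gets probability 1/4;
# the other four past states are impossible given A_t = 1.
```

The cause information would then be the distance between this repertoire and the uniform distribution over all eight past states.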

_{t}, constrains the probability distribution of possible future states of a set of system elements Z_{t+1}, specified by the effect repertoire ${p}_{effect}\left({z}_{t+1}\text{|}{m}_{t}\right)$. Figure 9B shows the effect repertoire of A_{t} = 1 over its outputs ABF_{t+1} (see Supplementary Methods for details on the calculation of the effect repertoire). The effect information (ei) of A_{t} = 1 quantifies the distance between its effect repertoire and the unconstrained effect repertoire, which considers all input states to all system elements to be equally likely.

_{t} = 1 by itself cannot perfectly constrain its outputs ABF_{t+1}. Since the states of the other inputs to ABF_{t+1} are unconstrained (perturbed into "0" and "1" with equal probability), the effect of A_{t} = 1 by itself is merely to increase the probability of its output elements being in state "1" at t + 1 compared to the unconstrained distribution, as specified by the effect repertoire shown in Figure 9B.
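This partial constraint can be checked by enumeration. In the simplified sketch below (our own naming; one output element of the majority rule is considered in isolation), fixing one input to "1" while perturbing the other two inputs uniformly raises the output's probability of being "1" from the unconstrained 1/2 to 3/4, consistent with the description above.

```python
from itertools import product

def majority(inputs):
    """ECA rule 232: output 1 iff at least two of the three inputs are 1."""
    return 1 if sum(inputs) >= 2 else 0

def marginal_effect(fixed_value):
    """p(output = 1) of a majority element when one of its inputs is fixed and
    the other two are perturbed into 0/1 with equal probability."""
    others = list(product((0, 1), repeat=2))
    return sum(majority((fixed_value,) + rest) for rest in others) / len(others)

p_on = marginal_effect(1)                              # 3/4
p_uc = (marginal_effect(0) + marginal_effect(1)) / 2   # unconstrained: 1/2
```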

_{t}, the minimum of its ci and ei, must be positive. Here, for A_{t} = 1:

_{t} and D_{t} together do not constrain the past or future states of the system more than A_{t} and D_{t} taken separately. Irreducibility is tested by partitioning connections, which means making connections causally ineffective; this can be thought of as "injecting independent noise" into the connections. Partitioning between A_{t} and D_{t} and their respective inputs and outputs does not make a difference to AD_{t}'s cause or effect repertoire (φ = 0). The elements A and B together, however, do form a higher-order mechanism AB_{t}, since no matter how the mechanism's connections are partitioned, AB_{t}'s cause-effect repertoire is changed. To assess the irreducibility of AB_{t} over ABCF_{t−1}, we first calculate the integrated cause information φ_{cause} for all possible partitions of AB_{t} and ABCF_{t−1}, as the distance between the intact cause repertoire and the product cause repertoire of the partition. In this way, we find the partition that makes the least difference to the cause repertoire, the minimum information partition $MI{P}_{cause}=\underset{P}{\text{argmin}}\left({\phi }_{cause}\left(A{B}_{t}=11,ABC{F}_{t-1},P\right)\right)$. Here:

_{effect} for all possible partitions to find $MI{P}_{effect}=\underset{P}{\text{argmin}}\left({\phi }_{effect}\left(A{B}_{t}=11,ABC{F}_{t+1},P\right)\right)$. Here:

_{t} = m_{t} with positive integrated information $\phi \left({m}_{t},{Z}_{t\pm 1},MIP\right)$, the minimum of φ_{cause} and φ_{effect}, is a mechanism that has cause-effect power on the system. Note that, depending on whether φ = φ_{cause} or φ = φ_{effect}, M_{t}'s overall $MIP=\underset{P}{\text{argmin}}\left(\phi \left({m}_{t},{Z}_{t\pm 1},P\right)\right)$ is either MIP_{cause} or MIP_{effect}. The pair of M_{t}'s cause and effect repertoires describes how M_{t} constrains the past and future of the system elements Z_{t±1} and is termed the "cause-effect repertoire" $CER({m}_{t},{Z}_{t\pm 1})$.
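The minimization over partitions can be sketched generically. The following simplified sketch is our own (real IIT partitions cut mechanism and purview parts jointly rather than merely splitting a set of elements; see [17]): it enumerates unordered bipartitions and selects the cut minimizing a supplied φ function. The φ values below are hypothetical stand-ins for the repertoire distances.

```python
from itertools import combinations

def bipartitions(elements):
    """All ways to cut a set of elements into two non-empty parts (unordered)."""
    elements = list(elements)
    n = len(elements)
    for r in range(1, n // 2 + 1):
        for part in combinations(elements, r):
            rest = tuple(e for e in elements if e not in part)
            if r < n - r or part < rest:  # count each unordered split once
                yield part, rest

def minimum_information_partition(partitions, phi_under_partition):
    """Return (MIP, phi): the cut that makes the least difference."""
    return min(((p, phi_under_partition(p)) for p in partitions),
               key=lambda pair: pair[1])

parts = list(bipartitions("ABCF"))
# Hypothetical phi value per cut, standing in for the repertoire distance:
toy_phi = {p: 0.5 for p in parts}
toy_phi[(('A',), ('B', 'C', 'F'))] = 0.2  # this cut changes the repertoire least
mip, phi_min = minimum_information_partition(parts, toy_phi.get)
```

A set of four elements admits 2^{4−1} − 1 = 7 such bipartitions; in the full theory the number of candidate cuts grows much faster, which is why the search dominates the computational cost.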

_{cause} and φ_{effect} can be measured for cause-effect repertoires over all possible sets of system elements Z_{t±1}. The cause-effect repertoire of M_{t} = m_{t} that is maximally irreducible over the sets of elements ${Z}_{t\pm 1}^{*}$, with max(φ_{cause}) and max(φ_{effect}) (see Table 1), together with its integrated information φ^{Max}, forms M_{t}'s "concept". It can be thought of as the causal role of the mechanism in its current state from the intrinsic perspective of the system itself. The concept of AB in state "11" in the ECA system, for example, is specified by the cause repertoire over the set of elements BCF and the effect repertoire over the set of elements AB, with ${\phi }^{Max}=min({\phi }_{cause}^{Max},{\phi }_{effect}^{Max})=0.2$.

#### 4.2. Cause-Effect Structures

_{t} in state "111000", for example, has 14 concepts: six elementary and eight higher-order concepts (Figure 10A).

_{t}) of the intact system S_{t} = s_{t} and the cause-effect structure of the partitioned system $C({s}_{t},{P}_{\to })$. Again, we test all possible partitions ${P}_{\to }$ in order to find the minimum information partition (MIP), the partition that makes the least difference to the cause-effect structure of S_{t}. The distance between two cause-effect structures is evaluated using an extended version of the earth mover's distance (EMD). It quantifies the cost of transforming one cause-effect structure into another, taking into account how much the cause-effect repertoires and φ^{Max} values of all concepts change through the partition; see [17].

**Figure 10.** Information, integration, and exclusion postulates at the level of systems of mechanisms. (**A**) Information: The system A-F_{t} in state "111000" has 14 concepts: six elementary concepts A_{t}, B_{t}, C_{t}, D_{t}, E_{t}, and F_{t}; four 2nd-order concepts of adjacent elements, as well as AC_{t} and DF_{t}; and the 3rd-order concepts ABC_{t} and DEF_{t}; (**B**) Integration: The cause-effect structure of A-F_{t} is irreducible to the cause-effect structure of its minimum information partition (MIP), which eliminates the effects of subset ABC_{t} onto subset DEF_{t}, as measured by the integrated conceptual information Φ, which quantifies the distance between the whole and partitioned cause-effect structures; (**C**) Exclusion: Cause-effect structures and their Φ values are evaluated over all sets of system elements. The set of elements ${S}_{t}^{*}$ with the maximally irreducible cause-effect structure forms a complex; here it is the whole system A-F_{t}. This is not always the case: for different types of rules, or more heterogeneous connections or rules, system subsets may be maximally irreducible complexes. Complexes cannot overlap. In the three examples on the right, DEF_{t} may thus form another complex beneath the main complex ABC_{t}; the whole system, however, is excluded from being a complex, as are subsets of elements within a complex.

_{t} the MIP renders the connections from subset ABC to DEF causally ineffective. Although this partition is the one that least affects the cause-effect structure of A-F_{t}, it still alters the cause-effect repertoires of many concepts (Figure 10B). Only the concepts B_{t} = 1, E_{t} = 0, AB_{t} = 11, and BC_{t} = 11 remain intact. A-F_{t}'s cause-effect structure is thus irreducible, with:

^{Max}). A set of elements that has Φ^{Max} and is thus maximally irreducible is called a "complex". In our example, the whole set A-F_{t} is a complex, since it has the highest Φ value compared to all its subsets. This is not always the case: given different ECA rules, systems with heterogeneous rules, or systems with slightly different connectivity, smaller sets may have the most irreducible cause-effect structure (Figure 10C). Once the major complex with Φ^{Max} is identified, non-overlapping sets of elements can form additional, minor complexes, as shown for the system with heterogeneous rules (Figure 10C, middle). The whole system A-F_{t}, however, is excluded from being a complex in this example, since it overlaps with the major complex ABC_{t}.
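The exclusion of overlapping complexes can be sketched as a greedy selection over candidate subsets: pick the set with the highest Φ, then repeat among the sets that do not overlap it. The Φ values below are hypothetical toy numbers chosen to mirror the example, not computed ones.

```python
def complexes(phi_values):
    """Greedy application of the exclusion postulate: pick the candidate set
    with the highest Phi, then recurse on the non-overlapping remainder."""
    remaining = dict(phi_values)
    chosen = []
    while remaining:
        best = max(remaining, key=remaining.get)
        if remaining[best] <= 0:
            break
        chosen.append(best)
        # Exclude every candidate that shares elements with the chosen complex.
        remaining = {s: v for s, v in remaining.items()
                     if not (set(s) & set(best))}
    return chosen

# Hypothetical Phi values for candidate subsets of a 6-cell system:
toy_phi = {"ABCDEF": 0.9, "ABC": 1.4, "DEF": 0.7, "AB": 0.3, "DE": 0.2}
found = complexes(toy_phi)
# "ABC" wins (major complex); "ABCDEF" overlaps it and is excluded;
# "DEF" survives as a minor complex.
```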

^{N} axes in a system of N cells with binary elements. Each concept of the system corresponds to a point in this high-dimensional space. The coordinates are given by the probabilities specified in the concept's cause-effect repertoire for each past and future system state. The size of each point corresponds to the concept's φ^{Max} value. To plot cause-effect structures in cause-effect space, we project the high-dimensional structure onto the three past or future states (blue or green, respectively) for which the concepts' cause-effect repertoires have the highest variance (see Figure 3C for an N = 5 cell rule 232 system in state "all 0"). In this way, the selectivity and distribution of concepts in the state space of their system can be visualized in a straightforward manner. On the IIT webpage [20], all example cause-effect structures of this article can be viewed in 3D.
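Selecting the projection axes amounts to ranking coordinates by their variance across concepts and keeping the top three. A minimal sketch (the concept coordinates below are hypothetical toy probabilities, not values from any figure):

```python
def top_variance_axes(points, k=3):
    """Indices of the k coordinates with the highest variance across points."""
    n = len(points)
    dims = len(points[0])
    variances = []
    for d in range(dims):
        col = [p[d] for p in points]
        mean = sum(col) / n
        variances.append(sum((x - mean) ** 2 for x in col) / n)
    return sorted(range(dims), key=lambda d: variances[d], reverse=True)[:k]

# Toy concept coordinates (one probability per system state; hypothetical):
concepts = [(0.5, 0.1, 0.9, 0.5),
            (0.5, 0.9, 0.1, 0.5),
            (0.5, 0.5, 0.5, 0.5)]
axes = top_variance_axes(concepts, k=2)  # dimensions 1 and 2 vary most
```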

^{Max}. In order to have high cause-effect power, it is necessary to have many internal mechanisms, which form a rich cause-effect structure with many elementary and higher-order concepts.

#### 4.3. Transient Lengths

^{N} time steps, corresponding to the number of possible states of the system. Transient lengths for the animats were obtained in the same way, for all possible sensor states, keeping the sensor inputs to the animat constant while the internal states and motors evolved in time.
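This procedure can be sketched directly: iterate the deterministic update rule, record the time each state is first visited, and report the first-visit time of the state at which the trajectory closes on itself. Function names are ours; the majority-rule (ECA 232) stepper with periodic boundaries serves only as a concrete example.

```python
def transient_length(update, initial_state):
    """Steps before a deterministic trajectory enters its steady state or cycle."""
    seen = {initial_state: 0}
    state, step = initial_state, 0
    while True:
        state = update(state)
        step += 1
        if state in seen:
            return seen[state]  # transient = first-visit time of the cycle entry
        seen[state] = step

def step_majority(state):
    """One update of ECA rule 232 (majority) with periodic boundary conditions."""
    n = len(state)
    return tuple(1 if state[(i - 1) % n] + state[i] + state[(i + 1) % n] >= 2
                 else 0 for i in range(n))

# A lone "1" dies out in one step, leaving the all-zero fixed point:
t = transient_length(step_majority, (1, 0, 0, 0, 0, 0))  # transient of length 1
```

Since a finite binary system has only 2^{N} states, the loop is guaranteed to terminate within 2^{N} steps.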

#### 4.4. Statistics and IIT Code

_{coeff} = 40.

## Supplementary Files

Supplementary File 1

Supplementary File 2

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Nykamp, D. Dynamical System Definition. Available online: http://mathinsight.org/dynamical_system_definition (accessed on 15 May 2015).
- Ermentrout, G.B.; Edelstein-Keshet, L. Cellular automata approaches to biological modeling. J. Theor. Biol.
**1993**, 160, 97–133. [Google Scholar] [CrossRef] [PubMed] - De Jong, H. Modeling and simulation of genetic regulatory systems: A literature review. J. Comput. Biol.
**2002**, 9, 67–103. [Google Scholar] [CrossRef] [PubMed] - Nowak, M.A. Evolutionary Dynamics; Harvard University Press: Cambridge, MA, USA, 2006. [Google Scholar]
- Sumpter, D.J.T. The principles of collective animal behaviour. Philos. Trans. R. Soc. Lond. B. Biol. Sci.
**2006**, 361, 5–22. [Google Scholar] [CrossRef] [PubMed] - Kaneko, K. Life: An Introduction to Complex Systems Biology; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
- Izhikevich, E.M. Dynamical Systems in Neuroscience; MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
- Wolfram, S. Universality and complexity in cellular automata. Physica D
**1984**, 10, 1–35. [Google Scholar] [CrossRef] - Wolfram, S. A New Kind of Science; Wolfram: Champaign, IL, USA, 2002; Volume 5. [Google Scholar]
- Von Neumann, J. Theory of self-reproducing automata. In Essays on Cellular Automata; University of Illinois Press: Champaign, IL, USA, 1966. [Google Scholar]
- Gardner, M. Mathematical games: The fantastic combinations of John Conway’s new solitaire game “life.”. Sci. Am.
**1970**, 223, 120–123. [Google Scholar] [CrossRef] - Knoester, D.B.; Goldsby, H.J.; Adami, C. Leveraging Evolutionary Search to Discover Self-Adaptive and Self-Organizing Cellular Automata.
**2014**. arXiv:1405.4322. [Google Scholar] - Li, W.; Packard, N. The structure of the elementary cellular automata rule space. Complex Syst.
**1990**, 4, 281–297. [Google Scholar] - Culik, K., II; Yu, S. Undecidability of CA classification schemes. Complex Syst.
**1988**, 2, 177–190. [Google Scholar] - Sutner, K. On the Computational Complexity of Finite Cellular Automata. J. Comput. Syst. Sci.
**1995**, 50, 87–97. [Google Scholar] [CrossRef] - Wolfram, S. Undecidability and intractability in theoretical physics. Phys. Rev. Lett.
**1985**, 54, 735–738. [Google Scholar] [CrossRef] [PubMed] - Oizumi, M.; Albantakis, L.; Tononi, G. From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Comput. Biol.
**2014**, 10, e1003588. [Google Scholar] [CrossRef] [PubMed] - Tononi, G. Integrated information theory. Scholarpedia
**2015**, 10, 4164. [Google Scholar] [CrossRef] - Albantakis, L.; Hintze, A.; Koch, C.; Adami, C.; Tononi, G. Evolution of Integrated Causal Structures in Animats Exposed to Environments of Increasing Complexity. PLoS Comput. Biol.
**2014**, 10, e1003966. [Google Scholar] [CrossRef] [PubMed] - Online IIT Calculation. Available online: http://integratedinformationtheory.org/calculate.html (accessed on 9 April 2015).
- Chua, L.O.; Yoon, S.; Dogaru, R. A Nonlinear Dynamics Perspective of Wolfram’s New Kind of Science Part I: Threshold of Complexity. Int. J. Bifurc. Chaos
**2002**, 12, 2655–2766. [Google Scholar] [CrossRef] - Schüle, M.; Stoop, R. A full computation-relevant topological dynamics classification of elementary cellular automata. Chaos
**2012**, 22, 043143. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Krawczyk, M.J. New aspects of symmetry of elementary cellular automata.
**2013**. arXiv:1304.5771. [Google Scholar] [CrossRef] - De Oliveira, G.M.B.; de Oliveira, P.P.B.; Omar, N. Guidelines for dynamics-based parameterization of one-dimensional cellular automata rule spaces. Complexity
**2000**, 6, 63–71. [Google Scholar] [CrossRef] - Langton, C.G. Computation at the edge of chaos: Phase transitions and emergent computation. Physica D
**1990**, 42, 12–37. [Google Scholar] [CrossRef] - Binder, P. A phase diagram for elementary cellular automata. Complex Syst.
**1993**, 7, 241–247. [Google Scholar] - Wuensche, A.; Lesser, M. The Global Dynamics of Cellular Automata: An Atlas of Basin of Attraction Fields of One-dimensional Cellular Automata; Santa Fe Institute Studies in the Sciences of Complexity: Reference Volume I; Addison-Wesley: Boston, MA, USA, 1992. [Google Scholar]
- Hoel, E.P.; Albantakis, L.; Tononi, G. Quantifying causal emergence shows that macro can beat micro. PNAS
**2013**, 110, 19790–19795. [Google Scholar] [CrossRef] [PubMed] - Adamatzky, A.; Martinez, G.J. On generative morphological diversity of elementary cellular automata. Kybernetes
**2010**, 39, 72–82. [Google Scholar] [CrossRef] - Zenil, H.; Villarreal-Zapata, E. Asymptotic Behaviour and Ratios of Complexity in Cellular Automata. Int. J. Bifurc. Chaos
**2013**, 23, 1350159. [Google Scholar] [CrossRef] - Online Animat Animation. Available online: http://integratedinformationtheory.org/animats.html (accessed on 5 May 2015).
- Beer, R.D. The Dynamics of Active Categorical Perception in an Evolved Model Agent. Adapt. Behav.
**2003**, 11, 209–243. [Google Scholar] [CrossRef] - Marstaller, L.; Hintze, A.; Adami, C. The evolution of representation in simple cognitive networks. Neural Comput.
**2013**, 25, 2079–2107. [Google Scholar] [CrossRef] [PubMed] - Ay, N.; Bertschinger, N.; Der, R.; Güttler, F.; Olbrich, E. Predictive information and explorative behavior of autonomous robots. Eur. Phys. J. B
**2008**, 63, 329–339. [Google Scholar] [CrossRef] - Bialek, W.; Nemenman, I.; Tishby, N. Predictability, complexity, and learning. Neural Comput.
**2001**, 13, 2409–2463. [Google Scholar] [CrossRef] [PubMed] - Edlund, J.A.; Chaumont, N.; Hintze, A.; Koch, C.; Tononi, G.; Adami, C. Integrated information increases with fitness in the evolution of animats. PLoS Comput. Biol.
**2011**, 7, e1002236. [Google Scholar] [CrossRef] [PubMed] - Tononi, G.; Sporns, O.; Edelman, G.M. Measures of degeneracy and redundancy in biological networks. Proc. Natl. Acad. Sci. USA
**1999**, 96, 3257–3262. [Google Scholar] [CrossRef] [PubMed] - Edelman, G. Neural Darwinism: The Theory of Neuronal Group Selection; Basic Books: New York, NY, USA, 1987. [Google Scholar]
- Albantakis, L.; Tononi, G. Advantages of integrated cause-effect structures in changing environments; An in silico study on evolving animats.
**2015**. in preparation. [Google Scholar] - Schrödinger, E. What is Life?: With Mind and Matter and Autobiographical Sketches; Cambridge University Press: Cambridge, UK, 1992. [Google Scholar]
- Still, S.; Sivak, D.A.; Bell, A.J.; Crooks, G.E. Thermodynamics of Prediction. Phys. Rev. Lett.
**2012**, 109, 120604. [Google Scholar] [CrossRef] [PubMed] - Kari, J. Theory of cellular automata: A survey. Theor. Comput. Sci.
**2005**, 334, 3–33. [Google Scholar] [CrossRef] - Pavlic, T.; Adams, A.; Davies, P.; Walker, S. Self-Referencing Cellular Automata: A Model of the Evolution of Information Control in Biological Systems. In Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems; The MIT Press: Cambridge, MA, USA, 2014; Volume 14, pp. 522–529. [Google Scholar]
- Rubner, Y.; Tomasi, C.; Guibas, L. The earth mover’s distance as a metric for image retrieval. Int. J. Comput. Vis.
**2000**, 40, 99–121. [Google Scholar] [CrossRef] - Pele, O.; Werman, M. Fast and Robust Earth Mover’s Distances. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009.
- PyPhi Package. Available online: https://github.com/wmayner/pyphi (accessed on 5 May 2015).

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Albantakis, L.; Tononi, G.
The Intrinsic Cause-Effect Power of Discrete Dynamical Systems—From Elementary Cellular Automata to Adapting Animats. *Entropy* **2015**, *17*, 5472-5502.
https://doi.org/10.3390/e17085472
