# Information Fragmentation, Encryption and Information Flow in Complex Biological Networks


## Abstract


## 1. Introduction

#### Entropy and Information

## 2. Information Fragmentation

## 3. Results

#### 3.1. n-Back Task

#### 3.2. Block Catch Task

#### 3.3. Mutational Robustness of Evolved Networks

## 4. Discussion

## 5. Methods

#### 5.1. Fragmentation, Fragmentation Matrices and Information Flow

#### 5.1.1. Data Collection and Formatting

#### 5.1.2. Generating Information Fragmentation Matrices

#### 5.1.3. Determining Fragmentation

#### 5.1.4. Visualizing Information Flow

#### 5.2. Digital Evolution System

#### 5.2.1. Tasks: n-Back

#### 5.2.2. Tasks: Block Catch

#### 5.3. Cognitive Systems

#### 5.3.1. Recurrent Neural Networks

#### 5.3.2. Markov Brains

#### 5.4. Evolutionary Algorithm

#### 5.5. Testing Mutational Robustness

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
| --- | --- |
| ACP | Active Categorical Perception |
| MABE | Modular Agent Based Evolver |
| DIS | Distinct Informative Set |
| RNN | Recurrent Neural Network |
| XOR | Exclusive OR |


**Figure 1.** Entropy Venn diagram showing how the information about the joint variable $X={X}_{1}{X}_{2}$ stored in Y is distributed across the subsystems ${X}_{1}$ and ${X}_{2}$. The information $I({X}_{1};Y)$ shared between ${X}_{1}$ and Y is indicated by right-hatching, while the information $I({X}_{2};Y)$ is shown with left-hatching. As ${X}_{1}$ and ${X}_{2}$ can share entropy, the sum of $I({X}_{1};Y)$ and $I({X}_{2};Y)$ double counts any information shared between all three: $I({X}_{1};{X}_{2};Y)$ (crosshatched). Because information shared between three (or more) parties can be negative, the sum $I({X}_{1};Y)+I({X}_{2};Y)$ can be larger or smaller than $I(X;Y)$.
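The over- and under-counting described in the caption can be checked numerically. The sketch below (not part of the study's code) computes the Venn-diagram terms for two small joint distributions: a synergistic one, where Y is the XOR of the parts, and a redundant one, where Y is a copy of both parts.

```python
from math import log2

def H(p):
    """Shannon entropy of a distribution given as {outcome: probability}."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def marginal(joint, idx):
    """Marginalize a joint distribution onto the variable indices in idx."""
    out = {}
    for outcome, q in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + q
    return out

def I(joint, a, b):
    """Mutual information between the variable groups indexed by a and b."""
    return H(marginal(joint, a)) + H(marginal(joint, b)) - H(marginal(joint, a + b))

def venn_terms(joint):
    """Returns I(X1;Y), I(X2;Y), I(X;Y) and the triple term I(X1;X2;Y)."""
    i1, i2 = I(joint, (0,), (2,)), I(joint, (1,), (2,))
    ixy = I(joint, (0, 1), (2,))
    return i1, i2, ixy, i1 + i2 - ixy

# Synergy: Y = X1 XOR X2, so the parts alone carry nothing and the sum undercounts.
xor = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
# Redundancy: Y = X1 = X2, so each part carries everything and the sum overcounts.
copy = {(x, x, x): 0.5 for x in (0, 1)}

syn = venn_terms(xor)    # (0.0, 0.0, 1.0, -1.0): negative triple information
red = venn_terms(copy)   # (1.0, 1.0, 1.0,  1.0): positive triple information
```

The last term, $I({X}_{1};Y)+I({X}_{2};Y)-I(X;Y)$, is exactly the crosshatched region: it is $-1$ bit in the XOR case and $+1$ bit in the copy case.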

**Figure 2.** Node configuration for Markov Brains and Recurrent Neural Networks. (**a**) Networks for the n-Back task have a single input node (blue), eight memory nodes (green) and five output nodes (red) to report on prior inputs. (**b**) Networks for the Block Catch task have four “retinal” or sensor (input) nodes, eight memory nodes, and two motor (output) nodes that allow the agent to move left or right.

**Figure 3.** Fragmentation matrices for the n-Back task, from four Markov Brains that evolved perfect performance, shown as (**a**–**d**). The features labeling the rows of the matrix are the expected outputs on the current update, while sets (labeling columns) are combinations of the brain’s eight memory values ${m}_{1}\cdots {m}_{8}$. The amount of information between each feature and each set is indicated by gray-scale, where white squares indicate perfect correlation, and gray to black represents successively less correlation. The black diamond within a white square indicates the smallest distinct informative set (DIS) that predicts each feature. A portion of each matrix containing sets of intermediate size is not shown to save space.
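One row of such a matrix, and the search for the smallest distinct informative set, can be sketched as follows. The recorded trace and the node labels here are hypothetical toy data, not states of the evolved brains; the normalization by the feature's entropy follows the caption's convention that white squares mean perfect correlation.

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(samples):
    """Shannon entropy of a list of observed (hashable) states."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def mutual_info(xs, ys):
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def fragmentation_row(feature, memory, nodes):
    """One row of a fragmentation matrix: normalized shared information
    I(feature; set) / H(feature) for every non-empty set of memory nodes."""
    hf = entropy(feature)
    row = {}
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            states = [tuple(memory[n][t] for n in subset) for t in range(len(feature))]
            row[subset] = mutual_info(feature, states) / hf
    return row

# Toy recorded trace (hypothetical): node m1 carries the feature, m3 is unrelated.
memory = {'m1': [0, 1, 0, 1, 1, 0, 1, 0], 'm3': [0, 0, 1, 1, 0, 0, 1, 1]}
feature = memory['m1'][:]
row = fragmentation_row(feature, memory, ['m1', 'm3'])
# The distinct informative set (DIS) is the smallest fully predictive set.
dis = min((s for s, v in row.items() if v >= 1.0), key=len)   # ('m1',)
```

In this toy trace the set {m1, m3} also predicts the feature perfectly, but the DIS marker would fall on {m1} alone, mirroring the black diamonds in the figure.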

**Figure 4.** Entropy Venn diagram for element {${o}_{5},{m}_{2},{m}_{7}$} of the fragmentation matrix shown in Figure 3c. As ${o}_{5}\left(t\right)={m}_{2}(t-1)\otimes {m}_{7}(t-1)$ (⊗ is the XOR operator), information about ${o}_{5}$ is perfectly encrypted so that each of the nodes ${m}_{2}$ and ${m}_{7}$ reveal no information about ${o}_{5}$. Because this Venn diagram is symmetric, it is arbitrary which variable is called the sender, the receiver, or the key.
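The perfect encryption and the symmetry of this Venn diagram are easy to verify numerically; the following minimal sketch uses the four equally likely joint states of two independent fair bits standing in for ${m}_{2}$ and ${m}_{7}$.

```python
from collections import Counter
from math import log2

def H(seq):
    """Shannon entropy of a list of observed states."""
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in Counter(seq).values())

def I(a, b):
    """Mutual information estimated from paired observations."""
    return H(a) + H(b) - H(list(zip(a, b)))

# The four equally likely (m2, m7) states of two independent fair bits.
m2 = [0, 0, 1, 1]
m7 = [0, 1, 0, 1]
o5 = [a ^ b for a, b in zip(m2, m7)]      # o5 = m2 XOR m7

single = (I(m2, o5), I(m7, o5))           # either node alone reveals nothing
joint = I(list(zip(m2, m7)), o5)          # the pair determines o5: 1 bit
recover = I(list(zip(o5, m2)), m7)        # by symmetry, o5 plus m2 determines m7
```

This is the information structure of a one-time pad: any one of the three variables is uniformly random on its own, and any two determine the third, which is why sender, receiver, and key are interchangeable labels here.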

**Figure 5.** Information flow through nodes of the Markov Brains evolved to solve the n-Back task. Diagrams (**a**–**d**) correspond to the fragmentation matrices shown in Figure 3a–d. Input node ${i}_{1}$ is in green, output neurons ${o}_{k}$ are blue, and memory neurons ${m}_{k}$ are white. The numbers within the nodes are the entropy of that node throughout a trial (as the inputs are random, each node has one bit of entropy). The arrows going into each node represent the connections necessary to account for the total entropy in that node. The labels accompanying each arrow and the arrows’ widths both indicate the proportion of the entropy in the downstream node that can be accounted for by each arrow alone, but because information is distributed and not additive, the sum of informations often does not equal the entropy of the downstream node. Memory nodes with zero entropy are not shown to simplify the graphs (all brains have eight memory nodes). In this configuration, n-Back agents were required to report on the outputs correspondent to $t-1$, $t-3$, $t-5$, $t-7$ and $t-8$, where t is the current time.

**Figure 6.** Depiction of the two tasks used. (**a**) In the n-Back task, successive bits are provided to the agent at input i and must pass through various portions of the memory m to be delivered to outputs at later times, such that the outputs ${o}_{1}$, ${o}_{2}$, ${o}_{3}$, ${o}_{4}$, and ${o}_{5}$ at a given time t provide the input state from prior time points $t-1$, $t-3$, $t-5$, $t-7$, and $t-8$, respectively. (**b**) In the Block Catch task, blocks of various sizes with left or right lateral motion are dropped. Some blocks must be avoided (those shown in red) while other blocks (shown in green) are to be caught. The right portion of (**b**) shows a subsection of the environment at a particular moment, with a left-falling size-4 block (red). The agent is depicted in blue, with the sensors in dark blue and the “blind spot” in light blue. As currently positioned, only the rightmost sensor of the agent would be activated. Here, the agent should miss the block. The agent “catches” a block if any part of the block intersects any part of the agent.
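The input-output contract of the n-Back task can be made concrete with a hand-coded reference agent. The shift-register strategy below is one possible perfect solution, since the deepest lag (8) equals the number of memory nodes; it is not necessarily the strategy the evolved brains use.

```python
import random

LAGS = (1, 3, 5, 7, 8)   # o1..o5 report the inputs from t-1, t-3, t-5, t-7, t-8

def run_nback(inputs, memory_size=8):
    """Reference solution: an 8-bit shift register. Outputs are read before
    the current input is shifted in, so memory slot lag-1 holds input t-lag."""
    memory = [0] * memory_size
    outputs = []
    for bit in inputs:
        outputs.append(tuple(memory[lag - 1] for lag in LAGS))
        memory = [bit] + memory[:-1]   # shift the new input into slot 0
    return outputs

random.seed(42)
inputs = [random.randint(0, 1) for _ in range(100)]
outputs = run_nback(inputs)

# Check every answer after the 8-step warm-up of the memory.
correct = all(outputs[t][k] == inputs[t - lag]
              for t in range(8, len(inputs))
              for k, lag in enumerate(LAGS))
```

Note that this solution stores all eight prior bits even though only five are ever reported; an evolved network is free to discard the unused lags or to store the required bits in encrypted combinations, which is precisely what the fragmentation matrices diagnose.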

**Figure 7.** Fragmentation matrices for three Markov Brains evolved to perfect performance on the Block Catch task. For each brain two fragmentation matrices are shown, the first using the state information from the full lifetime (all 120 conditions for 31 updates), and the other only the late lifetime, that is, the last 25% of updates (all 120 conditions). The features (labeling the rows) represent various salient features of the world state. The columns are combinations (sets) of the brain’s 8 memory nodes ${m}_{1}\cdots {m}_{8}$. The amount of information between each feature and each memory set is indicated by gray-scale, where white squares indicate perfect correlation, and gray to black represents successively less correlation. A portion of each matrix containing sets of intermediate size is not shown to save space. (**a**) Full-lifetime fragmentation matrix of a simple brain (1) and late-lifetime fragmentation matrix of the same brain (2); (**b**) full-lifetime and late-lifetime fragmentation matrices for an intermediate-complexity brain (1 and 2, respectively); (**c**) full-lifetime and late-lifetime fragmentation matrices for a complex brain (1 and 2, respectively).

**Figure 8.** Full-lifetime (**a.1**,**b.1**,**c.1**) and late-lifetime (**a.2**,**b.2**,**c.2**) information flow diagrams for the Block Catch task, for the three brains shown in Figure 7. Green, white, and blue nodes indicate inputs (i), memory (m), and output (o) nodes respectively. The numbers in the nodes indicate the entropy (in bits) in that node. The labels accompanying each connecting link and the link’s width both indicate the proportion of the entropy in the downstream node that can be accounted for by that link. The links rendered in black going into each node represent the connections necessary to account for the total entropy in that node. Red links indicate connections that may (but do not necessarily) account for downstream information (indicating redundant predictive sets). Memory nodes with zero entropy are not shown to simplify the figures (all brains have eight memory nodes). Figure labels correspond to results shown in Figure 7.

**Figure 9.** Mutational robustness (average degradation of performance) vs. flow-complexity (number of informative arrows in the information flow diagram), for Markov Brains (left panel) and RNNs (right panel). In the left panel, three dots are circled and annotated (a–c) to indicate values generated by the three networks shown in Figure 7 and Figure 8. Black solid lines indicate the best linear fit.

**Figure 10.** Depiction of the two cognitive systems used in this work. Both brain types have the same general structure, which consists of a “before” state, ${T}_{0}$, and an “after” state, ${T}_{1}$. The ${T}_{0}$ state is made up of inputs and prior memory, while the ${T}_{1}$ state is made up of outputs and updated memory. (**a**) shows the structure of the RNNs, where data flows from ${T}_{0}$ (input and prior memory) through summation and threshold nodes to ${T}_{1}$ (outputs and updated memory). (**b**) shows the structure of the Markov Brains, where information flows from ${T}_{0}$, through genetically encoded logic gates, to ${T}_{1}$.
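The ${T}_{0}\to {T}_{1}$ update of panel (a) can be sketched in a few lines. The node counts follow the n-Back configuration of Figure 2a, but the weight values below are random placeholders (in the study they are shaped by evolution), and the exact threshold rule is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

N_IN, N_MEM, N_OUT = 1, 8, 5          # n-Back node counts from Figure 2a
T0_SIZE = N_IN + N_MEM                # T0: inputs + prior memory
T1_SIZE = N_OUT + N_MEM               # T1: outputs + updated memory

W = rng.normal(size=(T1_SIZE, T0_SIZE))   # placeholder weights (evolved in the study)
b = rng.normal(size=T1_SIZE)

def rnn_step(inputs, memory):
    """One T0 -> T1 update: weighted summation followed by a hard threshold."""
    t0 = np.concatenate([inputs, memory])
    t1 = (W @ t0 + b > 0).astype(int)     # each node fires iff its weighted sum > 0
    return t1[:N_OUT], t1[N_OUT:]         # (outputs, updated memory)

outputs, memory = rnn_step(np.array([1.0]), np.zeros(N_MEM))
```

A Markov Brain has the same outer loop, but the map from ${T}_{0}$ to ${T}_{1}$ is computed by a set of genetically encoded logic gates rather than by a weight matrix.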

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Bohm, C.; Kirkpatrick, D.; Cao, V.; Adami, C. Information Fragmentation, Encryption and Information Flow in Complex Biological Networks. *Entropy* **2022**, *24*, 735.
https://doi.org/10.3390/e24050735

**AMA Style**

Bohm C, Kirkpatrick D, Cao V, Adami C. Information Fragmentation, Encryption and Information Flow in Complex Biological Networks. *Entropy*. 2022; 24(5):735.
https://doi.org/10.3390/e24050735

**Chicago/Turabian Style**

Bohm, Clifford, Douglas Kirkpatrick, Victoria Cao, and Christoph Adami. 2022. "Information Fragmentation, Encryption and Information Flow in Complex Biological Networks" *Entropy* 24, no. 5: 735.
https://doi.org/10.3390/e24050735