# Can Transfer Entropy Infer Information Flow in Neuronal Circuits for Cognitive Processing?


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Markov Brains

#### 2.2. Motion Detection

#### 2.3. Sound Localization

## 3. Results

#### 3.1. Gate Composition of Evolved Circuits

#### 3.2. Transfer Entropy Misestimates Caused by Encryption or Polyadicity

#### 3.3. Transfer Entropy Measurements from Recordings of Evolved Brains

## 4. Discussion

## 5. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
|---|---|
| MB | Markov Brains |
| GA | Genetic Algorithm |
| TE | Transfer Entropy |
| PD | Preferred Direction |
| ND | Null Direction |
| MD | Motion Detection |
| SL | Sound Localization |
| XOR | Exclusive OR |
| XNOR | Exclusive NOR |
| SD | Standard Deviation |
| SE | Standard Error of Mean |

## References

- Phillips, W.A.; Singer, W. In search of common foundations for cortical computation. Behav. Brain Sci. **1997**, 20, 657–683.
- Rivoire, O.; Leibler, S. The value of information for populations in varying environments. J. Stat. Phys. **2011**, 142, 1124–1166.
- Adami, C. The use of information theory in evolutionary biology. Ann. N. Y. Acad. Sci. **2012**, 1256, 49–65.
- Oizumi, M.; Albantakis, L.; Tononi, G. From the phenomenology to the mechanisms of consciousness: Integrated Information Theory 3.0. PLoS Comput. Biol. **2014**, 10, e1003588.
- Wibral, M.; Lizier, J.T.; Priesemann, V. Bits from brains for biologically inspired computing. Front. Robot. AI **2015**, 2, 5.
- Bunge, M.A. Causality: The Place of the Causal Principle in Modern Science; Harvard University Press: Cambridge, MA, USA, 1959.
- Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econometrica **1969**, 37, 424–438.
- Pearl, J. Causality: Models, Reasoning and Inference; Springer: New York, NY, USA, 2000; Volume 29.
- Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009.
- Sun, J.; Bollt, E.M. Causation entropy identifies indirect influences, dominance of neighbors and anticipatory couplings. Physica D **2014**, 267, 49–57.
- Albantakis, L.; Marshall, W.; Hoel, E.; Tononi, G. What Caused What? A Quantitative Account of Actual Causation Using Dynamical Causal Networks. Entropy **2019**, 21, 459.
- Schreiber, T. Measuring information transfer. Phys. Rev. Lett. **2000**, 85, 461.
- Bossomaier, T.; Barnett, L.; Harré, M.; Lizier, J.T. An Introduction to Transfer Entropy; Springer International: Cham, Switzerland, 2015.
- Barnett, L.; Barrett, A.B.; Seth, A.K. Granger causality and transfer entropy are equivalent for Gaussian variables. Phys. Rev. Lett. **2009**, 103, 238701.
- Vicente, R.; Wibral, M.; Lindner, M.; Pipa, G. Transfer entropy—a model-free measure of effective connectivity for the neurosciences. J. Comput. Neurosci. **2011**, 30, 45–67.
- Wibral, M.; Vicente, R.; Lindner, M. Transfer entropy in neuroscience. In Directed Information Measures in Neuroscience; Springer: New York, NY, USA, 2014; pp. 3–36.
- Lizier, J.T.; Prokopenko, M. Differentiating information transfer and causal effect. Eur. Phys. J. B **2010**, 73, 605–615.
- James, R.G.; Barnett, N.; Crutchfield, J.P. Information Flows? A Critique of Transfer Entropies. Phys. Rev. Lett. **2016**, 116, 238701.
- Janzing, D.; Balduzzi, D.; Grosse-Wentrup, M.; Schölkopf, B. Quantifying causal influences. Ann. Stat. **2013**, 41, 2324–2358.
- Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J. **1949**, 28, 656–715.
- Williams, P.L.; Beer, R.D. Nonnegative decomposition of multivariate information. arXiv **2010**, arXiv:1004.2515.
- Kriegeskorte, N.; Douglas, P.K. Cognitive computational neuroscience. Nat. Neurosci. **2018**, 21, 1148–1160.
- Hintze, A.; Edlund, J.A.; Olson, R.S.; Knoester, D.B.; Schossau, J.; Albantakis, L.; Tehrani-Saleh, A.; Kvam, P.; Sheneman, L.; Goldsby, H.; et al. Markov brains: A technical introduction. arXiv **2017**, arXiv:1709.05601.
- Borst, A.; Egelhaaf, M. Principles of visual motion detection. Trends Neurosci. **1989**, 12, 297–306.
- Moore, B.C. An Introduction to the Psychology of Hearing; Brill: Leiden, The Netherlands, 2012.
- Pickles, J. An Introduction to the Physiology of Hearing; Brill: Leiden, The Netherlands, 2013.
- Edlund, J.A.; Chaumont, N.; Hintze, A.; Koch, C.; Tononi, G.; Adami, C. Integrated information increases with fitness in the evolution of animats. PLoS Comput. Biol. **2011**, 7, e1002236.
- Albantakis, L.; Hintze, A.; Koch, C.; Adami, C.; Tononi, G. Evolution of integrated causal structures in animats exposed to environments of increasing complexity. PLoS Comput. Biol. **2014**, 10, e1003966.
- Schossau, J.; Adami, C.; Hintze, A. Information-theoretic neuro-correlates boost evolution of cognitive systems. Entropy **2016**, 18, 6.
- Marstaller, L.; Hintze, A.; Adami, C. The evolution of representation in simple cognitive networks. Neural Comput. **2013**, 25, 2079–2107.
- Juel, B.E.; Comolatti, R.; Tononi, G.; Albantakis, L. When is an action caused from within? Quantifying the causal chain leading to actions in simulated agents. arXiv **2019**, arXiv:1904.02995.
- Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer: New York, NY, USA, 1996.
- Hassenstein, B.; Reichardt, W. Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Z. Naturforsch. B **1956**, 11, 513–524.
- Tehrani-Saleh, A.; LaBar, T.; Adami, C. Evolution leads to a diversity of motion-detection neuronal circuits. In Proceedings of Artificial Life 16; Ikegami, T., Virgo, N., Witkowski, O., Oka, M., Suzuki, R., Iizuka, H., Eds.; MIT Press: Cambridge, MA, USA, 2018; pp. 625–632.
- Middlebrooks, J.C.; Green, D.M. Sound localization by human listeners. Annu. Rev. Psychol. **1991**, 42, 135–159.
- Jeffress, L.A. A place theory of sound localization. J. Comp. Physiol. Psychol. **1948**, 41, 35.
- Ay, N.; Polani, D. Information flows in causal networks. Adv. Complex Syst. **2008**, 11, 17–41.
- Paul, L.A.; Hall, N.; Hall, E.J. Causation: A User’s Guide; Oxford University Press: Oxford, UK, 2013.
- Halpern, J.Y. Actual Causality; MIT Press: Cambridge, MA, USA, 2016.
- Macmillan, N.A.; Creelman, C.D. Detection Theory: A User’s Guide; Psychology Press: East Sussex, UK, 2004.
- Prinz, A.A.; Bucher, D.; Marder, E. Similar network activity from disparate circuit parameters. Nat. Neurosci. **2004**, 7, 1345.
- Goaillard, J.M.; Taylor, A.L.; Schulz, D.J.; Marder, E. Functional consequences of animal-to-animal variation in circuit parameters. Nat. Neurosci. **2009**, 12, 1424.
- Marder, E. Variability, compensation, and modulation in neurons and circuits. Proc. Natl. Acad. Sci. USA **2011**, 108, 15542.

**Figure 1.** (**A**) A network in which processes X and Y influence the future state of Z: ${Z}_{t+1}=f({X}_{t},{Y}_{t})$. (**B**) A feedback network in which processes Y and Z influence the future state of Z: ${Z}_{t+1}=f({Y}_{t},{Z}_{t})$.

**Figure 2.** (**A**) A Reichardt detector circuit. In this circuit, the results of the multiplications from the two pathways are subtracted to generate the response. The circuit’s output is +1 for PD, −1 for ND, and 0 for stationary patterns. (**B**) Schematic examples of three types of input patterns received by the two sensory neurons at two consecutive time steps. Grey squares mark the presence of the stimulus at a neuron. The sensory pattern shown for PD is 10 at time t and 01 at time $t+1$, which we write as $10\to 01$. Patterns $11\to 01$ and $00\to 10$ also represent PD. Similarly, pattern $01\to 10$ is shown as an example of ND, but patterns $11\to 10$ and $01\to 11$ are also instances of ND.
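The multiply-and-subtract scheme in panel (**A**) can be sketched in a few lines of Python. This is our own minimal model, assuming a unit delay on each pathway; the function name `reichardt` is hypothetical. Each pathway multiplies one sensor’s delayed signal with the other sensor’s current signal, and the two pathway outputs are subtracted:

```python
def reichardt(s1_prev, s2_prev, s1, s2):
    """Correlation-type motion detector with a unit delay per pathway:
    multiply each sensor's delayed signal by the other sensor's current
    signal, then subtract the two pathway outputs."""
    return s1_prev * s2 - s2_prev * s1

# PD example 10 -> 01: the stimulus moves from sensor 1 to sensor 2
print(reichardt(1, 0, 0, 1))  # +1
# ND example 01 -> 10: the stimulus moves from sensor 2 to sensor 1
print(reichardt(0, 1, 1, 0))  # -1
# Stationary pattern 11 -> 11
print(reichardt(1, 1, 1, 1))  # 0
```

Note that this simple two-input correlator responds to motion between the two sensors; for an onset pattern such as $00\to 10$, where a stimulus appears at only one sensor, it returns 0.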

**Figure 3.** (**A**) Schematic of 5 sound sources at different angles with respect to a listener (top view), and the Jeffress model of sound localization. (**B**) Schematic examples of 5 time sequences of input patterns received by the two sensory neurons (receptors of the two ears) at three consecutive time steps. Black squares mark the presence of the stimulus at a neuron.

**Figure 4.** Frequency distribution of all gates, as well as essential gates, in evolved Markov Brains that perform the motion detection or sound localization task perfectly. (**A**) All gates. (**B**) Essential gates.

**Figure 5.** Exact information measures and transfer entropy misestimates on essential gates of circuits that perform the motion detection or sound localization task perfectly. Columns show mean values and 95% confidence intervals of misestimates and exact measures (**A**) per Brain, and (**B**) per gate.

**Figure 6.** (**A**) Transfer entropy measures from neural recordings of a Markov Brain evolved for sound localization. (**B**) Influence map (also receptive field) of neurons, derived from a combination of the logic-gate connections and the Boolean logic functions of the same evolved Markov Brain shown in (**C**). (**C**) The logic circuit of the same evolved Markov Brain; neurons ${N}_{0}$ and ${N}_{1}$ are sensory neurons, and neurons ${N}_{11}$–${N}_{15}$ are actuator (or decision) neurons.

**Figure 7.** Transfer entropy performance in detecting relations among neurons of evolved (**A**) motion detection circuits and (**B**) sound localization circuits. Presented values are averaged across the best-performing Brains, with 95% confidence intervals. Receiver operating characteristic (ROC) curves represent TE performance at different thresholds for detecting neuron relations in evolved (**C**) motion detection and (**D**) sound localization circuits.
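The thresholding procedure behind such ROC curves can be sketched as follows. This is our own illustration, not the paper’s code, and the 4-neuron TE matrix and ground-truth connectivity are hypothetical: a connection is predicted wherever the measured TE exceeds a threshold, and sweeping the threshold traces out (false positive rate, true positive rate) pairs.

```python
def roc_point(te, truth, threshold):
    """Predict a connection wherever TE exceeds the threshold, then score
    the predictions against the ground-truth adjacency matrix.
    Returns (false positive rate, true positive rate)."""
    tp = fp = fn = tn = 0
    for te_row, truth_row in zip(te, truth):
        for value, connected in zip(te_row, truth_row):
            predicted = value > threshold
            if predicted and connected:
                tp += 1
            elif predicted:
                fp += 1
            elif connected:
                fn += 1
            else:
                tn += 1
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return fpr, tpr

# Toy 4-neuron example (hypothetical TE values, not taken from the paper);
# te[i][j] is the measured transfer entropy from neuron i to neuron j.
te = [[0.0, 0.8, 0.1, 0.0],
      [0.0, 0.0, 0.6, 0.2],
      [0.0, 0.0, 0.0, 0.9],
      [0.3, 0.0, 0.0, 0.0]]
truth = [[False, True, False, False],
         [False, False, True, False],
         [False, False, False, True],
         [True, False, False, False]]

# Sweeping the threshold traces out the ROC curve
for th in (0.05, 0.25, 0.7):
    print(th, roc_point(te, truth, th))
```

A low threshold admits spurious weak TE values (false positives); a high threshold misses genuine but weak connections (false negatives); the curve summarizes this trade-off.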

**Table 1.** Transfer entropies and information in all possible 2-to-1 binary logic gates, with or without feedback. The logic of the gate is determined by the value ${Z}_{t+1}$ (second column) as a function of the input ${X}_{t}{Y}_{t}=(00,01,10,11)$. $H({Z}_{t+1})$ is the Shannon entropy of the output assuming equal-probability inputs, and ${\mathrm{TE}}_{X\to Z}$ is the transfer entropy from X to Z. In 2-to-1 gates without feedback, the transfer entropies ${\mathrm{TE}}_{X\to Z}$ and ${\mathrm{TE}}_{Y\to Z}$ reduce to $I({X}_{t}:{Z}_{t+1})$ and $I({Y}_{t}:{Z}_{t+1})$, respectively. Similarly, the transfer entropy of a process to itself is simply $I({Z}_{t}:{Z}_{t+1})$, which is the information processed by Z.

The first six columns refer to the 2-to-1 network $Z=f(X,Y)$; the last three columns, marked "feedback", refer to the 2-to-1 feedback loop $Z=f(Y,Z)$.

| Gate | ${Z}_{t+1}$ | $H({Z}_{t+1})$ | ${\mathrm{TE}}_{X\to Z}$ | ${\mathrm{TE}}_{Y\to Z}$ | TE Error | ${\mathrm{TE}}_{Y\to Z}$ (feedback) | $I({Z}_{t}:{Z}_{t+1})$ (feedback) | TE Error (feedback) |
|---|---|---|---|---|---|---|---|---|
| ZERO | (0,0,0,0) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| AND | (0,0,0,1) | 0.81 | 0.31 | 0.31 | 0.19 | 0.5 | 0.31 | 0.19 |
| AND-NOT | (0,0,1,0) | 0.81 | 0.31 | 0.31 | 0.19 | 0.5 | 0.31 | 0.19 |
| AND-NOT | (0,1,0,0) | 0.81 | 0.31 | 0.31 | 0.19 | 0.5 | 0.31 | 0.19 |
| NOR | (1,0,0,0) | 0.81 | 0.31 | 0.31 | 0.19 | 0.5 | 0.31 | 0.19 |
| COPY | (0,0,1,1) | 1.0 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| COPY | (0,1,0,1) | 1.0 | 0.0 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 |
| XOR | (0,1,1,0) | 1.0 | 0.0 | 0.0 | 1.0 | 1.0 | 0.0 | 1.0 |
| XNOR | (1,0,0,1) | 1.0 | 0.0 | 0.0 | 1.0 | 1.0 | 0.0 | 1.0 |
| NOT | (1,0,1,0) | 1.0 | 0.0 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 |
| NOT | (1,1,0,0) | 1.0 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| OR | (0,1,1,1) | 0.81 | 0.31 | 0.31 | 0.19 | 0.5 | 0.31 | 0.19 |
| OR-NOT | (1,0,1,1) | 0.81 | 0.31 | 0.31 | 0.19 | 0.5 | 0.31 | 0.19 |
| OR-NOT | (1,1,0,1) | 0.81 | 0.31 | 0.31 | 0.19 | 0.5 | 0.31 | 0.19 |
| NAND | (1,1,1,0) | 0.81 | 0.31 | 0.31 | 0.19 | 0.5 | 0.31 | 0.19 |
| ONE | (1,1,1,1) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
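The reductions stated in the table caption can be checked numerically. The sketch below is ours (function names included) and assumes equiprobable inputs, as the table does: it computes the feedforward transfer entropy as $I(X_t:Z_{t+1})$ and the feedback-loop transfer entropy as $I(Y_t:Z_{t+1}\,|\,Z_t)$, reproducing the AND and XOR rows.

```python
from collections import Counter
from itertools import product
from math import log2

def mi(pairs):
    """Mutual information I(A;B) from equiprobable samples of (a, b)."""
    n = len(pairs)
    pab, pa, pb = Counter(pairs), Counter(a for a, _ in pairs), Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

def cmi(triples):
    """Conditional mutual information I(A;B|C) from equiprobable samples of (a, b, c)."""
    n = len(triples)
    pabc = Counter(triples)
    pc = Counter(c for _, _, c in triples)
    pac = Counter((a, c) for a, _, c in triples)
    pbc = Counter((b, c) for _, b, c in triples)
    return sum((cnt / n) * log2((cnt / n) * (pc[c] / n) / ((pac[a, c] / n) * (pbc[b, c] / n)))
               for (a, b, c), cnt in pabc.items())

AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b

# Feedforward gate Z_{t+1} = f(X_t, Y_t): TE_{X->Z} reduces to I(X_t : Z_{t+1})
te_and = mi([(x, AND(x, y)) for x, y in product((0, 1), repeat=2)])
te_xor = mi([(x, XOR(x, y)) for x, y in product((0, 1), repeat=2)])
print(round(te_and, 2), round(te_xor, 2))  # 0.31 0.0

# Feedback loop Z_{t+1} = f(Y_t, Z_t): TE_{Y->Z} = I(Y_t : Z_{t+1} | Z_t)
te_fb = cmi([(y, AND(y, z), z) for y, z in product((0, 1), repeat=2)])
print(round(te_fb, 2))  # 0.5
```

The AND gate yields the 0.31 bits listed in the table, while XOR yields zero measured transfer entropy even though $X_t$ fully participates in determining the output — the "encryption" misestimate discussed in Section 3.2.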

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Tehrani-Saleh, A.; Adami, C. Can Transfer Entropy Infer Information Flow in Neuronal Circuits for Cognitive Processing? *Entropy* **2020**, *22*, 385.
https://doi.org/10.3390/e22040385
