# Examining the Causal Structures of Deep Neural Networks Using Information Theory


## Abstract


## 1. Introduction

## 2. Quantifying the Causal Structure of DNNs

## 3. Information in the Causal Structure Changes During Training

## 4. Deep Neural Networks in the Causal Plane

## 5. Measuring Joint Effects of Layer-to-Layer Connectivity

## 6. Discussion

## Author Contributions

## Funding

## Conflicts of Interest

## Appendix A

#### Appendix A.1. Effective Information Converges Across Measurement Schemes and Can Be Found via Extrapolation

**Figure A1.** Convergence of $EI_{parts}$ measures to theoretical values. The $EI_{parts}$ of a $30 \to 30$ layer injected with noise samples of up to $10^{8}$ time-steps and analyzed with different numbers of bins.

**Figure A2.** Convergence of $EI$ and $EI_{parts}$. If evaluated on enough noise samples, $EI$ and $EI_{parts}$ converge. In panels (**a**,**b**), we show how $EI$ and $EI_{parts}$, respectively, converge for dense layers of varying width, initialized with the distribution $\mathcal{U}\left(\left[-\frac{1}{\sqrt{\mathrm{fan}_{\mathrm{in}}}}, \frac{1}{\sqrt{\mathrm{fan}_{\mathrm{in}}}}\right]\right)$. In panels (**c**,**d**), we show the same, but with weights sampled from $\mathcal{U}\left(\left[-\frac{5}{\sqrt{\mathrm{fan}_{\mathrm{in}}}}, \frac{5}{\sqrt{\mathrm{fan}_{\mathrm{in}}}}\right]\right)$.
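The convergence behavior described above can be probed numerically. Below is a minimal sketch (not the authors' code) of a binned Monte Carlo estimator of $EI$ for a dense sigmoid layer: inject uniform maximum-entropy noise into the inputs, discretize the activations into bins, and estimate the input–output mutual information; rerunning it with a growing number of noise samples shows the estimate stabilizing. The layer size, bin count, and the uniform $[0,1]$ injection range are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binned_entropy(codes):
    """Shannon entropy (bits) of the rows of an integer code array."""
    _, counts = np.unique(codes, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def estimate_ei(weights, n_samples, n_bins=16, seed=0):
    """Binned Monte Carlo estimate of EI for one dense sigmoid layer.

    Inputs are perturbed with uniform noise on [0, 1] (an illustrative
    maximum-entropy intervention); EI is taken as the mutual information
    I(in; out) = H(in) + H(out) - H(in, out) over binned activations.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=(n_samples, weights.shape[0]))
    y = sigmoid(x @ weights)  # outputs lie in (0, 1)
    xb = np.clip((x * n_bins).astype(int), 0, n_bins - 1)
    yb = np.clip((y * n_bins).astype(int), 0, n_bins - 1)
    joint = np.hstack([xb, yb])
    return binned_entropy(xb) + binned_entropy(yb) - binned_entropy(joint)

# Estimates stabilize as the number of injected noise samples grows.
w = np.random.default_rng(1).normal(0.0, 1.0, size=(2, 2))
estimates = [estimate_ei(w, n) for n in (1_000, 10_000, 100_000)]
```

With more bins and more samples the estimate approaches the underlying value, mirroring the convergence shown in Figures A1 and A2.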

#### Appendix A.2. Effective Information Tracks Changes in Causal Structure Regardless of Activation Function

**Figure A3.** Changes in $EI$ during training across activation functions. Tanh (**a**,**b**) and ReLU (**c**,**d**) versions of a network trained on the reduced-MNIST task, three runs each, showing the different layers.


**Figure 1.** $EI$ is a function of weights and connectivity. Plots (**a**–**c**) show $EI$ vs. weight for a single input and output neuron, using sigmoid, tanh, and ReLU activation functions, computed using 8, 16, 32, and 64 bins. Marked is the most informative weight (in isolation) for transmitting a set of perturbations with each activation function. Plots (**d**–**f**) show $EI$ for a layer with two input nodes, A and B, and a single output node, C. Different activation functions have different characteristic $EI$ manifolds.
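The single-neuron curves in panels (**a**–**c**) can be sketched numerically. The toy reproduction below (an assumption-laden sketch, not the authors' code) estimates $EI$ for a $1 \to 1$ sigmoid neuron by injecting uniform noise on $[0,1]$ and measuring binned mutual information; sweeping the weight shows $EI$ rising from zero, peaking at an intermediate weight, and falling again as the sigmoid saturates into a single bin.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ei_single_neuron(w, n_samples=100_000, n_bins=16, seed=0):
    """Estimate EI for a 1-to-1 sigmoid neuron with scalar weight w.

    Uniform noise on [0, 1] serves as the maximum-entropy intervention
    (an illustrative choice); EI is the binned mutual information
    between the perturbed input and the resulting output.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_samples)
    y = sigmoid(w * x)
    xb = np.clip((x * n_bins).astype(int), 0, n_bins - 1)
    yb = np.clip((y * n_bins).astype(int), 0, n_bins - 1)

    def entropy(codes):
        counts = np.bincount(codes)
        p = counts[counts > 0] / codes.size
        return float(-np.sum(p * np.log2(p)))

    joint = xb * n_bins + yb  # encode the pair (xb, yb) as one code
    return entropy(xb) + entropy(yb) - entropy(joint)

# Sweep the weight: EI is near zero for tiny weights, peaks at an
# intermediate value, and decays again once the sigmoid saturates.
weights = [0.1, 1.0, 5.0, 20.0, 100.0]
curve = [ei_single_neuron(w) for w in weights]
```

Too small a weight leaves all outputs in one bin (no information transmitted); too large a weight pins nearly all outputs at saturation, which is the non-monotonic shape marked in the figure.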

**Figure 2.** How $EI$ evolves during training across three different runs per condition. Notably, the largest changes in $EI$ occur during the steepest reductions in the loss function for both Iris-trained networks (**a**,**b**) and MNIST-trained networks (**c**,**d**).

**Figure 3.** $EI$ is composed of sensitivity and degeneracy. The surfaces shown are the sensitivity and degeneracy of a layer with two input nodes and a single output node, with a sigmoid activation function. Subtracting the surface in panel (**b**) from the surface in panel (**a**) gives the $EI$ manifold, as in panel (**c**).
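As a rough numerical companion to this decomposition, the sketch below places a 2-input, 1-output sigmoid layer on the (sensitivity, degeneracy) plane. It assumes sensitivity is the sum of pairwise input–output mutual informations under a joint uniform maximum-entropy perturbation and, following the identity $EI = \text{sensitivity} - \text{degeneracy}$ stated in the caption, computes degeneracy as sensitivity minus $EI$. The perturbation range, bin count, and these operationalizations are assumptions for illustration, not the authors' exact definitions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entropy(codes):
    """Shannon entropy (bits) of the rows of an integer code array."""
    _, counts = np.unique(codes, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_info(a, b):
    return entropy(a) + entropy(b) - entropy(np.hstack([a, b]))

def causal_plane_point(weights, n_samples=100_000, n_bins=16, seed=0):
    """(sensitivity, degeneracy) for one dense sigmoid layer.

    Assumed operationalization (a sketch, not the authors' code):
      EI          = I(inputs; outputs) under joint uniform perturbation,
      sensitivity = sum over pairs (i, j) of I(input_i; output_j),
      degeneracy  = sensitivity - EI  (so EI = sensitivity - degeneracy).
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=(n_samples, weights.shape[0]))
    y = sigmoid(x @ weights)
    xb = np.clip((x * n_bins).astype(int), 0, n_bins - 1)
    yb = np.clip((y * n_bins).astype(int), 0, n_bins - 1)
    ei = mutual_info(xb, yb)
    sensitivity = sum(mutual_info(xb[:, [i]], yb[:, [j]])
                      for i in range(xb.shape[1]) for j in range(yb.shape[1]))
    return sensitivity, sensitivity - ei

# A layer whose output ignores its second input transmits information
# through one connection only: sensitivity ~ EI, so it sits near the
# degeneracy = 0 axis of the plane.
sens, degen = causal_plane_point(np.array([[8.0], [0.0]]))
```

Layers whose connections overlap in what they tell the outputs accumulate degeneracy, which is what separates the surfaces in panels (**a**) and (**b**).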

**Figure 4.** Behavior on the causal plane during training, with paths traced on the plane for different layers. All paths become less smooth during the period of overfitting and move about less in the causal plane. Networks trained on the simpler Iris task (**a**) show less differentiation between layers than those trained on the MNIST task (**b**). The causal plane shows which layers are redundant: an MNIST-trained network with a single hidden layer shows significant movement (**c**), whereas for an MNIST-trained network with five hidden layers, all five layers show minimal movement in the plane (**d**).

**Figure 5.** Integrated information over training. MNIST-trained networks (**a**,**b**) develop more $\varphi_{feedforward}$ during training than Iris-trained networks (**c**,**d**).

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Marrow, S.; Michaud, E.J.; Hoel, E.
Examining the Causal Structures of Deep Neural Networks Using Information Theory. *Entropy* **2020**, *22*, 1429.
https://doi.org/10.3390/e22121429

**AMA Style**

Marrow S, Michaud EJ, Hoel E.
Examining the Causal Structures of Deep Neural Networks Using Information Theory. *Entropy*. 2020; 22(12):1429.
https://doi.org/10.3390/e22121429

**Chicago/Turabian Style**

Marrow, Scythia, Eric J. Michaud, and Erik Hoel.
2020. "Examining the Causal Structures of Deep Neural Networks Using Information Theory" *Entropy* 22, no. 12: 1429.
https://doi.org/10.3390/e22121429