Open Access Article
Entropy 2017, 19(9), 474; https://doi.org/10.3390/e19090474

The Partial Information Decomposition of Generative Neural Network Models

1. Corti, Nørrebrogade 45E 2, 2200 Copenhagen N, Denmark
2. Department of Computing, Imperial College London, London SW7 2RH, UK
† These authors contributed equally to this work.
* Authors to whom correspondence should be addressed.
Received: 8 July 2017 / Revised: 13 August 2017 / Accepted: 1 September 2017 / Published: 6 September 2017

Abstract

In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons through training. We find two distinct phases during training: a short first phase in which the neurons learn redundant information about the target, and a second phase in which the neurons begin to specialise, each learning unique information about the target. We also find that in smaller networks individual neurons learn more specific information about particular features of the input, suggesting that learning pressure can encourage disentangled representations.
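The decomposition referred to in the abstract splits the joint mutual information I(S1,S2;T) into redundant, unique, and synergistic parts. As an illustration only (this is not the authors' code, and the function names are ours), the following sketch computes a PID for two discrete sources and a target using the I_min redundancy measure of Williams and Beer, which the PID framework is built on:

```python
import numpy as np

def mutual_info(p_xy):
    # I(X;Y) in bits, from a joint probability table p_xy[x, y].
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

def i_min_redundancy(p):
    # Williams-Beer redundancy: expected minimum (over sources) of the
    # specific information I_spec(T=t; S_i) = sum_s p(s|t) log p(t|s)/p(t).
    # p[s1, s2, t] is the joint distribution over both sources and the target.
    pt = p.sum(axis=(0, 1))
    red = 0.0
    for t in range(p.shape[2]):
        if pt[t] == 0:
            continue
        specs = []
        for axis in (1, 0):          # marginalise out the *other* source
            p_st = p.sum(axis=axis)  # joint p(s_i, t)
            ps = p_st.sum(axis=1)
            spec = 0.0
            for s in range(p_st.shape[0]):
                if p_st[s, t] == 0:
                    continue
                p_s_given_t = p_st[s, t] / pt[t]
                p_t_given_s = p_st[s, t] / ps[s]
                spec += p_s_given_t * np.log2(p_t_given_s / pt[t])
            specs.append(spec)
        red += pt[t] * min(specs)
    return red

def pid(p):
    # Full decomposition: redundancy, unique information of each source,
    # and synergy, recovered from the classical mutual informations.
    i1 = mutual_info(p.sum(axis=1))            # I(S1; T)
    i2 = mutual_info(p.sum(axis=0))            # I(S2; T)
    ij = mutual_info(p.reshape(-1, p.shape[2]))  # I(S1,S2; T)
    red = i_min_redundancy(p)
    return {"redundancy": red,
            "unique1": i1 - red,
            "unique2": i2 - red,
            "synergy": ij - i1 - i2 + red}
```

For a sanity check: if T = S1 XOR S2 with uniform inputs, all the information is synergistic (1 bit of synergy, zero redundancy); if both sources are identical copies of T, the decomposition assigns 1 bit of redundancy and no synergy.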
Keywords: partial information decomposition; neural networks; information theory
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).

MDPI and ACS Style

Tax, T.M.; Mediano, P.A.; Shanahan, M. The Partial Information Decomposition of Generative Neural Network Models. Entropy 2017, 19, 474.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Entropy EISSN 1099-4300, published by MDPI AG, Basel, Switzerland.