Open Access Article

Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks

by Thanh Tang Nguyen 1,*,† and Jaesik Choi 2,*,‡
1 Applied Artificial Intelligence Institute, Deakin University, Geelong VIC 3220, Australia
2 Graduate School of Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
* Authors to whom correspondence should be addressed.
† Part of this work was done at Ulsan National Institute of Science and Technology, Ulsan 44919, Korea.
‡ Part of this work was done at Ulsan National Institute of Science and Technology, Ulsan 44919, Korea; part of the work was done at KAIST.
Entropy 2019, 21(10), 976; https://doi.org/10.3390/e21100976
Received: 8 September 2019 / Revised: 8 September 2019 / Accepted: 30 September 2019 / Published: 6 October 2019
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
While rate distortion theory compresses data under a distortion constraint, the information bottleneck (IB) generalizes rate distortion theory to learning problems by replacing the distortion constraint with a constraint on relevant information. In this work, we further extend IB to multiple Markov bottlenecks (i.e., latent variables that form a Markov chain), namely the Markov information bottleneck (MIB), which fits the context of stochastic neural networks (SNNs) better than the original IB. We show that the Markov bottlenecks cannot simultaneously achieve their information optimality in a non-collapse MIB, and thus devise an optimality compromise. With MIB, we take the novel perspective that each layer of an SNN is a bottleneck whose learning goal is to encode relevant information from the data in a compressed form. The inference from a hidden layer to the output layer is then interpreted as a variational approximation to that layer’s decoding of relevant information in the MIB. As a consequence of this perspective, the maximum likelihood estimation (MLE) principle for SNNs becomes a special case of the variational MIB. We show that, compared to MLE, the variational MIB encourages better information flow in SNNs in both principle and practice, and empirically improves performance in classification, adversarial robustness, and multi-modal learning on MNIST.
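For orientation, below is a minimal sketch of the standard IB Lagrangian and, assuming the layer-wise Markov chain X → T_1 → … → T_L described in the abstract, one plausible form of a multi-bottleneck objective. The trade-off weights β_ℓ and the per-layer decomposition are illustrative assumptions; the precise MIB objective and its optimality compromise are defined in the full text.

```latex
% Standard IB: compress X into a bottleneck T while preserving information about Y
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)

% Hypothetical layer-wise extension over the Markov chain X = T_0 \to T_1 \to \dots \to T_L
% (one bottleneck per stochastic layer; the weights \beta_\ell are illustrative assumptions)
\min_{\{p(t_\ell \mid t_{\ell-1})\}_{\ell=1}^{L}} \;
  \sum_{\ell=1}^{L} \Big( I(T_{\ell-1}; T_\ell) \;-\; \beta_\ell\, I(T_\ell; Y) \Big)
```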
Keywords: information bottleneck; stochastic neural networks; variational inference; machine learning
MDPI and ACS Style

Tang Nguyen, T.; Choi, J. Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks. Entropy 2019, 21, 976. https://doi.org/10.3390/e21100976

AMA Style

Tang Nguyen T, Choi J. Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks. Entropy. 2019; 21(10):976. https://doi.org/10.3390/e21100976

Chicago/Turabian Style

Tang Nguyen, Thanh, and Jaesik Choi. 2019. "Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks" Entropy 21, no. 10: 976. https://doi.org/10.3390/e21100976

