Open Access | Feature Paper | Editor's Choice | Article

Nonlinear Information Bottleneck

by Artemy Kolchinsky, Brendan D. Tracey and David H. Wolpert

1 Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
2 Department of Aeronautics & Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
3 Complexity Science Hub, 1080 Vienna, Austria
4 Center for Bio-Social Complex Systems, Arizona State University, Tempe, AZ 85281, USA
* Author to whom correspondence should be addressed.
Entropy 2019, 21(12), 1181; https://doi.org/10.3390/e21121181
Received: 16 October 2019 / Revised: 27 November 2019 / Accepted: 28 November 2019 / Published: 30 November 2019
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
Abstract: Information bottleneck (IB) is a technique for extracting information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed “bottleneck” random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently has been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily-distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently-proposed “variational IB” method on several real-world datasets.
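The abstract describes the standard IB trade-off (reward I(M; Y) while penalizing beta * I(X; M)), implemented with neural networks and a non-parametric upper bound on mutual information. The sketch below is only an illustration of that kind of objective, not the authors' released implementation: it assumes a Gaussian stochastic encoder with a fixed noise variance, bounds I(X; M) with a pairwise-distance (KL-based) mixture bound, and uses the decoder's cross-entropy as a variational surrogate for I(M; Y). All module names, architectures, and hyperparameters here are illustrative assumptions.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonlinearIB(nn.Module):
    """Sketch of a nonlinear IB model: stochastic encoder, decoder, and IB-style loss."""

    def __init__(self, x_dim, n_classes, m_dim=2, noise_std=0.1, beta=0.1):
        super().__init__()
        # Nonlinear encoding and decoding maps, parameterized by small MLPs (illustrative sizes).
        self.encoder = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, m_dim))
        self.decoder = nn.Sequential(nn.Linear(m_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))
        self.noise_std = noise_std  # fixed std of the Gaussian bottleneck noise (assumption)
        self.beta = beta            # trade-off between prediction and compression

    def compression_bound(self, mu):
        # Non-parametric upper bound on I(X; M) for a batch: treat the encoder outputs as
        # the means of a mixture of Gaussians (one component per sample, covariance
        # noise_std**2 * I) and bound the mixture's mutual information with X via pairwise
        # KL divergences between components:
        #   I(X; M) <= -(1/n) sum_i log[ (1/n) sum_j exp(-||mu_i - mu_j||^2 / (2 sigma^2)) ]
        n = mu.shape[0]
        kl = torch.cdist(mu, mu).pow(2) / (2 * self.noise_std ** 2)
        return math.log(n) - torch.logsumexp(-kl, dim=1).mean()

    def forward(self, x, y):
        mu = self.encoder(x)
        m = mu + self.noise_std * torch.randn_like(mu)  # stochastic bottleneck variable M
        logits = self.decoder(m)
        # Cross-entropy is, up to the constant H(Y), a variational lower bound on I(M; Y).
        prediction_loss = F.cross_entropy(logits, y)
        return prediction_loss + self.beta * self.compression_bound(mu)


# Illustrative training step on random data (shapes are placeholders).
model = NonlinearIB(x_dim=784, n_classes=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
loss = model(x, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In this kind of setup, sweeping beta traces out a trade-off curve: larger beta values compress the bottleneck M more aggressively at the cost of prediction accuracy.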
Keywords: information bottleneck; mutual information; representation learning; neural networks

MDPI and ACS Style

Kolchinsky, A.; Tracey, B.D.; Wolpert, D.H. Nonlinear Information Bottleneck. Entropy 2019, 21, 1181. https://doi.org/10.3390/e21121181

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.