Open Access | Feature Paper | Article

Nonlinear Information Bottleneck

by Artem Kolchinsky, Brendan D. Tracey and David H. Wolpert
1. Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
2. Department of Aeronautics & Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
3. Complexity Science Hub, 1080 Vienna, Austria
4. Center for Bio-social Complex Systems, Arizona State University, Tempe, AZ 85281, USA
* Author to whom correspondence should be addressed.
Entropy 2019, 21(12), 1181; https://doi.org/10.3390/e21121181
Received: 16 October 2019 / Revised: 27 November 2019 / Accepted: 28 November 2019 / Published: 30 November 2019
(This article belongs to the Special Issue Information-Theoretic Approaches to Computational Intelligence)
Abstract: The information bottleneck (IB) is a technique for extracting the information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X into a compressed “bottleneck” random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently had been considered only for two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case the optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound on mutual information. We describe how to implement the method using neural networks, and show that it achieves better performance than the recently proposed “variational IB” method on several real-world datasets.
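To make the abstract's two ingredients concrete, here is a minimal PyTorch sketch, not the authors' released implementation: a stochastic bottleneck M is formed by adding Gaussian noise to a nonlinear encoder's output, cross-entropy serves as the usual variational bound on the prediction term, and I(X; M) is upper-bounded by a pairwise-KL kernel estimate over the batch, in the spirit of the paper's non-parametric bound. The layer sizes, noise scale, and the trade-off weight beta are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonlinearIB(nn.Module):
    """Sketch of a nonlinear-IB-style model: noisy encoder + decoder."""

    def __init__(self, x_dim, m_dim, n_classes, noise_std=1.0):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, m_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(m_dim, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )
        self.noise_std = noise_std

    def compression_bound(self, mu):
        # Kernel-style upper bound on I(X; M): treat the batch as a mixture
        # of Gaussians N(mu_i, sigma^2 I); pairwise KLs between equal-covariance
        # components are ||mu_i - mu_j||^2 / (2 sigma^2), and the bound is
        #   log(n) - (1/n) * sum_i logsumexp_j(-KL_ij).
        n = mu.shape[0]
        sq_dists = (mu.unsqueeze(1) - mu.unsqueeze(0)).pow(2).sum(-1)
        pairwise_kl = sq_dists / (2 * self.noise_std ** 2)
        return math.log(n) - torch.logsumexp(-pairwise_kl, dim=1).mean()

    def forward(self, x, y, beta=0.1):
        mu = self.encoder(x)
        m = mu + self.noise_std * torch.randn_like(mu)  # stochastic bottleneck M
        ce = F.cross_entropy(self.decoder(m), y)  # variational bound on H(Y | M)
        return ce + beta * self.compression_bound(mu)


# Illustrative usage on random data.
model = NonlinearIB(x_dim=20, m_dim=2, n_classes=10)
x, y = torch.randn(64, 20), torch.randint(0, 10, (64,))
loss = model(x, y)
loss.backward()
```

Because the noise variance is fixed, the compression bound depends only on the encoder means, so varying beta traces out the trade-off between decodability of Y and compression of X.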
Keywords: information bottleneck; mutual information; representation learning; neural networks
MDPI and ACS Style

Kolchinsky, A.; Tracey, B.D.; Wolpert, D.H. Nonlinear Information Bottleneck. Entropy 2019, 21, 1181.

