Open Access Article

Learnability for the Information Bottleneck

Tailin Wu, Ian Fischer, Isaac L. Chuang and Max Tegmark

1 Department of Physics, MIT, 77 Massachusetts Ave, Cambridge, MA 02139, USA
2 Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
* Author to whom correspondence should be addressed.
Entropy 2019, 21(10), 924; https://doi.org/10.3390/e21100924
Received: 1 August 2019 / Revised: 29 August 2019 / Accepted: 12 September 2019 / Published: 23 September 2019
(This article belongs to the Special Issue Information-Theoretic Approaches to Computational Intelligence)
The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective I(X;Z) − βI(Y;Z) employs a Lagrange multiplier β to tune this trade-off. However, in practice, not only is β chosen empirically without theoretical guidance, but there is also little theoretical understanding of the relationship between β, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X) = P(Z) becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable which arises as β is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good β. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and discuss its relation to model capacity. We give practical algorithms to estimate the minimum β for a given dataset. We also empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR10.
Keywords: learnability; information bottleneck; representation learning; conspicuous subset
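To make the trade-off in the abstract concrete, the following is a minimal illustrative sketch (not code from the paper): it evaluates the IB objective I(X;Z) − βI(Y;Z) on a toy discrete joint distribution and compares the trivial encoder P(Z|X) = P(Z) against a fully informative encoder as β varies. The toy distribution, the β values, and all function names are assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' code): evaluate the IB objective
# I(X;Z) - beta * I(Y;Z) for tabular encoders on a toy discrete P(X, Y).
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in nats for a joint distribution given as a 2-D array."""
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / (p_a * p_b)[mask])))

def ib_objective(p_xy, p_z_given_x, beta):
    """I(X;Z) - beta * I(Y;Z) for an encoder p(z|x), using the Markov chain Z - X - Y."""
    p_x = p_xy.sum(axis=1)
    p_xz = p_z_given_x * p_x[:, None]   # joint p(x, z) = p(x) p(z|x)
    p_yz = p_xy.T @ p_z_given_x         # joint p(y, z) = sum_x p(x, y) p(z|x)
    return mutual_information(p_xz) - beta * mutual_information(p_yz)

# Toy joint distribution where X is fairly predictive of Y (values are arbitrary).
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

trivial = np.full((2, 2), 0.5)  # P(Z|X) = P(Z): Z ignores X, objective is 0
copy    = np.eye(2)             # Z = X: maximally informative encoder

for beta in [0.5, 1.0, 2.0, 5.0]:
    l_triv = ib_objective(p_xy, trivial, beta)
    l_copy = ib_objective(p_xy, copy, beta)
    winner = "trivial" if l_triv <= l_copy else "informative"
    print(f"beta={beta:4.1f}  trivial={l_triv:+.4f}  copy={l_copy:+.4f}  ->  {winner} wins")
```

For this toy distribution, the trivial encoder attains the lower objective at small β and loses to the informative encoder once β is large enough, mirroring the learnability phase transition the paper characterizes. Note that comparing two fixed encoders only illustrates the trade-off; the paper determines the actual threshold β via the conspicuous subset of the examples.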
MDPI and ACS Style

Wu, T.; Fischer, I.; Chuang, I.L.; Tegmark, M. Learnability for the Information Bottleneck. Entropy 2019, 21, 924.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
