Open Access Article

The Convex Information Bottleneck Lagrangian

Department of Intelligent Systems, Division of Information Science and Engineering (ISE), KTH Royal Institute of Technology, 11428 Stockholm, Sweden
* Authors to whom correspondence should be addressed.
Current address: Malvinas väg 10, 100 44 Stockholm, Sweden
Entropy 2020, 22(1), 98; https://doi.org/10.3390/e22010098
Received: 9 December 2019 / Revised: 3 January 2020 / Accepted: 8 January 2020 / Published: 14 January 2020
(This article belongs to the Special Issue Information–Theoretic Approaches to Computational Intelligence)
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian (i.e., L_IB(T; β) = I(T;Y) − β I(X;T)) for many values of β ∈ [0, 1]. Then, the curve of maximal I(T;Y) for a given I(X;T) is drawn and a representation with the desired predictability and compression is selected. It is known that when Y is a deterministic function of X, the IB curve cannot be explored with this Lagrangian, and another one has been proposed to tackle this problem: the squared IB Lagrangian, L_sq-IB(T; β_sq) = I(T;Y) − β_sq I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show that we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier. That is, we prove that we can solve the original constrained problem with a single optimization.
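The objectives contrasted in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the mutual information values would in practice come from an estimator (e.g., a variational bound during training), and the power function u ↦ u^α with α > 1 is one example of a monotonically increasing, strictly convex function; α = 2 recovers the squared IB Lagrangian.

```python
def ib_lagrangian(i_ty: float, i_xt: float, beta: float) -> float:
    """Classical IB Lagrangian: L_IB(T; beta) = I(T;Y) - beta * I(X;T)."""
    return i_ty - beta * i_xt


def convex_ib_lagrangian(i_ty: float, i_xt: float,
                         beta: float, alpha: float = 2.0) -> float:
    """Convex IB Lagrangian with h(u) = u**alpha (alpha > 1 assumed).

    Penalizing a strictly convex function of I(X;T) instead of I(X;T)
    itself is what allows the IB curve to be explored even when Y is a
    deterministic function of X.  alpha = 2 gives the squared IB Lagrangian.
    """
    return i_ty - beta * i_xt ** alpha


# Toy mutual-information values (in bits) for illustration only.
i_ty, i_xt = 1.0, 2.0
print(ib_lagrangian(i_ty, i_xt, beta=0.5))          # 1.0 - 0.5*2.0 = 0.0
print(convex_ib_lagrangian(i_ty, i_xt, beta=0.25))  # 1.0 - 0.25*4.0 = 0.0
```

Maximizing either objective trades predictive information against compression; the paper's contribution concerns which choices of the convex penalty and multiplier recover a target compression level r in a single optimization.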
Keywords: information bottleneck; representation learning; mutual information; optimization
MDPI and ACS Style

Rodríguez Gálvez, B.; Thobaben, R.; Skoglund, M. The Convex Information Bottleneck Lagrangian. Entropy 2020, 22, 98.
